Garden design (for geeks)

When I was about 11, I designed a garden. I remember drawing a plan of it on a page of my spiral-bound notepad. Sadly, that means the original design is now long gone, but that's irrelevant - the original design assumed a plot of land the exact same shape as a page of my notepad, which is unlikely.

The important thing is that I can remember the concept.

The idea was simple - I think of a garden as a fun place to relax, be that a pleasant spot to read a book or a venue for a party (where, to me, "party" involves a buffet, background music, and people mingling and chatting).

Therefore, I wanted to pack a pleasing variety of spots to read/sit/chat into a limited space. Also, being a geek, I wanted it to be intellectually interesting.

So the answer was obvious - it had to involve a maze. But more than that. Two mazes. Why not have a stream and little ponds that form a water-maze, and then overlay that with a maze you can walk, with little bridges and stepping stones and the like where it crosses the stream, to add interest? And use a variety of materials for the maze: hedges, walls, balustrades, the stream itself - all can form barriers of varying solidity. I love strings of lights in trees and bushes, so let's run lights around it. And have lights in the stream and its ponds. Lights are pretty at night.

One idea that appealed to me was that, for parties and the like, you could have little boats with candles in, circulating around the stream. Of course, if it's an actual natural stream, then all the boats would end up stuck at the grating you'd need to put up to stop them going downstream - but if it's an artificial one (in effect, a long thin pond that wriggles around the place) you could encourage a continuous current around it by placing pumps that suck in water and emit it in a jet, with the jets and inlets all aligned around the circuit to push the water the same way. Extra points for style: computer control of the pumps so, at the end of the day, you can cause all the boats to congregate in one place for easy collection...

There would need to be a more open patio / lawn area joining the maze to the house, for when you need to gather everyone together to eat and so on. And it would be nice if the other end of the maze led into some more wild and natural terrain such as woodland, after all that order. But the maze would pack a lot of different little nooks into a relatively small space, creating a garden that seems a lot larger than it really is...

I'd draw up a plan, but of course, the actual implementation totally depends on what the land you have is like, and what bits of random architectural salvage turn up to build the maze out of!

AURUM

My recent thoughts about bitcoin reminded me of earlier thoughts I'd had about digital currency.

Cryptographic digital currency is a way of transferring value without trusted third parties being involved in every transaction, but within a closed domain, it's easier to go for a trusted party and cut out all the crypto maths. Which is why we have printer credits managed in a central database when we use a shared printer. We may use a digital currency to buy a credit, but once we have credits, we're happy for the owner of the printer to just store our balance in a database and decrement it whenever we print.

And within a company, complex processes are used to transfer money in and out of the company's actual bank account, but budgets within departments are usually allocated by just asking somebody to update a spreadsheet. Money moves within the company using easier, faster, simple methods than bank transfers, writing cheques and letting them clear, or exchanging cryptographic keys.

It's the same story for "ulimit" mechanisms in computer operating systems, and language-level sandboxes, that allocate budgets of things like CPU time and memory space to software running in a computer.

So, when I set out to design AURUM, the resource limit system for ARGON, I decided to make a unified abstraction across all of the above. A process has a budget, which contains arbitrary amounts of arbitrary resources; and it can subdivide that budget into sub-budgets.
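
To make that concrete, here's a minimal sketch of the accounting part in Python - the names (Budget, subdivide, release) are purely illustrative, not the actual AURUM interface:

```python
from collections import Counter

class Budget:
    """A pool of arbitrary amounts of arbitrary resources, subdividable into sub-budgets."""

    def __init__(self, resources=None, parent=None):
        self.resources = Counter(resources or {})   # e.g. {"jiffies": 100_000, "disk_bytes": 10**9}
        self.parent = parent

    def subdivide(self, amounts):
        """Carve a sub-budget out of this one, debiting it from our own balance."""
        amounts = Counter(amounts)
        for resource, amount in amounts.items():
            if self.resources[resource] < amount:
                raise ValueError(f"insufficient {resource}")
        self.resources -= amounts
        return Budget(amounts, parent=self)

    def release(self):
        """Hand any remaining balance back up to the parent budget."""
        if self.parent is not None:
            self.parent.resources += self.resources
            self.resources = Counter()

# A process budget, with a sub-budget carved out for a child task:
root = Budget({"jiffies": 100_000, "disk_bytes": 10**9})
child = root.subdivide({"jiffies": 5_000, "disk_bytes": 10**6})
child.release()   # whatever the child didn't spend flows back to the parent
```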

That's just an accounting system, though. It needs to integrate with actual resource managers. For something like CPU time, for efficiency, the scheduler probably wants a nice simple machine word reserved for "jiffies in the budget", attached to a process context in a hardcoded way. So the AURUM system probably needs a handler for "run out of jiffies" that takes more jiffies from the actual AURUM budget and "prepays" them into the process context - and when the process's balance is requested, knows to ask what's been prepaid and not yet used, so it can report honestly. If the process is stopped, any remaining balance needs to be returned to the parent process's budget for re-allocation. And so on.
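
Continuing the toy sketch above, the prepay dance might look roughly like this - again, PREPAY_CHUNK, on_out_of_jiffies and friends are my own invented names, not how ARGON's scheduler actually works:

```python
PREPAY_CHUNK = 1_000   # jiffies to move into the scheduler's hardcoded counter at a time

class ProcessContext:
    """Stand-in for the scheduler's per-process state, backed by a Budget from the sketch above."""

    def __init__(self, budget):
        self.budget = budget    # the process's AURUM budget
        self.prepaid = 0        # the "nice simple machine word" the scheduler decrements

    def on_out_of_jiffies(self):
        """Fired when the prepaid counter hits zero: top it up from the real budget."""
        if self.budget.resources["jiffies"] < PREPAY_CHUNK:
            raise RuntimeError("jiffy budget exhausted")
        self.budget.resources["jiffies"] -= PREPAY_CHUNK
        self.prepaid += PREPAY_CHUNK

    def jiffy_balance(self):
        """Honest balance: what's left in the budget plus what's prepaid but not yet used."""
        return self.budget.resources["jiffies"] + self.prepaid

    def stop(self):
        """On exit, refund unused prepaid jiffies, then fold the budget back into its parent."""
        self.budget.resources["jiffies"] += self.prepaid
        self.prepaid = 0
        self.budget.release()
```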

Similarly, interfaces to actual electronic banking (spend money in a budget by causing an actual bank transfer, or bitcoin transaction, or whatever to happen), and interfaces for incoming budgets from external sources (a bank account interface that fires off a handler when an incoming payment is detected - with that payment as the handler's budget so it can then allocate it appropriately), and so on, can be built.

And a power-constrained mobile device might have joule budgets - and operations such as driving motors, transmitters, and lights might use them up. That would be neat for handheld computers and deep space probes, which can then run less-trusted code in a sandbox with controlled access to expensive resources.

That's all well and good as a way to manage finite resources in a system, but the next level is to take a step back, look at the system as a whole, and see how this facility can be used to do other cool stuff.

This leads naturally to the semi-forgotten discipline of Agoric computing, which seeks to make marketplaces and auctions a core tool for solving resource allocation problems. This has scope within an ARGON cluster, if it's shared between multiple organisational units, which can then use budgets purely within AURUM to manage their shared use of the cluster's resources and to contribute towards its upkeep accordingly.

But, more excitingly, with mechanisms like Bitcoin allowing for money to be transferred across trust boundaries, it starts to become practical to think about allowing our computers to participate in economies between them. What if my desktop PC and servers rented out their spare disk space, CPU time, and bandwidth to all comers? And with the money they accumulated from doing so, in turn rented offsite disk space for their backups, and when I gave them a particularly tough job to do, hired some extra CPU and bandwidth to do it, dynamically? Without me having to hand-hold it all as the middleman, pulling my debit card out to pay for resources... If I wanted to do lots of resource-intensive work I might put more money in from my own pocket to give it more to hire extra resources with; if I tend to under-use my system and it makes profits from renting out spare capacity, then I could take cash from it from time to time.

I guess the first step would be to create standard protocols in AURUM for things like auctions and commodity markets, to facilitate converting between different 'currencies' such as CPU time, bitcoin, fiat currencies, printer credits, disk space, and the like. And a standard interface to bank accounts, where balances and transaction histories can be queried and transfers requested. A bank account in the context of AURUM would be a third party that holds control of some budget on your behalf, so it should look like an ordinary budget in every way possible. That would make it practical for software that needs a given resource to find a way, through a registry of trusted markets, to convert what it has into the resources it wants.
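
To give a feel for what I mean by the registry, here's a hypothetical sketch in Python - the market list, the exchange rates, and the resource names are all made up for illustration:

```python
# Hypothetical registry of trusted markets, each quoting a rate from one 'currency' to another.
MARKETS = {
    ("gbp", "bitcoin"):       0.00002,    # 1 GBP buys this many BTC (invented rate)
    ("bitcoin", "cpu_hours"): 40_000.0,
    ("gbp", "printer_credits"): 20.0,
}

def find_conversion(have, want, visited=frozenset()):
    """Depth-first search for a chain of markets converting `have` into `want`.
    Returns (overall_rate, path) or None if no chain of trusted markets exists."""
    if have == want:
        return 1.0, [have]
    for (src, dst), rate in MARKETS.items():
        if src == have and dst not in visited:
            rest = find_conversion(dst, want, visited | {have})
            if rest is not None:
                return rate * rest[0], [have] + rest[1]
    return None

# How many CPU hours might a pound fetch, and via which chain of markets?
print(find_conversion("gbp", "cpu_hours"))   # (0.8, ['gbp', 'bitcoin', 'cpu_hours'])
```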

That'd be neat...

Bitcoin security

I've been learning about Bitcoin lately.

It's an electronic currency. I've seen electronic currencies before - in the late 90s there were efforts to create them based on virtual banks issuing coins. The coins were basically long random serial numbers which, along with a statement of the value of the coin, were signed by the bank. The bank's public key was published, so people could check they were valid coins issued by the bank. The idea was that rather than withdrawing a bunch of notes from the bank, you could ask the bank to mint you a bunch of these signed numbers instead; and anyone who saw them could check their value, and eventually return them to the bank (which could also check their value in the same way) to get their account credited.
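
As a toy illustration of the shape of those schemes, here's the mint-and-verify step using plain RSA signatures from Python's cryptography package - the real systems used blind signatures and other cleverness that I'm glossing over here:

```python
import json
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The bank's signing key; the public half is published for everyone to verify against.
bank_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bank_pub = bank_key.public_key()

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

def mint_coin(value_pence):
    """Bank mints a coin: a long random serial number plus a value, signed by the bank."""
    body = json.dumps({"serial": os.urandom(16).hex(), "value": value_pence}).encode()
    return body, bank_key.sign(body, PSS, hashes.SHA256())

def check_coin(body, signature):
    """Anyone holding the bank's public key can check the coin is genuine."""
    bank_pub.verify(signature, body, PSS, hashes.SHA256())   # raises InvalidSignature if forged
    return json.loads(body)["value"]

coin, sig = mint_coin(500)
print(check_coin(coin, sig))   # 500
```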


IPv6 versus NAT

I was enthusiastic about IPv6 when I first read of it, in the late 1990s. Mainly, I liked the autoconfiguration, and the inbuilt support for anycast and multicast, which are used to great effect: there is a standard IPv6 address for "my nearest time server" and the like, which has various benefits.

However, it comes at a cost. It's a whole new Internet that has to be built alongside the existing one, with a careful handover done via complex mechanisms to let them coexist transparently. And the better autoconfiguration of IPv6 isn't that useful in the presence of recent developments such as automatic IPv4 address assignment, mDNS for finding things, and of course, good old DHCP for managed networks.

And it's not working. More than a decade has passed, and IPv6 is still a toy. It's extra work to set up, and the IPv4/IPv6 migration mechanisms you need in order to still access the IPv4 Internet actually break existing stuff, mainly because the IPv6 side isn't being maintained well (so it often breaks without being noticed), and because hosts using these mechanisms will prefer IPv6 over IPv4 if it's advertised (as otherwise IPv6 would never get used, since almost everything that offers IPv6 also offers IPv4).

So there's little motivation for people to bother turning on IPv6 - it's more work, and it breaks your Internet access (or, if you're a service provider, unless you're careful, it offers an alternate way to access your site that is more work to maintain, but breaks more often as you won't be putting as much effort into maintaining it). This means that the critical feedback loop of people wanting IPv6 because there are good things that are only on IPv6 will never kick in. It'd be stupid to try and be IPv6-only, but until useful things are IPv6-only, there's little incentive to even support IPv6 alongside IPv4.

Now, the main reason people say we should move to IPv6 is because of the IPv4 address space exhaustion. But there are other solutions.

The widespread one is Network Address and Port Translation (or "NAT" for short). Under NAT, an entire network has a single public IPv4 address, and the devices inside the network are assigned addresses from a special private range (which can be reused for every private network). Outgoing connections get their source address and port rewritten so they all appear to come from that one public address, and when the replies come back, they're mapped back to the private address of the actual device. This means an entire network (which could be an entire organisation with millions of PCs, or an entire ISP with millions of customers) can use just one public IP (or a few, if they need more ports to support all the connections at once).
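
Conceptually, the bookkeeping a NAT box does amounts to something like the following sketch (hugely simplified - ignoring protocols, checksums, and everything but a last-used timestamp):

```python
import time

PUBLIC_IP = "203.0.113.7"    # the network's single public address (documentation range)
nat_table = {}               # (private_ip, private_port, dst) -> (public_port, last_used)
reverse_table = {}           # (public_port, dst) -> (private_ip, private_port)
next_port = 40000

def translate_outgoing(private_ip, private_port, dst):
    """Rewrite an outgoing packet's source to the shared public IP and an allocated port."""
    global next_port
    key = (private_ip, private_port, dst)
    if key not in nat_table:
        nat_table[key] = (next_port, time.time())
        reverse_table[(next_port, dst)] = (private_ip, private_port)
        next_port += 1
    public_port, _ = nat_table[key]
    nat_table[key] = (public_port, time.time())   # refresh the idle timer
    return PUBLIC_IP, public_port

def translate_incoming(public_port, src):
    """Map a reply back to the private device - or None (drop it) if no mapping is remembered."""
    return reverse_table.get((public_port, src))
```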

There are issues with this - the NAT device needs to remember which external ports are used by which connections, and it needs to keep track of whether those connections are still being used so it can re-use the ports. But if a device is switched off or unplugged or dies, it will never explicitly close the connection, so the NAT device has to discard connections that just aren't used for a long time, assuming the owner to have died. This means that long-lived connections that aren't used much tend to get killed. But since NAT is so widespread now, most apps that open those kinds of connections nowadays send "keep-alives": empty messages that just keep the connection active so the NAT device doesn't forget it.
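
Application-level keep-alives are one approach; TCP itself can also be asked to do the job. For instance, in Python, something like the following - note the TCP_KEEP* tuning options are Linux-specific:

```python
import socket

sock = socket.create_connection(("example.org", 22))

# Ask the kernel to probe an otherwise-idle connection so the NAT mapping stays fresh.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning: probe after 60s of idleness, every 30s, give up after 5 failures -
# comfortably inside typical NAT idle timeouts.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
```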

It also means that devices behind NAT can't accept incoming connections; the NAT device only lets outgoing connections out and remembers the return path for replies - if an incoming connection arrives, it has no way of knowing which device "wants" it unless it's been specifically configured with a "port forward". Standards like UPnP exist to allow devices to find their nearest NAT router and ask for a port forward to be set up, but they suck for various reasons I shan't elaborate on right now.

This isn't a great issue, though. As a laptop user, I am resigned to being behind NAT most of the time. Almost everything I do from my laptop is based around connecting out to remote servers, and for the exceptions, I have an N2N VPN that lets my peers connect to me via an encrypted IP-level relay server. My long-lived SSH connections have keepalives turned on. It works out OK in practice.

However, I think it could easily be improved...

Before NAT became popular, the standard way of doing the same thing was via a SOCKS5 proxy. This worked much like NAT - you'd have a network using private addresses, and a single border device on that network that also had an Internet connection with a public IP. The border device ran some software - the SOCKS5 proxy.

When applications on devices inside the network wanted to connect to somewhere outside of the local network, rather than trying to reach it directly, they'd instead connect to the SOCKS5 proxy. Over that connection they'd send a request for the connection to be forwarded on. The SOCKS5 proxy would then open a connection, from its public IP address, to the destination server. It would then forward traffic between the two halves of the connection, making the device's connection to the SOCKS5 server in effect be a connection to the remote server - and back again in the opposite direction.
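
The wire protocol for that outgoing-connection dance is tiny (RFC 1928). A minimal unauthenticated CONNECT, done by hand, looks something like this sketch - a real client would loop on partial reads and handle authentication:

```python
import socket
import struct

def socks5_connect(proxy_host, proxy_port, dest_host, dest_port):
    """Open a TCP connection to dest_host:dest_port via a SOCKS5 proxy, no authentication."""
    s = socket.create_connection((proxy_host, proxy_port))

    # Greeting: version 5, offering one auth method, 0x00 = "no authentication required".
    s.sendall(b"\x05\x01\x00")
    ver, method = s.recv(2)
    if method != 0x00:
        raise ConnectionError("proxy refused 'no authentication'")

    # Request: version 5, CMD=1 (CONNECT), reserved, ATYP=3 (domain name), name, port.
    name = dest_host.encode()
    s.sendall(b"\x05\x01\x00\x03" + bytes([len(name)]) + name + struct.pack(">H", dest_port))

    # Reply: version, REP (0 = success), reserved, ATYP, then the bound address, which we
    # just consume. (Exact-size recv()s assume a well-behaved proxy.)
    ver, rep, _, atyp = s.recv(4)
    if rep != 0x00:
        raise ConnectionError(f"proxy reported error code {rep}")
    if atyp == 0x01:
        s.recv(4 + 2)                # IPv4 BND.ADDR + BND.PORT
    elif atyp == 0x04:
        s.recv(16 + 2)               # IPv6 BND.ADDR + BND.PORT
    else:
        s.recv(s.recv(1)[0] + 2)     # length-prefixed domain name + port

    return s   # from here on, reads and writes go to the remote server

# conn = socks5_connect("proxy.example", 1080, "example.org", 80)
```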

So it basically did the job of NAT, except that it required the devices to know about SOCKS5, and to know where the SOCKS5 server was. NAT won, as it was transparent: the NAT box just pretended to be a router offering access to the Internet (the "default route" you have to put in when manually configuring a network, or configured automatically via DHCP or PPP). SOCKS5 didn't really require you to modify the application (although many applications did add support for SOCKS5), as it was possible to write a "socksify" tool that pretended to be the OS's normal interface to the network (the "sockets API"), but which actually made connections via SOCKS where applicable.

But SOCKS5 doesn't have NAT's problems with keepalives. And it has a big advantage over NAT - the SOCKS5 protocol lets a client request an incoming connection, in which case the SOCKS5 server opens an incoming connection port on the public side and reports its address back to the app, along with a notification when the connection is taken up. It's a bit limited, as it only lets a single connection in (while a proper listening port accepts multiple connections).

Also, SOCKS5 actually makes it easier to adopt IPv6. When an outgoing connection is requested, the app can specify an IPv4 address, an IPv6 address, or a hostname - and in the latter case, the SOCKS5 server could in principle find an IPv6 server at that hostname (with an AAAA record) and open an IPv6 connection, even though the application has connected to the SOCKS5 server via IPv4 - or vice versa, if the client connects to it via IPv6.

And unlike NAT, SOCKS5 has a login phase: each connection can supply a username and password to identify the user. Under NAT, all you have is the private IP address of the device. This means that SOCKS5 servers can give better connections to more important users, and better log who did what (where that matters).

So perhaps it's time for a SOCKS5 comeback. The protocol has been extended to support IPv6, but I think it could do with a bit more sprucing up to make it more powerful and modern. Here's what I'd suggest:

  • Proper listening socket support. It should be possible to request a listening socket, and if you are accepted, then be sent messages every time a client connects; but rather than your connection then becoming the relayed client connection, the accept message just gives you a magic token identifying the connection. You can then open another connection to the SOCKS5 server and, rather than requesting an outgoing connection, offer up the magic token to accept the incoming connection and have it relayed (there's a rough sketch of this flow after this list). Or just reply on the original listening-socket connection to reject the request.

  • Listening sockets should be able to request a specific port to listen on, along with a flag to specify whether they're happy to accept another port, or should just give up if they can't have the one they requested. Such a request might be rejected because the port is already in use, or because certain listening ports are reserved for specific users.

  • Better UDP support. The current UDP support in SOCKS5 amounts to asking the SOCKS5 server to set up a UDP relay. All your UDP traffic must then be sent to an IP+PORT the SOCKS5 server sends in the reply, with a header added to authenticate it; this eats up some of the limited available size of a UDP packet. It'd be nice if the UDP packets could tunnel over the SOCKS5 connection, like TCP connections are, with suitable framing.

  • Ubiquitous support for SOCKS5-over-SSL in clients and servers. Then it can be used as a simple VPN - offer a SOCKS5 server on the public side of your SOCKS5 relay, too, that lets authenticated users who are outside of the office connect in to access servers on the private network. Or just trust your internal network less: since some SOCKS5 connections are better than others (being optionally authenticated to a specific user), they're worth stealing - so encrypt them even inside the office. For this use, it'd be nice if a SOCKS5 server could announce (when it's connected to) what addresses it provides access to - for a normal Internet gateway, it'd reply "all addresses"; for a VPN, it'd just report the private IP range.

  • Better support in devices. SOCKS5 should be a standard feature of the sockets library, not something you need to hack in under an app. SOCKS5 should be in smartphones and tablet computers. There should be the option to specify a list of SOCKS5 servers as well as a default route (they can be connected to and asked what address ranges they provide, and connections made via them accordingly). DHCP servers should announce SOCKS5 proxies (there doesn't seem to be a DHCP option for SOCKS5 proxies; am I looking in the right place?).
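
To make the listening-socket idea a bit more concrete, here's the rough flow I have in mind - the command codes and message shapes below are entirely made up; nothing like this exists in SOCKS5 today:

```python
import socket
import struct

# Made-up command codes for the proposed extension.
CMD_LISTEN = 0x81
CMD_ACCEPT = 0x82

def request_listener(proxy, port=0):
    """Ask the proxy for a listening socket (port 0 = "any"). Keep this control connection
    open: the (hypothetical) server sends a 16-byte token down it each time a remote
    client connects."""
    ctrl = socket.create_connection(proxy)
    ctrl.sendall(bytes([0x05, CMD_LISTEN, 0x00]) + struct.pack(">H", port))
    bound_port = struct.unpack(">H", ctrl.recv(2))[0]   # imagined reply: the port we were given
    return ctrl, bound_port

def accept_connection(proxy, token):
    """Open a second connection and redeem the token; the proxy then relays the remote
    client's traffic over this connection instead of the control one."""
    data = socket.create_connection(proxy)
    data.sendall(bytes([0x05, CMD_ACCEPT, 0x00]) + token)
    return data

# ctrl, port = request_listener(("proxy.example", 1080), 8080)
# token = ctrl.recv(16)                                   # someone connected to port 8080
# conn = accept_connection(("proxy.example", 1080), token)
```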

I think that extending SOCKS5 in the above way (to make SOCKS6!) and then getting a good implementation of it open-sourced under a BSD license, and thence into device OSes as standard, would be a LOT less work than migrating to IPv6, while also offering an improvement over IPv4 with NAT - and yet also able to coexist happily with IPv4+NAT, as non-SOCKS devices can still be NATed via the default route.

So, how about it? If somebody volunteers to write a decent "SOCKS Next Generation" server (using nice scalable event-driven IO and all that) and client, I'll volunteer to help you as best I can, and write up a proper draft RFC for the enhanced protocol. If we can get the server into consumer and small office ADSL routers (whose manufacturers seem to be quite open to adding extra features to the brochures), along with having them advertise themselves as such via a DHCP option that clients listen to, it can become ubiquitous and useful; then we can work on getting ISPs to support it (making sure our SOCKS server is happy to pass connections on to an upstream SOCKS server, for when we are proxying to an ISP's own private network). I reckon that'd be a few weeks' development time, at most, then it's all about the lobbying to get it accepted into OSes and routers.

Fame and glory await!

le jbocifnu

As I mentioned before, I'm teaching Mary Lojban.

The project that led to Lojban was originally started to explore an idea - the Sapir-Whorf hypothesis states that language influences thought; in its strongest form, one cannot imagine a concept one cannot put into words (but that form has been largely discredited now). The weak version of the hypothesis is that language can hinder or help our cognitive processes.

Lojban was designed as a language with as much expressive density as possible - letting you clearly express precise concepts easily. The idea is that somebody who can think in Lojban can think more clearly than somebody thinking in English, for example.

I've been learning it myself, and I've certainly found it interesting - I'm limited by my slowly-expanding vocabulary, but already, I often find myself using Lojban concepts in my inner dialogue. There are concepts that are spread across a large class of irregular grammar in English but are just a single word in Lojban, and drawing all these bits of English together into one thing is, in my experience, providing a lot of insight.

But it'd be awesome if I could teach my daughter to think awesomely. It'd certainly help us to attain world domination. Some of the more far-out possibilities in Lojban might take a few generations of native Lojban speakers to fully understand!

However, nobody seems to have taught Lojban to a newborn baby before, so I'm having to work out how to do it myself, based on advice from people raising bilingual children in other languages. I'm mainly starting with Lojban's attitudinals, which are simple words used to attach emotional context - whereas in English, emotion is expressed with subtle yet crude changes in wording and emphasis, Lojban has a rich set of words to explicitly attach attitudes to sentences or any part thereof. They're useful on their own, too, to simply express an emotion without making any actual statement.

They're perfect for the simple emotional world of babies, and they're easy to say. Here are the ones I've been using:

  • {.uu .oidai} ("Oooh Oy-die") - "Aw, you're suffering/uncomfortable"
  • {.ui .oinaidai} ("Whee, Oy-nie-die") - "Yay, you're comfortable"
  • {.i'i} ("Ee-hee") - "We're together"
  • {.i'isai} ("Ee-hee sie") - "We're very together" (eg, whole family cuddle!)
  • {.oi} ("Oy") - "Grr, I am suffering" (eg, when something goes wrong for me during a nappy change)
  • {.oipei} ("Oy pay") - "How are you feeling on a comfortable <-> uncomfortable axis?"

There's a vast repository of more and more subtle emotions that can be expressed as time passes.

But I'm using some actual sentences, too. Mainly things like {xu do xagji} ("Hoo doe hag-jee"), "Are you hungry?", and pointing out what things are: {ti mamta} ("Tee mamta"), "That's your mum, that is!". And sometimes I throw in complex sentences, even though she won't understand, because it's useful to get used to the sound of sentences: {.iu lo mensi be do .e lo mamta be do .e mi cu prami do} ("you low mensee be doe eh low mamta be doe eh me shoe pramee doe"), "your sister, your mother, and I love you" (said with a loving attitude, which doesn't quite translate into English).

But as she develops, I'm keen to explore the cases where Lojban and English don't match up well, as they are the mind-opening things that have already taught me more about language and thought. {ti mo} is a good question - it literally asks what relationships the pointed-at object is involved in or what properties it has, which invite a wide range of answers from "it's a cat" and "it's black" to "it exists in a three-dimensional space" (which sounds bizarre for a child in English, but {se canlu} ("Se shanloo") is a short phrase in Lojban that is the natural way to distinguish a real or toy cat from a picture of a cat).

All of these are rather verbose technical-sounding concepts in English, but that's part of the beauty of Lojban - they're simple words, forming parts of the core lexicon, and so they are easier concepts to teach in it!
