Opening up Open Source

One of the awesome things about free/open source software (FOSS) is that, as you have access to the source code, you have exactly as much power as the original authors of the software to modify and extend it.

When you are upset about something in closed-source software, or about software hosted on the owner's servers on the Internet as a web app or API, all you can do is beg and plead with them, and threaten to take your future business elsewhere; they control the source code, so they have ultimate power. With FOSS software, in theory, the original authors have no special powers.

However, it often doesn't quite work out like that in practice.

Most FOSS software comes onto people's computers as a precompiled binary; they download it and run it. This is convenient and efficient. But if they then want to use their theoretical right to get in there and modify it, they need to do the following things:

  1. Learn the programming language(s) it's written in, and the libraries/frameworks/APIs it builds on top of. (This is particularly tricky if they have no previous programming experience).
  2. Find and download the sources (which shouldn't be too hard, but can still be tricky).
  3. Set up a build environment that can compile the thing. This may involve installing compilers, and development versions of libraries in order to get the header files, and so on.
  4. Actually make it compile. A lot of the time, a source package as shipped by the authors won't compile perfectly outright; you need to apply patches written by the people who maintain the package for your platform, because each platform has its own conventions about where things are placed in the filesystem, and about interaction with less-standardised bits of infrastructure such as service management frameworks, management of network interfaces, low-level hardware access, and so on.
  5. Learn the workings of the codebase. For a large project, this can be VERY daunting, even to a seasoned programmer.
  6. Actually design, implement, and debug the change. If the codebase is poorly architected, this can be made unnecessarily difficult, and require refactoring of existing code elsewhere within the codebase.
  7. Install the software once it's building. As it's been custom-built, it may not interact nicely with your platform's package manager, so you may end up running it out of /usr/local/bin or ~/bin or similar, and have trouble getting managed packages that depend on the one you're tinkering with to link to it correctly, getting it to interact with system configuration tools, and so on.

I think this is harder than it needs to be. It puts a lower bound on the effort required to make even a trivial change which, in many cases, means the change isn't worth making; we end up as beholden to the whims of the package's developers as we would be to a closed-source software company, and the supposed benefits of open source are denied to us.

We can break down the barriers into a number of categories, and look at what can be done to solve each.

Learning the programming environment

Whether you're a seasoned programmer or not, any given piece of software you didn't write will involve a set of languages, tools, libraries, and other things that you may not be familiar with, and these will need to be learnt.

What might help is better standards for automatically-generated API documentation, so that, with a single keypress in your editor, you can jump to the documentation for any API function used in the software you're learning to edit.

But if it became easier, commonplace, and expected for people to dig around in other people's source code, for curiosity or to make their own changes, developers of infrastructure components would feel more compelled to document their interfaces in ways that casual programmers can quickly pick up, and to make those interfaces simpler and easier to learn, because the expected audience would be less dominated by seasoned programmers.

And, conversely, if more people were casually getting involved in simple programming tasks in order to improve the software they use (if it became easier to do so), then the general programming ability of the population would also rise, giving more people the grounding in basic conventions and concepts required to understand programming tools...

Migrating from a normal installation of the software to a hackable one

This is perhaps the biggest hurdle, and yet the most amenable to being overcome with better technology. It covers the whole spectrum: finding and downloading the source code, setting up a build environment, getting it building on your platform, and getting it installed as a first-class citizen in the eyes of your package manager.

I can think of two technical fixes to this problem, and the best bit is, they're both things that already exist out there rather than my usual kinds of crazy new reinvent-the-wheel thinking!

Firstly, package managers like Nix make it easy to establish build environments on your own hardware: the build environment of any package can be requested and automatically set up for you. They are also built around installing software from source in the first place, offering pre-built binaries as a download-time optimisation. It's quite easy to adapt a Nix expression that builds a software package from downloaded sources into one that builds from a source tarball you've made yourself, and to install the result into its own "profile" so that it's kept isolated from other software you're running (which you might not want to risk breaking with your experimental changes yet), and so that the change can be rolled back if it doesn't work out.
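
For a concrete (and entirely hypothetical) illustration of what I mean, here's a minimal sketch of a Nix expression for building your own modified tarball of some package; the package name, tarball, and dependency are invented for the example, and it assumes the project builds with a standard ./configure && make:

    # default.nix (hypothetical example)
    with import <nixpkgs> {};
    stdenv.mkDerivation {
      name = "mytool-hacked-0.1";
      src = ./mytool-hacked.tar.gz;   # a tarball of your modified sources
      buildInputs = [ zlib ];         # whatever development libraries it needs
    }

Running nix-build compiles it and leaves a ./result symlink to try out, and something like nix-env -f . -i mytool-hacked installs it into your profile, from which nix-env --rollback will back the experiment out again if it doesn't work out.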

I suspect that many traditional package managers are written with users in mind, and not developers, which sounds laudable; but in practice it forces the distinction between user and developer, not allowing the former to easily migrate into the latter. Nix feels written for developers, of course acknowledging that developers are also users and still want to be able to install off-the-shelf prebuilt binaries easily. The inbuilt package manager for Chicken Scheme is likewise developer-friendly, letting you directly build from arbitrary checked-out source trees into a properly installed package; my development process for most Chicken software is to run chicken-install in my in-progress sources as the first port of call to compile and run it, rather than the usual idiom in most languages of compiling and running from the source checkout then "installing" the binaries as an optional, later, step. And yet a "mere" end-user of my software can type chicken-install ugarit, and Chicken will download the latest public Ugarit release from the Internet and install it for them. If they want to join me in hacking Ugarit, they can check out the latest sources from my web site and get stuck in pretty quickly.

Secondly, the move away from compiling software ahead-of-time into distributable binaries, towards on-demand compilation of source code at run time (with caching of compiled forms, of course), means that the normal installation of some software is its source code: there is no need for an external "build environment" to convert your changes into a runnable package, and changes to the source code can be used immediately without going through any kind of build/install phase. Because this makes tinkering with the software so much easier, it can become a routine part of using the software, rather than something special done only by special people. The typical Emacs user will have overridden various internal functions of Emacs from within their personal configuration; although many customisations can be done without doing so, customisation through function overriding is so accessible that people routinely customise their Emacs installations in ways that the original developers didn't think to (or didn't have the time or energy to) add as a configurable option. I wish all open-source projects were written in such an open manner, but it will require a lot of migration away from the batch-compilation model of C, C++, and Java.

Poorly-architected existing code

This is a thorny issue; even if you can easily get into the code of your application, and you understand all the tools it's built with, it can still be hard to make the changes you want because of various kinds of inherent "fragility" in the way the software is constructed.

Usually, this boils down to some variant on the idea of some information being repeated all over the code, rather than kept in one place. If your code relies on communicating between its components by using a special file, for instance, and every place where this file is read or written contains its own code to read and write this file directly, then changes such as storing the file in a different place, or adding some extra information to it, or replacing it with access to a database or something, will be difficult. You'll need to find all the places where the file is used, and individually re-write them to reflect your changes. This is laborious, and you might miss some, leading to bugs when those bits of code are run but don't reflect the changes.

However, if the mechanism for accessing this shared state (reading and writing the file) were isolated in one place in the software, behind an interface that is used wherever access is required and that reflects only the essentials of that access, then the mechanism could be swapped for another relatively painlessly and safely, as long as the replacement still preserves those essentials.
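
As a sketch of the difference (in Python, purely as a convenient notation; the file path and its contents are made up for the example):

    # Scattered style: every caller opens the shared file itself, so moving it
    # or changing its format means hunting down every snippet like this one.
    def read_status_scattered():
        with open("/var/run/myapp/status") as f:
            return f.read().splitlines()

    # Isolated style: one module owns the shared state, and callers only see
    # the essentials ("get the status", "set the status"). Swapping the file
    # for a database, or moving it, touches this module alone.
    class StatusStore:
        def __init__(self, path="/var/run/myapp/status"):
            self._path = path

        def get(self):
            with open(self._path) as f:
                return f.read().splitlines()

        def set(self, lines):
            with open(self._path, "w") as f:
                f.write("\n".join(lines))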

Software developers should, before they even write a line of code, be thinking about how their software might be changed in future, and make sure they split it into modules with clean interfaces to make that easier. As a side effect, this also makes their code easier to test and debug, as the interfaces serve to define and clarify the responsibilities and expectations of each module, which makes it easy to write comprehensive tests for them.

If this seems like hard work, then you're doing it wrong. We've all heard of code (usually in Java, for some reason) that seems to have taken the Design Patterns book as a checklist of things to do, and features pages and pages of AbstractFactoryWrappers that just indirect everything; the actual code that does the task at hand seems to be scattered thinly amongst all this framework. That's not what I mean by designing your software to be extensible. I just mean splitting it into parts with an explicit interface between each, making those interfaces reveal as little as possible about the workings behind them, and putting duplicated logic into modules behind interfaces rather than writing it more than once; not letting it grow into one big ball of inter-related mud. If you're not saving time by writing software like this in the first place, or if it seems like a burden, then you need to re-think how you write code.

I think programming languages can do a lot to help us write cleaner code, too. I find that when I'm writing C and C++, it's often hard to cleanly pull bits of functionality out into other functions, due to the manual memory management, which complicates interfaces, and the lack of lexically-scoped first-class functions. Code written in Lispy languages tends to be a lot easier to refactor as it grows, leading to cleaner interfaces (on average), and the automatic memory management tends to make the interfaces simpler as well.

Also, a culture of open extensibility in software means that extensibility of your code is high in the programmer's mind at all times. Developers of Emacs packages seem to expect bits of their software to be overridden, and write it accordingly.

Conclusion

I think that making programming more accessible has many good consequences. It gives people more power to get more out of their computers. It gives people more reason to trust computers (and as we move to a more online society, people are forced to place their trust in computers in order to take part; being forced to rely on something you don't trust is a harrowing experience), as they can peer inside the software to see how it works, and fix it if it doesn't. It also means that everyday users of computers have an easy, and natural, transition into learning programming, which is a very rewarding pastime; and more people contributing to open-source projects means we all get to have a better quality of life.

So, open-source software developers, I implore you to consider these points. Try to make your software open and welcoming to newcomers!

The ups and downs of Bitcoin

The value of bitcoin periodically starts to rise very rapidly, then after a while, crashes and re-stabilises. Whenever a crash happens, I see a spate of anti-Bitcoin blog posts and opinion pieces, such as this by Charles Stross. Now, I think there are plenty of things wrong with Bitcoin, and plenty of worries about what its effects might be in an uncertain future; but these anti-Bitcoin pieces tend to contain a lot of misconceptions and fallacies, so I feel compelled to try and refute some of the ones I've been seeing.

Bitcoin's price is volatile, so it will never be useful as a currency.

It sure is volatile! There are periodic surges and crashes, superimposed on top of a generally exponential growth in value. Take a look at its price history, on a logarithmic scale, with the MtGox exchange volume in dollars.

But this volatility is all due to rapid adoption, and that won't last forever.

The current rapid deflation is not the inherent deflation of BTC (it's in its inflationary phase, anyway, with lots of new BTC being mined every day), but the rapid adoption of the new currency causing more money to flow into it. The fact that this flow is driven by public awareness of BTC is why it's so closely linked to news, with wild swings following any significant news story about Bitcoin. And the small scale of existing BTC usage compared to the size of the new influxes (and the corresponding panics!) is why the swings are so large. Once BTC has achieved the market penetration it will have in the long run, then we'll see a lot more stability.

At the moment, the growth in interest in Bitcoin is causing the value of bitcoin to rise faster than the production of new bitcoin is pushing the value back down. At some point, Bitcoin will achieve its "saturation point", with the proportion of the world money supply in Bitcoin settling to its final level. As the rate of money flooding into Bitcoin drops, so will the rise in its value slow, until eventually the inflation caused by new bitcoins being mined will dominate and the value might even start to sink a little; unless a third factor - the rise in value of bitcoins due to the growth of the economy they represent a stable fraction of - outweighs that (which is hard to predict). Eventually, the generation of new bitcoins will have effectively ceased, at which point the value of a bitcoin will grow at about the rate of growth of the economy - a few percent each year.

Greater liquidity with existing currencies should also improve stability, as part of the volatility comes from the difficulty of exchanging Bitcoin for other currencies, which makes it easy for a single large transaction to deplete or glut the market for Bitcoin on any given exchange. Although volatility is often cited as a reason to firewall BTC from the conventional economy, allowing it to integrate with the conventional economy will be a big step towards removing that volatility...

If Bitcoin is used like a currency (rather than a speculative investment vehicle), it will start to behave like one.

A deflationary currency is terrible as people will never want to invest in anything; it'll be more profitable to just keep savings

Ah, but as I argue above, unless I'm mistaken, in the long run the price appreciation of Bitcoin will just be a few percent a year: the growth rate of the underlying economy. So it'll be about the same as an "index-linked" investment, and won't be a "market-beating" investment at all. That will mean that poorly-performing investments, which grow at below the market rate, won't be very attractive; but they aren't now, anyway.

However, will I stop putting my money into banks because it'll grow just fine when stored in my own Bitcoin wallet? So will banks have no money to lend, and be unable to offer loans to students, and startup companies, and mortgages to house buyers?

Well, even assuming that we stick with our current debt-driven economic model (and there are plenty of voices arguing for a new model to be invented, after the banking crises of the past few years), a bank offering 0.1% interest will still be an attractive place for me to put my bitcoins, compared to getting 0% interest keeping them in my own wallet; the fact that my bitcoin will also be increasing in value due to deflation, regardless of where I keep it, does not influence that. Banks will have to compete with Bitcoin's ability to be transferred cheaply and easily, versus their annoying and expensive mechanisms for moving money around, but perhaps they'll just drop SEPA and SWIFT and Faster Payments and Direct Debit and friends in favour of offering Bitcoin transfers to and from their Bitcoin-denominated accounts.

Sure, I wouldn't take out a mortgage denominated in bitcoins right now, with the value of a bitcoin soaring exponentially; I might buy a house for a hundred bitcoins, then find myself, a decade later, still paying off a hundred-bitcoin mortgage on a house that's now worth a tenth of a bitcoin. That would be stupid. But I'd love to take out a mortgage denominated in pounds Sterling that I get to pay in bitcoins at the exchange rate in effect at each payment; and if Bitcoin adoption levels off before bitcoin mining does, so that it has an actual inflationary phase, I'd very gladly take out a bitcoin-denominated mortgage then!

But even in the very long run, when the value of a bitcoin is stably and predictably appreciating at a few percent a year, with the growth of the economy that each bitcoin represents a fraction of, I'd be happy to take out a mortgage in it; I'd just expect the interest rate I pay to be adjusted to take the rate of appreciation into account, so that the bank still makes the same effective profit out of me (and I pay the same effective premium for the service of borrowing the money) once deflation is taken into account. Long-term loans and investments are calculated in terms of underlying value, and the predicted mapping from that value to a number of currency units is already adjusted to take inflation into account. Basing that calculation on deflation instead won't be the end of the world.

Bitcoin isn't regulated

This is a complex point to refute, because people often don't know what they mean by it in the first place. I hear a lot of "Bitcoin isn't regulated, so you can buy child porn with it". This is confusing two unrelated concepts.

The idea is that existing currencies, such as the dollar, are "regulated". It's sort of implied that this means it's illegal to use them to buy criminal stuff, but that's due to regulations about criminal transactions, and nothing to do with regulations about the currency itself (it's just as illegal to buy drugs with bitcoins as with dollars, and we'll talk about that in the next point).

The regulation conventional currencies actually have is that there's a central bank that can control the supply, which allows them to control the rate of inflation or deflation (and, generally, they choose to inflate it). As they create the currency from scratch, they also get to choose how to introduce it into the economy - traditionally, they use it to buy government bonds, in effect giving it to the government to spend on stuff, but they could just as easily print it onto paper bills and coins and hand it out at random in the streets, although that might cause a riot.

And whether the ability of central banks to control the rate of inflation is a good thing or not is... still hotly debated. There's a grand tradition of economic thought centred around something called the Keynesian model, which is the theory behind the operation of this kind of inflationary economy, but it's not the only model, and there's no solid theoretical argument that it's the best model. Until now, the alternative models have been rather hard to test. If Bitcoin thrives as a deflationary currency, we'll get to test them. But if people tell you that Bitcoin is a disaster for the economy because it's not a good idea in a Keynesian model, that just means they don't believe a non-Keynesian model can be valid. See if you can find out why.

Another interesting point to consider is that, if the people who fear that Bitcoin will destroy their Keynesian economies and replace them with anarchy and misery are right, then surely that means there's something wrong with their Keynesian economies. If the introduction of something like Bitcoin - which will, presumably, inevitably be discovered at some point - destroys their economy, then it's inherently unstable, like a lump of explosive just waiting for the correct stimulus to make it all come falling apart. That, alone, is an argument that this kind of economy is unsafe and we need to find a better one.

Bitcoin will allow money laundering and assassinations, will make tax evasion easier, and will cause the destruction of the welfare state

Well, BTC isn't all that anonymous; http://www.businessinsider.com/220-million-sheep-marketplace-bitcoin-theft-chase-2013-12 is an interesting case in point.

Also, any argument that BTC's pseudonymity will have drastic effects needs to explain why, in this respect, it's different from existing untracked means of transferring value, such as gold. Gold is easily bought and sold, and is less traceable than BTC; it's not even that "heavy and bulky" ($100,000 ~= 3kg of gold, about half a cupful; if your monthly income is "heavy and bulky" in gold, you're lucky). Bitcoin has the edge over gold in being able to be transferred easily online, but at the cost of those online transfers being public knowledge.

Bear in mind that if any legitimate business or person transacts with you in BTC, they're bound (in the UK, at least) by the same legal requirements with regard to declaring it for taxation purposes as if they'd transacted in USD or GBP (in traceable bank-transfer or anonymous cash form) or gold. If I receive the income from my work in BTC instead of in pounds Sterling, my employer/customer and I are just as liable for correctly accounting for it for taxation purposes. People have tried evading tax by using alternative means of payment before ("payments in kind"), and there's already a framework in place to prevent it. Bitcoin is in no way "designed" for tax evasion; it's designed to allow pseudonymous transfers of Bitcoin amounts securely, while making those transfers entirely public as a side-effect, whereas handing over gold or cash is anonymous. Is the fact that Bitcoin transfers can be done without moving physical objects really going to outweigh the traceability of Bitcoin in its usefulness for hiding income from the tax authorities? If Bitcoin won't help me avoid paying tax, it can't simply cause the death of the welfare state through fiscal starvation.

So let's look at how Bitcoin might be used to buy drugs, assassinations, child porn, and other illegal products and services. Whether you use a widely-known site like the Silk Road used to be (before it was shut down, having already been quietly infiltrated by the FBI some time ago) or a more discreet arrangement with a "dealer" contacted through word of mouth, you're going to be transferring some of your bitcoin to an address they provide. Perhaps you'll try and launder it through a mixing service first, but unless you're dealing in small amounts, those aren't very safe (even if the laundering service is "honest" and doesn't keep a log). So now you've sent your money to a criminal, and made a public record of it. If they, or another criminal they then send the money on to, are then caught using it as part of the proceeds of crime, it's linked to you. (Even more so if you used it to pay somebody to post drugs to your home address...)

If you'd handed them some cash (or gold), the link back to you is far more tenuous.

And if you're a criminal receiving bitcoins for murders, child porn, or whatever, it's only a matter of time before one of those Bitcoin transactions is linked to a crime (either yours, or that of a criminal customer of yours). So if you ever spend those bitcoins on anything that gets delivered to your house, or convert them into other currencies at an exchange, they are now linked to your personal identity, and the police will come visiting, as former Silk Road drug dealers are believed to be finding out at the time of writing.

I hold some bitcoins, which I've bought on exchanges. Not only are all those purchases available for examination by law enforcement - and each linked to my bank account details and, later, scans of my ID documents - but they all go to the same Bitcoin wallet, which means that they're linked to each other (and any illicit secret income from other sources) when I spend money, too. If I keep my illicit and legitimate bitcoins strictly separated (like I keep my public donation vanity addresses separate from my bought bitcoins, in order to preserve my privacy), then I can't spend my illicit earnings on shiny toys delivered to my house, and have to remain living like a pauper despite having vast theoretical wealth in ill-gotten gains. No fun!

Oh, and by the way, if your local intelligence agency decides to log Bitcoin traffic, they'll be able to see whose Internet connection you're emitting transactions on, too. Perhaps you can use Tor to avoid that, if you're careful to avoid timing attacks...

They're causing people to waste electricity on proofs of work

Despite the alarming figures in the article above, Bitcoin mining is a small fraction of the energy used by computers worldwide, and is likely to remain so. The carbon footprint of conventional banking and payments, with its extensive use of bits of paper, is also not to be sneezed at; and Bitcoin could replace that. Also, as the cost of power dominates the cost of a mining operation, a likely future trend is for bitcoin mining to occur in places where power is very cheap (such as near hydroelectric dams) or where heat is a desirable by-product which would have to be obtained via some means anyway (heating industrial processes or homes) - trends which cloud computing datacentres, which still dwarf bitcoin mining in power consumption, have already started to show.

People will steal computer power to generate Bitcoins

Botnets are losing their appeal for bitcoin mining as mining moves over to specialised hardware (the returns on mining with CPUs and GPUs have decreased in the years since the paper linked to was written), which also makes it increasingly hard for criminal botnet operators to dominate the Bitcoin supply, or to take over the mining network in order to control the rules of the Bitcoin economy. Bitcoin gives botnet operators a new way to make money, which is a shame - but there are already plenty of reasons to run a botnet; eliminating them through better software security architectures and user education needs to remain a goal for us all...

The proportion of the wealth held by the top one percent of Bitcoin holders is significant

This is probably true (although it's hard to tell with the early-mined bitcoins, which have generally never moved, so can't be traced to an owner until they're spent), but this is largely due to the injustice of the larger world in which Bitcoin exists; since that wealth inequality holds for the people interested in buying Bitcoin, it ends up holding within Bitcoin, too. People converting their dollar fortunes into Bitcoin fortunes does nothing (directly) to change the wealth distribution of the world. What is more interesting is the effect of Bitcoin's value changes compared to the value of existing currencies; assuming Bitcoin continues to rise in value and becomes a fungible and liquid currency, there will be a generation of early-adopting crypto geeks (generally not very rich people) who will get rich off it, from early mining experiments or from buying bitcoins as a curiosity when they were valued at a dollar or so, and a later generation of already-rich investors who bought them once they showed their tendency for exponentially rising value and got even richer. The latter group will have ridden the growth of Bitcoin to increase the wealth inequality, while the former group will presumably have used it to narrow the wealth gap, by going from not rich to rich.

But both will be "one-off" events; once the Bitcoin price stabilises, the meteoric growth will stop, and nobody will be able to get rich from it again. Bitcoin is just following the same pattern as the value of shares in a successful tech company, although perhaps on a larger scale; it is an endemic problem of our economic system that such growth events make a few lucky early adopters rich from nothing, and a middle generation of quick investors even richer than they already were.

"Bitcoiners are techno-libertarian anarcho-syndicalists"

I've often heard it complained that Bitcoin adopters are crackpots with a fringe economic theory they're trying to force upon the world, either blind to its obvious flaws, or greedy to move to a system that will reward people in their situation, with no regard for the costs to humanity as a whole.

I'm sure some of them are. And I'm sure that some of its opponents are also crackpots with fringe economic theories of their own, which Bitcoin is a threat to.

But most Bitcoin enthusiasts are either people looking to make money by investing in it, or people who want a better way to pay for things online. I fall into the latter category; I was attracted to Bitcoin because I want to be able to pay my monthly outgoings from a cronjob, rather than setting up standing orders that get delayed if they happen to fall on a weekend or bank holiday, and because I want to be able to create Web applications that can accept payments without huge surcharges, risks of chargebacks, and other such obstacles. I bought some bitcoins back when they cost £2.20 each, in order to experiment with them and to try and promote a Bitcoin economy by spending them; I've managed to spend a few, but the range of available options was disappointingly small until quite recently. The immense growth in the value of my little hoard has been a pleasant surprise, to be honest, but I've never had the spare cash to be a proper investor!

Conclusions

Bitcoin has already changed the world, if subtly; and I think it's showing no signs of stopping. And I can't honestly tell if it's going to be a force for good or a force for bad, in the long run; we know so little about how economies actually work that we can't even predict the behaviour of the one we already have, let alone theoretical new ones. But what I do know is that ours has lots of flaws, both in terms of the theoretical underpinnings of a debt-based economy, and the practical implementation of banks, markets, and taxation as we have them. The flaws in the current system are hard to address, even if we knew how to fix them, because it has a tremendous inertia; so, although I think Bitcoin has some definite advantages, and some definite disadvantages, and many unknowns, I think it's about time we took a risk and shook things up a little...

Insomnia

There's something about the combination of having spent many weeks in a row without more than the odd half-hour here and there to myself (time when I get to do whatever I like, rather than merely choosing which of the list of things I need to get done urgently I will do next, or just having no choice at all), and knowing I need to get up even earlier the next morning than usual (to dive straight into a long day of scheduled activities), that makes it very, very, hard for me to sleep.

So, although I got to bed in good time for somebody who has to wake up at six o'clock, I have given up lying there staring at the ceiling, and come down to eat some more food (I get the munchies past midnight), read my book without disturbing Sarah with my bedside light, and potter on my laptop. I need to be up in five hours, so hopefully emptying my brain of whirling thoughts will enable me to sleep.

There are lots of things I want to do. Even though it's something I need to get done by a deadline, I'm actually enthusiastic about continuing the project I was working on today: making an enclosure for our chickens. This is necessary for us to be able to go away from the house for more than one night, which is something we want to do over Christmas; thus the deadline.

Three of the edges of the enclosure will be built onto existing walls or woodwork, but one of them needs to cut across some ground, so I've dug a trench across said bit of ground, laid an old concrete lintel and some concrete blocks in the trench after levelling the base with ballast, and then mixed and rammed concrete around them. When I next get to work on it, I'll mix up a large batch of concrete and use it to level the surface neatly (and then ram any left-overs into remaining gaps) to just below the level of the soil, then lay a row of engineering bricks (frog down) on a mortar bed on top of that in order to make a foundation that I can screw a wooden batten to. With that done, and some battens screwed into the tops of existing walls that don't already have woodwork on, I'll be able to build the frame of the enclosure (including a door), then attach fox-proof mesh to it, and our chickens will have a new home they can run around in safely.

Thinking about how I'm going to lay the next batch of concrete in a nice level run, working around the fact that I only have a short spirit level by placing a long piece of wood in there and levelling it with wedges and then using it as a reference to level the concrete to, has been one of the things running around in my head this evening.

Another has been the next steps from last Friday, when I had a fascinating meeting with a bunch of interesting people in the information security world. You see, I've always been interested in the foundation technologies upon which we build software, such as storage management, distributed computing, parallel computing, programming languages, operating systems, standard libraries, fault tolerance, and security. I was lucky enough to find a way into the world of database development a few years ago, which (with a move to a company that produces software to run SQL queries across a cluster) has broadened to cover storage management, distribution, parallelism, AND programming languages. So imagine my delight when said company starts to develop the security features in the product, and I can get involved in that; and even more when (through old contacts) I'm invited to the inaugural meeting of a prestigious group of people interested in security. That landed me an invite to the second meeting (chaired by an actual Lord, and held in the House of Lords!), the highlight of which was of course getting to talk to the participants after the presentations. I found out about the Global Identity Foundation, who are working on standardising the kind of pseudonymous identity framework I have previously pined for; I'm going to see if I can find a way to get more involved in that. But I need to do a lot of reading-up on the organisations and people involved in this stuff, and figuring out how I can contribute to it with my time and money restrictions.

I'd really like to have some quiet time to work on my secret fiction project, too. And I want to investigate Ugarit bugs. Some bugs in the Chicken Scheme system have been found and fixed lately, so I need to re-test all these bugs to see if any of the more mysterious ones were artefacts of that. I'm in a bit of a vicious circle with that; the longer it is since I've been tinkering with the Ugarit internals, the longer it'll take me to get back into it, and the more nervous I feel about doing so. I think I might need to pick off some lighter bit of work with good rewards (adding a new feature, say) and handle that first, to get back into the swing of things. Either way, I'll need a good solid day to dig into it all again; trying to assemble that from sporadic hours just won't cut it.

I'm still mulling over issues in the design of ARGON. Right now I'm reading a book on handling updates to logical databases - adding new facts to them, and handling the conflicts when the new facts contradict older ones, in order to produce a new state of the database where the new fact is now true, but no contradictions remain. I need to work this out to settle on a final semantics for CARBON, which will be required to implement distributed storage of knowledge within TUNGSTEN. I need a semantics that can converge towards a consensus on the final state of the system, despite interruptions in internal network connectivity within the cluster causing updates to arrive in different orders in different places; doing that efficiently is, well, easier said than done.

I really want to finish rebuilding my furnace, which I hoped to get done this Summer, but I'm still assembling the structural supports for it. I've made a mould to cast shaped refractory bricks for the lining of the furnace, but I've yet to mix up the heatproof insulating material the bricks need to be made out of and start casting the bricks, as I still need to work out how I'll form the tuyere.

I want to get Ethernet cabled to my workshop, because currently I don't have a proper place for working on my laptop; I have to do it on the sofa in the lounge to be within range of the wifi, which isn't very ergonomic, doesn't give me access to my external screens, and is prone to interruption by children. I find it very motivating to be in "my space", too; the computer desk in the workshop is all set up the way I like it. And just for fun, I'd like to rig the workshop with computer-controlled sensors and gizmos (that kind of thing is a childhood dream of mine...).

This past year, I've tried booking two weekend days a month for my projects, in our shared calendar. This worked well at the start of the year, with projects such as the workshop ladder and eaves proceeding well, but it started to falter around the Summer when we got really busy with festivals and the like. I started having to fit half-days in around other things, which meant spending too much time getting started and clearing up compared to actually getting things done, so my morale faltered; and with so much other stuff on, I've been increasingly inclined to spend my free time just relaxing rather than getting anything done. On a couple of occasions I've tried taking a week off work to pursue my projects, but I then feel guilty about it and start allocating days to spending more time with the children or tidying the house, and before I know it, five days off becomes one day of actual project work. I need to stop feeling guilty about taking time to do the things I enjoy, because if I don't, I'll be too tired and miserable to do a good job of the things I should be doing! And rather than booking my monthly project days around other stuff that's going on, next year I'm going to mark out my two days each month in advance, and then move them elsewhere in the month if Sarah needs me to do something on that particular day, to decrease the chance of ending up having to scrape together half-days around the month (or to skip project days entirely, as I ended up doing last month). I feel awful about saying I'm going to spend days doing what I feel like doing rather than the things the rest of my family need me to drive them to, but if I don't, I think I'm going to fall apart!

Now... off and on I've spent forty minutes writing this blog post. So with my whirling thoughts dumped out, I'm going to go back to bed and see if I can sleep this time around. Wish me luck!

Alien number systems

An interesting question came up on Twitter, and it started to be hard to fit what I wanted to say in tweets, so I decided to write it up on here.

Basically, the brief (as I read it) was to design a writing system for numbers that might be used by a civilisation who used numbers to measure physical quantities, rather than to count things. My theory is that we developed positional number systems as they make it easy to add up totals in columns, and that accounting was the original driving force behind our development of numbers.

Now, scientists and engineers like to use "scientific notation", which means you write a number like "1.57 * 10^5": generally three or so digits, written in the form of a single digit, a decimal point, then two or so more digits, then a multiplier by a power of ten. That's convenient because real-world measurements generally have a given precision, easily expressed as a number of digits that can be obtained, independent of their magnitude, which is then easily expressed as an exponent.

So, I reckon, a civilisation that built its number system for scientific notation might do things a bit differently.

So here's what I came up with.

Let's have ten digits; I'll write them as 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 as that's a convenient set of symbols on my keyboard, but to avoid confusion, I'll write numbers in my proposed system inside curly brackets, like so: {943}.

The digits represent powers of two. 9 is 2^9, or 512; 0 is 2^0, or 1. To write a number in the range 1 to 1023, we turn it into binary and write the digits corresponding to the bits that are one, in descending order. So the number {943} means 2^9 + 2^4 + 2^3, or 536. You always write the digits in descending order.

You can't write zero that way, except as an empty string, but that can be mistaken for "nothing has yet been written", so let's use a separate symbol for zero: say {X}.

If we want to express fractions, we use a radix point. The digits after the radix point are another number that is, basically, divided by 1024; so one and a half would be written {0.9}. If you need more than ten bits of precision after the radix point, use another one; {0..9} would mean 1+1/2048. I'm tempted to reverse the order of the digits on the right of the radix point, making 0 represent 512 and 1 represent 256 and so on, so that {0.0} represents one and a half; but perhaps that's just more complicated.

The way of representing very small or very large numbers, though, is exponential notation. Because the exponent of a number is more significant than the mantissa (the part we've discussed so far), it should go first, with a separator symbol. We write the exponent as a number in the above format; 1024 is raised to that power and then multiplied by the mantissa.

So if we use {$} as our separator symbol, one is written {0}, three is written {10}, 3*1024 is written {0$10}, and so on.

Very small numbers are written using a dividing exponent, which comes AFTER the mantissa and uses a different separator (say {/}). So {10/0} is 3/1024.

A number like 1023 is awkward to write - it's {9876543210}. But unless you need that level of precision, the entire ten bits, you'd normally just round it to 1024 - {0$0}.
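
To check my own description, here's a small sketch (in Python, just as a convenient notation; the post proposes no implementation) that renders whole numbers in this system, covering the plain and exponent forms but not the radix point or the dividing exponent:

    DIGITS = "0123456789"  # digit d stands for 2**d

    def encode(n):
        """Render the non-negative integer n in the {...} notation above."""
        if n == 0:
            return "{X}"              # the special zero symbol
        exponent = 0
        while n >= 1024:              # pull whole factors of 1024 into the exponent
            if n % 1024:
                raise ValueError("needs a radix point; not handled in this sketch")
            n //= 1024
            exponent += 1
        # bits that are one, written as digits in descending order
        mantissa = "".join(DIGITS[b] for b in range(9, -1, -1) if n & (1 << b))
        if exponent:
            return "{" + encode(exponent)[1:-1] + "$" + mantissa + "}"
        return "{" + mantissa + "}"

This gives {943} for 536, {10} for three, {0$10} for 3*1024, {0$0} for 1024, and the awkward {9876543210} for 1023, matching the examples above.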

I chose ten digits, not because I happen to have ten of them on my keyboard, but because it means that the simple form with no radix points gives you a range from 0..1023, which is about three significant figures in decimal: the precision to which higher-precision engineering measurements are made. It's just plain difficult to be more precise than that with mass-produced instruments (you can, but the instruments tend to be very fussy about being calibrated and looked after). A civilisation with better technology than us might routinely use ten bits of precision by default for day to day calculations, I reckon.

The reason I went for the powers-of-two-as-digits representation is that you use more digits to represent more accuracy, rather than larger numbers as we do in our positional number system. However, there's some wasted space; I mandated that the digits be listed in descending order, so my number system doesn't have a meaning for a digit sequence like {123}. Perhaps that could be used for something?

Is information security good?

One of the interesting things to have come from Edward Snowden's leaks of classified documents is that the American National Security Agency has been working to introduce flaws into the design and implementation of security technologies, in order to make it easier for them to break said security for their own ends.

There's been a lot of outrage about that. The argument for it is that the ready availability of strong security technology makes it easier for bad folks to conceal their crimes (and, worse, conceal the fact that they are planning such crimes, so they cannot be stopped in advance), so the NSA is right in acting to make sure people don't have strong security technology. However, even if we can trust the NSA (and that is far from certain) such vulnerabilities can be found by people we certainly can't trust: "cyber-criminals" intent on stealing our credit card details in order to rob us of our money, commercial competitors looking for strategic advantage, and so on.

There are also deeper issues that have been raised; this means that the NSA is covertly working to sabotage the products of US companies. Should they be allowed to do that? Can those companies now sue them for damages?

But I think that, at the heart of the debate over this, is an even deeper issue.

We have the NSA - the part of the US government officially responsible for information security - acting to subvert the information security available to US individuals and companies, on the grounds that it is harmful to the public if they have strong security; while on the other hand, we have individuals and companies striving for better security: working to make more secure products, choosing products that claim to provide security benefits, and so on.

This shows, to me, that there's a big unresolved question that US society as a whole - government and non-government together - needs to ask itself: Is information security good? The government's official position seems to be that information security is harmful, as it makes it harder to provide a more general notion of security that is threatened by criminals, foreign governments, and terrorists; while everyone else's position seems to be that information security is good, because they don't want criminals and foreign governments stealing their secrets (terrorists don't seem to have cottoned on to this trick yet) - and, maybe, because they don't want the government knowing ("stealing" is a contentious term here, as the government gets to define what "stealing" is) their secrets, too.

So before they can really debate whether the NSA's actions are justified or not, I think the US needs to step back and look at the bigger question: Should information security be a right, or not? If not, then they should just use legislation to stop companies and people from wasting resources trying to achieve it, while other resources are being spent on subverting it so that they only receive an illusion thereof; that's just plain inefficient. And if information security is deemed good, then the NSA should be prevented from subverting it, and should refocus its efforts on ways of doing its job that don't depend on being able to break encryption; traffic analysis, metadata analysis, exploiting specific installations of security systems where a threat is suspected, and so on are all time-honoured mechanisms that work even against well-educated adversaries using encryption systems that the NSA hasn't been able to subvert.
