It’s time for Bitcoin to die

In my original writings about Bitcoin (2011: Bitcoin Security; 2013: The Ups and Downs of Bitcoin; Bitcoin Pseudonymity; The True Value of a Bitcoin; On the Unfair Distribution of Capital; Bitcoin and Banks; Bitcoin: Better than a Euro Bank?), I was pretty positive about the whole thing, but since then I've changed my mind, for three reasons:

  1. The Bitcoin community has become dominated by price speculation/investment, obsessing about its current price, rather than about actually doing Internet money properly.
  2. Perhaps because the price has surged so rapidly, it's become very profitable for miners to compete for the mining bounties, leading to enormous amounts of mining hardware being manufactured and consuming enormous amounts of power.
  3. The Bitcoin protocol has struggled to scale to handle high transaction volumes, in part due to it being a difficult technical problem and in part due to politics with various groups fighting over the correct solution (a fight which, to some extent, is fuelled by the vested interests of investors and miners wanting to keep the status quo), leading to transaction fees being unreasonably high as people compete to get their transactions processed.

In the early days, an increasing number of online shops accepted bitcoin; but many have now stopped, and new ones no longer seem to be appearing. Bitcoin's bid to become Proper Internet Money has, sadly, failed. Perhaps the mining energy issue could have been avoided with a different hashing function less amenable to economies of scale, or if the mining rewards had been set lower so that mining wasn't so profitable (or would reducing the supply just push prices up further?)... The scaling issues, with their high fees and slow transactions, have meant that Bitcoin remains clunky compared to card payments, so it never took off very well as a way of paying for stuff; its value focussed more and more on being an investment and a way of transferring large sums. Ironically, it did become "digital gold", but not in a good way.

But Bitcoin isn't the only fish in the sea... Discounting all the I-made-my-own-blockchain clones with no real technological differences, a few other interesting cryptocurrencies have arisen.

First worth mentioning would be Ethereum, which extends Bitcoin's transaction processing model to a much more generic distributed computation model, leading to all sorts of interesting (and sometimes hilarious or horrible) things. However, my main conclusion from watching all this is that human software development practices aren't mature enough to write autonomous financial algorithms yet, so Ethereum is, in my mind, an interesting experiment but not practically useful for anything yet. And it's also proof-of-work based, so has the same problem with miners consuming power, and has people speculating on it as an investment, and so on.

But far more interesting to me right now is Nano, which aims squarely at Bitcoin's original goal - being Internet money. The distributed consensus algorithm doesn't involve mining blocks, so is fast and there aren't transaction fees, and there aren't any miners burning CPU cycles to try and win money. I've tried it, and it actually works; you tell your wallet an address to send to and an amount (or scan a QR code) and press "send" and the money is ready to spend in the recipient's wallet in about a second - with no fees deducted. Instead of using hashing power to break ties, the network uses voting power voluntarily assigned; every Nano wallet appoints a "representative" node, and the voting power of a node is the sum of the balance of the wallets appointing it. Wallets can change what node is their representative instantly by sending a message to the network, so the community can easily ensure that the voting weight is widely spread to avoid anybody having too much power, and debates about protocol changes can be resolved by letting users choose which nodes (running different versions of the software) they give their voting weight to.
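
To make that concrete, here's a toy sketch of the weight calculation (my own illustration, not actual Nano node code): each account records a balance and its chosen representative, and a node's voting weight is just the sum over the accounts delegating to it.

;; Each account: (name balance representative).
(define accounts
  '((alice 100 rep-1)
    (bob    50 rep-1)
    (carol 200 rep-2)))

;; A node's voting weight is the sum of the balances of the
;; accounts that have appointed it as their representative.
(define (voting-weight rep)
  (let loop ((accts accounts) (total 0))
    (cond ((null? accts) total)
          ((eq? (caddr (car accts)) rep)
           (loop (cdr accts) (+ total (cadr (car accts)))))
          (else (loop (cdr accts) total)))))

(voting-weight 'rep-1) ; => 150
(voting-weight 'rep-2) ; => 200

Re-pointing bob at rep-2 would instantly move his 50 to the other node's weight, which is all a "representative change" amounts to.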

It's not all perfect, however. As transactions are free and fast, an attacker can spam the system by creating a bunch of wallets and shuffling tiny amounts of money between them, burdening the network with validating and storing those transactions. As part of the countermeasures against that, transactions submitted by wallets need a small proof-of-work attached - meaning that you DO need to burn CPU cycles to make transactions. The amount of CPU work required rises with the load on the network, automatically increasing the cost of a transaction as demand approaches what the system can sustain; so will rising legitimate demand outpace improvements in node hardware and hosting, until the proof-of-work cost of a transaction becomes excessive? Will spammers continue to find ways around the limitations and overload the network (as I write this in March 2021, Nano is recovering from a recent spam attack that delayed transactions for days, while the developers work on some cunning new algorithms to prevent it happening again)?
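
The principle of that load-based throttling is easy to sketch (a toy of my own, with a made-up mixing function standing in for the real cryptographic hash, and unrealistically small numbers): raising the threshold makes the nonce search, and therefore every transaction, cost more CPU.

;; Toy stand-in for a real cryptographic hash: mixes a transaction
;; id and a nonce into a number in the range [0, 65536).
(define (toy-hash tx nonce)
  (modulo (+ (* tx 2654435761) (* nonce 40503) 12345) 65536))

;; Search for a nonce whose hash clears the difficulty threshold;
;; the network can raise the threshold as load rises, making each
;; transaction cost more CPU to submit.
(define (find-work tx threshold)
  (let loop ((nonce 0))
    (if (>= (toy-hash tx nonce) threshold)
        nonce
        (loop (+ nonce 1)))))

(find-work 42 65000) ; ~120 tries on average; raise 65000 and it grows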

And, like any currency whose price is set by the market, Nano will attract speculators, leading to price volatility, which is at the very least an inconvenience to its use as an actual currency.

But it's already allowing some interesting applications. WeNano is a... game of sorts? It's a smartphone-based Nano wallet, but its primary feature is that one can create Pokestop-esque "spots" at geographical locations on Earth, to which people can donate Nano, and then people within a certain distance of that spot can claim some of it (subject to a rate limit). The spots also have a chat function, and can act as trading hubs for classified ads paid in Nano; there's also some business integration for accepting Nano payments that I've not looked into. Spots have been created by people wanting to promote Nano, and as a way to send aid to economically unstable countries (you don't need to be near a location to create a spot there or donate money to it); and presumably, if Nano becomes more widespread, businesses will place spots at their locations to attract footfall. Only a payment system with zero fees could make such a system practical, given the small amounts of money involved. And, to my great relief, it's showing takeup in actual payment applications, such as the Wirex debit card and Kappture point-of-sale payment systems.

Nano isn't the only consensus algorithm without mining, though - there's also the Avalanche algorithm, which looks promising but hasn't built the community of people and applications that Nano has. I have high hopes for it, though!

So - Bitcoin must die, as it's failed to become a useful financial system, and is now just wasting resources on mining. The technology of distributed consensus has moved on, and Bitcoin and its many clones are now propelled forwards by sheer inertia.

Lambda bodies in Scheme

So, if you look at a recent Scheme standard such as R7RS, you'll see that the body of a lambda expression is defined as <definition>* <expression>* <tail expression>; zero or more internal definitions, zero or more expressions evaluated purely for their side-effects and the results discarded, and a tail expression whose evaluation result is the "return value" of the resulting procedure.
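
So a procedure body can contain all three parts, in that order:

(lambda (x)
  (define y (* x 2)) ; a <definition>
  (display y)        ; an <expression>, evaluated only for its side-effect
  (+ x y))           ; the <tail expression>, whose value is returned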

I used to find myself using the internal definitions as a kind of let*, writing procedures like so:

(lambda (foo)
  (define a ...some expression involving foo...)
  (define b ...some expression involving a and/or foo...)
  ...some final expression involving all three...)

But the nested defines looked wrong to me, and if I was to follow the specification exactly, I couldn't intersperse side-effecting expressions such as logging or assertions amongst them. And handling exceptional cases with if involved having to create nested blocks with begin.

For many cases, and-let* was my salvation; it works like let*, creating a series of definitions that are inside the lexical scope of all previous definitions, but also aborting the chain if any definition expression returns #f. It also lets you have expressions in the chain that are just there as guard conditions; if they return #f then the chain is aborted there and #f returned, but otherwise the result isn't bound to anything. I would sometimes embed debug logging and asserts as side-effects within expressions that returned #t to avoid aborting the chain, but that was ugly:

(and-let* ((a ...some expression...)
           (_1 (begin (printf "DEBUG\n") #t))
           (_2 (begin (assert (odd? a)) #t)))
  ...)

And sometimes #f values are meaningful to me and shouldn't abort the whole thing. So I often end up writing code like this:

(let* ((a ...)
       (b ...))
  (printf "DEBUG\n")
  (assert ...)
  (if ...
      (let* ((c ...)
             (d ...))
        ...)
      ...))

And the indentation slowly creeps across the page...

However, I think I have a much neater solution!

Read more »

The Polyp Mixer

So, on my desk I often have a desktop computer and a laptop. I've got a decent HDMI/USB KVM switch so I can flip my big monitor, keyboard and mouse between the two, and that's great.

However, I also have a hi-fi amplifier and speakers for audio output. This is hooked up to the desktop PC, and has selectable inputs, one of which is connected to a lead for the laptop - but I rarely plug the laptop in. This is because I can only select one input on the amplifier; and although I'm usually only listening to media from one device, I want to be able to hear notification pings from either. So I tend to leave the laptop on its own nasty little speakers and only have nice audio from the desktop PC.

Clearly, this sucks. Many years ago I had a cheapo mixing console that sat on my desk, with my CD player, minidisc player, and PC connected to the inputs, outputting into my amplifier; it was cool to be able to just hit play on anything and hear the result through my good speakers, and having all those knobs and sliders to play with was definitely gratifying. However, it was bulky, full of useless-to-me features like phono inputs and cross faders, and eventually died a death from being left switched on all the time.

Plus, I'd recently resolved to do more electronics, so there was only one thing to do: Make a mixer.

The Polyp Mixer

Read more »

Receipt printer hacking

So, for Christmas, I got a receipt printer. It's a Jepod JP-5890K, the important specifications of which are:

  • Mains powered
  • USB connectivity (appears as a standard USB printer)
  • 58mm wide thermal paper rolls (widely available, cheap)
  • 384 dot horizontal resolution
  • No automatic cutter, you need to tear the paper off yourself
  • Costs less than £30

I asked for this thing because I noticed I was using a lot of Post-It notes to basically copy stuff down off the screen, and automating that seemed fun.
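
Driving it is pleasingly low-tech. Cheap thermal printers like this generally speak some variant of the ESC/POS command set, and on Linux a USB printer appears as a device node you can write bytes to; assuming the JP-5890K follows suit and turns up as /dev/usb/lp0 (both assumptions on my part), a minimal sketch looks like:

;; A minimal sketch: drive an ESC/POS-style receipt printer by
;; writing raw command bytes to its device node (path assumed).
(define printer-path "/dev/usb/lp0")

(define (print-receipt lines)
  (let ((out (open-binary-output-file printer-path)))
    (write-bytevector (bytevector #x1B #x40) out) ; ESC @ : initialise printer
    (for-each
     (lambda (line)
       (write-bytevector (string->utf8 line) out)
       (write-u8 #x0A out))                       ; LF : print and feed one line
     lines)
    (write-bytevector (bytevector #x0A #x0A #x0A) out) ; feed past the tear bar
    (close-port out)))

(print-receipt '("Remember to buy milk" "and more receipt paper"))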

Read more »

Scoping generic functions

So, my favourite model of object-oriented programming is "Generic Functions".

The idea is that, rather than the more widespread notion of "class-based object orientation" where methods are defined "inside" a class, the definition of types and the definition of methods on those types are kept separate. In practice, this means three different kinds of definitions:

  1. Defining types, which may well be class-like "record types with inheritance" with rules about what fields can be read/written in what scopes and all that, but could be any kind of type system, as long as it defines some sort of "Is this an instance of this type?" relationship, possibly allowing subtyping (an object may be an instance of more than one type, but there's a "subtype" relationship between types that forms a lattice where any graph of types joined by subtype relationships has a single member that is not a subtype of any other member).
  2. Defining generic functions, by providing the structure of the argument list (but not the types of the arguments, although in systems with subtyping there may be requirements that some arguments' types are subtypes of some parent type) and the type of the return value, and binding that to a name.
  3. Defining methods on a generic function, each of which maps a set of actual argument types to an implementation of that function.

Note that the method refers to the type and the generic function, and is the only thing that "binds them together". Unlike in class-based OO, the definition of the type does not need to list all the operations available on that type. For instance, one module might define a "display something on the screen" generic function taking a thing and a display context as arguments; this module might be part of a user interface toolkit library. Another module might define a type for an address book entry, with a person or organisation's name and contact details. And then a third module might provide an implementation of the display-on-screen generic function for those address book entries. All three modules might well be written by different people, and only the third module needs to be aware that both the other modules exist; their authors might never hear of each other.
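
Scheme is a nice sandbox for playing with this; here's a stripped-down sketch of my own (a toy predicate-based dispatcher, not any particular object system) showing the three kinds of definition, with the third module being the only thing that knows about the other two:

;; "Module 1", a UI toolkit: defines the generic function as a table
;; of (predicate . implementation) methods, plus a dispatcher.
(define display-on-screen-methods '())

(define (define-display-method! pred impl)
  (set! display-on-screen-methods
        (cons (cons pred impl) display-on-screen-methods)))

(define (display-on-screen thing context)
  (let loop ((methods display-on-screen-methods))
    (cond ((null? methods) (error "No applicable method" thing))
          (((caar methods) thing) ((cdar methods) thing context))
          (else (loop (cdr methods))))))

;; "Module 2", an address book: defines a type, knowing nothing of UIs.
(define (make-entry name phone) (vector 'entry name phone))
(define (entry? x) (and (vector? x) (eq? (vector-ref x 0) 'entry)))

;; "Module 3": a method is the only thing binding the two together.
(define-display-method! entry?
  (lambda (entry context)
    (string-append "Name: " (vector-ref entry 1)
                   ", tel: " (vector-ref entry 2))))

(display-on-screen (make-entry "Alice" "01234 567890") 'some-context)
;; => "Name: Alice, tel: 01234 567890"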

This is good for programmers, in my opinion, as it makes it easier to build systems out of separately-designed parts; it exhibits what is sometimes called "loose coupling". In a class-based system, the author of the address book type would need to be aware of the user-interface toolkit, make sure their address book entry class implemented the "display on a screen" interface, and declare an implementation of the UI logic (which might not be their interest, especially if there's a large number of UI toolkits to choose from). Otherwise, users of the address book class in combination with that UI toolkit would need to do the tiresome work of writing "wrapper classes" that contain an address book entry as an instance member and implement the display-on-a-screen interface, then wrap and unwrap address book entries as they move in and out of the user-interfacing parts of the application.

"Ah, but what if the user inherits from the address book entry class and implements the display-on-screen interface in their subclass?", you might say, but that's only a partial solution: sure, it gives you objects that are address book entries AND can be displayed on screen, but only if you explicitly construct an instance of that class rather than the generic address-book entry class - and third party code (such as parts of the address book library itself) wouldn't know to do that. Working around this with dependency injection frameworks is tedious, and success relies on every third-party component author bothering to use a DI framework instead of just instantiating classes in the way the language encourages them to do. An ugly solution, when generic functions solve the problem elegantly.

It also provides a natural model for multiple dispatch. Class-based "methods within classes" mean that every method is owned by one class, and methods are invoked on one object. In our address book UI example, the generic function to display things on screens accepts two arguments - the thing to display and a display context. In a class-based system, this means that the display method defined on our address book entry is passed a display context argument and can invoke operations on it defined by the display context class/interface/type, and if it wants different behaviour for displaying on a colour versus monochrome screen (remember them?) it needs to make that a runtime decision. However, in a generic function system, there would be separate subtypes of "display context" for "monochrome" and "colour", each defining different interfaces for controlling colours. This means you can provide separate methods on the display GF for an address book entry in colour or monochrome or, if you didn't need to worry about colour as you just displayed text in the default style, have a single implementation in terms of the generic "display context" supertype.

This feature is particularly welcome for people writing arithmetic libraries, who want to define multiplication between scalar and matrix, matrix and scalar, matrix and vector, vector and matrix, vector and scalar, scalar and vector, etc.
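
A toy sketch of my own again (the same predicate-table trick, now keyed on both argument types): each combination is just another entry in the table, so a third-party quaternion module could register its own methods without touching this code:

;; A generic multiply dispatching on the types of BOTH arguments.
(define mul-methods '())

(define (define-mul! pred-a pred-b impl)
  (set! mul-methods (cons (list pred-a pred-b impl) mul-methods)))

(define (mul a b)
  (let loop ((ms mul-methods))
    (cond ((null? ms) (error "No applicable method" a b))
          ((and ((caar ms) a) ((cadar ms) b)) ((caddar ms) a b))
          (else (loop (cdr ms))))))

;; Scalars, and vectors as number lists, just for illustration.
(define (scalar? x) (number? x))
(define (vec? x) (and (pair? x) (number? (car x))))

(define-mul! scalar? scalar? *)
(define-mul! scalar? vec?
  (lambda (s v) (map (lambda (x) (* s x)) v)))
(define-mul! vec? scalar?
  (lambda (v s) (mul s v))) ; flip the arguments and re-dispatch

(mul 2 '(1 2 3)) ; => (2 4 6)
(mul '(1 2 3) 2) ; => (2 4 6)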

You can use run-time type information to implement all of this in a single-dispatch system, but (a) it's tedious typing (in both senses of the word) for the programmer, (b) it is not extensible (if somebody writes a "multiply" method in the "Matrix" class that knows to look for its argument being a scalar, vector, or other matrix, what is the author of a third-party "Quaternion" class to do to allow a Matrix to be multiplied by a Quaternion?), and (c) it robs the compiler of the opportunity to do the really fancy optimisations it can do when it knows that this is a polymorphic generic function dispatch.

However, generic functions present a big problem for me, as an aspiring functional programming language author: scoping.

Read more »
