Ada Lovelace Day: Barbara Snell

For Ada Lovelace Day I'm going to write about my aunt Barbara.

She's never been one to be arbitrarily limited by society - in the 1950s, she went and toured the world on her own, which was quite something for a woman in her early 20s to do!

She's retired now, but her career was in linguistics. In particular, she was a technical translator, translating equipment manuals from other languages to English; I've never obtained a full list of the languages she knows, but (from memory) all the main European languages, Russian, and Japanese have been mentioned.

Anyway, she happened to be working for Xerox when the job of translating some documentation relating to the Xerox Star came across her desk.

At the time, translators worked with typewriters; they'd type up a first draft of the translation, with lots of corrections pencilled in as they went along, as it's quite common to find you want to revise something you've already translated when you come to write a later paragraph. They would then have to type up a better copy incorporating the corrections; but this might then come back with amendments proposed by the marketing department or other stakeholders. So the translators spent a lot of time doing menial work.

So imagine Barbara's excitement when she read the manuals for an electronic word processor... Never one to let convention stand in her way, she petitioned the management to let the translation department have some. The request was eventually fulfilled and, as she predicted, translation became a lot more efficient...

But Barbara continued to be vocal about the opportunities for computers to help with translation, driving developments within the company and starting a series of conferences on the use of computers in translation, which is perhaps why Xerox is considered "the private company that has contributed the most to the expansion of machine translation" (ref).

This was all happening around the time I was born, of course. But when, around 2000, Barbara retired and closed down her own translation business, I had the chance to take my pick of computer equipment as she was clearing out the office; I took away a 486 that became the home router - but I always wished I'd managed to claim her Xerox Star...

Circuits in Epoxy

Continuing from my previous experiments in epoxy casting, this time I decided to cast a circuit in epoxy, as that's my eventual goal.

So I made a test circuit with four LEDs and their series resistors on a large piece of stripboard, with unnecessarily long leads on everything and a few different orientations of components, in order to check whether shrinkage is an issue at the kinds of scales I plan to work at:

the circuit
side profile

Then I mixed up some resin:

mixing the resin

And poured it into a business card box and placed the circuit in. I chose the business card box since it looked like the same sort of plastic the proper moulds at resin-supplies were made of, and that's supposed to not stick to the epoxy:

resin in the box

Luckily, it seemed that the epoxy does indeed not stick to this stuff, as it came out easily, leaving a perfectly clear surface, with no damage to the electronics:

Underside of the casting
Top side of the casting
Detail of the top surface

It's really weird to have one of my sloppily-made circuits that's completely waterproof and ruggedised:

Look, it works underwater!

We had to prop it up with one end out of the water so you can actually see that all the LEDs are lit, though:

Propped up so you can see all the LEDs

I think I'll still need to experiment with silicone moulds, though - as I'm unlikely to find boxes of the right plastic in the precise sizes I want.

Epoxy casting

Inspired by this I set out to learn how to cast circuits in transparent epoxy.

You see, making decent cases for things is sometimes the hardest bit about an electronics project, and an issue that had been a major roadblock in my interest in wearable computers. What point was there in building something if it wouldn't last long under the wear and tear of being attached to me, or would let rainwater get inside it?

So I obtained the smallest set of two-part clear epoxy from resin-supplies.co.uk.


C++

I went the usual route for programmers of my generation; started off in BASIC on an eight-bit home micro, then got a PC and messed around with BASIC there before moving up to Pascal then to C and C++, with meddling with assembly in parallel; assembly was never your main language, just something you had to learn for special things like inner loops and messing with low-level things.

I was interested in programming languages, so I read a lot of books on them, but BASIC, Pascal, assembly and C++ were the only ones I had access to implementations of.

So I was messing with C++ in the early 1990s, with an implementation (Borland Turbo C++ (for DOS!), version 2 IIRC) that had things like classes and iostream, but no templates (so no STL) or RTTI. Those were things I only fiddled with fleetingly when I experimented with DJGPP, a port of GCC to DOS, and I was soon using my new-found Internet access to get hold of other programming language implementations to play with, so I never progressed beyond the basics.

But, occasionally, I heard of people doing interesting things with templates in particular. Templates are usually introduced as a way of giving classes type parameters, like Java 1.5's generics, but they can do a lot more than that; they are closer in spirit to Haskell's type classes. Templates are literally code templates with parameters, which the compiler fills in with values at compile time (and those values can be ordinary values such as integers, or types, or other templates) to generate syntactic constructs such as classes and functions. Normal usage is to define a collection class template in terms of a type parameter, so the compiler can then generate a collection of ints, a collection of pointers to a given struct type, or even a collection of values of an arbitrarily complex class. Templates can be overloaded more or less like C++ functions, so you can have multiple templates with the same 'name' (and signature) but with some being partially specialised, which means that template instantiation can contain conditionals, in effect.
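
To make that concrete, here's a minimal sketch (the names are my own, invented for illustration) of a collection class template with both a type and an integer parameter, and a partially specialised template acting as a compile-time conditional:

```cpp
#include <cstddef>

// A collection template: the compiler generates a concrete class for each
// T it is instantiated with (Stack<int>, Stack<Widget*>, and so on).
template <typename T, std::size_t Capacity = 16>
class Stack {
public:
    Stack() : size_(0) {}
    void push(const T& value) { items_[size_++] = value; }
    T pop() { return items_[--size_]; }
    bool empty() const { return size_ == 0; }
private:
    T items_[Capacity];
    std::size_t size_;
};

// Overloading / partial specialisation: the same 'name', but the more
// specific pattern is chosen when it matches - a conditional, in effect.
template <typename T>
struct IsPointer { static const bool value = false; };

template <typename T>
struct IsPointer<T*> { static const bool value = true; };

// Stack<int> s;            // a stack of ints
// IsPointer<int*>::value   // true
// IsPointer<double>::value // false
```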

Which is where it gets complex. Templates can recurse, generating entire class hierarchies. See, as well as the usual case of a class having a member that's of a type given as a template parameter, a class template could use a template parameter as a class to inherit from. By recursing on that parameter, hierarchies can be built on the fly. Inline function templates that recurse can create arbitrarily complex bits of code. In many ways, it rivals the power of Lisp's macros, except that it was blatantly never designed to be that powerful, so you have to use crazy exotic workarounds built by people who have studied the C++ spec with a fine-toothed comb in order to do all sorts of things.
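
For instance (again a rough sketch with invented names, not anything from a real library), a class template can inherit from its own parameter, and a recursive function template can unroll itself at compile time:

```cpp
// A mixin: a class template that inherits from its template parameter.
struct Core { void run() {} };

template <typename Base>
struct WithLogging : Base {
    void log() { /* record something about the wrapped Base */ }
};

// Layers can be stacked to whatever depth you like, building a hierarchy
// on the fly:
typedef WithLogging< WithLogging<Core> > LayeredCore;

// A recursive inline function template, with a specialisation as the base
// case; power<3>(2.0) expands at compile time into 2.0 * 2.0 * 2.0 * 1.0.
template <unsigned N>
inline double power(double x) { return x * power<N - 1>(x); }

template <>
inline double power<0>(double) { return 1.0; }
```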

Oh dear.

Let's take a closer look at what went wrong.

Those of us who are familiar with one or more programming languages may sometimes find it hard to see things objectively. There's too much accumulated habit in programming, which sometimes prevents us from seeing the wood for the trees. Also, some people who are not familiar with programming may be interested to know what on Earth I'm talking about here, so I'm going to kill two birds with one stone and shift into the soft and floppy world of metaphor.

Let's imagine that programming is like building houses and bridges and things like that. Programming languages, in this metaphor, are construction techniques. Assembly language is like building the structures with an atomic force microscope, positioning atoms by hand in order to build metal crystals, stones, and mortar, in order to create reinforced concrete structures. Although this technique will let you build the strongest possible structures by simply arranging carbon atoms in a tetrahedral structure with evacuated cavities to decrease weight and crystal dislocations in order to harden the structure through stressing - it will take you an age to build anything large, and when you try and split the construction process up, you have to be very careful that the faces where subcomponents meet will match exactly; they need to be precise to an atomic level, since there's no equivalent of slopping mortar into a gap, that will automatically ooze to match the surfaces on both sides.

Also, building a structure that way is intimately tied to the local chemistry. A bridge made out of solid diamond, as described above, is useless in an environment with a high-oxygen atmosphere and a normal temperature range of a few thousand degrees Kelvin, as diamond burns quite nicely. A design for a diamond bridge expressed at the atomic level can't be easily translated into a bridge of the same shape in steel. Even things like the crystal dislocations to harden it and little vacuum bubbles to decrease the weight don't apply in the same way to steel. The metaphor here being that assembly language applies only to a given CPU type.

BASIC, on the other hand, is like Lego. You can very quickly build small structures, even with little skill. They'll look a little lumpy, but it gets small jobs done quickly and is easy to learn. Indeed, it's great as a training tool for potential future engineers, although a lot will have to be unlearnt in order to move from Lego to poured-concrete construction. Also, the same design can be realised in plastic bricks, diamond bricks, or frozen methane-ice bricks, meaning the same design can be applied in lots of different chemistries.

Mainstream high-level languages like Pascal, C, C++ and Java are like assembling things from premade blocks - from bricks up to giant prefabricated beams and lintels. They even let you come up with your own prefabricated components by specifying how they should be built from basic components, although they only come with small basic units, barely higher than those offered by assembly (much smaller than the bricks of BASIC); but at least those basic units are mainly independent of chemistry. Languages with module systems make a surprising difference for building large practical structures in a commercial environment - modules are like catalogues of prefabricated components, which make it easier for them to be shared and reused between projects.

Declarative languages are like building a former out of wood and pouring concrete into it. Rather than explicitly positioning structural members, we just specify the overall shape we want, and let an automatic process (the flowing of liquid concrete under gravity) fill in the details for us. It's quick, but it doesn't give us fine control over the result we get. No graceful suspension bridges.

Dynamic languages are a bit like our BASIC lego bricks (plus the ability to order prefabricated components like higher-level languages), except that we only get one kind of fundamental brick; the good thing is that this one brick can support load AND conduct electricity AND transport water AND transport sewage away. This means you can build some very simple and compact buildings, by having the walls transport electricity and water to where it's needed and take waste water away, but you have to be careful to make sure the bricks don't get confused and do the wrong thing (such as feeding electricity into the water supply, or spewing sewage out of the wall).

And then we get to languages with metaprogramming. Very basic metaprogramming - perhaps at the level of C macros - is a bit like being able to ask for prefabricated components made to custom dimensions. Rather than a catalogue listing "50cmx50cmx10m pre-stressed concrete beam", we can have an "XcmxXcmxYm pre-stressed concrete beam", and fill in our own X and Y when we order it.
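
In C (and hence C++) terms, that level is roughly preprocessor macros: textual templates whose parameters get filled in before the compiler ever sees the code. A trivial sketch:

```cpp
// A "made-to-measure" component: the preprocessor substitutes the
// parameters textually, so this expands to: double samples[1024];
#define DECLARE_BUFFER(NAME, TYPE, SIZE) TYPE NAME[SIZE]

DECLARE_BUFFER(samples, double, 1024);

// Another classic: works for any arguments the resulting expression
// compiles for, but with no type checking or hygiene of its own.
#define MIN(a, b) ((a) < (b) ? (a) : (b))
```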

Whereas C++ templates and Lisp macros are like being able to set up companies that build entire arbitrarily complex building modules to spec. Writing a metaprogramming abstraction is like setting up a company that, given the width and depth of a river and the size of a road, will return you a standard bridge to take that road across that river. The downside is that it'll be their standard bridge that looks about the same as all their other bridges; but the upside is that if you don't like it, you can still build your own bridge out of basic components, or design a bridge template of your own that you reuse. Indeed, you could design a bridge template for multi-lane roads that works by building copies of a single-lane bridge template, side by side. Or a template for long bridges that works by building any number of copies of a simple arch bridge (which can only cross a given maximum distance) between pontoons sunk into the river bed.

But the problem is that C++ is an extension to C. C is a very low-level language, barely above assembly as these things go; every construct in C has an obvious and simple representation in assembly language. C++ is an attempt to add high-level loveliness such as metaprogramming and catalogues of large components on top of that.

And, as such, it's hampered by its low-level past. C++ programmers have to worry about low-level details that higher-level languages handle for you completely - such as storage management; and the limited runtime type information means that a lot has to be statically known at compile time.

Going back to the building metaphor, the fact that C++ requires the documentation for an API to be clear about whether passing in a pointer counts as passing ownership (with the obligation to delete), and under what circumstances that object may be deleted (which has a bearing on what else the caller can do with that pointer after it's been passed in), is a bit like having a high-level catalogue of building parts, but requiring them all to come with chemical formulae and accurate engineering drawings for their joining surfaces. The users of such prebuilt parts have to examine every case where the parts will touch other parts, checking them for chemical compatibility (bolting steel to bronze parts won't do, as they'll electrolytically corrode each other) and making sure the faces will mate correctly; if they won't, then the designer will need to allow for some kind of mortar to go between them, which in itself will have to be chemically compatible with both surfaces.
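
In code, the problem looks something like this (a sketch with invented names; the point is that the raw-pointer signature alone doesn't record the contract):

```cpp
#include <memory>

class Report { /* ... */ };

// Does submit() delete the Report? Keep a reference to it? Expect the
// caller to delete it later? The signature can't say; only the
// documentation can, and the caller has to get it right by hand.
void submit(Report* r);

// One mitigation is to encode the contract in the type itself
// (std::auto_ptr historically, std::unique_ptr since C++11), so the
// transfer of ownership is visible at the call site.
void submit_owning(std::unique_ptr<Report> r);
```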

While higher-level languages are a bit like having international standards for load-bearing surface connections (standard sizes of bolts, standard surface coatings that are chemically compatible), electrical connections, etc. It makes it all a whole lot easier. The cost is a loss of fine control; in a very few circumstances you might need to really control how two parts are connected, perhaps in making a bridge that, in an earthquake, will fall apart in a very controlled manner. But you need to precisely specify how everything mates even when you don't really care, which slows down the design process, and makes it easier for a human error in the construction process (the wrong kind of mortar used in a joint - they all look the same!) to cause a problem that only becomes apparent years after the structure is completed (when the slow corrosion of a beam by the acidic mortar causes it to collapse). C++ is rife with subtle cases that produce "undefined behaviour". Calling delete on a polymorphic class without a virtual destructor. Passing an instance of a class to a function with ellipsis arguments. All of these things are easy to do, cause no compile-time errors, and may well work fine at run time most of the time. And when they do go wrong, they are almost impossible to trace back to their cause.
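
Here's a minimal sketch of the first of those traps; it compiles cleanly and often appears to work:

```cpp
struct Shape {                       // note: no virtual destructor
    virtual void draw() {}
    ~Shape() {}
};

struct Circle : Shape {
    void draw() {}
    ~Circle() { /* releases resources that will now never be released */ }
};

int main() {
    Shape* s = new Circle;
    delete s;    // undefined behaviour: deleting a derived object through a
                 // base pointer whose destructor isn't virtual. No compile-time
                 // error; ~Circle() is simply never called - or worse.
    return 0;
}
```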

I'm not saying that people should never be given the power to specify things at that level - but that it should be done by letting users go to that level of detail explicitly. For example, being able to specify a component at the atomic level, but then packaging it along with information about its surface properties so the high-level building design system can automatically and correctly integrate with it. Languages like Chicken Scheme let you embed C code, as long as you tell Chicken the types of all values passed in and out so it can perform automatic wrapping and unwrapping.

C++'s templates get around some of these problems; it's possible to make templates that automatically adapt their interfaces depending on what they're interfacing to. This means that the users of the templates can just let the magic happen and not worry much about it; the cost is that template developers need to understand the issues and anticipate them in advance, to make sure their templates will work correctly for the user.
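
The standard library's iterator machinery is the classic example of that kind of adaptation. Here's a cut-down sketch in the style of std::advance (advance_by is a name I've made up, but the technique - tag dispatch on iterator_traits - is the real one):

```cpp
#include <iterator>

// The implementation inspects the iterator's category and picks the best
// strategy; the user of advance_by() never has to know which one was chosen.
template <typename It>
void advance_by(It& it, int n, std::random_access_iterator_tag) {
    it += n;                        // O(1) where the container allows it
}

template <typename It>
void advance_by(It& it, int n, std::input_iterator_tag) {
    while (n--) ++it;               // fall back to stepping one at a time
}

template <typename It>
void advance_by(It& it, int n) {
    advance_by(it, n,
               typename std::iterator_traits<It>::iterator_category());
}

// advance_by(vector_iterator, 3) uses the random-access version;
// advance_by(list_iterator, 3) silently gets the step-by-step one.
```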

Also, templates have been stretched beyond their original design, which is like using a system for automatically choosing the right material to use as mortar to fill a gap, with support for including layers of other materials such as damp-proof courses, to build entire pillars by telling it to fill a very large gap with repeating layers of concrete. It gets the job done, but it's working around the system rather than working with it. In the resulting design, a pillar is labelled as a "gap that needs filling with something" rather than as a pillar.
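
The template-metaprogramming equivalent of that pillar is using instantiation recursion to compute things the language never meant to be computed that way; the canonical toy example:

```cpp
// A compile-time computation built out of template instantiation:
// recursion, with an explicit specialisation as the base case.
template <unsigned N>
struct Factorial {
    static const unsigned long value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0> {
    static const unsigned long value = 1;
};

int main() {
    // Factorial<10>::value is 3628800, computed entirely by the compiler;
    // the "pillar" is expressed as a tower of class types, not as a loop.
    return Factorial<5>::value;     // 120
}
```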

There's great ingenuity there. I'm in awe of the job Bjarne Stroustrup and the C++ standards committee have done in building such powerful facilities on top of such a meagre language as C. I think it's misguided, but brilliant. And I'm in awe of people like Andrei Alexandrescu who have figured out how to make C++ do useful things it was never designed to do, through cunning and devious tricks.

The same kind of cunning is shown by the engineers who look at things like quantum mechanics and use them to invent the MOS transistor, and from there, figure out how to mass produce vastly complex integrated logic circuits for pennies. It's amazing to take what resources physics throws at you and manage to build things like computers out of it, just as it's amazing to take what the C++ language specification throws at you and manage to produce a template that works out if one type is convertible to another by - get this - setting up two functions overloading the same name, one with ellipsis arguments and the other taking the target type, but returning results of different sizes; then declaring a function that returns a value of the source type, and examining sizeof(func1(func2())) to see whether it matches the return type of the fallback ellipsis function (which matches anything) or of the more specific function, to see which overload was chosen.
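
Spelled out (roughly the Conversion trait from Alexandrescu's Modern C++ Design, with names of my own choosing), the trick looks like this:

```cpp
// Does From convert to To? Answered entirely at compile time, with no
// function ever actually being called.
template <typename From, typename To>
class IsConvertible {
    typedef char Small;                  // sizeof(Small) == 1
    struct Big { char dummy[2]; };       // sizeof(Big)   >  1

    static Small test(To);               // picked if the conversion exists
    static Big   test(...);              // fallback that matches anything
    static From  make();                 // never defined; only used in sizeof

public:
    static const bool value =
        sizeof(test(make())) == sizeof(Small);
};

// IsConvertible<int, double>::value   -> true
// IsConvertible<int*, double*>::value -> false
// (These days the standard library provides std::is_convertible.)
```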

It gets the job done, but it takes a wizard to figure out how to do it. Sure, the wizard can do it and wrap it up in a nice little reusable package that anyone can use, but it shouldn't have to be this way. Semiconductor engineers have to do complex things to get faster chips because they have no choice in their substrate. But programmers do have a choice - they can choose a better language.

I feel that it is the obligation of language designers to make their language such that useful things can get done without hacks only wizards can come up with. Anybody should be able to do useful things.

I'm renowned for liking Lisp. Most of the clever tricks done with templates are trivial as Lisp macros, or even as plain old Lisp source without needing macros, or are just completely irrelevant in Lisp (smart pointers? Hahaha!). Many things are trivial in Lisp that aren't worth doing in C++, because C++ only allows programs that can be type checked at compile time, and only a subset of correct programs can be statically proven correct by any given automatic checker. But it's not all rosy; templates arrange to do at compile time a lot of the stuff that Lisp does at run time. The reason they're cranky and complex is that doing this stuff at compile time is a lot harder than at run time; but getting it done at compile time means type errors are caught during compilation rather than lurking until run time, and the compiler can generate highly optimised code.

The Lisp community has done some work on adding optional static typing; Common Lisp allows type declarations, and compilers may use them to generate code on a par with a C compiler, but I've never seen a typing system as rich as C++'s with templates and so on in a Lisp.

It'd be an interesting experiment - combining the power of templates to statically type-check complex stuff with the power of not HAVING to statically type-check everything. How would it impact the design of the standard libraries? Primitives like car (which returns the head element of a linked list) would need to have complex types to support both cases: (car <List>) :: <Anything> for the general case, but (car <List(X)>) :: [<X>|Error], as attempting to call car on an empty list of Xs is an error - unless we keep exceptions out of the type system.

Jean typing

Jean had fun in nursery yesterday. They're doing a Chinese New Year theme, so she was talking excitedly about "Chinese New Ear" all the way home and breaking the nice little hat she'd made.

Sarah had to go to a meeting, so Jean and I had the evening to ourselves. Since I'm often away in London, I wanted to make it a treat for Jean, so I cooked her pasta (her favourite) and we watched a film together; but then she said she wanted to type on my laptop, so we gave it a go.

I asked her to choose words, then I showed her the letters to spell them out. I was glad to see that she naturally pressed the keys properly, rather than holding them down as many people do. She got quite into this, especially when I started feeding what she typed into the speech synthesiser. I turned on caps lock after a while so the on-screen letters would look the same as the ones on the keys. She tried a few nonsense words of her own as well as the ones I spelt out for her. Here's what she typed:

mummy
jean
orange
CATS
SO3Y7LZG33XV7BNKZ
DINOSAUR
HELLO
JEAN
7NL.
G.XZJOPV
GF.N

Creative Commons Attribution-NonCommercial-ShareAlike 2.0 UK: England & Wales