Error correcting codes

I've just finished reading A Commonsense Approach to the Theory of Error-Correcting Codes. The book does exactly what it says on the tin: it explains error-correcting codes in terms of linear modulo-2 algebra, only getting into Galois fields and all that in an appendix.

And, as such, it does little more than scratch the surface. It only goes into Reed-Solomon codes that can correct single-word errors, for example. But hey, I'm not complaining - it's done a great job of giving me an intuitive understanding of Hamming codes, cyclic codes (such as CRCs), the single-word-correcting RS codes, and so on. And I've learnt a lot about linear feedback shift registers.
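To give a flavour of what that intuition looks like in practice, here's a minimal sketch in Python of a Hamming(7,4) encoder and decoder, using nothing more than modulo-2 linear algebra. The generator and parity-check matrices are the standard systematic textbook ones, not anything taken from the book, so treat this as an illustration rather than a reference implementation:

    # A minimal Hamming(7,4) sketch: 4 data bits become a 7-bit codeword,
    # and any single flipped bit can be located and corrected.  All the
    # arithmetic is modulo 2.  The matrices are the standard systematic
    # textbook ones (G = [I|P], H = [P'|I]).

    G = [  # 4x7 generator matrix: codeword = data . G (mod 2)
        [1, 0, 0, 0, 1, 1, 0],
        [0, 1, 0, 0, 1, 0, 1],
        [0, 0, 1, 0, 0, 1, 1],
        [0, 0, 0, 1, 1, 1, 1],
    ]
    H = [  # 3x7 parity-check matrix: syndrome = H . codeword (mod 2)
        [1, 1, 0, 1, 1, 0, 0],
        [1, 0, 1, 1, 0, 1, 0],
        [0, 1, 1, 1, 0, 0, 1],
    ]

    def encode(data):       # 4 data bits -> 7-bit codeword
        return [sum(d * g for d, g in zip(data, col)) % 2 for col in zip(*G)]

    def decode(received):   # 7 received bits -> 4 corrected data bits
        syndrome = tuple(sum(h * r for h, r in zip(row, received)) % 2 for row in H)
        if any(syndrome):   # non-zero syndrome: it equals the column of H
            pos = [tuple(col) for col in zip(*H)].index(syndrome)  # ...at the error position
            received = received[:]
            received[pos] ^= 1          # flip the corrupted bit back
        return received[:4]             # systematic code: data bits come first

    codeword = encode([1, 0, 1, 1])
    corrupted = codeword[:]
    corrupted[5] ^= 1                   # inject a single-bit error
    assert decode(corrupted) == [1, 0, 1, 1]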

But it strikes me that the whole field of error-correcting codes is a bit insular. The maths required to really grasp it is complex, and finite fields are bizarre things. While lots of people can experiment with things like data compression or basic encryption, error-correcting codes, the third cornerstone of low-level coding technology, remain quite inscrutable.

This sucks.

Read more »

Code-stealing macros

I'm reading Thinking Forth, which has conveniently been made available as a PDF for free reading, and came across a reference therein to the ability of languages with metaprogramming facilities to provide, as user libraries, things that in other languages might need to be part of the language core:

Nevertheless a brilliantly simple technique for adding Modula-type modules to Forth has been implemented, in only three lines of code, by Dewey Val Shorre
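I won't try to reproduce Shorre's three lines of Forth here, but the general point - that metaprogramming lets a plain library supply what would otherwise have to be a core language feature - is easy to illustrate. Here's a rough sketch in Python (standing in for Forth, and nothing to do with Shorre's actual technique), building a Modula/Pascal-style record construct entirely as user-level code:

    # A record construct built purely as library code, using Python's
    # runtime metaprogramming (type() builds a class on the fly).
    # This only illustrates the general principle; it is not Shorre's
    # Forth technique.

    def record(name, *fields):
        """Build a simple record type with the given field names."""
        def __init__(self, *values):
            if len(values) != len(fields):
                raise TypeError(f"{name} expects {len(fields)} values")
            for field, value in zip(fields, values):
                setattr(self, field, value)

        def __repr__(self):
            body = ", ".join(f"{f}={getattr(self, f)!r}" for f in fields)
            return f"{name}({body})"

        return type(name, (object,), {"__init__": __init__, "__repr__": __repr__})

    Point = record("Point", "x", "y")
    print(Point(3, 4))      # -> Point(x=3, y=4)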

Read more »

Intermediate languages

One way of making a compiler portable to different host platforms is to make it compile to a processor-independent (or mainly processor-independent) low-level language.

GCC, perhaps the most widespread compiler of them all, uses this approach: it produces mainly processor-independent "RTL" (register transfer language), which is close to assembly language in general, but not aligned to any particular processor's implementation thereof. I say mainly processor-independent since earlier stages in the compiler do access target details and produce different RTL for different targets.
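As a toy illustration of the idea (in Python, and nothing like GCC's actual RTL), here's a handful of processor-independent register-transfer instructions and two trivial "back ends" that lower the same program to different imaginary instruction sets:

    # A toy register-transfer IR: each instruction is a tuple over virtual
    # registers r0, r1, ...  Two trivial back ends lower the same program
    # to different imaginary instruction sets.  An illustration of the
    # idea only; it is nothing like GCC's real RTL.

    ir = [
        ("load_const", "r0", 6),           # r0 <- 6
        ("load_const", "r1", 7),           # r1 <- 7
        ("mul", "r2", "r0", "r1"),         # r2 <- r0 * r1
        ("ret", "r2"),                     # return r2
    ]

    def lower_riscish(program):
        """Lower the IR to an imaginary three-address RISC assembly."""
        out = []
        for op, *args in program:
            if op == "load_const":
                out.append(f"li   {args[0]}, {args[1]}")
            elif op == "mul":
                out.append(f"mul  {args[0]}, {args[1]}, {args[2]}")
            elif op == "ret":
                out.append(f"mv   a0, {args[0]}")
                out.append("ret")
        return "\n".join(out)

    def lower_stackish(program):
        """Lower the same IR to an imaginary stack-machine assembly."""
        out = []
        for op, *args in program:
            if op == "load_const":
                out.append(f"push {args[1]}")
            elif op == "mul":
                out.append("mul")       # pops two values, pushes the product
            elif op == "ret":
                out.append("ret")       # result is on top of the stack
        return "\n".join(out)

    print(lower_riscish(ir))
    print(lower_stackish(ir))

The point being that everything up to the construction of the IR stays the same, and only the lowering step has to vary per processor.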

Read more »

Scheme bindings

One of the things I don't like about Scheme is that just about everything in it is mutable. The most awkward case is the value bindings of symbols.

You see, when a Scheme implementation (compiler or interpreter) is processing some source code, at any point in the program it is aware of the current lexical environment - the set of names bound at that point, and what they're bound to.
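A rough sketch of what that environment amounts to (in Python rather than Scheme, and not how any real implementation stores it): a frame of name-to-value bindings chained to its enclosing frame, with name resolution walking outward along the chain.

    # A toy lexical environment: a frame of name-to-value bindings plus a
    # link to the enclosing frame.  Looking a name up walks outward until
    # it is found.  A sketch of the concept only, not any real Scheme's
    # internals.

    class Env:
        def __init__(self, bindings, parent=None):
            self.bindings = dict(bindings)
            self.parent = parent

        def lookup(self, name):
            env = self
            while env is not None:
                if name in env.bindings:
                    return env.bindings[name]
                env = env.parent
            raise NameError(f"unbound variable: {name}")

    # (let ((x 1))          ; outer frame
    #   (let ((y 2))        ; inner frame, chained to the outer one
    #     (+ x y)))
    outer = Env({"x": 1})
    inner = Env({"y": 2}, parent=outer)
    print(inner.lookup("x") + inner.lookup("y"))   # -> 3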

Read more »

Paradigm shifts

When I was a kid, I used to read a lot. I'd devour the technical sections of libraries for new things to learn about. Then I got an Internet connection, and tore into academic papers with a vengeance. Then when I left home and got a job, I had money, so I would buy a lot of books on things that I couldn't find in the library.

I look back fondly on when I read things like Henry Baker's paper on Linear Lisp, Foundational Calculi for Programming Languages, or the Clean Book; or when I first learnt FORTH and Prolog, when I read SICP, or when I learnt how synchronous logic could implement any state machine.

All of these were discoveries that opened up a new world of possibilities. Mainly, new possibilities of interesting things I could design, which is one of my main joys in life.

However, after a while, I started to find it harder and harder to find new things to learn about. Nearly a decade ago I all but gave up hope of finding a good technical book to read when I went into even large bookshops with an academic leaning. I started browsing the catalogues of academic publishers like MIT Press and Oxford University Press, picking out good things here and there; that's where most of my Amazon wishlist comes from. But even then, most of the books I find there merely give me more detail on things I already know the basics of, rather than wholly new ideas.

But, of course, the underlying problem is that my main field of interest - computer science - has only been pursued seriously for about seventy years. Modern computing (as most people see it) isn't really the product of current computer science research; industry lags far behind academia in many areas. The computer software we run today is primarily based on the output of academia from around the 1960s (imperative object-oriented programming languages, relational databases, operating systems with processes that operate on a filesystem, virtualisation, that sort of thing). This is so for a number of reasons, some more valid than others (and we are catching up, mainly thanks to the social effects of the Internet), but it means that there's little incentive for industry to actually fund more computer science. So the rate at which new ideas are actually developed is far less than the rate at which I can satisfy my curiosity by learning them!

One answer is to try and come up with new paradigm-shifting ideas myself. I'm trying, but I'm not really good enough - I can't compete with proper academics who get to spend all day bouncing ideas back and forth with other proper academics; I can't get my head deep enough into the problem space to see as far as they do. All I can really do is solve second-level problems, such as how to integrate different systems of programming so that you can use the most appropriate one for each part of your program without suffering too much unpleasantness at the boundaries between them.

Which is why, whenever I read something about some fun new deep idea, I have to stretch my mind to encompass it in the first place.

And that's half the fun...

Creative Commons Attribution-NonCommercial-ShareAlike 2.0 UK: England & Wales