Logging, Profiling, Debugging, and Reporting Progress

I have a hypothesis. The hypothesis is that, in software, the goals of logging events for audit purposes, reporting progress to a user during a long-winded process, profiling a system to see where it spends its time in the long run, and debugging certain kinds of problems (hangs of multithreaded systems come particularly to mind, although there are many others) can all fundamentally be met by a single piece of software infrastructure.

I say this because, largely, I write very similar code to do all four of them, and end up building unnervingly similar-looking ad-hoc solutions, despite the fact that a single well-developed tool could do all of them really well.

Basically, all of them boil down to a bit of code stating what it's about to do and what it's just done at interesting points.

Let's look at each in turn.

Software often writes lines of text describing the current state of affairs, either to a log file (for unattended bits of software such as daemons) or directly to the user if there is one. Generally, it logs when steps in the process are started (to document what it's about to do), when steps end (to document the results), and sometimes during iterations over something or other (although that's really just a special case of the previous two, as each iteration is itself a step). This is often done for auditing - to find out what operations led up to some state of the system, for a plethora of reasons - and sometimes to offer a kind of primitive progress reporting: you can look at the log to see how far something has got.

But we have more sophisticated progress reporting, sometimes. Rather than using a dumb logging facility to say "I'm about to process 1000 objects", then "I have processed N objects so far" periodically and letting the user read the text and do the maths to work out how far through we are, we can use a more evolved progress reporting infrastructure (usually called a "progress bar"), telling it when the process starts (displaying the bar) and ends (removing the bar from the display) and periodically updating it on progress by telling it how many steps we have to do and how many are currently done. Given that higher-level information, it can produce a more intuitive display, and even estimate the time to completion and report a rate of progress.

Although the display is rather different, there is a great commonality in the interface - we are stating the fact that a process has started (and how many steps it will take), periodically stating how far we have got, and then confirming when we are finished. A high-level progress reporting interface could just as easily generate lines in a log file as display a progress bar on a graphical screen, from the same software interface.

I've also used logging for debugging, quite extensively. Rather than logging user-level messages that relate to the user's model of the state of the system, I can also log messages about the internal state of things "under the covers", and use that to guide my own investigations of problems caused by the internal state going awry. Sometimes I will do that by just printing those messages to a console I can observe, and removing or commenting out the code that does so when I'm not debugging; but more featureful logging infrastructures allow for "debug logging" that is hidden away unless specially requested, meaning I can leave the logging statements present in "production code" and turn them on at run-time, which can be a great boon.

Meanwhile, in profiling and debugging server applications with multiple threads handling requests in parallel, I have often used a particular technique. I've given each thread a thread-local "current status string", and then peppered my code at interesting points with a call to set the current thread's status string with a summary of what it's about to do. I can then easily ask the system to display a dump of all running threads and what they're currently doing. Java makes this easy with threads having a name string and the ability to introspect the tree of threads and thread groups; in a system written in C using different processes rather than threads, I've written my own infrastructure to do it using a shared memory segment with each process getting a fixed-sized slot therein and some lock-free atomic update mechanisms to avoid races between the processes and the sampling tool.

This lets me do three things. Firstly, when the system is grindingly slow, I can quickly see what every thread is up to. Is everybody blocked in the same operation, all queuing for the same limited resource? Secondly, when something hangs, I can look at the state of the unresponsive threads to see what they're doing. Generally, this shows me what lock they're stalled on, and who holds that lock (or, more tellingly, whether nobody seems to hold it). And thirdly, I can profile the system under heavy load by periodically sampling the status of each thread and building a histogram of statuses, to see which ones take up most of the time. However, I had to be careful with this - it worked well if the status string merely recorded which step was in progress, but not if it included details of the data being worked upon, because that made the strings differ even when the system was at the same step, so they didn't fall into the same histogram bucket. The solution was to mark such parameter data in some way (such as by always wrapping it in square brackets) so that profiling tools can make themselves blind to it.

Sometimes I've run into trouble with the fact that the same procedure might be called in many different places; the thread status shows me that I'm in that procedure (and where I am within it) but doesn't tell me why. If I were attaching a debugger I could view the entire stack trace, but that pauses execution (which may interfere with the delicate race condition I'm hunting down), and it's fiddly to do that for every thread or process in a system to get a good snapshot of the overall state - which is why I prefer the explicit-status approach in the first place. The solution is simple: rather than replacing the entire status string, each procedure should append its status to the existing string, and remove it when it's done. In Java I did this by capturing the current thread name and then assigning "originalName + processStep" each time, then restoring "originalName" at the end of the process; in C, I did it by recording a pointer to the end of the current string, writing the new string there, and writing a "\0" back at that point at the end (being careful not to overrun the buffer!). This turns the status string into a kind of mini-backtrace, but rather than recording each and every procedure call, it only records the things considered important enough to log. When looking at a snapshot of a system with a hundred concurrent requests in progress, this is a great time-saver.
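As an aside, in a language with dynamic binding this append-and-restore dance pretty much writes itself. Here's a minimal sketch in Scheme (with-status is a made-up helper, and a real system would want the parameter to be per-thread):

  ;; Minimal sketch of the nested-status technique using SRFI-39 parameters.
  ;; `current-status` stands in for the per-thread status string.
  (define current-status (make-parameter ""))

  (define (with-status step thunk)
    ;; Append this step to the inherited status for the dynamic extent of
    ;; `thunk`, and restore the old value automatically afterwards, giving
    ;; a mini-backtrace of the "important" steps.
    (parameterize ((current-status
                    (string-append (current-status) " > " step)))
      (thunk)))

  ;; A sampling tool reading each thread's status would see, for example,
  ;; " > handle request > parse headers" while the inner step is running.
  (with-status "handle request"
    (lambda ()
      (with-status "parse headers"
        (lambda ()
          (display (current-status))
          (newline)))))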

But, clearly, all of the above really just comes down to "reporting the current status of a thread of execution". All that changes is what the infrastructure does with that information. There's no reason why the same notification from the software cannot generate a line in a log file, an on-screen log message to a waiting user, the creation, updating, or removal of a progress bar, the updating of a global scoreboard on what all the threads in the system are up to, and the updating of a profiling histogram of where the system is spending its resources.

So here's my proposal for an API.

(task-start! name [args...]) => task handle
Starts a task, which is some unit of work. Tasks may be dynamically nested within a thread, opening tasks within other tasks. Returns a task handle. The name is a string describing the task, which should itself be static, but may refer to the additional arguments positionally with %<number> syntax.
(task-status! handle status [args...]) => void
Notifies the system that the given task's current status is the given status string, which again may refer to the arguments positionally with %<number> syntax.
(counted-task-start! steps unit-s unit-p name [args...]) => task handle
Starts a task with the given number of counted steps. The singular name of a step is unit-s and the plural name is unit-p, eg "byte" and "bytes". Otherwise, as per task-start!
(counted-task-status! handle steps-completed [status [args...]]) => void
Notifies the system that the given counted task's current status is the given status string, which again may refer to the arguments positionally with %<number> syntax, and that the given number of steps have been completed so far. The status and arguments can be omitted if there's really nothing more to say than the number of steps completed.
(task-end! handle) => void
Ends a task (counted or not), given its handle.
(task-fault! handle message [args...]) => void
Logs a fatal fault (internal failure within the task) of a task. The message should be a static string, referring positionally to the arguments with the usual syntax.
(task-error! handle message [args...]) => void
Logs a fatal error (invalid inputs, or invalid behaviour from a subcomponent, but not a problem with the task itself) encountered by a task. The message is as above.
(task-warning! handle message [args...]) => void
Logs a non-fatal problem encountered by a task. The message is as above.
(task-debug! handle message [args...]) => void
Logs a debug-level event encountered by a task. The message is as above.
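To make the shape of this concrete, here's a sketch of how a file-copying routine might use it (copy-file!, block-count, read-block and write-block! are all made-up stand-ins; only the task-* calls are the proposed API):

  ;; Hypothetical use of the proposed API to copy a file block by block.
  (define (copy-file! source destination)
    (let ((task (task-start! "Copying %1 to %2" source destination)))
      (let* ((blocks (block-count source))
             (copy (counted-task-start! blocks "block" "blocks"
                                        "Copying data")))
        (let loop ((i 0))
          (when (< i blocks)
            ;; i blocks done so far; about to copy block i
            (counted-task-status! copy i "Copying block %1" i)
            (write-block! destination i (read-block source i))
            (loop (+ i 1))))
        (task-end! copy))
      (task-end! task)))

The same calls could, depending on the backend plugged in, produce two nested log entries, a progress bar labelled in blocks, or a pair of entries in a thread-status scoreboard.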

Now, this looks like a very ordinary logging framework, with the exception of an explicit hierarchy of nested tasks and the explicit mention of "counted" tasks with a known number of steps. Yet those two additions allow for the same interface to cover all of the above goals.

Seeing how they might generate a log file for auditing is trivial. The explicit knowledge of task nesting lets us give context to common subtasks used in lots of different places, be it something as simple as indenting each log message according to the subtask nesting depth, or creating an aggregate "task pathname" by combining the names of all the parent tasks into one long string to log.
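For instance, a log-file backend might do no more than this (a sketch; the real thing would keep its state per thread and be driven by the task-start!/task-end! notifications):

  ;; Sketch of a log backend keeping a stack of open task names so that
  ;; messages can be indented by nesting depth, or prefixed with a
  ;; "task pathname" built from all the parent task names.
  (define open-tasks '())   ; innermost task first; thread-local in real life

  (define (log-line message)
    (display (make-string (* 2 (length open-tasks)) #\space))
    (display message)
    (newline))

  (define (note-task-start name)
    (log-line (string-append "START " name))
    (set! open-tasks (cons name open-tasks)))

  (define (note-task-end)
    (let ((name (car open-tasks)))
      (set! open-tasks (cdr open-tasks))
      (log-line (string-append "END " name))))

  (define (task-pathname)
    ;; Builds eg "Copying files/Copying data", usable as a log prefix
    ;; or as a stable profiling key.
    (let loop ((names (reverse open-tasks)) (path ""))
      (cond ((null? names) path)
            ((string=? path "") (loop (cdr names) (car names)))
            (else (loop (cdr names)
                        (string-append path "/" (car names)))))))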

Generating a profile is also trivial; using the task names and status strings without interpolating the arguments, we can obtain a measure of what bit of the code we're in - either in isolation, or including the entire context of parent tasks, as desired. And we can also generate histograms of the arguments for each different status if we want; if a given subtask takes widely varying amounts of time depending on how it's called, we can find out what arguments make it run slowly to narrow down the problem.
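And the profiler can be little more than a loop that samples those pathnames and counts them; a sketch, where all-thread-statuses is a hypothetical procedure returning each thread's current task pathname (with the [bracketed] argument text already stripped so equal steps share a bucket):

  ;; Sketch of a sampling profiler: bump a counter per distinct status.
  (define histogram '())   ; association list of (status . sample-count)

  (define (sample!)
    (for-each
     (lambda (status)
       (let ((entry (assoc status histogram)))
         (if entry
             (set-cdr! entry (+ 1 (cdr entry)))
             (set! histogram (cons (cons status 1) histogram)))))
     (all-thread-statuses)))

  (define (report)
    ;; The statuses with the most samples are where the time is going.
    (for-each
     (lambda (entry)
       (display (cdr entry)) (display "  ")
       (display (car entry)) (newline))
     histogram))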

Debugging is helped by turning on the display of debug-level messages in the log, and by making a snapshot of the current status (status and name of the current task and all parent tasks) of each thread/process in the system available for introspecting the current state of the system. That's a useful thing to include in a crash log, too.

But reporting progress to the user is where having a common infrastructure really shines. Rather than needing the explicit construction of "progress dialogs" in interactive applications, the infrastructure could notice when a thread has spent more than a few seconds inside a counted task and produce one automatically. It would display the progress of the highest-level counted parent task in the current task hierarchy as the overall progress measure of the operation in progress; but if a subtask takes more than a few seconds, then it becomes worthwhile automatically expanding the dialog to list the entire hierarchy of nested counted tasks with progress bars (the very tip of the hierarchy, which has not yet gained several seconds of age, should be elided to avoid spamming the user with endlessly whizzing progress bars for short-lived subtasks; only display subtasks that have shown themselves to take user-interesting amounts of time). And the display of non-counted subtasks and their textual statuses within the hierarchy can be turned on as an option for users who want to see "more verbose" information, perhaps along with each task then growing a scrollable log showing that task's history of statuses, warnings, subtask starts, and so on.

How best to handle multithreaded tasks is an open question, depending really on the threading model of the language you're using. Perhaps tasks should require an explicit parent-task handle to be passed in, to make it clear what the task hierarchy is in the presence of threads; or perhaps newly created threads should inherit their parent's current task. Either way, with threading, it's possible for a task to be parent to more than one active subtask, and a progress reporting user interface will need to handle that, perhaps by dividing the space underneath the parent task into multiple vertical columns for each concurrent subtask, when that level of detail is required.

Also left unspecified is more detail on the "log level" concept; I've just suggested a few (fault, error, warning, status update (called "notice" or "info" in most existing logging systems), and debug), but the system really needs to make some intelligent decision as to who to notify. A server-based interactive application hitting a fault condition really needs to apologise to the user in vague terms, while sending a detailed crash log and raising an alarm to the system's administrators, for instance. And more complex systems composed of multiple components may have fairly complex rules about who should be notified about what level of event in what context, which I've not even begun to worry about here...

But, to conclude, I think it's a shame that there are so many very different bits of infrastructure that have very similar interfaces and are used in similar ways by software, but with very different goals. I think it'd be great to create a single progress-reporting interface for applications to use, and make the different goals into pluggable components beneath that interface!

I need a holiday!

This year, I've been alternating between bleak depression and enthusiastic elation.

Luckily, it's easy to see a pattern - the elation is when I let myself get distracted by interesting things; the depression is when I have to tear my attention back to what needs to be done rather than what I feel like doing!

It's been a funny year. On the one hand we've moved into a much larger house, with much better facilities, that's warmer and easier to keep clean and tidy. My work is great, and I've managed to catch up on some things that have been hanging over me for years - tax paperwork, terminating my limited company (that had become nothing more than a thorn in my side since I stopped freelancing), simplifying and upgrading my server setup, tidying up my home directory and organising my life. On the other hand, I've been so busy that the new home has mainly been a place to eat and sleep rather than something I've had much chance to enjoy, and I'm behind on the (small, reasonable) list of projects I wanted to do this year - with no year left to do them; I've so far spent only a handful of days on my own projects in the entire year.

I spent a whole day sorting out my workshop on my birthday in April, and ended that day with a few little things to finish off - which are still waiting for me. I've not finished the ring casting, which should only take a couple more days, nor rebuilt my furnace, which should take a few days more.

I've done a bit better on computer-based projects as I can do them wherever I have my laptop; I've done some work on my fiction project, and made progress on my organisational infrastructure to convert a huge pile of "things that need investigating to even begin to decide what needs doing about them" into a tractable TODO list, and done some writing for the ARGON project web site.

But, with my ability to concentrate on what I'm supposed to be doing rapidly waning, it's clear that I need some time off. So, I've booked the week before Christmas off of work, and I hope to:

  1. Do what I can to fix the roof in the workshop.

    • It leaks. This will be hard to fix properly, as it'll require spending lots of money on materials, and possibly can't be done until there's some warmer, drier weather to dry the decking out. But I'll see if I can improve on the current bodge somewhat, at least to give the decking a chance to dry properly without regular re-soakings.
    • There are great big gaps in the eaves, all round the walls, varying from a centimetre up to about twenty centimetres, through which an icy wind blows. All the warm air from the heater disappears, and ivy creeps in. I need to seal them up (minus a controllable air vent to let out humid air and fumes from welding - perhaps an air vent plus an extractor fan with a fume hood would be the way to go in the long run). I plan to saw some strips of wood to length so they can go between the rafters, nail them in place, and use judicious amounts of sealant to keep the tenacious ivy at bay and to account for my general inability to cut wood to exact lengths properly.
  2. Run Ethernet to the workshop so I have an Internet connection there. This will involve spending some money on outdoor-suitable conduit and fittings, and trunking for the interior runs, then drilling lots of holes in walls and running cables through and sealing the gaps. But the result will be that I can actually do computer work at a desk with a comfy chair, rather than hunched over a laptop on the sofa with children tugging at me.

  3. Start building the computer infrastructure in the workshop. I'm looking at a battery-backed low-voltage power system feeding a Raspberry Pi (which I already have, waiting - Sarah got me one for my birthday), bristling with sensors. Because sensors are fun.

  4. If the weather and time permit, work on my ring casting and the furnace, although that somewhat requires dry weather. We'll see.

  5. Chill out, play computer games, write fiction and ARGON prose.

  6. Order the bits to build a chord keyer - I doubt I'll have time to build it by the time they arrive in the post, so I'm saving that for a project I can do at Bristol Hackspace in the new year.

But I need to take care that next year isn't like this one. Taking on so many responsibilities that I struggle to maintain my productivity means I get less stuff done, not more, and makes it hard to prioritise my effort sensibly. I'm going to book three weekend days each month, in advance, for my projects or simple relaxation, rather than just thinking I'll do them "when I get a free day" only to find that all of my weekends are booked up months in advance. I'll be open to rearranging them in order to fit around the days when Sarah or the children need me, or we're visiting people for events - most of the time, it doesn't matter what actual day I do things on. Sometimes this will involve getting a whole weekend, and then just a single day at the other end of a month; that's fine, just as long as it lets me keep making progress on my projects, and gives me a chance to unwind from the stresses of constantly doing what I must do, rather than what I want to do.

Static Typing

I read this fine blog post, which says:

And this is precisely what is wrong with dynamically typed languages: rather than affording the freedom to ignore types, they instead impose the bondage of restricting attention to a single [sum] type!

Ah, bless. The author means well, but is confusing matters. They're kind of right that the distinction between so-called "dynamically typed languages" and so-called "statically typed languages" isn't what people think it is, but I think they've still not quite grasped it, either.

Certainly, almost all languages have types of some kind (the only real exceptions are ones that directly operate on memory as an array of bits, and expect the user to request the interpretation of any given region of memory in each operation, such as assembly language). So-called "dynamically typed languages" (let's call them SCDTLs from now on, and SCSTLs for so-called "statically typed languages") usually have numbers, and strings, and so on as separate types. What is missing in them, compared to SCSTLs, is the ability to say "this variable will only ever hold values of a given type"; and the argument of the author is that, therefore, SCDTLs force every variable to be of a single "could be anything" type, while SCSTLs let you be more expressive. And in an SCSTL you could indeed create a big sum type of everything and use that for all your variables and, pow, it'd be just like an SCDTL - once you'd written all the clunky wrappers around stuff like addition to throw a run-time error if the arguments aren't all numeric, and unbox them from the sum type, and box the result up. Oh, and you'd need to maintain your giant Anything Sum Type, adding any user-defined types to it.

That's what the author omits to mention. SCDTLs have all this automatic machinery to do that for you, while in SCSTLs you need to do it by hand! Eugh!
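To spell that out, the machinery a dynamic runtime hides behind every + looks roughly like this (a sketch in Scheme of the general shape, not any particular implementation):

  ;; Every value carries a tag; every operation checks the tags, unboxes,
  ;; does the work, and boxes the result up again.
  (define (box tag value) (cons tag value))
  (define (tag-of v) (car v))
  (define (unbox v) (cdr v))

  (define (dynamic-add a b)
    (if (and (eq? (tag-of a) 'number) (eq? (tag-of b) 'number))
        (box 'number (+ (unbox a) (unbox b)))
        (error "+: arguments must be numbers" a b)))

  ;; (dynamic-add (box 'number 1) (box 'number 2))   => (number . 3)
  ;; (dynamic-add (box 'number 1) (box 'string "x")) => run-time error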

Working with sum types is useful. It's handy for writing programming tools such as generic container data structures. SCSTLs tend to have tools such as parametric types to act as a short-cut around the difficulties of doing that stuff with explicit sum types, but it boils down to the same kind of thing under the hood.

Now, a lot of the rhetoric around SCSTLs versus SCDTLs comes from a rather blinkered viewpoint, comparing something like PHP (almost everything fails at run time!) to something like C (sum types are like pulling teeth!) - but both sides have since come a long way towards each other.

Haskell is perhaps the shining example of an SCSTL nowadays, with its parametric polymorphism and typeclasses offering advanced ways to express huge sum types without having to spell them out.

And from the SCDTL side, Common Lisp lets you declare the types of variables when you want, meaning that they are assigned the "Anything" sum type by default, but you can narrow them down when required. That gives you the convenience of an automatically-specified and automatically-unboxed Anything sum type when you want it, plus the static checkability (and efficient compilation) of finer-grained types when you want them. (And Chicken Scheme's new scrutinizer and specialisation system is a rapidly-developing example of a similar model for Scheme, too.)
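For example, in Chicken's notation (as I understand the current scrutinizer syntax; check the manual for the version you're using), narrowing a procedure down from Anything looks something like this:

  ;; Without the declaration, `add` accepts Anything and fails at run time
  ;; if handed a non-number; with it, the scrutinizer can flag bad call
  ;; sites at compile time and the compiler can specialise the arithmetic.
  (: add (fixnum fixnum -> fixnum))
  (define (add a b) (+ a b))

  (add 1 2)       ; fine
  ;; (add 1 "2")  ; rejected (or at least warned about) at compile time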

And there's no reason why the SCSTLs can't come further, with an inbuilt Anything type and automatic boxing and unboxing of sum types, and more flexible type systems that can express more subtle distinctions, reducing the cases where "it's just too complex and we need to go dynamic".

Then we'll meet in the middle, and the only difference will be that SCDTLs have a syntax where type declarations are optional and default to Anything, while SCSTLs have a syntax that makes you declare types, even if they're Anything. They'll largely become interchangeable, and instead, we'll compare languages' type systems on a different scale: how complicated they are.

You see, Haskell's type system can be hard to understand. It's quite complicated, in an attempt to statically describe all the sorts of cases where people might wish they had an SCDTL instead. The development of type systems has largely been a matter of dealing with this: starting with things like generic container classes, then getting into complex stuff such as making the types of parts of a product type depend on the values of other members, fine-grained types such as "odd integer", and so on. As SCDTLs gain the ability to declare types, they tend to start off with quite simple type systems, as it's easy to "just go dynamic"; they're initially happy to put in a bit of typing to speed up the inner loops of numerical code and to tighten up error checking on interfaces. Languages without easy access to an Anything type, meanwhile, rely on having a type system that can express lots of things, because it's a real pain when they have to escape it. But if they meet in the middle, the tradeoff will instead be one of a more complete type system that lets you check for more at compile time and gain more efficient code generation, versus the mental burden of understanding it.

I suspect that Haskell is, perhaps, a bit too far in the "complex" direction, while the support for parametric containers in Chicken Scheme is weak (why can't I define my own complex type if I create a "set" or "dict" container implementation, for example?) - we'll meet somewhere in the middle!

[Edited: Clarified the initial paragraphs a bit]

The Long Game

I've written before about my plethora of projects and how I'm trying to spend more time on them, and to focus on ones that can produce immediate rewards (such as Ugarit) at the cost of longer-term ones (such as ARGON).

However, I have projects I can't even start on without access to massive resources. I have them Far Out Beyond The Back Burner, just in case I gain the resources required to start them within my lifetime, but without any great expectation of doing so.

I'm listing them in an approximate order based on which ones I think would be easiest to start, and which would in turn make later ones more approachable.

Molecular nanotechnology

I'm hoping for a proper Drexlerian revolution of molecular manufacturing. A post-scarcity economy of cheap diamond and home production of anything you can download or design a plan for, as long as it doesn't require exotic atoms (which only really rules out nuclear devices; no big deal).

Post-scarcity is always a relative term, however. Sure, we'd be in a world where we can use solar power to directly convert our own waste products back into all the goods we currently hunger for; where a small patch of land gives you enough space to plant a tiny nanotech seed (that anybody else on Earth can make for you at practically zero cost) to grow yourself a solar array and then use the energy from it to harvest raw materials from the ground and air to make yourself a house that provides a level of material luxury beyond what even the richest humans alive right now can have. But we'd still need some kind of economy to buy land in the first place, and to buy skills and services (from designing things you can tell your home to build for you to entertainment).

I hope that my skills as a designer of intricate systems would be held in high regard in such a world, so I don't need to spend too much time working. As a molecular assembly unit can just be fed a design and sit making it overnight, I won't need to spend my time laboriously making complex machinery; I want to focus as much as possible on spending my time designing the machinery and software for my next steps.

Get offworld

However, although I'll still need money to buy services, I have some plans that would require large amounts of material, and that might be expensive as the human population rises. So I'd focus my available resources on building one of those tiny nanotechnological seeds and firing it into space, to start converting asteroids into nano-replicators, under control from a nice radio dish I'd command my house to grow. I wouldn't be the only person to think of this, and I could expect territorial claims to start appearing around the solar system pretty sharpish, so it would be good to start quickly.

I might stay living on Earth, or try to build a large spacecraft and relocate to orbit if that's practical. However, my physical location will be largely irrelevant, and more so in later stages of the plan.

Artificial intelligence

Having to work so that I can hire the services of humans to fill in gaps in my design skills, or just to save me time so I can progress my plans faster, is a bottleneck. And a risk, as the rest of the human race may not react rationally to the emergence of a post-scarcity world and start wiping itself out. One way out (which is rather speculative, as I don't know if it would work) would be to turn all that asteroid mass I'm converting with my space probes into solar-powered computers and setting them the task of evolving intelligence in a simulated neural network or rule engine. Rather than doing lots of hard thinking about the nature of intelligence, I'd brute-force it - a massively parallel genetic algorithm trying to find a configuration of the simulation which can answer questions I'd feed into it. I'd train it on a mixture of my own questions and exercises from textbooks in fields of interest to me. With a large enough training set, I should be able to evolve a system that's a general function from questions posed in English with access to the background knowledge implied by the kinds of textbooks I trained it from, to answers in English. If it worked, I would have an artificial intelligence, without an artificial sentience.

That difference is quite profound. Artificial sentience opens up ethical questions: should it have the rights of a person? But I have no need to create a mind in the image of my own, with desires and awareness of time and sensory capacity and a continuity of consciousness based on memory of past events. All I need is a function from question to answer, that can be embedded into software that needs it. I can ask it questions beyond the scope of its training (if I manage to evolve it to be sufficiently general) by including appropriate textbook material in the question.

I could use it to solve problems by posing them as questions, firstly. But I could also use it for intelligent automation; systems could react to events by feeding the nature of the event, as well as background information about the situation and relevant history, in as part of a question as to the best course of action to follow to meet some defined goal.

Weak life extension

I may be lucky enough to get this far within my lifespan as it stands, but I don't want to push my luck any further, so I will have been learning (or assembling reference material for my AI) enough about human biology to cure ailments, and to decelerate or reverse the process of aging, in case I need a bit more time to complete the next stage.

Accelerated consciousness

We think by exchanging pulses between the neurons in our brains. The neuron is a cell that, beyond the normal structures required of a functioning cell, contains one or more long thread-like structures called axons, which enable the neuron to connect to other neurons elsewhere in the brain; the connection points are called synapses. We're still a bit vague on exactly what happens inside the synapses; we have an idea of their properties, but we can't really test it well enough to see if it's complete. Hopefully nanotechnology will let us put probes inside working neurons and examine them better.

But fully mapping the function of the synapse can come later. I'll start with a lower-hanging fruit: mapping the functioning of the axon.

Signals travel through axons at about eight metres per second. Signals travel through copper cable at about two hundred million metres per second. If I could inject nanomachines into my cranium that would trace out the neurons, finding the synapses and the axons that join them together, and bypassing the axons with insulated copper cables carrying electronic signals directly between the synapses, I would significantly increase the speed at which I thought.

The danger would be timing dependencies in the brain. If a neuron fires, sending a pulse down a long axon, while the same pulse also travels via shorter axons through one or more extra synaptic junctions, then changing the speed of propagation down the axons without changing the speed of processing in the synapses would change the relative timings with which the effects of the initial firing arrive at the destination. So I'd start by having my electronic bypasses insert a delay to simulate the original axons exactly, and then try selectively decreasing it in various parts of my brain, to see what happened (with an automatic return to normal timings after fifteen seconds, like when you change the resolution on your display and the OS isn't sure if you can actually see the dialog asking you if the result is OK).

In the worst case, I'd have to take time to study the synapse so I could model it in an electronic system and thus create a timing-perfect electronic model of my brain, but that would take longer. It is necessary for later steps in the plan, but it would be nice to reap the benefits of accelerated consciousness sooner than that, in order to make better use of my time.

It's hard to say how fast I could make myself go. The hard limit (if the response of the synapses was irrelevant to the speed of thought, and axon delays were the limiting factor) would be that I would think two hundred million divided by eight, which is twenty five million, times as fast. At that speed, anything that wasn't moving at a good fraction of the speed of light would appear immobile to me. I would seem to be frozen, stuck in an immobile body, and I'd probably go mad from boredom and claustrophobic panic. So I wouldn't do that. Since I'd already tapped all my axons, I'd divert my peripheral nervous system to a virtual body in a 3D computer simulation. Then I could do all the thinking and planning and designing and reading and writing I wanted to. Of course, fetching stuff from the Internet would be a pain; if I sent out an HTTP request to Wikipedia for some information, it would take a long subjective time for the response to come back. Likewise with communicating with friends by IRC and email.

But even if removing axon delays only made my thoughts happen ten or a hundred times as fast, due to synapse delays being significant, I'd still need to go into a virtual world to live without the slowness of my physical body trapping me. And unless it was only a few times as fast as normal living, I would find myself spending a lot of time waiting for the world outside to react to my latest HTTP request or other action.

So I would probably program my control software to make my synaptic delays infinite - suspending neural firing - until something interesting happened (or a timeout occurred; I'd want to wake up at least once a millisecond just to see what was happening through my real eyes, in case there was an explosion in progress or something else I needed to attend to).

I'd probably want to automate management of my body. Walking by taking note of my inner ear and eyes a hundred times a second and deciding what impulses to send to my muscles would be hard work; I'd need to automate it to the level of choosing a direction of motion and a desired body position and facial expression and letting the computer walk for me, checking up on it ten times a second or so. I'd want to be able to tell my mouth to speak a sentence and leave it to get on with it, and whenever I checked up on my body I'd replay the past few seconds of recorded audio and video so that I could discern speech directed at me.

Driving my physical body might take only a tiny fraction of my time. So why not drive several? I could control heaps of robot bodies at the same time, by just examining the state of each in turn, via radio links. I could be an entire team of robot ninjas infiltrating a building at the same time. That would be awesome.

However, interacting with computers would be a pain. As much faster as my brain was, computers would be correspondingly slower. My 3D virtual world would need to be quite basic, even with a massively parallel nanotech computer rendering it and only needing to render my foveal region in full resolution, or it just wouldn't be able to generate frames fast enough for me. Waiting milliseconds for a web browser to actually render a page into an image would be intolerable. I would need to run very simple software on very fast processors if I wanted interactive responses.

But either way, my main limiting factor - time to design things - is now significantly relieved.

Mind transfer

But the logical next step is to get rid of those synapses, and entirely replace my brain with an electronic version. This would gain me the rest of the speed improvement available. Also, an electronic synapse would probably be smaller than the real thing, and it wouldn't need the body of the neural cell any more, so I could make my entire brain much smaller, thereby gaining an extra few times speed by just reducing the distances those two hundred million metre per second electrical impulses have to travel.

But being a fully digital simulation would have other benefits. My neural interconnection map and synaptic states would be a string of bits that could be transferred, and from which a new electronic brain could be built and initialised. This could be used to back me up in case of the physical destruction of my brain and body. It could also be used to work around the annoying consequences of communications delays being so notable when living at twenty five million times the normal speed; I could transmit my brain state into deep space and have my brain constructed there in order to get hands-on with some process, then send it back afterwards (or resume the old version still at home if the transferred copy is lost or corrupted somehow). If I built a solar antimatter refinery and made enough antimatter to send a nanoseed probe to Alpha Centauri at nearly light-speed (which might take a decade or so), and had it build an installation there, I could even visit it at the cost of four years of unconsciousness while I was in transit each way. But that's nothing compared to the costs and risks of sending my physical brain there and back.

In principle I could duplicate myself and run multiple instances of myself in parallel, but I don't think I'd need to - with accelerated consciousness, I don't think that thinking time would be my bottleneck any more. A reason to run clones of myself at great distances in order to have more real-time interaction with events over a large area might develop, but I don't know of any reason why I'd need to do that, offhand.

Acausal near-godhood

So, assuming I've managed not to kill myself by tinkering with my brain, and I've not run into competition with other humans and been imprisoned or destroyed by them, I'm now a disembodied intelligence able to simultaneously operate bodies anywhere within a few tens of light-milliseconds of wherever I'm currently sentient, able to migrate between brains at the speed of light, and fairly immortal thanks to backup copies of myself that activate if the "currently live me" stops checking in every millisecond. Arguably, I will have crossed some kind of technological singularity, as tinkering with my own cognition has made me able to out-think any normal human being (or team thereof), purely by being able to research and plan my actions in great detail - in the time it takes a visual signal to travel from the eye to the brain of a normal human being. But the post-singularity me would still be perfectly comprehensible to a normal human and vice versa; it is the quantity of my thought which will improve, not the quality.

Perhaps I will have had to leave the solar system of my birth by now, in order to keep my freedom from other humans, or whatever becomes of governments and corporations in a post-scarcity world, trying to lay claim to resources I need for my plans. But ideally I'll still be in touch with a happy brotherhood of humans rather than striking out alone or with a small circle of like-minded family and friends.

However, this next stage will probably have to happen in another solar system. Even if the rest of the human race isn't particularly hungry for energy and I can have the entire output of the Sun, that might not be sufficient. And if my experiments fail, I might destroy the solar system. So this step probably needs to happen in other star systems.

Basically, I want to implement time loop logic. There's a number of ways that might allow us to send a single bit of information back in time, and that's all I need. Perhaps I can string a cable (or send a photon) around a rapidly rotating singularity, or a uranium atom spinning in an intense magnetic field, or through the centre of a ring singularity, in order to create a timelike trajectory. Or some trick involving quantum mechanics. I'll try them all, and any others I or my AI manage to come up with.

Now, being able to build a hypercomputer with time-loop logic, and being able to solve NP problems in polynomial time, would be pretty neat. But that's not the eventual goal. Rather than just implementing pure functions such as prime factorisation in the hypercomputer, I want to perform I/O. With side effects. From inside a time loop.

You see, the consistency principle which underlies time-loop logic can be justified in quantum mechanics; in the presence of a time loop, the wave function of a contradictory state cancels itself out and becomes zero because of the link between its past and future. This is used to ensure that the desired answer arrives out of the negative delay gate in the first place, by ensuring a contradiction if it doesn't.

But what if we have a sensor attached to the computer, and arrange to have a contradiction if the value of the sensor is not equal to a desired value? Situations where the physical system monitored by the sensor would fail to produce that value are contradictory, so the physical system's wave function cancels them out and we can only have the desired states.

That gets interesting if the sensor is measuring the speed of light in a vacuum. What we have built is known as a "reality editor", and it grants its owner godlike powers.

Of course, the equipment is part of the time loop, so the physical system being measured changing is not the only possible non-contradictory outcome; there's also the possibility that your equipment might just fail. Since the quantum mechanical odds of your equipment failing are probably much higher than those of the speed of light changing, you will almost certainly get an equipment failure rather than destroying the universe by altering its fundamental constants and causing all the matter to collapse to a point.

So let's set our sights a little lower. How about moving on from nanotechnology to femtotechnology? Tinkering with the energy levels inside atomic nuclei is tricky, but if we can build a sensor to tell if we've managed it, we could use a time loop to force the hand of physics. We can work out the chance of quantum tunnelling producing the desired state by pure luck alone, and make sure that the chance of our equipment failing is below that - by duplicating it. Don't forget we have the matter and energy of entire star systems to hand. Make trillions of time loop devices with their own sensors, all observing the same system. Make it more likely for the system to enter the desired state than all the time loop devices failing together.

So the time loop reality editor cannot provide complete omnipotence; it's limited by the probability of a complete system failure, and can only cause events which are more probable than its own failure, so it would be rated up to a certain improbability level (in a manner that sounds slightly familiar...). Indeed, in case of miscalculation of the probabilities or sheer bad luck causing a device failure rather than the desired event, it would be wise for each time loop unit to have a "circuit breaker" that is the most likely part to fail and can be simply reset, rather than risking more permanent, hard-to-diagnose, or violent failure modes of a device containing a significant amount of stored energy in one form or another.

An interesting possibility is using the reality editor not only to make, but to design, things. Rather than building a sensor that checks whether a working femtocomputer processing element has been created, create one that tests whether whatever is standing on the target platform is a fully working computer meeting certain design requirements, and see what appears. As the quantum-mechanical basis of the reality editor will tend to favour the most likely, and therefore generally simplest, solution, some interestingly optimal designs might result.

Perhaps the first thing to try and make is a more compact and powerful reality editor?

Oh, and negative delay gates will enable faster-than-light communications, so I can interact with my ever-expanding interstellar empire in real time now.

Hopefully, that will be enough to keep me busy and occupied until the heat death of the universe starts to loom. At which point, hopefully I will have figured out how to:

Create new universes

Probably by tinkering with black holes or something, if not by turning as much of the mass in the Universe as possible into a giant reality editor. Either way, make a new universe with a new entropy gradient I can use to power my ongoing experiments.

Differentophobia

At the time of writing, there's been recent controversy about a fast food chain called Chick-fil-A, whose management have made statements against gay marriage, and who financially support organisations campaigning against it. I'm not going to go into detail about that, as it's just one more battle in a long war against the idea that it's OK to fail to understand that people who are different to you are still people.

There's a natural human tendency to categorise people into Us and Them. We have emotional reactions like empathy towards "Us", as we can imagine ourselves in their situations. We tend to trust "Us", and help "Us" when they are in need, and would rally to defend "Us" from harm.

"They", on the other hand, are assumed to be attempting to take something from "Us". "They" are not empathised with; we do not imagine what it is like to be "Them". We merely see that "They" are different, and therefore, we cannot imagine what "They" are thinking; and we imagine that "They" must feel the same about "Us", and therefore not have our best interest at heart; we assume that "They" will feel no compunction against causing "Us" harm if it's in "Their" interests, so we are quick to defend ourselves against "Them", including pre-emptive strikes. If "They" ask for something, it is clearly "Them" trying to take things from "Us", rather than "Them" and "Us" negotiating to find a compromise over some shared resource, and we must stand up against it or "They" will take everything and leave "Us" with nothing.

This general mechanism is behind a lot of pain and suffering in this world. Once somebody has been classified as "Them" in somebody's eyes, it's very hard to lose that classification, as all the good deeds they do and other evidence of trustworthiness are easily interpreted as deceit. Meanwhile, the criminals in the "Us" group use their implicit trustworthiness to great advantage. This simple classification into "Us" and "Them" presumably did us some good when we were crouching in caves, but now that we're a globally interconnected society, it's harming us.

I have suffered little from it, personally; I am white (and live in a country where that's normal), male (and live in a society where sexism has abated, but is still rife), without any visible disabilities, living in the country I grew up in, with an accent and mannerisms which can fit into most social "levels" (I come from a lower working class background, but went through elite education into a profession, so I have experience of all sorts of people), straight and cisgender. My only "minority" trait is being an unabashed nerd, which is something I can easily hide when amongst people who would be bored stiff by a thrilling discussion about logic circuit design; and I hide that in order to not bore people stiff, rather than out of a fear of discrimination.

But it still deeply irritates me, because it's just a stupid waste of happiness. The human race has enough to worry about without us being nasty to each other.

I like meeting people who are different, as it's interesting to try and find my own limitations and boundaries. How much of who I am is because of the limited range of experiences I've had, rather than inherent limitations of my human brain? I'd still really like to sit down with some gay and bisexual people of both genders, to try and really find out what the experiences of love, limerence and sexual desire are like for them; there's clearly some underlying difference because the objects of their attraction are different, but does that extend to differences in what it feels like for them? Bisexual people would be well-placed to compare, but then their experience might not be representative of what purely homosexual people feel. As I was at an all-male school when I became really interested in girls, I've never felt I've entirely understood heterosexual flirting/dating/courting protocols, and I'm quite interested in how much is purely social convention rather than fundamental parts of our evolved mating mechanisms; finding out what it's like from the perspective of homosexual people, who have been forced out of the social conventions in the first place and had to form their own in originally hidden communities, might provide me with insights into my own heterosexuality! I watch the activities of my polyamorous friends with interest!

I am not, however, immune to the fear of different people. If I suddenly find myself in an unfamiliar cultural environment, I feel a twinge of alarm. This is partly justified; in an unfamiliar culture, my social protocols may be incorrect, and might cause offence, and somebody who is not familiar with my own culture might not realise that this is unintentional. So when I find myself in such a situation, I am well advised to pause and think carefully about how I act - but purely to ensure that I present a respectful and friendly first impression; I then try to find common ground and establish an understanding that I can expand upon until I am comfortable with the new situation.

When a group of hoodie-wearing black youths rush past me in an alleyway, yes, I am mindful that they might try to mug me. But I am also just as mindful that a group of smartly-dressed white females might mug me, too. After all, if I wanted to mug somebody, I would carefully avoid mugger stereotypes in order to lull people into a false sense of security! If I ran a Fagin-esque gang of pickpockets, I would train little old ladies in martial arts and equip them with knives to be my agents! But I digress... I do not discriminate in my distrust. And yet my distrust is provisional; if I know nothing about somebody, I will assume that they might be a threat, and I will take care not to give them an opportunity to harm me or those in my care in any way; but I will not show hostility towards somebody unless they prove their bad intentions by making a move to attack first. And on the other hand, I will extend trust towards people who have earned it, but I do so proportionally, assessing the cost of losing the thing I have entrusted them with against the benefits of trusting them.

It's also important to recognise when people are being afraid of you because you are different to them. Their initial reaction to you might be to flinch, to be defensive, or even to assume that you conform to their culture's stereotype of yours. And if you are feeling skeptical of them being one of Them, then their initially negative reaction would just needlessly reinforce your suspicion. If you then act in a way that makes you seem hostile and defensive in their eyes, then your relationship has gotten off to a probably irreparably bad start.

So, dear reader, when you feel that frisson of fear and distrust when you meet, or hear about, strange people doing strange things, recognise that feeling for what it is: a quick warning that these people might find you strange and frightening, so you must be polite and welcoming; and try to turn the other cheek if they exhibit knee-jerk defensive reactions towards you. The more you can learn from them about the amazing variety of the human experience, the better a person you will be.
