Whatever happened to S-Buffers?

Back when I were a lad, I went through the obligatory phase of wanting to write a 3D computer game, and during my research I came across S-buffering, a technique for rendering 3D graphics much more efficiently than with a Z-buffer, the technique used in modern 3D graphics cards.

In software, an S-buffer certainly seems to be inherently faster than a Z-buffer, and at the time of the original article on S-buffers, a software S-buffer would (so its author claimed) outperform a hardware Z-buffer.
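To make that concrete, here's a minimal sketch of the core idea (this is not the original article's algorithm, and it simplifies each span to a constant depth): a Z-buffer does a depth compare and conditional write for every pixel of every polygon, whereas an S-buffer keeps a sorted list of spans per scanline and resolves visibility at span granularity, so hidden pixels never get touched at all.

    # Minimal S-buffer sketch (hypothetical, simplified: each span carries one
    # constant depth; a real implementation interpolates depth across x).
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Span:
        x0: int       # first column covered (inclusive)
        x1: int       # last column + 1 (exclusive)
        depth: float  # constant depth for this sketch; smaller = nearer
        surface: int  # id of the surface this span came from

    def _visible_parts(span: Span, blocker: Span) -> List[Span]:
        """Return the parts of `span` not hidden behind `blocker`."""
        no_overlap = span.x1 <= blocker.x0 or span.x0 >= blocker.x1
        if no_overlap or span.depth <= blocker.depth:  # ties not handled carefully
            return [span]
        parts = []
        if span.x0 < blocker.x0:
            parts.append(Span(span.x0, blocker.x0, span.depth, span.surface))
        if span.x1 > blocker.x1:
            parts.append(Span(blocker.x1, span.x1, span.depth, span.surface))
        return parts

    def insert_span(scanline: List[Span], new: Span) -> None:
        """Insert `new` into one scanline's span list, resolving visibility
        span-by-span instead of pixel-by-pixel (the S-buffer idea)."""
        # 1. Clip the new span against existing spans that are in front of it.
        pieces = [new]
        for old in scanline:
            pieces = [p for piece in pieces for p in _visible_parts(piece, old)]
        # 2. Trim existing spans wherever a surviving new piece is in front of them.
        kept = list(scanline)
        for piece in pieces:
            kept = [p for old in kept for p in _visible_parts(old, piece)]
        scanline[:] = sorted(kept + pieces, key=lambda s: s.x0)

    # A Z-buffer would loop over every pixel of every span, doing a per-pixel
    # depth compare and conditional write; here each visible pixel is written
    # exactly once, when the finished span list is finally rasterised.
    line: List[Span] = []
    insert_span(line, Span(10, 100, depth=5.0, surface=1))
    insert_span(line, Span(50, 120, depth=2.0, surface=2))  # nearer: clips surface 1
    print([(s.x0, s.x1, s.surface) for s in line])  # [(10, 50, 1), (50, 120, 2)]

The win is that visibility is settled before any pixels are written, so overdraw simply never happens.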

Plus, they very elegantly handle transparency, and transparent surfaces abound in modern games.

But what happened to them? I thought about them while unable to sleep last night, so I did some googling today, and all I found apart from copies of the original article was a forum post in which somebody basically asks the same question.

So I'd be keen to see if an S-buffer on a modern CPU would outperform a 3D video card (I suspect video bandwidth would be the limiting factor, though). But mostly, I'd be keen to think about whether one could implement an S-buffer in hardware that would produce better price/performance than a Z-buffer...

Cranham Scout Group now has a web site

I've just registered and set up:

http://www.cranham-scouts.org.uk/

This is to stop people who are looking for the scout group from just ending up on our personal blog... Enjoy!

More on the Bussard Fusor

As I mentioned before, Dr. Bussard is claiming to have cracked the secret of fusion power generation - but now here he is giving a talk to Google about it!

UPDATE: Link fixed...

The Eye Of Horus

As I mentioned in passing before, I've been writing my own server status monitoring package, The Eye Of Horus, because I wanted to better monitor my own servers.

Well, I installed it today, both to get started with some actual monitoring and to try it out in a real environment before releasing it properly, and the first thing I found was that the load on my primary server was high. As in, around 5. And a bit of digging revealed that it was Postfix being kept busy - delivering spam.

So I upgraded Postfix on it, and on my backup mail server, to the most recent version in pkgsrc, and added a bunch of SMTP-level anti-spam checks to take the load off of SpamAssassin - and pow, system load has dropped to reasonable levels again.

The Eye Of Horus has saved the day already!

It's not yet as featureful as Nagios, but it has a better architecture, so it's easier to configure and has the potential to overtake Nagios in the feature stakes. I've written an optional module for it that logs statistics (load average, disk space, etc.) to RRDtool databases, plus hooks in the web status display CGI so it can link to graphs produced from RRDtool, which is pretty nice.
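For the curious, the RRDtool side of that module boils down to surprisingly little. Here's a stripped-down sketch of the idea (this is not the actual Eye Of Horus code; the file name, step size, and schedule are invented, and it assumes the rrdtool command-line tool is installed):

    #!/usr/bin/env python
    # Rough sketch of an RRDtool-logging check, not the real Eye Of Horus module.
    import os
    import subprocess

    RRD = "load.rrd"   # invented file name

    def ensure_rrd():
        """Create the database on first run: one GAUGE data source sampled
        every 60 seconds, keeping a day's worth of one-minute averages."""
        if not os.path.exists(RRD):
            subprocess.check_call([
                "rrdtool", "create", RRD, "--step", "60",
                "DS:load1:GAUGE:120:0:U",
                "RRA:AVERAGE:0.5:1:1440",
            ])

    def log_load():
        """Record the one-minute load average against the current time."""
        load1, _, _ = os.getloadavg()
        subprocess.check_call(["rrdtool", "update", RRD, "N:%f" % load1])

    if __name__ == "__main__":
        ensure_rrd()
        log_load()   # run every minute from cron or the monitor's scheduler

The graphs the CGI links to would then just be the output of rrdtool graph run over the same files.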

Merging BitTorrent and HTTP

I've been kicking an idea around for a while now, so I thought I'd blog it, rather than just sit on it then feel frustrated when somebody else has it and gets RICH and FAMOUS and POPULAR...

Basically, BitTorrent makes publishing large files on the Web much less of a burden on the server than HTTP. If I put a 10MB file up on an HTTP server and give out the URL, everyone who fetches the file will transfer 10MB from my server. The same 10MB, over and over again.

If, however, I run a BitTorrent seed on my 10MB file, connecting to a tracker server, and give people the .torrent file describing my file and naming the tracker, then people with BitTorrent clients can connect to the tracker and find a list of connected clients with parts of that file (initially, just my seed client), and start fetching chunks of the file from them. As soon as a few people are downloading my file at once, they can actually start sharing chunks between themselves - my seed sends a chunk to one client, then my seed and that client are both available to send chunks to more clients. This reduces the load on my server a LOT, and thus reduces the cost of publishing large files.

Lovely stuff.

However, it's complex. Rather than dump a file in a directory on my web server and give out the URL, I have to run a tracker server, create a .torrent file, run a seed client, and distribute the .torrent file (perhaps by copying it to a directory on a web server and giving out the resulting URL).
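To give a feel for what "create a .torrent file" actually involves, here's a rough sketch (a toy bencoder plus a single-file metainfo builder; the file names and URL are just placeholders, and real tools also handle multi-file torrents, comments, and so on):

    import hashlib
    import os

    def bencode(value):
        """Toy bencoder covering only what a single-file .torrent needs."""
        if isinstance(value, int):
            return b"i%de" % value
        if isinstance(value, str):
            value = value.encode("utf-8")
        if isinstance(value, bytes):
            return b"%d:%s" % (len(value), value)
        if isinstance(value, dict):
            out = b"d"
            for key in sorted(value):            # dictionary keys must be sorted
                out += bencode(key) + bencode(value[key])
            return out + b"e"
        raise TypeError("cannot bencode %r" % (value,))

    def make_torrent(path, announce_url, piece_length=256 * 1024):
        """Hash the file piece by piece and build a single-file metainfo dict."""
        pieces = b""
        with open(path, "rb") as f:
            while True:
                chunk = f.read(piece_length)
                if not chunk:
                    break
                pieces += hashlib.sha1(chunk).digest()  # one 20-byte hash per piece
        meta = {
            "announce": announce_url,   # this is where the tracker gets named
            "info": {
                "name": os.path.basename(path),
                "length": os.path.getsize(path),
                "piece length": piece_length,
                "pieces": pieces,
            },
        }
        return bencode(meta)

    # e.g.:
    # with open("myfile.iso.torrent", "wb") as f:
    #     f.write(make_torrent("myfile.iso", "http://my.server.example/announce"))

The "announce" URL is what names the tracker; the proposal below essentially amounts to having the web server fill that in with its own address automatically.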

It strikes me that one could probably write an extension to HTTP, implemented by an Apache module, that (there's a rough sketch of the idea just after this list):

  1. Engages this special behaviour if a GET request for a file comes in with a special header stating that the client supports it, and otherwise sends the file as normal. The server may be configured to send the file as normal if its size is below a certain limit, too.
  2. Has a tracker built into the server - the standard tracker protocol is itself just HTTP, so this fits naturally.
  3. If one does not already exist, automatically generates a .torrent for the file, naming itself as the tracker, and sends that as the response body.
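Here's a very rough sketch of that negotiation, as a standalone Python WSGI app rather than an actual Apache module. The X-Accept-Torrent header name is invented, and rather than generating .torrents on the fly it just serves a pre-made sibling file (such as one produced by the make_torrent sketch above); the built-in tracker is omitted entirely.

    import os
    from wsgiref.simple_server import make_server

    DOCROOT = "./htdocs"        # invented document root
    SIZE_LIMIT = 1024 * 1024    # below this, always send the file itself

    def app(environ, start_response):
        path = os.path.join(DOCROOT, environ["PATH_INFO"].lstrip("/"))
        if not os.path.isfile(path):
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"not found\n"]

        wants_torrent = environ.get("HTTP_X_ACCEPT_TORRENT") == "1"
        torrent = path + ".torrent"
        if (wants_torrent and os.path.getsize(path) >= SIZE_LIMIT
                and os.path.isfile(torrent)):
            # Step 3: answer with the .torrent (which names this server's
            # tracker) instead of the file body; the client joins the swarm.
            with open(torrent, "rb") as f:
                body = f.read()
            start_response("200 OK",
                           [("Content-Type", "application/x-bittorrent")])
            return [body]

        # Old clients, small files, or no .torrent yet: plain HTTP, as today.
        with open(path, "rb") as f:
            body = f.read()
        start_response("200 OK", [("Content-Type", "application/octet-stream")])
        return [body]

    if __name__ == "__main__":
        make_server("", 8080, app).serve_forever()

A real module would generate and cache the .torrent itself and answer tracker announces on the same server, but the shape of the negotiation is the same.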

Clients and web browsers that support it could then automatically fetch static files over BitTorrent from servers that support it, while still maintaining perfect backwards compatibility between any mixture of old and new servers and clients, and without needing any extra admin effort (beyond perhaps installing and enabling the Apache module).
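The client side is just as small. Here's a sketch against the same invented header, where anything that doesn't come back as application/x-bittorrent is treated as an ordinary download:

    # Client-side sketch using the invented X-Accept-Torrent header: advertise
    # support, and if the server answers with a .torrent, hand it off to the
    # BitTorrent machinery; otherwise save the body like a normal download.
    import urllib.request

    def fetch(url, dest):
        req = urllib.request.Request(url, headers={"X-Accept-Torrent": "1"})
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
            if resp.headers.get_content_type() == "application/x-bittorrent":
                with open(dest + ".torrent", "wb") as f:
                    f.write(body)   # ...then feed this to a BitTorrent client
            else:
                with open(dest, "wb") as f:
                    f.write(body)   # old-style download, unchanged behaviour

    # fetch("http://example.org/big-file.iso", "big-file.iso")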

As far as I can tell, it'd be better than Web Seeding, since with Web Seeding the publisher still has to create and hand out a .torrent themselves, whereas here the server takes care of all of that on its own.
