Well, at long last, I'm finally getting paid to mess around with the kinds of things I find REALLY interesting - a task which, at the moment, involves setting up shared memory and semaphores between processes for some high-speed shared cache action. Sort of like PostgreSQL does.
Anyway, I've found a few quirks of Mac OS X's System V IPC setup that I thought I'd best share.
ipcs should be setuid or setgid or something. It grovels in kernel memory to find out what IPC objects exist and their state, but when run as a normal user, it doesn't have permission to do so and fails silently; ipcs always reports that nothing's allocated, while ipcs -T reports garbage values.
Talking of ipcs -T, the IPC system limits are (as usual) set via sysctls. But if you try and change them, they refuse to alter. It turns out that you can set them, but only once - the first time this set of sysctls is written to the kernel, it sets up its internal data structures and considers the sysctls read-only until the next boot.
You have been warned.
I'm reporting at least the former to Apple as a bug...
Since I've been getting our home network a bit better organised lately, the home server is now actually accessible from both the wired and wireless networks (and could be accessible from the outside, too, once I've sorted out suitable security measures), so it's high time I started making use of it.
The first thing I've done has been to set up a home Wiki. There's various bits of information that Sarah and I share, but that one or the other of us is 'in charge of' depending on whose computer it lives, so rather than putting a bunch of text files on the shared file server area, it seems logical to do it with a Wiki.
I'd been wanting to research the current state of the art in Wiki software anyway; the only other Wiki I run, the ARMuC Wiki, runs on UseMod, which I've never properly grown to love.
Anyway, my researches led me to PmWiki, and I'm quite sold on it - it's written in PHP so doesn't require CGIs, and it has a software design philosophy that I agree with: a simple core with modular extensibility.
So we now have a Sutton's Mill Intranet for our domestic odds and ends. And with a little bit of simple plugin writing, the home page lists the status of important household sensors - currently just the incoming mains voltage and frequency (we get a lot of mains power problems out here!) and the battery backup system status, but hopefully soon to include external temperatures too.
We're using the Wiki to store our monthly budget, our goals for each month (chosen at the New Year), our template shopping list of things we need to check we have sufficient stocks of, and our list of favourite recipes (since we have a habit of forgetting them, then one day going "Blimey! I've not cooked that lovely Thai turmeric rice in months!"), and we'll shove more stuff in as we come up with it - basically, from now on, whenever one of us has to go and look something up for the other, we'll Wiki it for posterity.
Well, having eliminated the VLANs from my network problems, I've been busily taking advantage of them again, and working around the fact that daapd and samba don't seem to talk very well to iTunes and Mac OS X's smbfs.
OK, having eliminated all VLANs from the equation, I still see iTunes connections to daapd giving up a few tens of seconds into each song. So it looks like the latest release of iTunes doesn't like daapd for some reason.
However, SMB performance is still dreadful, with common "server disconnected" error messages.
I've confirmed it's not the LAN at fault (unless subtly so) by doing HTTP, SCP, and telnet-to-chargen and getting good rates.
Right now I'm trying an experiment with smbclient rather than the smbfs that comes with Mac OS X - and it seems to be running fine... so it looks like there's some problem between OS X's smbfs and the version of samba I have, which is just bizarre.
Perhaps I ought to set up Appletalk sharing - at least that way Sarah can access the household music collection, anyway...
Yesterday I configured Squid on my internal network; machines on the office LAN can use it if configured to use an HTTP proxy, while machines on the wifi LAN are forced to use it as a transparent proxy via port forwarding on the router (I'm slowly making the wifi LAN more and more like a cheap ISP's network - it's an open wifi, so I'm keen to force its users to be well-behaved).
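For anyone wanting to do likewise, the transparent side boils down to a router rule redirecting the wifi LAN's outbound port 80 to Squid, plus the httpd-accelerator directives. This fragment assumes Squid 2.5-era directive names, which have been known to change between versions:

```
# squid.conf fragment - transparent proxying, Squid 2.5 style.
# The router redirects wifi-LAN port 80 traffic to port 3128.
http_port 3128
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
```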
The thing is, watching Squid's logs, I was horrified at just how few pages it felt it could cache. I'd always imagined, when developing Web apps, that anything fetched with GET could be cached for a while (and might even be prefetched). So when I actually dug a little deeper, I found that just about anything dynamically generated (including quite static pages that just use a bit of PHP to automatically include the same navigation in every page, with the currently selected option highlighted, for example) is, unless the script author has made special effort, generally not cacheable.
You can check the cacheability of pages with this useful cacheability testing tool.
Blog software is terrible at this, for example, despite generally having very cacheable pages.
Rather than explain the rules in detail here, I'll link to somebody who already has. In particular, read the section on writing cache-aware scripts.
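The short version: a cache wants either an explicit freshness lifetime or a validator before it'll touch dynamic output. Something like the following response headers (illustrative dates and lifetime) is enough to let Squid cache a PHP-generated page for an hour and revalidate it afterwards:

```
HTTP/1.1 200 OK
Date: Sat, 15 Jan 2005 12:00:00 GMT
Last-Modified: Mon, 10 Jan 2005 09:30:00 GMT
Cache-Control: max-age=3600
Content-Type: text/html
```

Plain PHP emits none of the caching headers by default, which is why those "quite static" pages with a bit of PHP navigation in them come out uncacheable.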