Last Minute Workshop! (by )

Tomorrow Salaric Craft will be doing the Creative Take Over at the BlackFriers Hub in Gloucester - Wed 5th December 2018. Wednesday is their normal creative take over with workshops and co-working space and of course free biscuits!

I have been looking for spaces to run my Upcycled Christmas Workshops, which are free (with a donation bucket), and they have kindly stepped forward and asked me to host!

And host I shall - I am bringing with me ink stamps, pens, crayons for rubbings etc.... and a big roll of recycled packing paper for turning into your very own custom wrapping paper and also.... bits to make upcycled cards and name tags 🙂

If anybody else in the Gloucester area would be interested in this workshop then please ping me a message 🙂

Mind your Is and Qs: The Art of Frequency-Division Multiplexing (by )

As previously discussed, I've been learning about radio lately. As part of that, I've been diving into things I've always found confusing in the past and trying to properly understand them; when I succeed at this, I'd like to share what I've found, hopefully clarifying things that aren't explained so well elsewhere and helping others in the same situation I was in...

I'm going to start by explaining some pretty basic stuff about waves, which most of you will already know, but bear with me - I'm trying to emphasise certain things (phase!) that are often glossed over a bit at that level, and cause confusion later.

Today, we're going to talk about modulation. To be specific, frequency-division multiplexing: the technique of sharing some communication medium between lots of channels by putting them on different frequencies.

This is used to great effect in the radio spectrum, where the shared medium is the electromagnetic field permeating all of space; but it applies just as well to any medium capable of carrying waves between a bunch of transmitters and receivers. The media vary somewhat in how signals at different frequencies propagate, and what background noise exists, and what hardware you need to interface to them, but the principle remains the same. Examples other than radio include:

  • Ripples on a lake
  • Electrical impulses in a coaxial cable connecting multiple stations (old Ethernet, cable TV)
  • Noises in air

The key thing about these media is that if a transmitter emits a wave into them, then that wave (subject to propagation distortions) plus some ever-present background noise will arrive at a receiver. So if we can find a way for everyone to communicate using waves, without messing up each other's communications, we're good. It doesn't matter that impulses in a cable travel along one dimension, ripples on a lake travel in two, and noises (or radio waves) in air travel in three: as we're usually thinking about a wave coming from a transmitter to a receiver, we can just think about the one-dimensional case all the time. That's enough for communication - more dimensions are involved when we use waves to find our position (GPS!), but that's not what we're talking about here.

Waves

Waves can be all sorts of shapes: square waves with sharp transitions between two levels, triangular waves that go up smoothly then turn around and go down smoothly, and then turn around again and go back up, between two levels; complex wiggly waves that go all over the place... but, it turns out, all of those wave shapes can be made by adding up a bunch of smoothly curving sinusoidal waves.

Such a wave is "periodic": it's the same pattern, repeated again and again, each repetition identical to the last. Each repetition of the pattern is called a "cycle"; as that word suggests, the origin of sinusoidal waves is in the geometry of circles - we don't need to go into that properly here, but what we do need to know is that each full cycle of the wave corresponds to going around a circle, and as such, we can talk about how far along the cycle we are in terms of the circular angle covered. A full cycle is 360 degrees. Half of it is 180 degrees; since the wave goes from the middle, to a peak, to the middle, to a trough, to the middle again before repeating, 180 degrees is enough to get you from one middle-crossing of the wave to another, with a single peak or trough inbetween; or to get you from a peak to the next trough. 90 degrees gets you from where the wave crosses the middle to the next peak or trough, or from a peak or trough to the next middle-crossing.

Those waves can be described entirely by three numbers:

  • Amplitude: How big the waves are. This can be measured from trough to peak, or can be measured in terms of how far the troughs and peaks deviate from the middle - the difference is just a factor of 2, as they're symmetrical.
  • Frequency / Wavelength: The frequency is how many complete cycles of the wave (measured, say, from peak to peak) pass a fixed point (such as a receiver) per second. As all waves in our media travel at the same speed, this means that you can also measure the same thing with the wavelength - how much physical distance, in metres, a complete cycle of the wave takes up. The frequency is in Hertz, which means "per second"; the wavelength is in metres; and multiplying the frequency by the wavelength always gives you a speed (in metres per second), that being the speed of wave propagation. In empty space, radio waves travel at the speed of light (because light is a radio wave, in effect): about 300,000,000 metres per second.
  • Phase: This is a trickier one, and often neglected. Unlike the other two, which are things you could unambiguously measure at any point on the wave and get the same answer, "phase" is relative to an observer. Imagine two waves are coming at you from different sources, with the same amplitude and frequency - but the peaks of wave A arrive slightly before the peaks of wave B. Remembering how each cycle of the wave can be considered as a 360 degree rotation, we might say that wave B is lagging 90 degrees behind A, if B's peak arrives when A is just crossing the middle after a peak.

So, unlike amplitude and frequency/wavelength, phase is always a relative measure between two waves, or perhaps between a wave and itself somewhere else: if a transmitter is emitting a wave and we're receiving it from a thousand metres away, because it takes time for that wave to travel, we will be seeing it phase-shifted compared to what the transmitter is emitting at that point in time. The size of the phase shift depends on the wavelength; if the wavelength is a thousand metres, then an entire cycle fits between us and the transmitter, so we'll get a 360 degree phase shift - but since every cycle is identical, we won't be able to tell and it will look the same as a 0 degree phase shift. However, if the wavelength is two thousand metres, we'll be exactly half a cycle behind the transmitter, and we'll see a 180 degree phase shift.
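
If you'd like to play with that, here's a tiny Python sketch of the idea (the numbers are just the examples from the paragraph above):

    # Phase shift seen by a receiver at a given distance from the transmitter,
    # due purely to the propagation delay.
    def phase_shift_degrees(distance_m, wavelength_m):
        # Each full wavelength of distance is one full cycle (360 degrees);
        # only the fractional part matters, as whole cycles look identical.
        cycles = distance_m / wavelength_m
        return (cycles * 360.0) % 360.0

    print(phase_shift_degrees(1000.0, 1000.0))  # 0.0   - a whole cycle fits in, looks like no shift
    print(phase_shift_degrees(1000.0, 2000.0))  # 180.0 - half a cycle behind the transmitter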

(In order to try and inoculate my children against getting confused about phase when they learn about it at school, I have always referred to the situation when they're buttoning their clothes up and find that they've been putting button N into buttonhole N+1 or N-1 as a "phase error".)

The fact that the wave repeats exactly after a cycle is important: it means that phase shifts will always be somewhere between 0 and 360 degrees (for periodic waves, at least); but by measuring it a bit differently, you could also measure phases between -180 and +180 degrees, with negative numbers indicating that the wave is lagging behind the reference, rather than counting that as being nearly 360 degrees towards the next cycle.

Another important thing is that, for two waves of the same frequency, the phase difference between them is the same if the two waves travel along together in the same direction. That makes sense, as they travel at the same speed. But what about waves of different frequency? At some points, the two waves will briefly overlap perfectly, perhaps both peaking at the same time, or perhaps at some other arbitrary point in the cycle - at that point, they have a zero phase difference. But even a microsecond later, the higher-frequency wave will be slightly further ahead in its cycle than the lower-frequency wave: the phase difference steadily increases as the higher-frequency wave sneaks ahead in the number of cycles it's covered, until finally it gets a whole number of cycles ahead of the lower-frequency wave - and the phase difference is back at zero. The phase difference between two waves of different frequency therefore constantly changes, linearly (changing by the same amount every metre travelled), but wrapping around to always be between 0 and 360, or -180 and 180, depending on how you measure it. And the rate of change of phase depends on the frequency difference.
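
Here's a quick numerical sketch of that last point in Python (the frequencies are made up): the phase difference between two waves, observed at a fixed point, drifts at a rate equal to the difference in their frequencies.

    import numpy as np

    # Phase difference between a 1000 Hz wave and a 1001 Hz wave, at a fixed
    # point, sampled at a few moments in time: it grows linearly with time,
    # wrapping once per second because the frequencies differ by 1 Hz.
    f1, f2 = 1000.0, 1001.0                 # Hz
    t = np.array([0.0, 0.25, 0.5, 1.0])     # seconds
    phase_diff_degrees = ((f2 - f1) * t % 1.0) * 360.0
    print(phase_diff_degrees)               # [  0.  90. 180.   0.]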

But let's put thinking about phase on the back burner for a moment and talk about the most basic way of sharing a communications medium: Continuous Wave (CW) modulation.

Continuous Wave (CW)

In this scheme, a bunch of different transmitters can share a medium by using different frequencies - and choosing to either transmit on that frequency, or not. Receivers can look for waves with the frequency of the transmitter they're interested in, and either see a wave, or not.

We can use that to communicate a very simple fact, such as "I am hungry": transmit when hungry, switch off when fed. This is used to good effect by babies in the "noises in air" medium (yes, parents can pick out their own baby in a room of crying babies, by the frequency). It can also be used to communicate arbitrarily complex stuff, by using it to transmit serial data using RS-232 framing; or by using short and long pulses to transmit a code such as Morse.

So how close together can we pack different CW channels? Can we have one transmitter on 1,000 Hz and another on 1,001 Hz? Well, not practically, no. A receiver needs to listen for a mixture of signals coming in, and work out if the frequency it's looking for is in there, to tell if the transmitter is on or off right now. As it happens, the techniques for doing this all boil down to ways of asking "In a given time period, how much total amplitude of wave was received between these two frequencies?"; and the boundaries of the two frequencies are always slightly fuzzy too - a signal just below the bottom frequency will still register a bit, albeit weakly.

If you are doing very fast CW, turning on and off rapidly to send lots of pulses per second because you have a lot to say, you'll need to use very small time periods in your receiver, so you get the start and stop of each pulse accurately enough to tell if it's a short or long pulse, and to avoid multiple pulses going into a single time period.

If you have lots of channels close together, you'll need a very narrow range of frequencies you look between. The width of that range of frequencies is known as the "bandwidth"; us computery people think of bandwidth in bits per second, the capacity of a communications link, but the reason we call that "bandwidth" is because it's fundamentally constrained by the actual width of a frequency band used to encode that binary data stream!

If you do both, then the amount of total amplitude you'll spot in your narrow frequency band and your short time period will be very low when the transmitter is transmitting - and it will get harder and harder to distinguish it from the background noise you receive even when nothing is transmitting.

So: Yes, you can have very close-spaced channels - if the noise level is low enough and your CW pulses are slow enough that you can have a long enough time period in your receiver, to get reliable detection of your pulses. But it's always a tradeoff between pulse speed, how wide your frequency band is, background noise levels, and how often your receiver will be confused by noise and get it wrong.
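
To make that "total amplitude between two frequencies, over a time period" idea concrete, here's a toy detector in Python. It's only a sketch (real receivers use filters, and the frequencies, noise level, and window here are invented), but an FFT over a window of samples is asking exactly that question:

    import numpy as np

    # Toy CW detector: how much energy landed between f_lo and f_hi during
    # this time window?
    fs = 8000.0                           # samples per second
    window = 0.1                          # seconds of signal per decision
    t = np.arange(int(fs * window)) / fs

    tone = np.sin(2 * np.pi * 1000.0 * t)     # transmitter keyed on at 1 kHz
    noise = 0.1 * np.random.randn(len(t))     # ever-present background noise

    def band_energy(signal, f_lo, f_hi):
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
        in_band = (freqs >= f_lo) & (freqs <= f_hi)
        return np.sum(spectrum[in_band] ** 2)

    print(band_energy(tone + noise, 950, 1050))   # large: the key is down
    print(band_energy(noise, 950, 1050))          # small: just noise

Narrow the band or shorten the window and the two numbers get harder to tell apart - which is exactly the tradeoff described above.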

You might think "Wait a minute, that's silly. If the transmitter emits a sinusoidal wave and that turns up at the receiver, you can simply measure the wavelength and frequency; and if you start a clock ticking at the same frequency you can even detect any sudden changes in phase in the wave. How is that in any way fuzzy or unclear?"; but that doesn't scale to when your receiver is picking up the sum of a load of different waves. If there's two waves of very different frequency then it's easy to tell them apart, but if they're of very similar frequency it's another matter entirely.

Amplitude Modulation (AM)

Sometimes we want to send something more complicated than just an on/off signal. Often, we want to send voices, or pictures - both of which can be encoded into a single-dimensional signal: a quantity that varies with time, such as the voltage encountered on a microphone (pictures get a little more involved, but let's not worry about that right now). Rather than just turning our transmitter on and off, we could vary the amplitude of the signal it sends along a spectrum, and thus communicate a varying signal.

Of course, this only works if the signal we're sending (known as the "baseband signal") has a maximum frequency well beneath that of the frequency we're transmitting at (the "carrier frequency"); the same limits as with turning a CW transmitter on and off quickly apply - your carrier wave needs to complete at least a few cycles for its amplitude to be reliably measured, before it changes.

Because you're changing the amplitude of the carrier to convey the baseband signal, this is known as "amplitude modulation", or AM. You can think of it as multiplying the baseband signal with the carrier signal and transmitting the result.

Of course, this operation is symmetrical - the result of sending a 10kHz sine wave baseband on a 1MHz carrier is the same as sending a 1MHz baseband signal on a 10kHz carrier - but we agreed to only do this when the maximum baseband frequency is well below the carrier frequency, so we always know which way round it goes!

By convention, let's treat our baseband signals as being between -1 and +1; our carrier signal is generated at the power level we want to transmit at, so if the baseband signal is 1 we're just transmitting at full power, and if the baseband signal is 0, we're not transmitting anything.

Indeed, continuous wave is just a special case of AM, where the baseband signal is either a train of rectangular pulses, switching at will between 0 and 1.

Now, we mentioned that we need to place CW frequencies a little way apart, because otherwise a receiver couldn't distinguish them reliably - and the distance apart depended, amongst other things, on how quickly the CW signal turned on and off. This of course applies to general AM signals, too: the rate at which they turn on and off, in the general case, being replaced by the maximum frequency in the baseband signal. The higher the frequencies, the further apart your carrier frequencies need to be before the multiple signals interfere with each other.

But... what does that really look like?

Imagine you have a receiver configured with a very narrow input bandwidth; one intended for receiving slow Morse CW might have a bandwidth of 500Hz or so. What would you pick up if you tuned it to the frequency of an AM transmitter? What if you went a bit above or a bit below?

Clearly, if the transmitter was just transmitting a constant level, that's what you'd pick up - which is easy to think about if it's transmitting zero (you receive nothing) or some positive quantity. Of course, if the receiver doesn't know what the maximum amplitude of the transmitter is, it will have no way of knowing if a signal it receives at any given level is 100% of the transmitter power, or merely 10% of it - so it's kind of hard to say what the level means, unless it's zero. More annoyingly, if the transmitter transmits -1, then what we'll get is the full carrier power but inverted. As that inversion swaps peaks for troughs and leaves the middles the same, this is the same as a 180 degree phase shift; the only way to tell it apart from transmitting +1 is to have observed the signal when it's transmitting something positive, and started a clock ticking at the carrier frequency, so we can notice that we're now receiving peaks when we would normally have been receiving troughs.

It's certainly possible to make this kind of thing work: you have to periodically transmit a reference signal, say +1 for a specified time period, so that receivers can wait for that "synchronisation pulse" and therefore learn the phase and maximum amplitude of the signal, and then compare that against the signal received going forward.

But a more common convention is to avoid negative baseband signals entirely. Squash the baseband input range of -1 to +1 up into 0 to 1, by adding 1 and then dividing by 2. This means that a baseband input of 0 maps to a signal transmitting the carrier at half amplitude; a baseband input of +1 maps to full carrier power; and a baseband input of -1 maps to zero carrier power. That avoids the problems of identifying negative baseband signals, but seemingly still leaves the problem of working out what the actual transmit power is... However, if we're not transmitting constant baseband amplitudes, but are instead transmitting an interesting baseband signal, that wiggles up and down around zero with approximate symmetry, then the average signal amplitude we receive will be half of the peak carrier amplitude. Tada!
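
In code, that convention is just a couple of lines. Here's a minimal Python sketch (the frequencies and durations are arbitrary; a real transmitter would of course do this in analogue electronics or at a much higher sample rate):

    import numpy as np

    # AM as described above: squash the baseband from -1..+1 into 0..1,
    # then multiply it by the carrier.
    fs = 100_000.0
    t = np.arange(int(fs * 0.01)) / fs                 # 10 ms of signal

    baseband = np.sin(2 * np.pi * 1000.0 * t)          # 1 kHz tone, range -1..+1
    carrier = np.sin(2 * np.pi * 10_000.0 * t)         # 10 kHz carrier
    squashed = (1.0 + baseband) / 2.0                  # now in the range 0..1
    am_signal = carrier * squashed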

But, our narrow-bandwidth CW receiver can't pick that up, because it will be changing too rapidly for it. So what WILL it pick up? I'm afraid we're going to need to break out some maths...

As we mentioned earlier, any wave can be made by adding up a bunch of sinusoidal waves, with varying amplitudes, frequencies, and phases (relative to what, though, as phase is always relative? Well, don't worry too much about that for now, we'll get into it when I talk about Fourier transforms in a future post). If we can work out what our receiver will pick up when we transmit a single sinusoidal wave as our baseband signal, we can easily work out what it will receive when we transmit a complex signal - because if our baseband signal is the sum of a load of sinusoidal waves A+B+C+D, and we multiply that by a carrier signal X and transmit X*(A+B+C+D), that's the same as X*A + X*B + X*C + X*D: in other words, if we amplitude-modulate the sum of a number of baseband signals, the transmitted signal is just the sum of the transmitted signals we'd get if we'd modulated each of the baseband signals separately.

So, let's just think about how a single sine wave gets modulated. Let's do that by introducing the sine function, sin(x), whose value is the instantaneous amplitude of a sinusoidal wave (with amplitude 2 from peak to trough, or 1 from middle to peak) as x moves from 0 to T. T is 360 if you're working in degrees; people doing this properly prefer to use a quantity that's two times pi (because they're working in radians), but we'll just call the unit of a full circle T and let you use whatever units you like.

So if we want a sinusoidal wave of amplitude A (from middle to peak), frequency F, and phase (relative to some arbitrary starting point) P, then its signal at time t will be:

A * sin(t*F*T + P)

Now, imagine that's our baseband signal (or, to be precise, one sinusoidal component of it). Imagine we have a carrier signal, with amplitude Ac, frequency Fc, and carrier phase Pc, which at time t will be:

Ac * sin(t*Fc*T + Pc)

If we push the baseband signal up from the -1..+1 range into 0..1, as discussed, and then multiply it by the carrier, our modulated output signal will be:

Ac * sin(t*Fc*T + Pc) * (1 + A * sin(t*F*T + P)) / 2

If you distribute the central multiplication over the brackets on the right, you get:

Ac * sin(t*Fc*T + Pc) / 2 + A * Ac * sin(t*Fc*T + Pc) * sin(t*F*T + P) / 2

That has two parts, joined by a +.

The left hand part is just the carrier signal, at half its original amplitude.

The right hand part is more interesting. It's got A*Ac/2 in it: the product of the carrier and baseband amplitudes, divided by two - and it's got this intriguing sin(X)*sin(Y) factor, where X = t*Fc*T + Pc and Y = t*F*T + P. I'll spare you the maths, and tell you now that sin(X)*sin(Y) = sin(X-Y + T/4) / 2 - sin(X+Y + T/4) / 2.

Now, X-Y+T/4 is (t*Fc*T+Pc) - (t*F*T+P) + T/4, which simplifies to t*T*(Fc-F) + Pc - P + T/4, and X+Y+T/4 simplifies to t*T*(Fc+F) + Pc + P + T/4.

Also, we noticed earlier that inverting a sine wave is the same as a 180 degree (T/2) phase shift, so we can swap that subtraction for an addition by adding an extra T/2 of phase to the second sin. That gives the second sin a phase shift of +3T/4; since phase wraps around at T, that's the same as -T/4, which keeps it nicely symmetrical with the first one's +T/4.

Putting it all back together, our modulated signal is:

Ac * sin(t*Fc*T + Pc) / 2 + A * Ac * sin(t*T*(Fc-F) + Pc - P + T/4) / 4 + A * Ac * sin(t*T*(Fc+F) + Pc + P - T/4) / 4

So we have three sinusoidal signals added together.

  1. The carrier, unmodulated but at half amplitude: amplitude Ac/2, frequency Fc, phase Pc.
  2. A signal with amplitude A * Ac / 4, frequency Fc-F, and phase Pc - P + T/4.
  3. A signal with amplitude A * Ac / 4, frequency Fc+F, and phase Pc + P - T/4.

Unless F, the baseband frequency, is very low, our narrow-bandwidth receiver tuned to the carrier frequency Fc will only pick up the first part: the unmodulated carrier - it will be seemingly blind to the actual modulation! All the baseband signal ends up on the two other signal components, whose frequencies are above and below the carrier frequency by the baseband frequency. If we tune our receiver up and down around the carrier frequency, we'll pick up these two copies of the baseband signal, phase shifted and with quartered amplitude.

These two copies of the baseband signals are known as "sidebands". The first one, with frequency equal to the carrier frequency minus the baseband frequency, is the "lower sideband"; the other, with frequency equal to the carrier frequency plus the baseband frequency, is the "upper sideband".

You'll note that the phases of the two sideband signals, relative to the carrier phase (so subtract Pc from both) are -P + T/4 and P - T/4. Note that these are the same apart from a factor of -1.

If our baseband signal was a complex mixture of sinusoids, then the modulated signal will be the carrier, plus a "copy" of the baseband signal shifted up in frequency by the carrier frequency, and shifted forward in phase by the carrier phase minus a quarter-cycle; plus a second copy of the baseband signal, shifted up like the first, but then inverted in frequency difference from the carrier, and in phase.

And this tells us how closely we can pack these AM signals - we need a little more than the maximum baseband frequency above and below the carrier frequency, to make space for the two sidebands.

"But wait wait wait, that doesn't make sense," I hear you cry. "Where do these sidebands come from? If I have my transmitter and it has a power knob on it and I turn that power knob up and down, so it emits a sine wave of varying amplitude, there's nothing more complicated going on than a sine wave of varying amplitude. How can you tell me that's actually THREE sine waves?!"

Well, that's a matter of perspective. But if you do the maths and add up three sine waves of equally-spaced frequency with the right phase relationship, you'll get what looks like a single sine wave that varies in amplitude. So when a receiver receives it, it's powerless to "tell the difference". A varying-amplitude sine wave and the sum of those three constant-amplitude sine waves are exactly the same thing.
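
If you'd rather check that than take my word for it, here's a short numerical sketch in Python (the frequencies, amplitudes, and phases are arbitrary) that builds the signal both ways and confirms they're identical:

    import numpy as np

    # An amplitude-modulated sine really is the sum of three constant-amplitude
    # sines: the carrier (at half amplitude) plus the two sidebands.
    fs = 1_000_000.0
    t = np.arange(int(fs * 0.01)) / fs
    Fc, F = 10_000.0, 1_000.0            # carrier and baseband frequencies
    Ac, A = 1.0, 0.8                     # carrier and baseband amplitudes
    Pc, P = 0.3, 0.7                     # phases, in radians (so T = 2*pi)

    modulated = Ac * np.sin(2*np.pi*Fc*t + Pc) * (1 + A * np.sin(2*np.pi*F*t + P)) / 2

    three_sines = (Ac/2) * np.sin(2*np.pi*Fc*t + Pc) \
                + (A*Ac/4) * np.sin(2*np.pi*(Fc - F)*t + Pc - P + np.pi/2) \
                + (A*Ac/4) * np.sin(2*np.pi*(Fc + F)*t + Pc + P - np.pi/2)

    print(np.allclose(modulated, three_sines))   # True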

And that's how amplitude modulation works. When you listen to an AM radio, you're listening to an audio frequency signal (converted from vibrations in the air into an electronic signal by a microphone) that's been modulated onto a radio frequency carrier signal and transmitted as radio waves through space. You can do this without very complicated electronics at all!

Single Sideband (SSB)

If you're trying to pack a lot of channels in close together, however, having to transmit both sidebands, both carrying a copy of the baseband signal, is a bit wasteful. It also wastes energy - transmitting a signal takes energy, and our modulated signal consists of the unmodulated carrier component plus the two sidebands, each with at most a quarter of the raw carrier's amplitude (remember that A is at most 1) - at least two thirds of the energy is in that unchanging carrier!

If we can generate a signal that looks like an AM signal, but remove the constant carrier and one of the sidebands, we can get the same signal amplitude in the surviving sideband for a small fraction of the energy (or, for the same energy, get a much stronger sideband). Therefore, single-sideband is popular for situations where we want efficient use of power and bandwidth, such as voice communications between a large number of power-limited portable stations. But for broadcasting high-quality sounds such as music, we tend to want to use full AM - power isn't such an issue for a big, fixed, transmitting station and we can afford to use twice the bandwidth to get a better signal; as an AM signal has two copies of the baseband signal in the two sidebands, the receiver can combine them to effectively cancel out some of the background noise.

Of course, you need to make sure that the transmitter and the receiver both agree on whether they're using the upper sideband (USB) or the lower one (LSB) - otherwise, they won't hear each other, as one will be transmitting signals on the opposite side of the carrier frequency to the side the other is listening on! And if the receiver adjusts the carrier frequency they're listening on to try and find the signal, they'll hear it with the frequencies inverted, which won't produce recognisable speech... A single sideband receiver can listen to an AM signal by just picking up the expected sideband, but an AM receiver will not pick up a SSB signal correctly, due to the lack of the constant carrier to use as a reference.

But amplitude modulation (of which SSB is a variant) is, fundamentally, limited by the fact that background noise will always be indistinguishable from the signal in the sidebands; all you can do is to transmit with more power so the noise amplitude is smaller in comparison. However, there is a fundamentally different way of modulating signals that offers a certain level of noise immunity...

Frequency Modulation (FM)

What if we transmit a constant amplitude signal, but vary its phase according to the baseband signal?

If we go back to our carrier:

Ac * sin(t*Fc*T + Pc)

And baseband signal:

A * sin(t*F*T + P)

Rather than having the carrier at some constant phase Pc, let's set Pc to the baseband signal, scaled so that the maximum baseband range of -1..+1 becomes a variation in phase of, say, at most T/4 each side of zero:

Pc = A * sin(t*F*T + P) * T/4

Thus making our modulated signal:

Ac * sin(t*Fc*T + A * sin(t*F*T + P) * T/4)

This is called "phase modulation". But you never hear of "phase modulation", only "frequency modulation". Why's that?

Well, a receiver has a problem with detecting the phase of the signal. Phase is always relative to some other signal; in this case, the transmitter is generating a signal whose phase varies compared to the pure carrier. The receiver, however, is not receiving a pure carrier to compare against. The best it can do is to compare the phase of the signal to what it was a moment ago - in effect, measure the rate of change of the phase. To make that work, the transmitter must change the phase at a rate that depends on the baseband signal, rather than directly with the baseband signal. But how to do that?

You may recall, from when we first talked about phase, that the phase difference between two signals of slightly different frequency changes with time - it goes from 0 to T (or -T/2 to T/2, depending on how you measure it) at a constant rate, and that rate depends on the frequency difference. That means that a frequency difference between two waves is the same thing as the rate of change of phase between the two waves...

This makes sense if you look at our basic wave formula A * sin(t*F*T + P) - if P is changing at a constant rate, then P is some constant X times time t, so we get A * sin(t*F*T + t*X), or A * sin(t*(F*T + X)), or A * sin(t*T*(F + X/T)) - we've just added X/T to the frequency (and T is a constant).

So all the transmitter needs to do is to vary the frequency of the signal it transmits, above and below the carrier frequency, in accordance with the baseband signal; and the receiver can measure the rate of change of phase in the signal it receives to get the baseband signal back.
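
As a concrete sketch of that in Python (with invented numbers): the transmitter keeps a running phase, and the baseband signal just nudges how fast that phase advances.

    import numpy as np

    # FM sketch: the baseband signal sets the *rate of change* of phase, so the
    # transmitter integrates it into a running phase.
    fs = 100_000.0
    t = np.arange(int(fs * 0.01)) / fs
    Fc = 10_000.0                          # carrier frequency
    deviation = 1_000.0                    # +/- 1 kHz at baseband extremes of +/- 1

    baseband = np.sin(2 * np.pi * 400.0 * t)           # a 400 Hz tone
    inst_freq = Fc + deviation * baseband               # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs       # running integral of frequency
    fm_signal = np.sin(phase)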

Therefore, we call it frequency modulation (FM). The neat thing is that, because we don't care about the amplitude of the received signal - just its phase - we don't tend to be affected by noise as much: noise adds to the signal, so it mainly changes the amplitude of the received signal rather than its phase.

So what would we see if we tuned across an FM signal with our narrow-bandwidth receiver? How much bandwidth does an FM channel need, for a given maximum baseband frequency?

Surely, we get to choose that - if we decide that a baseband signal of +1 means we add 1kHz and -1 means we subtract 1kHz, then the channel width we need will be 1kHz either side of the carrier frequency, 2kHz total, regardless of the baseband frequency involved? Unless the baseband frequency becomes a sizeable fraction of the carrier frequency, of course; we can't really measure the frequency of the modulated signal if it's varying drastically in phase at timescales approaching the cycle time!

But just as multiplying our carrier by the baseband signal for AM caused Strange Maths to happen and create sidebands out of nowhere, something similar happens with FM. Now, I could explain the AM case by hand-waving over the trigonometric identities and show how sidebands happened, but the equivalent in FM is beyond my meagre mathematical powers. I'll have to delegate that to Wikipedia.

General Modulation: Mind your Is and Qs

Before we proceed, I must entertain you with an interesting mathematical fact.

Imagine our carrier signal at time t again:

A * sin(t*F*T + P)

Imagine we're using that as a carrier, so F is constant, and we're thinking about modulating its amplitude A or its phase P to communicate something. We're varying two numbers, so it should be no surprise that we can rearrange that into a different form that still has two varying numbers in it. It just so happens that we can write it as the sum of two signals:

I * sin(t*F*T) + Q * sin(t*F*T + T/4)

This is kinda nice, because that's just varying amplitudes of two constant-amplitude constant-phase sinusoidal signals at the same frequency, with a quarter-wave phase difference between them. This form means we don't have P inside the brackets of the sin() any more - and it was varying things inside the brackets of sin() that made the maths of AM and FM so complicated to work out.

But what's the connection between our original variables A and P, and our new ones I and Q? Well, it's quite simple:

I = A * sin(P + T/4)

Q = A * sin(P)

This can be used to build a kind of "universal modulator": given a carrier signal and two inputs, I and Q, output the sum of the carrier times I and the carrier phase-shifted by T/4, times Q.
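
Here's a little Python check of that identity and the "universal modulator" built on it (the numbers are arbitrary; T/4 becomes pi/2 when working in radians):

    import numpy as np

    # A sine of amplitude A and phase P equals I times the reference carrier
    # plus Q times the quarter-cycle-shifted carrier, with I = A*sin(P + T/4)
    # and Q = A*sin(P).
    fs = 100_000.0
    t = np.arange(int(fs * 0.01)) / fs
    F = 10_000.0
    A, P = 0.7, 1.2                        # arbitrary amplitude and phase

    theta = 2 * np.pi * F * t
    direct = A * np.sin(theta + P)

    I = A * np.sin(P + np.pi/2)
    Q = A * np.sin(P)
    via_iq = I * np.sin(theta) + Q * np.sin(theta + np.pi/2)

    print(np.allclose(direct, via_iq))     # True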

You can then build an AM, FM, USB or LSB transmitter by working out I and Q appropriately. If your input baseband signal is X:

For AM (varying A, holding P = 0): I = X, Q = 0.

For FM (holding A = 1, varying P so that X is the rate of change of P - for a steady X, P = X * t): I = sin(X * t + T/4), Q = sin(X * t).

For LSB, I = X, Q = X phase-shifted by T/4

For USB, I = X, Q = -(X phase-shifted by T/4)

The latter two deserve some explanation! Let's imagine that X, our baseband signal, is a single sine wave, with zero phase offset to keep it simple:

Ax * sin(t*Fx*T)

For LSB, that gives us:

I = Ax * sin(t*Fx*T)

and

Q = Ax * sin(t*Fx*T + T/4)

Feeding that into the modulation formula to get our modulated signal:

I * sin(t*F*T) + Q * sin(t*F*T + T/4)

gives us:

Ax * sin(t*F*T) * sin(t*Fx*T) + Ax * sin(t*F*T + T/4) * sin(t*Fx*T + T/4)

We've got sin(X)*sin(Y) again, so can use sin(X)*sin(Y) = sin(X-Y + T/4) / 2 - sin(X+Y + T/4) / 2 to expand them out; the sum-frequency terms cancel and the difference-frequency terms add up, giving:

Ax * sin(t*F*T - t*Fx*T + T/4)

Or:

Ax * sin(t*T*(F - Fx) + T/4)

That's a single sine wave, at frequency F - Fx - just the lower sideband!
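
Again, this is easy to check numerically. A Python sketch (arbitrary frequencies and amplitude):

    import numpy as np

    # Feeding the baseband in as I, and a quarter-cycle-shifted copy as Q,
    # leaves only the F - Fx (lower sideband) component.
    fs = 1_000_000.0
    t = np.arange(int(fs * 0.01)) / fs
    F, Fx = 10_000.0, 1_000.0              # carrier and baseband frequencies
    Ax = 0.6

    I = Ax * np.sin(2*np.pi*Fx*t)
    Q = Ax * np.sin(2*np.pi*Fx*t + np.pi/2)
    modulated = I * np.sin(2*np.pi*F*t) + Q * np.sin(2*np.pi*F*t + np.pi/2)

    lower_sideband = Ax * np.sin(2*np.pi*(F - Fx)*t + np.pi/2)
    print(np.allclose(modulated, lower_sideband))   # True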

Handily, we can get I and Q back from a modulated signal by just multiplying it by the same two carrier signals again. If we take our modulated signal:

I * sin(t*F*T) + Q * sin(t*F*T + T/4)

...and multiply it by sin(t*F*T) we magically get I back; and multiplying it by sin(t*F*T + T/4) magically gets us Q back (if anyone can explain how that works with maths, please do, because I can't figure it out; but I've experimentally verified it...).
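
For what it's worth, here is that experimental verification as a small Python sketch: multiplying by each reference carrier and averaging over a whole number of carrier cycles leaves half of I and half of Q, which we then double (the numbers are invented, and this is a demonstration rather than a proof):

    import numpy as np

    # Recovering I and Q from the modulated signal by multiplying by the two
    # reference carriers again and averaging over whole cycles.
    fs = 1_000_000.0
    t = np.arange(int(fs * 0.01)) / fs     # 10 ms = a whole number of carrier cycles
    F = 10_000.0
    I_sent, Q_sent = 0.3, -0.8

    theta = 2 * np.pi * F * t
    modulated = I_sent * np.sin(theta) + Q_sent * np.sin(theta + np.pi/2)

    I_recovered = 2 * np.mean(modulated * np.sin(theta))
    Q_recovered = 2 * np.mean(modulated * np.sin(theta + np.pi/2))
    print(I_recovered, Q_recovered)        # approximately 0.3 and -0.8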

We can visualise a modulated carrier in terms of I and Q on a two-dimensional chart. Conventionally, I is the X axis and Q is the Y axis. If there is no signal, I and Q are both zero - we get a dot in the middle of the chart. The amplitude of the received signal is the distance from the centre to the dot, and the phase relative to the expected carrier is the angle, anti-clockwise from the positive X axis (extending to the right from the origin). If there is an amplitude-modulated signal, then that dot moves away from the centre by the baseband signal, at some angle depending on the phase difference between the received signal and the reference carrier in the receiver.

What will that angle be? Well, if we lock our reference carrier in the receiver to the phase of the signal when we first pick it up, then as AM doesn't change the phase, it will remain zero - the dot will just move along the positive I axis. If we don't have any initial phase locking, and just go with whatever arbitrary phase difference exists between our reference oscillator and the received signal (which will depend on when the reference oscillator was started compared with when the transmitter's oscillator was started, and the phase shift caused by the propagation time of the signal, which depends on your distance - so, pretty arbitrary overall), we will find that our I/Q diagram is just rotated by some arbitrary angle. But that's fine: as with FM, it's the change of phase that matters, not the actual phase.

If we receive an FM signal, then the amplitude will remain constant but the phase will change, meaning that the dot will wiggle back and forth along a curved line - an arc of a circle about the origin.

Noise in the signal will cause the distance from the origin to vary, but it won't cause much variation in the angle unless it gets overpoweringly strong.

The fun thing about I/Q modulation is that it means we can take any two baseband signals and modulate them onto a single carrier, as long as their frequencies are well below the carrier frequency. We can modulate amplitude and frequency at the same time. We could have stereo audio by using I and Q as the left and right channels, respectively.

But, in practice, we tend to use such general I/Q modulation for digital data, rather than putting two analogue baseband signals together!

Digital data modes (QAM)

Say, rather than sending an analogue signal such as voice, you want to send a stream of symbols - such as characters of text.

You can assign each symbol in the alphabet you want to send to a particular I/Q value pair, and feed that into your transmitter.

At the receiver, we'll pick them up with some arbitrary rotation, because the reference oscillator is at an arbitrary phase difference to the received carrier, and some arbitrary scaling of the I/Q pairs, because we don't know how much the signal has been degraded by distance.

But if we can somehow work out that phase difference and rotate the I/Q diagram to get it the right way around again, and can work out the peak signal strength expected from the transmitter and scale the I/Q values up to their proper range, we can decode the signal by comparing the received I/Q against our list of I/Q values assigned to each symbol in the alphabet, and picking the nearest - noise will shift the signal around a bit, but the nearest one is our best bet.

As usual, if we send symbols faster (so there's less signal at each I/Q value for each symbol) then the influence of noise rises, so we must trade off the rate at which we send symbols (the "baud rate"), how far apart the symbol I/Q points are, and how many symbols we get wrong per second.
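
A nearest-point decoder is only a few lines of Python. This sketch uses a made-up four-symbol constellation and an invented noise level, just to show the idea:

    import numpy as np

    # Toy QAM decoder: pick the constellation point nearest to each received
    # I/Q sample.
    constellation = {
        'A': (1.0, 1.0), 'B': (-1.0, 1.0), 'C': (-1.0, -1.0), 'D': (1.0, -1.0),
    }

    def decode(i, q):
        return min(constellation,
                   key=lambda s: (i - constellation[s][0])**2 + (q - constellation[s][1])**2)

    rng = np.random.default_rng(0)
    sent = ['A', 'C', 'D', 'B', 'A']
    received = [(constellation[s][0] + 0.2 * rng.standard_normal(),
                 constellation[s][1] + 0.2 * rng.standard_normal()) for s in sent]
    print([decode(i, q) for i, q in received])   # usually matches 'sent'

Crank up that 0.2 noise level, or pack more symbols into the same space, and the decoder starts picking wrong neighbours - which is exactly the tradeoff above.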

How do we get that original phase-lock, though, to rotate the pattern to the right angle? Well, we might make our choice of points (which is known as a "constellation") not rotationally symmetrical, and make sure that we transmit enough different symbols to make the rotation of the constellation obvious at least once every time unit (which might be "a second" or it might be "at the start of every message" or whatever). For instance, make the constellation look like a big arrow pointing along the I axis, and start each message with a few symbols including the tip of the arrow and a couple each from the central line and the two lines in the arrow-head. The receiver can watch the pattern until it becomes clear which phase angle it needs, and then decode merrily on that basis. It will need to keep checking the phase angle and updating it - if there's even the slightest difference between the frequency in the transmitter's oscillator and the frequency in the receiver's oscillator, the I/Q signals at the receiver will slowly rotate as time passes.

Or, you can avoid the need in the same way that FM does - rather than having the constellation defined in terms of points in the I/Q diagram, define it in terms of distance from the origin and rotation angle for each symbol. Each symbol then goes to an I/Q pair that's at the specified distance, but at an angle that depends on the last symbol transmitted, plus or minus that offset. The receiver doesn't need to synchronise at all; it just needs to grab I/Q values and work out the distance from the origin, and the rotation angle between symbols received. If none of the symbols have a phase difference of zero, we get an extra benefit - every symbol involves a change in the modulated signal, even if it's the same symbol again, so we can automatically detect the rate at which symbols are being sent, and not get confused as to how many are being sent when the same symbol is sent repeatedly! You need to send a single symbol before any transmission that is used purely for the receiver to use as a phase reference for the NEXT symbol, which actually carries some data, of course, but that's a small price to pay.

To establish the distance from the I/Q origin that corresponds to peak transmitter power, we have to either make sure we transmit a symbol using maximum power at suitable intervals so the receiver can update its expectations and scale the I/Q diagram to the right size, or put our entire constellation in a circle so the amplitude is always the same; or, perhaps, have our constellation consist of a series of circles that use different phase differences in each circle - if you have three symbols in the smallest circle, and four in the next circle, and five in the next one up, and those three circles are rotated so that no two points on different circles are on the same phase angle, then for any two received symbols, we will be able to tell what circles they are on just from the phase difference between them - and thus know how much to scale them to place them on those circles. But if we're transmitting a reference symbol at the start of every transmission to establish the phase difference to the first actual message symbol, as suggested in the previous paragraph, we can just send that at a known power level and use it as a reference for both phase AND distance from the origin.

The modulated signal is a mixture of AM and FM, as the transmitted symbol's point in the constellation varies both in distance from the origin (amplitude) and in angle rotated since the last symbol (frequency shift).

Because the I/Q representation of a signal with respect to a carrier is known as "quadrature" (the I and Q stand for "in-phase" and "quadrature", respectively), this combined AM-and-FM is known as "Quadrature Amplitude Modulation", or "QAM". Standard constellations have names such as "256-QAM" (which has 256 different symbols, handy for transmitting bytes of data!).

Conclusion

So there you have it. That's not a complete summary of all the tips and tricks used to jam information into communications media; but it should explain the basics well enough for you to make the best of the Wikipedia pages for things like OFDM and UWB!

Note: In order to try and reduce the cognitive load, I've simplified the maths above somewhat - using sin with a phase difference rather than cos, for instance. It still works to demonstrate the concepts, and produces the same results as the conventional formulae apart from perhaps a changed sign here or there!

I'd like to clarify the explanations of various kinds of waves with diagrams, but I don't have the time to draw any right now! I may be able to come back to it later.

The Good, The Bad and The Ugly (by )

Yesterday we went to the allotment to begin the slow down for winter.

preparing the planter for raspberry canes

Prepping the allotment for some fruit canes 🙂 and broad beans, finding frogs which are now safely in Mary's container/fairy garden, digging up the last few root veg that were ready - all yay!

raised bed prepared for broad beans

We also have late tomatoes which are still producing green tomatoes so it is time for more chutney making!

Late green tomatoes awaiting the chutney process

Discovering someone has stolen two of my galvanised herb pots, turfing out my rosemary plant which I grew from seed when we first moved to the Cotswolds >:( They left the third so I think they must have gotten interrupted. Big boo >:(

There was also wood smashed up and thrown into one of our planters and the grumpy man who is sometimes there came and stared at me and the kids for a while before moving on :/

Ending with the frog pics because they are cute - Mary has named it Slimey unsurprisingly. It is in the Fairy garden because a) cats b) streamers.

Mary's friend Slimey the Frog

Slimey is pictured here in the empty worry trays that I use for weeding.

Slimey the frog

Last night was the only opportunity we had to take the kids to fireworks but I hadn't slept properly and then had failed to have a nap so decided not to go with Al and the kids - he says the fireworks would have been borderline for me as there were a lot of flashing crowns and wands and shoes and I hadn't had a lot of sleep so I made the right decision in not going - I want to get through this November without a seizure as they set me back so far and it's like I've hit my head all over again so I am being extra cautious because I am so much better that I just don't want to risk it - but I felt like a bad parent not going but Al would have had to carry a fold up seat and stuff for me as well and I think it would have made it miserable for everyone if I had gone - we toasted marshmallows over candles when the girls got back and I managed to cook my first proper meal with little mess and no burning or mess ups since the head injury (this is cooking on my own rather than with Al or Jean) - so pretty pleased with that.

Toasting marshmallows over candles

However...

TW: miscarriage

Having nightmares at the mo - the next week is going to be tricky but I have craft supplies and have already made a bazillion cards and keyrings - my plan is to bulldozer through - this is the day last year where I left the house walking and excited about all the awesome stuff I had planned for November and had to be brought home by a friend as things started to go wrong very quickly and I couldn't even walk properly and felt like I had been hit with the flu hammer - nightmares are of course all hospital based or searching for lost babies/kittens/cute things or failing to rescue them no matter how hard I try - very grateful to the NHS and very aware I wouldn't be here if they hadn't worked so hard - I ended up with a severe BP crash and seizure, fortunately I was already at the hospital - this one has affected me far harder than the one in the summer - but it was a pregnancy that was older and it was far more traumatic. Trouble is I get cascades so the memories of that hospital trip flash to others including the trips to A&E whilst half way through Jean's pregnancy.

Big irony about this is that I can't recall people's birthdays and I even struggle with things like Christmas, but this and the ectopic are burned into me. I accidentally woke Al up crying last night which I wanted to avoid :/

I'm also getting hacked off with myself for still being so drenched in this - it doesn't help that the pelvis has just not really recovered so I have a reminder every time I walk - I basically gave birth to the placenta which was the size of a small baby.

It's eating at my core partly because I fear it was perhaps my last chance to have a baby.

My craft obsession:

Card making is a big thing for me when I am feeling too frazzled or sick to do much else. Alaric bought me supplies, including a Christmas gift box colouring in book.

christmas crafting supplies

I've been making cards including ones of my St Oswold's picture I did in the summer.

St Oswold's greeting cards

Last night when I couldn't sleep I remembered I had lots of split rings to make keyrings with - so I combined them with my bracelet charms.

snowflake keyrings

I am making lots of lucky dip pouches to go in the treasure chest.

keyrings in silver style

I have to end the post with my guardian cat - Hydrogen has been through the mill of life and still likes to sit with me and purr. Here she is being a Dragon Cat with her hoard of keyrings.

Dragon Cat and her hoard of keyrings

Radio Waves (by )

I really love learning things, and recently I've finally been removing a long-standing thorn in my side - the fact that I don't really understand radio frequency electronics and the propagation of radio waves.

I've tried to fix this a few times in the past, but the resources I'd read never seemed to quite explain the whole picture - and I couldn't see how to piece the things they explained together into one coherent understanding of the electromagnetic world; they were clearly only shedding light on little corners of a totality that still remained mysterious to me.

Well, there are still gaps in my understanding... but I've made some progress, and in the hope that I can help others struggling with the same confusions as I was, I'd like to share my way of understanding it all.

One thing that bothered me was that explanations of transmission-line behaviour seemed to flip between talking about instantaneous voltages and currents at some point in the line, sampling the analogue signal traveling down the line - or talking about an RMS average voltage or current, and thereby causing me to struggle to make sense of what they were saying. But I think I now get transmission lines to some extent (although I'm still hazy on waveguides, because I've not gotten around to looking into them yet). And I was never quite sure what the impedance of a whole transmission line really meant, regardless of its length. If I had a transmission line and put a resistor over the end of it, and hooked it up to a battery and an ammeter, I knew that the current flowing would depend on the total resistance of the line and the resistor at the end - which would depend on the length of the line, as its resistance would be in ohms per metre. So what the heck was this impedance thing about? How did impedance mismatches cause reflections?

So, here's how I think about transmission lines now. The "DC model" of hooking a battery up to one end of a line and reading the current that flows into it is, of course, perfectly true - we can set the circuit up and test it; the reason it doesn't contradict this weird parallel world of impedances is that the DC model is a steady state model of the system. When you first connect that battery to the line, current is going to start flowing into it, crawling along at a sizeable fraction of the speed of light; but until that current has reached the end, flowed through the terminating resistor, and flowed all the way back, it can't possibly have communicated any information about the total resistance of the line and its terminating resistor... So how much current initially flows from the battery, and why? Of course, the line can be thought of as two series of tiny inductors (with resistors in series, if we assume the inductors are perfect) with tiny capacitors connecting the two conductors, due to the inherent inductance of the wire and the inherent capacitance of the gap between them; you can imagine that the current from the battery has to charge the capacitors through the inductors for the voltage/current surge to propagate down the line. But what made "impedance" really click for me was going back to the basics of Ohm's Law and seeing it as a ratio of voltage to current. At any point along that transmission line, a certain instantaneous current will be flowing - and there will be a certain instantaneous voltage between the two conductors at that point; and the voltage divided by the current is the impedance.

So, if a 12 volt voltage source is connected suddenly to a 50 ohm impedance cable, an instantaneous current of 0.24 amps (12 ÷ 50) will flow. Now, as Kirchhoff's current law tells us, the currents flowing into a node must sum to zero; so with the imaginary node at any point on the transmission line connecting two halves of it, the input current must equal the output current (although some current may be lost to resistive heating or leakage, that's not relevant in this case). So what happens when there's a change in impedance at some point in the transmission line? The current must remain the same, but the impedance changes - so the voltage must change, to make Ohm's Law still hold. If my 50 ohm cable is connected to a 75 ohm cable, that 0.24 amps flows into it and changes into a voltage of 18 volts (0.24 × 75). Which is why high impedance transmission lines are less lossy; a transmitter putting a watt of power down such a line (with proper impedance matching) will push out a higher voltage with less current than one putting a watt of power into a low impedance line - and resistive losses in the cable are worse for higher currents.
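
As a trivial worked example of that "impedance is just a ratio of voltage to current" view, in Python (same numbers as above):

    # Impedance as the ratio of instantaneous voltage to instantaneous current
    # at a point on the line.
    volts = 12.0
    z_first, z_second = 50.0, 75.0          # ohms
    current = volts / z_first               # 0.24 A flows into the 50 ohm line
    volts_after_junction = current * z_second
    print(current, volts_after_junction)    # 0.24, 18.0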

How about the reflections when impedance changes? I'm still a little hazy on this, but I think it's something along the lines of this: imagine a point just where the impedance changes in our example of moving from a 50 ohm cable to a 75 ohm one. A current is flowing into that point, but the voltage is higher after that point than before - which is going to create a current traveling back the other way. What I'm hazy on is how this happens at junctions where the impedance falls (is it to do with the fact that the current flows alternately backwards and forwards, so the junction is traversed by current in both directions anyway, and the phases where the current travels from low to high impedance are what create the reflections? If so, isn't that a kind of rectifying action, that will create harmonics and intermodulation? But what is the "direction" of a signal traveling along a line, anyway? If we froze the signal in time, we'd just see a sine wave of voltage and a sine wave of current along the transmission line - if we restart time, how does it "know" what direction to propagate in? Something to do with the relative phase of the voltage and current waves?) So, yeah, I've a little more to learn there.

But this model of impedance does explain a lot. I wondered why the angles of the radials of a ground-plane monopole antenna affected impedance, but now it makes sense - the end of the transmission line basically spreads out to become a dipole, or a monopole and its ground plane; the electrical field of the traveling signal has to cross a larger region of space, so it makes sense that the voltage required to do so might vary depending on the amount of space crossed. All the mysterious constants, like the fact that a dipole trimmed to 0.48 times the wavelength has an impedance of 70 ohms are really down to the electromagnetic stretchiness of space: the impedance is the voltage required to push one amp along a transmission line (a dipole antenna just being an oddly-shaped transmission line, handing the signal over to the even weirder transmission line that is free space itself), and that is a function of the permittivity and permeability of that space.

This model also explains how impedance matching transformers work. A 1:2 turn ratio transformer will transform X volts and Y amps on the "left" into 2*X volts and Y/2 amps on the "right"; as the impedance is V/I, that means it converts R ohms on the left into R*4 ohms on the right, simply through changing the voltages and currents. A 1:N turn ratio transformer makes a 1:N^2 change in impedance.
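
The turn-ratio arithmetic is simple enough to sketch in a couple of lines of Python:

    # Impedance scaling through an ideal 1:N transformer: voltage is multiplied
    # by N and current divided by N, so the V/I ratio scales by N squared.
    def impedance_through_transformer(z, n):
        return z * n**2

    print(impedance_through_transformer(50.0, 2))   # 200.0 ohms from a 1:2 transformer
    print(impedance_through_transformer(50.0, 3))   # 450.0 ohms from a 1:3 transformer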

Antennas with multiple elements are confusing, but I'm not sure anybody really understands them - as far as I can tell, the design process is almost always to mock it up in a finite-element computer simulation or build a prototype and tweak the design until the desired parameters are obtained experimentally; the mutual interactions between the elements (not to mention ground, support structures, and the transmission line feeding the antenna) are just too complicated to analyse.

I really don't get why there's a near field and a far field (or that funny one in between that, I think, is just a mixture of the two). Does the antenna create both far and near fields at once, and the near field is stronger but doesn't spread out far, so the far field is negligible when close to the antenna? Or does the antenna create a near field, which "decays into" the far field as it spreads out? Nothing I've found seems to explain.

I'm not very clear on why a balanced transmission line that's shorted at one end and open at the other end has varying impedance along its length, and can be used for impedance matching, but it doesn't create reflections from the ends.

But, I can understand how to run a cable to a dipole or monopole antenna, manage the impedance transitions, and make it radiate efficiently. That's progress!

National Poetry Day UK 2018 (by )

Today is National Poetry Day - so I shall be spending the day on various social media sites and blogs sharing all the stuff from the archives - namely the writing exercises and write ups on interesting writers.

Where you can find me:

Saffy
TheMonsterBlogs
Sarah Snell-Pym Writer and Artist
Pinterest Wopo Inspired
Google+

Where you can read my poetry:

Turquoise Monster
Snell-Pym Poetry
Orange Monster
Magenta Monster

Where you can find writing exercises:

World Poetry Writing Month
Magenta Monster

Poetry is a pretty big thing for me - I never thought it was going to be, I kind of thought the whole thing wasn't for me. I like what I like and that has little to do with the academic studies of the poetic form. I like poems in different styles from different times and cultures and so on. I find people being elitist and pretentious about it all... annoying and ignorant.

That is one of the reasons that I have backed the local poetry festivals the way I have - they offered spaces for voices that were rarely heard and the internet also helped.

This month I am lucky in that I am hosting events at the Gloucester Poetry Society and I am taking part in a Slam over in Cheltenham (still not sure about the competition part of the slam culture but I like the events over all and the people I see there - they are fun and diverse).

I've been going through my poetry and trying to decide how best to move forward with it all - I have singularly failed to actually submit work once again but thanks to people chasing me I am in a couple of pamphlets which are to be released later this month I believe.

I have several collections that were stalled with head injury and what not which I should really revisit and sort out and I have a few more zines that are pretty much ready to go!

But I also want to produce more books with other people - I liked WigglyPet Press's involvement with last year's poetry anthology but it has now been out for a year so I feel that it is time for me to publish something new!

At the moment I am thinking of something themed around space, the cosmos, exploration and the unknown but I have not decided on the format yet!
