INTELLIGENT NOISE
The article below appeared in the December 1962 issue of Analog Science Fact / Science Fiction

No matter what the technical definition of "noise" may be, the practical fact is that any message that you cannot interpret is noise to you. But when it looks like noise, sounds like noise, acts like noise, and can't be recognized as a message...that's the latest technique in telecommunications!

Intelligent Noise
By Alfred Pfanstiehl

Secondary examined the plastic sheets once more, his sharply filed fingerhorn tracing the symbols and wavy lines. The folds of deep red flesh around his ears quivered in his equivalent of a broad grin. This ought to make Primary most happy! He swiveled his acceleration couch to face Primary and waited for recognition.

The captain looked up from his viewing screen and clicked a finger. "Did you run the analysis three times?" he asked.

"Affirmative, Primary. I ran all possible modulation auto-correlations more than three times because it seemed so strange. It's definite that only the lowest propagatable frequencies contain intelligence --- and only voice type undulating signals at that, except for a few unmodulated carriers. Nothing at all in the higher bands. I can't understand it."

"You don't have to understand," Primary huffed. "We have seen cultural anomalies before. The computer could find no pulsed or phase-coherent tracking beams at all?"

Secondary let himself quiver a happy smile. "Absolutely none. They don't have the faintest idea we're up here. I let the receivers integrate much longer than necessary, and even checked the visible light bands and on up to the limit of our sensors. Just noise."

This was going to be pleasant and easy. Primary relaxed in his couch. He fished out a fresh packet of eye-glitter, horned it open, and applied it directly under his eyelids in one smooth motion. Yes, he could take his time on this one, and make sure he found the planet's most vulnerable spots. He savored the sparkling colors a moment before getting to work on the job at hand.

At Western UN Tracking Center, Mike Witkowski reached for the blinking phone. "West Center here... Chuck Park? Hi... How are things at East? ...Really? Nineteen minutes. Give me the gross ephemerides off standard... Yeah... You are sure it isn't friendly? O.K., we'll get on it as it comes around. Sure... You alerted Safety? ...All right, I'll call you back."

Fifty-one minutes later, just as Primary was happily arming the smaller torps -- no need to waste the big ones -- a couple of lovers on a hilltop wished on a rather odd-colored shooting star that terminated in a beautiful puff.

This article is about a very real breakthrough in conservation. The commodity to be conserved is the electromagnetic spectrum. We'll take a look at the fundamental workings of a system using radio waves in an entirely new manner so that an ordinary radio receiver simply can't hear the radar beams, control signals, teletypes, voices or other communications using the system. In fact, the more sensitive and selective the receiver, the less it picks up!

That alien ship had superb equipment for analyzing every type of radio signal on the air, or so they thought. Their receivers tuned up and down the radio spectrum and sharply filtered out every signal. They did a fine job of combing out all electromagnetic frequencies, sorting them into their respective cycle-per-second slots, suppressing much of the noise and static, and presenting the operator with a record of just what every transmitter was putting out within hundreds of miles. They couldn't possibly miss any radar beams powerful enough to seek out their ship.

But they did. Their equipment was so good at suppressing incoherent, unintelligent noise that it had to miss. What we were broadcasting was intelligent noise --- but apparently wholly random, useless noise, just the same. And even if they had known all about the equipment we were using, it would have remained extremely difficult and unlikely that they could make head or tail of our transmissions, or even jam them. (You never say impossible in this game; the technique we are going to describe is a true-blue breakthrough, but that doesn't preclude more breaks coming along later.)

With conservation of the electromagnetic spectrum the way is clear for even more delightful pictures of acid dripping in stomachs, sounds of gunshots, music, et cetera, plus radar search beams, control signals to missiles and airliners and taxis and garage doors et cetera, et cetera.

We have long known of the upcoming scarcity of available radio frequencies. There are lots of them, yes, but it happens that some bands are much more convenient to use than others. It's in the way they bounce around and the kind of equipment required to generate and receive them. Our natural resources such as air, water, fossil fuels, farming land and the electromagnetic spectrum can be conserved by not using them, but that's stagnation. Rationing or letting the price rise are ugly ideas. So we step up the hunt for new resources and re-use techniques, try substitutes, ask people please to stop waste and pollution, and once in a while discover significant ways to use what Nature has given us more efficiently.

At least until we learn how to turn psi phenomena on and off as a practical matter, it looks as if we are stuck with the good old electromagnetic spectrum for speedy communications. This covers all radiation, from a frequency of a few cycles per second on up through the familiar radio waves of kilo-, mega-, and kilomegacycles per second, infrared, daylight, ultraviolet, and clear up through the gamma, X, and cosmic rays. The convenient way to measure and work with this type of radiation is usually to deal in frequencies -- cycles per second -- or wavelengths -- centimeters per wave. The waves we talk about are some sort of changing "distortions of space" with the energy being handed back and forth between the stresses we identify as magnetic and electrostatic. Our whole mathematical basis for describing these waves and the energy they transmit uses the ideal and beautifully perfect "sine wave" as the model for how these signal energies change in time.

If your school-days trigonometry left you a little cold, so that a sine function only means "opposite side over the hypotenuse of a right triangle" and nothing more, you're missing one of Nature's glories. Actions that approach sine wave ideal motion, at least as far as we can measure, lie at the base of so many scientific observations and analyses. It is easy to become so impressed by -- or accustomed to -- the mathematical power of the sine wave function that you tend to force all your thinking about motions -- especially electrical motions in the field of electronics -- into its undulating matrix. Sometimes this leads to defining frequencies in negative time, or to talking about zero-length portions of sine waves, but by golly, it all fits together so beautifully!

The sine wave derives from the circle: It is a plot of the distance from a point on the circumference to the horizontal diameter as the point moves steadily around the circle. It doesn't bother us that no such thing as a perfect circle -- or a perfect sine wave -- exists except as an idea. We do need such ideal, perfect concepts as foundations for our imperfect, practical measurements and designs. The sine wave concept works magnificently when we deal with the ac inductance of coils, the capacitance of condensers, the radiation from antennas and such practical devices. These are the elements we combine, with other items, to make frequency-selective filters, impedance-matching transformers and all the other gadgets a circuit designer has available to make radio transmitters and receivers.

Perhaps too magnificently. The model we are accustomed to analyzes any non-sine wave signal -- it can be square, triangular, short pulses, a conglomeration such as music or speech, or even the outline of the skyline of New York -- into combinations of smooth sine waves with various frequencies, amplitudes and starting points -- phase -- in time. This model checks out so well that perhaps it has trapped us.

We build filters and "wave analyzers" which break down any non-sinusoidal complex wave into its sine wave components. And we can turn around and synthesize almost any complex wave from a set of pure sine wave elements. So it looks as if we have come to trust this model -- which works so well with the types of circuits we have designed by using it -- to the point that we forget it is only a model. Marks on paper are not the real thing.
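
To make the synthesis idea concrete, here is a minimal sketch -- invented numbers, nothing more -- of a square wave pieced together from nothing but pure sine waves at the fundamental and its odd harmonics:

    # A sketch of Fourier synthesis: building a square wave out of pure sine
    # waves.  The amplitudes 1, 1/3, 1/5, ... are the standard Fourier
    # coefficients for a square wave; the sampling numbers are arbitrary.
    import math

    def square_from_sines(t, n_terms=25, freq=1.0):
        """Sum sine waves at the odd harmonics of `freq`, evaluated at time t."""
        total = 0.0
        for k in range(n_terms):
            harmonic = 2 * k + 1          # 1st, 3rd, 5th, ... harmonic
            total += math.sin(2 * math.pi * harmonic * freq * t) / harmonic
        return 4.0 / math.pi * total      # scale so the result swings about +/- 1

    # Sample one cycle: the sum hugs +1 for the first half cycle, -1 for the second.
    samples = [round(square_from_sines(i / 100.0), 2) for i in range(100)]
    print(samples[10], samples[60])       # roughly +1.0 and -1.0

The more sine wave terms you stack up, the squarer the result -- and the same bookkeeping, run in reverse, is what a wave analyzer does.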

There happen to be other, although not so convenient, ways of analyzing complex waves. And you seldom get breakthroughs by sticking to conventional thinking and methods. (The history of this breakthrough is not that of a wild tangent from conventional thinking, by the way; that seldom works, either. It's just a little switch in emphasis, with good doses of imagination and serendipity.)

It's conventional to think of the whole electromagnetic spectrum as a continuous stream of frequencies, starting with the slowly changing, long wavelengths, and going on up to the higher-frequency, shorter wavelengths. We commonly restrict our thinking to the "frequency domain" approach, identifying each frequency as a point on the line covering all frequencies. And now we find that this stream of frequencies is getting mighty crowded. We ration out specific frequency bands for each transmitter we allow to broadcast radio wave energy. The best swimming holes in this stream are filled, and much of it is uncomfortably polluted by natural and man-made trash -- static, ignition noise, et cetera.

Thirty or forty years ago the few naked boys splashing around near the low end of this stream didn't bother anyone; it was all pretty much for fun, anyway. But today you should hear some of these same lads -- no longer in suntanned freedom -- cuss when a missile count-down is delayed while some doctor's diathermy machine is located and turned off. It was only slightly, but very expensively, out of whack.

Beware of analogies! This one identifies the whole set of available radio frequencies as a continuous line -- stream -- in which we reserve spaces for specific signals -- swimmers. It is the customary view. But until recently it has been holding back truly efficient use of the radio spectrum.

We've been trying to teach the swimmers not to splash too much and to keep their arms down in order to make room for others. We assign narrow channels for each body, and only let those in who are carrying legitimate messages or on some specific duty. This is wise and necessary when the message-carrying swimmers are conventional transmitters and receivers that must be tuned to specific, narrow bands of frequencies.

But now we have Pseudo Random Noise Generators -- pronounced PRNG. With them, an apparently wide band of radio frequencies can carry many more messages simultaneously than when each message is assigned a narrow, side-by-side frequency channel. The technique stops interference and "cross-talk" between the messages.

There are other approaches to conservation of our radio frequency spectrum besides using PRNG techniques. An obvious one is to use thin, non-spreading, aimed beams from the transmitter to the receiver whenever broadcasting isn't required and when point-to-point clear views are available -- or "scatter propagation" is employed. Thus lots of sets can operate on the same assigned frequency channel, as long as only wanted pairs can "see" each other. Such microwave links are common and effective, but their application is rather limited.

Another approach, and a very elegant one, is to wring the dead time out of conversation. Sometimes when you telephone overseas, every slight pause in your string of words and syllables is detected and filled with someone else's utterance. This is based on probability theory, and it is quite a scramble. But it works, letting one radio link carry several times as many talkers as before. It's limited to voice type communications, of course.

And still another approach is to develop components able to transmit and receive at higher and higher radio frequencies. The higher you go, the more room is available to pack in communication channels, but the broadcasting and propagation problems get tougher. "Coherent light" using the LASER is the latest in really high frequency utilization. X rays might be next, for all we know.

But this is the story of PRNG.

Without using any mysterious new components, PRNG systems seem to do funny things with the electromagnetic spectrum. The same old tubes or transistors, coils, condensers, resistors and such are all you need -- except that some of them have to be good quality, fast acting elements. All we do is put them together without being too bound by "frequency domain" thinking. We pay more attention to the "time domain" concept, and we remember that frequencies of pure sine waves are merely one convenient, if limiting, way of looking at what goes on. We find a much better way of working with radio signals by remembering that a specific frequency, during any short slice of time, is really best tagged as a time rate of change of phase. (More on this later.)

The electromagnetic spectrum: Well, Newton's prism -- and drops of mist -- show us a rainbow spread of a small part of it. A radio dial or TV channel selector tunes over another part, picking out fairly narrow bands of frequencies from all the others around. We draw a nice two-dimensional picture of any specific portion of the spectrum, the array of frequencies along the horizontal scale and vertical lines at certain frequency points to represent the signal level or amount of energy present at that frequency. The panoramic spectrum analyzer is a useful device that sweeps up and down the radio frequencies -- within fairly narrow limits -- and shows a beautiful graph on an oscilloscope screen of all the signals being picked up. Each one appears as a nice vertical line as long as it is present.

This is handy for people looking for possible interference signals prior to a missile shoot, so they can select a clear channel for their guidance and telemetry signals. It is used for monitoring illegal broadcasting or enemy operations. Radio and radar engineers use spectrum analyzers for "seeing" the modulation sidebands of their transmitters. These sidebands pop up at definite points on each side of the main center frequency, and appear to be the information-carrying elements. It's normally important to avoid "over modulation" which spreads these sidebands over too much of the spectrum.

So what's wrong with this? Frequencies are unique; either you have one or you don't. Radio waves carry energy at certain frequencies, don't they?

Yes, but how about this word frequency... so we count the number of cycles per unit time, and that's it, eh? Cycles of what? Well, of the repeating pattern of distortions of space, or whatever an electromagnetic wave is, as it zips by a radio's antenna at about the speed of light. This generates an electrical voltage in the sine wave pattern -- rise, peak, fall, reverse dip, return -- for each cycle. That voltage can push a current, and current can do work. That's energy. If you are not too far away from the broadcasting station it's enough energy to be detected and encourage appropriate wiggles of the iron diaphragm of an earphone without being amplified, in the case of a crystal set.

But when we take a close look at that cycle of radio-frequency voltage or current, we find moments of zero energy as the sine wave crosses its zero line to reverse the direction of push. (Truly zero energy for zero time, to be sure, but certainly very small energy for some time.) Perhaps we ought to wonder what those vertical lines in a typical frequency spectrum represent. Apparently they show only a constant peak energy or some sort of average over a dynamic cycle.

The concept of instantaneous frequency is purely mathematical but we don't have to define frequency as the time rate of change of phase, which it is, to see the fundamental notion here. Perhaps we ought to suspect those lines in the frequency spectrum that represent energy -- or signal amplitude -- at a certain frequency. Better think of changing energy, rising and falling -- really transforming from one mode of energy to another -- as time passes. Now we can narrow our view to smaller and smaller slices of time, and not worry about whole cycles as units.
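
For the record -- standard textbook material, with no assumptions beyond writing the signal as A sin φ(t) -- the instantaneous frequency is the time rate of change of phase:

    f(t) = (1/2π) dφ/dt

For a pure, endless sine wave, φ(t) = 2πf0t and f(t) is just the constant f0. Let the phase jump suddenly, though, and at that instant the "frequency" is anything but one tidy number -- a point worth remembering when we get to phase-flipped carriers later on.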

This puts us in the time domain. It is just as "real" a way to look at the oscillations of radio waves as the conventional frequency domain view. But it happens to be worse than inconvenient -- it's downright hopeless -- to design conventional tuned circuits and filters for radio transmitters, receivers, hi-fi sets or servo systems unless you stick with frequency domain thinking. There is no other way to calculate and measure the frequency responses, bandwidths, phase shifts, et cetera, that electronic engineers learn are all-important.

So the idea of two-dimensional frequency spectra has been accepted without question. Of course, it doesn't particularly bother anyone that such a graph can only show a steady, constant condition. When "amplitude modulation sidebands" are shown, for instance, it's for the rather limited case of modulation by an unchanging signal. We make a quick snapshot of a steady tone or series of pulses, but not quick enough to slice into the individual cycles. Start talking into the microphone, and the sidebands have to dart around, pop on and off, and generally stir up quite a fuss to follow your speech. One of the little difficulties with frequency domain thinking, handy as it is, comes from the fact that signal frequencies don't stay constant; in fact they can't, if the intent is to communicate something.

If we really want to visualize how much energy we have, at what frequency, and when, we have to draw, or rather construct, three-dimensional graphs. These are fun, and give you lovely jagged mountains or ski slopes.

These 3-D graphs still can't show us the whole picture, although they are much better than semi-long exposure snapshot views in two dimensions -- conventional frequency spectra. But they still have a frequency axis. Perhaps we need 4-D graphs to include the up and down oscillations of each frequency itself. A room full of 3-D graphs would do, in case you have trouble building four-dimensional objects. These would be stacked in rows, showing a time sequence of what goes on, with very small increments of time between each graph. Rather awkward, this combining of two amplitude and two time axes -- but that's what pure time domain thinking would require.

Actually, this discussion of pure and complete time domain thinking is a bit of a digression. We can go ahead with the basic idea of PRNG systems without getting that involved. It's just that we must look carefully at what frequency is. The highly useful but conventional mathematical model sometimes assumes too much -- for breakthroughs -- such as the existence of only pure sine waves, which oscillate forever.

The time domain view shows us the rise and fall of each cycle of a radio wave. Now we might think it is a shame we can't "do something" about those slices of time when the signal energy approaches the zero line and reverses. What a waste of perfectly good time while there is practically zero energy going out!

Not really. We need periods of "nothing" in order to contrast with periods of "something" and thus communicate, whether by voice, morse code or a modulated radio frequency signal. But we don't have to alternate through periods of "nothing"; we might as well have periods of "something else" as long as they differ from the "something" that marks our desired signal -- and we can separate the two. This is one way of touching on what a PRNG system does: the "something else" can be the energy of other channels of communication if we can keep it out of the channel we want to listen to. We can, by changing our ideas of how to tune in a signal. Now it is possible for many channels to appear to use the same set of frequencies in the spectrum; the tuning is in the time domain rather than the frequency domain. It is more efficient because little slices of time that formerly were reserved and wasted now can be used.

If all this is about the process of communication, we had better clarify just what communication is. Simple: It's information transfer. Then what is information? Scientists don't appreciate hazy notions and "tendencies" or just fancy names for things. They try to tie things down to isolated quantities they can measure, duplicate, predict. So the atom of information is called one "bit." Let's be satisfied that a bit is some sort of irreducible minimum yes-no datum. We use on-off, this-that, arranged in codes to transfer information. (Human voice is an astonishingly poor code -- redundant and very wasteful of time, at least for straightforward, non-subtle communications, but brains are good at interpreting this code through a lot of noise and code variations.)

Time is the important factor; we want to use as little time as possible to transfer the most possible information. So we deal in information rate, in bits per second. Any wasted, unused time, even in the smallest of slices, is one influence limiting information rate; the other is the signal-to-noise ratio.

In the rather complex study of information theory we end up relating information rate with channel bandwidth. We find theoretical and practical minimum limits on the size of the hole in the frequency spectrum we think has to be reserved to establish radio communication, for example, at any particular bits per second, signal energy and disturbing noise energy levels. Some of the theoretical limits still hold, but PRNG systems do great things to those practical limits.
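
The theoretical limit in question is Shannon's; as a rough sketch with made-up numbers, it ties bandwidth and signal-to-noise ratio together like this:

    # The Shannon-Hartley limit: the most bits per second a channel of
    # bandwidth B (cycles per second) can carry with signal power S and
    # noise power N.  The example figures below are invented.
    import math

    def channel_capacity(bandwidth_cps, signal_power, noise_power):
        """Theoretical maximum information rate, in bits per second."""
        return bandwidth_cps * math.log2(1.0 + signal_power / noise_power)

    # A 3,000-cycle voice channel with a healthy signal, 1,000 times the noise:
    print(channel_capacity(3000, signal_power=1000, noise_power=1))   # about 29,900 bits/sec

    # The same channel with the signal buried, only one tenth of the noise:
    print(channel_capacity(3000, signal_power=1, noise_power=10))     # about 410 bits/sec

Notice that the second channel is not dead; a weak signal in a lot of noise can still carry information if you are willing to take it slowly -- which is much the sort of trade the correlation systems described below exploit.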

The really important factor in practical cases is noise. Nature and man's doings are noisy. Non-information signals are scattered all over the electromagnetic spectrum, using the same agency of propagation that we find convenient to use for communication.

Information rate, remember, requires channel bandwidth even if there is no noise. If you transmit one and only one frequency, steadily, not letting it change in amplitude or frequency -- no shifting of phase -- then you really aren't sending much information. It would be O.K. as a radio homing beacon for aircraft, or if all you wanted to know was whether something was on or off. Such a signal, using zero bandwidth, would merely take forever to establish one bit.

We have to use other frequencies, simultaneously or in sequence, to establish bits of information quickly. We have to spread out along the spectrum so many cycles per second to send a certain number of bits per second. Sadly, this means the receiver's bandwidth to accommodate the desired signal -- and reject others -- can't help but let in some noise, too. The ever-present noise from space, spark plugs and even the components of the receiver itself can't be wished away. A receiver can have as many tubes or transistors or paramps or framistamps as you want, amplifying whatever hits the antenna by any amount, but all that amplification does little good if the included noise is much greater than the wanted signal.

Pseudo Random Noise Modulation -- we should be calling it this, but PRNG is easier to pronounce -- is only one of a number of systems using correlation, which is the magic word these days. They all go a long way toward beating the noise problem, but PRNG goes them one better with its multiple-channels-in-the-same-bandwidth. Before we describe the system itself, we have two more concepts to discuss. These are a priori information and filter memory.

You have a priori information if you know something of what you expect to hear, in advance. Not everything; no need to communicate, in that case, but something -- such as the basic frequency, the particular code or language, or the type of modulation being used. PRNG transmissions depend on having a lot more a priori knowledge than ordinary methods require. It buys efficiency.

Among its other duties, a radio receiver acts as a filter, selecting out the particular band of frequencies to which it is tuned. But it isn't quite right to think of this filtering action as if it were like using a sieve. Frequency is a per-second commodity, involving the idea of time. You can't tell if you have a frequency, to say nothing of measuring its cycles per second, unless you watch it for some period of time. One cycle, or even a fraction of a cycle will do. An ordinary tuned filter does this; it remembers how the input signal was changing a short while ago by sort of matching this with its own resonant frequency. Hit it with the right frequency for a while and heavy currents start circulating back and forth, around the circuit. For a while... and that's the rub.

A narrow bandwidth filter -- we say one with a high "Q" -- that we would like to use for fine selectivity and noise reduction takes quite a time to get rocking, to start its filtering action. Usually many, many cycles of the resonant frequency -- on the order of one third of the Q, and Q can be in the thousands. Not only that, but it is hard to stop it quickly when the input signal is removed. It has electrical inertia. Well, this can limit the information rate we can expect in any particular bandwidth. Yet if we use wider bandwidth -- filters with lower Q -- we open up the gates for more noise, too. In conventional systems, we simply have to live with wide bandwidth for any fast-changing, high information rate signal, and lick the noise problem with sheer signal power. It's always a compromise. Brute force isn't very sophisticated, however.
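
To put rough numbers on that filter memory -- the one-third-of-Q figure is the one just quoted, the bandwidth relation is the standard f0/Q, and the example values are invented:

    # Rough arithmetic on filter memory.  A tuned circuit of quality factor Q
    # centered on f0 passes a band roughly f0/Q wide and needs on the order
    # of Q/3 cycles to ring up to full response.
    def filter_memory(f0_cps, q):
        bandwidth = f0_cps / q                  # cycles per second it will pass
        ring_up_cycles = q / 3.0                # cycles needed to build up
        return bandwidth, ring_up_cycles / f0_cps

    # A 1-megacycle filter with a Q of 3,000:
    bw, seconds = filter_memory(1.0e6, 3000)
    print(bw, "cycles/sec wide;", seconds, "seconds to respond")   # ~333 c/s, ~0.001 sec

A full millisecond just to notice that a signal is there is a long wait next to the ten-million-a-second clock pulses we will meet shortly.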

This noise problem is the main reason why ordinary radars with any sort of respectable range must transmit such tremendous peak powers in their pulses. One can get well cooked if foolish enough to stand close to the antenna. Now in the works: radars with equivalent range capabilities that you can hardly tell are on the air if you use a standard type receiver tuned as carefully as you like to the correct center frequency. Such a receiver would buzz like a banshee if tuned to a regular radar.

PRNG systems do their magic, as I said, without resorting to mysterious new components. They merely employ a new type of modulation and a new concept of tuning for one signal among many. You are familiar with the old standby modulations, AM and FM. The amplitude or frequency -- sometimes both -- of a radio frequency carrier wave is controlled by the intelligence frequencies, meaning the voice, music, television impulses, teletype codes or whatever. There are various adaptations of AM and FM, such as PCM (pulse code modulation), PDM (pulse duration modulation), PAM (pulse amplitude modulation) and others. Pseudo Random Noise Modulation is actually a sort of PCM, but with finer use of a priori information.

And don't forget that business about filter memory. All those pulse types of modulation run into this narrow-band, response-limiting roadblock -- except PRNG systems.

Now we'll take a look at the lash-up in a PRNG transmitter and receiver to see how the system provides more efficient use of the electro-magnetic spectrum. It turns out that this business of time-scatter tuning -- instead of frequency-band tuning -- is easier to understand in principle than a color television set, and not much more complicated in actual practice.

The heart of the system, the PRN Generator itself, is a bit difficult to describe, but that's a minor detail. We'll talk like Systems Engineers and worry only about what goes in and out of black boxes. (You didn't really want a course in flip-flops, shift registers, digital feedback and all that, did you? We're after fundamentals.)

So we have a black box called a PRNG. We feed a steady series of electrical pulses into it. These are generated by a "clock" oscillator that ticks at something like ten million pulses per second. It can be controlled -- modulated -- to go a little faster or slower as required.

We get out of the PRNG a rather special kind of pulse code. This code repeats itself continually. It is made of a variety of short, medium and long pulses scattered in a calculated but apparently random sequence. It is easy to build two PRNG's that will deliver exactly the same code sequences if fed by clocks going at similar rates.

A portion of one of these on-off Pseudo Random Codes might look like this:

The important feature is that on a relatively long-term average, the pulses are "on" for just as much time as they are "off." The total average energy is just half of the peak; the line through the center places equal areas of "on" time above and below the line. (The same could be said for the "off" times.) We say the code's duty cycle over many pulses is fifty per cent. A lower duty cycle would mean that the pulses would "weigh" less than the spaces. Anything but a fifty per cent duty cycle would defeat our purpose.
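
For the curious, here is one way such a black box can be lashed up -- a shift register with feedback, the flip-flop-and-digital-feedback business mentioned earlier. The five-stage register and its tap points are picked purely for illustration; working systems use far longer registers:

    # A sketch of a pseudo-random code generator: a five-stage shift register
    # with feedback taps at stages 5 and 3 (illustrative choices only).
    def prn_code(taps=(5, 3), seed=0b10101):
        nbits = max(taps)
        length = 2 ** nbits - 1             # this code repeats after 31 steps
        state = seed
        code = []
        for _ in range(length):
            code.append(state & 1)          # the output is the last stage
            feedback = 0
            for t in taps:                  # exclusive-OR of the tapped stages
                feedback ^= (state >> (nbits - t)) & 1
            state = (state >> 1) | (feedback << (nbits - 1))
        return code

    code = prn_code()
    print("".join(str(bit) for bit in code))
    print("duty cycle:", sum(code), "of", len(code))   # 16 of 31 -- as near fifty per
                                                       # cent as an odd-length code gets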

Now we take the business end of a normal radio transmitter, where the radio frequencies are generated and power-amplified for delivery to the transmitting antenna. Ordinarily, the voice, music, TV pulses, control signals or whatever intelligence we want to broadcast are used to modulate -- to control -- the radio frequencies directly -- using AM or FM.

But in the case we're talking about, we take the steady radio frequencies and split them into two lines with 180° phase difference. The two radio signals thus produced look like mirror images of each other:

These radio frequencies are usually up in the hundreds of megacycles per second, so the few cycles shown in the drawing only represent a small fraction of a millionth of a second.

The next step is to build a very fast operating switch to select one or the other of those 180° apart radio-frequency lines as the one to be broadcast. This switch is purely electronic; no mechanical toggle, switch or commutator could possibly go fast enough. The control signal for the switch is the pseudo random code. So now what we actually transmit is either one phase or the other of the radio frequency, flipping back and forth as the code sequence dictates.
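
In more mathematical dress, flipping the phase by 180° amounts to multiplying the carrier by minus one, so what actually goes out is code(t) × sin(2πft), where code(t) hops between +1 and -1 at the pseudo-random instants.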

If a regular receiver tries to tune in on this fast, odd, apparently random phase flipping signal, nothing happens! The normal receiver, remember, is primarily a narrow band filter; it has a high "Q"; it takes time for it to recognize any particular frequency or sudden shift in frequency. The 180° flipping signal amounts to sudden shifts of one-half cycle, and these flips simply occur too fast. Before the receiver can detect one of the pulses, the signal turns upside down. The sad result in the tuned circuits is that nearly all of the input energy is canceled out. The filter memory does it. You could make a wide-band receiver fast enough to pick up the code, but it couldn't help picking up a lot more noise, too. Then how do we tune in on this mess without hearing that noise, pseudo or truly random?

With a priori knowledge. We put a duplicate PRNG in our receiver, take its exactly matching code -- modulating a coherent local oscillator, if you want the whole story -- and mix it with the signal coming in from the receiving antenna. If we do this correctly -- minor details -- each 180° phase turn-over flip of the received signal is matched by a flip of the locally generated signal. The two codes track right along together, and we get a nice, continuous, non-flipping signal out. We arrange for the codes to aid each other rather than cancel, of course. The local generator in our receiver "knows" just when to reverse its phase to match the incoming signal, so this arrangement is sometimes called a "matched filter." In particular, a matched filter does not have electrical inertia. Now, most happily, the steady output of the combining-aiding mixing process can be filtered in a very narrow band circuit. We don't care about filter memory because this signal isn't changing. The narrow bandwidth cuts out tremendous amounts of interfering noise, giving us a much more respectable signal-to-noise ratio.
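
Here, stripped to a toy with invented numbers, is the whole trick end to end: the transmitter flips the carrier according to a code, the receiver multiplies by its own copy of that code to un-flip every flip, and the clean result stands out while a mismatched code leaves next to nothing:

    # A toy end-to-end sketch of the matched-filter idea (invented codes and
    # numbers throughout; no claim that this mirrors any particular hardware).
    import math

    CODE       = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # our code
    WRONG_CODE = [1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1]   # somebody else's
    FREQ, SAMPLES_PER_CHIP, DT = 5.0, 40, 0.005

    def carrier(n):
        return math.sin(2 * math.pi * FREQ * n * DT)

    def transmit(code):
        """Carrier with its phase flipped 180 degrees wherever the code bit is 1."""
        out = []
        for i, bit in enumerate(code):
            sign = -1.0 if bit else 1.0
            for s in range(SAMPLES_PER_CHIP):
                out.append(sign * carrier(i * SAMPLES_PER_CHIP + s))
        return out

    def receive(rx, code):
        """Multiply by the local code replica, then act as a narrow filter by
        correlating the result against the expected clean carrier."""
        total = 0.0
        for i, bit in enumerate(code):
            sign = -1.0 if bit else 1.0
            for s in range(SAMPLES_PER_CHIP):
                n = i * SAMPLES_PER_CHIP + s
                total += sign * rx[n] * carrier(n)
        return total / len(rx)

    rx = transmit(CODE)
    print("matched code:   ", round(receive(rx, CODE), 3))        # 0.5 -- strong, steady output
    print("mismatched code:", round(receive(rx, WRONG_CODE), 3))  # near zero -- just more noise

With only sixteen code pulses the rejection is modest; make the code thousands of pulses long, as a real system would, and the wrong-code output shrinks toward nothing.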

This doesn't tell us how we talk or communicate anything over such a transmission link. All we seem to have at this point is a steady, clean signal. Well, remember the clock oscillator that feeds a fast line of pulses to the pseudo random noise generator black box? The PRNG at the receiver includes a duplicate clock, of course, but it is a bit of a trick to make sure this clock does duplicate the one at the transmitter. Not just in frequency, but in pulse-by-pulse correlation. We say the two clocks must be "phase coherent," which means they must match up very closely. I can't say match up perfectly because nothing in practical science is perfect. (Scientists and engineers spend a lot of time trying to define how perfect any measurement can be in the inevitable presence of noise to disturb things.) If we want to make one action match another, we must have feedback. So we detect when the match is getting a little off, then correct the situation manually or automatically. In the feedback loop that keeps the receiver's PRNG right on the button, we have an error signal to control the clock oscillator -- to speed it up or slow it down a little as required. This can just as well be a pretty fast correcting action, maybe at a rate of several million corrections per second.

All right, the receiver's clock follows the transmitter's clock. What if we purposely make little changes in the transmitter's clock? Why, the receiver's clock follows right along, dutifully maintaining close correlation. So all we do is modulate the clock frequency with whatever intelligence we want, and this information is transmitted. Not only the clocks, either; the radio frequencies must match cycle by cycle, too. This provides another way to carry information if we want it.

And that's about all there is to this breakthrough, in principle. Of course the "minor details" include some pretty sticky problems, but we do have operating systems that do a fair job of applying these principles, and the future looks bright. Perhaps the main "minor detail" is in acquisition: after the PRNG's are tracking along in close correlation, everything is fine, but getting them lined up in the first place is troublesome. These controlled random codes have definite starting points in time; the clock frequency and phase are never known too exactly; and the radio frequency carrier -- being flipped in phase -- is also somewhat of a variable. The receiver has the real a priori key of the coded signal, but to use this key it must first search over the possible combinations of those variables -- code start time, clock frequency and carrier frequency. This can take many minutes unless some clever tricks are used to speed things up. After the receiver locks on, everything is fine; it can hold the signal.
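
As a cartoon of that acquisition search -- it sweeps code start time only, and ignores the clock and carrier frequency uncertainties that make the real job slow -- the receiver slides its copy of the code past the incoming signal and watches for the correlation to jump:

    # Acquisition in miniature: slide the local code past the received signal
    # one pulse at a time and look for the offset where the two line up.
    def correlate(a, b):
        """Agreements minus disagreements between two equal-length on-off codes."""
        return sum(1 if x == y else -1 for x, y in zip(a, b))

    # One period of a short maximal-length code, the kind the shift-register
    # generator sketched earlier produces; real codes are far longer.
    code = [0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1]

    true_offset = 6                                        # unknown to the receiver
    received = code[true_offset:] + code[:true_offset]     # the code arrives time-shifted

    scores = [correlate(received, code[lag:] + code[:lag]) for lag in range(len(code))]
    print(scores)                                 # -1 everywhere except one sharp peak of 15
    print("locked on at offset", scores.index(max(scores)))   # at offset 6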

The fundamental idea of spectrum conservation as achieved by PRNG systems lies in the receiver's ability to "listen" to a very small section of the spectrum at just the right moments for the expected signal rather than to a larger portion of the spectrum all the time. A spectrum analyzer, which adds its own influence due to its narrow band filter memory, sees the PRNG signal as if it were scattered all up and down a wide portion of the spectrum. Small pieces of the signal seem to pop on and off here and there -- just like noise. No wonder a regular receiver, filtering out a narrow portion of the spectrum, can't make head or tail of this mess, and sees only a little additional noise, if anything. It takes our special receiver with the particular and very specific code generator built in to be able to pick up all those little pieces of signal and re-assemble them.

The beautiful thing is that the same general portion of the spectrum can be used for many such PRNG transmissions. A receiver can recognize only its own code; all other codes simply don't correlate, and look like any other type of noise. You either get the right signal or you get nothing but noise. No cross-talk, ever. Yes, there is a distinct limit to how many PRNG transmissions can be carried simultaneously -- we don't really get away with breaking all the laws of information theory. Each PRNG transmission does add to the total noise, and certainly some noise does get through the receiver's filter. This can disturb the quality of communication and even make the receiver lose its lock-on to the correct code.
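
And the no-cross-talk claim, reduced to the simplest possible arithmetic -- a baseband toy, nowhere near real hardware, with invented codes and messages: two stations spread opposite message bits with different codes and broadcast right on top of each other, yet each receiver recovers only its own bit:

    # Two messages sharing the same band at the same time, each spread by its
    # own code.  Everything is stripped to +1/-1 arithmetic for clarity.
    import random

    random.seed(1)
    CHIPS = 1000                                           # code pulses per message bit
    code_a = [random.choice((-1, 1)) for _ in range(CHIPS)]
    code_b = [random.choice((-1, 1)) for _ in range(CHIPS)]

    def spread(bit, code):
        """One message bit (+1 or -1) spread over a whole code period."""
        return [bit * chip for chip in code]

    def despread(rx, code):
        """Correlate with the local code; the result recovers the bit."""
        return sum(r * c for r, c in zip(rx, code)) / len(code)

    # Station A sends +1 and station B sends -1, right on top of each other:
    on_the_air = [a + b for a, b in zip(spread(+1, code_a), spread(-1, code_b))]

    print(round(despread(on_the_air, code_a), 2))   # close to +1: A's receiver hears only A
    print(round(despread(on_the_air, code_b), 2))   # close to -1: B's receiver hears only B

The leftover smudge from the other fellow's code is the extra noise mentioned above; pile on enough simultaneous transmissions and it eventually becomes too much.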

These systems, in the final analysis, manage to conserve the spectrum by using "dead time" between pulses of energy, and by eliminating the former practical need for "guard bands" of unused spectrum space around each signal to prevent interference. Ordinary filters just don't trim away adjacent frequencies sharply enough. The matched filter idea gets around this limitation.

One analogy, for getting the time domain philosophy, might be to compare this method of signal recognition with those Ishihara color vision test cards. Scattered around in all sorts of colorful "noise" there is -- or isn't -- a nice plain Roman letter -- if your eyes aren't color blind. With the right equipment you can tune in on the message just fine. But if you aren't able to distinguish red from green, all you see is an unorganized mess. The totally new method of separating and recognizing electrical signals by noisily scattering them all over the time domain instead of restraining them to clean-cut frequencies is equivalent to learning how to recognize colors, or, say, polarized light.

Some electronics engineers in the audience might think I'm leading the rest of you astray on that question of modulation sidebands. I've said a tuned circuit can generate them at the receiver; they weren't necessarily generated at the transmitter. And I haven't mentioned Laplace transforms, a mighty tool mathematically relating time and frequency domains. We could have some interesting arguments on this, but it would resolve to the type of mathematical model set up, and doesn't really matter in this case. A Fourier analysis does provide a set of components that will synthesize the signal, but not the only possible set.

I'll hazard a prediction. Perhaps 20 years from now your tri-D set will tune to a specific pseudo random code rather than to a specific frequency when you switch from one space opera to another. Coded noise radar signals have already bounced off Venus. If it's good enough for the old girl, it's good enough for us.


Alfred Pfanstiehl (pronounced FAN-steel): 14 February 1919 to 11 October 1990
Junior Physicist at Manhattan Project Site 1 "Metallurgical Laboratory" located at the University of Chicago.

Alfred was the second of four children of Carl (17 September 1887 to 1 March 1942) and Caryl Pfanstiehl. In 1915 Carl started what would become Pfanstiehl Laboratories (by way of Special Chemicals Corporation) in the basement of his home in Highland Park, Illinois. This venture eventually became part of Ferro Pfanstiehl Corporation in Waukegan, Illinois.

Carl's father (Alfred's grandfather) was Reverend Albertus A. Pfanstiehl (14 November 1855 to 7 July 1928), and his mother was Julia, née Barnes. Carl's paternal grandfather was Pieter Frederik Pfanstiehl, who was born 12 June 1806 in Breda, Netherlands, and became a shoemaker. He left the Netherlands in 1847, engaged in various businesses and died 8 July 1892 in Holland, Michigan.
(From Michigan Pioneer and Historical Society, volume XXII, 1894)

Alfred's older brother, Cody (4 September 1916 to 1 February 2007) was the Spokesman for the District of Columbia Transit Authority for 21 years, from 1961 to 1982.

