The definition of Spread Spectrum in 97.3(c)8 rests on the phrase "using
bandwidth-expansion modulation emissions". This clearly lacks the technical
precision required:

- for digital mode developers to know what techniques can and cannot be
incorporated in modes used by US stations (e.g. pseudo-random coding, as
Alan points out below)

- for US digital mode users to determine if and on what frequencies an
accurately-documented mode can be used

A constructive response to the ROS debacle would be to propose improved
language for 97.3(c)8 that is clear and unambiguous. Assuming the proposed
definition does not increase the likelihood of causing harmful interference
or permit encrypted communications (concerns implicit in 97.311), the FCC
would likely welcome a change that improves our ability to abide by the
regulations without consuming their scarce resources.

    73,

        Dave, AA6YQ



-----Original Message-----
From: digitalradio@yahoogroups.com [mailto:digitalra...@yahoogroups.com] on
Behalf Of Alan Barrow
Sent: Tuesday, July 13, 2010 1:22 PM
To: digitalradio@yahoogroups.com
Subject: Re: [digitalradio] Re: Random data vs Spread Spectrum



graham787 wrote:
> So, if bits are added to the transmit waveform that are not performing a
> function of helping to re-create an error-free replication of the input
> data, it meets my test as spread spectrum. If the symbols in the transmit
> waveform cannot be predicted by the previous sequence of bits over time at
> the input, it also would meet my test as spread spectrum. To reiterate on
> this point, just because the symbols of the transmit waveform are changing
> during an unchanging input does not imply spread spectrum.
>
> Instead, they may well be the result of a defined randomizer process
> followed by multiple layers of FEC and modulation coding.
>

While I do not support ROS in any form, I think the group is on a very
slippery slope here with well-intentioned but misinformed definitions &
tests that may haunt us in the future!

Just the fact that data is randomized does not define SS. There has to
be a spreading factor, which has some rough definitions based on
practical applications but is not addressed in any FCC definition.

Skip's well-intentioned but overly simplistic test of looking at the bit
stream is not enough to define SS. There are many legitimate reasons to
code data in ways that produce a pseudo-random-looking bit stream that
have nothing to do with SS!

The most common is coding so the transitions between bits can easily be
detected even in noise. Long runs of identical bits are a problem.
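As a sketch of that point (my own illustration, not any particular mode's scrambler; the x^7 + x^4 + 1 polynomial is just an assumption for the example), a simple additive scrambler makes the transmitted bit stream look pseudo-random without expanding the bandwidth at all:

```python
# Additive (synchronous) scrambler sketch: XOR the data with an LFSR
# pseudo-random sequence so long runs of identical bits are broken up.
# The polynomial x^7 + x^4 + 1 is an assumption for illustration.
# Note the output bit rate equals the input bit rate: randomizing adds
# NO bandwidth, so this is not spread spectrum.

def lfsr_sequence(n, state=0b1111111):
    """Generate n pseudo-random bits from a 7-bit LFSR (taps at 7 and 4)."""
    out = []
    for _ in range(n):
        bit = ((state >> 6) ^ (state >> 3)) & 1   # x^7 + x^4 + 1 taps
        state = ((state << 1) | bit) & 0x7F
        out.append(bit)
    return out

def scramble(bits):
    # Same operation scrambles and descrambles (XOR is its own inverse).
    return [b ^ p for b, p in zip(bits, lfsr_sequence(len(bits)))]

data = [0] * 16                 # unchanging input...
tx = scramble(data)             # ...still looks pseudo-random on the air
assert scramble(tx) == data     # the same LFSR at the receiver recovers it
assert len(tx) == len(data)     # same bit count: no spreading at all
```

The takeaway: a receiver-friendly pseudo-random waveform, with zero bandwidth expansion.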

You can also factor in FEC. There are many, many writeups on
convolutional encoding that go into this. (Viterbi & Reed-Solomon are in
wide usage.)

But it's also useful to spread the energy out across the occupied
bandwidth and avoid sidebands created by single tones of long duration.
There are multiple modems/modes which do this, some in very wide usage.

So yes, SS (really DSSS) is pseudo-random. But not all pseudo-random
coding is SS, and we should not be proposing that as a litmus test!

The real test should be:
- direct or BPSK modulation via a pseudo-random code, in addition to any
coding for FEC (convolutional, etc.)
- a spreading factor significantly higher than the original data rate
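To make that two-part test concrete, here is a toy DSSS spreader. The 16-chip code and fixed seed are illustrative assumptions, not any real PN generator; the point is that each data bit becomes N chips, so the chip rate, and hence the bandwidth, is N times the data rate.

```python
# Toy DSSS spreader (illustrative only): each data bit is XORed against an
# N-chip pseudo-random code, so the chip rate (and the occupied bandwidth)
# is N times the data rate. The 16-chip code and fixed seed are assumptions
# for the sketch, not any real PN generator.
import random

SPREAD = 16                                     # spreading factor N
rng = random.Random(1)                          # fixed seed: both ends share the code
PN = [rng.randint(0, 1) for _ in range(SPREAD)]

def dsss_spread(bits):
    return [b ^ c for b in bits for c in PN]    # N chips out per data bit in

def dsss_despread(chips):
    bits = []
    for i in range(0, len(chips), SPREAD):
        # correlate: count how many chips agree with the local PN code
        agree = sum(1 for c, p in zip(chips[i:i + SPREAD], PN) if c == p)
        bits.append(0 if agree >= SPREAD // 2 else 1)
    return bits

data = [1, 0, 1, 1, 0]
chips = dsss_spread(data)
assert len(chips) == SPREAD * len(data)         # bandwidth expanded 16x
assert dsss_despread(chips) == data             # correlation recovers the data
```

Contrast this with the scrambler and FEC cases: only here does the transmitted rate exceed the data rate by a deliberate spreading factor.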

The second item is the key part: it appears in nearly all formal SS
definitions, yet it is virtually never quoted in this group, and it is
not addressed in the FCC Part 97 rules at all.

It's not enough that the bandwidth is higher than the data rate would
imply; nearly all modes with FEC would be caught by that test by definition.

The key is the "significantly wider" aspect, also referred to in
ITU/IEEE definitions as "typically orders of magnitude greater than the
data rate". This is why many engineers question whether any SSB-generated
mode could be "real" SS. ROS only managed it by keeping the original
data rate well below the SSB bandwidth.

The lowest spreading factors in commercial DSSS implementations are
about 16:1, and that's for consumer-grade gear without noise-performance
concerns.

Most real-world DSSS implementations use spreading factors of 100:1 or
greater, as that's when you start seeing significant noise-recovery
improvements.

In DSSS, the "processing gain" which improves noise resilience is
directly related to the spreading factor.
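Concretely, processing gain in dB is 10*log10(spreading factor). A quick check against the factors mentioned above:

```python
# Processing gain of DSSS is tied directly to the spreading factor:
# Gp(dB) = 10 * log10(N).
import math

def processing_gain_db(spreading_factor):
    return 10 * math.log10(spreading_factor)

print(round(processing_gain_db(16), 1))    # 12.0 dB  (low-end consumer DSSS)
print(round(processing_gain_db(100), 1))   # 20.0 dB  (typical real-world DSSS)
```

This is why the jump from 16:1 to 100:1 matters: it buys another 8 dB of noise margin.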

I've posted multiple definitions from the ITU & IEEE in the past for
DSSS. Wikipedia, which has some good information, does not constitute a
formal definition the way the ITU & IEEE references do. (That is part of
the reason Wikipedia is not admissible as a source for college & research
papers.)

There is no shortage of formal definitions; we should not have to invent
our own. There are also some very readable explanations from manufacturers
of DSSS components, like this one:
< http://www.maxim-ic.com/app-notes/index.mvp/id/1890 >

So ROS (RIP) is very odd in this respect: its spreading factor is
nowhere near that of conventional DSSS implementations, yet it is higher
than the spreading produced by FEC & convolutional encoding. This is a
constraint of the AFSK/SSB encoding, but it does pose some questions as
to how the mode should be treated.

In all the discussion of SS, bandwidth, etc., everyone is missing the
point that the wider bandwidth usage of DSSS is offset by the use of
CDMA (code-division multiple access). DSSS is nearly always used with
many stations on the same "channel" with the same key. It's no accident
that cellular went from analog techniques to DSSS: it maximizes use of
the available spectrum!

So the idea of ROS having multiple net frequencies is just silly; all
ROS stations should be using the same frequency! For that matter, so
should most of our advanced modes, including WINMOR, ALE, etc. And we
have to factor in that multiple stations could and should be sharing the
same spectrum when examining the bandwidth of DSSS.

Set aside all the unprofessional behavior by the pro- and anti-ROS
contingents...

I believe ROS as implemented did not offer enough processing gain to
justify usage on crowded bands like 40m. But I think we hams lost an
opportunity to experiment with a promising new mode, given the way the
ARRL/FCC interactions took place.

But with a higher spreading factor, used on a dedicated frequency
allocation or in a strong-signal section of a higher band (10, 12, 15m?),
it might have shown some promise. Given the nature of HF and FM, you
could have run weak-signal ROS with a higher spreading factor in the FM
section of 10m with no impact on FM operations. (Just one example of how
we could have experimented.)

And certainly I'm opposed to some of the well-intentioned but
significantly misinformed "tests" and definitions being proposed. We
may be locking US hams out of the next great mode.

As far as I'm concerned, this whole ROS episode is an embarrassment to
ham radio, and a textbook case of how not to introduce a new mode,
interact with the FCC, etc.

As I've had visibility into the process of getting FCC approval for new
modes in two different cases, I can guarantee this episode set us back in
the eyes of the FCC. That is the fault of the author, as well as of
well-intentioned individuals out on a crusade.

Have fun:

Alan
km4ba



