The "ignoring the call if it can't be decoded" IS the attempt to assure that 
the protocol works. That allows subsequent transmissions to follow the initial 
transmission. If this didn't occur, then there would be a lot more "dropped 
transmissions" It's only when the two stations have different routing in which 
it becomes a big issue. And in general, subsequent transmissions should all 
have the same routing, just maybe a different source callsign.
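
To make that trade-off concrete, here's a rough Python sketch of the behavior 
I mean. The class, function names, and header format are all invented for 
illustration -- this is not Icom's actual firmware logic:

    from typing import Optional

    def try_decode_header(raw: bytes) -> Optional[dict]:
        # Hypothetical stand-in for the real FEC decode + checksum check.
        # Here a "good" header is just an intact ASCII string "MY|UR|RPT1".
        try:
            my, ur, rpt1 = raw.decode("ascii").split("|")
            return {"my": my, "ur": ur, "rpt1": rpt1}
        except (UnicodeDecodeError, ValueError):
            return None

    class RepeaterController:
        # Cache the last good header; reuse it when a new one won't decode.
        def __init__(self) -> None:
            self.last_good_header: Optional[dict] = None

        def on_transmission_start(self, raw: bytes) -> Optional[dict]:
            header = try_decode_header(raw)
            if header is not None:
                self.last_good_header = header  # remember good routing
                return header
            # Undecodable header: fall back to the previous routing rather
            # than dropping the call. Right when the routing is unchanged;
            # a misroute when the two stations wanted different paths.
            return self.last_good_header

The fallback keeps subsequent overs flowing; the misroute case is exactly the 
one above where the two stations have different routing.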

So as much as you think that it's broken, I think that Icom did the right thing.

From: dstar_digital@yahoogroups.com [mailto:dstar_digi...@yahoogroups.com] On 
Behalf Of Nate Duehr
Sent: Friday, October 16, 2009 7:49 PM
To: dstar_digital@yahoogroups.com
Subject: Re: [DSTAR_DIGITAL] Re: Beeps


Okay, interesting. Please review and note that I never said a receiving station 
gets a CORRUPTED callsign.  The result is actually that the receiving station 
gets *no callsign* at all.  Then the firmware in the controller or the software 
in the GW was programmed in such a way as to treat that missing header as if 
the prior transmission had never unkeyed, EVEN THOUGH there was a 
non-corrupted, perfectly copyable END to the prior transmission.

(D-STAR does end-of-transmission correctly, by the way: transmissions don't 
just stop, there's a defined "I'm done transmitting" pattern. But if the 
overlying network ignores that data, or it isn't passed far enough up the 
stack to where things like the GW have visibility into it, it's wasted.)
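
For what it's worth, honoring that marker up the stack doesn't take much. A 
minimal Python sketch, where END_PATTERN is a placeholder constant (the real 
bit sequence is in the spec) and the rest is invented:

    END_PATTERN = b"\x55\x55\xc8\x7a"  # placeholder only, NOT the spec value

    class RxStream:
        # If the end flag was copied cleanly, close the stream, so a later
        # frame with no decodable header starts a NEW stream instead of
        # being glued onto the old one -- the self-recovery that's missing.
        def __init__(self) -> None:
            self.in_stream = False

        def on_frame(self, frame: bytes, header_ok: bool) -> str:
            if frame.endswith(END_PATTERN):
                self.in_stream = False
                return "stream-closed"
            if not self.in_stream:
                self.in_stream = True
                return "new-stream" if header_ok else "new-stream-unroutable"
            return "continuation"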

The result is the same: the system breaks and has no way to self-recover.

Streaming protocols that include routing information in the stream 
(source-routed) need to try a little harder to get the required information 
through to the far end, especially over a physical medium as unpredictable as 
RF. That, or they need to throw out the ENTIRE transmission and try again. 
(Which, of course, would be completely annoying and unworkable for people used 
to analog FM.)

Basically, I'm saying this isn't Ethernet/copper wire this data's passing 
over, and a protocol designed for a noisy medium can't just send the 
header/routing information once, or it's bound to be a very "brittle" 
protocol, as most protocol engineers would put it. It is. Very brittle. There 
are plenty of protocols that do work in heinous amounts of physical-layer 
noise... D-STAR's not one of them.

That's not a good/bad judgment against D-STAR; it just "is"...



--

  Nate Duehr, WY0X

  n...@natetech.com

On Fri, 16 Oct 2009 22:32 +0000, "Jonathan Naylor" <naylo...@yahoo.com> wrote:


Nate,

If you look at the D-STAR protocol you'll see in AP2 (in the file shogen.pdf) 
the details of the FEC applied to the header, doubling its length to 660 bits. 
It's a convolutional code, as opposed to the concatenated block codes used by 
AMBE.
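
For anyone who hasn't opened shogen.pdf: a rate-1/2 convolutional encoder of 
the general kind described there looks roughly like this in Python. The (7,5) 
octal generator pair is a common textbook choice, assumed here purely for 
illustration -- check the spec for the actual polynomials:

    def conv_encode_r12(bits, g1=0b111, g2=0b101):
        # Rate-1/2, constraint-length-3 convolutional encoder. The (7,5)
        # octal generators are assumed for illustration, not from the spec.
        state = 0
        out = []
        for b in bits:
            state = ((state << 1) | b) & 0b111          # newest bit in LSB
            out.append(bin(state & g1).count("1") & 1)  # parity tap for g1
            out.append(bin(state & g2).count("1") & 1)  # parity tap for g2
        return out

    # Two output bits per input bit: 330 header bits -> 660 coded bits.
    assert len(conv_encode_r12([0] * 330)) == 660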

The checksum in the header is then used to ensure that the header FEC has done 
its job. The header data is also repeated in the slow data throughout the 
transmission, without FEC but with the checksum. There is no excuse for a 
receiving station to get a corrupted callsign, bar bad programming.
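
That repetition means a receiver that lost the FEC-protected header can keep 
reassembling the slow-data copy and accept it once the checksum passes. A 
rough Python sketch, assuming a CRC-16-CCITT-style checksum over 39 header 
bytes with the 2 checksum bytes appended -- the exact CRC variant, byte order, 
and chunk framing are defined in the spec, so treat those details below as 
assumptions:

    from typing import Optional

    def crc16_ccitt(data: bytes) -> int:
        # Assumed CRC flavor (poly 0x1021, init 0xFFFF); the exact variant
        # D-STAR uses is defined in the spec.
        crc = 0xFFFF
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
                crc &= 0xFFFF
        return crc

    def header_from_slow_data(chunks) -> Optional[bytes]:
        # Reassemble the 41-byte header (39 data + 2 checksum bytes) from
        # slow-data chunks; accept it only when the checksum matches.
        raw = b"".join(chunks)
        if len(raw) < 41:
            return None  # not enough of the repeated header seen yet
        data, rx_crc = raw[:39], int.from_bytes(raw[39:41], "little")
        return data if crc16_ccitt(data) == rx_crc else None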

Jonathan G4KLX
