Re: [Elecraft] CW copy: Wayne's solution---------------WHY???

2009-01-23 Thread Terry Schieler
Jim,

I thought that my sarcasm would be obvious in my post below Wayne's.  I
probably should have included a :o).

73, Terry...W0FM

:o)  :o)  :o)  :o)  

-Original Message-
From: JIM DAVIS [mailto:nn...@astound.net] 
Sent: Thursday, January 22, 2009 5:49 PM
To: Terry Schieler; elecraft@mailman.qth.net
Subject: Re: [Elecraft] CW copy: Wayne's solution---WHY???

On Thu, 22 Jan 2009 15:04:11 -0600
  "Terry Schieler"  wrote:
> 
> Sounds good, Wayne.  When can you have it done?  Upper right hand button
> would be my choice.
> 
> 73 de Terry, W0FM


Why would anybody want to use a "CW decoder" in the first place???


Re: [Elecraft] CW copy: Wayne's solution---------------WHY???

2009-01-23 Thread Julian, G4ILO



JIM DAVIS wrote:
> 
> Why would anybody want to use a "CW decoder" in the first place???
> 
Perhaps for the same kind of reasons people use a DX Cluster instead of
tuning round the bands and listening.

CW will always have the unique advantage that it can be copied without
computer assistance. But it is still a digital mode, and if being able to
send and receive it by computer brings more people to the mode, I don't
think that's a bad thing.

-
Julian, G4ILO. K2 #392  K3 #222.
G4ILO's Shack: http://www.g4ilo.com/
Ham Directory: http://www.ham-directory.com/
KComm for Elecraft K2 and K3: http://www.g4ilo.com/kcomm.html



Re: [Elecraft] CW copy: Wayne's solution---------------WHY???

2009-01-22 Thread Simon (HB9DRV)
- Original Message - 
From: "JIM DAVIS" 

> Why would anybody want to use a "CW decoder" in the first place???

When you've received a QSL card from a deaf Danish ham, as I did in 1979, 
you'll realise why. This ham decoded by placing his 'listening' finger on 
the cone of the loudspeaker.

Simon Brown, HB9DRV
www.ham-radio-deluxe.com 



Re: [Elecraft] CW copy: Wayne's solution---------------WHY???

2009-01-22 Thread JIM DAVIS
On Thu, 22 Jan 2009 15:04:11 -0600
  "Terry Schieler"  wrote:
> Sounds good, Wayne.  When can you have it done?  Upper right hand button
> would be my choice.
> 
> 73 de Terry, W0FM

Why would anybody want to use a "CW decoder" in the first place???

I guess that we who can "mentally" copy CW in our BRAINS have an advantage
over those who really didn't APPLY THEMSELVES to accomplish what we did
(thousands WORLDWIDE!!!)

I'm not an elitist, nor are our other "Brethren" who can copy "Intl. Morse";
we just appreciate its VALUE, not only in past years, but currently as well!!!

Regards,

Jim/nn6ee



Re: [Elecraft] CW copy: Wayne's solution

2009-01-22 Thread Doug Faunt N6TQS +1-510-655-8604
This would have been an interesting project back when K6XN and I were
at Schlumberger's AI lab.

73, doug


   From: "Andrew Faber" 
   Date: Thu, 22 Jan 2009 13:26:42 -0800

   Wayne,
     This is great stuff, but a suggestion for Elecraft: make a K3 panadaptor
   and a KW automatic SO2R amp higher priorities!
     73, Andy, AE6Y


Re: [Elecraft] CW copy: Wayne's solution

2009-01-22 Thread Andrew Faber
Wayne,
  This is great stuff, but a suggestion for Elecraft: make a K3 panadaptor
and a KW automatic SO2R amp higher priorities!
  73, Andy, AE6Y
- Original Message - 
From: "Terry Schieler" 
To: "'wayne burdick'" ; "'Dan Romanchik KB6NU'" 

Cc: "'Elecraft Mailing List'" 
Sent: Thursday, January 22, 2009 1:04 PM
Subject: Re: [Elecraft] CW copy: Wayne's solution


> Sounds good, Wayne.  When can you have it done?  Upper right hand button
> would be my choice.
>
> 73 de Terry, W0FM



Re: [Elecraft] CW copy: Wayne's solution

2009-01-22 Thread Terry Schieler

Wayne Burdick wrote:

Humans use lexicographical and semantic clues to fill in dropped CW 
characters, and computers can do the same. But this goes way beyond the 
simple signal processing used in, say, the K3's present CW decoder or 
the one used in HRD. (I studied natural language recognition in college 
and was anxious to play with either neural networks or traditional AI 
methods as the foundation for CW decoding, but my other classes got in 
the way :)

One idea from the early days of AI is the so-called "blackboard" model. 
Imagine a garbled sentence on a blackboard, with various experts 
offering their opinions about what each letter and word is based on 
their specialized knowledge of word morphology, letter frequency, 
syntax, semantics, etc. You weigh these opinions based on degree of 
confidence, and once there's enough evidence for a letter or word, you 
fill it in, which in turn offers additional information to the 
highest-level expert, who might be considering the actual meaning of a 
phrase. His predictions can then strengthen the evidence for lower-level 
symbols, and so on. Such methods are very algorithm-intensive, 
but might be useful for some aspects of CW stream parsing.
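
To make the blackboard idea concrete, here is a minimal sketch in C (the
language Wayne asks for at the end). The two "experts," their confidence
numbers, the commit threshold, and the winner-take-all vote are all
invented for illustration; a real decoder would weigh many more knowledge
sources and would feed committed letters back to the higher-level experts.

#include <stdio.h>

#define SLOTS  16      /* letters on the "blackboard"         */
#define COMMIT 0.8f    /* confidence needed to fill a slot in */

typedef struct {
    const char *name;
    /* Propose a letter and a confidence (0..1) for slot i,
       given the partially filled board. */
    char (*propose)(const char *board, int i, float *conf);
} Expert;

/* Letter-frequency expert: always guesses 'E', but weakly. */
static char freq_expert(const char *board, int i, float *conf)
{
    (void)board; (void)i;
    *conf = 0.3f;
    return 'E';
}

/* Bigram expert: after a 'Q', guess 'U' with high confidence. */
static char bigram_expert(const char *board, int i, float *conf)
{
    if (i > 0 && board[i - 1] == 'Q') { *conf = 0.9f; return 'U'; }
    *conf = 0.0f;
    return '?';
}

int main(void)
{
    char board[SLOTS] = "CQ DE W0FM Q_ _";  /* '_' marks a dropped letter */
    Expert experts[] = { { "freq",   freq_expert   },
                         { "bigram", bigram_expert } };
    int nexperts = (int)(sizeof experts / sizeof experts[0]);

    for (int i = 0; board[i] != '\0'; i++) {
        if (board[i] != '_') continue;        /* already known */
        float best = 0.0f;
        char  guess = '_';
        for (int e = 0; e < nexperts; e++) {
            float c;
            char  g = experts[e].propose(board, i, &c);
            if (c > best) { best = c; guess = g; }
        }
        if (best >= COMMIT) board[i] = guess; /* enough evidence: commit */
    }
    printf("%s\n", board);   /* prints "CQ DE W0FM QU _" */
    return 0;
}

Note that the second '_' stays unfilled because no expert is confident
enough, which is the behaviour you want here: better a gap than a
confident wrong guess.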

A neural network could handle this, too, and has the advantage of 
self-organization. This is how I'd approach it (assuming unlimited free 
time--not!). You could use any of several different types of networks 
that have been proven successful at NLP (natural language processing).

For example, you might take the incoming CW, break it into samples (say 
a few samples per bit at the highest code speed to be processed), shift 
the serial data representing 5 to 20 letters into a serial-to-parallel 
shift register, then feed the parallel data to the network's inputs. Or 
you could use a network with internal feedback (memory), with just one 
input, which itself could be "fuzzy" (the analog voltage from an 
envelope detector) or digital (0 or 1 depending on the output of a 
comparator, looking at the CW stream). The output might be a parallel 
binary word, perhaps ASCII, or a single output with multiple levels, 
where the voltage itself represents a symbol.
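
A rough sketch of that front end, again in C: a comparator bit is shifted
through a software serial-to-parallel register sized for about ten letters
of context, and the whole window is presented to the network at once. The
sizing constants and the net_forward() stub are assumptions made for the
sake of a compilable example, not a worked-out design.

#include <stdint.h>
#include <stdio.h>

#define SAMPLES_PER_DIT 4      /* "a few samples per bit" at top speed */
#define CONTEXT_LETTERS 10     /* within the 5-20 letter range above   */
#define BITS_PER_LETTER 12     /* rough Morse average, a guess         */
#define WINDOW (CONTEXT_LETTERS * BITS_PER_LETTER * SAMPLES_PER_DIT)

static uint8_t window[WINDOW]; /* the serial-to-parallel "register" */

/* Stub for the trained network's forward pass; returns its best
   guess at the letter in the middle of the window. */
static char net_forward(const uint8_t in[WINDOW])
{
    (void)in;
    return '?';                /* real network goes here */
}

/* Call at the sample rate with the comparator output (0 or 1), or
   with a quantized envelope level for the "fuzzy" analog variant. */
static char on_sample(uint8_t key_down)
{
    /* Shift everything down one place and append the new sample... */
    for (int i = 0; i < WINDOW - 1; i++)
        window[i] = window[i + 1];
    window[WINDOW - 1] = key_down;

    /* ...then let the network see the whole parallel window. The
       target letter sits mid-window, so context exists on both sides. */
    return net_forward(window);
}

int main(void)
{
    /* Feed one fake dit: key down for SAMPLES_PER_DIT samples, then up. */
    char c = '?';
    for (int i = 0; i < 2 * SAMPLES_PER_DIT; i++)
        c = on_sample(i < SAMPLES_PER_DIT);
    printf("decoded: %c\n", c);
    return 0;
}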

To make this work, you need at least three things: an input 
representation that provides adequate context (e.g., if you want to 
decode a letter, the input should contain at least a few letters on 
either side of the target); a sufficiently complex network; and a large 
corpus of clean text with which to train the network (probably 
thousands of words, drawn from actual on-air content).
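
As a tiny illustration of preparing such a corpus, the sketch below slices
clean text into fixed windows whose centre letter is the training target.
It works at the letter level for brevity; in a real system each window
would be the keyed sample stream from the register above. The CONTEXT size
and the stand-in text are arbitrary.

#include <stdio.h>
#include <string.h>

#define CONTEXT 7   /* letters per window; the centre one is the target */

int main(void)
{
    const char *corpus = "CQ CQ DE N6KR N6KR K"; /* stand-in for real text */
    int n = (int)strlen(corpus);

    for (int i = 0; i + CONTEXT <= n; i++) {
        char window[CONTEXT + 1] = { 0 };
        memcpy(window, corpus + i, CONTEXT);
        char target = window[CONTEXT / 2];       /* letter to be learned */
        printf("input \"%s\"  ->  target '%c'\n", window, target);
    }
    return 0;
}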

One classic method of training the network involves placing known-good 
signals at the input, then comparing the desired outputs to the actual 
outputs, and "back-propagating" the resulting error through the 
network--from outputs to hidden layers to inputs--so that the network's 
nodes gradually acquire the proper "weights." Once the network has been 
trained to the point that it perfectly copies clean CW, you can then 
present it with a noisy signal stream. A well-designed network would be 
able to correct dropped CW elements or even letters if its internal 
representation is highly evolved. The network will have learned 
language-specific rules, and you don't have to know how it works, 
any more than you know how your own brain does it.
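
Here is a toy version of that training loop: one hidden layer with sigmoid
units and the classic delta-rule back-propagation. The network size,
learning rate, epoch count, and the two "clean" patterns are stand-ins; a
real CW network would train on the sample windows and letter codes
sketched above.

#include <math.h>
#include <stdio.h>

#define NI 4   /* input samples (tiny stand-in for a real window) */
#define NH 3   /* hidden units                                    */
#define NO 2   /* output bits (stand-in for an ASCII code)        */

static float wih[NH][NI], who[NO][NH];   /* the network's "weights" */

static float sigmoidf(float x) { return 1.0f / (1.0f + expf(-x)); }

int main(void)
{
    /* Two known-good input patterns and their desired outputs. */
    float in[2][NI]  = { { 1, 0, 1, 0 }, { 0, 1, 0, 1 } };
    float out[2][NO] = { { 1, 0 },       { 0, 1 }       };
    float rate = 0.5f;

    /* Small, deliberately asymmetric starting weights (a real net
       would use random initialization). */
    for (int h = 0; h < NH; h++)
        for (int i = 0; i < NI; i++) wih[h][i] = 0.1f * (float)(h + i - 1);
    for (int o = 0; o < NO; o++)
        for (int h = 0; h < NH; h++) who[o][h] = 0.1f * (float)(o + h - 1);

    for (int epoch = 0; epoch < 5000; epoch++) {
        for (int p = 0; p < 2; p++) {
            float hid[NH], y[NO], dy[NO], dh[NH];

            /* Forward pass: inputs -> hidden -> outputs. */
            for (int h = 0; h < NH; h++) {
                float s = 0;
                for (int i = 0; i < NI; i++) s += wih[h][i] * in[p][i];
                hid[h] = sigmoidf(s);
            }
            for (int o = 0; o < NO; o++) {
                float s = 0;
                for (int h = 0; h < NH; h++) s += who[o][h] * hid[h];
                y[o] = sigmoidf(s);
            }

            /* Back-propagate the error: outputs first, then hidden. */
            for (int o = 0; o < NO; o++)
                dy[o] = (out[p][o] - y[o]) * y[o] * (1.0f - y[o]);
            for (int h = 0; h < NH; h++) {
                float s = 0;
                for (int o = 0; o < NO; o++) s += dy[o] * who[o][h];
                dh[h] = s * hid[h] * (1.0f - hid[h]);
            }

            /* Nudge the weights toward the desired outputs. */
            for (int o = 0; o < NO; o++)
                for (int h = 0; h < NH; h++)
                    who[o][h] += rate * dy[o] * hid[h];
            for (int h = 0; h < NH; h++)
                for (int i = 0; i < NI; i++)
                    wih[h][i] += rate * dh[h] * in[p][i];
        }
    }

    /* Show what the trained net produces for each clean pattern. */
    for (int p = 0; p < 2; p++) {
        float hid[NH];
        printf("pattern %d ->", p);
        for (int h = 0; h < NH; h++) {
            float s = 0;
            for (int i = 0; i < NI; i++) s += wih[h][i] * in[p][i];
            hid[h] = sigmoidf(s);
        }
        for (int o = 0; o < NO; o++) {
            float s = 0;
            for (int h = 0; h < NH; h++) s += who[o][h] * hid[h];
            printf(" %.2f", (double)sigmoidf(s));
        }
        printf("\n");
    }
    return 0;
}

Once it copies the clean patterns, the same forward pass can be handed
noisy input, which is where the error-correcting behaviour described
above would (or wouldn't) show up.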

The actual implementation is left as an exercise for the reader. If you 
come up with an algorithm written in 'C', let me know and I'll try to 
port it to the K3's PIC.

Wayne
N6KR


Sounds good, Wayne.  When can you have it done?  Upper right hand button
would be my choice.

73 de Terry, W0FM




___
Elecraft mailing list
Post to: Elecraft@mailman.qth.net
You must be a subscriber to post to the list.
Subscriber Info (Addr. Change, sub, unsub etc.):
 http://mailman.qth.net/mailman/listinfo/elecraft

Help: http://mailman.qth.net/subscribers.htm
Elecraft web page: http://www.elecraft.com