Johan,

Replies inline below.

On Aug 3, 3:34 am, "Johan S. R. Nielsen" <j.s.r.niel...@mat.dtu.dk>
wrote:
> ...
> I guess that Ensembles as you describe them could be implemented
> completely on top of the object hierarchy I am suggesting. E.g. an
> XCodesEnsemble would have a constructor for specifying exactly the
> ensemble (sort of like a sub-sub-family?) of XCodes; so it would have
> a function get_random_code which returns an XCode object. This could
> easily be standardised to such an extent as to allow generic testing
> for ensembles.

That was exactly my thought.  My current LDPCEnsemble class has both a
random_element() method which returns a parity check matrix, and a
random_graph() method.  Both should be suitable for generic testing.
I would modify random_element() to return an instance of one of your
Code subclasses.
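To make the idea concrete, here is a minimal sketch of what such an ensemble might look like. All names (LDPCEnsemble, LDPCCode, the constructor parameters) are hypothetical stand-ins, not the actual Sage API:

```python
import random

class LDPCCode:
    """Hypothetical Code subclass wrapping a parity check matrix."""
    def __init__(self, H):
        self.H = H  # parity check matrix as a list of rows over GF(2)

    def length(self):
        return len(self.H[0])

class LDPCEnsemble:
    """Sketch of an ensemble: a distribution over LDPC codes of length n
    with a fixed number of checks and fixed row weight."""
    def __init__(self, n, num_checks, row_weight):
        self.n = n
        self.num_checks = num_checks
        self.row_weight = row_weight

    def random_element(self):
        """Draw a random parity check matrix and wrap it in a Code object,
        so generic tests can treat the result like any other code."""
        H = []
        for _ in range(self.num_checks):
            row = [0] * self.n
            for j in random.sample(range(self.n), self.row_weight):
                row[j] = 1
            H.append(row)
        return LDPCCode(H)
```

A generic test harness would then only need `random_element()` to return something satisfying the Code interface.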

> > The class of codes I am working with (LDPC codes) has an efficient
> > decoding algorithm (belief propagation, O(n)), but encoding is O(n^2)
> > in general.  And computing the generator matrix for encoding is non-
> > trivial as well.  There are also sometimes multiple encoding
> > algorithms.  So in this case, at least, the complexity of encoding and
> > decoding are reversed.
>
> I wasn't aware of this at all; come to think of it, I should have
> added a disclaimer in my first post, that most everything I wrote
> would violate some of the generalities of the field ;-)

No problem.  I thought your summary was quite good.  I should have
confined my corrections to items pertinent to the interface. :-)

> I guess this calls for the encoding part supporting almost the same
> structure as decoding. I think, then, that the part about having to
> implement the decoder as a class in some package coding.decoders (and
> by extension, also the encoders) should be optional. It seems very
> superfluous to have a ReedSolomonEncoder class, but maybe not an
> LDPCEncoder (and several of the latter), but then the other way
> around for decoders. Same as for decoders, the encode() function of
> the LDPCCode class could then take an optional argument for which
> algorithm to use for encoding, in case more than one was implemented.

I think having both encoder and decoder objects is a good idea.  Your
Code class could store callable 'encoder' and 'decoder' objects as
attributes.  The encode() and decode() functions could simply delegate
to these objects.  Passing 'encode_algorithm=X' and/or
'decode_algorithm=Y' to the constructor would select which encoder/
decoder object to instantiate for the corresponding attributes, with
appropriate defaults.  Since the encoder and decoder are full objects,
it's easy for them to defer any expensive initialization until first
use (since they can maintain state), and they can be swapped at
runtime.

This is a detailed suggestion, but hopefully a good one.
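Roughly, I have something like the following in mind. This is only a sketch with made-up names (RepetitionCode, NaiveEncoder, the '_encoders' registry); the point is the delegation and the lazy generator-matrix computation:

```python
class NaiveEncoder:
    """Generic encoder that multiplies the message by the generator
    matrix over GF(2), computing the matrix lazily on first use."""
    def __init__(self, code):
        self.code = code
        self._G = None  # expensive generator matrix, deferred

    def __call__(self, message):
        if self._G is None:                       # first use only
            self._G = self.code.generator_matrix()
        n = len(self._G[0])
        return [sum(m * row[j] for m, row in zip(message, self._G)) % 2
                for j in range(n)]

class RepetitionCode:
    """Toy Code subclass: encodes one bit by repeating it n times."""
    _encoders = {'naive': NaiveEncoder}  # hypothetical algorithm registry

    def __init__(self, n, encode_algorithm='naive'):
        self.n = n
        # The constructor argument selects which encoder object to attach;
        # since it is just an attribute, it can also be swapped at runtime.
        self.encoder = self._encoders[encode_algorithm](self)

    def generator_matrix(self):
        return [[1] * self.n]  # 1 x n generator matrix

    def encode(self, message):
        return self.encoder(message)  # plain delegation
```

Nothing here forces every code to ship a separate encoder class: a code with only one trivial encoding could just set `self.encoder` to a bound method.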

> > The received data are not in the same field for "soft-decision"
> > decoding.  This is very important for my applications.
>
> You're right - it also reminds me of the case with erasure decoding.
> My initial idea is to more or less ignore the matter and again
> utilise run-time types: for soft-decision decoding, you would use a
> decoder which expects a "codeword" of floats, right? So simply give
> that to it. The decoder function could do an initial type check and
> throw an exception if it was given a discrete codeword. Different
> testing functions would need to be implemented for soft-decision
> decoders, but it should still be straightforward. Maybe there is a
> problem here for soft-decision decoding over finite fields; how does
> that even work?

I know some people have done this, but I haven't looked into the
details.

> For erasures, the decode function could as its argument take a tuple,
> where the first element is the received word, and the second element
> is a list of booleans, the i'th element of which indicates whether
> the i'th element of the received word is significant. By the function
> still taking only one argument, it can fit into the argument standard
> of the decoder functions (whose first argument is the received word
> and optional second argument is the chosen decoder). This could
> extend to various generalisations of soft-decision decoding as well,
> I guess. In general, it can always be up to the specific decoder
> itself how to interpret the input; the end-user just needs to know,
> so he can give valid input and choose the appropriate testing
> function.

Or, similar to the general soft-decision case, you can use
log-likelihood ratios over the domain {+Inf, -Inf, 0}.  But LLRs don't
work for symbols over general fields.  I can't think of an "obvious",
consistent decoder interface at the moment.  If the decoders can be
swapped, the type/structure of the channel output should be part of
your API.
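For the binary case, the bridge between your (word, significance-flags) tuple and the LLR view is a one-liner. A hypothetical helper (the name and sign convention are mine, not anything settled):

```python
INF = float('inf')

def hard_word_to_llrs(word, significant):
    """Map a binary received word plus erasure flags to LLRs:
    bit 0 -> +Inf, bit 1 -> -Inf, erased position -> 0.
    (Sign convention assumed: positive LLR means 'more likely 0'.)"""
    return [0.0 if not sig else (INF if bit == 0 else -INF)
            for bit, sig in zip(word, significant)]
```

An erasure-aware decoder could accept either representation and normalise internally, which keeps both calling conventions valid without changing the one-argument decoder signature.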

> > Do be careful with the equality operator.  Sometimes it is assumed
> > that "==" means the same Python class as well as the same "data".
> > Since a code is *defined* by its mapping, I think I would support "=="
> > meaning code equivalence without respect to Python class, e.g. the
> > example you gave.  Someone can always test for class equality if
> > desired.
>
> As I mentioned, I think this thing raises some issues which can be
> deferred till later (such as whether or not codes containing the same
> codewords (but not necessarily giving the same mapping) are
> equivalent). Once the rest is settled, I'll gladly go into that
> discussion :-)

No problem.  Multiple notions of equality can be handled by methods.
I'm not enough of a purist to argue over the "right" definition of
equality for the operator.  But I can listen and provide feedback. :-)
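Just to illustrate what "handled by methods" could mean in practice, a toy sketch (class and method names are hypothetical) separating the '==' operator from a stricter notion:

```python
from itertools import product

class LinearCode:
    """Toy binary linear code from a generator matrix (list of rows)."""
    def __init__(self, generator_rows):
        self.G = generator_rows

    def codewords(self):
        """All codewords as a frozenset, by brute force over messages."""
        k, n = len(self.G), len(self.G[0])
        words = set()
        for msg in product([0, 1], repeat=k):
            words.add(tuple(sum(m * row[j] for m, row in zip(msg, self.G)) % 2
                            for j in range(n)))
        return frozenset(words)

    def __eq__(self, other):
        # '==' as equality of the codeword sets, ignoring Python class.
        return self.codewords() == other.codewords()

    def same_mapping(self, other):
        # Stricter notion exposed as a method: identical generator matrix,
        # hence identical message -> codeword mapping.
        return self.G == other.G
```

Two codes whose generator matrices differ only by a row swap then compare equal under '==' but not under same_mapping(), which matches the distinction you deferred.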

Good luck!

- Ryan
