There's also a Python implementation of Jones & Mewhort's (2007) BEAGLE
system (with orthographic representation inspired by Cox, Kachergis,
Recchia & Jones, 2011) that I coded recently:

https://github.com/mike-lawrence/wikiBEAGLE

I'm *fairly* sure I reproduced their reported methods accurately, but those
interested should double-check (it's pretty simple, code-wise).

The result is a real-valued vector that would still have to be converted
into an SDR for use with NuPIC, I presume. You can stack words into
sentences by the methods shown in wikiBEAGLEprobe.py to provide more
aggregated input, but that seems like it would duplicate the work the
NuPIC system itself is intended to do.
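For anyone wanting to experiment with that conversion step, here is a minimal sketch. The top-k sparsification heuristic, the `to_sdr` function name, and the `n_active` parameter are all my own assumptions for illustration; neither NuPIC nor wikiBEAGLE prescribes this exact conversion, and the word-vector superposition shown is just normalized summation, not necessarily the method in wikiBEAGLEprobe.py:

```python
import numpy as np

def to_sdr(vec, n_active=40):
    """Convert a dense real-valued vector to a binary SDR by keeping
    only the top-k components active (a common heuristic; hypothetical,
    not an official NuPIC encoder)."""
    sdr = np.zeros(len(vec), dtype=np.int8)
    sdr[np.argsort(vec)[-n_active:]] = 1
    return sdr

# Hypothetical example: superpose (sum) normalized word vectors into a
# sentence vector, then sparsify it into an SDR.
rng = np.random.default_rng(0)
dim = 1024
word_vecs = [rng.standard_normal(dim) for _ in range(3)]
sentence = sum(v / np.linalg.norm(v) for v in word_vecs)
sdr = to_sdr(sentence, n_active=40)
print(int(sdr.sum()))  # 40 active bits out of 1024
```

The 40/1024 sparsity here is only a placeholder; the right sparsity level for a given NuPIC region is a tuning decision.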

Mike


--
Mike Lawrence
Graduate Student
Department of Psychology & Neuroscience
Dalhousie University

~ Certainty is (possibly) folly ~


On Thu, Aug 15, 2013 at 6:18 AM, Marek Otahal <[email protected]> wrote:

> Thanks Ian,
> looking forward to reading more about it.
> Added it to the NLP wiki section:
> https://github.com/numenta/nupic/wiki/Natural-Language-Processing
>
> Cheers, Mark
>
>
> On Thu, Aug 15, 2013 at 6:06 AM, Ian Danforth <[email protected]> wrote:
>
>> https://code.google.com/p/word2vec/
>>
>> Perhaps most interestingly, they've released large pre-trained word
>> vector sets (look near the bottom).
>>
>> Ian
>>
>> _______________________________________________
>> nupic mailing list
>> [email protected]
>> http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
>>
>>
>
>
> --
> Marek Otahal :o)
>