Yes, I used this dataset:
http://www-etud.iro.umontreal.ca/~boulanni/icml2012

On Fri, Mar 20, 2015 at 12:33 PM, Matthew Taylor <[email protected]> wrote:

> Eric, how did you do this? Did you train a model on some type of
> music, then send in a few initial notes and ask for it to predict
> further?
> ---------
> Matt Taylor
> OS Community Flag-Bearer
> Numenta
>
>
> On Fri, Mar 20, 2015 at 8:23 AM, Eric Laukien <[email protected]>
> wrote:
> > I made HTFE generate music with MIDI. I encoded the notes by making the
> > input vector the size of the number of notes, setting an input to "1" if
> > the corresponding note was audible in that time frame.
> > This should work with NuPIC as well, since the inputs are binary.
> > I used this Python library to generate the MIDIs:
> > https://code.google.com/p/midiutil/
> >
> > I attached some music it generated :)
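The binary note encoding Eric describes (one input bit per pitch, set to "1" whenever that note is audible in the time frame) can be sketched in plain Python. The `(pitch, start, duration)` note format, the 128-pitch range, and the function name are illustrative assumptions, not taken from his actual code:

```python
# Sketch of the binary note encoding described above: one input bit per
# pitch, set to 1 whenever that note is audible in the time frame.
# The (pitch, start, duration) note format and the 128-pitch range are
# assumptions for illustration.

NUM_PITCHES = 128  # full MIDI pitch range


def encode_frame(notes, frame_start, frame_len):
    """Return a binary vector for one time frame.

    notes: iterable of (pitch, start, duration) tuples in beats.
    A bit is 1 if the note overlaps [frame_start, frame_start + frame_len).
    """
    frame = [0] * NUM_PITCHES
    frame_end = frame_start + frame_len
    for pitch, start, duration in notes:
        if start < frame_end and start + duration > frame_start:
            frame[pitch] = 1
    return frame


notes = [(60, 0.0, 1.0), (64, 0.5, 1.0)]  # C4, then an overlapping E4
print([p for p, v in enumerate(encode_frame(notes, 0.0, 0.5)) if v])  # → [60]
```

Since every input is 0 or 1, a vector like this can feed NuPIC or HTFE directly, as Eric notes; generating the output MIDI from predicted frames would then go through midiutil.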
> >
> >
> >
> > On Fri, Mar 20, 2015 at 9:52 AM, Matthew Taylor <[email protected]>
> > wrote:
> >>
> >> Well, the idea is to transform MIDI into NuPIC input. How we do that
> >> is the question. I've been thinking about this a lot lately, and I
> >> think the first step would be to try to take one MIDI track (one
> >> instrument in a MIDI song) and transform it into scalar values that we
> >> could feed into a NuPIC model. Ideally, we'd create a MIDI encoder,
> >> but I'm not sure if MIDI files are set up for streaming or not (I
> >> assume MIDI streams well).
> >>
> >> For instance, a MIDI song might have a bunch of different tracks for
> >> each instrument, and we could train models on each track. If I could
> >> get each track input into different NuPIC models, I might be able to
> >> identify when the song moves from verse to chorus and back, when any
> >> new refrain is introduced, when there is a key change or time change,
> >> etc.
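Matt's per-track idea — one MIDI track reduced to a stream of scalar values for a NuPIC model — could look roughly like this. Everything here is a hypothetical sketch: the event format, the monophonic-track assumption, and the "0 means rest" convention are all choices made for illustration:

```python
# Hypothetical sketch of the per-track idea above: turn one (monophonic)
# MIDI track into a stream of scalar values, one per fixed time step,
# suitable for a scalar encoder.  The (pitch, start, duration) event
# format and "0 means rest" are assumptions for illustration.


def track_to_scalars(events, step, total_steps):
    """events: list of (pitch, start, duration) tuples in beats.

    Returns one scalar per step: the pitch sounding in that step,
    or 0 if the track is silent there (monophonic assumption).
    """
    stream = []
    for i in range(total_steps):
        t = i * step
        value = 0
        for pitch, start, duration in events:
            if start <= t < start + duration:
                value = pitch
                break
        stream.append(value)
    return stream


melody = [(60, 0.0, 1.0), (62, 1.0, 1.0), (64, 2.0, 2.0)]
print(track_to_scalars(melody, 1.0, 4))  # → [60, 62, 64, 64]
```

With one such stream per instrument track, each could feed its own model, and changes in the models' prediction errors might hint at the verse/chorus or key-change boundaries Matt mentions.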
> >> ---------
> >> Matt Taylor
> >> OS Community Flag-Bearer
> >> Numenta
> >>
> >>
> >> On Fri, Mar 20, 2015 at 5:53 AM, Chris Albertson
> >> <[email protected]> wrote:
> >> > Use the language you know best, whatever that is.
> >> >
> >> > No, those two things are different. The first does MIDI and the second
> >> > does audio. MIDI does NOT contain any audio information. If they were
> >> > to interact, then you'd need something to produce sound from MIDI,
> >> > which we call a "virtual instrument".
> >> >
> >> > Sounds like you might be re-inventing wheels. What is the "big
> >> > picture"?
> >> >
> >> > On Fri, Mar 20, 2015 at 4:55 AM, Richard Crowder <[email protected]>
> >> > wrote:
> >> >>
> >> >> I have MIDI equipment and software, but I have not dealt with MIDI
> >> >> parsing for a long time. Which language, assuming Python?
> >> >>
> >> >> Out of interest, I had wondered whether
> >> >> https://github.com/abudaan/MIDIBridge could interact with
> >> >> https://github.com/bbcrd/peaks.js and JHTM?
> >> >>
> >> >> On Thu, Mar 12, 2015 at 3:11 PM, Matthew Taylor <[email protected]>
> >> >> wrote:
> >> >>>
> >> >>> Has anyone worked with MIDI before?
> >> >>>
> >> >>> http://www.midi.org/techspecs/midispec.php
> >> >>>
> >> >>> ---------
> >> >>> Matt Taylor
> >> >>> OS Community Flag-Bearer
> >> >>> Numenta
> >> >>>
> >> >>
> >> >
> >> >
> >> >
> >> > --
> >> >
> >> > Chris Albertson
> >> > Redondo Beach, California
> >>
> >
>
>
