Why not use PyMC or R:
- http://code.google.com/p/pymc/
- http://cran.r-project.org/web/views/Bayesian.html
On Fri, Apr 6, 2012 at 9:41 AM, Shankar Satish wrote:
> Hello everyone,
>
> I was supposed to prepare a proposal for Bayesian networks in sklearn.
> However, as i researched the details f
numpy and the parallelization gained from libblas + liblapack.
Curious if anyone can recommend an efficient SIS example?
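To show the shape of thing i mean, here's a toy sequential importance sampling loop in plain numpy (a sketch -- the random-walk model and noise scales are made up for illustration; resampling kicks in when the weights degenerate, which strictly makes it SIS-with-resampling):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-space model (made up for illustration):
# x_t = x_{t-1} + N(0, 1),  y_t = x_t + N(0, 0.5)
T, n_particles = 50, 1000
true_x = np.cumsum(rng.normal(size=T))
obs = true_x + rng.normal(scale=0.5, size=T)

particles = rng.normal(size=n_particles)           # proposal: prior at t=0
weights = np.full(n_particles, 1.0 / n_particles)

for y in obs:
    # propagate particles through the transition (bootstrap proposal)
    particles += rng.normal(size=n_particles)
    # reweight by the observation likelihood
    weights *= np.exp(-0.5 * ((y - particles) / 0.5) ** 2)
    weights /= weights.sum()
    # resample when the effective sample size degenerates
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights.fill(1.0 / n_particles)

estimate = np.sum(weights * particles)  # posterior mean of the final state
```

Everything here is vectorized per time step, so the heavy lifting stays in numpy/BLAS rather than Python loops over particles.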
Much, much appreciated,
Timmy Wilson
Cleveland, OH
ude in the gist a minimal python script which
> reproduces the error from the included data, that would be very helpful.
> Jake
>
> Timmy Wilson wrote:
>> Thanks Jake
>>
>> here are the files -- https://gist.github.com/1453617
>>
>> here's the c
can take a look and see if I can figure out what the problem is.
> Alternatively, you could just push the plot there, but it would be
> harder for me to experiment with. Thanks!
> Jake
>
> Timmy Wilson wrote:
>> This was rejected because attachments were > 40K
>>
What's the preferred way to pass random/tmp files?
-- Forwarded message --
From: Timmy Wilson
Date: Fri, Dec 9, 2011 at 8:57 AM
Subject: Re: [Scikit-learn-general] RuntimeError: Factor is exactly singular
To: scikit-learn
otion, viscosity, and springlike attractive and
repulsive forces.
"
http://www.mitpressjournals.org/doi/abs/10.1162/106454603321489509
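A minimal version of the kind of layout the quote describes -- springlike attraction along edges, pairwise repulsion, and viscosity damping the motion -- might look like this (a sketch; the graph, constants, and update rule are all mine, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny undirected graph: edge list over 5 nodes (made-up example)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
pos = rng.normal(size=(5, 2))          # random initial 2-D layout
velocity = np.zeros_like(pos)

k = 0.5         # ideal spring length
damping = 0.85  # "viscosity": bleeds kinetic energy off each step

for _ in range(200):
    force = np.zeros_like(pos)
    # springlike attraction along edges (Hooke-style pull toward length k)
    for i, j in edges:
        delta = pos[j] - pos[i]
        d = np.linalg.norm(delta) + 1e-9
        f = (d - k) * delta / d
        force[i] += f
        force[j] -= f
    # pairwise repulsion between all nodes
    diff = pos[:, None, :] - pos[None, :, :]
    dist2 = (diff ** 2).sum(-1) + 1e-9
    np.fill_diagonal(dist2, np.inf)
    force += (k ** 2 * diff / dist2[..., None]).sum(axis=1)
    # integrate with damping so the layout settles instead of oscillating
    velocity = damping * (velocity + 0.01 * force)
    pos += velocity
```

The system converges when the spring and repulsive forces balance, i.e. when the "energy" stops decreasing -- which is what connects this to the energy-based embedding papers.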
On Fri, Dec 9, 2011 at 9:36 AM, Timmy Wilson wrote:
I first ran into energy-based learning when studying neural nets.
Recently i found a few promising papers/examples that focus on energy
based graph embedding.
I'm curious what the community thinks of this brand of learning.
The guys @ Gephi published a nice overview of force/energy
'standard' + 'modified' both work fine
'hessian' + 'ltsa' both have issues
ltsa is printing:
RuntimeWarning: Diagonal number 2 is exactly zero. Singular matrix.
and then setting everything to -nan
i tried adding random noise and increasing n_neighbors -- but no dice
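For what it's worth, all four methods can be sanity-checked against dense synthetic data (a sketch using sklearn's swiss roll rather than my sparse graph data; the parameter choices are mine -- note 'hessian' needs n_neighbors > n_components * (n_components + 3) / 2):

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=500, random_state=0)

results = {}
for method in ("standard", "modified", "hessian", "ltsa"):
    lle = LocallyLinearEmbedding(
        n_neighbors=12,    # > 5, so 'hessian' has enough neighbors
        n_components=2,
        method=method,
        random_state=0,
    )
    Y = lle.fit_transform(X)
    # record whether the embedding came back finite (no -nan blowup)
    results[method] = bool(np.isfinite(Y).all())
```

If 'hessian' and 'ltsa' run cleanly here but not on the twitter graph, that points at the data (near-duplicate rows / rank-deficient local neighborhoods) rather than the solvers.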
Fabian's suggestion is sti
I get the following error when running Hessian-based LLE:

/home/timmyt/projects/smarttypes/smarttypes/scripts/reduce_twitter_graph.py in ()
     32 print "Passed our little test: following %s users!" % len(tmp_followies)
     33
---> 34 results = reduce_graph(adjacency_matrix, follow
t it. So here
we go.
"
Joe's in Berkeley -- maybe he'll join us ;]
On Wed, Dec 7, 2011 at 9:28 AM, Olivier Grisel wrote:
> 2011/12/7 Timmy Wilson :
I would love to sit in, and learn, and contribute where i can.
Probably won't have time for this during the sprint -- but i want to
throw it out there:
The importance of locality in many manifold learning algos makes them good
candidates for distribution.
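Concretely, the neighborhood step is embarrassingly parallel: each chunk of points only needs its own local neighbors. A sketch with joblib (brute-force k-NN; the chunking scheme and names are mine):

```python
import numpy as np
from joblib import Parallel, delayed

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))

def knn_chunk(chunk, X, k):
    # brute-force squared distances for one chunk of query rows
    d = ((chunk[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # nearest index is the point itself (distance 0), so drop column 0
    return np.argsort(d, axis=1)[:, 1:k + 1]

chunks = np.array_split(X, 4)
parts = Parallel(n_jobs=2)(delayed(knn_chunk)(c, X, 5) for c in chunks)
neighbors = np.vstack(parts)   # (1000, 5) neighbor indices, row i -> X[i]
```

The per-chunk work never touches the other chunks' outputs, which is what makes the locality-heavy algos natural to spread across workers.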
On Wed, Dec 7, 2011 at 3:22 AM, Olivier Grise
Awesome!
Thank you David -- backproppy looks nice + simple -- exactly what i
needed to experiment/learn with.
On Mon, Nov 28, 2011 at 1:22 PM, David Warde-Farley
wrote:
> On Mon, Nov 28, 2011 at 06:42:03PM +0100, Andreas Müller wrote:
>
>> I think it should be pretty straightforward, replacing
Thanks Guys!
> This is neither a Deep Belief Network nor a stack
> of RBMs, just a regular feed forward neural network
> that has a particularly well chosen set of initial weights.
Agreed. This is what i'm imagining.
Assuming good results, i'm sure i'll want to move to a GPU implementation.
In
rt -- running gradient
descent backpropagation on the weights established by step 1.
Has anyone tried this, or something similar?
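The two-step recipe is easy to sketch in plain numpy -- take some "pretrained" initial weights, then run gradient-descent backprop from there (a toy sketch: the initial weights are just random stand-ins for RBM-trained ones, and XOR is a placeholder task):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend step 1 (the unsupervised pretraining) already produced these
# weights -- here they're random matrices standing in for the real thing.
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)

# XOR: the classic task a single layer can't solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(p):
    # mean cross-entropy
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

lr = 1.0
loss_before = loss(sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2))

for _ in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the mean cross-entropy
    d_out = (p - y) / len(X)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
    # step 2: plain gradient descent from the "pretrained" weights
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

loss_after = loss(sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2))
```

With real pretraining, the only change is where W1/W2 come from -- the fine-tuning loop is the same.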
I found a library that uses MDP to do something similar --
http://organic.elis.ugent.be/node/270 -- but i'd like to do it all w/
Edwin's cod