Re: [Numpy-discussion] numpy.random.randn

2018-03-13 Thread Marko Asplund
On Fri, 9 Mar 2018 11:38:55, Robert Kern wrote:
> > Sorry for being a bit inaccurate.
> > My Scala code actually mirrors the NumPy based random initialization, so
> > I sample with Gaussian of mean = 0 and std dev = 1, then multiply with 0.01.
> > Have you verified this? I.e. save out the Scala-in

Re: [Numpy-discussion] numpy.random.randn

2018-03-09 Thread Robert Kern
On Thu, Mar 8, 2018 at 12:44 PM, Marko Asplund wrote:
>
> On Wed, 7 Mar 2018 13:14:36, Robert Kern wrote:
> >
> > With NumPy I'm simply using the following random initialization code:
> >
> > np.random.randn(n_h, n_x) * 0.01
> >
> > I'm trying to emulate the same behaviour in my Scala code

Re: [Numpy-discussion] numpy.random.randn

2018-03-08 Thread Marko Asplund
On Wed, 7 Mar 2018 13:14:36, Robert Kern wrote:
> > With NumPy I'm simply using the following random initialization code:
> >
> > np.random.randn(n_h, n_x) * 0.01
> >
> > I'm trying to emulate the same behaviour in my Scala code by sampling from a
> > Gaussian distribution with mean = 0 and std
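As an aside for readers, the effect of the `* 0.01` scaling in the quoted snippet is easy to check numerically: it turns standard-normal draws into draws with standard deviation 0.01, which is what a Scala implementation sampling N(0, 1) and multiplying by 0.01 should also produce statistically. A minimal sketch (the layer dimensions `n_h`, `n_x` here are placeholders, not from the thread):

```python
import numpy as np

# Hypothetical layer dimensions; the thread does not state the real ones.
n_h, n_x = 200, 300

np.random.seed(0)  # fix the seed so the check is repeatable
W = np.random.randn(n_h, n_x) * 0.01  # N(0, 1) samples scaled to std dev 0.01

print(W.shape)  # (200, 300)
print(W.mean(), W.std())  # mean close to 0, std dev close to 0.01
```

Matching these summary statistics across the two codebases verifies the distributions agree, but not that individual draws do, since the underlying generators differ.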

Re: [Numpy-discussion] numpy.random.randn

2018-03-07 Thread Robert Kern
On Wed, Mar 7, 2018 at 1:10 PM, Marko Asplund wrote:
>
> However, the results look very different when using random initialization.
> With respect to exact cost this is of course expected, but what I find troublesome
> is that after N training iterations the cost starts approaching zero with the Num

Re: [Numpy-discussion] numpy.random.randn

2018-03-07 Thread Marko Asplund
On Tue, 6 Mar 2018 12:52:14, Robert Kern wrote:
> I would just recommend using one of the codebases to initialize the
> network, save the network out to disk, and load up the initialized network
> in each of the different codebases for training. That way you are sure that
> they are both starting f
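The quoted suggestion — initialize once, save to disk, load everywhere — can be sketched on the NumPy side with a plain-text format that other codebases can also parse. The file name and CSV layout below are illustrative assumptions, not from the thread:

```python
import numpy as np
import os
import tempfile

# Initialize example parameters in one codebase (here, NumPy).
params = np.random.randn(4, 3) * 0.01

# Save in a plain-text format that both a NumPy and a Scala codebase can read.
path = os.path.join(tempfile.mkdtemp(), "weights.csv")
np.savetxt(path, params, delimiter=",")

# Any implementation that loads this file starts from identical parameters.
loaded = np.loadtxt(path, delimiter=",")
assert np.allclose(params, loaded)
```

With both implementations training from the same saved parameters, any remaining divergence in the cost curves must come from the training code itself rather than from initialization.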

Re: [Numpy-discussion] numpy.random.randn

2018-03-06 Thread Robert Kern
On Tue, Mar 6, 2018 at 1:39 AM, Marko Asplund wrote:
>
> I've some neural network code in NumPy that I'd like to compare with a Scala based implementation.
> My problem is currently random initialization of the neural net parameters.
> I'd like to be able to get the same results from both implemen
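Worth noting for readers of this thread: seeding makes runs reproducible within NumPy itself, but a seed does not carry across libraries, because different generators (e.g. NumPy's Mersenne Twister vs. the JVM's default generator) produce different streams even from the same seed value. A minimal sketch of the within-NumPy guarantee:

```python
import numpy as np

np.random.seed(1234)              # fixed seed
a = np.random.randn(2, 2) * 0.01

np.random.seed(1234)              # same seed again
b = np.random.randn(2, 2) * 0.01

# Identical seeds give bit-identical draws within NumPy,
# but would not match a Scala generator seeded the same way.
assert (a == b).all()
```

This is why cross-language comparisons need either matching generator implementations or shared initialization data, rather than just shared seeds.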