On Fri, 9 Mar 2018 11:38:55, Robert Kern wrote:
> > Sorry for being a bit inaccurate.
> > My Scala code actually mirrors the NumPy-based random initialization: I
> > sample from a Gaussian with mean = 0 and std dev = 1, then multiply by
> > 0.01.
>
> Have you verified this? I.e. save out the Scala-initialized network and
> load it into NumPy to compare?
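One way to run that check, as a sketch (the file name `scala_init.csv` and the
100x50 shape are placeholders; here NumPy stands in for the Scala export so the
example is runnable end to end):

```python
import numpy as np

# Stand-in for the file the Scala code would write: a matrix of
# Gaussian samples with mean 0 and std dev 1, scaled by 0.01.
np.random.seed(0)
np.savetxt("scala_init.csv", np.random.randn(100, 50) * 0.01, delimiter=",")

# Load the exported values and sanity-check their distribution.
w = np.loadtxt("scala_init.csv", delimiter=",")
print(abs(w.mean()) < 1e-3)        # mean should be close to 0
print(abs(w.std() - 0.01) < 1e-3)  # std dev should be close to 0.01
```

This only confirms the two initializers draw from the same distribution; to
rule out initialization entirely, compare training runs started from the same
saved values, as suggested below in the thread.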
On Thu, Mar 8, 2018 at 12:44 PM, Marko Asplund wrote:
>
> On Wed, 7 Mar 2018 13:14:36, Robert Kern wrote:
>
> > > With NumPy I'm simply using the following random initialization code:
> > >
> > > np.random.randn(n_h, n_x) * 0.01
> > >
> > > I'm trying to emulate the same behaviour in my Scala code
On Wed, 7 Mar 2018 13:14:36, Robert Kern wrote:
> > With NumPy I'm simply using the following random initialization code:
> >
> > np.random.randn(n_h, n_x) * 0.01
> >
> > I'm trying to emulate the same behaviour in my Scala code by sampling
> > from a Gaussian distribution with mean = 0 and std dev = 1, then
> > multiplying by 0.01.
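For reference, `np.random.randn(...) * 0.01` is statistically the same as
sampling directly from a Gaussian with mean 0 and std dev 0.01, which is the
distribution the Scala side should target. A quick NumPy check (the 4x3 shape
is arbitrary):

```python
import numpy as np

# Sample standard normals, then scale by 0.01.
np.random.seed(42)
a = np.random.randn(4, 3) * 0.01

# Sample directly from N(0, 0.01^2) with the same seed.
np.random.seed(42)
b = np.random.normal(loc=0.0, scale=0.01, size=(4, 3))

print(np.allclose(a, b))  # → True
```

Note that `scale` in `np.random.normal` is the standard deviation, not the
variance, so the Scala sampler should likewise be given std dev = 0.01 (or
std dev = 1 followed by the multiplication, as above).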
On Wed, Mar 7, 2018 at 1:10 PM, Marko Asplund wrote:
>
> However, the results look very different when using random initialization.
> With respect to exact cost this is of course expected, but what I find
> troublesome is that after N training iterations the cost starts
> approaching zero with the NumPy implementation, but not with the Scala
> one.
On Tue, 6 Mar 2018 12:52:14, Robert Kern wrote:
> I would just recommend using one of the codebases to initialize the
> network, save the network out to disk, and load up the initialized
> network in each of the different codebases for training. That way you
> are sure that they are both starting from the same parameters.
On Tue, Mar 6, 2018 at 1:39 AM, Marko Asplund wrote:
>
> I've some neural network code in NumPy that I'd like to compare with a
> Scala-based implementation.
> My problem is currently random initialization of the neural net
> parameters.
> I'd like to be able to get the same results from both implementations.