> ... aggregation? If that is the problem, then how to force it to do
> aggregation after receiving each portion of data from Workers?
>
> Best regards, Alexander

-----Original Message-----
From: DB Tsai [mailto:dbt...@dbtsai.com]
Sent: Friday, January 23, 2015 11:53 AM
To: Ulanov, Alexander
Cc: dev@spark.apache.org
Subject: Re: Maximum size of vector that reduce can handle
Hi Alexander,

When you use `reduce` to aggregate the vectors, those will actually be
pulled into the driver and merged there. Obviously, that's not
scalable given you are doing deep neural networks, which have so many
coefficients.

Please try treeReduce instead, which is what we do in linear regression.
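
To make the difference concrete, here is a minimal, hypothetical Scala
sketch (not from the original thread) of the pattern DB Tsai describes:
summing large per-partition gradient vectors with treeReduce instead of
reduce. The vector size, partition count, and depth are illustrative
assumptions; treeReduce is available directly on RDD in Spark 1.3+
(earlier via org.apache.spark.mllib.rdd.RDDFunctions).

import org.apache.spark.{SparkConf, SparkContext}

object TreeReduceSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("treeReduce-sketch").setMaster("local[4]"))

    // Each of the 32 partitions produces one large partial-gradient vector.
    val dim = 1 << 20  // hypothetical: ~1M coefficients per vector
    val partials = sc.parallelize(0 until 32, 32).map(_ => Array.fill(dim)(1.0))

    // Element-wise vector sum; associative and commutative, as reduce requires.
    def add(a: Array[Double], b: Array[Double]): Array[Double] = {
      val out = new Array[Double](a.length)
      var i = 0
      while (i < a.length) { out(i) = a(i) + b(i); i += 1 }
      out
    }

    // reduce would ship all 32 partial vectors to the driver and merge them
    // there:  val summed = partials.reduce(add)
    // treeReduce merges them on the executors in a multi-level tree first,
    // so the driver only receives a few already-combined vectors.
    val summed = partials.treeReduce(add, depth = 2)

    println(s"summed(0) = ${summed(0)}")  // prints 32.0
    sc.stop()
  }
}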