Hi,
Any ideas what's going wrong or how to fix it? Do I have to downgrade to
0.9.x to be able to use Spark?
Best regards,
*Sampo Niskanen*
*Lead developer / Wellmo*
sampo.niska...@wellmo.com
+358 40 820 5291
On Fri, Oct 30, 2015 at 4:57 PM, Sampo Niskanen <sampo.ni
t anything useful.
I'm also facing another issue with loading a lot of data from MongoDB,
which might be related, but the error is different:
https://groups.google.com/forum/#!topic/mongodb-user/Knj406szd74
Any ideas?
to analyze
time-related elements.)
How can this be achieved?
> id or session id) that makes
> your algorithm parallel. In that case you can use the snippet above in a
> reduceByKey.
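(A hedged sketch of the per-key idea above: the record shape `(userId, timestamp, value)` and all names are hypothetical, and plain Scala collections stand in for the RDD so the snippet is self-contained; in Spark the `groupBy` below would be a `groupByKey` over the user/session id, with each key's events sorted and paired independently.)

```scala
// Hypothetical records: (userId, timestamp, value). Plain collections are
// used here; in Spark, groupBy(_._1) corresponds to groupByKey on userId.
val events = Seq(
  ("u1", 1, "A"), ("u2", 2, "X"), ("u1", 3, "B"), ("u1", 7, "C"), ("u2", 9, "Y")
)

val perUserPairs = events
  .groupBy(_._1)                                          // one group per user
  .map { case (user, evs) =>
    val ordered = evs.sortBy(_._2).map(e => (e._2, e._3)) // sort by timestamp
    // consecutive (previous, current) pairs within one user's timeline
    user -> ordered.zip(ordered.drop(1))
  }

// perUserPairs("u1") == Seq(((1,"A"), (3,"B")), ((3,"B"), (7,"C")))
```

Because each user's timeline is processed independently, the work parallelizes across keys even though each timeline is handled sequentially.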
>
> hope this helps
> -adrian
>
> Sent from my iPhone
>
> On 22 Oct 2015, at 09:36, Sampo Niskanen <sampo.niska...@wellmo.com>
> wrote:
>
> sc.stop()
> }
> }
>
> prints
>
> WrappedArray(((1,A),(3,B)), ((3,B),(7,C)), ((7,C),(8,D)), ((8,D),(9,E)))
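(The snippet that produced this output was cut off in the quote above; the following is a minimal local reconstruction of the pairing it shows, i.e. consecutive pairs of an ordered sequence. Over an actual RDD the equivalent operation is `sliding(2)` from mllib's `RDDFunctions`; here a plain `Seq` keeps the example self-contained.)

```scala
// Consecutive pairs of an ordered sequence, matching the printed output above.
val data = Seq((1, "A"), (3, "B"), (7, "C"), (8, "D"), (9, "E"))
val pairs = data.sliding(2).collect { case Seq(a, b) => (a, b) }.toList
println(pairs)
// List(((1,A),(3,B)), ((3,B),(7,C)), ((7,C),(8,D)), ((8,D),(9,E)))
```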
>
> Otherwise you could try to convert your RDD to a DataFrame then use windowing
> functions in SparkSQL with the LEAD/LAG functions.
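(In Spark SQL the LEAD/LAG route means a `Window` ordered by the time column plus `lag`/`lead` from `org.apache.spark.sql.functions`. Spark itself can't run in a short self-contained snippet, so the sketch below shows only the lag semantics over an ordered sequence in plain Scala; the `(time, value)` shape is illustrative.)

```scala
// lag(value, 1) over an ordered sequence: each row is paired with the
// previous row's value, and the first row gets None, mirroring LAG in SQL.
val rows = Seq((1, "A"), (3, "B"), (7, "C"))   // (time, value), already ordered
val lagged = rows
  .zip(None +: rows.map(r => Some(r._2)))      // shift values down by one
  .map { case ((t, v), prev) => (t, v, prev) }
// lagged == Seq((1,"A",None), (3,"B",Some("A")), (7,"C",Some("B")))
```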
Thanks.
On Fri, Feb 28, 2014 at 10:46 AM, Prashant Sharma <scrapco...@gmail.com> wrote:
You can enable debug logging for the repl, thankfully it uses Spark's logging
framework. Trouble must
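(For reference, since it isn't shown in the quote: Spark's logging is configured in `conf/log4j.properties`, and a minimal sketch of the line that raises the REPL logger to DEBUG, assuming the default log4j 1.2 setup of Spark 1.x, is:)

```properties
# conf/log4j.properties: raise the Spark REPL logger to DEBUG
log4j.logger.org.apache.spark.repl=DEBUG
```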