Hi,
Any ideas what's going wrong or how to fix it? Do I have to downgrade to
0.9.x to be able to use Spark?
Best regards,
*Sampo Niskanen*
*Lead developer / Wellmo*
sampo.niska...@wellmo.com
+358 40 820 5291
On Fri, Oct 30, 2015 at 4:57 PM, Sampo Niskanen wrote:
> Hi,
but couldn't make out anything useful.
I'm also facing another issue with loading a lot of data from MongoDB,
which might be related, but the error is different:
https://groups.google.com/forum/#!topic/mongodb-user/Knj406szd74
Any ideas?
>
> sc.stop()
> }
> }
>
> prints:
>
> WrappedArray(((1,A),(3,B)), ((3,B),(7,C)), ((7,C),(8,D)), ((8,D),(9,E)))
>
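The printed output above — pairing each element with its successor — can be reproduced with plain Scala collections (a local `Seq` rather than an RDD; a sketch standing in for the original Spark snippet, which is truncated here):

```scala
object ConsecutivePairs {
  def main(args: Array[String]): Unit = {
    // Sample data matching the output above: (timestamp, value) tuples.
    val data = Seq((1, "A"), (3, "B"), (7, "C"), (8, "D"), (9, "E"))
    // sliding(2) yields windows of two consecutive elements; map each window to a pair.
    val pairs = data.sliding(2).map { case Seq(a, b) => (a, b) }.toList
    println(pairs)
    // List(((1,A),(3,B)), ((3,B),(7,C)), ((7,C),(8,D)), ((8,D),(9,E)))
  }
}
```

On an actual RDD, `org.apache.spark.mllib.rdd.RDDFunctions` provides a similar `sliding(n)` operation, which avoids collecting the data to the driver.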
> Otherwise you could try to convert your RDD to a DataFrame, then use windowing
> functions in SparkSQL with
> or session id) that makes
> your algorithm parallel. In that case you can use the snippet above in a
> reduceByKey.
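The keyed approach suggested above can be sketched with plain collections (hypothetical `sessionId` keys, assumed for illustration; in Spark the grouping step would be a `groupByKey`/`reduceByKey` over the RDD, with the per-key pairing logic unchanged):

```scala
object PerKeyPairs {
  def main(args: Array[String]): Unit = {
    // Hypothetical events: (sessionId, timestamp) tuples.
    val events = Seq(("s1", 1), ("s1", 3), ("s2", 7), ("s2", 8), ("s2", 9))
    // Group by key, sort each group's timestamps, then pair consecutive
    // elements — the per-key logic that would run inside the keyed operation.
    val pairsPerKey = events
      .groupBy(_._1)
      .map { case (k, es) =>
        val ts = es.map(_._2).sorted
        k -> ts.sliding(2).collect { case Seq(a, b) => (a, b) }.toList
      }
    println(pairsPerKey)
  }
}
```

Because each key's pairs are computed independently, this is the part of the algorithm that parallelizes across the cluster.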
>
> hope this helps
> -adrian
>
> Sent from my iPhone
>
> On 22 Oct 2015, at 09:36, Sampo Niskanen wrote:
>
> Hi,
>
> I have anal
me other way to analyze
time-related elements.)
How can this be achieved?
(line 2528)
Thanks.
On Fri, Feb 28, 2014 at 10:46 AM, Prashant Sharma wrote:
> You can enable debug logging for the repl; thankfully it uses Spark's logging
> framework. Trouble must be with wrappers
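For reference, a minimal way to raise the REPL's log level, assuming the standard log4j setup Spark shipped with at the time (the exact logger name is an assumption and may differ between Spark versions):

```properties
# conf/log4j.properties — assumed location; raises REPL logging to DEBUG
log4j.logger.org.apache.spark.repl=DEBUG
```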
va:606)
at sbt.Run.invokeMain(Run.scala:68)
at sbt.Run.run0(Run.scala:61)
at sbt.Run.execute$1(Run.scala:50)
at sbt.Run$$anonfun$run$1.apply$mcV$sp(Run.scala:54)
at sbt.TrapExit$.executeMain$1(TrapExit.scala:33)
at sbt.TrapExit$$anon$1.run(TrapExit.scala:42)
Spark context available as sc.
ected within the standard spark-shell?
Thanks.