mutable.LinkedHashMap?
Should I file a JIRA for it?
Much credit should be given to Martin Grotzke from EsotericSoftware/kryo
who helped me tremendously.
Best,
Rahul Palamuttam
On Fri, Aug 26, 2016 at 10:16 AM, Rahul Palamuttam
wrote:
> Thanks Renato.
>
> I forgot to reply all last time.
After deserialization it is null.
The iterator requires the firstEntry variable to walk the LinkedHashMap:
https://github.com/scala/scala/blob/v2.11.8/src/library/scala/collection/mutable/LinkedHashMap.scala#L94-L100
I wonder why these two variables were made transient.
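For anyone hitting the same NullPointerException, a possible workaround is
sketched below. It assumes Kryo is the configured Spark serializer, and the
registrator class name is hypothetical: the idea is to route LinkedHashMap
through Java serialization so its custom serialization logic rebuilds the
transient fields on deserialization.

    import com.esotericsoftware.kryo.Kryo
    import com.esotericsoftware.kryo.serializers.JavaSerializer
    import org.apache.spark.serializer.KryoRegistrator
    import scala.collection.mutable.LinkedHashMap

    // Hypothetical registrator: fall back to Java serialization for
    // LinkedHashMap so firstEntry/lastEntry are restored, instead of
    // being left null by Kryo's field-by-field copying.
    class LinkedHashMapRegistrator extends KryoRegistrator {
      override def registerClasses(kryo: Kryo): Unit = {
        kryo.register(classOf[LinkedHashMap[_, _]], new JavaSerializer())
      }
    }

It would be enabled by pointing spark.kryo.registrator at the class above.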
Best,
Rahul Palamuttam
On Thu, Aug 25, 2016 a
at org.apache.spark.scheduler.TaskResultGetter$$anon$2.run(TaskResultGetter.scala:50)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I hope this is a known issue and/or I'm missing something important in my
setup.
Hi sudhanshu,
Try sending an email to user-unsubscribe@spark.apache.org
- Rahul P
Sent from my iPhone
> On Aug 21, 2016, at 9:19 AM, Sudhanshu Janghel
> wrote:
>
> Hello,
>
> I wish to unsubscribe from the channel.
>
> KIND REGARDS,
> SUDHANSHU
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I hope this is a known issue and/or I'm missing something important in my
setup.
Appreciate any help or advice!
Best,
Rahul Palamuttam
wrote:
> I can reproduce it in spark-shell, but it works for a batch job. Looks like
> a Spark REPL issue.
>
> On Thu, Mar 3, 2016 at 10:43 AM, Rahul Palamuttam
> wrote:
>
>> Hi All,
>>
>> We recently came across this issue when using the spark-shell and
>>
Hi All,
We recently came across this issue when using the spark-shell and Zeppelin.
If we assign the SparkContext variable (sc) to a new variable and reference
another variable in an RDD lambda expression, we get a "Task not serializable"
exception.
The following three lines of code illustrate this:
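The original snippet is cut off in this archive view; below is a minimal
sketch of the pattern described (the variable names are illustrative, not the
originals):

    val temp = 10   // any local variable referenced inside the closure
    val newSC = sc  // alias the SparkContext to a new variable
    val rdd = newSC.parallelize(0 to 100).map(_ + temp)  // throws "Task not serializable"

Referencing temp drags in the REPL line object, which also holds newSC, and a
SparkContext is not serializable; that is consistent with the reply above that
the same code works in a compiled batch job.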
Here is a recent JIRA which I thought was interesting with respect to our
discussion:
https://issues.apache.org/jira/browse/SPARK-10399
There's also a suggestion at the bottom of the JIRA about exposing on-heap
memory, which is pretty interesting.
- Rahul Palamuttam
On Wed, Se
;s in performance or in general software architecture.
With Python in particular, collect results must first be written to
disk and then read back by the Python driver process.
Would appreciate any insight on this, and if there is any work happening in
this area.
Thank you,
Rahul Palamuttam
ks
> Best Regards
>
> On Tue, Jul 28, 2015 at 12:08 AM, Rahul Palamuttam wrote:
>
>> Hi All,
>>
>> I hope this is the right place to post troubleshooting questions.
>> I've been following the install instructions and I get the following error
Hi All,
I was wondering why the recommended number for parallelism is 2-3 times
the number of cores in your cluster.
Is the heuristic explained in any of the Spark papers? Or is it more of an
agreed-upon rule of thumb?
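For context, a minimal sketch of how that heuristic is usually applied (the
core count is hypothetical, assuming a 16-core cluster, so 2-3x gives 32-48
partitions):

    import org.apache.spark.{SparkConf, SparkContext}

    // Illustrative only: set default parallelism to ~3x the assumed 16 cores.
    val conf = new SparkConf()
      .setAppName("parallelism-example")
      .set("spark.default.parallelism", "48")
    val sc = new SparkContext(conf)

    // Alternatively, set the partition count per-RDD at creation time.
    val rdd = sc.parallelize(1 to 1000000, numSlices = 48)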
Thanks,
Rahul P
On Mon, Jul 27, 2015 at 11:48 AM, Rahul Palamuttam
wrote:
> All nodes are using java 8.
> I've tried to mimic the environments as much as possible among all nodes.
>
>
> On Mon, Jul 27, 2015 at 11:44 AM, Ted Yu wrote:
>
>> bq. on one node it works but on the other it gives
> What are the environments on the two nodes?
> Does the other node use Java 8?
>
> Cheers
>
> On Mon, Jul 27, 2015 at 11:38 AM, Rahul Palamuttam wrote:
>
>> Hi All,
>>
>> I hope this is the right place to post troubleshooting questions.
>> I've been following the in
Hi All,
I hope this is the right place to post troubleshooting questions.
I've been following the install instructions and I get the following error
when running the following from the Spark home directory:
$ ./build/sbt
Using /usr/java/jdk1.8.0_20/ as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.