LinkedHashMap?
Should I file a JIRA for it?
Much credit should be given to Martin Grotzke from EsotericSoftware/kryo
who helped me tremendously.
Best,
Rahul Palamuttam
On Fri, Aug 26, 2016 at 10:16 AM, Rahul Palamuttam <rahulpala...@gmail.com>
wrote:
> Thanks Renato.
>
> I found that after deserialization the firstEntry variable is null.
The iterator requires the firstEntry variable to walk the LinkedHashMap
https://github.com/scala/scala/blob/v2.11.8/src/library/scala/collection/mutable/LinkedHashMap.scala#L94-L100
I wonder why these two variables were made transient.
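A minimal sketch of one possible workaround (illustrative only, not something
from this thread) is to let Kryo fall back to Java serialization for
LinkedHashMap, since the class's own readObject hook is what rebuilds the
transient entry chain:

import com.esotericsoftware.kryo.Kryo
import com.esotericsoftware.kryo.serializers.JavaSerializer
import scala.collection.mutable.LinkedHashMap

val kryo = new Kryo()
// Kryo's default FieldSerializer skips the writeObject/readObject hooks,
// so the transient firstEntry/lastEntry fields stay null after
// deserialization and iteration silently yields nothing. JavaSerializer
// delegates to the class's Java serialization hooks, which relink the
// entries.
kryo.register(classOf[LinkedHashMap[_, _]], new JavaSerializer())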
Best,
Rahul Palamuttam
On Thu, Aug 25, 2016 at 11:13 PM, Renato
at org.apache.spark.scheduler.TaskResultGetter$$anon$2.run(TaskResultGetter.scala:50)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I hope this is a known issue and/or I'm missing something important in my
setup.
Hi Sudhanshu,
Try user-unsubscribe@spark.apache.org
- Rahul P
Sent from my iPhone
> On Aug 21, 2016, at 9:19 AM, Sudhanshu Janghel
> wrote:
>
> Hello,
>
> I wish to unsubscribe from the channel.
>
> KIND REGARDS,
> SUDHANSHU
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I hope this is a known issue and/or I'm missing something important in my
setup.
Appreciate any help or advice!
Best,
Rahul Palamuttam
Hi All,
We recently came across this issue when using the spark-shell and Zeppelin.
If we assign the SparkContext variable (sc) to a new variable and then
reference another variable in an RDD lambda expression, we get a "task not
serializable" exception.
The following three lines of code illustrate this:
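(The snippet itself was cut off in the archive; below is a minimal sketch of
the pattern described, with illustrative variable names, as typed into the
spark-shell.)

val newSC = sc    // alias the SparkContext to a new variable
val num = 10      // any other variable referenced in the lambda
newSC.parallelize(1 to 100).map(_ + num).collect()
// => org.apache.spark.SparkException: Task not serializable

The closure over num pulls in the REPL's line-wrapper objects, and that chain
now also holds newSC, a non-transient reference to the non-serializable
SparkContext.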
I also came across a JIRA which I thought was interesting with respect to our
discussion:
https://issues.apache.org/jira/browse/SPARK-10399
There's also a suggestion at the bottom of the JIRA about exposing on-heap
memory, which is pretty interesting.
- Rahul Palamuttam
On Wed, Sep 9, 2015 at 4:52 AM
or in general software architecture.
With Python in particular, the results of a collect operation must first be
written to disk and then read back by the Python driver process.
Would appreciate any insight on this, and if there is any work happening in
this area.
Thank you,
Rahul Palamuttam
.jar
Thanks
Best Regards
On Tue, Jul 28, 2015 at 12:08 AM, Rahul Palamuttam rahulpala...@gmail.com
wrote:
Hi All,
I hope this is the right place to post troubleshooting questions.
I've been following the install instructions and I get the following error
when running the following from the Spark home directory.
Hi All,
I was wondering why the recommended number for parallelism is 2-3 times
the number of cores on your cluster.
Is the heuristic explained in any of the Spark papers? Or is it more of an
agreed upon rule of thumb?
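For concreteness, here's how I'm applying it (a hedged sketch only; the
cluster size and app name are made up):

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical cluster: 4 executors x 4 cores = 16 total cores,
// so set default parallelism to roughly 3x that.
val totalCores = 16
val conf = new SparkConf()
  .setAppName("ParallelismRuleOfThumb") // illustrative name
  .set("spark.default.parallelism", (totalCores * 3).toString)
val sc = new SparkContext(conf) // master supplied via spark-submit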
Thanks,
Rahul P
On Mon, Jul 27, 2015 at 11:48 AM, Rahul Palamuttam rahulpala...@gmail.com
wrote:
All nodes are using Java 8.
I've tried to mimic the environments as much as possible among all nodes.
On Mon, Jul 27, 2015 at 11:44 AM, Ted Yu yuzhih...@gmail.com wrote:
bq. on one node it works but on the other it gives
Can you compare the environments on the two nodes?
Does the other node use Java 8?
Cheers
On Mon, Jul 27, 2015 at 11:38 AM, Rahul Palamuttam rahulpala...@gmail.com
wrote:
Hi All,
I hope this is the right place to post troubleshooting questions.
I've been following the install instructions and I get the following error.
Hi All,
I hope this is the right place to post troubleshooting questions.
I've been following the install instructions and I get the following error
when running the following from the Spark home directory:
$ ./build/sbt
Using /usr/java/jdk1.8.0_20/ as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.