I have both SPARK-2878 and SPARK-2893.
@rxin With the fixes, I could run it fine on top of branch-1.0.
On master, when running on YARN, I am getting another KryoException:
Exception in thread "main" org.apache.spark.SparkException: Job aborted due
to stage failure: Task 247 in stage 52.0 failed 4 times, most recent
failure: Lost task
I am still a bit confused about why this issue did not show up in 0.9... at
that time there was no spark-submit and the context was constructed with
low-level calls...
The Kryo registrator for ALS was always in my application code...
Was this bug introduced in 1.0, or was it always there?
Here: https://github.com/apache/spark/pull/1948
On Thu, Aug 14, 2014 at 5:45 PM, Debasish Das wrote:
> Is there a fix that I can test? I have the flows set up for both
> standalone and YARN runs...
>
> Thanks.
> Deb
>
> On Thu, Aug 14, 2014 at 10:59 AM, Reynold Xin wrote:
>
>> Yes, I understand it might not work for custom serializer, but that is a
>> much less common path.
Is there a fix that I can test? I have the flows set up for both standalone
and YARN runs...
Thanks.
Deb
On Thu, Aug 14, 2014 at 10:59 AM, Reynold Xin wrote:
> Yes, I understand it might not work for custom serializer, but that is a
> much less common path.
>
> Basically I want a quick fix for 1.1 release (which is coming up soon).
Yes, I understand it might not work for custom serializer, but that is a
much less common path.
Basically I want a quick fix for 1.1 release (which is coming up soon). I
would not be comfortable making big changes to class path late into the
release cycle. We can do that for 1.2.
That should work, but would you also make these changes to the
JavaSerializer? The API of the two is the same, so you can select one or the
other (or, in theory, a custom serializer). This also wouldn't address the
problem of shipping custom *serializers* (not Kryo registrators) in user
jars.
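For reference, a minimal sketch of the shared serializer API under
discussion (MyCustomSerializer is hypothetical; Serializer and
SerializerInstance are Spark's):

    import org.apache.spark.serializer.{Serializer, SerializerInstance}

    // Hypothetical skeleton: JavaSerializer and KryoSerializer both extend
    // this same Serializer API, so a class-loader fix applied to one should
    // be mirrored in the other (and would also help custom serializers).
    class MyCustomSerializer extends Serializer {
      override def newInstance(): SerializerInstance = {
        // A real implementation would return instances that (de)serialize
        // using a class loader that can see the application jars.
        ???
      }
    }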
Graham,
SparkEnv only creates a KryoSerializer, but as I understand it, that
serializer doesn't actually initialize the registrator, since that only
happens when newKryo() is called during KryoSerializerInstance
initialization.
Basically I'm thinking a quick fix for 1.2:
1. Add a classLoader field to the serializer
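A rough sketch of that quick-fix idea (the class and field names here are
assumptions for illustration, not the actual patch):

    import com.esotericsoftware.kryo.Kryo
    import org.apache.spark.serializer.KryoRegistrator

    // Sketch: the serializer carries a settable class loader; the executor
    // sets it once the user jars have been downloaded, and newKryo()
    // consults it when loading and running the custom registrator.
    class KryoSerializerSketch(registratorClassName: Option[String]) {
      @volatile var defaultClassLoader: Option[ClassLoader] = None

      def newKryo(): Kryo = {
        val kryo = new Kryo()
        val loader = defaultClassLoader
          .getOrElse(Thread.currentThread.getContextClassLoader)
        kryo.setClassLoader(loader)
        registratorClassName.foreach { name =>
          // Load the registrator through the user-visible class loader.
          Class.forName(name, true, loader)
            .newInstance().asInstanceOf[KryoRegistrator]
            .registerClasses(kryo)
        }
        kryo
      }
    }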
In part, my assertion was based on a comment by sryza on my PR (
https://github.com/apache/spark/pull/1890#issuecomment-51805750); however, I
thought I had also seen it in the YARN code base. Now that I look for it,
though, I can't find where this happens, so perhaps I was imagining the
YARN behaviour.
By the way, I have seen this same problem while deploying 1.1.0-SNAPSHOT on
YARN as well...
So it is a problem common to both standalone and YARN deployments...
On Thu, Aug 14, 2014 at 12:53 AM, Graham Dennis wrote:

> Hi Reynold,
>
> That would solve this specific issue, but you'd need to be careful that
> you never created a serialiser instance before the first task is received.
Hi Reynold,
That would solve this specific issue, but you'd need to be careful that you
never created a serialiser instance before the first task is received.
Currently in Executor.TaskRunner.run a closure serialiser instance is
created before any application jars are downloaded, but that could b
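To illustrate the ordering hazard (the names below are stand-ins, not the
actual Executor source):

    // Self-contained sketch: if a serialiser instance is created before the
    // application jars are fetched, any Kryo it configured was built by a
    // class loader that cannot yet see the custom registrator.
    object OrderingHazardSketch {
      def newClosureSerializerInstance(): AnyRef = new AnyRef // stand-in

      def updateDependencies(jars: Seq[String]): Unit = {
        // In Spark this is where user jars are downloaded and the task
        // thread's class loader is updated to include them.
      }

      def run(taskJars: Seq[String]): Unit = {
        val closureSer = newClosureSerializerInstance() // created too early
        updateDependencies(taskJars)                    // jars arrive after
        // closureSer (and any Kryo behind it) missed the registrator.
      }
    }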
Graham,
Thanks for working on this. This is an important bug to fix.
I don't have the whole context, and obviously I haven't spent nearly as much
time on this as you have, but I'm wondering: what if we always pass the
executor's ClassLoader to the Kryo serializer? Will that solve this problem?
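A minimal illustration of why the choice of loader matters (the registrator
name is hypothetical):

    // Class.forName with no explicit loader uses the caller's defining
    // loader; on an executor thread that loader may not include the
    // downloaded user jars, so the registrator is only found when the
    // user-visible loader is passed explicitly.
    val registratorName = "com.example.MyRegistrator" // hypothetical
    val userLoader: ClassLoader = Thread.currentThread.getContextClassLoader
    val clazz = Class.forName(registratorName, true, userLoader)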
Hi Deb,
The only alternative serialiser is the JavaSerialiser (the default).
Theoretically Spark supports custom serialisers, but due to a related
issue, custom serialisers currently can't live in application jars and must
be available to all executors at launch. My PR fixes this issue as well,
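For example (the class name and jar path are hypothetical; the config keys
are standard Spark), the workaround today is to put the serialiser's jar on
the executors' launch classpath rather than shipping it with the application:

    import org.apache.spark.SparkConf

    // The custom serialiser class must already be visible when executors
    // start; spark.serializer alone is not enough if the class only lives
    // in the application jar.
    val conf = new SparkConf()
      .set("spark.serializer", "com.example.MyCustomSerializer")           // hypothetical
      .set("spark.executor.extraClassPath", "/opt/libs/my-serializer.jar") // hypothetical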
Sorry I just saw Graham's email after sending my previous email about this
bug...
I have been seeing this same issue on our ALS runs last week, but I thought
it was due to my hacky way of running the mllib 1.1 snapshot on core 1.0...
What's the status of this PR? Will this fix be back-ported to 1.0.1 as well?
I now have a complete pull request for this issue that I'd like to get
reviewed and committed. The PR is available here:
https://github.com/apache/spark/pull/1890 and includes a testcase for the
issue I described. I've also submitted a related PR (
https://github.com/apache/spark/pull/1827) that
I've submitted a work-in-progress pull request for this issue that I'd like
feedback on. See https://github.com/apache/spark/pull/1890 . I've also
submitted a pull request for the related issue that exceptions hit when
trying to use a custom Kryo registrator are being swallowed:
https://github.com/apache/spark/pull/1827
See my comment on https://issues.apache.org/jira/browse/SPARK-2878 for the
full stacktrace, but it's in the BlockManager/BlockManagerWorker where it's
trying to fulfil a "getBlock" request for another node. The objects that
would be in the block haven't yet been serialised, and that then causes th
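A small, self-contained illustration of the thread/class-loader aspect (a
generic Java pool, not Spark's actual connection manager code):

    import java.util.concurrent.Executors

    // Threads capture their context class loader when they are created. A
    // pool spun up at executor startup, before user jars are added, cannot
    // later resolve classes that exist only in those jars.
    val pool = Executors.newFixedThreadPool(1)
    pool.submit(new Runnable {
      override def run(): Unit =
        println(Thread.currentThread.getContextClassLoader)
    })
    pool.shutdown()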
I don't think it was a conscious design decision to not include the
application classes in the connection manager serializer. We should fix
that. Where is it deserializing data in that thread?
Option 4 might make sense in the long run, but it adds a lot of complexity
to the code base (whole separate code
Hi Spark devs,
I've posted an issue on JIRA (
https://issues.apache.org/jira/browse/SPARK-2878), which occurs when using
Kryo serialisation with a custom Kryo registrator to register custom
classes with Kryo. This is an insidious issue that non-deterministically
causes Kryo to have different ID numbers
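For context, the registrator involved looks roughly like this (the class and
registered types are illustrative); Kryo assigns numeric IDs in registration
order, so if the registrator silently fails to run on some executors, the
same class can get different IDs on different nodes:

    import com.esotericsoftware.kryo.Kryo
    import org.apache.spark.serializer.KryoRegistrator

    // Illustrative registrator, enabled via spark.kryo.registrator:
    // registration order determines each class's numeric Kryo ID, which
    // must match across executors for serialised data to round-trip.
    class MyRegistrator extends KryoRegistrator {
      override def registerClasses(kryo: Kryo): Unit = {
        kryo.register(classOf[Array[Double]])
        kryo.register(classOf[Array[Int]])
      }
    }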