We are seeing lots of stability problems with Spark 2.1.1 as a result of
dropped events. We disabled the event log, which seemed to help, but many
events are still being dropped, as in the example log below.
Is there any way for me to see which listener is backing up the queue? Is
there any
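For context, Spark's listener bus posts events to a bounded in-memory queue (sized by `spark.scheduler.listenerbus.eventqueue.size` in the 2.x line); when listeners drain it more slowly than events arrive, new events are dropped rather than blocking the scheduler. A minimal Python analogue of that drop behavior (not Spark's actual code, just the mechanism):

```python
import queue

# A bounded queue standing in for Spark's listener-bus event queue.
# The capacity here is illustrative; Spark's default is much larger.
CAPACITY = 3
bus = queue.Queue(maxsize=CAPACITY)

dropped = 0
for event in ["taskStart", "taskEnd", "stageSubmitted", "stageCompleted", "jobEnd"]:
    try:
        # Posting never blocks the scheduler thread...
        bus.put_nowait(event)
    except queue.Full:
        # ...so once a slow listener lets the queue fill up,
        # later events are simply discarded.
        dropped += 1

print(dropped)  # 2 of the 5 events are dropped once the queue of 3 is full
```

This is why a single slow listener (the event-log writer, a UI listener, or a custom one) can cause drops for everything downstream.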
, they don't get reported.
>
> I might be mistaken, if somebody has a good explanation, would also like
> to hear.
>
> On Fri, May 19, 2017 at 5:45 PM, Miles Crawford <mil...@allenai.org>
> wrote:
>
>> Hey y'all,
>>
>> Trying to migrate from Spark 1.6.1 to
Could I be experiencing the same thing?
https://www.dropbox.com/s/egtj1056qeudswj/sparkwut.png?dl=0
On Wed, Nov 16, 2016 at 10:37 AM, Shreya Agarwal
wrote:
> I think that is a bug. I have seen that a lot especially with long running
> jobs where Spark skips a lot of
shows crazy output:
https://www.dropbox.com/s/egtj1056qeudswj/sparkwut.png?dl=0
The applications seem to complete successfully, but I was wondering if
anyone has an idea of what might be going wrong?
Thanks,
-Miles
s to the ML libraries.
Thanks,
-miles
Instead of reading *.jhist files directly in Spark, you could convert your
.jhist files into JSON and then read the JSON files in Spark.
Here's a post on converting .jhist file to json format.
http://stackoverflow.com/questions/32683907/converting-jhist-files-to-json-format
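Once converted, each record is plain JSON, so you can sanity-check a file with the standard `json` module before pointing `spark.read.json` at it (which expects one JSON object per line). The field names below are made up for illustration; inspect your converter's actual output to see what it emits:

```python
import json

# One JSON object per line, as spark.read.json expects.
# jobId / submitTime / status are hypothetical field names.
lines = [
    '{"jobId": "job_1", "submitTime": 1460000000, "status": "SUCCEEDED"}',
    '{"jobId": "job_2", "submitTime": 1460000100, "status": "FAILED"}',
]

records = [json.loads(line) for line in lines]
failed = [r["jobId"] for r in records if r["status"] == "FAILED"]
print(failed)  # ['job_2']
```

If each line parses cleanly here, Spark should be able to infer a schema from the same file.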
It is completed apps that are not showing up. I'm fine with incomplete apps
not appearing.
On Tue, Apr 12, 2016 at 6:43 AM, Steve Loughran <ste...@hortonworks.com>
wrote:
>
> On 12 Apr 2016, at 00:21, Miles Crawford <mil...@allenai.org> wrote:
>
> Hey there. I have my s
.
The problem is that the history server doesn't seem to notice new logs
arriving in the S3 bucket. Any idea how I can get it to scan the folder
for new files?
Thanks,
-miles
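For what it's worth, the history server re-scans the configured log directory on a timer rather than watching for changes, and the polling interval is configurable in spark-defaults.conf. A sketch of the relevant settings (the bucket path is a placeholder, and S3 listing delays can still postpone when new files show up):

```
# How often the history server re-lists the log directory (default 10s)
spark.history.fs.update.interval   30s
# Where completed application logs are read from
spark.history.fs.logDirectory      s3a://my-bucket/spark-logs
```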