experiences.
Thanks,
On Mon, Mar 28, 2016 at 10:40 PM, Spark Newbie
wrote:
> Hi All,
>
> The default value for spark.streaming.blockQueueSize is 10 in
> https://github.com/apache/spark/blob/branch-1.6/streaming/src/main/scala/org/apache/spark/streaming/receiver/BlockGenerator.scala.
> I am wondering whether it is safe to increase this value.
I suppose the main
consideration when increasing this size would be the memory allocated to
the executor. I haven't seen much documentation on this config, and any
advice on how to fine-tune it would be useful.
Thanks,
Spark newbie
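For anyone experimenting with this, a rough sketch of overriding the setting, assuming it is read from SparkConf like other spark.streaming.* properties (BlockGenerator reads it with a default of 10); the value 50 and the app name are placeholders, not recommendations:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Rough sketch, not a recommendation: spark.streaming.blockQueueSize is an
// undocumented setting read by BlockGenerator from the SparkConf (default 10),
// so it can be overridden like any other property. The value 50 is illustrative.
val conf = new SparkConf()
  .setAppName("block-queue-size-example")
  .set("spark.streaming.blockQueueSize", "50")

val ssc = new StreamingContext(conf, Seconds(1))

Each queued block holds roughly one block interval's worth of received data, so the extra memory needed on the receiver's executor should scale with ingestion rate x spark.streaming.blockInterval x queue size.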
Pinging again ...
On Wed, Nov 25, 2015 at 4:19 PM, Ted Yu wrote:
> Which Spark release are you using ?
>
> Please take a look at:
> https://issues.apache.org/jira/browse/SPARK-5594
>
> Cheers
>
> On Wed, Nov 25, 2015 at 3:59 PM, Spark Newbie
> wrote:
>
>>
Pinging again to see if anyone has any thoughts or prior experience with
this issue.
On Wed, Nov 25, 2015 at 3:56 PM, Spark Newbie
wrote:
> Hi Spark users,
>
> I have been seeing this issue where receivers enter a "stuck" state after
> they encounter the following exception: "Error in block pushing thread -
> java.util.concurrent.TimeoutException: Futures timed out".
Using Spark-1.4.1
On Wed, Nov 25, 2015 at 4:19 PM, Ted Yu wrote:
> Which Spark release are you using ?
>
> Please take a look at:
> https://issues.apache.org/jira/browse/SPARK-5594
>
> Cheers
>
> On Wed, Nov 25, 2015 at 3:59 PM, Spark Newbie
> wrote:
>
>>
Hi Spark users,
I'm seeing the below exceptions once in a while, which cause tasks to fail
(even after retries, so I think it is a non-recoverable exception); the stage
then fails and the job gets aborted.
Exception ---
java.io.IOException: org.apache.spark.SparkException: Failed to get
broadca
Hi Spark users,
I have been seeing this issue where receivers enter a "stuck" state after
they encounter the following exception: "Error in block pushing thread -
java.util.concurrent.TimeoutException: Futures timed out".
I am running the application on spark-1.4.1 and using kinesis-asl-1.4.
When
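For context, a rough sketch of documented receiver-side settings that are sometimes lowered when the block pushing thread cannot keep up; whether they help with this particular timeout is an assumption, and the values are placeholders:

import org.apache.spark.SparkConf

// Rough sketch of receiver-side settings that are sometimes tuned when the
// block pushing thread falls behind; values are placeholders.
val conf = new SparkConf()
  .setAppName("kinesis-receiver-tuning")
  // Cap the records ingested per second per receiver so blocks stay small.
  .set("spark.streaming.receiver.maxRate", "1000")
  // Interval at which received data is chopped into blocks (default 200ms).
  .set("spark.streaming.blockInterval", "200ms")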
Are you using EMR?
You can install Hadoop-2.6.0 along with Spark-1.5.1 in your EMR cluster.
> And that brings the s3a jars to the worker nodes, and they become
> available to your application.
On Thu, Oct 15, 2015 at 11:04 AM, Scott Reynolds
wrote:
> List,
>
> Right now we build our spark jobs with the s3
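As a rough sketch of what using s3a looks like once those jars are on the classpath; the bucket, path and credential values are placeholders (on EMR an instance profile usually makes the key settings unnecessary):

import org.apache.spark.{SparkConf, SparkContext}

// Rough sketch: once the hadoop-aws/s3a jars are on the executor classpath
// (e.g. via the Hadoop 2.6.0 install mentioned above), s3a can be configured
// through the Hadoop configuration. Keys and bucket below are placeholders.
val sc = new SparkContext(new SparkConf().setAppName("s3a-example"))

sc.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
sc.hadoopConfiguration.set("fs.s3a.access.key", "YOUR_ACCESS_KEY")
sc.hadoopConfiguration.set("fs.s3a.secret.key", "YOUR_SECRET_KEY")

println(sc.textFile("s3a://some-bucket/some/prefix/*").count())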
(see Spark's configuration page). The job by default does not get
> resubmitted.
>
> You could try getting the logs of the failed executor, to see what caused
> the failure. Could be a memory limit issue, and YARN killing it somehow.
>
>
>
> On Wed, Oct 14, 2015 at 11:05
regardless of whether they were successfully processed or not.
On Wed, Oct 14, 2015 at 11:01 AM, Spark Newbie
wrote:
> I ran 2 different spark 1.5 clusters that have been running for more than
> a day now. I do see jobs getting aborted due to task retries maxing out
> (default 4) d
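For reference, a rough sketch of the settings that govern this behaviour: spark.task.maxFailures is the "default 4" above, and spark.yarn.executor.memoryOverhead is often raised when YARN kills executors for exceeding their memory limit. Values are illustrative only:

import org.apache.spark.SparkConf

// Rough sketch, illustrative values only.
val conf = new SparkConf()
  .setAppName("retry-tuning-example")
  // Task failures tolerated before the stage (and job) is aborted; default 4.
  .set("spark.task.maxFailures", "8")
  // Extra off-heap headroom per executor on YARN (MB); relevant when YARN
  // kills executors for exceeding their memory limit.
  .set("spark.yarn.executor.memoryOverhead", "1024")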
On Tue, Oct 13, 2015 at 4:04 PM, Tathagata Das wrote:
> Is this happening too often? Is it slowing things down or blocking
> progress? Failures once in a while are part of the norm, and the system
> should take care of itself.
>
> On Tue, Oct 13, 2015 at 2:47 PM, Spark Newbie
> wrote:
>
>>
Hi Spark users,
I'm seeing the below exception in my spark streaming application. It
happens in the first stage, where the kinesis receivers receive records and
perform a flatMap operation on the unioned DStream. A coalesce step also
happens as part of that stage to optimize performance.
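For readers unfamiliar with that pattern, a rough sketch of the shape of such a first stage; kinesisStreams stands in for the per-receiver DStreams from the kinesis-asl connector, and the parsing logic and partition count are placeholders, not the actual application code:

import org.apache.spark.streaming.dstream.DStream

// Rough sketch of the stage shape described above; `kinesisStreams`, the
// parsing logic and the partition count are placeholders.
def firstStage(kinesisStreams: Seq[DStream[Array[Byte]]]): DStream[String] = {
  // Union the per-receiver streams into a single DStream.
  val unioned = kinesisStreams.reduce(_ union _)

  // flatMap each raw record into zero or more parsed records.
  val parsed = unioned.flatMap(bytes => new String(bytes, "UTF-8").split("\n"))

  // Coalesce each batch's RDD to fewer partitions before the next stage.
  parsed.transform(_.coalesce(8))
}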
Hi Spark users,
Is there an easy way to turn on DEBUG logs in receivers and executors?
Calling sparkContext.setLogLevel seems to turn on the DEBUG level only on
the driver.
Thanks,
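Two approaches that are commonly suggested: ship a custom log4j.properties to the executors (e.g. spark-submit --files log4j.properties together with spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j.properties), or change the log4j level inside code that actually runs on the executors. A rough sketch of the latter, with enableExecutorDebug as a made-up helper name:

import org.apache.log4j.{Level, Logger}
import org.apache.spark.streaming.dstream.DStream

// Rough sketch: sparkContext.setLogLevel only changes the driver JVM, so this
// adjusts the log4j root logger inside a closure that runs on the executors.
// `stream` is a placeholder for any DStream in the application.
def enableExecutorDebug[T](stream: DStream[T]): Unit = {
  stream.foreachRDD { rdd =>
    rdd.foreachPartition { _ =>
      Logger.getRootLogger.setLevel(Level.DEBUG) // executes in the executor JVM
    }
  }
}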
logs? I
can send them if that will help dig into the root cause.
On Fri, Oct 9, 2015 at 2:18 PM, Tathagata Das wrote:
> Can you provide the before stop and after restart log4j logs for this?
>
> On Fri, Oct 9, 2015 at 2:13 PM, Spark Newbie
> wrote:
>
>> Hi Spark Users,
Hi Spark Users,
I'm seeing checkpoint restore failures causing the application startup to
fail with the below exception. When I do an "ls" on the S3 path, the key is
sometimes listed and sometimes not. There are no part files (checkpointed
files) in the specified S3 path. This is possible be
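For reference, checkpoint recovery goes through StreamingContext.getOrCreate, which is where a checkpoint directory that is only intermittently visible on S3 would make startup fail. A rough sketch, with the path, app name and batch interval as placeholders:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Placeholder checkpoint location and batch interval.
val checkpointDir = "s3n://some-bucket/checkpoints/my-app"

def createContext(): StreamingContext = {
  val ssc = new StreamingContext(new SparkConf().setAppName("checkpoint-example"), Seconds(10))
  ssc.checkpoint(checkpointDir)
  // ... define sources, transformations and outputs here ...
  ssc
}

// Restores from the checkpoint if one is found, otherwise builds a new context.
val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
ssc.start()
ssc.awaitTermination()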