Re: overriding spark.streaming.blockQueueSize default value

2016-03-29 Thread Spark Newbie
experiences. Thanks, On Mon, Mar 28, 2016 at 10:40 PM, Spark Newbie <sparknewbie1...@gmail.com> wrote: > Hi All, > > The default value for spark.streaming.blockQueueSize is 10 in > https://github.com/apache/spark/blob/branch-1.6/streaming/src/main/scala/org/apache/spark/

overriding spark.streaming.blockQueueSize default value

2016-03-28 Thread Spark Newbie
I suppose the main consideration when increasing this size would be the memory allocated to the executor. I haven't seen much documentation on this config, and any advice on how to fine-tune it would be useful. Thanks, Spark newbie
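As the thread notes, spark.streaming.blockQueueSize is an internal, largely undocumented setting, so the sketch below is a hedged illustration only: the queue depth of 50 and the 4g executor memory are arbitrary example values, and com.example.StreamingApp / streaming-app.jar are hypothetical placeholders.

```shell
# A deeper block queue holds more not-yet-pushed blocks in executor memory,
# so raise executor memory headroom alongside it (values illustrative).
spark-submit \
  --class com.example.StreamingApp \
  --conf spark.streaming.blockQueueSize=50 \
  --conf spark.executor.memory=4g \
  streaming-app.jar
```

Since the config is internal, its name or behavior could change between releases; verify it against the source of the Spark version you run.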

Re: SparkException: Failed to get broadcast_10_piece0

2015-11-30 Thread Spark Newbie
Pinging again ... On Wed, Nov 25, 2015 at 4:19 PM, Ted Yu <yuzhih...@gmail.com> wrote: > Which Spark release are you using ? > > Please take a look at: > https://issues.apache.org/jira/browse/SPARK-5594 > > Cheers > > On Wed, Nov 25, 2015 at 3:59 PM, Spark New

Error in block pushing thread puts the KinesisReceiver in a stuck state

2015-11-25 Thread Spark Newbie
Hi Spark users, I have been seeing an issue where receivers enter a "stuck" state after they encounter the following exception: "Error in block pushing thread - java.util.concurrent.TimeoutException: Futures timed out". I am running the application on spark-1.4.1 and using kinesis-asl-1.4.

SparkException: Failed to get broadcast_10_piece0

2015-11-25 Thread Spark Newbie
Hi Spark users, I'm seeing the below exceptions once in a while, which cause tasks to fail even after retries (so I think it is a non-recoverable exception); the stage then fails and the job gets aborted. Exception --- java.io.IOException: org.apache.spark.SparkException: Failed to get
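One known trigger for "Failed to get broadcast_*_piece*" errors on Spark 1.x (discussed around SPARK-5594) is an aggressive metadata-cleaner TTL removing broadcast blocks that tasks still reference. A hedged check, assuming the TTL was set at all; the value below is illustrative, not a recommendation:

```shell
# If spark.cleaner.ttl is set very low, broadcast pieces can be cleaned
# while still in use; either unset it or make it generously large.
spark-submit \
  --conf spark.cleaner.ttl=86400 \
  your-app.jar
```

If the TTL was never set, the cause is likely elsewhere (e.g. reusing a stopped SparkContext), as the linked JIRA discusses.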

Re: SparkException: Failed to get broadcast_10_piece0

2015-11-25 Thread Spark Newbie
Using Spark-1.4.1 On Wed, Nov 25, 2015 at 4:19 PM, Ted Yu <yuzhih...@gmail.com> wrote: > Which Spark release are you using ? > > Please take a look at: > https://issues.apache.org/jira/browse/SPARK-5594 > > Cheers > > On Wed, Nov 25, 2015 at 3:59 PM, Spark New

Re: s3a file system and spark deployment mode

2015-10-15 Thread Spark Newbie
Are you using EMR? You can install Hadoop-2.6.0 along with Spark-1.5.1 in your EMR cluster. That brings the s3a jars to the worker nodes and makes them available to your application. On Thu, Oct 15, 2015 at 11:04 AM, Scott Reynolds wrote: > List, > > Right now we build our
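For clusters that are not on EMR, one way to get s3a working is to ship the hadoop-aws jar and its matching AWS SDK dependency yourself. A hedged sketch; the jar versions below are illustrative and must match your Hadoop build (1.7.4 is the SDK version the Hadoop 2.6/2.7 hadoop-aws module was compiled against):

```shell
# Make the S3AFileSystem implementation and its SDK available to
# driver and executors, and register it for the s3a:// scheme.
spark-submit \
  --jars hadoop-aws-2.6.0.jar,aws-java-sdk-1.7.4.jar \
  --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
  your-app.jar
```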

Re: Spark 1.5 java.net.ConnectException: Connection refused

2015-10-15 Thread Spark Newbie
l the number > retries (see Spark's configuration page). The job by default does not get > resubmitted. > > You could try getting the logs of the failed executor, to see what caused > the failure. Could be a memory limit issue, and YARN killing it somehow. > > > > On Wed,
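The quoted reply points at Spark's retry-related configuration; a hedged sketch of the relevant knobs (values illustrative, not recommendations):

```shell
# Number of times a single task may fail before the stage is aborted.
spark-submit --conf spark.task.maxFailures=8 your-app.jar

# On YARN, the whole-application attempt count is a separate setting.
spark-submit --conf spark.yarn.maxAppAttempts=2 your-app.jar
```

Note the distinction: task retries happen within a running job, while YARN application attempts resubmit the entire driver, which for streaming jobs interacts with checkpoint recovery.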

Re: Spark 1.5 java.net.ConnectException: Connection refused

2015-10-14 Thread Spark Newbie
Regardless of whether they were successfully processed or not. On Wed, Oct 14, 2015 at 11:01 AM, Spark Newbie <sparknewbie1...@gmail.com> wrote: > I ran 2 different spark 1.5 clusters that have been running for more than > a day now. I do see jobs getting aborted due to task retry's maxin

Re: Spark 1.5 java.net.ConnectException: Connection refused

2015-10-14 Thread Spark Newbie
PM, Tathagata Das <t...@databricks.com> wrote: > Is this happening too often? Is it slowing things down or blocking > progress? Failures once in a while are part of the norm, and the system > should take care of itself. > > On Tue, Oct 13, 2015 at 2:47 PM, Spark Newbie <s

Spark 1.5 java.net.ConnectException: Connection refused

2015-10-13 Thread Spark Newbie
Hi Spark users, I'm seeing the below exception in my spark streaming application. It happens in the first stage, where the kinesis receivers receive records and perform a flatMap operation on the unioned DStream. A coalesce step also happens as part of that stage to optimize performance.

DEBUG level log in receivers and executors

2015-10-12 Thread Spark Newbie
Hi Spark users, Is there an easy way to turn on DEBUG logs in receivers and executors? Calling sparkContext.setLogLevel seems to change the log level only on the driver. Thanks,
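sparkContext.setLogLevel only affects the driver JVM; the standard way to change executor log levels is to ship a custom log4j.properties with the job and point each executor JVM at it. A sketch, assuming a console appender is acceptable (file names and pattern are illustrative):

```shell
# Write a log4j config that sets the root logger to DEBUG.
cat > log4j-executor.properties <<'EOF'
log4j.rootCategory=DEBUG, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
EOF

# --files distributes the config to each executor's working directory;
# the JVM option tells log4j to load it from there.
spark-submit \
  --files log4j-executor.properties \
  --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=log4j-executor.properties" \
  your-app.jar
```

Receivers run inside executors, so this covers receiver logging as well.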

Re: Spark checkpoint restore failure due to s3 consistency issue

2015-10-09 Thread Spark Newbie
? I can send it if that will help dig into the root cause. On Fri, Oct 9, 2015 at 2:18 PM, Tathagata Das <t...@databricks.com> wrote: > Can you provide the before stop and after restart log4j logs for this? > > On Fri, Oct 9, 2015 at 2:13 PM, Spark Newbie <sparknewbie1...@

Spark checkpoint restore failure due to s3 consistency issue

2015-10-09 Thread Spark Newbie
Hi Spark Users, I'm seeing checkpoint restore failures causing the application startup to fail with the below exception. When I do "ls" on the s3 path, the key is sometimes listed and sometimes not. There are no part files (checkpointed files) in the specified S3 path. This is possible
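At the time of this thread, S3 listings were only eventually consistent, so a common mitigation was to checkpoint to a strongly consistent filesystem such as HDFS instead of S3. A hedged sketch, assuming (hypothetically) that the application reads its checkpoint directory from its first command-line argument:

```shell
# Checkpoint to HDFS rather than S3 to avoid list-after-write
# inconsistency during checkpoint recovery.
spark-submit --class com.example.StreamingApp app.jar hdfs:///checkpoints/streaming-app
```

(S3 has since become strongly consistent, so this advice is specific to the 2015-era setup described in the thread.)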