Re: java.io.FileNotFoundException

2016-07-05 Thread Jacek Laskowski
On Tue, Jul 5, 2016 at 2:16 AM, kishore kumar wrote: > 2016-07-04 05:11:53,972 [dispatcher-event-loop-0] ERROR org.apache.spark.scheduler.LiveListenerBus - Dropping SparkListenerEvent because no remaining room in event queue. This likely means one of the…
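A minimal sketch of the mitigation usually suggested for this warning, assuming a Spark 1.5/1.6-era listener bus: enlarge the event queue so slow listeners (event log, UI) stop dropping events. The property name is the real one; the value is purely illustrative.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: the listener bus queue defaults to 10000 events; raising it buys
// headroom when listeners briefly fall behind. It does not fix a listener
// that is permanently slower than the event rate.
val conf = new SparkConf()
  .setAppName("listener-queue-example")
  .set("spark.scheduler.listenerbus.eventqueue.size", "100000")
val sc = new SparkContext(conf)
```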

Re: java.io.FileNotFoundException

2016-07-04 Thread kishore kumar
…yarn. Is this cluster or client deploy mode? Have you seen any other exceptions before? How long did the application run before the exception? Regards, Jacek Laskowski https:…

Re: java.io.FileNotFoundException

2016-07-04 Thread kishore kumar
…On Mon, Jul 4, 2016 at 10:57 AM, kishore kumar <akishore...@gmail.com> wrote: > We've upgraded the Spark version from 1.2 to 1.6, still the same problem: > Exception in thread "main" org.apache.spark.SparkException: Job a…

Re: java.io.FileNotFoundException

2016-07-04 Thread Jacek Laskowski
> Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 286 in stage 2397.0 failed 4 times, most recent failure: Lost task 286.3 in stage 2397.0 (TID 314416, salve-06.domain.com): j…

Re: java.io.FileNotFoundException

2016-07-04 Thread kishore kumar
…Exception: Job aborted due to stage failure: Task 286 in stage 2397.0 failed 4 times, most recent failure: Lost task 286.3 in stage 2397.0 (TID 314416, salve-06.domain.com): java.io.FileNotFoundException: /opt/mapr/tmp/hadoop-tmp/had…

Re: java.io.FileNotFoundException

2016-07-04 Thread Jacek Laskowski
…aborted due to stage failure: Task 286 in stage 2397.0 failed 4 times, most recent failure: Lost task 286.3 in stage 2397.0 (TID 314416, salve-06.domain.com): java.io.FileNotFoundException: /opt/mapr/tmp/hadoop-tmp/hadoop-mapr/nm-local-dir/usercache/user1/appcache/application_14…

Re: java.io.FileNotFoundException

2016-07-04 Thread karthi keyan
in thread "main" org.apache.spark.SparkException: Job aborted > due to stage failure: Task 286 in stage > 2397.0 failed 4 times, most recent failure: Lost task 286.3 in stage > 2397.0 (TID 314416, salve-06.domain.com): java.io.FileNotFoundException: > /opt/mapr/tmp/h > > adoop-t

Re: java.io.FileNotFoundException

2016-07-04 Thread kishore kumar
…salve-06.domain.com): java.io.FileNotFoundException: /opt/mapr/tmp/hadoop-tmp/hadoop-mapr/nm-local-dir/usercache/user1/appcache/application_1467474162580_29353/blockmgr-bd075392-19c2-4cb8-8033-0fe54d683c8f/12/shuffle_530_286_0.index.c374502a-4cf2-4052-abcf-42977f1623d0 (No such file or directory) Kindly help me…
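A hedged sketch of the knobs commonly suggested when shuffle index files vanish under nm-local-dir on YARN; the property names are standard Spark 1.6 settings, but whether they help on this particular MapR layout is an assumption, not a diagnosis.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: retry shuffle fetches harder, and serve shuffle files from
// the YARN external shuffle service so they survive executor loss. Values
// are illustrative; the aux service must be installed in the NodeManagers.
val conf = new SparkConf()
  .setAppName("shuffle-resilience-example")
  .set("spark.shuffle.service.enabled", "true")
  .set("spark.shuffle.io.maxRetries", "10")   // default is 3
  .set("spark.shuffle.io.retryWait", "10s")   // pause between fetch retries
val sc = new SparkContext(conf)
```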

Re: java.io.FileNotFoundException

2016-06-04 Thread kishore kumar
…hbase and elasticsearch, the error which we are encountering is: Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 38 in stage 26800.0 failed 4 times, most recent…

Re: java.io.FileNotFoundException

2016-06-03 Thread kishore kumar
…mode on yarn which loads data into hbase and elasticsearch, the error which we are encountering is: Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 38 in stage 26800.0 failed 4 times…

Re: java.io.FileNotFoundException

2016-06-03 Thread Jeff Zhang
…aborted due to stage failure: Task 38 in stage 26800.0 failed 4 times, most recent failure: Lost task 38.3 in stage 26800.0 (TID 4990082, hdprd-c01-r04-03): java.io.FileNotFoundException: /opt/mapr/tmp/hadoop-tmp/hadoop-mapr/nm-local-dir/usercache/sparkuser/appcache…

Re: java.io.FileNotFoundException

2016-06-03 Thread kishore kumar
…which we are encountering is: Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 38 in stage 26800.0 failed 4 times, most recent failure: Lost task 38.3 in stage 26800.0 (TID 4990082, hdprd-c01-r04-03): java.io.FileNotFou…

java.io.FileNotFoundException

2016-05-31 Thread kishore kumar
…stage 26800.0 failed 4 times, most recent failure: Lost task 38.3 in stage 26800.0 (TID 4990082, hdprd-c01-r04-03): java.io.FileNotFoundException: /opt/mapr/tmp/hadoop-tmp/hadoop-mapr/nm-local-dir/usercache/sparkuser/appcache/application_1463194314221_211370/spark-3cc37dc7-fa3c-4b98-aa60-0acdf…

Re: java.io.FileNotFoundException(Too many open files) in Spark streaming

2016-01-06 Thread Priya Ch
Running 'lsof' will tell us which files are open, but how do we find the root cause behind opening too many files? Thanks, Padma CH On Wed, Jan 6, 2016 at 8:39 AM, Hamel Kothari wrote: > The "Too Many Files" part of the exception is just indicative of the fact…

Re: java.io.FileNotFoundException(Too many open files) in Spark streaming

2016-01-06 Thread Priya Ch
The line of code which I highlighted in the screenshot is within the Spark source code. Spark uses a sort-based shuffle implementation, and the spilled files are merged using merge sort. Here is the link to the design doc: https://issues.apache.org/jira/secure/attachment/12655884/Sort-basedshuffledesign.pdf

Re: java.io.FileNotFoundException(Too many open files) in Spark streaming

2016-01-05 Thread Annabel Melongo
Vijay, are you closing the FileInputStream at the end of each loop (in.close())? My guess is those streams aren't closed, and thus the "too many open files" exception. On Tuesday, January 5, 2016 8:03 AM, Priya Ch wrote: Can someone throw light on this?
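A minimal sketch of the close-in-finally pattern Annabel is asking about, assuming a hypothetical file path: a stream opened per loop iteration must be closed even when the body throws, or descriptors leak until "Too many open files" appears.

```scala
import java.io.FileInputStream

// Sketch: loan pattern that guarantees close() runs even on failure.
def withStream[T](path: String)(f: FileInputStream => T): T = {
  val in = new FileInputStream(path)
  try f(in) finally in.close()
}

// Usage with an illustrative path: count bytes without leaking the descriptor.
val byteCount = withStream("/tmp/data.bin") { in =>
  Iterator.continually(in.read()).takeWhile(_ != -1).size
}
```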

Re: java.io.FileNotFoundException(Too many open files) in Spark streaming

2016-01-05 Thread Priya Ch
Yes, the FileInputStream is closed. Maybe I didn't show it in the screenshot. As Spark implements sort-based shuffle, there is a parameter, the maximum merge factor, which decides the number of files that can be merged at once, and this avoids too many open files. I am suspecting that it is…
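For context, a hedged sketch (not Priya's exact job; input path and values are illustrative) of the Spark 1.3-era settings that bound how many shuffle files stay open at once:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch under stated assumptions: hash-based shuffle can hold one open file
// per reduce partition per running task, so cores * partitions descriptors at
// once. Sort-based shuffle (the default since 1.2) writes one sorted file per
// map task; capping reduce partitions also caps shuffle file counts.
val conf = new SparkConf()
  .setAppName("shuffle-open-files-example")
  .set("spark.shuffle.manager", "sort")           // the 1.x default; "hash" opens far more files
  .set("spark.shuffle.consolidateFiles", "true")  // only relevant if hash shuffle is used
val sc = new SparkContext(conf)
val counts = sc.textFile("hdfs:///tmp/events")    // hypothetical input
  .map(line => (line, 1))
  .reduceByKey(_ + _, 200)                        // explicit, modest reduce-partition count
```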

Re: java.io.FileNotFoundException(Too many open files) in Spark streaming

2016-01-05 Thread Priya Ch
Can someone throw light on this? Regards, Padma Ch On Mon, Dec 28, 2015 at 3:59 PM, Priya Ch wrote: > Chris, we are using Spark version 1.3.0. We have not set the spark.streaming.concurrentJobs parameter; it takes the default value. > Vijay, > From…

Re: java.io.FileNotFoundException(Too many open files) in Spark streaming

2015-12-28 Thread Priya Ch
Chris, we are using Spark version 1.3.0. We have not set the spark.streaming.concurrentJobs parameter; it takes the default value. Vijay, from the stack trace it is evident that…

Re: java.io.FileNotFoundException(Too many open files) in Spark streaming

2015-12-25 Thread Chris Fregly
And which version of Spark/Spark Streaming are you using? Are you explicitly setting spark.streaming.concurrentJobs to something larger than the default of 1? If so, please try setting it back to 1 and see if the problem still exists. This is a dangerous parameter to modify from the…
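A minimal sketch of the check Chris describes, assuming access to the running SparkContext; reading the property back confirms what the application actually runs with.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: spark.streaming.concurrentJobs defaults to "1". Anything larger
// lets streaming jobs run concurrently, which can multiply open shuffle files.
val sc = new SparkContext(new SparkConf().setAppName("concurrent-jobs-check"))
val concurrent = sc.getConf.get("spark.streaming.concurrentJobs", "1")
println(s"spark.streaming.concurrentJobs = $concurrent")
```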

java.io.FileNotFoundException(Too many open files) in Spark streaming

2015-12-23 Thread Vijay Gharge
A few indicators: 1) During execution, check the total number of open files using the lsof command (needs root permissions; on a cluster I'm not sure how practical that is). 2) Which exact line in the code is triggering this error? Can you paste that snippet? On Wednesday 23 December 2015, Priya Ch…

Re: java.io.FileNotFoundException(Too many open files) in Spark streaming

2015-12-23 Thread Priya Ch
ulimit -n 65000, fs.file-max = 65000 (in the /etc/sysctl.conf file). Thanks, Padma Ch On Tue, Dec 22, 2015 at 6:47 PM, Yash Sharma wrote: > Could you share the ulimit for your setup please? > - Thanks, via mobile, excuse brevity. > On Dec 22, 2015 6:39 PM, "Priya Ch"…
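As an alternative to root-only lsof, a hedged sketch of counting descriptors from inside the JVM itself; it relies on the com.sun.management extension, so it assumes a HotSpot-style JVM on a Unix platform.

```scala
import java.lang.management.ManagementFactory
import com.sun.management.UnixOperatingSystemMXBean

// Sketch: report this JVM's open file descriptors against its limit.
// Run inside the driver or an executor to watch the count climb.
ManagementFactory.getOperatingSystemMXBean match {
  case unix: UnixOperatingSystemMXBean =>
    println(s"open fds: ${unix.getOpenFileDescriptorCount} of ${unix.getMaxFileDescriptorCount}")
  case _ =>
    println("file descriptor counts not exposed on this platform")
}
```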

Re: java.io.FileNotFoundException(Too many open files) in Spark streaming

2015-12-22 Thread Priya Ch
Jakob, I increased settings like fs.file-max in /etc/sysctl.conf and also increased the user limit in /etc/security/limits.conf, but I still see the same issue. On Fri, Dec 18, 2015 at 12:54 AM, Jakob Odersky wrote: > It might be a good idea to see how many files are open…

java.io.FileNotFoundException(Too many open files) in Spark streaming

2015-12-17 Thread Priya Ch
Hi All, when running a streaming application I am seeing the below error: java.io.FileNotFoundException: /data1/yarn/nm/usercache/root/appcache/application_1450172646510_0004/blockmgr-a81f42cd-6b52-4704-83f3-2cfc12a11b86/02/temp_shuffle_589ddccf-d436-4d2c-9935-e5f8c137b54b (Too many open files)…

Re: java.io.FileNotFoundException(Too many open files) in Spark streaming

2015-12-17 Thread Jakob Odersky
It might be a good idea to see how many files are open and try increasing the open file limit (this is done at an OS level); in some application use cases it is actually a legitimate need. If that doesn't help, make sure you close any unused files and streams in your code. It will also be easier…

java.io.FileNotFoundException: Job aborted due to stage failure

2015-11-26 Thread Sahil Sareen
…on s.property = c.property from X YZ … org.apache.spark.SparkException: Job aborted due to stage failure: Task 4 in stage 5710.0 failed 4 times, most recent failure: Lost task 4.3 in stage 5710.0 (TID 341269, ip-10-0-1-80.us-west-2.compute.internal): java.io.FileNotFoundException: /mnt/md0/var/lib/spark…

Re: java.io.FileNotFoundException: Job aborted due to stage failure

2015-11-26 Thread Ted Yu
…aborted due to stage failure: Task 4 in stage 5710.0 failed 4 times, most recent failure: Lost task 4.3 in stage 5710.0 (TID 341269, ip-10-0-1-80.us-west-2.compute.internal): java.io.FileNotFoundException: /mnt/md0/var/lib/spark/spark-549f7d96-82da-4b8d-b9fe-7f6fe82384…

Spark 1.4.2- java.io.FileNotFoundException: Job aborted due to stage failure

2015-11-24 Thread Sahil Sareen
…aborted due to stage failure: Task 4 in stage 5710.0 failed 4 times, most recent failure: Lost task 4.3 in stage 5710.0 (TID 341269, ip-10-0-1-80.us-west-2.compute.internal): java.io.FileNotFoundException: /mnt/md0/var/lib/spark/spark-549f7d96-82da-4b8d-b9fe-7f6fe8238478/blockmgr-f44be41a-9036-4b93-8608…

get java.io.FileNotFoundException when use addFile Function

2015-07-15 Thread prateek arora
I am trying to write a simple program using the addFile function but am getting an error on my worker node that the file does not exist: …stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, slave2.novalocal): java.io.FileNotFoundException: File file:/tmp…
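A minimal sketch of the addFile/SparkFiles.get pairing this thread is about (file name and path are hypothetical): the key point is that tasks must resolve the file through SparkFiles.get, not through the driver-side path.

```scala
import org.apache.spark.{SparkConf, SparkContext, SparkFiles}

// Sketch: ship a driver-local file to every executor with addFile, then
// resolve it inside each task via SparkFiles.get. Referencing the raw
// driver path from a task is what produces FileNotFoundException on workers.
val sc = new SparkContext(new SparkConf().setAppName("addfile-example"))
sc.addFile("/tmp/lookup.txt")  // hypothetical file; must exist on the driver
val sizes = sc.parallelize(1 to 4).map { _ =>
  val src = scala.io.Source.fromFile(SparkFiles.get("lookup.txt"))
  try src.getLines().length finally src.close()
}.collect()
```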

Re: spark java.io.FileNotFoundException: /user/spark/applicationHistory/application

2015-05-29 Thread igor.berman
In yarn your executors might run on every node in your cluster, so you need to configure the Spark history location to be on HDFS (so it will be accessible to every executor). Probably you've switched from local to yarn mode when submitting.
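A hedged sketch of the configuration igor.berman is pointing at; the property names are the standard ones, while the HDFS path is illustrative.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: write event logs to HDFS so the history server and every node can
// read them; a local event-log path exists only on the machine that wrote it.
val conf = new SparkConf()
  .setAppName("event-log-example")
  .set("spark.eventLog.enabled", "true")
  .set("spark.eventLog.dir", "hdfs:///user/spark/applicationHistory")  // path illustrative
val sc = new SparkContext(conf)
```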

spark java.io.FileNotFoundException: /user/spark/applicationHistory/application

2015-05-28 Thread roy
Hi, suddenly Spark jobs started failing with the following error: Exception in thread "main" java.io.FileNotFoundException: /user/spark/applicationHistory/application_1432824195832_1275.inprogress (No such file or directory) Full trace here: [21:50:04 x...@hadoop-client01.dev:~]$ spark-submit --class…

Re: java.io.FileNotFoundException when using HDFS in cluster mode

2015-03-30 Thread Akhil Das
…Mar 29 22:05 stderr -rw-r--r-- 1 nickt nickt 0 Mar 29 22:05 stdout But it's failing due to a java.io.FileNotFoundException saying my input file is missing: Caused by: java.io.FileNotFoundException: Added file file:/home/nickt/spark-1.3.0/work/driver-20150329220503-0021/hdfs:/host.domain.ex…

RE: java.io.FileNotFoundException when using HDFS in cluster mode

2015-03-30 Thread java8964
Subject: java.io.FileNotFoundException when using HDFS in cluster mode Hi List, I'm following this example here https://github.com/databricks/learning-spark/tree/master/mini-complete-example with the following: $SPARK_HOME/bin/spark-submit \ --deploy-mode cluster \ --master spark

Re: java.io.FileNotFoundException when using HDFS in cluster mode

2015-03-30 Thread nsalian
Try running it like this: sudo -u hdfs spark-submit --class org.apache.spark.examples.SparkPi --deploy-mode cluster --master yarn hdfs:///user/spark/spark-examples-1.2.0-cdh5.3.2-hadoop2.5.0-cdh5.3.2.jar 10 Caveats: 1) Make sure the permissions of /user/nick are 775 or 777. 2) No need for…

java.io.FileNotFoundException when using HDFS in cluster mode

2015-03-29 Thread Nick Travers
…: -rw-r--r-- 1 nickt nickt 15K Mar 29 22:05 learning-spark-mini-example_2.10-0.0.1.jar -rw-r--r-- 1 nickt nickt 9.2K Mar 29 22:05 stderr -rw-r--r-- 1 nickt nickt 0 Mar 29 22:05 stdout But it's failing due to a java.io.FileNotFoundException saying my input file is missing: Caused…
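For reference, a hedged sketch of the pattern the replies converge on (hostname and port are hypothetical): a fully qualified HDFS URI leaves no room for cluster deploy mode to reinterpret the path against the driver's local work directory, which is exactly the file:/home/nickt/.../hdfs:/ mangling shown above.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: name the filesystem, host and port explicitly so the input is
// read from HDFS regardless of where the driver process is launched.
val sc = new SparkContext(new SparkConf().setAppName("hdfs-uri-example"))
val lines = sc.textFile("hdfs://namenode.example.com:8020/user/nickt/input.txt")  // host/port hypothetical
println(lines.count())
```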

When using SparkFiles.get("GeoIP.dat"), got exception in thread "main" java.io.FileNotFoundException

2015-02-07 Thread Gmail
…o.e.j.s.ServletContextHandler{/streaming,null} 2015-02-08 06:51:17,065 INFO [main] handler.ContextHandler (ContextHandler.java:startContext(737)) - started o.e.j.s.ServletContextHandler{/streaming/json,null} Exception in thread "main" java.io.FileNotFoundException: /tmp/spark-d85f0f21-2e66-4ed7-ae31…

Re: SparkContext.wholeTextFiles() java.io.FileNotFoundException: File does not exist:

2014-10-09 Thread jan.zikes
…the standard EC2 installation? From: Sean Owen so...@cloudera.com To: jan.zi...@centrum.cz Date: 08.10.2014 18:05 Subject: Re: SparkContext.wholeTextFiles() java.io.FileNotFoundException: File does not exist: CC: user@spark.apache.org

Re: SparkContext.wholeTextFiles() java.io.FileNotFoundException: File does not exist:

2014-10-09 Thread Rahul Kumar Singh
…Date: 08.10.2014 18:05 Subject: Re: SparkContext.wholeTextFiles() java.io.FileNotFoundException: File does not exist: CC: user@spark.apache.org Take this as a bit of a guess, since I don't use S3 much and am only a bit aware of the Hadoop+S3 integration issues. But I know that S3's lack…

Re: SparkContext.wholeTextFiles() java.io.FileNotFoundException: File does not exist:

2014-10-08 Thread jan.zikes
…-src.zip/py4j/protocol.py, line 300, in get_return_value py4j.protocol.Py4JJavaError: An error occurred while calling o30.partitions. : java.io.FileNotFoundException: File does not exist: /wikiinput/wiki.xml.gz at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:517…

Re: SparkContext.wholeTextFiles() java.io.FileNotFoundException: File does not exist:

2014-10-08 Thread jan.zikes
…/protocol.py, line 300, in get_return_value py4j.protocol.Py4JJavaError: An error occurred while calling o30.partitions. : java.io.FileNotFoundException: File does not exist: /wikiinput/wiki.xml.gz at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:517…

Re: SparkContext.wholeTextFiles() java.io.FileNotFoundException: File does not exist:

2014-10-08 Thread Sean Owen
…py4j.protocol.Py4JJavaError: An error occurred while calling o30.partitions. : java.io.FileNotFoundException: File does not exist: /wikiinput/wiki.xml.gz at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:517…
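A hedged sketch of the disambiguation this thread circles around (bucket name hypothetical): a bare path like /wikiinput/wiki.xml.gz resolves against fs.defaultFS, which on an EC2 install may be HDFS rather than S3, so naming the scheme explicitly avoids the mismatch.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: spell out the filesystem so wholeTextFiles looks where the data
// actually lives instead of in the cluster's default filesystem.
val sc = new SparkContext(new SparkConf().setAppName("wholetextfiles-example"))
val fromHdfs = sc.wholeTextFiles("hdfs:///wikiinput/")          // (path, content) pairs
val fromS3   = sc.wholeTextFiles("s3n://my-bucket/wikiinput/")  // bucket hypothetical; s3n was the 2014-era connector
```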

java.io.FileNotFoundException in usercache

2014-09-25 Thread Egor Pahomov
…-20140925151931-a4c3/3a/shuffle_4_30_174 java.io.FileNotFoundException: /local/hd2/yarn/local/usercache/epahomov/appcache/application_1411219858924_15501/spark-local-20140925151931-a4c3/3a/shuffle_4_30_174 (No such file or directory) a couple of days ago. After this error the Spark context shut down. I'm…

org.apache.spark.SparkException: java.io.FileNotFoundException: does not exist)

2014-09-16 Thread Hui Li
…was due to java.io.FileNotFoundException java.io.FileNotFoundException: File file:/root/test/sample_svm_data.txt does not exist at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:511…

Re: org.apache.spark.SparkException: java.io.FileNotFoundException: does not exist)

2014-09-16 Thread Aris
…: eecvm0203.demo.sas.com (PROCESS_LOCAL) 14/09/16 10:55:21 INFO TaskSetManager: Serialized task 12.0:1 as 1733 bytes in 0 ms 14/09/16 10:55:21 WARN TaskSetManager: Lost TID 24 (task 12.0:0) 14/09/16 10:55:21 WARN TaskSetManager: Loss was due to java.io.FileNotFoundException…
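A hedged sketch of the usual resolution for this trace (the local path is from the thread; the HDFS location is an assumption about where the file could be staged): a file:/ path must exist on every worker, whereas a shared filesystem needs only one copy.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: "File file:/root/test/... does not exist" in a task usually means
// the input is a driver-local path the workers can't see. Either copy the
// file to the same path on every node, or read it from shared storage.
val sc = new SparkContext(new SparkConf().setAppName("local-vs-shared-input"))
val local  = sc.textFile("file:///root/test/sample_svm_data.txt")  // must exist on every worker
val shared = sc.textFile("hdfs:///user/root/sample_svm_data.txt")  // staged copy; location assumed
println(shared.count())
```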

java.io.FileNotFoundException: shuffle

2014-07-02 Thread nit
…pressure, but I could never figure out the root cause. -- 14/07/02 07:34:45 WARN TaskSetManager: Loss was due to java.io.FileNotFoundException java.io.FileNotFoundException: /var/storage/sda3/nm-local/usercache/nit/appcache/application_1403208801430_0183/spark-local-20140702065054-388d/0e…

java.io.FileNotFoundException: http://IP/broadcast_1

2014-07-01 Thread Honey Joshi
Hi All, we are using a Shark table to dump the data and we are getting the following error: Exception in thread "main" org.apache.spark.SparkException: Job aborted: Task 1.0:0 failed 1 times (most recent failure: Exception failure: java.io.FileNotFoundException: http://IP/broadcast_1) We don't know…
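One hedged possibility worth checking (an assumption about this setup, not a confirmed diagnosis): in 0.9/1.0-era Spark, an aggressive spark.cleaner.ttl could delete broadcast data that a long-running job later tried to fetch over HTTP, yielding exactly this kind of FileNotFoundException on a broadcast_N URL.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: spark.cleaner.ttl (seconds) periodically wipes old metadata and
// broadcast data; unset means infinite. A small value can delete broadcasts
// that running tasks still reference. The value below is purely illustrative.
val conf = new SparkConf()
  .setAppName("cleaner-ttl-example")
  .set("spark.cleaner.ttl", "86400")  // or leave unset for long-lived jobs
val sc = new SparkContext(conf)
```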

Re: Re: java.io.FileNotFoundException: /test/spark-0.9.1/work/app-20140505053550-0000/2/stdout (No such file or directory)

2014-05-11 Thread Francis . Hu
Subject: Re: Re: java.io.FileNotFoundException: /test/spark-0.9.1/work/app-20140505053550-0000/2/stdout (No such file or directory) I looked into the log again; all exceptions are about FileNotFoundException. In the web UI there is no more info I can check except for the basic description of the job…

java.io.FileNotFoundException: /test/spark-0.9.1/work/app-20140505053550-0000/2/stdout (No such file or directory)

2014-05-05 Thread Francis . Hu
…AbstractHttpConnection: /logPage/?appId=app-20140505053550-0000&executorId=2&logType=stdout java.io.FileNotFoundException: /test/spark-0.9.1/work/app-20140505053550-0000/2/stdout (No such file or directory) at java.io.FileInputStream.open(Native Method) at java.io.FileInputStream.<init>…

答复: java.io.FileNotFoundException: /test/spark-0.9.1/work/app-20140505053550-0000/2/stdout (No such file or directory)

2014-05-05 Thread Francis . Hu
…java.io.FileNotFoundException: /test/spark-0.9.1/work/app-20140505053550-0000/2/stdout (No such file or directory) Do those files actually exist? The stdout/stderr files should have the output of the Spark executors running on the workers, and it's weird that they don't exist. Could be a permission…