From: eyc...@hotmail.com
To: 2dot7kel...@gmail.com
CC: gen.tan...@gmail.com; user@spark.apache.org
Subject: RE: no space left at worker node
Date: Mon, 9 Feb 2015 10:59:00 -0800
Thanks. But, in spark-submit, I specified the jar file in the form
local:/spark-etl-0.0.1-SNAPSHOT.jar. It comes back with an error (stack trace truncated):
    $$anon$1.run(DriverRunner.scala:74)
Subject: Re: no space left at worker node
From: 2dot7kel...@gmail.com
To: eyc...@hotmail.com
CC: gen.tan...@gmail.com; user@spark.apache.org
Maybe try a local: URL, described under the heading "Advanced Dependency
Management" here: https://spark.apache.org
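A sketch of the suggestion above (the class name, master URL, and jar path are illustrative, not from the thread). Note that with a local:/ URI Spark does not ship the jar: it must already exist at that absolute path on every node, driver and workers alike.

```shell
# local:/ means "already present at this path on every node" -- Spark will
# not copy the jar for you. Class and paths below are placeholders.
spark-submit \
  --class com.example.EtlJob \
  --master spark://master:7077 \
  local:/opt/jars/spark-etl-0.0.1-SNAPSHOT.jar
```

If the jar only exists on the machine running spark-submit, use a plain file path (or an hdfs:/ / http:/ URL) instead, so Spark distributes it.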
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Date: Sun, 8 Feb 2015 12:09:37 +0100
Subject: Re: no space left at worker node
From: gen.tan...@gmail.com
To: eyc...@hotmail.com
CC: user@spark.apache.org
Hi,
In fact, I met this problem before. It is a bug of AWS. Which type of machine
do you use?
If I guess well, you can check the file /etc/fstab. It probably contains only
the root-disk entry, ending in ",nodiratime,comment=cloudconfig 0 0", and
there is no entry for /dev/xvdb.
Ey-Chih Chow
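To illustrate the missing-fstab-entry diagnosis above, a self-contained sketch. The sample line mimics a typical cloud-config mount entry for the EC2 instance store; on a real worker you would grep /etc/fstab itself rather than a sample string.

```shell
# Hypothetical fstab line for illustration; on a worker, read /etc/fstab.
fstab='/dev/xvdb /mnt auto defaults,nofail,nodiratime,comment=cloudconfig 0 0'

if printf '%s\n' "$fstab" | grep -q '^/dev/xvdb[[:space:]]'; then
  echo "xvdb entry present: /mnt is backed by the instance store"
else
  echo "xvdb entry missing: /mnt may fall back to the small root disk"
fi
# -> xvdb entry present: /mnt is backed by the instance store
```

When the entry is missing, everything written under /mnt silently lands on the small root volume, which matches the "no space left" symptom.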
Thanks, Gen. How can I check whether /dev/sdc is mounted correctly? In
general, the problem shows up when I submit the second or third job; the
first job I submit most likely succeeds.
Ey-Chih Chow
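One direct way to answer the "is it mounted?" question. The commands are shown against / so the example runs anywhere; on the worker you would substitute /mnt (or wherever the suspect device should be mounted).

```shell
# Show which device backs a mount point and how full it is.
# '/' is a stand-in here; on the worker node, check /mnt instead.
df -P /                                   # device, 1K-blocks, used, available, use%, mount
mount | awk '$3 == "/" {print $1, "backs /"}'
```

If df shows the mount point served by the small root filesystem instead of the expected device, the volume is not mounted.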
Date: Sun, 8 Feb 2015 18:18:03 +0100
Subject: Re: no space left at worker node
From: gen.tan...@gmail.com
To: eyc...@hotmail.com
CC: user@spark.apache.org
You might want to take a look in core-site.xml and see what is listed as the
usable directories (hadoop.tmp.dir, fs.s3.buffer.dir). It seems that on EC2
the root disk is relatively small (8G), but the config files may still point
these scratch directories at it.
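To check this concretely, a sketch using a hypothetical core-site.xml fragment. The property values are spark-ec2-style defaults, not necessarily yours; the point is that both scratch directories should sit on the large /mnt volume, not on the 8G root disk.

```shell
# Write a sample fragment so the example is self-contained; on a real node
# you would grep the actual core-site.xml (location varies by install).
cat > /tmp/core-site-sample.xml <<'EOF'
<property><name>hadoop.tmp.dir</name><value>/mnt/hadoop</value></property>
<property><name>fs.s3.buffer.dir</name><value>/mnt/s3</value></property>
EOF

# Extract the configured directories and eyeball where they live.
grep -o '<value>[^<]*</value>' /tmp/core-site-sample.xml
# -> <value>/mnt/hadoop</value>
# -> <value>/mnt/s3</value>
```

If either value points under / instead of /mnt, temp and S3 buffer traffic will fill the root disk.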
In fact, /dev/sdb is /dev/xvdb.
Hi Gen,
Thanks. I save my logs in a file under /var/log. This is the only place to
save data. Will the problem go away if I use a better machine?
Best regards,
Ey-Chih Chow
Date: Sun, 8 Feb 2015 23:32:27 +0100
Subject: Re: no space left at worker node
From: gen.tan...@gmail.com
To: eyc...@hotmail.com
Date: Sun, 8 Feb 2015 20:09:32 -0800
Subject: Re: no space left at worker node
From: 2dot7kel...@gmail.com
To: eyc...@hotmail.com
CC: gen.tan...@gmail.com; user@spark.apache.org
I guess you may set the parameters below to clean the directories:
spark.worker.cleanup.enabled
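These are daemon-side settings for the standalone worker, so they go in conf/spark-env.sh on each worker rather than on the spark-submit command line. A sketch with illustrative values (spark.worker.cleanup.enabled defaults to false; interval and TTL are in seconds):

```shell
# conf/spark-env.sh on each standalone worker (values are illustrative).
#   cleanup.enabled    turn on periodic purging of old application work dirs
#   cleanup.interval   how often the worker checks, in seconds
#   cleanup.appDataTtl how long a finished app's files are kept, in seconds
export SPARK_WORKER_OPTS="$SPARK_WORKER_OPTS \
  -Dspark.worker.cleanup.enabled=true \
  -Dspark.worker.cleanup.interval=1800 \
  -Dspark.worker.cleanup.appDataTtl=604800"
```

Restart the workers for the setting to take effect; only directories of stopped applications are cleaned.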
(truncated listing of leftover application work directories:)
-20150208173200-/01649880
./app-20150208173200-5152036
Any suggestion on how to resolve it? Thanks.
Ey-Chih Chow
From: eyc...@hotmail.com
To: gen.tan...@gmail.com
CC: user@spark.apache.org
Subject: RE: no space left at worker node
Date: Sun, 8 Feb 2015 15:25:43 -0800
By the way, the input and output paths of the job are all in S3; I did not
use HDFS paths for input or output.
Best regards,
Ey-Chih Chow
From: eyc...@hotmail.com
To: gen.tan...@gmail.com
CC: user@spark.apache.org
Subject: RE: no space left at worker node
Date: Sun, 8 Feb 2015 14:57:15 -0800
df output (truncated; columns are 1K-blocks, Used, Available, Use%, Mounted on):
30963708 1729652 27661192 6% /mnt
Does anybody know how to fix this? Thanks.
Ey-Chih Chow
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/no-space-left-at-worker-node-tp21545.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.