From the BlockManager and ShuffleMapTask code, it writes under
spark.local.dir, falling back to java.io.tmpdir:
val diskBlockManager = new DiskBlockManager(shuffleBlockManager,
  conf.get("spark.local.dir", System.getProperty("java.io.tmpdir")))
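The fallback can be sketched in plain Scala. The Map below stands in for SparkConf (an illustration, not the real class), mirroring the conf.get(key, default) call above:

```scala
// Sketch of the lookup above: use spark.local.dir when set, otherwise fall
// back to the JVM temp dir. A plain Map stands in for SparkConf here.
val conf = Map.empty[String, String]  // spark.local.dir deliberately unset
val localDir = conf.getOrElse("spark.local.dir",
  System.getProperty("java.io.tmpdir"))
println(localDir)  // the JVM temp dir, e.g. /tmp on Linux
```

Setting spark.local.dir to a fast local disk is the usual way to redirect shuffle and spill files away from the default temp location.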
On Mon, Mar 3, 2014 at 10:45 PM, Usman Ghani us...@platfora.com
I've encountered similar problems.
Maybe you can try using the hostname or FQDN (rather than the IP address) of
your node for the master URI.
In my case, Akka picks the FQDN for the master URI, and the worker has to use
exactly the same string to connect.
From: Benny Thompson [mailto:ben.d.tho...@gmail.com]
Hi Ognen,
See if this helps. I was working on this :
class MyClass[T](sc: SparkContext, flag1: Boolean, rdd: RDD[T],
    hdfsPath: String) extends Actor {
  def act() {
    if (flag1) this.process()
    else this.count
  }
  private def process() {
    println(sc.textFile(hdfsPath).count)
  }
}
Hi,
Try cleaning your temp dir, System.getProperty("java.io.tmpdir").
Also, can you paste a longer stack trace?
Thanks
Best Regards
On Tue, Mar 4, 2014 at 2:55 PM, goi cto goi@gmail.com wrote:
Hi,
I am running a Spark Java program on a local machine. When I try to write
the output to
Exception in thread "delete Spark temp dir C:\Users\..."
java.io.IOException: Failed to delete: C:\Users\...\simple-project-1.0.jar
    at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:495)
    at org.apache.spark.util.Utils$$anonfun$deleteRecursively$1.apply(Utils.scala:491)
I deleted my
Hello, I am using Spark with Scala and I am attempting to understand the
different filtering and mapping capabilities available. I haven't found an
example of the specific task I would like to do.
I am trying to read in a tab-separated text file and filter specific entries.
I would like this
Thanks Sean, I think that is doing what I needed. It was much simpler than
what I had been attempting.
Is it possible to do an OR filter? So that, for example, column 2
can be filtered by "A2" appearances and column 3 by "A4"?
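An OR filter is just a single predicate combining both conditions with ||. A hedged sketch on plain Scala strings (the column indices, the values "A2"/"A4", and the sample lines are illustrative; with Spark, the same predicate plugs into sc.textFile(path).filter(...)):

```scala
// Keep lines where column 2 is "A2" OR column 3 is "A4" (0-based cols 1 and 2).
val lines = Seq(
  "id1\tA2\tB1",
  "id2\tB2\tA4",
  "id3\tB2\tB1"
)
val kept = lines.filter { line =>
  val cols = line.split("\t")
  cols(1) == "A2" || cols(2) == "A4"
}
kept.foreach(println)  // the id1 and id2 lines survive; id3 is dropped
```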
Hi Ognen,
Any particular reason for choosing Scalatra over options like Play or Spray?
Is Scalatra much better at serving APIs, or is it because of its similarity
to Ruby's Sinatra?
Did you try the other options and then pick Scalatra?
Thanks.
Deb
On Tue, Mar 4, 2014 at 4:50 AM, Ognen Duzlevski
Thanks.
Does it make sense to add an ==/equals method to Vector with this (or the
same) behavior?
2014-03-04 6:00 GMT+02:00 Shixiong Zhu zsxw...@gmail.com:
Vector is an enhanced Array[Double]. You can compare it like
Array[Double]. E.g.,
scala> val v1 = Vector(1.0, 2.0)
v1:
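Concretely, for a plain Array[Double] (which this comparison is being likened to), == is reference equality on the JVM, so arrays with identical contents must be compared element-wise. A small sketch:

```scala
// For Array[Double], == is reference equality, so two arrays with the same
// contents are not ==; compare element-wise with sameElements instead.
val a = Array(1.0, 2.0)
val b = Array(1.0, 2.0)
println(a == b)            // false: different object references
println(a.sameElements(b)) // true: element-wise equality
```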
Deb,
On 3/4/14, 9:02 AM, Debasish Das wrote:
Hi Ognen,
Any particular reason for choosing Scalatra over options like Play or
Spray?
Is Scalatra much better at serving APIs, or is it because of its
similarity to Ruby's Sinatra?
Did you try the other options and then pick Scalatra?
Not really.
Hi Mayur,
I am using CDH4.6.0p0.26, and the latest Cloudera Spark parcel is Spark
0.9.0 CDH4.6.0p0.50.
As I mentioned, the Cloudera Spark version somehow doesn't contain the
run-example shell scripts. However, it is automatically configured and
pretty easy to set up across the cluster...
Hi there,
I tried the Kafka WordCount example; it works perfectly and the code is
pretty straightforward to understand.
Can anyone show me how to start my own Maven project with the
KafkaWordCount example with minimum effort?
1. What should the pom file look like (including the jar plugin)?
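A minimal pom sketch, hedged: the groupId/artifactId of the project are made up, and the Spark artifact coordinates and the 0.9.0-incubating version are assumptions based on the Spark 0.9 / Scala 2.10 era; check Maven Central for the versions matching your cluster. The default jar packaging already invokes maven-jar-plugin, so no explicit plugin entry is needed for a plain jar:

```xml
<!-- Hedged sketch; project coordinates and versions are illustrative -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>kafka-wordcount</artifactId>
  <version>0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <dependencies>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_2.10</artifactId>
      <version>0.9.0-incubating</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-kafka_2.10</artifactId>
      <version>0.9.0-incubating</version>
    </dependency>
  </dependencies>
</project>
```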