AM, Prashant Sharma scrapco...@gmail.com wrote:
Hi,
I am not sure I know how to; the above should have worked. Apart from the
well-known trick of redirecting stdout to stderr, it would be great to know
why you need it!
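For what it's worth, a minimal illustration of that trick in Scala (just a
sketch; Console.withOut scopes the redirection to the block):

    // Route println output to stderr for the duration of the block.
    Console.withOut(Console.err) {
      println("this line ends up on stderr instead of stdout")
    }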
On Sat, Nov 30, 2013 at 2:53 PM, Wenlei Xie wenlei@gmail.com
release 0.8.1. Can you explain exactly
how you are running Spark? Are you running the shell or are you
running a standalone application?
- Patrick
On Thu, Nov 28, 2013 at 12:54 AM, Wenlei Xie wenlei@gmail.com
wrote:
Hi,
I remember Spark used to print detailed log output to stderr (e.g.
constructing an RDD, evaluating it, how much memory each partition consumes).
But I cannot find it anymore; I only see the following information:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
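If the detailed output was silenced by the logging configuration, here is a
minimal sketch of raising the root log level again from code (assuming the
log4j 1.x backend that Spark bundles; conf/log4j.properties is the persistent
place for this):

    import org.apache.log4j.{Level, Logger}

    // Restore verbose logging at runtime, e.g. from the Spark shell.
    Logger.getRootLogger.setLevel(Level.INFO)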
Hi,
I am trying to do some tasks with a map function of the following style:

    rdd.map { e =>
      val a = new Array[Int](100)
      // ... some calculation ...
    }

But here the array a is really just used as a temporary buffer and can be
reused. I am wondering if I can avoid constructing it every time? (As it
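One common way to get this effect, sketched here under the assumption that
the buffer is only needed within a single partition: allocate it once per
partition with mapPartitions instead of once per element.

    // Sketch: one scratch buffer per partition instead of per element.
    rdd.mapPartitions { iter =>
      val a = new Array[Int](100) // allocated once for the whole partition
      iter.map { e =>
        java.util.Arrays.fill(a, 0) // reset the scratch buffer per element
        // ... some calculation using a ...
        e
      }
    }

Note that iter.map is lazy, so this only works if each element's calculation
is finished with the buffer before the next element is processed.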
... speculative execution is turned on by default. Do you know what snapshot
version you used for 0.8 previously?
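In case speculation is the suspect, a minimal sketch of disabling it
explicitly in the 0.8 era, where configuration went through Java system
properties set before the SparkContext is created (spark.speculation is the
standard property name):

    import org.apache.spark.SparkContext

    // Must be set before the SparkContext is constructed.
    System.setProperty("spark.speculation", "false")
    val sc = new SparkContext("local[4]", "MyApp")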
On Mon, Nov 4, 2013 at 12:03 PM, Wenlei Xie wenlei@gmail.com wrote:
Hi,
I have an iterative program written in Spark that has been tested under
a snapshot version of Spark 0.8
PM, Wenlei Xie wenlei@gmail.com wrote:
Hi,
My iterative program written in Spark shows quite variable running times
across iterations, although the computation load is supposed to
be roughly the same. My program logic would add a batch of tuples and
delete roughly the same number of tuples
Hi,
I have an iterative program written in Spark that has been tested under a
snapshot version of Spark 0.8 before. After I ported it to the 0.8 release
version, I see performance drops on large datasets. I am wondering if
there is any clue?
I monitored the number of partitions on each
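For monitoring, a minimal sketch of logging the partition count of the
working RDD on every iteration (initial, step, and numIterations are
hypothetical stand-ins for the program's own logic):

    import org.apache.spark.rdd.RDD

    var current: RDD[(Long, Int)] = initial // hypothetical starting RDD
    for (i <- 1 to numIterations) {         // hypothetical iteration count
      current = step(current)               // hypothetical per-iteration update
      println("iteration " + i + ": " + current.partitions.length + " partitions")
    }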
wrote:
Does this help you? https://github.com/mesos/spark/pull/832
--
Reynold Xin, AMPLab, UC Berkeley
http://rxin.org
On Mon, Sep 2, 2013 at 3:24 PM, Wenlei Xie wenlei@gmail.com wrote:
Hi,
I am wondering if it is possible to get the partition positions of a cached
RDD? I am asking
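For context, a minimal sketch of one way to observe where each partition of a
cached RDD ends up being evaluated (an illustration only, not the approach in
the pull request above; rdd stands for whatever RDD you cached, and it assumes
mapPartitionsWithIndex is available in your Spark version):

    val cached = rdd.cache()
    cached.count() // force materialization so the partitions are actually cached

    // Report (partition index, executing host, element count) per partition.
    val locations = cached.mapPartitionsWithIndex { (idx, iter) =>
      val host = java.net.InetAddress.getLocalHost.getHostName
      Iterator((idx, host, iter.size))
    }.collect()

    locations.foreach { case (idx, host, n) =>
      println("partition " + idx + " on " + host + " has " + n + " elements")
    }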