Hi,
I am trying to broadcast large objects (on the order of a few hundred MB).
However, I keep getting errors when trying to do so:
Traceback (most recent call last):
  File "/LORM_experiment.py", line 510, in <module>
    broadcast_gradient_function = sc.broadcast(gradient_function)
  File
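As a first diagnostic, note that sc.broadcast() must pickle the value on the driver before shipping it, so checking the pickled size locally separates serialization problems from transport problems. A minimal, Spark-free sketch (big_object is a hypothetical stand-in for the real payload):

```python
import pickle

# hypothetical stand-in for a large broadcast payload
big_object = list(range(1_000_000))

# sc.broadcast() has to serialize the object on the driver first;
# measuring the pickled size locally shows how much data the
# broadcast will actually ship to the executors
blob = pickle.dumps(big_object, protocol=pickle.HIGHEST_PROTOCOL)
print(len(blob) > 1_000_000)  # True: several MB before compression
```

If this step already fails or is very slow, the problem is serialization on the driver rather than the broadcast transport itself.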
Hi,
Using Spark 1.2, I ran into issues setting SPARK_LOCAL_DIRS to a path other
than the local directory.
On our cluster we have a folder for temporary files (in a central file
system), which is called /scratch.
When setting SPARK_LOCAL_DIRS=/scratch/<node name>
I get:
An error occurred while
Seems like it is a bug rather than a feature.
I filed a bug report: https://issues.apache.org/jira/browse/SPARK-5363
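For reference, a typical per-node setting looks like the sketch below; the /scratch layout and the hostname-based subdirectory are hypothetical, and the directory must exist and be writable on every node:

```shell
# conf/spark-env.sh -- a sketch, not a verified workaround for SPARK-5363
export SPARK_LOCAL_DIRS=/scratch/$(hostname)
```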
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-1-slow-working-Spark-1-2-fast-freezing-tp21278p21317.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hi,
I just recently tried to migrate from Spark 1.1 to Spark 1.2, using
PySpark. Initially I was very glad, noticing that Spark 1.2 is much faster
than Spark 1.1. However, the initial joy faded quickly when I noticed that
none of my jobs terminated successfully anymore. Using
I suspect that putting a function into a shared variable incurs additional
overhead? Any suggestions for how to avoid that?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Performance-issue-tp21194p21210.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hi,
I observed a weird performance issue using Spark in combination with
Theano, and I have no real explanation for it. To illustrate the issue I am
using the pi.py example of Spark, which computes pi:
When I modify the function from the example:
# unmodified code
def f(_):
    x =
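For context, the unmodified f in the pi.py example that ships with Spark looks essentially like this (reproduced from memory of the stock example, so treat it as a sketch):

```python
from random import random

def f(_):
    # draw a point uniformly from the 2x2 square centered at the origin
    x = random() * 2 - 1
    y = random() * 2 - 1
    # count the point if it falls inside the unit circle
    return 1 if x ** 2 + y ** 2 < 1 else 0

# Monte Carlo estimate: 4 * (hits / samples) approaches pi
n = 100000
print(4.0 * sum(f(i) for i in range(n)) / n)  # roughly 3.14
```

In the example this f is applied via sc.parallelize(...).map(f).reduce(add).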
I got it working. It was a bug in Spark 1.1. After upgrading to 1.2 it
worked.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Issues-running-spark-on-cluster-tp21138p21140.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hi,
I am running PySpark on a cluster. Generally it runs; however, I frequently
get the following warning (and consequently, the task is not executed):
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check
your cluster UI to ensure that workers are registered and have sufficient memory
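This warning usually means the application requests more memory or cores per executor than any registered worker offers. A sketch of a submit command that explicitly requests less than the workers advertise in the master UI (master address and all values are hypothetical):

```shell
./bin/spark-submit \
  --master spark://master:7077 \
  --executor-memory 1g \
  --total-executor-cores 4 \
  examples/src/main/python/pi.py
```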
Hi,
I am running some Spark code on my cluster in standalone mode. However, I
have noticed that the most powerful machines (32 cores, 192 GB mem) hardly
get any tasks, whereas my small machines (8 cores, 128 GB mem) all get
plenty of tasks. The resources are all displayed correctly in the WebUI
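Two standalone-mode settings worth checking in this situation (a sketch; the values are hypothetical): spark.cores.max caps how many cores the application takes in total, and spark.deploy.spreadOut controls whether those cores are spread thinly across all workers (the default) or consolidated onto fewer machines:

```shell
./bin/spark-submit \
  --master spark://master:7077 \
  --conf spark.cores.max=48 \
  --conf spark.deploy.spreadOut=false \
  my_job.py
```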
Hi,
I am using PySpark (1.1) for some image processing tasks.
The images (held in an RDD) are on the order of several MB to low/mid
double-digit MB each. However, when running operations on this data using
Spark, memory usage blows up. Is there anything I can do about it? I
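One pattern that often helps with large elements is to process each partition as a generator, so only one image is materialized at a time; in PySpark the same function plugs into rdd.mapPartitions(process_partition). A Spark-free sketch (the byte strings are hypothetical stand-ins for image data):

```python
def process_partition(images):
    # generator in, generator out: only one image is decoded and
    # processed at a time, so peak memory stays at a single element
    for raw in images:
        yield len(raw)  # stand-in for the real image processing step

# local demonstration with two fake "images"
out = list(process_partition(iter([b"\x00" * 10, b"\x00\x00"])))
print(out)  # [10, 2]
```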
Hi,
I get exactly the same error. It runs on my local machine but not on the
cluster. I am running the pi.py example.
Best,
Tassilo
Hi there,
I am trying to run the example code pi.py on a cluster; however, I have only
got it working on localhost. When trying to run in standalone mode with
./bin/spark-submit \
  --master spark://[mymaster]:7077 \
  examples/src/main/python/pi.py \
I get warnings about resources and memory (the
Hi,
I have an issue with running Spark in standalone mode on a cluster.
Everything seems to run fine for a couple of minutes until Spark stops
executing the tasks.
Any idea?
Would appreciate some help.
Thanks in advance,
Tassilo
I get errors like this at the end:
14/10/31 16:16:59 INFO
Hi there,
I am trying to run Spark on a YARN-managed cluster using Python (which
requires yarn-client mode). However, I cannot get it running (the same
happens with the example apps).
Using spark-submit to launch the script I get the following warning:
WARN cluster.YarnClientClusterScheduler: Initial job has not
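In yarn-client mode this warning often means the requested executor containers are never granted by YARN. A sketch of explicit sizing flags (all values hypothetical; the requests must fit within the YARN container limits, e.g. yarn.scheduler.maximum-allocation-mb):

```shell
./bin/spark-submit \
  --master yarn-client \
  --num-executors 4 \
  --executor-cores 2 \
  --executor-memory 2g \
  my_script.py
```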
Hi Andrew,
thanks for trying to help. However, I am a bit confused now. I'm not setting
any 'spark.driver.host'; in particular, spark-defaults.conf is
empty/non-existing. I thought this is only required when running Spark in
standalone mode. Isn't it the case that when using YARN all the configuration
Hi Marco,
I have the same issue. Did you fix it by chance? How?
Best,
Tassilo
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/cannot-run-spark-shell-in-yarn-client-mode-tp4013p17603.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Hi,
I'd like to run my Python script using spark-submit together with a JAR
file containing Java classes for a Hadoop file system. How can I do
that? It seems I can provide either a JAR file or a Python file to
spark-submit.
So far I have been running my code in IPython with
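spark-submit can take both: the Python file is the application, and JARs ride along via --jars (the paths below are hypothetical). Depending on the Spark version, the driver side may additionally need --driver-class-path for classes that are used in the driver:

```shell
./bin/spark-submit \
  --jars /path/to/my-hadoop-fs.jar \
  --driver-class-path /path/to/my-hadoop-fs.jar \
  my_script.py
```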
Hi,
I am using Spark in Python. I wonder if there is a way to pass
extra arguments to the mapping function. In my scenario, after each map I
update parameters, which I then want to use in the following iteration of
mapping. Any ideas?
Thanks in advance.
-Tassilo
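One standard way to do this (a Spark-free sketch; scale and factor are hypothetical names) is to bind the extra argument with functools.partial and rebuild the mapper each iteration; in PySpark the resulting object goes straight into rdd.map(...):

```python
from functools import partial

def scale(factor, x):
    # `factor` is the extra argument, `x` is the RDD element
    return factor * x

# bind the current parameter value; in PySpark: rdd.map(partial(scale, 3))
mapper = partial(scale, 3)
print([mapper(x) for x in [1, 2, 4]])  # [3, 6, 12]

# after updating the parameter, build a fresh mapper for the next iteration
mapper = partial(scale, 5)
print(mapper(2))  # 10
```

Because partial captures the value at construction time, each iteration's map sees exactly the parameters it was built with.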
Thanks for the nice example.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Mapping-with-extra-arguments-tp12541p12549.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Thanks. As my files are defined to be non-splittable, I eventually ended up
using mapPartitionsWithIndex(), taking the split ID as the index:
def g(splitIndex, iterator):
    yield (splitIndex, next(iterator))

myRDD.mapPartitionsWithIndex(g)
--
View this message in context:
Hi,
I wonder if there is something like a (row) index for the elements in the
RDD. Specifically, my RDD is generated from a series of files, where the
value corresponds to the file contents. Ideally, I would like the keys
to be an enumeration of the file number, e.g. (0, file contents
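PySpark's rdd.zipWithIndex() pairs each element with its position; locally its behavior corresponds to enumerate with the pair order swapped. A sketch (the file contents are hypothetical):

```python
contents = ["contents of file 0", "contents of file 1"]

# rdd.zipWithIndex() yields (element, index) pairs:
zipped = [(v, i) for i, v in enumerate(contents)]

# swap to the desired (file number, file contents) layout;
# in PySpark: rdd.zipWithIndex().map(lambda vi: (vi[1], vi[0]))
keyed = [(i, v) for v, i in zipped]
print(keyed[0])  # (0, 'contents of file 0')
```

Note that zipWithIndex numbers elements by partition order, so the index matches the file order only if the RDD's partition order reflects it.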
Hi,
is there a way to group items in an RDD together such that I can process
them using parallelize/map?
Let's say I have data items with keys 1...1000, e.g. loaded as
RDD = sc.newAPIHadoopFile(...).cache()
Now I would like them to be processed in chunks of, e.g., tens
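One way (a Spark-free sketch; CHUNK and the payloads are hypothetical) is to derive a chunk id from each key and group on it; in PySpark this would be rdd.map(lambda kv: (kv[0] // CHUNK, kv)).groupByKey():

```python
CHUNK = 10
items = [(k, "payload-%d" % k) for k in range(1, 31)]

# assign each key to a chunk of ten consecutive keys, then group
chunks = {}
for k, v in items:
    chunks.setdefault(k // CHUNK, []).append((k, v))

print(sorted(chunks))  # [0, 1, 2, 3]
print(len(chunks[1]))  # keys 10..19 land in chunk 1: 10 items
```

Each grouped chunk can then be processed as a unit in a subsequent map over the grouped RDD.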
Thanks a lot. Yes, mapPartitions seems a better way of dealing with this
problem, since for groupBy() I would need to collect() the data before
applying parallelize(), which is expensive.
--
View this message in context:
Yes, thanks, great. That seems to be the issue.
At least running with spark-submit works as well.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Using-Hadoop-InputFormat-in-Python-tp12067p12126.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.