Ewan,
What issue are you having with HDFS when only Spark is installed? I'm not aware
of any issue like this.
Thanks,
Jonathan
—
Sent from Mailbox
On Wed, Sep 9, 2015 at 11:48 PM, Ewan Leith
wrote:
> The last time I checked, if you launch EMR 4 with
It might be a network issue. The error says it failed to bind the server IP
address.
Chester
Sent from my iPhone
On Jul 18, 2015, at 11:46 AM, Amjad ALSHABANI ashshab...@gmail.com wrote:
Does anybody have any idea about the error I'm having? I'm really
clueless... and would appreciate any ideas.
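For the bind error above, one thing worth checking (a minimal sketch, not a
confirmed fix for this particular setup; the address is a placeholder) is the
host the driver binds to and advertises:

  import org.apache.spark.{SparkConf, SparkContext}

  // Hedged sketch: pin the driver's host explicitly so Akka does not bind to
  // an interface the executors cannot reach. "192.168.1.10" is a placeholder.
  val conf = new SparkConf()
    .setAppName("bind-address-check")
    .set("spark.driver.host", "192.168.1.10")
  val sc = new SparkContext(conf)

Setting the SPARK_LOCAL_IP environment variable on the affected machine serves
a similar purpose for the standalone daemons.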
We used both Spray and Akka. To avoid compatibility issues, we used the
Spark-shaded Akka version. It works for us. This was on the 1.1.0 branch;
I have not tried the master branch.
Chester
Sent from my iPad
On Oct 28, 2014, at 11:48 PM, Prashant Sharma scrapco...@gmail.com wrote:
Yes we shade akka to
They should be the same except that the package names are changed to avoid a
protobuf conflict. You can use them just like other Akka jars.
Chester
Sent from my iPhone
On Oct 17, 2014, at 5:56 AM, Ruebenacker, Oliver A
oliver.ruebenac...@altisource.com wrote:
Hello,
My SBT pulls in,
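For what it's worth, a hedged sketch of what pulling the shaded Akka artifacts
into SBT can look like (the org.spark-project.akka coordinates are my
assumption about the shaded artifacts being discussed; the exact version
string should come from the POM of the Spark release you build against):

  // build.sbt (sketch): the shaded artifacts are published under
  // org.spark-project.akka, so they do not collide with a stock Akka.
  libraryDependencies += "org.spark-project.akka" %% "akka-actor"  % "2.2.3-shaded-protobuf"
  libraryDependencies += "org.spark-project.akka" %% "akka-remote" % "2.2.3-shaded-protobuf"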
All,
Sorry, this is not strictly Spark related, but I thought some of you in San
Francisco might be interested in this talk. We announced this talk recently;
it will be at the end of next month (October).
http://www.meetup.com/sfmachinelearning/events/208078582/
Prof. C.J. Lin is famous for his work on LIBSVM.
Narrell matt.narr...@gmail.com
wrote:
How does this work with a cluster manager like YARN?
mn
On Sep 25, 2014, at 2:23 PM, Andrew Or and...@databricks.com wrote:
Hi Harsha,
You can turn on `spark.eventLog.enabled` as documented here:
http://spark.apache.org/docs/latest/monitoring.html
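A minimal sketch of the same setting applied in code (the log directory is a
placeholder; putting the properties in spark-defaults.conf works as well):

  import org.apache.spark.{SparkConf, SparkContext}

  // Hedged sketch: enable event logging so finished applications remain
  // visible in the UI/history server. The directory is a placeholder and
  // must already exist.
  val conf = new SparkConf()
    .setAppName("event-log-example")
    .set("spark.eventLog.enabled", "true")
    .set("spark.eventLog.dir", "hdfs:///user/spark/applicationHistory")
  val sc = new SparkContext(conf)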
Archit
We are using yarn-cluster mode, calling Spark via the Client class directly
from a servlet server. It works fine.
As for establishing a communication channel to give further requests: it
should be possible with yarn-client, but not with yarn-cluster. In
yarn-client mode, the Spark driver
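A rough sketch of what calling the Client class directly can look like on
Spark 1.x (org.apache.spark.deploy.yarn.Client is not a stable public API,
and the flags, paths and class names below are illustrative assumptions only):

  import org.apache.spark.deploy.yarn.Client

  // Hedged sketch: submit a yarn-cluster application from inside the servlet
  // container's JVM. All values are placeholders.
  val args = Array(
    "--jar",   "/opt/jobs/my-spark-job.jar",
    "--class", "com.example.MyJob",
    "--arg",   "hdfs:///data/input")
  Client.main(args)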
and tried to sync up ntp but it doesn't seem
to work.
Can someone help? Your help is highly appreciated!
Thanks,
Jian
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/jar-changed-on-src-filesystem-tp10011.html
Sent from the Apache Spark User List
Thanks Marcelo! This is a huge help!!
Looking at the executor logs (in a vanilla spark install, I'm finding them
in $SPARK_HOME/work/*)...
It launches the executor, but it looks like the
CoarseGrainedExecutorBackend is having trouble talking to the driver
(exactly what you said!!!).
Do you
Have you tried peeking into its log file?
(That error is printed whenever the executors fail to report back to
the driver. Insufficient resources to launch the executor is the most
common cause of that, but not the only one.)
On Tue, Jul 15, 2014 at 2:43 PM, Matt Work Coarr
mattcoarr.w
Hello spark folks,
I have a simple Spark cluster setup, but I can't get jobs to run on it. I am
using standalone mode.
One master, one slave. Both machines have 32GB of RAM and 8 cores.
The slave is set up with one worker that has 8 cores and 24GB of memory
allocated.
My application requires 2
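As Marcelo notes above, executors that never register are often a resource
mismatch. A hedged sketch of keeping the application's requests inside what
the single worker described here offers (master URL and sizes are
placeholders):

  import org.apache.spark.{SparkConf, SparkContext}

  // Hedged sketch: the worker advertises 8 cores and 24GB, so ask for no more.
  val conf = new SparkConf()
    .setAppName("resource-fit-check")
    .setMaster("spark://master-host:7077")   // placeholder master URL
    .set("spark.executor.memory", "4g")      // must fit inside the worker's 24GB
    .set("spark.cores.max", "8")             // cap total cores for this application
  val sc = new SparkContext(conf)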
?)
Thank you,
Konstantin Kudryavtsev
On Mon, Jul 7, 2014 at 4:34 AM, Robert James srobertja...@gmail.com
wrote:
I can say from my experience that getting Spark to work with Hadoop 2
is not for the beginner; after solving one problem after another
(dependencies, scripts, etc.), I went back
  try:
    # look up the AMI id at ami_path; fail hard if it cannot be resolved
    ami = urllib2.urlopen(ami_path).read().strip()
    print "Spark AMI: " + ami
  except:
    print >> stderr, "Could not resolve AMI at: " + ami_path
    sys.exit(1)
  return ami
Thanks
Best Regards
On Fri, Jun 6, 2014 at 2:14 AM, Matt Work Coarr mattcoarr.w...@gmail.com
wrote:
How would I go about creating a new AMI image that I can
Thanks Akhil! I'll give that a try!
How would I go about creating a new AMI image that I can use with the spark
ec2 commands? I can't seem to find any documentation. I'm looking for a
list of steps that I'd need to perform to make an Amazon Linux image ready
to be used by the spark ec2 tools.
I've been reading through the spark
Hi, I'm attempting to run spark-ec2 launch on AWS. My AWS instances
would be in our EC2 VPC (which seems to be causing a problem).
The two security groups MyClusterName-master and MyClusterName-slaves have
already been set up with the same ports open as the security group that
spark-ec2 tries to
For various SchemaRDD functions like select, where, orderBy, groupBy, etc., I
would like to create expression objects and pass them to the methods for
execution.
Can someone show some examples of how to create expressions for a case class
and execute them? E.g., how to create expressions for select,
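One way to build such expressions on Spark 1.0/1.1 is the experimental
language-integrated DSL, where Scala Symbols act as the expression objects.
A hedged sketch (the case class and field names are made up for illustration):

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.SQLContext

  case class Person(name: String, age: Int)

  val sc = new SparkContext(new SparkConf().setAppName("schemardd-dsl"))
  val sqlContext = new SQLContext(sc)
  import sqlContext._   // implicits: RDD[Person] -> SchemaRDD, Symbol -> attribute

  val people = sc.parallelize(Seq(Person("alice", 15), Person("bob", 32)))

  // 'age and 'name are the expression objects passed to where/select.
  val teenagers = people.where('age >= 13).where('age <= 19).select('name)
  teenagers.collect().foreach(println)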