Which version of Spark are you using?
>
> Is the CapacityScheduler being used?
>
> Thanks
>
> On Thu, Jun 2, 2016 at 1:32 AM, Prabeesh K. <prabsma...@gmail.com> wrote:
>
>> Hi, I am using the below command to run a Spark job and I get an error
>> like "Container preempted by scheduler"
Hi, I am using the below command to run a Spark job and I get an error like
"Container preempted by scheduler".
I am not sure if it is related to wrong usage of memory:

nohup ~/spark1.3/bin/spark-submit \
  --num-executors 50 \
  --master yarn \
  --deploy-mode cluster \
  --queue adhoc \
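For what it's worth, "Container preempted by scheduler" usually means YARN reclaimed the container for a higher-priority queue, rather than a memory misconfiguration. A minimal sketch of what a complete submission might look like; the memory flags, the application file name, and the log redirection are illustrative assumptions, not taken from the truncated command above:

```shell
# Sketch of a full spark-submit invocation on YARN (flag values are
# illustrative; tune them to your cluster and queue limits).
nohup ~/spark1.3/bin/spark-submit \
  --num-executors 50 \
  --executor-memory 4g \
  --driver-memory 2g \
  --master yarn \
  --deploy-mode cluster \
  --queue adhoc \
  myjob.py > job.log 2>&1 &
```

If the queue is over its capacity, lowering --num-executors or moving the job to a less contended queue is more likely to stop the preemption than changing the memory flags.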
Refer to this post
http://blog.prabeeshk.com/blog/2015/06/19/pyspark-notebook-with-docker/
Spark + Jupyter + Docker
On 18 August 2015 at 21:29, Jerry Lam chiling...@gmail.com wrote:
Hi Guru,
Thanks! Great to hear that someone tried it in production. How do you like
it so far?
Best Regards,
Refer to this post
http://blog.prabeeshk.com/blog/2015/04/07/self-contained-pyspark-application/
On 13 April 2015 at 17:41, Punya Biswal pbis...@palantir.com wrote:
Dear Spark users,
My team is working on a small library that builds on PySpark and is
organized like PySpark as well -- it has a
You can also refer to this blog http://blog.prabeeshk.com/blog/archives/
On 2 April 2015 at 12:19, Star Guo st...@ceph.me wrote:
Hi, all
I am new here. Could you give me some suggestions for learning Spark?
Thanks.
Best Regards,
Star Guo
Refer to this blog
http://blog.prabeeshk.com/blog/2014/10/31/install-apache-spark-on-ubuntu-14-dot-04/
for a step-by-step installation of Spark on Ubuntu.
On 7 February 2015 at 03:12, Matei Zaharia matei.zaha...@gmail.com wrote:
You don't need HDFS or virtual machines to run Spark. You can just
You can refer to the following link:
https://github.com/prabeesh/Spark-Kestrel
On Tue, Nov 18, 2014 at 3:51 PM, Akhil Das ak...@sigmoidanalytics.com
wrote:
You can implement a custom receiver
http://spark.apache.org/docs/latest/streaming-custom-receivers.html to
connect to Kestrel and use it. I
Try the "sbt clean" command before building the app,
or delete the .ivy2 and .sbt folders (not a good method). Then try to rebuild
the project.
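The cleanup steps above, as a sketch; the paths assume sbt's default cache locations in the home directory, and deleting them forces every dependency to be re-fetched on the next build:

```shell
# Clean the project's own build output first (the gentle option)
sbt clean

# Heavier hammer: remove the sbt and ivy caches entirely; everything
# will be re-downloaded on the next build, which can take a while
rm -rf ~/.ivy2 ~/.sbt

# Then rebuild the project from scratch
sbt package
```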
On Thu, Jun 5, 2014 at 11:45 AM, Sean Owen so...@cloudera.com wrote:
I think this is SPARK-1949 again: https://github.com/apache/spark/pull/906
I think this
For building Spark against a particular version of Hadoop, refer to
http://spark.apache.org/docs/latest/hadoop-third-party-distributions.html
On Thu, Jun 5, 2014 at 8:14 AM, Koert Kuipers ko...@tresata.com wrote:
You have to build Spark against the version of Hadoop you are using.
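As a sketch, using the Maven profile style documented for Spark 1.x; the exact profile name and Hadoop version below are assumptions and must match your cluster's distribution:

```shell
# Build Spark against a specific Hadoop version (Spark 1.x era style);
# adjust the profile and -Dhadoop.version to match your cluster
mvn -Pyarn -Phadoop-2.2 -Dhadoop.version=2.2.0 -DskipTests clean package
```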
On Wed, Jun 4, 2014
Hi,
Scenario: read data from HDFS, apply a Hive query to it, and write the result
back to HDFS.
Schema creation, querying, and saveAsTextFile are working fine in the
following modes:
- local mode
- Mesos cluster with a single node
- Spark cluster with multiple nodes
Schema creation and
Please update the http://spark.apache.org/docs/latest/ link
On Fri, May 30, 2014 at 4:03 PM, Margusja mar...@roo.ee wrote:
Is it possible to download a pre-built package?
http://mirror.symnds.com/software/Apache/incubator/spark/spark-1.0.0/spark-1.0.0-bin-hadoop2.tgz
gives me a 404.
Best
Hi,
I am trying to apply an inner join in Shark using 64 MB and 27 MB files. I am
able to run the following queries on Mesos:
- SELECT * FROM geoLocation1
- SELECT * FROM geoLocation1 WHERE country = 'US'
But while trying an inner join such as
SELECT * FROM geoLocation1 g1 INNER JOIN
at 11:22 AM, prabeesh k prabsma...@gmail.com wrote:
Hi,
I have seen three different ways to query data from Spark:
1. Default SQL support (
https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/sql/examples/HiveFromSpark.scala
)
2. Shark
3. Blink
Ensure there is only one SimpleApp object in your project, and also check
whether there is any copy of SimpleApp.scala.
Normally the file SimpleApp.scala is in src/main/scala or in the project root
folder.
On Sat, Apr 12, 2014 at 11:07 AM, jni2000 james...@federatedwireless.com wrote:
Hi
I am a new Spark user
Hi all,
Here I am sharing a blog for beginners about creating a standalone Spark
Streaming application and bundling the app as a single runnable jar. Take a
look and drop your comments on the blog page.
http://prabstechblog.blogspot.in/2014/04/a-standalone-spark-application-in-scala.html
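The workflow the blog describes (standalone app, bundled as one runnable jar) can be sketched like this, assuming the sbt-assembly plugin is configured in the project; the class name and jar path below are placeholders, not taken from the blog:

```shell
# Build a fat jar containing the app and its dependencies
# (requires the sbt-assembly plugin in project/plugins.sbt)
sbt assembly

# Run it with spark-submit; the class and jar names are illustrative
spark-submit --class com.example.StreamingApp \
  target/scala-2.10/streaming-app-assembly-0.1.jar
```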