by default? Can it store to both
memory and disk? How can it be configured?
Thanks,
Abhi
Is it feasible?
Any help is appreciated.
Thanks,
Abhi
If I understand correctly, the document above creates pools for priorities,
which are static in nature and have to be defined before submitting the job.
In my scenario, each generated task can have a different priority.
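For reference, fair-scheduler pools in Spark are indeed declared statically in an XML file before the application starts. A sketch (the pool names, weights, and shares here are made up):

```xml
<!-- fairscheduler.xml: pools are fixed up front, -->
<!-- but each job can be routed to one at submission time. -->
<allocations>
  <pool name="high">
    <schedulingMode>FAIR</schedulingMode>
    <weight>4</weight>
    <minShare>2</minShare>
  </pool>
  <pool name="low">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
</allocations>
```

A job can still be routed to a pool at runtime from the submitting thread with `sc.setLocalProperty("spark.scheduler.pool", "high")`, but arbitrary per-job priorities beyond the predefined pools are not supported by the scheduler itself.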
Thanks,
Abhi
On Mon, Mar 16, 2015 at 9:48 PM, twinkle sachdeva
twinkle.sachd
Yes.
Each generated job can have a different priority. It is like a recursive
function, where each iteration generates a job that is submitted to the
Spark cluster based on its priority. Jobs with lower priority, or below
some threshold, will be discarded.
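The discard-by-threshold flow described above can be sketched in pure Python. This is only the scheduling logic, not the Spark submission itself; `run_job`, the job names, the priorities, and the threshold are all hypothetical:

```python
import heapq

def run_by_priority(seed_jobs, run_job, threshold):
    """Process jobs highest-priority first.

    seed_jobs: list of (priority, job) pairs.
    run_job(job): runs a job and returns child (priority, job) pairs.
    Children (and seeds) below `threshold` are discarded.
    """
    # Max-heap via negated priorities.
    heap = [(-p, job) for p, job in seed_jobs if p >= threshold]
    heapq.heapify(heap)
    executed = []
    while heap:
        _, job = heapq.heappop(heap)
        executed.append(job)
        for child_priority, child in run_job(job):
            if child_priority >= threshold:
                heapq.heappush(heap, (-child_priority, child))
    return executed

# Hypothetical job graph: "a" spawns "b" (0.9) and "c" (0.1), "b" spawns "d".
def run_job(job):
    children = {"a": [(0.9, "b"), (0.1, "c")], "b": [(0.5, "d")]}
    return children.get(job, [])

print(run_by_priority([(1.0, "a")], run_job, threshold=0.2))
```

With a threshold of 0.2, job "c" (priority 0.1) is dropped, so the execution order is a, b, d.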
Thanks,
Abhi
On Mon, Mar 16, 2015
Thanks,
It worked.
-Abhi
On Tue, Mar 3, 2015 at 5:15 PM, Tobias Pfeiffer t...@preferred.jp wrote:
Hi,
On Wed, Mar 4, 2015 at 6:20 AM, Zhan Zhang zzh...@hortonworks.com wrote:
Do you have enough resource in your cluster? You can check your resource
manager to see the usage.
Yep, I can
I am trying to run the Java class below on a YARN cluster, but it hangs in
the ACCEPTED state. I don't see any error. Below are the class and the command.
Any help is appreciated.
Thanks,
Abhi
bin/spark-submit --class com.mycompany.app.SimpleApp --master yarn-cluster
/home/hduser/my-app-1.0.jar
--
Abhi Basu
I am working with CDH5.2 (Spark 1.0.0) and wondering which version of Spark
comes with SparkSQL by default. Also, will SparkSQL come enabled to access
the Hive metastore? Is there an easier way to enable Hive support without
having to build the code with various switches?
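For stock Apache Spark 1.x (CDH's packaging may differ), Hive support is compiled in with the `-Phive` build profile, and an existing metastore is picked up from `hive-site.xml` on Spark's classpath. A sketch, where the `/etc/hive/conf` location is an assumption about a typical install:

```sh
# Build Spark 1.x with Hive support (sbt accepts the same profiles).
mvn -Pyarn -Phadoop-2.3 -Phive -DskipTests clean package

# Point Spark at an existing Hive metastore by copying its config.
cp /etc/hive/conf/hive-site.xml $SPARK_HOME/conf/
```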
Thanks,
Abhi
--
Abhi
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
--
Abhi Basu
Thanks,
Abhi
On Fri, Dec 12, 2014 at 6:40 PM, Stephen Boesch java...@gmail.com wrote:
What is the proper way to build with Hive from sbt? The SPARK_HIVE flag is
deprecated. However, after running the following:
sbt -Pyarn -Phadoop-2.3 -Phive assembly/assembly
And then
bin/pyspark
hivectx