RE: Using Trello to Show Mid to High Level features in Apache Zeppelin

2015-07-23 Thread Marko Galesic
Hi moon, I see your point that there would be overhead in managing two systems. However, I don’t believe that working within JIRA will achieve what I’m thinking of. I’m impressed that there are people who use JIRA and seem to be end users, but I suspect these are advanced users – edging

Re: Using Trello to Show Mid to High Level features in Apache Zeppelin

2015-07-23 Thread A B
Hi guys! I find the suggestion to vote via Trello totally cool and would support it. So if everyone is OK with this, let's do it. I have been looking for a way to run a community process for prioritizing things for quite some time (and have also played with various JIRA workarounds) - but

Spark memory configuration

2015-07-23 Thread PHELIPOT, REMY
Hello! I am trying to launch some very memory-hungry processes on a Spark 1.4 cluster using Zeppelin, and I don't understand how to configure Spark memory properly. I’ve tried to set the SPARK_MASTER_MEMORY, SPARK_WORKER_MEMORY and SPARK_EXECUTOR_MEMORY environment variables on the Spark cluster nodes,

pyspark not working for me...

2015-07-23 Thread IT CTO
I am trying the simple thing in pyspark:

%pyspark
rdd = sc.parallelize([1,2,3])
print(rdd.collect())
z.show(sqlContext.createDataFrame(rdd))

and keep getting this error:

Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark.py", line 116, in <module>
    eval(compiledCode)
  File "<string>", line 3, in
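The traceback is cut off, but since line 3 of the snippet is the createDataFrame call, one common cause in Spark 1.x is that a DataFrame schema cannot be inferred from bare integers. A minimal sketch of a workaround, assuming that is the failure, wraps each value in a Row:

%pyspark
from pyspark.sql import Row

rdd = sc.parallelize([1, 2, 3])
print(rdd.collect())
# Spark 1.x cannot infer a DataFrame schema from plain ints,
# so map each element to a Row with a named column first.
df = sqlContext.createDataFrame(rdd.map(lambda x: Row(value=x)))
z.show(df)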

Re: Fwd: Pyspark Syntax Error

2015-07-23 Thread felixcheung_m
It looks like pyspark is not able to talk to its Java classes. Could you double-check that SPARK_HOME etc. are correctly set? On Wed, Jul 22, 2015 at 1:03 PM -0700, Renxia Wang renxia.w...@gmail.com wrote: Hi guys, I am trying to run Zeppelin locally and interact with Spark in local mode. I ran
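For reference, a common way to set that up (the path below is just a placeholder for your own installation) is to export SPARK_HOME in conf/zeppelin-env.sh, e.g. export SPARK_HOME=/path/to/spark, and then restart Zeppelin so the pyspark interpreter can find Spark's Python libraries.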

Re: Local class incompatible?

2015-07-23 Thread Albert Yoon
Hi, although SPARK_HOME is correctly set and Z is built with -Dpyspark, I still get a local class incompatible error whenever I run any pyspark in Z. The job is correctly sent to and processed on the cluster, but the error seems to be thrown after the final stage (Python RDD deserialization?). I followed the instruction
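A "local class incompatible" InvalidClassException usually means the Spark classes Zeppelin was built against differ from the ones running on the cluster, so the serialVersionUIDs no longer match. As a sketch (exact profile names depend on the Zeppelin version, so treat the command as illustrative), rebuilding against the cluster's Spark release with something like mvn clean package -Pspark-1.4 -Dpyspark -DskipTests, and pointing SPARK_HOME at that same release, is the usual fix.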

Re: Using Trello to Show Mid to High Level features in Apache Zeppelin

2015-07-23 Thread Alexander Bezzubov
Guys, thank you for the great suggestions! Am I right that you suggest using Trello not instead of the ASF-hosted JIRA but together with it, and that you are volunteering to support it as a tool for prioritizing users' feedback? Also, what do you think: should we then move further discussion to the

Re: Spark memory configuration

2015-07-23 Thread Alexander Bezzubov
Hi, thank you for your interest in Zeppelin! You just have to set the 'Spark' interpreter properties in the 'Interpreters' menu, for example:

CPU: spark.cores.max = 24
Memory: spark.executor.memory = 22g

You can actually use any of the http://spark.apache.org/docs/latest/configuration.html#application-properties
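If it helps, a quick sanity check from a notebook paragraph (a sketch; sc._conf is PySpark's private handle to the SparkConf, so the exact attribute may vary by version) is to print what the driver actually requested:

%pyspark
# Print the values the Spark interpreter actually passed to the driver.
# "not set" is returned if the property was never configured.
print(sc._conf.get("spark.executor.memory", "not set"))
print(sc._conf.get("spark.cores.max", "not set"))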