[jira] [Created] (ZEPPELIN-740) Remove spark.executor.memory default to 512m

2016-03-11 Thread Jonathan Kelly (JIRA)
Jonathan Kelly created ZEPPELIN-740:
---

 Summary: Remove spark.executor.memory default to 512m
 Key: ZEPPELIN-740
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-740
 Project: Zeppelin
  Issue Type: Improvement
Affects Versions: 0.5.6
Reporter: Jonathan Kelly
Assignee: Jonathan Kelly
Priority: Trivial
 Fix For: 0.6.0


The Spark interpreter currently honors whatever is set for 
spark.executor.memory in spark-defaults.conf upon startup, but if you look at 
the Interpreter page, you'll see that it has a default of 512m. If you restart 
a running Spark interpreter from this page, the new SparkContext will use this 
new default of spark.executor.memory=512m instead of what it had previously 
pulled from spark-defaults.conf.

Removing this 512m default from the SparkInterpreter code will allow 
spark.executor.memory to default to whatever value may be set in 
spark-defaults.conf, falling back to Spark's built-in default (which, by the 
way, has been 1g rather than 512m for the last several Spark versions).
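The intended fallback order can be sketched as a tiny resolver. This is a toy model, not Zeppelin's actual code; the property name is the real Spark one, but the function and its arguments are hypothetical:

```python
def resolve_executor_memory(interpreter_props, spark_defaults, builtin_default="1g"):
    """Pick spark.executor.memory: an explicit interpreter setting wins,
    then spark-defaults.conf, then Spark's built-in default."""
    key = "spark.executor.memory"
    return interpreter_props.get(key) or spark_defaults.get(key) or builtin_default

# With the hard-coded 512m removed from the interpreter setting, a value
# from spark-defaults.conf survives an interpreter restart:
print(resolve_executor_memory({}, {"spark.executor.memory": "4g"}))  # 4g
```

The point of the change is simply that the interpreter should contribute nothing for this key unless the user sets it, so the lower layers win.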



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ZEPPELIN-736) Remove spark.executor.memory default to 512m

2016-03-11 Thread Jonathan Kelly (JIRA)
Jonathan Kelly created ZEPPELIN-736:
---

 Summary: Remove spark.executor.memory default to 512m
 Key: ZEPPELIN-736
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-736
 Project: Zeppelin
  Issue Type: Improvement
Affects Versions: 0.5.6
Reporter: Jonathan Kelly
Assignee: Jonathan Kelly
Priority: Trivial
 Fix For: 0.6.0


The Spark interpreter currently honors whatever is set for 
spark.executor.memory in spark-defaults.conf upon startup, but if you look at 
the Interpreter page, you'll see that it has a default of 512m. If you restart 
a running Spark interpreter from this page, the new SparkContext will use this 
new default of spark.executor.memory=512m instead of what it had previously 
pulled from spark-defaults.conf.

Removing this 512m default from the SparkInterpreter code will allow 
spark.executor.memory to default to whatever value may be set in 
spark-defaults.conf, falling back to Spark's built-in default (which, by the 
way, has been 1g rather than 512m for the last several Spark versions).





[jira] [Created] (ZEPPELIN-737) Remove spark.executor.memory default to 512m

2016-03-11 Thread Jonathan Kelly (JIRA)
Jonathan Kelly created ZEPPELIN-737:
---

 Summary: Remove spark.executor.memory default to 512m
 Key: ZEPPELIN-737
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-737
 Project: Zeppelin
  Issue Type: Improvement
Affects Versions: 0.5.6
Reporter: Jonathan Kelly
Assignee: Jonathan Kelly
Priority: Trivial
 Fix For: 0.6.0


The Spark interpreter currently honors whatever is set for 
spark.executor.memory in spark-defaults.conf upon startup, but if you look at 
the Interpreter page, you'll see that it has a default of 512m. If you restart 
a running Spark interpreter from this page, the new SparkContext will use this 
new default of spark.executor.memory=512m instead of what it had previously 
pulled from spark-defaults.conf.

Removing this 512m default from the SparkInterpreter code will allow 
spark.executor.memory to default to whatever value may be set in 
spark-defaults.conf, falling back to Spark's built-in default (which, by the 
way, has been 1g rather than 512m for the last several Spark versions).





[jira] [Created] (ZEPPELIN-738) Remove spark.executor.memory default to 512m

2016-03-11 Thread Jonathan Kelly (JIRA)
Jonathan Kelly created ZEPPELIN-738:
---

 Summary: Remove spark.executor.memory default to 512m
 Key: ZEPPELIN-738
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-738
 Project: Zeppelin
  Issue Type: Improvement
Affects Versions: 0.5.6
Reporter: Jonathan Kelly
Assignee: Jonathan Kelly
Priority: Trivial
 Fix For: 0.6.0


The Spark interpreter currently honors whatever is set for 
spark.executor.memory in spark-defaults.conf upon startup, but if you look at 
the Interpreter page, you'll see that it has a default of 512m. If you restart 
a running Spark interpreter from this page, the new SparkContext will use this 
new default of spark.executor.memory=512m instead of what it had previously 
pulled from spark-defaults.conf.

Removing this 512m default from the SparkInterpreter code will allow 
spark.executor.memory to default to whatever value may be set in 
spark-defaults.conf, falling back to Spark's built-in default (which, by the 
way, has been 1g rather than 512m for the last several Spark versions).





[jira] [Created] (ZEPPELIN-739) Remove spark.executor.memory default to 512m

2016-03-11 Thread Jonathan Kelly (JIRA)
Jonathan Kelly created ZEPPELIN-739:
---

 Summary: Remove spark.executor.memory default to 512m
 Key: ZEPPELIN-739
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-739
 Project: Zeppelin
  Issue Type: Improvement
Affects Versions: 0.5.6
Reporter: Jonathan Kelly
Assignee: Jonathan Kelly
Priority: Trivial
 Fix For: 0.6.0


The Spark interpreter currently honors whatever is set for 
spark.executor.memory in spark-defaults.conf upon startup, but if you look at 
the Interpreter page, you'll see that it has a default of 512m. If you restart 
a running Spark interpreter from this page, the new SparkContext will use this 
new default of spark.executor.memory=512m instead of what it had previously 
pulled from spark-defaults.conf.

Removing this 512m default from the SparkInterpreter code will allow 
spark.executor.memory to default to whatever value may be set in 
spark-defaults.conf, falling back to Spark's built-in default (which, by the 
way, has been 1g rather than 512m for the last several Spark versions).





[jira] [Created] (ZEPPELIN-735) Remove spark.executor.memory default to 512m

2016-03-11 Thread Jonathan Kelly (JIRA)
Jonathan Kelly created ZEPPELIN-735:
---

 Summary: Remove spark.executor.memory default to 512m
 Key: ZEPPELIN-735
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-735
 Project: Zeppelin
  Issue Type: Improvement
Affects Versions: 0.5.6
Reporter: Jonathan Kelly
Assignee: Jonathan Kelly
Priority: Trivial
 Fix For: 0.6.0


The Spark interpreter currently honors whatever is set for 
spark.executor.memory in spark-defaults.conf upon startup, but if you look at 
the Interpreter page, you'll see that it has a default of 512m. If you restart 
a running Spark interpreter from this page, the new SparkContext will use this 
new default of spark.executor.memory=512m instead of what it had previously 
pulled from spark-defaults.conf.

Removing this 512m default from the SparkInterpreter code will allow 
spark.executor.memory to default to whatever value may be set in 
spark-defaults.conf, falling back to Spark's built-in default (which, by the 
way, has been 1g rather than 512m for the last several Spark versions).





Re: csv dependencies loaded in %spark but not %sql in spark 1.6/zeppelin 0.5.6

2016-02-02 Thread Jonathan Kelly
Awesome, thank you! BTW, I know that the Zeppelin 0.5.6 release was only
very recently, but do you happen to know yet when you plan on releasing
0.6.0?

On Tue, Feb 2, 2016 at 1:07 PM mina lee  wrote:

> This issue was fixed a few days ago in the master branch.
>
> Here is the PR
> https://github.com/apache/incubator-zeppelin/pull/673
>
> And related issues filed in JIRA before
> https://issues.apache.org/jira/browse/ZEPPELIN-194
> https://issues.apache.org/jira/browse/ZEPPELIN-381
>
> With the latest master branch, we recommend loading dependencies via the
> interpreter setting menu instead of the %dep interpreter.
>
> If you want to know how to set dependencies with the latest master branch,
> please check the doc
> <
> https://zeppelin.incubator.apache.org/docs/0.6.0-incubating-SNAPSHOT/manual/dependencymanagement.html
> >
> and
> let me know if it works.
>
> Cheers,
> Mina
>
> On Tue, Feb 2, 2016 at 12:50 PM, Lin, Yunfeng 
> wrote:
>
> > I’ve created an issue in jira
> >
> >
> >
> > https://issues.apache.org/jira/browse/ZEPPELIN-648
> >
> >
> >
> > *From:* Benjamin Kim [mailto:bbuil...@gmail.com]
> > *Sent:* Tuesday, February 02, 2016 3:34 PM
> > *To:* us...@zeppelin.incubator.apache.org
> > *Cc:* dev@zeppelin.incubator.apache.org
> > *Subject:* Re: csv dependencies loaded in %spark but not %sql in spark
> > 1.6/zeppelin 0.5.6
> >
> >
> >
> > Same here. I want to know the answer too.
> >
> >
> >
> >
> >
> > On Feb 2, 2016, at 12:32 PM, Jonathan Kelly 
> > wrote:
> >
> >
> >
> > Hey, I just ran into that same exact issue yesterday and wasn't sure if I
> > was doing something wrong or what. Glad to know it's not just me!
> > Unfortunately I have not yet had the time to look any deeper into it.
> Would
> > you mind filing a JIRA if there isn't already one?
> >
> >
> >
> > On Tue, Feb 2, 2016 at 12:29 PM Lin, Yunfeng 
> wrote:
> >
> > Hi guys,
> >
> >
> >
> > I load spark-csv dependencies in %spark, but not in %sql, using apache
> > zeppelin 0.5.6 with spark 1.6.0. Everything is working fine in zeppelin
> > 0.5.5 with spark 1.5, though.
> >
> >
> >
> > Do you have similar problems?
> >
> >
> >
> > I am loading spark csv dependencies (
> > https://github.com/databricks/spark-csv
> > )
> >
> >
> >
> > Using:
> >
> > %dep
> >
> > z.load("PATH/commons-csv-1.1.jar")
> >
> > z.load("PATH/spark-csv_2.10-1.3.0.jar")
> >
> > z.load("PATH/univocity-parsers-1.5.1.jar")
> >
> > z.load("PATH/scala-library-2.10.5.jar")
> >
> >
> >
> > I am able to load a csv from hdfs using the DataFrame API in spark. It is
> > running perfectly fine.
> >
> > %spark
> >
> > val df = sqlContext.read
> >
> > .format("com.databricks.spark.csv")
> >
> > .option("header", "false") // Use first line of all files as header
> >
> > .option("inferSchema", "true") // Automatically infer data types
> >
> > .load("hdfs://sd-6f48-7fe6:8020/tmp/people.txt")   // this is a file
> > in HDFS
> >
> > df.registerTempTable("people")
> >
> > df.show()
> >
> >
> >
> > This also works:
> >
> > %spark
> >
> > val df2 = sqlContext.sql("select * from people")
> >
> > df2.show()
> >
> >
> >
> > But this doesn’t work….
> >
> > %sql
> >
> > select * from people
> >
> >
> >
> > java.lang.ClassNotFoundException:
> > com.databricks.spark.csv.CsvRelation$$anonfun$1$$anonfun$2 at
> > java.net.URLClassLoader$1.run(URLClassLoader.java:366) at
> > java.net.URLClassLoader$1.run(URLClassLoader.java:355) at
> > java.security.AccessController.doPrivileged(Native Method) at
> > java.net.URLClassLoader.findClass(URLClassLoader.java:354) at
> > java.lang.ClassLoader.loadClass(ClassLoader.java:425) at
> > sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at
> > java.lang.ClassLoader.loadClass(ClassLoader.java:358) at
> > java.lang.Class.forName0(Native Method) at
> > jav
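The ClassNotFoundException above stems from the %spark and %sql paragraphs resolving classes through different interpreter classloaders, so a jar added with %dep lands in one but not the other (the behavior fixed in apache/incubator-zeppelin PR #673). A toy Python model of that separation, purely an illustration and not Zeppelin's implementation:

```python
class InterpreterClassLoader:
    """Toy stand-in for a per-interpreter URLClassLoader (hypothetical name)."""
    def __init__(self, name):
        self.name = name
        self.jars = set()

    def add_jar(self, jar):
        # Roughly what %dep z.load(...) does: extends only THIS loader's classpath.
        self.jars.add(jar)

    def load_class(self, jar):
        if jar not in self.jars:
            raise LookupError(f"ClassNotFoundException in {self.name}: {jar}")
        return jar

spark = InterpreterClassLoader("%spark")
sql = InterpreterClassLoader("%sql")
spark.add_jar("spark-csv_2.10-1.3.0.jar")   # loaded only into %spark

spark.load_class("spark-csv_2.10-1.3.0.jar")  # succeeds
# sql.load_class("spark-csv_2.10-1.3.0.jar") would raise LookupError,
# mirroring the %sql failure until both interpreters share the dependency.
```

Loading the jars through the interpreter setting menu instead puts them on a classpath both paragraphs see, which is why that workaround resolves the error.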

Re: csv dependencies loaded in %spark but not %sql in spark 1.6/zeppelin 0.5.6

2016-02-02 Thread Jonathan Kelly
Hey, I just ran into that same exact issue yesterday and wasn't sure if I
was doing something wrong or what. Glad to know it's not just me!
Unfortunately I have not yet had the time to look any deeper into it. Would
you mind filing a JIRA if there isn't already one?

On Tue, Feb 2, 2016 at 12:29 PM Lin, Yunfeng  wrote:

> Hi guys,
>
>
>
> I load spark-csv dependencies in %spark, but not in %sql, using apache
> zeppelin 0.5.6 with spark 1.6.0. Everything is working fine in zeppelin
> 0.5.5 with spark 1.5, though.
>
>
>
> Do you have similar problems?
>
>
>
> I am loading spark csv dependencies (
> https://github.com/databricks/spark-csv)
>
>
>
> Using:
>
> %dep
>
> z.load("PATH/commons-csv-1.1.jar")
>
> z.load("PATH/spark-csv_2.10-1.3.0.jar")
>
> z.load("PATH/univocity-parsers-1.5.1.jar")
>
> z.load("PATH/scala-library-2.10.5.jar")
>
>
>
> I am able to load a csv from hdfs using the DataFrame API in spark. It is
> running perfectly fine.
>
> %spark
>
> val df = sqlContext.read
>
> .format("com.databricks.spark.csv")
>
> .option("header", "false") // Use first line of all files as header
>
> .option("inferSchema", "true") // Automatically infer data types
>
> .load("hdfs://sd-6f48-7fe6:8020/tmp/people.txt")   // this is a file
> in HDFS
>
> df.registerTempTable("people")
>
> df.show()
>
>
>
> This also works:
>
> %spark
>
> val df2 = sqlContext.sql("select * from people")
>
> df2.show()
>
>
>
> But this doesn’t work….
>
> %sql
>
> select * from people
>
>
>
> java.lang.ClassNotFoundException:
> com.databricks.spark.csv.CsvRelation$$anonfun$1$$anonfun$2 at
> java.net.URLClassLoader$1.run(URLClassLoader.java:366) at
> java.net.URLClassLoader$1.run(URLClassLoader.java:355) at
> java.security.AccessController.doPrivileged(Native Method) at
> java.net.URLClassLoader.findClass(URLClassLoader.java:354) at
> java.lang.ClassLoader.loadClass(ClassLoader.java:425) at
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at
> java.lang.ClassLoader.loadClass(ClassLoader.java:358) at
> java.lang.Class.forName0(Native Method) at
> java.lang.Class.forName(Class.java:270) at
> org.apache.spark.util.InnerClosureFinder$$anon$4.visitMethodInsn(ClosureCleaner.scala:435)
> at org.apache.xbean.asm5.ClassReader.a(Unknown Source) at
> org.apache.xbean.asm5.ClassReader.b(Unknown Source) at
> org.apache.xbean.asm5.ClassReader.accept(Unknown Source) at
> org.apache.xbean.asm5.ClassReader.accept(Unknown Source) at
> org.apache.spark.util.ClosureCleaner$.getInnerClosureClasses(ClosureCleaner.scala:84)
> at
> org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:187)
> at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122) at
> org.apache.spark.SparkContext.clean(SparkContext.scala:2055) at
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:707) at
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:706) at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
> at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) at
> org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:706) at
> com.databricks.spark.csv.CsvRelation.tokenRdd(CsvRelation.scala:90) at
> com.databricks.spark.csv.CsvRelation.buildScan(CsvRelation.scala:104) at
> com.databricks.spark.csv.CsvRelation.buildScan(CsvRelation.scala:152) at
> org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$4.apply(DataSourceStrategy.scala:64)
> at
> org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$4.apply(DataSourceStrategy.scala:64)
> at
> org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:274)
> at
> org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:273)
> at
> org.apache.spark.sql.execution.datasources.DataSourceStrategy$.pruneFilterProjectRaw(DataSourceStrategy.scala:352)
> at
> org.apache.spark.sql.execution.datasources.DataSourceStrategy$.pruneFilterProject(DataSourceStrategy.scala:269)
> at
> org.apache.spark.sql.execution.datasources.DataSourceStrategy$.apply(DataSourceStrategy.scala:60)
> at
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
> at
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
> at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at
> org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
> at
> org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
> at
> org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:349)
> at
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
> at
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$ano

Re: csv dependencies loaded in %spark but not %sql in spark 1.6/zeppelin 0.5.6

2016-02-02 Thread Jonathan Kelly
BTW, this sounds very similar to
https://issues.apache.org/jira/browse/ZEPPELIN-297, which affects %pyspark
and was fixed in Zeppelin 0.5.5.

On Tue, Feb 2, 2016 at 12:32 PM Jonathan Kelly 
wrote:

> Hey, I just ran into that same exact issue yesterday and wasn't sure if I
> was doing something wrong or what. Glad to know it's not just me!
> Unfortunately I have not yet had the time to look any deeper into it. Would
> you mind filing a JIRA if there isn't already one?
>
> On Tue, Feb 2, 2016 at 12:29 PM Lin, Yunfeng  wrote:
>
>> Hi guys,
>>
>>
>>
>> I load spark-csv dependencies in %spark, but not in %sql, using apache
>> zeppelin 0.5.6 with spark 1.6.0. Everything is working fine in zeppelin
>> 0.5.5 with spark 1.5, though.
>>
>>
>>
>> Do you have similar problems?
>>
>>
>>
>> I am loading spark csv dependencies (
>> https://github.com/databricks/spark-csv)
>>
>>
>>
>> Using:
>>
>> %dep
>>
>> z.load("PATH/commons-csv-1.1.jar")
>>
>> z.load("PATH/spark-csv_2.10-1.3.0.jar")
>>
>> z.load("PATH/univocity-parsers-1.5.1.jar")
>>
>> z.load("PATH/scala-library-2.10.5.jar")
>>
>>
>>
>> I am able to load a csv from hdfs using the DataFrame API in spark. It is
>> running perfectly fine.
>>
>> %spark
>>
>> val df = sqlContext.read
>>
>> .format("com.databricks.spark.csv")
>>
>> .option("header", "false") // Use first line of all files as header
>>
>> .option("inferSchema", "true") // Automatically infer data types
>>
>> .load("hdfs://sd-6f48-7fe6:8020/tmp/people.txt")   // this is a file
>> in HDFS
>>
>> df.registerTempTable("people")
>>
>> df.show()
>>
>>
>>
>> This also works:
>>
>> %spark
>>
>> val df2 = sqlContext.sql("select * from people")
>>
>> df2.show()
>>
>>
>>
>> But this doesn’t work….
>>
>> %sql
>>
>> select * from people
>>
>>
>>
>> java.lang.ClassNotFoundException:
>> com.databricks.spark.csv.CsvRelation$$anonfun$1$$anonfun$2 at
>> java.net.URLClassLoader$1.run(URLClassLoader.java:366) at
>> java.net.URLClassLoader$1.run(URLClassLoader.java:355) at
>> java.security.AccessController.doPrivileged(Native Method) at
>> java.net.URLClassLoader.findClass(URLClassLoader.java:354) at
>> java.lang.ClassLoader.loadClass(ClassLoader.java:425) at
>> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at
>> java.lang.ClassLoader.loadClass(ClassLoader.java:358) at
>> java.lang.Class.forName0(Native Method) at
>> java.lang.Class.forName(Class.java:270) at
>> org.apache.spark.util.InnerClosureFinder$$anon$4.visitMethodInsn(ClosureCleaner.scala:435)
>> at org.apache.xbean.asm5.ClassReader.a(Unknown Source) at
>> org.apache.xbean.asm5.ClassReader.b(Unknown Source) at
>> org.apache.xbean.asm5.ClassReader.accept(Unknown Source) at
>> org.apache.xbean.asm5.ClassReader.accept(Unknown Source) at
>> org.apache.spark.util.ClosureCleaner$.getInnerClosureClasses(ClosureCleaner.scala:84)
>> at
>> org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:187)
>> at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122) at
>> org.apache.spark.SparkContext.clean(SparkContext.scala:2055) at
>> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:707) at
>> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:706) at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
>> at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
>> at org.apache.spark.rdd.RDD.withScope(RDD.scala:316) at
>> org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:706) at
>> com.databricks.spark.csv.CsvRelation.tokenRdd(CsvRelation.scala:90) at
>> com.databricks.spark.csv.CsvRelation.buildScan(CsvRelation.scala:104) at
>> com.databricks.spark.csv.CsvRelation.buildScan(CsvRelation.scala:152) at
>> org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$4.apply(DataSourceStrategy.scala:64)
>> at
>> org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$4.apply(DataSourceStrategy.scala:64)
>> at
>> org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:274)
>> at
>> org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun

Re: [ANNOUNCE] Apache Zeppelin 0.5.5-incubating released

2015-11-19 Thread Jonathan Kelly

And now Zeppelin 0.5.5 has been released on emr-4.2.0 today! Same Day
Delivery! :)


Thanks, Zeppelin community for a great release!

~ Jonathan

On Thu, Nov 19, 2015 at 4:37 PM, Alexander Bezzubov  wrote:

> Great job, thanks to everybody involved!
>
> Looking forward to further time-based releases of Zeppelin.
>
> On Fri, Nov 20, 2015, 09:00 Pablo Torre  wrote:
>
>> Congrats!!
>>
>> Good job guys, as user of Zeppelin I appreciate a lot this effort.
>>
>> Best,
>>
>> 2015-11-20 0:46 GMT+01:00 임정택 :
>>
>>> Congrats!
>>>
>>> As a user of Zeppelin, thanks to all the contributors for making Zeppelin
>>> better!
>>>
>>> Best,
>>> Jungtaek Lim (HeartSaVioR)
>>>
>>> 2015-11-20 0:35 GMT+09:00 Hyung Sung Shim :
>>>
 Great!
 Congratulations.

 2015-11-19 22:33 GMT+09:00 moon soo Lee :

> The Apache Zeppelin (incubating) community is pleased to announce the
> availability of the 0.5.5-incubating release. The community has put
> significant effort into improving Apache Zeppelin since the last release,
> focusing on new backend support, improved stability, and simplified
> configuration. More than 60 contributors provided new features,
> improvements, and release verification. More than 90 issues have been
> resolved.
>
> We encourage you to download the latest release from
> http://zeppelin.incubator.apache.org/download.html
>
> Release notes are available at
> http://zeppelin.incubator.apache.org/releases/zeppelin-release-0.5.5-incubating.html
>
> We welcome your help and feedback. For more information on the project
> and how to get involved, visit our website at
> http://zeppelin.incubator.apache.org/
>
> Thanks to all users and contributors who have helped to improve
> Apache Zeppelin.
>
> Regards,
> The Apache Zeppelin community
>
>
> Disclaimer:
> Apache Zeppelin is an effort undergoing incubation at the Apache
> Software
> Foundation (ASF), sponsored by the Apache Incubator PMC.
> Incubation is required of all newly accepted projects until a further
> review indicates that the infrastructure, communications, and decision
> making process have stabilized in a manner consistent with other
> successful ASF projects.
> While incubation status is not necessarily a reflection of the
> completeness or stability of the code, it does indicate that the
> project has yet to be fully endorsed by the ASF.
>



 --

 NFLabs Inc. (주)엔에프랩  |  Contents Service Team  |  Team Lead Hyung Sung Shim

 *E. hsshim*@nflabs.com 

 *T.* 02-3458-9650 *M. *010-4282-1230

 *A.* 2F Harim Building, 216-2 Nonhyeon-dong, Gangnam-gu, Seoul (NFLABS)

>>>
>>>
>>>
>>> --
>>> Name : Jungtaek Lim (임정택)
>>> Blog : http://www.heartsavior.net / http://dev.heartsavior.net
>>> Twitter : http://twitter.com/heartsavior
>>> LinkedIn : http://www.linkedin.com/in/heartsavior
>>>
>>
>>
>>
>> --
>> Pablo Torre.
>> Freelance software engineer and Ruby on Rails developer.
>> Oleiros (Coruña)
>> *Personal site *
>> My blog 
>>
>


Re: [RESULT] [VOTE] Release Apache Zeppelin (incubating) 0.5.5-incubating (RC3)

2015-11-16 Thread Jonathan Kelly
Ah, I see. I hadn't noticed that your other voting email was for a
different list, and I didn't realize that there was a requirement for a
second level of voting for incubating projects. Thanks for the
clarification.

~ Jonathan

On Mon, Nov 16, 2015 at 12:19 PM, tog  wrote:

> OK Moon Thanks for the information, I was not aware of that step.
>
> Cheers
> Guillaume
> On Nov 16, 2015 8:10 PM, "moon soo Lee"  wrote:
>
> > Hi,
> >
> > I think it's my bad for not explaining the next step well enough.
> >
> > While Zeppelin is in incubation, after a vote passes on dev@zeppelin,
> > another vote is required on general@incubator with the IPMC.
> >
> > Here's related information about release process in incubation.
> > http://incubator.apache.org/incubation/Incubation_Policy.html#Releases
> >
> > Thanks,
> > moon
> >
> > On Tue, Nov 17, 2015 at 4:48 AM Jonathan Kelly 
> > wrote:
> >
> > > Moon,
> > >
> > > I think I may have the same confusion as Guillaume, as you posted a
> > message
> > > on this same thread a couple of days ago saying that the vote for RC3
> has
> > > passed already, and the email contained a link to
> > >
> > >
> >
> http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-dev/201511.mbox/%3CCALf24sboheQdok1BRX1p5pT-mwY6waOfs490xiPAhT-QNCCAHg%40mail.gmail.com%3E
> > > ,
> > > which shows the vote as closing on Nov 14th. But in your latest email,
> > > you've now linked to a different email on the thread (which I somehow
> > never
> > > saw posted to the list; at least, I never got the email) that shows
> that
> > > the vote is open through Nov 17th.
> > >
> > > Has the vote for 0.5.5 not actually passed yet?
> > >
> > > ~ Jonathan
> > >
> > > On Mon, Nov 16, 2015 at 11:29 AM, moon soo Lee 
> wrote:
> > >
> > > > Hi Guillaume,
> > > >
> > > > Thanks for asking.
> > > >
> > > > 0.5.5-incubating is under vote on the general mailing list
> > > >
> > > >
> > >
> >
> http://mail-archives.apache.org/mod_mbox/incubator-general/201511.mbox/%3CCALf24sZin774uuZ%2BftOLV4pBf8egWyJRAMK3fscyh_kcQXZaig%40mail.gmail.com%3E
> > > >
> > > >
> > > > Once the vote passes at general@incubator and packages are synced to the
> > > > download mirrors (which will take about one more day after the vote
> > > > passes), it'll be announced.
> > > >
> > > > Thanks,
> > > > moon
> > > >
> > > > On Tue, Nov 17, 2015 at 4:05 AM tog 
> > wrote:
> > > >
> > > > > Hi Moon
> > > > >
> > > > > Do we have an official announcement now on the user mailing list
> > > > > following the vote?
> > > > >
> > > > > Cheers
> > > > > Guillaume
> > > > > On Nov 14, 2015 12:36 PM, "moon soo Lee"  wrote:
> > > > >
> > > > > > The vote passes with 7 binding +1 votes, 9 non-binding +1 votes,
> > and
> > > no
> > > > > +0
> > > > > > or -1 votes. Thanks for everyone who verified rc and voted.
> > > > > >
> > > > > >
> > > > > > +1:
> > > > > > DuyHai Doan
> > > > > > Anthony Corbacho*
> > > > > > Khalid Huseynov
> > > > > > Jeff Steinmetz
> > > > > > Victor Manuel Garcia
> > > > > > Felix Cheung
> > > > > > Jonathan Kelly
> > > > > > Corneau Damien*
> > > > > > Alexander Bezzubov*
> > > > > > Madhuka Udantha
> > > > > > Jian Zhong
> > > > > > Guillaume Alleon
> > > > > > Henry Saputra*
> > > > > > Jongyoul Lee*
> > > > > > Mina Lee*
> > > > > > Moon soo Lee*
> > > > > >
> > > > > >
> > > > > > +0:
> > > > > >
> > > > > > -1:
> > > > > >
> > > > > > *binding
> > > > > >
> > > > > >
> > > > > > Vote thread:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-dev/201511.mbox/%3CCALf24sboheQdok1BRX1p5pT-mwY6waO

Re: [RESULT] [VOTE] Release Apache Zeppelin (incubating) 0.5.5-incubating (RC3)

2015-11-16 Thread Jonathan Kelly
Moon,

I think I may have the same confusion as Guillaume, as you posted a message
on this same thread a couple of days ago saying that the vote for RC3 has
passed already, and the email contained a link to
http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-dev/201511.mbox/%3CCALf24sboheQdok1BRX1p5pT-mwY6waOfs490xiPAhT-QNCCAHg%40mail.gmail.com%3E,
which shows the vote as closing on Nov 14th. But in your latest email,
you've now linked to a different email on the thread (which I somehow never
saw posted to the list; at least, I never got the email) that shows that
the vote is open through Nov 17th.

Has the vote for 0.5.5 not actually passed yet?

~ Jonathan

On Mon, Nov 16, 2015 at 11:29 AM, moon soo Lee  wrote:

> Hi Guillaume,
>
> Thanks for asking.
>
> 0.5.5-incubating is under vote on the general mailing list
>
> http://mail-archives.apache.org/mod_mbox/incubator-general/201511.mbox/%3CCALf24sZin774uuZ%2BftOLV4pBf8egWyJRAMK3fscyh_kcQXZaig%40mail.gmail.com%3E
>
>
> Once the vote passes at general@incubator and packages are synced to the
> download mirrors (which will take about one more day after the vote passes),
> it'll be announced.
>
> Thanks,
> moon
>
> On Tue, Nov 17, 2015 at 4:05 AM tog  wrote:
>
> > Hi Moon
> >
> > Do we have an official announcement now on the user mailing list
> > following the vote?
> >
> > Cheers
> > Guillaume
> > On Nov 14, 2015 12:36 PM, "moon soo Lee"  wrote:
> >
> > > The vote passes with 7 binding +1 votes, 9 non-binding +1 votes, and no
> > +0
> > > or -1 votes. Thanks for everyone who verified rc and voted.
> > >
> > >
> > > +1:
> > > DuyHai Doan
> > > Anthony Corbacho*
> > > Khalid Huseynov
> > > Jeff Steinmetz
> > > Victor Manuel Garcia
> > > Felix Cheung
> > > Jonathan Kelly
> > > Corneau Damien*
> > > Alexander Bezzubov*
> > > Madhuka Udantha
> > > Jian Zhong
> > > Guillaume Alleon
> > > Henry Saputra*
> > > Jongyoul Lee*
> > > Mina Lee*
> > > Moon soo Lee*
> > >
> > >
> > > +0:
> > >
> > > -1:
> > >
> > > *binding
> > >
> > >
> > > Vote thread:
> > >
> > >
> >
> http://mail-archives.apache.org/mod_mbox/incubator-zeppelin-dev/201511.mbox/%3CCALf24sboheQdok1BRX1p5pT-mwY6waOfs490xiPAhT-QNCCAHg%40mail.gmail.com%3E
> > >
> > >
> > > Thanks!
> > >
> > >
> > >
> > > On Sat, Nov 14, 2015 at 6:54 PM moon soo Lee  wrote:
> > >
> > > > +1
> > > >
> > > >
> > > > On Sat, Nov 14, 2015 at 6:51 PM Mina Lee  wrote:
> > > >
> > > >> +1
> > > >>
> > > >> On Fri, Nov 13, 2015 at 4:42 PM, Jongyoul Lee 
> > > wrote:
> > > >>
> > > >> > +1 for tested on spark yarn.
> > > >> >
> > > >> > On Fri, Nov 13, 2015 at 11:49 AM, Henry Saputra <
> > > >> henry.sapu...@gmail.com>
> > > >> > wrote:
> > > >> >
> > > >> > > NOTICE file looks good
> > > >> > > LICENSE file looks good
> > > >> > > DISCLAIMER file exists
> > > >> > > Signature file looks good
> > > >> > > No 3rd party executables
> > > >> > > License headers correct
> > > >> > > Source compiled
> > > >> > >
> > > >> > > +1
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> > > On Wed, Nov 11, 2015 at 4:32 AM, moon soo Lee 
> > > >> wrote:
> > > >> > > > Hi folks,
> > > >> > > >
> > > >> > > > Since vote for 0.5.5-incubating RC2 was canceled (vote:
> > > >> > > > http://s.apache.org/DXi, cancel: http://s.apache.org/Lce)
> > > >> > > >
> > > >> > > > https://issues.apache.org/jira/browse/ZEPPELIN-404,
> > > >> > > > https://issues.apache.org/jira/browse/ZEPPELIN-405,
> > > >> > > > https://issues.apache.org/jira/browse/ZEPPELIN-406
> > > >> > > > addressed the issue found in RC2.
> > > >> > > >
> > > >> > > > I propose the following RC (RC3) to be released for the Apache
> > > >> Zeppelin
> > > >> > > > (incubating) 0.5.5-incuba

Re: [VOTE] Release Apache Zeppelin (incubating) 0.5.5-incubating (RC3)

2015-11-11 Thread Jonathan Kelly
Moon,

In the future, could you please use distinct tags for each RC, like the
Spark project does? That is, it would be nice if there were distinct
v0.5.5-rc1, v0.5.5-rc2, v0.5.5-rc3 tags, and once the vote passes, you can
tag v0.5.5 at the same commit as the RC for which the vote passed. One
problem with reusing the same tag for each RC is that git won't
automatically pull the tag updates unless I force it to do so or delete the
tag locally and re-fetch from GitHub.

Thanks,
Jonathan

On Wed, Nov 11, 2015 at 11:47 AM, Felix Cheung 
wrote:

> +1 tested Spark, pyspark, Flink.
>
>
>
> _
> From: Victor Manuel Garcia 
> Sent: Wednesday, November 11, 2015 9:19 AM
> Subject: Re: [VOTE] Release Apache Zeppelin (incubating) 0.5.5-incubating
> (RC3)
> To:  
>
>
>+1
>
>  2015-11-11 18:11 GMT+01:00 Jeff Steinmetz :
>
>  > +1
>  >
>  > Pyspark notebooks with some advanced functionality worked.
>  >
>  > Jeff Steinmetz
>  >
>  >
>  >
>  >
>  > On 11/11/15, 4:32 AM, "moon soo Lee"  wrote:
>  >
>  > >Hi folks,
>  > >
>  > >Since vote for 0.5.5-incubating RC2 was canceled (vote:
>  > >   http://s.apache.org/DXi, cancel: http://s.apache.org/Lce)
>  > >
>  > >   https://issues.apache.org/jira/browse/ZEPPELIN-404,
>  > >   https://issues.apache.org/jira/browse/ZEPPELIN-405,
>  > >   https://issues.apache.org/jira/browse/ZEPPELIN-406
>  > >addressed the issue found in RC2.
>  > >
>  > >I propose the following RC (RC3) to be released for the Apache Zeppelin
>  > >(incubating) 0.5.5-incubating release.
>  > >
>  > >The commit id is e4743e71d2421f5b6950f9e0f346f07bb84f1671 :
>  > >
>  >
> https://git-wip-us.apache.org/repos/asf?p=incubator-zeppelin.git;a=commit;h=e4743e71d2421f5b6950f9e0f346f07bb84f1671
>  > >
>  > >This corresponds to the tag: v0.5.5 :
>  > >
>  >
> https://git-wip-us.apache.org/repos/asf?p=incubator-zeppelin.git;a=tag;h=refs/tags/v0.5.5
>  > >
>  > >The release archives (tgz), signature, and checksums are here
>  > >
>  >
> https://dist.apache.org/repos/dist/dev/incubator/zeppelin/0.5.5-incubating-rc3/
>  > >
>  > >The release candidate consists of the following source distribution
>  > archive
>  > >zeppelin-0.5.5-incubating.tgz
>  > >
>  > >In addition, the following supplementary binary distributions are
> provided
>  > >for user convenience at the same location
>  > >zeppelin-0.5.5-incubating-bin-all.tgz
>  > >
>  > >
>  > >The maven artifacts are here
>  > >
> https://repository.apache.org/content/repositories/orgapachezeppelin-1003
>  > >
>  > >You can find the KEYS file here:
>  > >   https://dist.apache.org/repos/dist/release/incubator/zeppelin/KEYS
>  > >
>  > >Release notes available at
>  > >
>  >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12316221&version=12333531
>  > >
>  > >Vote will be open for next 72 hours (close at 4:30am 14/Nov PDT).
>  > >
>  > >[ ] +1 approve
>  > >[ ] 0 no opinion
>  > >[ ] -1 disapprove (and reason why)
>  >
>  >
>
>
>  --
>  *Victor Manuel Garcia Martinez*
>  *Software Engineer*
>
>  *+34 672104297 | victor.gar...@beeva.com*
>  *| victormanuel.garcia.marti...@bbva.com*
>
>
>
>  <http://www.beeva.com/>
>


Re: Zeppelin running in debug mode

2015-09-28 Thread Jonathan Kelly
That should be possible, but how you do it might depend upon whether you
want to debug Zeppelin UI/server code or Zeppelin Spark interpreter code.
Then again, the same environment variable (ZEPPELIN_JAVA_OPTS is I think
what you want) might affect both anyway. What happens if you set
ZEPPELIN_JAVA_OPTS="-Xdebug
-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1044" in
zeppelin-env.sh then restart the server?
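For reference, that suggestion would look roughly like this in conf/zeppelin-env.sh (the port number is arbitrary; this is a sketch, not a tested configuration):

```shell
# conf/zeppelin-env.sh -- remote-debug sketch.
# suspend=y blocks JVM startup until a debugger attaches on port 1044;
# use suspend=n if the server should start without waiting.
export ZEPPELIN_JAVA_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1044"
```

An IDE's "Remote JVM Debug" run configuration pointed at the same host and port should then be able to attach.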

BTW, all three of the messages you sent around this time were sent to my
Gmail spam folder with this message: "*Why is this message in Spam?* It has
a from address in wso2.com but has failed wso2.com's required tests for
authentication."

~ Jonathan

On Mon, Sep 28, 2015 at 4:16 AM, Fazlan Nazeem  wrote:

> Hi,
>
> Can Zeppelin be run in debug mode so that I can remotely debug the code
> through IntelliJ? If so, can someone help me out with the steps?
>
> --
> Thanks & Regards,
>
> Fazlan Nazeem
>
> *Software Engineer*
>
> *WSO2 Inc*
> Mobile : +94772338839
> <%2B94%20%280%29%20773%20451194>
> fazl...@wso2.com
>


[jira] [Created] (ZEPPELIN-79) Zeppelin does not kill some interpreters when server is stopped

2015-05-13 Thread Jonathan Kelly (JIRA)
Jonathan Kelly created ZEPPELIN-79:
--

 Summary: Zeppelin does not kill some interpreters when server is 
stopped
 Key: ZEPPELIN-79
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-79
 Project: Zeppelin
  Issue Type: Bug
  Components: Core
Affects Versions: 0.5.0
Reporter: Jonathan Kelly
 Fix For: 0.5.0


When the Zeppelin server is stopped (e.g., with a SIGTERM), it should also stop 
all interpreter processes that it has started.  Instead, I sometimes see them 
hang around as orphaned children after the Zeppelin server has stopped.  This 
seems to be the case with the spark interpreter (at least with yarn-client 
mode; haven't tested with standalone or local modes) and with the shell 
interpreter.  It does not seem to occur with the hive or angular interpreters; 
those ones stop when the server is stopped.
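Until this is fixed, the orphaned processes can be found and cleaned up manually; the sketch below assumes a standard Zeppelin layout (interpreter JVMs run the RemoteInterpreterServer class) and a running Zeppelin installation:

```shell
# Stop the Zeppelin server, then look for interpreter JVMs it left behind.
bin/zeppelin-daemon.sh stop

# List any orphaned interpreter processes by their main class name...
pgrep -af RemoteInterpreterServer

# ...and remove them manually if any remain.
pkill -f RemoteInterpreterServer
```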



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ZEPPELIN-55) use HDFS in tutorial notebook so that it works with yarn-client

2015-04-21 Thread Jonathan Kelly (JIRA)
Jonathan Kelly created ZEPPELIN-55:
--

 Summary: use HDFS in tutorial notebook so that it works with 
yarn-client
 Key: ZEPPELIN-55
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-55
 Project: Zeppelin
  Issue Type: Bug
  Components: Core
Affects Versions: 0.5.0
Reporter: Jonathan Kelly
 Fix For: 0.5.0


The Zeppelin tutorial notebook includes example code that downloads some sample 
data to the local filesystem then performs some simple queries on it. However, 
this only works when Zeppelin is configured to use Spark in local mode. If 
Zeppelin is instead configured to use yarn-client deploy mode, the tutorial 
notebook does not work anymore. This can be fixed by uploading the sample data 
to HDFS instead of leaving it on the local filesystem.
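The fix amounts to staging the sample data in HDFS so every executor can reach it; the commands below are an illustrative sketch that assumes a running HDFS cluster and the sample file already downloaded locally:

```shell
# Stage the tutorial's sample data in HDFS (illustrative paths).
hadoop fs -mkdir -p /data
hadoop fs -put -f bank-full.csv /data/bank-full.csv
# The notebook can then read hdfs:///data/bank-full.csv, which works the
# same in local and yarn-client modes.
```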



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (ZEPPELIN-6) Tutorial notebook uses incorrect method of registering the temp table for Spark 1.3+

2015-03-26 Thread Jonathan Kelly (JIRA)

[ 
https://issues.apache.org/jira/browse/ZEPPELIN-6?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382318#comment-14382318
 ] 

Jonathan Kelly commented on ZEPPELIN-6:
---

Ah, yes, thanks.  Feel free to close this then.

> Tutorial notebook uses incorrect method of registering the temp table for 
> Spark 1.3+
> 
>
> Key: ZEPPELIN-6
> URL: https://issues.apache.org/jira/browse/ZEPPELIN-6
> Project: Zeppelin
>  Issue Type: Bug
> Environment: -Pspark-1.3
>    Reporter: Jonathan Kelly
>Priority: Minor
>
> If using Spark 1.3+, the call to register the temp table in the Zeppelin 
> Tutorial notebook must be:
> bank.toDF.registerTempTable("bank")
> instead of just:
> bank.registerTempTable("bank")
> See 
> http://spark.apache.org/docs/1.3.0/sql-programming-guide.html#upgrading-from-spark-sql-10-12-to-13
>  for more information on the API change.
> I'm not sure if you'd rather handle this by including a commented-out section 
> with the correct invocation for Spark 1.3+, or by adding the Spark 1.3+ 
> invocation and commenting out the old invocation, or by simply correcting the 
> invocation to work for Spark 1.3+, since most people will probably want to 
> start using Spark 1.3.  Maybe you only want to do my first suggestion until 
> Zeppelin is using Spark 1.3+ by default (i.e., no need to compile with 
> -Pspark-1.3).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ZEPPELIN-6) Tutorial notebook uses incorrect method of registering the temp table for Spark 1.3+

2015-03-25 Thread Jonathan Kelly (JIRA)
Jonathan Kelly created ZEPPELIN-6:
-

 Summary: Tutorial notebook uses incorrect method of registering 
the temp table for Spark 1.3+
 Key: ZEPPELIN-6
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-6
 Project: Zeppelin
  Issue Type: Bug
 Environment: -Pspark-1.3
Reporter: Jonathan Kelly
Priority: Minor


If using Spark 1.3+, the call to register the temp table in the Zeppelin 
Tutorial notebook must be:

bank.toDF.registerTempTable("bank")

instead of just:

bank.registerTempTable("bank")

See 
http://spark.apache.org/docs/1.3.0/sql-programming-guide.html#upgrading-from-spark-sql-10-12-to-13
 for more information on the API change.

I'm not sure if you'd rather handle this by including a commented-out section 
with the correct invocation for Spark 1.3+, or by adding the Spark 1.3+ 
invocation and commenting out the old invocation, or by simply correcting the 
invocation to work for Spark 1.3+, since most people will probably want to 
start using Spark 1.3.  Maybe you only want to do my first suggestion until 
Zeppelin is using Spark 1.3+ by default (i.e., no need to compile with 
-Pspark-1.3).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (ZEPPELIN-5) Tutorial notebook fails if fs.defaultFS is set to non-local FS in Hadoop core-site.xml

2015-03-25 Thread Jonathan Kelly (JIRA)
Jonathan Kelly created ZEPPELIN-5:
-

 Summary: Tutorial notebook fails if fs.defaultFS is set to 
non-local FS in Hadoop core-site.xml
 Key: ZEPPELIN-5
 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5
 Project: Zeppelin
  Issue Type: Bug
Reporter: Jonathan Kelly
Priority: Minor


I've noticed that if fs.defaultFS is set to a non-local FS (like HDFS) in the 
Hadoop core-site.xml, the Zeppelin Tutorial notebook will fail to find the 
example data because it will attempt to look in, for example, 
hdfs://master-node:hdfs-port/data/bank-full.csv instead of looking in the local 
filesystem, where the tutorial downloaded the data files just a few lines 
earlier.  This can be fixed by prepending the url with "file://" in the call to 
sc.textFile().
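A minimal sketch of that fix in the tutorial's Scala paragraph (the path is the local download location described above; `bankText` is an illustrative name):

```scala
// Explicit file:// scheme so the read bypasses fs.defaultFS (e.g. HDFS)
// and always hits the local filesystem where the data was downloaded.
val bankText = sc.textFile("file:///data/bank-full.csv")
```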



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)