Re: Removing support for Scala 2.10

2017-11-06 Thread Marius van Niekerk
I don't think the current support for 2.10 in master even works
reliably. Removing it will make maintenance simpler.

On Sat, 4 Nov 2017 at 18:56 Chip Senkbeil  wrote:

> Go for it.
>
> On Sat, Nov 4, 2017, 3:08 PM Luciano Resende  wrote:
>
> > The current master only works with Apache Spark 2.x which is based on
> Scala
> > 2.11 and moving towards Scala 2.12.
> >
> > Anyone opposed to removing Scala 2.10 from master?
> >
> > --
> > Luciano Resende
> > http://twitter.com/lresende1975
> > http://lresende.blogspot.com/
> >
>
-- 
regards
Marius van Niekerk


[jira] [Commented] (TOREE-428) Can't use case class in the Scala notebook

2017-11-06 Thread Paul Balm (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16240452#comment-16240452
 ] 

Paul Balm commented on TOREE-428:
-

I confirm this issue. This is my test case (slightly simpler):

* Create the test data from the terminal: {{F=arraystore.csv ; echo a > $F; 
echo b >> $F; echo c >> $F}}
* Read the file into an RDD with a case class: 

{noformat}
case class IdClass(id: String)
sc.textFile("arraystore.csv").map(IdClass).collect()
{noformat}

This produces the stacktrace in the description.

I don't have any particular insights into the problem, but an 
{{ArrayStoreException}} normally indicates that an object is being stored in 
an array with an incompatible component type: for example, when you have an 
array of Strings and you try to put an {{IdClass}} or {{Person}} object into 
it.


> Can't use case class in the Scala notebook
> --
>
> Key: TOREE-428
> URL: https://issues.apache.org/jira/browse/TOREE-428
> Project: TOREE
>  Issue Type: Bug
>  Components: Build
>Reporter: Haifeng Li
>
> the version of docker:
> jupyter/all-spark-notebook:latest
> the way to start docker:
> docker run -it --rm -p : jupyter/all-spark-notebook:latest
> or
> docker ps -a
> docker start -i containerID
> the steps:
> Visit http://localhost:
> Start a Toree notebook
> input the code below
> {code:java}
> import spark.implicits._
> val p = spark.sparkContext.textFile ("../Data/person.txt")
> val pmap = p.map ( _.split (","))
> pmap.collect()
> {code}
> the output: res0: Array[Array[String]] = Array(Array(Barack, Obama, 53), 
> Array(George, Bush, 68), Array(Bill, Clinton, 68))
> {code:java}
> case class Persons (first_name:String,last_name: String,age:Int)
> val personRDD = pmap.map ( p => Persons (p(0), p(1), p(2).toInt))
> personRDD.take(1)
> {code}
> the error message:
> {code:java}
> org.apache.spark.SparkDriverExecutionException: Execution error
>   at 
> org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1186)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1711)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1669)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1658)
>   at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>   at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2022)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2043)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2062)
>   at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1354)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at org.apache.spark.rdd.RDD.take(RDD.scala:1327)
>   ... 39 elided
> Caused by: java.lang.ArrayStoreException: [LPersons;
>   at scala.runtime.ScalaRunTime$.array_update(ScalaRunTime.scala:90)
>   at 
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:2043)
>   at 
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:2043)
>   at org.apache.spark.scheduler.JobWaiter.taskSucceeded(JobWaiter.scala:59)
>   at 
> org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1182)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1711)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1669)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1658)
>   at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
> {code}
> The above code works in spark-shell. From the error message, I speculate 
> that the driver program didn't correctly map the case class Persons onto 
> the RDD partitions.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (TOREE-428) Can't use case class in the Scala notebook

2017-11-06 Thread Paul Balm (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16240452#comment-16240452
 ] 

Paul Balm edited comment on TOREE-428 at 11/6/17 4:10 PM:
--

I confirm this issue. This is my test case (slightly simpler):

* Create the test data from the terminal: {{F=arraystore.csv ; echo a > $F; 
echo b >> $F; echo c >> $F}}
* Read the file into an RDD with a case class: 

{noformat}
case class IdClass(id: String)
sc.textFile("arraystore.csv").map(IdClass).collect()
{noformat}

This produces the stacktrace in the description.

An {{ArrayStoreException}} normally indicates that an object is being stored 
in an array with an incompatible component type: for example, when you have 
an array of Strings and you try to put an {{IdClass}} or {{Person}} object 
into it.

As a wild guess as to what might be going on here: If you define IdClass on one 
thread with a given ClassLoader, and you reload it using another ClassLoader, 
it's not considered the same class. So if you create IdClass objects on one 
thread and you create the Array[IdClass] on another thread which has a 
different ClassLoader, you would get an ArrayStoreException when putting the 
objects into the array.



was (Author: pbalm):
I confirm this issue. This is my test case (slightly simpler):

* Create the test data from the terminal: {{F=arraystore.csv ; echo a > $F; 
echo b >> $F; echo c >> $F}}
* Read the file into an RDD with a case class: 

{noformat}
case class IdClass(id: String)
sc.textFile("arraystore.csv").map(IdClass).collect()
{noformat}

This produces the stacktrace in the description.

I don't have any particular insights into the problem, but an 
{{ArrayStoreException}} normally indicates that an object is being stored in 
an array with an incompatible component type: for example, when you have an 
array of Strings and you try to put an {{IdClass}} or {{Person}} object into 
it.


> Can't use case class in the Scala notebook
> --
>
> Key: TOREE-428
> URL: https://issues.apache.org/jira/browse/TOREE-428
> Project: TOREE
>  Issue Type: Bug
>  Components: Build
>Reporter: Haifeng Li
>
> the version of docker:
> jupyter/all-spark-notebook:latest
> the way to start docker:
> docker run -it --rm -p : jupyter/all-spark-notebook:latest
> or
> docker ps -a
> docker start -i containerID
> the steps:
> Visit http://localhost:
> Start a Toree notebook
> input the code below
> {code:java}
> import spark.implicits._
> val p = spark.sparkContext.textFile ("../Data/person.txt")
> val pmap = p.map ( _.split (","))
> pmap.collect()
> {code}
> the output: res0: Array[Array[String]] = Array(Array(Barack, Obama, 53), 
> Array(George, Bush, 68), Array(Bill, Clinton, 68))
> {code:java}
> case class Persons (first_name:String,last_name: String,age:Int)
> val personRDD = pmap.map ( p => Persons (p(0), p(1), p(2).toInt))
> personRDD.take(1)
> {code}
> the error message:
> {code:java}
> org.apache.spark.SparkDriverExecutionException: Execution error
>   at 
> org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1186)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1711)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1669)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1658)
>   at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>   at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2022)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2043)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2062)
>   at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1354)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at org.apache.spark.rdd.RDD.take(RDD.scala:1327)
>   ... 39 elided
> Caused by: java.lang.ArrayStoreException: [LPersons;
>   at scala.runtime.ScalaRunTime$.array_update(ScalaRunTime.scala:90)
>   at 
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:2043)
>   at 
> org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:2043)
>   at org.apache.spark.scheduler.JobWaiter.taskSucceeded(JobWaiter.scala:59)
>   at 
> org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1182)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1711)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1669)
> {code}

[jira] [Comment Edited] (TOREE-428) Can't use case class in the Scala notebook

2017-11-06 Thread Paul Balm (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16240452#comment-16240452
 ] 

Paul Balm edited comment on TOREE-428 at 11/6/17 4:13 PM:
--

I confirm this issue. This is my test case (slightly simpler):

* Create the test data from the terminal: {{F=arraystore.csv ; echo a > $F; 
echo b >> $F; echo c >> $F}}
* Read the file into an RDD with a case class: 

{noformat}
case class IdClass(id: String)
sc.textFile("arraystore.csv").map(IdClass).collect()
{noformat}

This produces the stacktrace in the description.

An {{ArrayStoreException}} normally indicates that an object is being stored 
in an array with an incompatible component type: for example, when you have 
an array of Strings and you try to put an {{IdClass}} or {{Person}} object 
into it.
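
A minimal, self-contained illustration of that array-store rule (independent of Spark and Toree; the names here are illustrative, not from the reported code):

```scala
object ArrayStoreDemo {
  // Returns true if storing an incompatible element throws ArrayStoreException.
  def triggersArrayStore(): Boolean = {
    val strings = Array("a", "b", "c")
    // The JVM lets us view the String array through a supertype, the way
    // erased generic code does at runtime -- this cast succeeds.
    val objects = strings.asInstanceOf[Array[AnyRef]]
    try {
      // The array's *runtime* component type is still String, so storing an
      // Integer fails at the moment of the write, not at the cast.
      objects(0) = Integer.valueOf(1)
      false
    } catch {
      case _: ArrayStoreException => true
    }
  }

  def main(args: Array[String]): Unit =
    println(s"ArrayStoreException thrown: ${triggersArrayStore()}")
}
```

The store check is performed against the array's runtime component type, which is why the failure surfaces only when the element is actually written.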

As a wild guess as to what might be going on here: If you define IdClass on one 
thread with a given ClassLoader, and you reload it using another ClassLoader, 
it's not considered the same class. So if you create IdClass objects on one 
thread and you create the Array[IdClass] on another thread which has a 
different ClassLoader, you would get an ArrayStoreException when putting the 
objects into the array.

In a normal application that doesn't happen, because no classes are defined 
at run-time; they are all loaded by the SystemClassLoader from the JARs on 
the class path. However, a class defined at runtime cannot be loaded by the 
SystemClassLoader: you have to set up a custom ClassLoader and make sure you 
use it consistently across all threads that will use the newly 
defined class.
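
A sketch of that hypothesis (a standalone demo, not Toree's actual class-loading mechanism; `Marker` and the resource lookup are illustrative): loading the same class file through two independent ClassLoaders yields two distinct runtime classes, and storing an instance of one into an array of the other throws {{ArrayStoreException}}.

```scala
import java.io.ByteArrayOutputStream

// A top-level class we will deliberately load twice.
class Marker

object LoaderDemo {
  // A ClassLoader that defines `target` itself from the class bytes on the
  // classpath, instead of delegating to its parent first.
  class Isolating(target: String) extends ClassLoader(classOf[Marker].getClassLoader) {
    override protected def loadClass(name: String, resolve: Boolean): Class[_] =
      if (name == target) {
        // Read the compiled class bytes from the classpath (assumes the
        // class file is visible as a resource, as it is after normal scalac
        // compilation).
        val in  = getParent.getResourceAsStream(name.replace('.', '/') + ".class")
        val out = new ByteArrayOutputStream()
        val buf = new Array[Byte](4096)
        var n = in.read(buf)
        while (n != -1) { out.write(buf, 0, n); n = in.read(buf) }
        in.close()
        val bytes = out.toByteArray
        defineClass(name, bytes, 0, bytes.length)
      } else super.loadClass(name, resolve)
  }

  // True when the two copies of Marker are distinct classes AND storing an
  // instance of one into an array of the other throws ArrayStoreException.
  def demo(): Boolean = {
    val name = classOf[Marker].getName
    val clsA = new Isolating(name).loadClass(name)
    val clsB = new Isolating(name).loadClass(name)
    val distinct = clsA != clsB // same name, different defining loader
    val arr = java.lang.reflect.Array.newInstance(clsA, 1).asInstanceOf[Array[AnyRef]]
    val stored =
      try { arr(0) = clsB.getDeclaredConstructor().newInstance().asInstanceOf[AnyRef]; false }
      catch { case _: ArrayStoreException => true }
    distinct && stored
  }

  def main(args: Array[String]): Unit =
    println(s"distinct classes and ArrayStoreException: ${demo()}")
}
```

In the JVM a class is identified by its name *plus* its defining loader, so the two `Marker` copies are unrelated types even though they came from identical bytes.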



was (Author: pbalm):
I confirm this issue. This is my test case (slightly simpler):

* Create the test data from the terminal: {{F=arraystore.csv ; echo a > $F; 
echo b >> $F; echo c >> $F}}
* Read the file into an RDD with a case class: 

{noformat}
case class IdClass(id: String)
sc.textFile("arraystore.csv").map(IdClass).collect()
{noformat}

This produces the stacktrace in the description.

An {{ArrayStoreException}} normally indicates that an object is being stored 
in an array with an incompatible component type: for example, when you have 
an array of Strings and you try to put an {{IdClass}} or {{Person}} object 
into it.

As a wild guess as to what might be going on here: If you define IdClass on one 
thread with a given ClassLoader, and you reload it using another ClassLoader, 
it's not considered the same class. So if you create IdClass objects on one 
thread and you create the Array[IdClass] on another thread which has a 
different ClassLoader, you would get an ArrayStoreException when putting the 
objects into the array.


> Can't use case class in the Scala notebook
> --
>
> Key: TOREE-428
> URL: https://issues.apache.org/jira/browse/TOREE-428
> Project: TOREE
>  Issue Type: Bug
>  Components: Build
>Reporter: Haifeng Li
>
> the version of docker:
> jupyter/all-spark-notebook:latest
> the way to start docker:
> docker run -it --rm -p : jupyter/all-spark-notebook:latest
> or
> docker ps -a
> docker start -i containerID
> the steps:
> Visit http://localhost:
> Start a Toree notebook
> input the code below
> {code:java}
> import spark.implicits._
> val p = spark.sparkContext.textFile ("../Data/person.txt")
> val pmap = p.map ( _.split (","))
> pmap.collect()
> {code}
> the output: res0: Array[Array[String]] = Array(Array(Barack, Obama, 53), 
> Array(George, Bush, 68), Array(Bill, Clinton, 68))
> {code:java}
> case class Persons (first_name:String,last_name: String,age:Int)
> val personRDD = pmap.map ( p => Persons (p(0), p(1), p(2).toInt))
> personRDD.take(1)
> {code}
> the error message:
> {code:java}
> org.apache.spark.SparkDriverExecutionException: Execution error
>   at 
> org.apache.spark.scheduler.DAGScheduler.handleTaskCompletion(DAGScheduler.scala:1186)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1711)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1669)
>   at 
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1658)
>   at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>   at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2022)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2043)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2062)
>   at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1354)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.wit

Re: Removing support for Scala 2.10

2017-11-06 Thread Ryan Blue
+1

On Mon, Nov 6, 2017 at 6:04 AM, Marius van Niekerk <
marius.v.niek...@gmail.com> wrote:

> I don't think the current support for 2.10 in master even works
> reliably. Removing it will make maintenance simpler.
>
> On Sat, 4 Nov 2017 at 18:56 Chip Senkbeil  wrote:
>
> > Go for it.
> >
> > On Sat, Nov 4, 2017, 3:08 PM Luciano Resende 
> wrote:
> >
> > > The current master only works with Apache Spark 2.x which is based on
> > Scala
> > > 2.11 and moving towards Scala 2.12.
> > >
> > > Anyone opposed to removing Scala 2.10 from master?
> > >
> > > --
> > > Luciano Resende
> > > http://twitter.com/lresende1975
> > > http://lresende.blogspot.com/
> > >
> >
> --
> regards
> Marius van Niekerk
>



-- 
Ryan Blue
Software Engineer
Netflix


Re: Removing support for Scala 2.10

2017-11-06 Thread Corey Stubbs
+1 as well

On Mon, Nov 6, 2017 at 11:28 AM Ryan Blue  wrote:

> +1
>
> On Mon, Nov 6, 2017 at 6:04 AM, Marius van Niekerk <
> marius.v.niek...@gmail.com> wrote:
>
> > I don't think the current support for 2.10 in master even works
> > reliably. Removing it will make maintenance simpler.
> >
> > On Sat, 4 Nov 2017 at 18:56 Chip Senkbeil 
> wrote:
> >
> > > Go for it.
> > >
> > > On Sat, Nov 4, 2017, 3:08 PM Luciano Resende 
> > wrote:
> > >
> > > > The current master only works with Apache Spark 2.x which is based on
> > > Scala
> > > > 2.11 and moving towards Scala 2.12.
> > > >
> > > > Anyone opposed to removing Scala 2.10 from master?
> > > >
> > > > --
> > > > Luciano Resende
> > > > http://twitter.com/lresende1975
> > > > http://lresende.blogspot.com/
> > > >
> > >
> > --
> > regards
> > Marius van Niekerk
> >
>
>
>
> --
> Ryan Blue
> Software Engineer
> Netflix
>


Re: [VOTE] Apache Toree 0.2.0 RC2

2017-11-06 Thread Ryan Blue
I think the tag and commit hash are incorrect. The tag is
v0.2.0-incubating-rc2 and the commit hash
is 32bbefa121aafd8713afab81516917234d72d690

See https://github.com/apache/incubator-toree/commit/32bbefa.

rb

On Sun, Nov 5, 2017 at 10:27 AM, Luciano Resende 
wrote:

> Please vote to approve the release of Apache Toree 0.2.0-incubating (RC2).
>
> Tag: v0.2.0-incubating-rc1 (01cd97e9bad04878a8014016c154a50e2a00f21d)
>
> https://github.com/apache/incubator-toree/tree/v0.2.0-incubating-rc2
>
> All distribution packages, including signatures, digests, etc. can be found
> at:
>
> https://dist.apache.org/repos/dist/dev/incubator/toree/0.2.
> 0-incubating-rc2/
>
> Staging artifacts can be found at:
>
> https://repository.apache.org/content/repositories/orgapachetoree-1008
>
> ## Testing Instructions
>
> The fastest way to get up and running is to use Jupyter.
>
> 1. Install Jupyter if you haven't already (http://jupyter.org/install.html
> )
>
> 2. Install Apache Toree via `pip install https://dist.apache.org/repos/
> dist/dev/incubator/toree/0.2.0-incubating-rc2/toree-pip/
> toree-0.2.0.tar.gz`
> followed by `jupyter toree install`
>
> - You need to set a valid Apache Spark 2.x home, which can be done via
> `jupyter toree install --spark_home=/usr/local/spark`
>
> - You may need to run with `sudo` for installation permission
>
> - For all installation options, run `jupyter toree install --help-all`
>
> 4. Run a Jupyter notebook server via `jupyter notebook`
>
> - If the notebook portion of Jupyter is not installed but Jupyter is,
> you can install via `pip install notebook`
>
> 5. Create a notebook using "Apache Toree - Scala" from the "New" dropdown
> 6. Run Scala/Spark commands such as `sc.parallelize(1 to 100).sum()` in the
> notebook
>
> ## Voting Instructions
>
> The vote is open for at least 72 hours and passes if a majority of at least
> 3 +1 PMC votes are cast.
>
> [ ] +1 Release this package as Apache Toree 0.2.0-incubating
> [ ] -1 Do not release this package because ...
>
> --
> Luciano Resende
> http://twitter.com/lresende1975
> http://lresende.blogspot.com/
>



-- 
Ryan Blue
Software Engineer
Netflix


Re: [VOTE] Apache Toree 0.2.0 RC2

2017-11-06 Thread Ryan Blue
-1

Looks like license documentation is out of date. The source tarball looks
fine, but the binary one has problems.

The LICENSE file in the bin tarball appears to be missing some dependencies
that are included in the assembly Jar:
* Akka (Apache 2 - https://github.com/akka/akka/blob/master/LICENSE)
* Play (Apache 2 -
https://github.com/playframework/playframework/blob/master/LICENSE)
* Spring framework (Apache 2 -
https://github.com/spring-projects/spring-framework)
* Joda time (Apache 2 - http://www.joda.org/joda-time/license.html)
* Coursier (Apache 2 -
https://github.com/coursier/coursier/blob/master/LICENSE)
* Typesafe config (Apache 2 -
https://github.com/lightbend/config/blob/master/LICENSE-2.0.txt)
* Guava (Apache 2 - https://github.com/google/guava/blob/master/COPYING)

Coursier shades other libraries, including parts of Maven (ALv2), fastparse
(MIT - https://github.com/lihaoyi/fastparse), jsoup (MIT -
https://jsoup.org/license), something under "sourcecode", and Ivy.

The LICENSE file needs to have entries for any dependencies that are
distributed with Toree because those projects aren't licensed to users by
the ASF, they are licensed by the copyright owners for those projects. We
also need to make sure any entries in the NOTICE files from other Apache
projects that apply to this Toree release are included in Toree's NOTICE
file.

Also, I noticed that the license file includes two entries for Joda time and
lists dependencies on Scala 2.10 libs and Ammonite (which isn't used as far
as I know). I think we need to regenerate this file with the current
dependency set and possibly update it.

One last point is that if we don't already include the binary tarball's
license and notice in the assembly Jar when it is published to Maven
central, we'll need to add it there as well.

rb

On Mon, Nov 6, 2017 at 11:05 AM, Ryan Blue  wrote:

> I think the tag and commit hash are incorrect. The tag is
> v0.2.0-incubating-rc2 and the commit hash is
> 32bbefa121aafd8713afab81516917234d72d690
>
> See https://github.com/apache/incubator-toree/commit/32bbefa.
>
> rb
>
> On Sun, Nov 5, 2017 at 10:27 AM, Luciano Resende 
> wrote:
>
>> Please vote to approve the release of Apache Toree 0.2.0-incubating (RC2).
>>
>> Tag: v0.2.0-incubating-rc1 (01cd97e9bad04878a8014016c154a50e2a00f21d)
>>
>> https://github.com/apache/incubator-toree/tree/v0.2.0-incubating-rc2
>>
>> All distribution packages, including signatures, digests, etc. can be
>> found
>> at:
>>
>> https://dist.apache.org/repos/dist/dev/incubator/toree/0.2.0
>> -incubating-rc2/
>>
>> Staging artifacts can be found at:
>>
>> https://repository.apache.org/content/repositories/orgapachetoree-1008
>>
>> ## Testing Instructions
>>
>> The fastest way to get up and running is to use Jupyter.
>>
>> 1. Install Jupyter if you haven't already (http://jupyter.org/install.ht
>> ml)
>>
>> 2. Install Apache Toree via `pip install https://dist.apache.org/repos/
>> dist/dev/incubator/toree/0.2.0-incubating-rc2/toree-pip/toree-0.2.0.tar.gz`
>> followed by `jupyter toree install`
>>
>> - You need to set a valid Apache Spark 2.x home, which can be done via
>> `jupyter toree install --spark_home=/usr/local/spark`
>>
>> - You may need to run with `sudo` for installation permission
>>
>> - For all installation options, run `jupyter toree install --help-all`
>>
>> 4. Run a Jupyter notebook server via `jupyter notebook`
>>
>> - If the notebook portion of Jupyter is not installed but Jupyter is,
>> you can install via `pip install notebook`
>>
>> 5. Create a notebook using "Apache Toree - Scala" from the "New" dropdown
>> 6. Run Scala/Spark commands such as `sc.parallelize(1 to 100).sum()` in
>> the
>> notebook
>>
>> ## Voting Instructions
>>
>> The vote is open for at least 72 hours and passes if a majority of at
>> least
>> 3 +1 PMC votes are cast.
>>
>> [ ] +1 Release this package as Apache Toree 0.2.0-incubating
>> [ ] -1 Do not release this package because ...
>>
>> --
>> Luciano Resende
>> http://twitter.com/lresende1975
>> http://lresende.blogspot.com/
>>
>
>
>
> --
> Ryan Blue
> Software Engineer
> Netflix
>



-- 
Ryan Blue
Software Engineer
Netflix


[jira] [Created] (TOREE-454) Use release audit tool to validate source release files.

2017-11-06 Thread Ryan Blue (JIRA)
Ryan Blue created TOREE-454:
---

 Summary: Use release audit tool to validate source release files.
 Key: TOREE-454
 URL: https://issues.apache.org/jira/browse/TOREE-454
 Project: TOREE
  Issue Type: Bug
Reporter: Ryan Blue








[jira] [Commented] (TOREE-454) Use release audit tool to validate source release files.

2017-11-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16240818#comment-16240818
 ] 

ASF GitHub Bot commented on TOREE-454:
--

GitHub user rdblue opened a pull request:

https://github.com/apache/incubator-toree/pull/144

TOREE-454: Add RAT script.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rdblue/incubator-toree 
TOREE-454-add-rat-script

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-toree/pull/144.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #144


commit e2c47b408dc8aeb108b6e47eeefe8bdbef2863ed
Author: Ryan Blue 
Date:   2017-11-06T20:12:19Z

TOREE-454: Add RAT script.




> Use release audit tool to validate source release files.
> 
>
> Key: TOREE-454
> URL: https://issues.apache.org/jira/browse/TOREE-454
> Project: TOREE
>  Issue Type: Bug
>Reporter: Ryan Blue
>






[jira] [Commented] (TOREE-454) Use release audit tool to validate source release files.

2017-11-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16240917#comment-16240917
 ] 

ASF GitHub Bot commented on TOREE-454:
--

Github user rdblue closed the pull request at:

https://github.com/apache/incubator-toree/pull/144


> Use release audit tool to validate source release files.
> 
>
> Key: TOREE-454
> URL: https://issues.apache.org/jira/browse/TOREE-454
> Project: TOREE
>  Issue Type: Bug
>Reporter: Ryan Blue
>






[jira] [Resolved] (TOREE-454) Use release audit tool to validate source release files.

2017-11-06 Thread Ryan Blue (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Blue resolved TOREE-454.
-
   Resolution: Not A Problem
Fix Version/s: Not Applicable

Already exists. Use {{make audit-licenses}}.

> Use release audit tool to validate source release files.
> 
>
> Key: TOREE-454
> URL: https://issues.apache.org/jira/browse/TOREE-454
> Project: TOREE
>  Issue Type: Bug
>Reporter: Ryan Blue
> Fix For: Not Applicable
>
>



