[jira] [Comment Edited] (TOREE-407) Improve Branding on Site

2017-05-02 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15993861#comment-15993861
 ] 

Jakob Odersky edited comment on TOREE-407 at 5/2/17 9:55 PM:
-

That's a valid point; I also think that the description on the front page is 
not very direct. An emphasis on Jupyter connectivity would help convey the 
goals of the project.


was (Author: jodersky):
That's a valid point; I also think that the description on the front page is 
not very direct. An emphasis on Jupyter connectivity would help establish the 
goals of the project.

> Improve Branding on Site
> 
>
> Key: TOREE-407
> URL: https://issues.apache.org/jira/browse/TOREE-407
> Project: TOREE
>  Issue Type: Improvement
>Reporter: Kyle Kelley
>
> I want to recommend Toree to others as a Scala kernel. The messaging on 
> https://toree.incubator.apache.org/ is all about "remote spark", which muddies 
> what it actually does: provide kernels that are connected to Spark. 
> Toree, standalone, doesn't do anything without Jupyter.
> Here's what I wish it read:
> ```
> Apache Toree
> Spark connected kernels for Jupyter projects - Scala, Python, and R
> ```



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (TOREE-407) Improve Branding on Site

2017-05-02 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15993861#comment-15993861
 ] 

Jakob Odersky commented on TOREE-407:
-

That's a valid point; I also think that the description on the front page is 
not very direct. An emphasis on Jupyter connectivity would help establish the 
goals of the project.

> Improve Branding on Site
> 
>
> Key: TOREE-407
> URL: https://issues.apache.org/jira/browse/TOREE-407
> Project: TOREE
>  Issue Type: Improvement
>Reporter: Kyle Kelley
>
> I want to recommend Toree to others as a Scala kernel. The messaging on 
> https://toree.incubator.apache.org/ is all about "remote spark", which muddies 
> what it actually does: provide kernels that are connected to Spark. 
> Toree, standalone, doesn't do anything without Jupyter.
> Here's what I wish it read:
> ```
> Apache Toree
> Spark connected kernels for Jupyter projects - Scala, Python, and R
> ```





[jira] [Commented] (TOREE-402) Installer should support parameterized kernel names

2017-04-05 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957571#comment-15957571
 ] 

Jakob Odersky commented on TOREE-402:
-

Good point; that's an aspect I didn't think about. Currently, kernel directories 
are named according to this Python snippet:
{code}
install_dir = self.kernel_spec_manager.install_kernel_spec(
    self.sourcedir,
    kernel_name='{}_{}'.format(self.kernel_name, interpreter.lower()).replace(' ', '_'),
{code}
basically joining the kernel name and the lowercased interpreter name and 
replacing spaces with underscores.

I wouldn't advise relying on this format, however, as it could change in future 
versions. My recommendation would therefore be to either pursue option 1) or to 
install kernels manually (without using {{jupyter toree install}}).
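To illustrate the naming scheme, here is a minimal sketch (illustrative only; the installer's actual logic, including any further normalization done by install_kernel_spec itself, may differ and could change between versions):

```python
# Sketch of the kernel-directory naming described above: join the kernel
# name and the lowercased interpreter name with an underscore, then replace
# spaces with underscores. Illustrative only; not Toree's actual code.

def kernel_dir_name(kernel_name, interpreter):
    return '{}_{}'.format(kernel_name, interpreter.lower()).replace(' ', '_')

print(kernel_dir_name('Spark 2.1', 'Scala'))  # -> Spark_2.1_scala
```

As the comment above warns, this format should not be relied upon.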

> Installer should support parameterized kernel names
> ---
>
> Key: TOREE-402
> URL: https://issues.apache.org/jira/browse/TOREE-402
> Project: TOREE
>  Issue Type: Wish
>  Components: Kernel
>Affects Versions: 0.2.0
>Reporter: Christian Kadner
>Priority: Minor
>
> For enterprise deployments of Apache Toree it would be nice to have more 
> flexibility when specifying kernel name(s) when installing multiple 
> interpreters at the same time.
> Currently the Apache Toree installer allows specifying {{kernel_name}} and 
> {{interpreters}}, i.e. running {code}jupyter toree install 
> --kernel_name='Spark 2.1' --interpreters=Scala,PySpark,SparkR,SQL{code} 
> would result in kernels with these names:
> {code}
> Spark 2.1 - Scala
> Spark 2.1 - PySpark
> Spark 2.1 - SparkR
> Spark 2.1 - SQL
> {code}
> For enterprise deployments that support other languages and Spark versions 
> however this naming scheme is not flexible enough. Suppose this is the 
> desired list of kernels (kernel display names):
> {code}
> Python 2.7 with Spark 1.6
> Python 2.7 with Spark 2.0
> Python 3.5 with Spark 1.6
> Python 3.5 with Spark 2.0
> R with Spark 1.6
> R with Spark 2.0
> Scala 2.10 with Spark 1.6
> Scala 2.11 with Spark 2.0
> {code}
> In order to achieve the above names, one would have to write a custom script 
> to replace the {{display_name}} in the {{kernel.json}} files that get created 
> by the Toree installer.
> It would be nice to enrich the Toree install options to allow for some kind 
> of pattern instead of a fixed string, i.e.:
> {code}
> jupyter toree install --kernel_name='{interpreter.name} {interpreter.version} 
> with Spark {spark.version}' ...
> {code}
> The install documentation might read:
> {noformat}
> --kernel_name= (ToreeInstall.kernel_name)
> Examples:  '{interpreter.name} {interpreter.version} with Spark 
> {spark.version}'
> Default:   'Apache Toree - {interpreter.name}'
> Install the kernel spec with this name. This is also used as the display 
> name for kernels in the Jupyter UI.
> {noformat}
> Of course the placeholders would then have to be replaced by the Toree 
> install code and actual list of available variables may be different from the 
> above suggestion.





[jira] [Comment Edited] (TOREE-402) Installer should support parameterized kernel names

2017-04-05 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15957443#comment-15957443
 ] 

Jakob Odersky edited comment on TOREE-402 at 4/5/17 6:59 PM:
-

Thanks for the detailed description. This could be a useful addition to Toree; 
however, I think it would add quite a bit of complexity to the installer while 
only solving a minor problem. Any patches are welcome!

In the meantime, I can think of one alternate solution (that would need to be 
discussed further) and one workaround:

1) Alternate solution: change the install script to allow installing only one 
kernel per invocation (essentially removing the ability to specify multiple 
interpreters) and to use the given name as-is, without appending the language. 
The advantage would be that we don't need to complicate the install scripts 
with template parsing and value substitution. Unfortunately, however, this also 
changes the install script API and hence may break existing deployments. Then 
again, we are currently on version 0.x, so breaking changes should not 
necessarily be rejected.

2) Workaround: install kernels as usual and then modify the generated 
`kernel.json` files, changing the `display_name` property to the required 
kernel name.

In my opinion, 2) is the simplest fix for now. It can also be applied in 
automated deployments with command line tools that modify JSON data such as jq 
or plain sed.
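As an illustration of workaround 2), here is a small sketch (the file path in the usage comment and the helper name are hypothetical) that swaps out the {{display_name}} of a generated kernel spec:

```python
import json

def with_display_name(kernel_spec, new_name):
    """Return a copy of a kernel spec (a parsed kernel.json) with a new
    display_name, leaving the original dict untouched (workaround 2)."""
    spec = dict(kernel_spec)
    spec['display_name'] = new_name
    return spec

# File-based usage (hypothetical path):
#   with open(path) as f:
#       spec = json.load(f)
#   with open(path, 'w') as f:
#       json.dump(with_display_name(spec, 'Scala 2.11 with Spark 2.0'), f, indent=2)

original = {'display_name': 'Apache Toree - Scala', 'language': 'scala'}
print(json.dumps(with_display_name(original, 'Scala 2.11 with Spark 2.0')))
```

The same edit can of course be done with jq or sed in a deployment script, as mentioned above.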


was (Author: jodersky):
Thanks for the detailed description. This could be a useful addition to Toree; 
however, I think it will require quite some time to implement and only solves a 
minor problem. Any patches are welcome!

In the meantime, I can think of one alternate solution (that would need to be 
discussed further) and one workaround:

1) Alternate solution: change the install script to allow installing only one 
kernel per invocation (essentially removing the ability to specify multiple 
interpreters) and to use the given name as-is, without appending the language. 
The advantage would be that we don't need to complicate the install scripts 
with template parsing and value substitution. Unfortunately, however, this also 
changes the install script API and hence may break existing deployments. Then 
again, we are currently on version 0.x, so breaking changes should not 
necessarily be rejected.

2) Workaround: install kernels as usual and then modify the generated 
`kernel.json` files, changing the `display_name` property to the required 
kernel name.

In my opinion, 2) is the simplest fix for now. It can also be applied in 
automated deployments with command line tools that modify JSON data such as jq 
or plain sed.

> Installer should support parameterized kernel names
> ---
>
> Key: TOREE-402
> URL: https://issues.apache.org/jira/browse/TOREE-402
> Project: TOREE
>  Issue Type: Wish
>  Components: Kernel
>Affects Versions: 0.2.0
>Reporter: Christian Kadner
>Priority: Minor
>
> For enterprise deployments of Apache Toree it would be nice to have more 
> flexibility when specifying kernel name(s) when installing multiple 
> interpreters at the same time.
> Currently the Apache Toree installer allows specifying {{kernel_name}} and 
> {{interpreters}}, i.e. running {code}jupyter toree install 
> --kernel_name='Spark 2.1' --interpreters=Scala,PySpark,SparkR,SQL{code} 
> would result in kernels with these names:
> {code}
> Spark 2.1 - Scala
> Spark 2.1 - PySpark
> Spark 2.1 - SparkR
> Spark 2.1 - SQL
> {code}
> For enterprise deployments that support other languages and Spark versions 
> however this naming scheme is not flexible enough. Suppose this is the 
> desired list of kernels (kernel display names):
> {code}
> Python 2.7 with Spark 1.6
> Python 2.7 with Spark 2.0
> Python 3.5 with Spark 1.6
> Python 3.5 with Spark 2.0
> R with Spark 1.6
> R with Spark 2.0
> Scala 2.10 with Spark 1.6
> Scala 2.11 with Spark 2.0
> {code}
> In order to achieve the above names, one would have to write a custom script 
> to replace the {{display_name}} in the {{kernel.json}} files that get created 
> by the Toree installer.
> It would be nice to enrich the Toree install options to allow for some kind 
> of pattern instead of a fixed string, i.e.:
> {code}
> jupyter toree install --kernel_name='{interpreter.name} {interpreter.version} 
> with Spark {spark.version}' ...
> {code}
> The install documentation might read:
> {noformat}
> --kernel_name= (ToreeInstall.kernel_name)
> Examples:  '{interpreter.name} {interpreter.version} with Spark 
> {spark.version}'
> Default:   'Apache Toree - {interpreter.name}'
> Install the kernel spec with this name. This is also used as the display 
> name for kernels in the Jupyter UI.
> {noformat}
> Of course the placeholders would then have 

[jira] [Commented] (TOREE-399) Make Spark Kernel work on Windows

2017-03-30 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15949642#comment-15949642
 ] 

Jakob Odersky commented on TOREE-399:
-

Hi Aldo,
the run.sh script is a launcher that starts Toree as a Spark application by 
calling spark-submit. To add Windows support, the easiest approach is probably 
to create a custom run.bat script that does the equivalent of run.sh on 
Windows, and a custom kernel.json that calls it.

Alternatively, Toree may work without modification on the Linux subsystem in 
Windows 10.
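The launcher's essential job can be sketched as follows. This is a rough, hypothetical illustration (the paths, the main class name, and the option handling are assumptions, not Toree's actual interface) of how a run.bat equivalent would assemble a spark-submit invocation:

```python
import os

def build_spark_submit_cmd(spark_home, toree_jar, profile, spark_opts=()):
    """Assemble a spark-submit command line, as a run.bat equivalent of
    run.sh might do. Hypothetical sketch: the real run.sh handles more
    (environment variables, interpreter options, etc.)."""
    # On Windows, Spark ships spark-submit.cmd alongside the shell script.
    spark_submit = os.path.join(spark_home, 'bin', 'spark-submit.cmd')
    return [spark_submit, *spark_opts,
            '--class', 'org.apache.toree.Main',  # assumed main class
            toree_jar, '--profile', profile]

cmd = build_spark_submit_cmd(r'C:\spark', r'C:\toree\toree-assembly.jar',
                             r'C:\tmp\connection.json',
                             spark_opts=['--master', 'local[2]'])
print(' '.join(cmd))
```

The {{--profile {connection_file}}} argument mirrors what the existing kernel.json passes to run.sh.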


> Make Spark Kernel work on Windows
> -
>
> Key: TOREE-399
> URL: https://issues.apache.org/jira/browse/TOREE-399
> Project: TOREE
>  Issue Type: New Feature
> Environment: Windows 7/8/10
>Reporter: aldo
>
> After a successful install of the Spark Kernel the error: "Failed to run 
> command:" occurs when from jupyter we select a Scala Notebook.
> The error happens because the kernel.json runs 
> C:\\ProgramData\\jupyter\\kernels\\apache_toree_scala\\bin\\run.sh, which is a 
> bash shell script and hence cannot work on Windows.
> Can you give me some direction to fix this, and I will implement it.





[jira] [Commented] (TOREE-375) Incorrect fully qualified name for spark context

2017-03-10 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15905637#comment-15905637
 ] 

Jakob Odersky commented on TOREE-375:
-

Closing this as it is not related to Toree. I'll do some more investigation on 
the Spark and Scala shell sides.

> Incorrect fully qualified name for spark context
> 
>
> Key: TOREE-375
> URL: https://issues.apache.org/jira/browse/TOREE-375
> Project: TOREE
>  Issue Type: Bug
> Environment: Jupyter Notebook with Toree latest master 
> (1a9c11f5f1381c15b691a716acd0e1f0432a9a35) and Spark 2.0.2, Scala 2.11
>Reporter: Felix Schüler
>Priority: Critical
>
> When running below snippet in a cell I get a compile error for the MLContext 
> Constructor. Somehow the fully qualified name of the SparkContext gets messed 
> up. 
> The same does not happen when I start a Spark shell with the --jars command 
> and create the MLContext there.
> Snippet (the systemml jar is built with the latest master of SystemML):
> {code}
> %addjar 
> file:///home/felix/repos/incubator-systemml/target/systemml-0.13.0-incubating-SNAPSHOT.jar
>  -f
> import org.apache.sysml.api.mlcontext._
> import org.apache.sysml.api.mlcontext.ScriptFactory._
> val ml = new MLContext(sc)
> Starting download from 
> file:///home/felix/repos/incubator-systemml/target/systemml-0.13.0-incubating-SNAPSHOT.jar
> Finished download of systemml-0.13.0-incubating-SNAPSHOT.jar
> Name: Compile Error
> Message: :25: error: overloaded method constructor MLContext with 
> alternatives:
>   (x$1: 
> org.apache.spark.api.java.JavaSparkContext)org.apache.sysml.api.mlcontext.MLContext
>  
>   (x$1: 
> org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext)org.apache.sysml.api.mlcontext.MLContext
>  cannot be applied to 
> (org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext)
>val ml = new MLContext(sc)
> ^
> StackTrace: 
> {code}





[jira] [Closed] (TOREE-375) Incorrect fully qualified name for spark context

2017-03-10 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky closed TOREE-375.
---
Resolution: Done

> Incorrect fully qualified name for spark context
> 
>
> Key: TOREE-375
> URL: https://issues.apache.org/jira/browse/TOREE-375
> Project: TOREE
>  Issue Type: Bug
> Environment: Jupyter Notebook with Toree latest master 
> (1a9c11f5f1381c15b691a716acd0e1f0432a9a35) and Spark 2.0.2, Scala 2.11
>Reporter: Felix Schüler
>Priority: Critical
>
> When running below snippet in a cell I get a compile error for the MLContext 
> Constructor. Somehow the fully qualified name of the SparkContext gets messed 
> up. 
> The same does not happen when I start a Spark shell with the --jars command 
> and create the MLContext there.
> Snippet (the systemml jar is built with the latest master of SystemML):
> {code}
> %addjar 
> file:///home/felix/repos/incubator-systemml/target/systemml-0.13.0-incubating-SNAPSHOT.jar
>  -f
> import org.apache.sysml.api.mlcontext._
> import org.apache.sysml.api.mlcontext.ScriptFactory._
> val ml = new MLContext(sc)
> Starting download from 
> file:///home/felix/repos/incubator-systemml/target/systemml-0.13.0-incubating-SNAPSHOT.jar
> Finished download of systemml-0.13.0-incubating-SNAPSHOT.jar
> Name: Compile Error
> Message: :25: error: overloaded method constructor MLContext with 
> alternatives:
>   (x$1: 
> org.apache.spark.api.java.JavaSparkContext)org.apache.sysml.api.mlcontext.MLContext
>  
>   (x$1: 
> org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext)org.apache.sysml.api.mlcontext.MLContext
>  cannot be applied to 
> (org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext)
>val ml = new MLContext(sc)
> ^
> StackTrace: 
> {code}





[jira] [Resolved] (TOREE-386) spark kernel `--name test` or `--conf spark.app.name=test` parameter to spark_opts is not applied

2017-03-02 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky resolved TOREE-386.
-
Resolution: Fixed

> spark kernel `--name test` or `--conf spark.app.name=test` parameter to 
> spark_opts is not applied 
> --
>
> Key: TOREE-386
> URL: https://issues.apache.org/jira/browse/TOREE-386
> Project: TOREE
>  Issue Type: Bug
>Reporter: Sachin Aggarwal
>Assignee: Jakob Odersky
>
> this is my kernel.json
> {code}
> {
>   "language": "scala",
>   "display_name": "toree_special - Scala",
>   "env": {
> "SPARK_OPTS": "--name MyAPP --master yarn --deploy-mode client",
> "SPARK_HOME": "spark_home",
> "__TOREE_OPTS__": "",
> "DEFAULT_INTERPRETER": "Scala",
> "PYTHONPATH": "spark_home/python:spark_home/python/lib/py4j-0.9-src.zip",
> "PYTHON_EXEC": "python"
>   },
>   "argv": [
> "/root/.local/share/jupyter/kernels/toree_special_scala/bin/run.sh",
> "--profile",
> "{connection_file}"
>   ]
> }
> {code}
> the parameter that I added {color:red}--name MyAPP{color} is not applied; I 
> still see the app name in the YARN resource UI as {color:red}IBM Spark Kernel{color}.
> Update: in newer versions of Toree, {color:red}IBM Spark Kernel{color} has been 
> renamed to {color:red}Apache Toree{color}.





[jira] [Resolved] (TOREE-383) Fix flaky tests

2017-02-27 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky resolved TOREE-383.
-
Resolution: Fixed

> Fix flaky tests
> ---
>
> Key: TOREE-383
> URL: https://issues.apache.org/jira/browse/TOREE-383
> Project: TOREE
>  Issue Type: Sub-task
>Reporter: Jakob Odersky
>Assignee: Jakob Odersky
>Priority: Minor
>






[jira] [Commented] (TOREE-377) When magic fails, the error is swallowed

2017-02-27 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15886512#comment-15886512
 ] 

Jakob Odersky commented on TOREE-377:
-

Fixed in PR, thanks!

> When magic fails, the error is swallowed
> 
>
> Key: TOREE-377
> URL: https://issues.apache.org/jira/browse/TOREE-377
> Project: TOREE
>  Issue Type: Improvement
>Reporter: Ryan Blue
>






[jira] [Resolved] (TOREE-377) When magic fails, the error is swallowed

2017-02-27 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky resolved TOREE-377.
-
Resolution: Fixed

> When magic fails, the error is swallowed
> 
>
> Key: TOREE-377
> URL: https://issues.apache.org/jira/browse/TOREE-377
> Project: TOREE
>  Issue Type: Improvement
>Reporter: Ryan Blue
>






[jira] [Closed] (TOREE-379) Tab completion doesn't replace partial words

2017-02-27 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky closed TOREE-379.
---
Resolution: Fixed

Fixed in PR, thanks!

> Tab completion doesn't replace partial words
> 
>
> Key: TOREE-379
> URL: https://issues.apache.org/jira/browse/TOREE-379
> Project: TOREE
>  Issue Type: Improvement
>Reporter: Ryan Blue
>
> Tab completion in both notebooks and the console is incorrect and causes the 
> front-end to add the selected option after an incomplete word instead of 
> replacing it.





[jira] [Resolved] (TOREE-387) Kernel should not store SparkSession

2017-02-27 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky resolved TOREE-387.
-
Resolution: Fixed

Fixed in PR.

> Kernel should not store SparkSession
> 
>
> Key: TOREE-387
> URL: https://issues.apache.org/jira/browse/TOREE-387
> Project: TOREE
>  Issue Type: Improvement
>Reporter: Ryan Blue
>
> Currently, the kernel creates and stores the SparkSession in a field to share 
> between interpreters. If the user closes a SparkSession and creates a new 
> one, then the Kernel still returns the original. Users may need to restart 
> Spark sessions for long-running notebooks or to deal with Spark errors 
> without losing datasets that have been pulled back to the notebook.
> I think that Toree should always return the current Spark session by calling 
> {{SparkSession.builder.getOrCreate}}.





[jira] [Assigned] (TOREE-386) spark kernel `--name test` or `--conf spark.app.name=test` parameter to spark_opts is not applied

2017-02-27 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky reassigned TOREE-386:
---

Assignee: Jakob Odersky

> spark kernel `--name test` or `--conf spark.app.name=test` parameter to 
> spark_opts is not applied 
> --
>
> Key: TOREE-386
> URL: https://issues.apache.org/jira/browse/TOREE-386
> Project: TOREE
>  Issue Type: Bug
>Reporter: Sachin Aggarwal
>Assignee: Jakob Odersky
>
> this is my kernel.json
> {code}
> {
>   "language": "scala",
>   "display_name": "toree_special - Scala",
>   "env": {
> "SPARK_OPTS": "--name MyAPP --master yarn --deploy-mode client",
> "SPARK_HOME": "spark_home",
> "__TOREE_OPTS__": "",
> "DEFAULT_INTERPRETER": "Scala",
> "PYTHONPATH": "spark_home/python:spark_home/python/lib/py4j-0.9-src.zip",
> "PYTHON_EXEC": "python"
>   },
>   "argv": [
> "/root/.local/share/jupyter/kernels/toree_special_scala/bin/run.sh",
> "--profile",
> "{connection_file}"
>   ]
> }
> {code}
> the parameter that I added {color:red}--name MyAPP{color} is not applied; I 
> still see the app name in the YARN resource UI as {color:red}IBM Spark Kernel{color}.
> Update: in newer versions of Toree, {color:red}IBM Spark Kernel{color} has been 
> renamed to {color:red}Apache Toree{color}.





[jira] [Comment Edited] (TOREE-375) Incorrect fully qualified name for spark context

2017-02-21 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15876892#comment-15876892
 ] 

Jakob Odersky edited comment on TOREE-375 at 2/21/17 10:37 PM:
---

[~fschueler] so I'm actually not entirely sure what the cause of this bug is; 
however, I can confirm it is not specific to Toree.

It can be reproduced in a standard Spark shell as well:

{code}scala> :power
Power mode enabled. :phase is at typer.
import scala.tools.nsc._, intp.global._, definitions._
Try :help or completions for vals._ and power._

scala> power.intp.addUrlsToClassPath(jar)

scala> import org.apache.sysml.api.MLContext
import org.apache.sysml.api.MLContext

scala> val ml = new MLContext(sc)
:50: error: overloaded method constructor MLContext with alternatives:
  (x$1: 
org.apache.spark.api.java.JavaSparkContext)org.apache.sysml.api.MLContext 
  (x$1: 
org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext)org.apache.sysml.api.MLContext
 cannot be applied to 
(org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext)
   val ml = new MLContext(sc)
{code}


was (Author: jodersky):
[~fschueler] so I'm actually not entirely sure what the cause of this bug is; 
however, I can confirm it is not specific to Toree.

It can be reproduced in a standard Spark shell as well:

```
scala> :power
Power mode enabled. :phase is at typer.
import scala.tools.nsc._, intp.global._, definitions._
Try :help or completions for vals._ and power._

scala> power.intp.addUrlsToClassPath(jar)

scala> import org.apache.sysml.api.MLContext
import org.apache.sysml.api.MLContext

scala> val ml = new MLContext(sc)
:50: error: overloaded method constructor MLContext with alternatives:
  (x$1: 
org.apache.spark.api.java.JavaSparkContext)org.apache.sysml.api.MLContext 
  (x$1: 
org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext)org.apache.sysml.api.MLContext
 cannot be applied to 
(org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext)
   val ml = new MLContext(sc)
```

> Incorrect fully qualified name for spark context
> 
>
> Key: TOREE-375
> URL: https://issues.apache.org/jira/browse/TOREE-375
> Project: TOREE
>  Issue Type: Bug
> Environment: Jupyter Notebook with Toree latest master 
> (1a9c11f5f1381c15b691a716acd0e1f0432a9a35) and Spark 2.0.2, Scala 2.11
>Reporter: Felix Schüler
>Priority: Critical
>
> When running below snippet in a cell I get a compile error for the MLContext 
> Constructor. Somehow the fully qualified name of the SparkContext gets messed 
> up. 
> The same does not happen when I start a Spark shell with the --jars command 
> and create the MLContext there.
> Snippet (the systemml jar is built with the latest master of SystemML):
> {code}
> %addjar 
> file:///home/felix/repos/incubator-systemml/target/systemml-0.13.0-incubating-SNAPSHOT.jar
>  -f
> import org.apache.sysml.api.mlcontext._
> import org.apache.sysml.api.mlcontext.ScriptFactory._
> val ml = new MLContext(sc)
> Starting download from 
> file:///home/felix/repos/incubator-systemml/target/systemml-0.13.0-incubating-SNAPSHOT.jar
> Finished download of systemml-0.13.0-incubating-SNAPSHOT.jar
> Name: Compile Error
> Message: :25: error: overloaded method constructor MLContext with 
> alternatives:
>   (x$1: 
> org.apache.spark.api.java.JavaSparkContext)org.apache.sysml.api.mlcontext.MLContext
>  
>   (x$1: 
> org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext)org.apache.sysml.api.mlcontext.MLContext
>  cannot be applied to 
> (org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext)
>val ml = new MLContext(sc)
> ^
> StackTrace: 
> {code}





[jira] [Resolved] (TOREE-382) Revamp the sbt build and consolidate dependencies

2017-02-16 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky resolved TOREE-382.
-
Resolution: Fixed

> Revamp the sbt build and consolidate dependencies
> -
>
> Key: TOREE-382
> URL: https://issues.apache.org/jira/browse/TOREE-382
> Project: TOREE
>  Issue Type: Sub-task
>Reporter: Jakob Odersky
>Assignee: Jakob Odersky
>Priority: Minor
>






[jira] [Commented] (TOREE-386) toree spark kernel --name parameter to spark-submit is not applied

2017-02-15 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15868606#comment-15868606
 ] 

Jakob Odersky commented on TOREE-386:
-

Hmm, I'm not sure where the name is coming from; a grep over the Toree sources 
didn't show any mention of "IBM". On which version of Toree can this behaviour 
be observed?

> toree spark kernel --name parameter to spark-submit is not applied 
> ---
>
> Key: TOREE-386
> URL: https://issues.apache.org/jira/browse/TOREE-386
> Project: TOREE
>  Issue Type: Bug
>Reporter: Sachin Aggarwal
>
> this is my kernel.json
> {code}
> {
>   "language": "scala",
>   "display_name": "toree_special - Scala",
>   "env": {
> "SPARK_OPTS": "--name MyAPP --master yarn --deploy-mode client",
> "SPARK_HOME": "spark_home",
> "__TOREE_OPTS__": "",
> "DEFAULT_INTERPRETER": "Scala",
> "PYTHONPATH": "spark_home/python:spark_home/python/lib/py4j-0.9-src.zip",
> "PYTHON_EXEC": "python"
>   },
>   "argv": [
> "/root/.local/share/jupyter/kernels/toree_special_scala/bin/run.sh",
> "--profile",
> "{connection_file}"
>   ]
> }
> {code}
> the parameter that I added {color:red}--name MyAPP{color} is not applied; I 
> still see the app name in the YARN resource UI as {color:red}IBM Spark Kernel{color}.





[jira] [Updated] (TOREE-375) Incorrect fully qualified name for spark context

2017-02-15 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky updated TOREE-375:

Priority: Critical  (was: Major)

> Incorrect fully qualified name for spark context
> 
>
> Key: TOREE-375
> URL: https://issues.apache.org/jira/browse/TOREE-375
> Project: TOREE
>  Issue Type: Bug
> Environment: Jupyter Notebook with Toree latest master 
> (1a9c11f5f1381c15b691a716acd0e1f0432a9a35) and Spark 2.0.2, Scala 2.11
>Reporter: Felix Schüler
>Priority: Critical
>
> When running below snippet in a cell I get a compile error for the MLContext 
> Constructor. Somehow the fully qualified name of the SparkContext gets messed 
> up. 
> The same does not happen when I start a Spark shell with the --jars command 
> and create the MLContext there.
> Snippet (the systemml jar is built with the latest master of SystemML):
> {code}
> %addjar 
> file:///home/felix/repos/incubator-systemml/target/systemml-0.13.0-incubating-SNAPSHOT.jar
>  -f
> import org.apache.sysml.api.mlcontext._
> import org.apache.sysml.api.mlcontext.ScriptFactory._
> val ml = new MLContext(sc)
> Starting download from 
> file:///home/felix/repos/incubator-systemml/target/systemml-0.13.0-incubating-SNAPSHOT.jar
> Finished download of systemml-0.13.0-incubating-SNAPSHOT.jar
> Name: Compile Error
> Message: :25: error: overloaded method constructor MLContext with 
> alternatives:
>   (x$1: 
> org.apache.spark.api.java.JavaSparkContext)org.apache.sysml.api.mlcontext.MLContext
>  
>   (x$1: 
> org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext)org.apache.sysml.api.mlcontext.MLContext
>  cannot be applied to 
> (org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext)
>val ml = new MLContext(sc)
> ^
> StackTrace: 
> {code}





[jira] [Assigned] (TOREE-383) Fix flaky tests

2017-02-14 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky reassigned TOREE-383:
---

Assignee: Jakob Odersky

> Fix flaky tests
> ---
>
> Key: TOREE-383
> URL: https://issues.apache.org/jira/browse/TOREE-383
> Project: TOREE
>  Issue Type: Sub-task
>Reporter: Jakob Odersky
>Assignee: Jakob Odersky
>Priority: Minor
>






[jira] [Created] (TOREE-385) Refactor travis build to be runnable as a container (as opposed to a vm)

2017-02-14 Thread Jakob Odersky (JIRA)
Jakob Odersky created TOREE-385:
---

 Summary: Refactor travis build to be runnable as a container (as 
opposed to a vm)
 Key: TOREE-385
 URL: https://issues.apache.org/jira/browse/TOREE-385
 Project: TOREE
  Issue Type: Sub-task
Reporter: Jakob Odersky
Priority: Minor








[jira] [Created] (TOREE-383) Fix flaky tests

2017-02-14 Thread Jakob Odersky (JIRA)
Jakob Odersky created TOREE-383:
---

 Summary: Fix flaky tests
 Key: TOREE-383
 URL: https://issues.apache.org/jira/browse/TOREE-383
 Project: TOREE
  Issue Type: Sub-task
Reporter: Jakob Odersky
Priority: Minor








[jira] [Updated] (TOREE-382) Revamp the sbt build and consolidate dependencies

2017-02-14 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky updated TOREE-382:

Priority: Minor  (was: Major)

> Revamp the sbt build and consolidate dependencies
> -
>
> Key: TOREE-382
> URL: https://issues.apache.org/jira/browse/TOREE-382
> Project: TOREE
>  Issue Type: Sub-task
>Reporter: Jakob Odersky
>Priority: Minor
>






[jira] [Created] (TOREE-382) Revamp the sbt build and consolidate dependencies

2017-02-14 Thread Jakob Odersky (JIRA)
Jakob Odersky created TOREE-382:
---

 Summary: Revamp the sbt build and consolidate dependencies
 Key: TOREE-382
 URL: https://issues.apache.org/jira/browse/TOREE-382
 Project: TOREE
  Issue Type: Sub-task
Reporter: Jakob Odersky








[jira] [Created] (TOREE-381) Revamp the build

2017-02-14 Thread Jakob Odersky (JIRA)
Jakob Odersky created TOREE-381:
---

 Summary: Revamp the build
 Key: TOREE-381
 URL: https://issues.apache.org/jira/browse/TOREE-381
 Project: TOREE
  Issue Type: Improvement
Reporter: Jakob Odersky
Priority: Minor


Toree's build is quite complex and has flaky tests. The complexity is partly 
inherent to the nature of the project, but it is also due to features that have 
accumulated over time and are no longer used.

In an attempt to improve developer productivity, I propose a 3-part plan to 
increase build stability and speed:

1. Revamp the sbt build by consolidating dependencies and refactoring the 
configuration to the current best practices (specifically moving common 
behaviour out of project/common.scala and into build-wide settings and 
auto-plugins)

2. Fix the flaky tests that currently fail due to classloader issues.

3. Refactor the travis build configuration to be runnable as a container (this 
should greatly improve CI throughput and latency).
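For the first part, a minimal sketch of the auto-plugin approach could look as 
follows (the plugin name and settings are illustrative, not Toree's actual 
build code):

{code}
// project/CommonSettingsPlugin.scala -- hypothetical name. An AutoPlugin with
// trigger = allRequirements is applied to every subproject automatically, so
// common settings no longer need to be mixed in from project/common.scala.
import sbt._
import sbt.Keys._

object CommonSettingsPlugin extends AutoPlugin {
  override def requires = plugins.JvmPlugin
  override def trigger  = allRequirements

  // build-wide defaults, picked up by every subproject
  override def projectSettings: Seq[Setting[_]] = Seq(
    organization := "org.example",
    scalacOptions ++= Seq("-deprecation", "-feature")
  )
}
{code}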





[jira] [Closed] (TOREE-372) stream corruption caused by big-endian and little-endian

2017-02-11 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky closed TOREE-372.
---
Resolution: Won't Fix

The issue isn't related to Toree. However, a mixed-endian environment may be 
possible by running Toree in YARN-client mode once TOREE-369 is implemented.
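To illustrate the class of problem (a generic JVM example, unrelated to 
Toree's actual wire code): the same four bytes decode to different integers 
depending on the declared byte order.

{code}
import java.nio.{ByteBuffer, ByteOrder}

object EndianDemo extends App {
  val bytes = Array[Byte](0x00, 0x00, 0x00, 0x01)

  // Network order (and z/OS native order) is big-endian...
  val big = ByteBuffer.wrap(bytes).order(ByteOrder.BIG_ENDIAN).getInt
  // ...while x86 is little-endian.
  val little = ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN).getInt

  assert(big == 1)
  assert(little == 0x01000000) // 16777216: same bytes, different value
}
{code}

If both sides agree on an explicit order when encoding and decoding, the 
platform's native endianness becomes irrelevant.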

> stream corruption caused by big-endian and little-endian
> ---
>
> Key: TOREE-372
> URL: https://issues.apache.org/jira/browse/TOREE-372
> Project: TOREE
>  Issue Type: Bug
>Reporter: Wang enzhong
>Priority: Critical
>
> We currently run spark on a z/OS system, which is a big-endian platform, and 
> jupyter+toree on an x86 platform, which is a little-endian platform.  The 
> output from spark is unreadable due to the different byte order. 
> If we use spark on z/OS and jupyter+toree on another big-endian platform, 
> there is no such error. 
> I've done some investigation and it seems toree leverages Akka's ByteString, 
> which handles endianness, so we don't know why toree does not work in our 
> case. 
> Please help to look into the problem. Due to the tight project schedule, it 
> will be much appreciated if you can give us some advice on how to fix or 
> avoid the problem if it will take some time to change the code. Many thanks 
> in advance. 





[jira] [Closed] (TOREE-361) Spark examples that use Spark 2 fail because docker image contains 1.6

2017-02-11 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky closed TOREE-361.
---
Resolution: Fixed

Fixed in a pull request.

> Spark examples that use Spark 2 fail because docker image contains 1.6
> --
>
> Key: TOREE-361
> URL: https://issues.apache.org/jira/browse/TOREE-361
> Project: TOREE
>  Issue Type: Bug
>Reporter: Kevin Bates
>Priority: Minor
>  Labels: build, easyfix, newbie
>
> Spark2-dependent examples (magic-tutorial.ipynb) don't work because the 
> docker image referenced in the Makefile contains Spark 1.6. As a result, 
> issues occur with import of spark.implicits._ (and SparkSession references).  
> Workaround: override with specific tag of more recent image (e.g.,  `make dev 
> IMAGE=jupyter/all-spark-notebook:2410ad57203a`) based on examination at 
> https://github.com/jupyter/docker-stacks/wiki/Docker-build-history.
> The default (no override) should render working examples.





[jira] [Commented] (TOREE-354) Scala Error with Apache Spark when run in Jupyter

2017-02-09 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860179#comment-15860179
 ] 

Jakob Odersky commented on TOREE-354:
-

Can you try a `pip install --pre `? By default, pip will not install 
development versions if a stable version is already installed.

> Scala Error with Apache Spark when run in Jupyter
> -
>
> Key: TOREE-354
> URL: https://issues.apache.org/jira/browse/TOREE-354
> Project: TOREE
>  Issue Type: Bug
>Affects Versions: 0.1.0
> Environment: Apache Spark 2.0.2
> Scala 2.11(Built with Apache Spark by default)
>Reporter: Ming Yu
>  Labels: jupyter, scala, spark-shell
> Fix For: 0.1.0
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> I'm having problems running Scala Spark on Jupyter. Below is my error message 
> when I load an Apache Toree - Scala notebook in Jupyter.
> {noformat}
> root@ubuntu-2gb-sgp1-01:~# jupyter notebook --ip 0.0.0.0 --port 
> [I 03:14:54.281 NotebookApp] Serving notebooks from local directory: /root
> [I 03:14:54.281 NotebookApp] 0 active kernels
> [I 03:14:54.281 NotebookApp] The Jupyter Notebook is running at: 
> http://0.0.0.0:/
> [I 03:14:54.281 NotebookApp] Use Control-C to stop this server and shut down 
> all kernels (twice to skip confirmation).
> [W 03:14:54.282 NotebookApp] No web browser found: could not locate runnable 
> browser.
> [I 03:15:09.976 NotebookApp] 302 GET / (61.6.68.44) 1.21ms
> [I 03:15:15.924 NotebookApp] Creating new notebook in
> [W 03:15:16.592 NotebookApp] 404 GET 
> /nbextensions/widgets/notebook/js/extension.js?v=20161120031454 (61.6.68.44) 
> 15.49ms 
> referer=http://188.166.235.21:/notebooks/Untitled2.ipynb?kernel_name=apache_toree_scala
> [I 03:15:16.677 NotebookApp] Kernel started: 
> 94a63354-d294-4de7-a12c-2e05905e0c45
> Starting Spark Kernel with SPARK_HOME=/usr/local/spark
> 16/11/20 03:15:18 [INFO] o.a.t.Main$$anon$1 - Kernel version: 
> 0.1.0.dev8-incubating-SNAPSHOT
> 16/11/20 03:15:18 [INFO] o.a.t.Main$$anon$1 - Scala version: Some(2.10.4)
> 16/11/20 03:15:18 [INFO] o.a.t.Main$$anon$1 - ZeroMQ (JeroMQ) version: 3.2.2
> 16/11/20 03:15:18 [INFO] o.a.t.Main$$anon$1 - Initializing internal actor 
> system
> Exception in thread "main" java.lang.NoSuchMethodError: 
> scala.collection.immutable.HashSet$.empty()Lscala/collection/immutable/HashSet;
> at akka.actor.ActorCell$.(ActorCell.scala:336)
> at akka.actor.ActorCell$.(ActorCell.scala)
> at akka.actor.RootActorPath.$div(ActorPath.scala:185)
> at akka.actor.LocalActorRefProvider.(ActorRefProvider.scala:465)
> at akka.actor.LocalActorRefProvider.(ActorRefProvider.scala:453)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:78)
> at scala.util.Try$.apply(Try.scala:192)
> at 
> akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:73)
> at 
> akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$3.apply(DynamicAccess.scala:84)
> at 
> akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$3.apply(DynamicAccess.scala:84)
> at scala.util.Success.flatMap(Try.scala:231)
> at 
> akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:84)
> at akka.actor.ActorSystemImpl.liftedTree1$1(ActorSystem.scala:585)
> at akka.actor.ActorSystemImpl.(ActorSystem.scala:578)
> at akka.actor.ActorSystem$.apply(ActorSystem.scala:142)
> at akka.actor.ActorSystem$.apply(ActorSystem.scala:109)
> at 
> org.apache.toree.boot.layer.StandardBareInitialization$class.createActorSystem(BareInitialization.scala:71)
> at org.apache.toree.Main$$anon$1.createActorSystem(Main.scala:35)
> at 
> org.apache.toree.boot.layer.StandardBareInitialization$class.initializeBare(BareInitialization.scala:60)
> at org.apache.toree.Main$$anon$1.initializeBare(Main.scala:35)
> at 
> org.apache.toree.boot.KernelBootstrap.initialize(KernelBootstrap.scala:72)
> at org.apache.toree.Main$delayedInit$body.apply(Main.scala:40)
> at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
> at 
> scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
> at scala.App$$anonfun$main$1.apply(App.scala:76)
> at 

[jira] [Closed] (TOREE-354) Scala Error with Apache Spark when run in Jupyter

2017-02-07 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky closed TOREE-354.
---
Resolution: Not A Problem

There is a mismatch between the Scala versions used by Toree and Spark.

The output of Toree shows it is using Scala 2.10, yet the Spark shell shows 
that Spark was compiled with 2.11.

The easiest way to fix this is to upgrade Toree to the latest master. 
Alternatively, you can [build Spark with Scala 
2.10|http://spark.apache.org/docs/latest/building-spark.html#building-for-scala-210]
 but I would recommend against that, since that version of Scala has reached 
its end of life.
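As a quick sanity check (an illustrative snippet, not Toree-specific), the 
Scala version of the running shell or kernel can be printed from a cell:

{code}
// Prints the Scala library version in use, e.g. "2.10.4" or "2.11.8";
// this should match the version Spark was compiled against.
println(scala.util.Properties.versionNumberString)
{code}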

> Scala Error with Apache Spark when run in Jupyter
> -
>
> Key: TOREE-354
> URL: https://issues.apache.org/jira/browse/TOREE-354
> Project: TOREE
>  Issue Type: Bug
>Affects Versions: 0.1.0
> Environment: Apache Spark 2.0.2
> Scala 2.11(Built with Apache Spark by default)
>Reporter: Ming Yu
>  Labels: jupyter, scala, spark-shell
> Fix For: 0.1.0
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> I'm having problems running Scala Spark on Jupyter. Below is my error message 
> when I load an Apache Toree - Scala notebook in Jupyter.
> {noformat}
> root@ubuntu-2gb-sgp1-01:~# jupyter notebook --ip 0.0.0.0 --port 
> [I 03:14:54.281 NotebookApp] Serving notebooks from local directory: /root
> [I 03:14:54.281 NotebookApp] 0 active kernels
> [I 03:14:54.281 NotebookApp] The Jupyter Notebook is running at: 
> http://0.0.0.0:/
> [I 03:14:54.281 NotebookApp] Use Control-C to stop this server and shut down 
> all kernels (twice to skip confirmation).
> [W 03:14:54.282 NotebookApp] No web browser found: could not locate runnable 
> browser.
> [I 03:15:09.976 NotebookApp] 302 GET / (61.6.68.44) 1.21ms
> [I 03:15:15.924 NotebookApp] Creating new notebook in
> [W 03:15:16.592 NotebookApp] 404 GET 
> /nbextensions/widgets/notebook/js/extension.js?v=20161120031454 (61.6.68.44) 
> 15.49ms 
> referer=http://188.166.235.21:/notebooks/Untitled2.ipynb?kernel_name=apache_toree_scala
> [I 03:15:16.677 NotebookApp] Kernel started: 
> 94a63354-d294-4de7-a12c-2e05905e0c45
> Starting Spark Kernel with SPARK_HOME=/usr/local/spark
> 16/11/20 03:15:18 [INFO] o.a.t.Main$$anon$1 - Kernel version: 
> 0.1.0.dev8-incubating-SNAPSHOT
> 16/11/20 03:15:18 [INFO] o.a.t.Main$$anon$1 - Scala version: Some(2.10.4)
> 16/11/20 03:15:18 [INFO] o.a.t.Main$$anon$1 - ZeroMQ (JeroMQ) version: 3.2.2
> 16/11/20 03:15:18 [INFO] o.a.t.Main$$anon$1 - Initializing internal actor 
> system
> Exception in thread "main" java.lang.NoSuchMethodError: 
> scala.collection.immutable.HashSet$.empty()Lscala/collection/immutable/HashSet;
> at akka.actor.ActorCell$.(ActorCell.scala:336)
> at akka.actor.ActorCell$.(ActorCell.scala)
> at akka.actor.RootActorPath.$div(ActorPath.scala:185)
> at akka.actor.LocalActorRefProvider.(ActorRefProvider.scala:465)
> at akka.actor.LocalActorRefProvider.(ActorRefProvider.scala:453)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:78)
> at scala.util.Try$.apply(Try.scala:192)
> at 
> akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:73)
> at 
> akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$3.apply(DynamicAccess.scala:84)
> at 
> akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$3.apply(DynamicAccess.scala:84)
> at scala.util.Success.flatMap(Try.scala:231)
> at 
> akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:84)
> at akka.actor.ActorSystemImpl.liftedTree1$1(ActorSystem.scala:585)
> at akka.actor.ActorSystemImpl.(ActorSystem.scala:578)
> at akka.actor.ActorSystem$.apply(ActorSystem.scala:142)
> at akka.actor.ActorSystem$.apply(ActorSystem.scala:109)
> at 
> org.apache.toree.boot.layer.StandardBareInitialization$class.createActorSystem(BareInitialization.scala:71)
> at org.apache.toree.Main$$anon$1.createActorSystem(Main.scala:35)
> at 
> org.apache.toree.boot.layer.StandardBareInitialization$class.initializeBare(BareInitialization.scala:60)
> at org.apache.toree.Main$$anon$1.initializeBare(Main.scala:35)
> at 
> org.apache.toree.boot.KernelBootstrap.initialize(KernelBootstrap.scala:72)
> at 

[jira] [Commented] (TOREE-363) Syntax Highlighting Breaks

2017-02-07 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15857140#comment-15857140
 ] 

Jakob Odersky commented on TOREE-363:
-

AFAIK syntax highlighting is not handled by the kernels themselves; they only 
pass a hint to the notebook, telling it which language they use.

How often does this happen? I am unable to reproduce it.
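For reference, that hint is carried in the `language_info` field of the 
kernel's `kernel_info_reply` message (field names per the Jupyter messaging 
protocol; the values below are illustrative), and the frontend's CodeMirror 
highlighter keys off `codemirror_mode`:

{code}
"language_info": {
  "name": "scala",
  "mimetype": "text/x-scala",
  "file_extension": ".scala",
  "codemirror_mode": "text/x-scala"
}
{code}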

> Syntax Highlighting Breaks
> --
>
> Key: TOREE-363
> URL: https://issues.apache.org/jira/browse/TOREE-363
> Project: TOREE
>  Issue Type: Bug
>Reporter: Aleksei Aleksinov
>Priority: Minor
> Attachments: Screen Shot 2017-01-26 at 18.40.54.png
>
>
> Syntax highlighting breaks after a while of use, but Scala code execution 
> and Spark jobs continue to work well.
> Affects Toree 0.2.0-dev1
> Jupyter 4.2.1
> Scala 2.12.1 or 2.11.8
> Spark 2.1.0
> Python 3.6





[jira] [Closed] (TOREE-281) Fix some typos in comments/docs/testnames.

2017-02-07 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky closed TOREE-281.
---
Resolution: Fixed

> Fix some typos in comments/docs/testnames.
> --
>
> Key: TOREE-281
> URL: https://issues.apache.org/jira/browse/TOREE-281
> Project: TOREE
>  Issue Type: Bug
>Reporter: Dongjoon Hyun
>Priority: Trivial
>
> There exist some minor typos like the following in IPython notebooks, 
> comments, and testcase names.
> {code}
> //  TODO Handle the case where there is no [-delimeter-]{+delimiter+}
> "This example is a Scala [-adapatation-]{+adaptation+} of 
> [this](https://github.com/jupyter-incubator/dashboards/blob/master/etc/notebooks/stream_demo/meetup-streaming.ipynb)
>  notebook from 
> [jupyter_dashboards](https://github.com/jupyter-incubator/dashboards).\n",
> "In Toree, declarativewidgets need to be initialized by adding the JAR 
> with the scala [-implamentation-]{+implementation+} and calling 
> `initWidgets`. This is must take place very close to the top of the notebook."
> "The rest of the call are [-methos-]{+methods+} for starting and stopping 
> the streaming application as well as functions that define the streaming 
> flow."
> # This file is populated when doing a make release. It should be empty by 
> [-defaut.-]{+default.+}
> it("should truncate or not [-turncate-]{+truncate+} based on %truncate") {
>  * Contains helpers and [-contants-]{+constants+} associated with the 
> dependency manager.
> // [-Overriden-]{+Overridden+} to link before sending open message
> //  Single property fields are not well supported by play, this is a little 
> funky workaround [-founde-]{+found+} here:
> #' Returns the [-dimentions-]{+dimensions+} (number of rows and columns) of a 
> DataFrame
> # @return a new RDD created by performing the simple union 
> [-(witout-]{+(without+} removing
>   # Allow the user to have a more flexible [-definiton-]{+definition+} of the 
> text file path
>   # Allow the user to have a more flexible [-definiton-]{+definition+} of the 
> text file path
>   # Allow the user to have a more flexible [-definiton-]{+definition+} of the 
> text file path
>   # Allow the user to have a more flexible [-definiton-]{+definition+} of the 
> text file path
>   # that [-termintates-]{+terminates+} when the next row is empty.
> #   \item mergeCombiners, to combine two C's into a single one (e.g., 
> [-concatentates-]{+concatenates+}
>   # "y" should be in the [-environemnt-]{+environment+} of g.
> {code}





[jira] [Closed] (TOREE-373) scala 2.11 library incompatible with pyspark 2.1.0

2017-02-07 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky closed TOREE-373.
---
Resolution: Not A Bug

+1 to Marius' answer. I would be surprised if it worked with 2.12, though, 
considering it is binary incompatible with 2.11.

> scala 2.11 library incompatible with pyspark 2.1.0
> --
>
> Key: TOREE-373
> URL: https://issues.apache.org/jira/browse/TOREE-373
> Project: TOREE
>  Issue Type: Bug
>Affects Versions: 0.1.0
> Environment: 64 bit Ubuntu 16.04 LTS
>Reporter: Peter
>  Labels: pyspark, scala
> Fix For: 0.1.0
>
>
> When downgraded to 2.10 or upgraded to 2.12, I think it will work fine, 
> but some testing is required.
> Refer to the following stackexchange note:
> http://stackoverflow.com/questions/29339005/run-main-0-java-lang-nosuchmethoderror





[jira] [Comment Edited] (TOREE-374) Variables declared on the Notebook are not garbage collected

2017-02-07 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15856880#comment-15856880
 ] 

Jakob Odersky edited comment on TOREE-374 at 2/7/17 9:57 PM:
-

Thanks for the reproduction steps, [~dtaieb].

Results aren't garbage collected because of the way the Scala 
REPL represents evaluated expressions. This is true both in the standard 
object-wrapped REPL and in the class-based one used by Toree.

Unfortunately I can't think of an easy fix that we could quickly get into 
upstream. How severe is this bug in a typical use-case? Are cells re-evaluated 
enough without a kernel restart, for it to become a problem?

[~mariusvniekerk] do you know if/how zeppelin has worked around this?
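The retention can be sketched as follows (a simplified model of the REPL's 
wrapping, not Toree code): each evaluated line becomes a wrapper instance 
whose val pins the result, and the interpreter's history keeps every wrapper 
alive.

{code}
import java.lang.ref.WeakReference
import scala.collection.mutable.ListBuffer

object ReplRetentionSketch extends App {
  // Simplified stand-in for a REPL line wrapper: the result lives in a val.
  class LineWrapper { val res: Array[Byte] = new Array[Byte](1 << 20) }

  val history = ListBuffer.empty[LineWrapper]

  val first = new LineWrapper
  history += first
  val weak = new WeakReference(first.res)

  // "Re-running the cell" creates a new wrapper, but the old one is still
  // referenced from the interpreter's history, so its result can never be
  // collected.
  history += new LineWrapper
  System.gc()
  assert(weak.get != null) // still strongly reachable through history
}
{code}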


was (Author: jodersky):
Thanks for the reproduction steps, [~dtaieb].

Results aren't garbage collected because of the way the Scala 
REPL represents evaluated expressions. This is true both in the standard 
object-wrapped REPL and in the class-based one used by Toree.

Unfortunately I can't think of an easy fix that we could quickly get into 
upstream. How severe is this bug in a typical use-case? Are cells reevaluated 
enough without a kernel restart for it to become a problem?

[~mariusvniekerk] do you know if/how zeppelin has worked around this?

> Variables declared on the Notebook are not garbage collected
> 
>
> Key: TOREE-374
> URL: https://issues.apache.org/jira/browse/TOREE-374
> Project: TOREE
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: David Taieb
>
> I'm not sure if it's a bug or a limitation of the underlying scala REPL.
> As part of supporting PixieDust (https://github.com/ibm-cds-labs/pixiedust) 
> auto-visualization feature within Scala gateway, I have implemented a weak 
> hashmap that tracks objects declared on the Scala REPL. However, I have found 
> that objects are not correctly gc'ed when an object is declared in a cell 
> with the val or var keyword and then the cell is run again. One would expect 
> that the original object has no more references and should be gc'ed, but it's 
> not. 
> However, when the object is declared with the var keyword and then set to 
> null in another cell, it is correctly gc'ed.
> I'm concerned that users who run the same cell multiple times would 
> unwittingly have memory leaks which can eventually lead to OOM errors.





[jira] [Commented] (TOREE-375) Incorrect fully qualified name for spark context

2017-02-07 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15856668#comment-15856668
 ] 

Jakob Odersky commented on TOREE-375:
-

I checked the implementation of `valueOfTerm` in the Scala source and it does 
in fact seem that they assume a non-class-based representation of results when 
looking up symbols. I'll escalate this issue.

> Incorrect fully qualified name for spark context
> 
>
> Key: TOREE-375
> URL: https://issues.apache.org/jira/browse/TOREE-375
> Project: TOREE
>  Issue Type: Bug
> Environment: Jupyter Notebook with Toree latest master 
> (1a9c11f5f1381c15b691a716acd0e1f0432a9a35) and Spark 2.0.2, Scala 2.11
>Reporter: Felix Schüler
>
> When running below snippet in a cell I get a compile error for the MLContext 
> Constructor. Somehow the fully qualified name of the SparkContext gets messed 
> up. 
> The same does not happen when I start a Spark shell with the --jars command 
> and create the MLContext there.
> Snippet (the systemml jar is build with the latest master of SystemML):
> {code}
> %addjar 
> file:///home/felix/repos/incubator-systemml/target/systemml-0.13.0-incubating-SNAPSHOT.jar
>  -f
> import org.apache.sysml.api.mlcontext._
> import org.apache.sysml.api.mlcontext.ScriptFactory._
> val ml = new MLContext(sc)
> Starting download from 
> file:///home/felix/repos/incubator-systemml/target/systemml-0.13.0-incubating-SNAPSHOT.jar
> Finished download of systemml-0.13.0-incubating-SNAPSHOT.jar
> Name: Compile Error
> Message: :25: error: overloaded method constructor MLContext with 
> alternatives:
>   (x$1: 
> org.apache.spark.api.java.JavaSparkContext)org.apache.sysml.api.mlcontext.MLContext
>  
>   (x$1: 
> org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext)org.apache.sysml.api.mlcontext.MLContext
>  cannot be applied to 
> (org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext)
>val ml = new MLContext(sc)
> ^
> StackTrace: 
> {code}





[jira] [Closed] (TOREE-365) Certain interpreter evaluations do not return result strings

2017-02-06 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky closed TOREE-365.
---
Resolution: Won't Fix

Closing as discussed in the pull request. See TOREE-368 for continuation.

> Certain interpreter evaluations do not return result strings
> 
>
> Key: TOREE-365
> URL: https://issues.apache.org/jira/browse/TOREE-365
> Project: TOREE
>  Issue Type: Bug
>Reporter: Jakob Odersky
>
> The scala interpreter currently only returns results for expressions. Import 
> statements and declarations will not show up as results in a notebook 
> (although they are evaluated internally).
> This behaviour is related to the 
> [ScalaInterpreter#truncateResult|https://github.com/apache/incubator-toree/blob/master/scala-interpreter/src/main/scala/org/apache/toree/kernel/interpreter/scala/ScalaInterpreter.scala#L165-L187]
>  function. This function runs the result string of a REPL line through a 
> regex, in order to remove the "resX:" part. The function returns the empty 
> string in case the line does not start with "resX:", therefore returning an 
> empty string for import statements and other declarations. This can have 
> several subtle side effects, such as TOREE-340, or a toree client never 
> completing the "onResult" callback.
> A quick fix to this issue is to return the result string as-is if it does not 
> start with "resX".
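A hedged sketch of that quick fix (hypothetical code, not the actual 
ScalaInterpreter implementation): strip the "resX: type = " prefix when 
present, but fall back to the raw string instead of the empty string 
otherwise.

{code}
object TruncateSketch {
  // Matches lines of the form "res0: Int = 42" and captures the value part.
  private val ResultRegex = """(?s)^res\d+:\s*[^=]+=\s*(.*)$""".r

  def truncateResult(line: String): String = line match {
    case ResultRegex(value) => value.trim
    case other              => other.trim // quick fix: return as-is, not ""
  }
}

// TruncateSketch.truncateResult("res0: Int = 42") == "42"
// TruncateSketch.truncateResult("import scala.util.Try") == "import scala.util.Try"
{code}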





[jira] [Commented] (TOREE-374) Variables declared on the Notebook are not garbage collected

2017-02-06 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15855081#comment-15855081
 ] 

Jakob Odersky commented on TOREE-374:
-

[~dtaieb] Could you provide some steps to reproduce this?

> Variables declared on the Notebook are not garbage collected
> 
>
> Key: TOREE-374
> URL: https://issues.apache.org/jira/browse/TOREE-374
> Project: TOREE
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: David Taieb
>
> I'm not sure if it's a bug or a limitation of the underlying scala REPL.
> As part of supporting PixieDust (https://github.com/ibm-cds-labs/pixiedust) 
> auto-visualization feature within Scala gateway, I have implemented a weak 
> hashmap that tracks objects declared on the Scala REPL. However, I have found 
> that objects are not correctly gc'ed when an object is declared in a cell 
> with the val or var keyword and then the cell is run again. One would expect 
> that the original object has no more references and should be gc'ed, but it's 
> not. 
> However, when the object is declared with the var keyword and then set to 
> null in another cell, it is correctly gc'ed.
> I'm concerned that users who run the same cell multiple times would 
> unwittingly have memory leaks which can eventually lead to OOM errors.





[jira] [Comment Edited] (TOREE-374) Variables declared on the Notebook are not garbage collected

2017-02-06 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15855055#comment-15855055
 ] 

Jakob Odersky edited comment on TOREE-374 at 2/7/17 12:47 AM:
--

Hmm, I wonder if this is related to the -Yrepl-class-based setting of the REPL. 
This setting is a new/experimental feature of the Scala interpreter, but it is 
required for Spark to work.

[~dtaieb] Can you try running your test code in a plain scala repl? Can you 
observe the same memory leakage?

[~mariusvniekerk] I don't know anything about the comm api, how would that work 
here?


was (Author: jodersky):
Hmm, I wonder if this is related to the -Yrepl-class-based setting of the REPL. 
This setting is a new/experimental feature of the Scala interpreter, but it is 
required for Spark to work.

[~mariusvniekerk] I don't know anything about the comm api, how would that work 
here?

> Variables declared on the Notebook are not garbage collected
> 
>
> Key: TOREE-374
> URL: https://issues.apache.org/jira/browse/TOREE-374
> Project: TOREE
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: David Taieb
>
> I'm not sure if it's a bug or a limitation of the underlying scala REPL.
> As part of supporting PixieDust (https://github.com/ibm-cds-labs/pixiedust) 
> auto-visualization feature within Scala gateway, I have implemented a weak 
> hashmap that tracks objects declared on the Scala REPL. However, I have found 
> that objects are not correctly gc'ed when an object is declared in a cell 
> with the val or var keyword and then the cell is run again. One would expect 
> that the original object has no more references and should be gc'ed, but it's 
> not. 
> However, when the object is declared with the var keyword and then set to 
> null in another cell, it is correctly gc'ed.
> I'm concerned that users who run the same cell multiple times would 
> unwittingly have memory leaks which can eventually lead to OOM errors.





[jira] [Commented] (TOREE-374) Variables declared on the Notebook are not garbage collected

2017-02-06 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15855055#comment-15855055
 ] 

Jakob Odersky commented on TOREE-374:
-

Hmm, I wonder if this is related to the -Yrepl-class-based setting of the REPL. 
This setting is a new/experimental feature of the Scala interpreter, but it is 
required for Spark to work.

[~mariusvniekerk] I don't know anything about the comm api, how would that work 
here?

> Variables declared on the Notebook are not garbage collected
> 
>
> Key: TOREE-374
> URL: https://issues.apache.org/jira/browse/TOREE-374
> Project: TOREE
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: David Taieb
>
> I'm not sure if it's a bug or a limitation of the underlying scala REPL.
> As part of supporting PixieDust (https://github.com/ibm-cds-labs/pixiedust) 
> auto-visualization feature within Scala gateway, I have implemented a weak 
> hashmap that tracks objects declared on the Scala REPL. However, I have found 
> that objects are not correctly gc'ed when an object is declared in a cell 
> with the val or var keyword and then the cell is run again. One would expect 
> that the original object has no more references and should be gc'ed, but it's 
> not. 
> However, when the object is declared with the var keyword and then set to 
> null in another cell, it is correctly gc'ed.
> I'm concerned that users who run the same cell multiple times would 
> unwittingly have memory leaks which can eventually lead to OOM errors.





[jira] [Comment Edited] (TOREE-375) Incorrect fully qualified name for spark context

2017-02-06 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15855019#comment-15855019
 ] 

Jakob Odersky edited comment on TOREE-375 at 2/7/17 12:16 AM:
--

-Yrepl-class-based strikes again.

I managed to track this down to the 
[refreshDefinitions()|https://github.com/apache/incubator-toree/blob/master/scala-interpreter/src/main/scala-2.11/org/apache/toree/kernel/interpreter/scala/ScalaInterpreterSpecific.scala#L79-L97]
 function that is called when jars are added dynamically. It appears that the 
`valueOfTerm` method does not find a value associated to any variables. The 
following snippet illustrates that this behaviour only happens when the 
`-Yrepl-class-based` option is set in the repl (the option is required for 
Spark to correctly serialize objects).

{code}
import scala.tools.nsc.Settings
import scala.tools.nsc.interpreter._

object Main extends App {
  val settings = new Settings
  settings.usejavacp.value = true
  //settings.Yreplclassbased.value = true

  val iMain: IMain = new IMain(settings)
  iMain.initializeSynchronous()

  iMain.interpret("val x = 1")

  iMain.definedTerms.foreach { name =>
println("defined term: " + name.toString)
iMain.valueOfTerm(name.toString) match {
  case Some(value) => println("value: " + value)
  case None => println("no value")
}
  }

}
{code}

The above code yields:
{code}
[info] Running foo.Main 
[info] x: Int = 1
[info] defined term: x
[info] value: 1
{code}

When setting -Yrepl-class-based by uncommenting the line above:
{code}
[info] Running foo.Main 
[info] x: Int = 1
[info] defined term: x
[info] no value
{code}


was (Author: jodersky):
-Yrepl-class-based strikes again.

I managed to track this down to the 
[refreshDefinitions()|https://github.com/apache/incubator-toree/blob/master/scala-interpreter/src/main/scala-2.11/org/apache/toree/kernel/interpreter/scala/ScalaInterpreterSpecific.scala#L79-L97]
 function that is called when jars are added dynamically. It appears that the 
`valueOfTerm` method does not find a value associated with any variables. The 
following snippet illustrates that this behaviour only happens when the 
`-Yrepl-class-based` option is set in the repl (the option is required for 
Spark to correctly serialize objects).

{code}
object Main extends App {
  val settings = new Settings
  settings.usejavacp.value = true
  //settings.Yreplclassbased.value = true

  val iMain: IMain = new IMain(settings)
  iMain.initializeSynchronous()

  iMain.interpret("val x = 1")

  iMain.definedTerms.foreach { name =>
println("defined term: " + name.toString)
iMain.valueOfTerm(name.toString) match {
  case Some(value) => println("value: " + value)
  case None => println("no value")
}
  }

}
{code}

The above code yields:
{code}
[info] Running foo.Main 
[info] x: Int = 1
[info] defined term: x
[info] value: 1
{code}

When setting -Yrepl-class-based by uncommenting the line above:
{code}
[info] Running foo.Main 
[info] x: Int = 1
[info] defined term: x
[info] no value
{code}

> Incorrect fully qualified name for spark context
> 
>
> Key: TOREE-375
> URL: https://issues.apache.org/jira/browse/TOREE-375
> Project: TOREE
>  Issue Type: Bug
> Environment: Jupyter Notebook with Toree latest master 
> (1a9c11f5f1381c15b691a716acd0e1f0432a9a35) and Spark 2.0.2, Scala 2.11
>Reporter: Felix Schüler
>
> When running the snippet below in a cell, I get a compile error for the 
> MLContext constructor. Somehow the fully qualified name of the SparkContext 
> gets mangled. 
> The same does not happen when I start a Spark shell with the --jars option 
> and create the MLContext there.
> Snippet (the systemml jar is built with the latest master of SystemML):
> {code}
> %addjar 
> file:///home/felix/repos/incubator-systemml/target/systemml-0.13.0-incubating-SNAPSHOT.jar
>  -f
> import org.apache.sysml.api.mlcontext._
> import org.apache.sysml.api.mlcontext.ScriptFactory._
> val ml = new MLContext(sc)
> Starting download from 
> file:///home/felix/repos/incubator-systemml/target/systemml-0.13.0-incubating-SNAPSHOT.jar
> Finished download of systemml-0.13.0-incubating-SNAPSHOT.jar
> Name: Compile Error
> Message: :25: error: overloaded method constructor MLContext with 
> alternatives:
>   (x$1: 
> org.apache.spark.api.java.JavaSparkContext)org.apache.sysml.api.mlcontext.MLContext
>  
>   (x$1: 
> org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext)org.apache.sysml.api.mlcontext.MLContext
>  cannot be applied to 
> (org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.org.apache.spark.SparkContext)
>val ml = new MLContext(sc)
> ^
> StackTrace: 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (TOREE-371) $SPARK_HOME environment variable not recognised

2017-02-06 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15854859#comment-15854859
 ] 

Jakob Odersky commented on TOREE-371:
-

I'm not sure how changing the default value fixes the issue; however, I'm glad 
it worked for you. Would you like to open a pull request?

> $SPARK_HOME environment variable not recognised
> ---
>
> Key: TOREE-371
> URL: https://issues.apache.org/jira/browse/TOREE-371
> Project: TOREE
>  Issue Type: Bug
>Affects Versions: 0.1.0
> Environment: 64 bit Ubuntu 16.04 LTS
>Reporter: Peter
>
> Hi
> toreeapp.py is not recognising $SPARK_HOME, which is exported in .bashrc. 
> toreeapp.py is instead falling back to a default Spark location.
> $SPARK_HOME is read on line 57 of toreeapp.py as follows:
> spark_home = Unicode(os.getenv('SPARK_HOME', '/usr/local/spark'))
> $ echo $SPARK_HOME
> /usr/lib/spark
> $ sudo jupyter toree install --spark_opts='--master=local[4]'
> [ToreeInstall] Installing Apache Toree version 0.1.0.dev8
> [ToreeInstall] 
> Apache Toree is an effort undergoing incubation at the Apache Software
> Foundation (ASF), sponsored by the Apache Incubator PMC.
> Incubation is required of all newly accepted projects until a further review
> indicates that the infrastructure, communications, and decision making process
> have stabilized in a manner consistent with other successful ASF projects.
> While incubation status is not necessarily a reflection of the completeness
> or stability of the code, it does indicate that the project has yet to be
> fully endorsed by the ASF.
> Additionally, this release is not fully compliant with Apache release policy
> and includes a runtime dependency that is licensed as LGPL v3 (plus a static
> linking exception). This package is currently under an effort to re-license
> (https://github.com/zeromq/jeromq/issues/327).
> [ToreeInstall] Creating kernel Scala
> [ToreeInstall] Removing existing kernelspec in 
> /usr/local/share/jupyter/kernels/apache_toree_scala
> [ToreeInstall] Installed kernelspec apache_toree_scala in 
> /usr/local/share/jupyter/kernels/apache_toree_scala
> Traceback (most recent call last):
>   File "/usr/local/bin/jupyter-toree", line 11, in 
> sys.exit(main())
>   File "/usr/local/lib/python3.5/dist-packages/toree/toreeapp.py", line 167, 
> in main
> ToreeApp.launch_instance()
>   File 
> "/usr/local/lib/python3.5/dist-packages/traitlets/config/application.py", 
> line 653, in launch_instance
> app.start()
>   File "/usr/local/lib/python3.5/dist-packages/toree/toreeapp.py", line 164, 
> in start
> return self.subapp.start()
>   File "/usr/local/lib/python3.5/dist-packages/toree/toreeapp.py", line 133, 
> in start
> self.create_kernel_json(install_dir, interpreter)
>   File "/usr/local/lib/python3.5/dist-packages/toree/toreeapp.py", line 90, 
> in create_kernel_json
> python_lib_contents = listdir("{0}/python/lib".format(self.spark_home))
> FileNotFoundError: [Errno 2] No such file or directory: 
> '/usr/local/spark/python/lib'
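
The failing lookup quoted above boils down to an environment read with a 
hard-coded fallback. A minimal Python sketch (not the actual toreeapp.py 
source; the sudo behaviour noted in the comments is my assumption, not 
something stated in the report):

```python
import os

# Minimal sketch of the lookup quoted above: read SPARK_HOME from the
# environment, falling back to a hard-coded default.
DEFAULT_SPARK_HOME = "/usr/local/spark"

def spark_home():
    return os.getenv("SPARK_HOME", DEFAULT_SPARK_HOME)

# Likely culprit: `sudo jupyter toree install` runs under root's
# environment, which normally does not inherit a SPARK_HOME exported in
# the invoking user's .bashrc, so the fallback path is used.
```

If that is the cause, running the install with `sudo -E` (to preserve the 
environment), or pointing the installer at the Spark directory explicitly, 
would sidestep the problem without code changes.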



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (TOREE-372) stream corruption caused by big-endian and little-endian

2017-02-06 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15854877#comment-15854877
 ] 

Jakob Odersky commented on TOREE-372:
-

Spark requires its driver app (SparkContext) and all executors to run on 
machines with the same endianness. In the described scenario, Toree acts as the 
driver app and therefore cannot communicate with Spark executors running in an 
environment of different endianness.

There is currently no fix available; however, you may be able to work around 
the issue by running Toree and Jupyter in the same environment as your Spark 
cluster, and proxying connections with something like 
[nb2kg|https://github.com/jupyter/kernel_gateway_demos/tree/master/nb2kg].
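
To illustrate the failure mode, a minimal Python sketch (unrelated to Toree's 
actual wire protocol) showing how the same integer, serialized under two byte 
orders, is misread on the other side:

```python
import struct

value = 1025

big = struct.pack(">i", value)     # big-endian bytes, as a z/OS host writes
little = struct.pack("<i", value)  # little-endian bytes, as an x86 host writes

print(big.hex())     # 00000401
print(little.hex())  # 01040000

# Reading big-endian bytes under a little-endian assumption yields garbage:
misread = struct.unpack("<i", big)[0]
print(misread)  # 17039360, not 1025
```

Every multi-byte value crossing the driver/executor boundary is garbled this 
way, which matches the "unreadable output" described in the report.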

> stream corruption caused by big-endian and little-endian
> ---
>
> Key: TOREE-372
> URL: https://issues.apache.org/jira/browse/TOREE-372
> Project: TOREE
>  Issue Type: Bug
>Reporter: Wang enzhong
>Priority: Critical
>
> We currently run Spark on z/OS, which is a big-endian platform, and 
> jupyter+toree on x86, which is a little-endian platform. The output 
> from Spark is unreadable due to the different byte order. 
> If we use Spark on z/OS and jupyter+toree on another big-endian platform, 
> there is no such error. 
> I've done some investigation and it seems Toree leverages Akka's ByteString, 
> which handles endianness, so we don't know why Toree does not work in our 
> case. 
> Please help to look into the problem. Due to the tight project schedule, we 
> would much appreciate advice on how to fix or avoid the problem if changing 
> the code will take some time. Many thanks in advance. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (TOREE-365) Certain interpreter evaluations do not return result strings

2017-01-27 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky updated TOREE-365:

Description: 
The scala interpreter currently only returns results for expressions. Import 
statements and declarations will not show up as results in a notebook (although 
they are evaluated internally).

This behaviour is related to the 
[ScalaInterpreter#truncateResult|https://github.com/apache/incubator-toree/blob/master/scala-interpreter/src/main/scala/org/apache/toree/kernel/interpreter/scala/ScalaInterpreter.scala#L165-L187]
 function. This function runs the result string of a REPL line through a regex, 
in order to remove the "resX:" part. The function returns the empty string in 
case the line does not start with "resX:", therefore returning an empty string 
for import statements and other declarations. This can have several subtle side 
effects, such as TOREE-340, or a toree client never completing the "onResult" 
callback.

A quick fix to this issue is to return the result string as-is if it does not 
start with "resX".
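
The behaviour can be sketched with a hypothetical Python re-implementation 
(the pattern is illustrative, not the exact regex in ScalaInterpreter):

```python
import re

# Illustrative pattern, not Toree's exact regex: strip a leading
# "resX: Type = " prefix from a REPL result line.
RESULT_RE = re.compile(r"^res\d+: .* = (.*)$", re.DOTALL)

def truncate_result(line):
    match = RESULT_RE.match(line)
    if match:
        return match.group(1)
    # The current behaviour returns "" here; the proposed fix returns
    # the line unchanged, so imports and declarations keep their output.
    return line

print(truncate_result("res0: Int = 3"))          # 3
print(truncate_result("import scala.util.Try"))  # import scala.util.Try
```

Note that a greedy `.*` before `" = "` also explains the TOREE-340 side 
effect: for a line like `res1: String = test1 = test2`, the match consumes up 
to the last `" = "` and only `test2` survives.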

  was:
The scala interpreter currently only returns results for expressions. Import 
statements and declarations will not show up as results in a notebook (although 
they are evaluated internally).

This behaviour is related to the 
[ScalaInterpreter#truncateResult|https://github.com/apache/incubator-toree/blob/master/scala-interpreter/src/main/scala/org/apache/toree/kernel/interpreter/scala/ScalaInterpreter.scala#L165-L187]
 function. This function runs the result string of a REPL line through a regex, 
in order to remove the "resX:" part. The function returns the empty string in 
case the line does not start with "resX:", therefore returning an empty string 
for import statements and other declarations. This can have several subtle side 
effects, such as TOREE-340, or a toree client never completing the "onResult" 
callback.

A quick fix to this issue is to return the result string as-is if it does not 
start with "resX".


> Certain interpreter evaluations do not return result strings
> 
>
> Key: TOREE-365
> URL: https://issues.apache.org/jira/browse/TOREE-365
> Project: TOREE
>  Issue Type: Bug
>Reporter: Jakob Odersky
>
> The scala interpreter currently only returns results for expressions. Import 
> statements and declarations will not show up as results in a notebook 
> (although they are evaluated internally).
> This behaviour is related to the 
> [ScalaInterpreter#truncateResult|https://github.com/apache/incubator-toree/blob/master/scala-interpreter/src/main/scala/org/apache/toree/kernel/interpreter/scala/ScalaInterpreter.scala#L165-L187]
>  function. This function runs the result string of a REPL line through a 
> regex, in order to remove the "resX:" part. The function returns the empty 
> string in case the line does not start with "resX:", therefore returning an 
> empty string for import statements and other declarations. This can have 
> several subtle side effects, such as TOREE-340, or a toree client never 
> completing the "onResult" callback.
> A quick fix to this issue is to return the result string as-is if it does not 
> start with "resX".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TOREE-365) Certain interpreter evaluations do not return result strings

2017-01-27 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky updated TOREE-365:

Description: 
The scala interpreter currently only returns results for expressions. Import 
statements and declarations will not show up as results in a notebook (although 
they are evaluated internally).

This behaviour is related to the 
[ScalaInterpreter#truncateResult|https://github.com/apache/incubator-toree/blob/master/scala-interpreter/src/main/scala/org/apache/toree/kernel/interpreter/scala/ScalaInterpreter.scala#L165-L187]
 function. This function runs the result string of a REPL line through a regex, 
in order to remove the "resX:" part. The function returns the empty string in 
case the line does not start with "resX:", therefore returning an empty string 
for import statements and other declarations. This can have several subtle side 
effects, such as TOREE-340, or a toree client never completing the "onResult" 
callback.

A quick fix to this issue is to return the result string as-is if it does not 
start with "resX".

  was:
The scala interpreter currently only returns results for expressions. Import 
statements and declarations will not show up as results in a notebook (although 
they are evaluated internally).

This behaviour is related to the 
[ScalaInterpreter#truncateResult|https://github.com/apache/incubator-toree/blob/master/scala-interpreter/src/main/scala/org/apache/toree/kernel/interpreter/scala/ScalaInterpreter.scala#L165-L187]
 function. This function runs the result string of a REPL line through a regex, 
in order to remove the "resX:" part. The function returns the empty string in 
case the line does not start with "resX:", therefore returning an empty string 
for import statements and other declarations. This can have several subtle side 
effects, such as TOREE-340, or a toree client never completing the "onResult" 
callback.

A quick fix to this issue is to return the result string as-is if it does not 
start with "resX". This leads me to a more general question: why is the resX 
prefix stripped in the first place? I can see that, in the context of a 
Jupyter notebook, the res number may not necessarily match the cell number; 
however, since res variables are still accessible and may actually be useful 
to a user, I wonder whether stripping them is warranted at all.


> Certain interpreter evaluations do not return result strings
> 
>
> Key: TOREE-365
> URL: https://issues.apache.org/jira/browse/TOREE-365
> Project: TOREE
>  Issue Type: Bug
>Reporter: Jakob Odersky
>
> The scala interpreter currently only returns results for expressions. Import 
> statements and declarations will not show up as results in a notebook 
> (although they are evaluated internally).
> This behaviour is related to the 
> [ScalaInterpreter#truncateResult|https://github.com/apache/incubator-toree/blob/master/scala-interpreter/src/main/scala/org/apache/toree/kernel/interpreter/scala/ScalaInterpreter.scala#L165-L187]
>  function. This function runs the result string of a REPL line through a 
> regex, in order to remove the "resX:" part. The function returns the empty 
> string in case the line does not start with "resX:", therefore returning an 
> empty string for import statements and other declarations. This can have 
> several subtle side effects, such as TOREE-340, or a toree client never 
> completing the "onResult" callback.
> A quick fix to this issue is to return the result string as-is if it does not 
> start with "resX".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TOREE-365) Certain interpreter evaluations do not return result strings

2017-01-27 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky updated TOREE-365:

Description: 
The scala interpreter currently only returns results for expressions. Import 
statements and declarations will not show up as results in a notebook (although 
they are evaluated internally).

This behaviour is related to the 
[ScalaInterpreter#truncateResult|https://github.com/apache/incubator-toree/blob/master/scala-interpreter/src/main/scala/org/apache/toree/kernel/interpreter/scala/ScalaInterpreter.scala#L165-L187]
 function. This function runs the result string of a REPL line through a regex, 
in order to remove the "resX:" part. The function returns the empty string in 
case the line does not start with "resX:", therefore returning an empty string 
for import statements and other declarations. This can have several subtle side 
effects, such as TOREE-340, or a toree client never completing the "onResult" 
callback.

A quick fix to this issue is to return the result string as-is if it does not 
start with "resX". This leads me to a more general question: why is the resX 
prefix stripped in the first place? I can see that, in the context of a 
Jupyter notebook, the res number may not necessarily match the cell number; 
however, since res variables are still accessible and may actually be useful 
to a user, I wonder whether stripping them is warranted at all.

  was:
The scala interpreter currently only returns results for expressions. Import 
statements and declarations will not show up as results in a notebook (although 
they are evaluated internally).

This behaviour is related to the 
[ScalaInterpreter#truncateResult|https://github.com/apache/incubator-toree/blob/master/scala-interpreter/src/main/scala/org/apache/toree/kernel/interpreter/scala/ScalaInterpreter.scala#L165-L187]
 function. This function runs the result string of a REPL line through a regex, 
in order to remove the "resX:" part. The function returns the empty string in 
case the line does not start with "resX:", therefore returning an empty string 
for import statements and other declarations. This can have several subtle side 
effects, such as TOREE-340, or a toree client never completing the "onResult" 
callback.

A quick fix to this issue is to return the result string as-is if it does not 
start with "resX".


> Certain interpreter evaluations do not return result strings
> 
>
> Key: TOREE-365
> URL: https://issues.apache.org/jira/browse/TOREE-365
> Project: TOREE
>  Issue Type: Bug
>Reporter: Jakob Odersky
>
> The scala interpreter currently only returns results for expressions. Import 
> statements and declarations will not show up as results in a notebook 
> (although they are evaluated internally).
> This behaviour is related to the 
> [ScalaInterpreter#truncateResult|https://github.com/apache/incubator-toree/blob/master/scala-interpreter/src/main/scala/org/apache/toree/kernel/interpreter/scala/ScalaInterpreter.scala#L165-L187]
>  function. This function runs the result string of a REPL line through a 
> regex, in order to remove the "resX:" part. The function returns the empty 
> string in case the line does not start with "resX:", therefore returning an 
> empty string for import statements and other declarations. This can have 
> several subtle side effects, such as TOREE-340, or a toree client never 
> completing the "onResult" callback.
> A quick fix to this issue is to return the result string as-is if it does not 
> start with "resX". This leads me to a more general question: why is the resX 
> prefix stripped in the first place? I can see that, in the context of a 
> Jupyter notebook, the res number may not necessarily match the cell number; 
> however, since res variables are still accessible and may actually be useful 
> to a user, I wonder whether stripping them is warranted at all.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (TOREE-365) Certain interpreter evaluations do not return result strings

2017-01-27 Thread Jakob Odersky (JIRA)
Jakob Odersky created TOREE-365:
---

 Summary: Certain interpreter evaluations do not return result 
strings
 Key: TOREE-365
 URL: https://issues.apache.org/jira/browse/TOREE-365
 Project: TOREE
  Issue Type: Bug
Reporter: Jakob Odersky


The scala interpreter currently only returns results for expressions. Import 
statements and declarations will not show up as results in a notebook (although 
they are evaluated internally).

This behaviour is related to the 
[ScalaInterpreter#truncateResult|https://github.com/apache/incubator-toree/blob/master/scala-interpreter/src/main/scala/org/apache/toree/kernel/interpreter/scala/ScalaInterpreter.scala#L165-L187]
 function. This function runs the result string of a REPL line through a regex, 
in order to remove the "resX:" part. The function returns the empty string in 
case the line does not start with "resX:", therefore returning an empty string 
for import statements and other declarations. This can have several subtle side 
effects, such as TOREE-340, or a toree client never completing the "onResult" 
callback.

A quick fix to this issue is to return the result string as-is if it does not 
start with "resX".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TOREE-333) make sbt-publishM2

2017-01-27 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky updated TOREE-333:

Priority: Minor  (was: Major)

> make sbt-publishM2
> --
>
> Key: TOREE-333
> URL: https://issues.apache.org/jira/browse/TOREE-333
> Project: TOREE
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: Brian Burns
>Priority: Minor
>  Labels: build
> Fix For: 0.1.0
>
>
> make sbt-publish* is throwing a ZipException.  Here is the stack trace.
> java.util.zip.ZipException: duplicate entry: LICENSE
>   at java.util.zip.ZipOutputStream.putNextEntry(ZipOutputStream.java:232)
>   at java.util.jar.JarOutputStream.putNextEntry(JarOutputStream.java:109)
>   at sbt.IO$.sbt$IO$$addFileEntry$1(IO.scala:445)
>   at sbt.IO$$anonfun$sbt$IO$$writeZip$2.apply(IO.scala:454)
>   at sbt.IO$$anonfun$sbt$IO$$writeZip$2.apply(IO.scala:454)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>   at sbt.IO$.sbt$IO$$writeZip(IO.scala:454)
>   at sbt.IO$$anonfun$archive$1.apply(IO.scala:410)
>   at sbt.IO$$anonfun$archive$1.apply(IO.scala:408)
>   at sbt.IO$$anonfun$withZipOutput$1.apply(IO.scala:498)
>   at sbt.IO$$anonfun$withZipOutput$1.apply(IO.scala:485)
>   at sbt.Using.apply(Using.scala:24)
>   at sbt.IO$.withZipOutput(IO.scala:485)
>   at sbt.IO$.archive(IO.scala:408)
>   at sbt.IO$.jar(IO.scala:392)
>   at sbt.Package$.makeJar(Package.scala:97)
>   at sbt.Package$$anonfun$3$$anonfun$apply$3.apply(Package.scala:64)
>   at sbt.Package$$anonfun$3$$anonfun$apply$3.apply(Package.scala:62)
>   at sbt.Tracked$$anonfun$outputChanged$1.apply(Tracked.scala:84)
>   at sbt.Tracked$$anonfun$outputChanged$1.apply(Tracked.scala:79)
>   at sbt.Package$.apply(Package.scala:72)
>   at sbt.Defaults$$anonfun$packageTask$1.apply(Defaults.scala:685)
>   at sbt.Defaults$$anonfun$packageTask$1.apply(Defaults.scala:684)
>   at scala.Function2$$anonfun$tupled$1.apply(Function2.scala:54)
>   at scala.Function2$$anonfun$tupled$1.apply(Function2.scala:53)
>   at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
>   at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
>   at sbt.std.Transform$$anon$4.work(System.scala:63)
>   at 
> sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
>   at 
> sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
>   at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
>   at sbt.Execute.work(Execute.scala:235)
>   at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
>   at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
>   at 
> sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
>   at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> [error] (toree/compile:packageBin) java.util.zip.ZipException: duplicate 
> entry: LICENSE



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TOREE-340) Output of the form "a = b" returns "b"

2017-01-27 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky updated TOREE-340:

Priority: Major  (was: Minor)

> Output of the form "a = b" returns "b"
> --
>
> Key: TOREE-340
> URL: https://issues.apache.org/jira/browse/TOREE-340
> Project: TOREE
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: Keith Kraus
>
> To reproduce the issue, run the following in a cell
> {noformat}"test1 = test2"{noformat}
> output will be
> {noformat}test2{noformat}
> \\
> \\
> This also occurs when setting a val or var
> {noformat}val testvar = "test1 = test2"{noformat}
> The value of {{testvar}} returns
> {noformat}test2{noformat}
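
This is consistent with the result-truncation behaviour described in 
TOREE-365: the REPL echoes the cell as `res0: String = test1 = test2`, and a 
truncation pattern whose greedy `.*` matches up to the last `" = "` keeps only 
the tail. A hypothetical Python sketch (not Toree's exact regex):

```python
import re

# Hypothetical pattern that reproduces the symptom: the greedy ".*"
# consumes everything up to the LAST " = ", so only the final token
# survives truncation.
line = "res0: String = test1 = test2"
match = re.match(r"^res\d+: .* = (.*)$", line)
print(match.group(1))  # test2
```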



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (TOREE-262) Resolve LGPL Dependency in project

2017-01-27 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843653#comment-15843653
 ] 

Jakob Odersky edited comment on TOREE-262 at 1/27/17 11:35 PM:
---

The JeroMQ project is now released under the Mozilla Public License. 
[~lbustelo], does this resolve the issue with depending on it?


was (Author: jodersky):
The JeroMQ project is now released under the Mozilla Public License

> Resolve LGPL Dependency in project
> --
>
> Key: TOREE-262
> URL: https://issues.apache.org/jira/browse/TOREE-262
> Project: TOREE
>  Issue Type: Bug
>Reporter: Gino Bustelo
>Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TOREE-262) Resolve LGPL Dependency in project

2017-01-27 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843653#comment-15843653
 ] 

Jakob Odersky commented on TOREE-262:
-

The JeroMQ project is now released under the Mozilla Public License

> Resolve LGPL Dependency in project
> --
>
> Key: TOREE-262
> URL: https://issues.apache.org/jira/browse/TOREE-262
> Project: TOREE
>  Issue Type: Bug
>Reporter: Gino Bustelo
>Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (TOREE-322) Building Error with github snapshot

2017-01-27 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky resolved TOREE-322.
-
Resolution: Fixed

> Building Error with github snapshot
> ---
>
> Key: TOREE-322
> URL: https://issues.apache.org/jira/browse/TOREE-322
> Project: TOREE
>  Issue Type: Bug
>Reporter: Todd Leo
>  Labels: build
>
> I downloaded the latest version of Toree from github, then used make release, 
> only to find the following error:
> [info] Compiling 11 Scala sources and 6 Java sources to 
> /home/todd/utils/incubator-toree/plugins/target/scala-2.10/classes...
> [error] error while loading , error in opening zip file
> [info] Compiling 1 Scala source to 
> /home/todd/utils/incubator-toree/protocol/target/scala-2.10/classes...
> [info] Fetching artifacts of 
> org.apache.toree.kernel:toree-client_2.10:0.1.0.dev9-incubating-SNAPSHOT
> [info] Fetched artifacts of 
> org.apache.toree.kernel:toree-client_2.10:0.1.0.dev9-incubating-SNAPSHOT
> scala.reflect.internal.MissingRequirementError: object scala.runtime in 
> compiler mirror not found.
> at 
> scala.reflect.internal.MissingRequirementError$.signal(MissingRequirementError.scala:16)
> at 
> scala.reflect.internal.MissingRequirementError$.notFound(MissingRequirementError.scala:17)
> at 
> scala.reflect.internal.Mirrors$RootsBase$$anonfun$getModuleOrClass$3.apply(Mirrors.scala:49)
> at 
> scala.reflect.internal.Mirrors$RootsBase$$anonfun$getModuleOrClass$3.apply(Mirrors.scala:49)
> at scala.reflect.internal.Symbols$Symbol.orElse(Symbols.scala:2229)
> at 
> scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:48)
> at 
> scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:40)
> at 
> scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:61)
> ...
> [error] (toree-plugins/compile:compileIncremental) 
> scala.reflect.internal.MissingRequirementError: object scala.runtime in 
> compiler mirror not found.
> [error] Total time: 11 s, completed Jun 13, 2016 4:39:29 PM
> Makefile:103: recipe for target 
> 'target/scala-2.10/toree-assembly-0.1.0.dev9-incubating-SNAPSHOT.jar' failed
> make: *** 
> [target/scala-2.10/toree-assembly-0.1.0.dev9-incubating-SNAPSHOT.jar] Error 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (TOREE-322) Building Error with github snapshot

2017-01-27 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843649#comment-15843649
 ] 

Jakob Odersky commented on TOREE-322:
-

The latest master works; I assume it was something temporary.

> Building Error with github snapshot
> ---
>
> Key: TOREE-322
> URL: https://issues.apache.org/jira/browse/TOREE-322
> Project: TOREE
>  Issue Type: Bug
>Reporter: Todd Leo
>  Labels: build
>
> I downloaded the latest version of Toree from github, then used make release, 
> only to find the following error:
> [info] Compiling 11 Scala sources and 6 Java sources to 
> /home/todd/utils/incubator-toree/plugins/target/scala-2.10/classes...
> [error] error while loading , error in opening zip file
> [info] Compiling 1 Scala source to 
> /home/todd/utils/incubator-toree/protocol/target/scala-2.10/classes...
> [info] Fetching artifacts of 
> org.apache.toree.kernel:toree-client_2.10:0.1.0.dev9-incubating-SNAPSHOT
> [info] Fetched artifacts of 
> org.apache.toree.kernel:toree-client_2.10:0.1.0.dev9-incubating-SNAPSHOT
> scala.reflect.internal.MissingRequirementError: object scala.runtime in 
> compiler mirror not found.
> at 
> scala.reflect.internal.MissingRequirementError$.signal(MissingRequirementError.scala:16)
> at 
> scala.reflect.internal.MissingRequirementError$.notFound(MissingRequirementError.scala:17)
> at 
> scala.reflect.internal.Mirrors$RootsBase$$anonfun$getModuleOrClass$3.apply(Mirrors.scala:49)
> at 
> scala.reflect.internal.Mirrors$RootsBase$$anonfun$getModuleOrClass$3.apply(Mirrors.scala:49)
> at scala.reflect.internal.Symbols$Symbol.orElse(Symbols.scala:2229)
> at 
> scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:48)
> at 
> scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:40)
> at 
> scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:61)
> ...
> [error] (toree-plugins/compile:compileIncremental) 
> scala.reflect.internal.MissingRequirementError: object scala.runtime in 
> compiler mirror not found.
> [error] Total time: 11 s, completed Jun 13, 2016 4:39:29 PM
> Makefile:103: recipe for target 
> 'target/scala-2.10/toree-assembly-0.1.0.dev9-incubating-SNAPSHOT.jar' failed
> make: *** 
> [target/scala-2.10/toree-assembly-0.1.0.dev9-incubating-SNAPSHOT.jar] Error 1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (TOREE-333) make sbt-publishM2

2017-01-27 Thread Jakob Odersky (JIRA)

 [ 
https://issues.apache.org/jira/browse/TOREE-333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Odersky updated TOREE-333:

Priority: Major  (was: Critical)

> make sbt-publishM2
> --
>
> Key: TOREE-333
> URL: https://issues.apache.org/jira/browse/TOREE-333
> Project: TOREE
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: Brian Burns
>  Labels: build
> Fix For: 0.1.0
>
>
> make sbt-publish* is throwing a ZipException.  Here is the stack trace.
> java.util.zip.ZipException: duplicate entry: LICENSE
>   at java.util.zip.ZipOutputStream.putNextEntry(ZipOutputStream.java:232)
>   at java.util.jar.JarOutputStream.putNextEntry(JarOutputStream.java:109)
>   at sbt.IO$.sbt$IO$$addFileEntry$1(IO.scala:445)
>   at sbt.IO$$anonfun$sbt$IO$$writeZip$2.apply(IO.scala:454)
>   at sbt.IO$$anonfun$sbt$IO$$writeZip$2.apply(IO.scala:454)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>   at sbt.IO$.sbt$IO$$writeZip(IO.scala:454)
>   at sbt.IO$$anonfun$archive$1.apply(IO.scala:410)
>   at sbt.IO$$anonfun$archive$1.apply(IO.scala:408)
>   at sbt.IO$$anonfun$withZipOutput$1.apply(IO.scala:498)
>   at sbt.IO$$anonfun$withZipOutput$1.apply(IO.scala:485)
>   at sbt.Using.apply(Using.scala:24)
>   at sbt.IO$.withZipOutput(IO.scala:485)
>   at sbt.IO$.archive(IO.scala:408)
>   at sbt.IO$.jar(IO.scala:392)
>   at sbt.Package$.makeJar(Package.scala:97)
>   at sbt.Package$$anonfun$3$$anonfun$apply$3.apply(Package.scala:64)
>   at sbt.Package$$anonfun$3$$anonfun$apply$3.apply(Package.scala:62)
>   at sbt.Tracked$$anonfun$outputChanged$1.apply(Tracked.scala:84)
>   at sbt.Tracked$$anonfun$outputChanged$1.apply(Tracked.scala:79)
>   at sbt.Package$.apply(Package.scala:72)
>   at sbt.Defaults$$anonfun$packageTask$1.apply(Defaults.scala:685)
>   at sbt.Defaults$$anonfun$packageTask$1.apply(Defaults.scala:684)
>   at scala.Function2$$anonfun$tupled$1.apply(Function2.scala:54)
>   at scala.Function2$$anonfun$tupled$1.apply(Function2.scala:53)
>   at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
>   at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
>   at sbt.std.Transform$$anon$4.work(System.scala:63)
>   at 
> sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
>   at 
> sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
>   at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
>   at sbt.Execute.work(Execute.scala:235)
>   at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
>   at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
>   at 
> sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
>   at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> [error] (toree/compile:packageBin) java.util.zip.ZipException: duplicate 
> entry: LICENSE





[jira] [Commented] (TOREE-333) make sbt-publishM2

2017-01-27 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843642#comment-15843642
 ] 

Jakob Odersky commented on TOREE-333:
-

The good news is that, despite the error message, the publish actually works. I 
would still consider this an issue, although maybe not a critical one.
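The trace suggests two inputs to the jar each contribute a LICENSE entry. A small sketch (illustrative only, not Toree code) of how such a duplicate arises: Python's zipfile writes duplicate entry names with only a warning, which makes the resulting archive easy to inspect, whereas Java's ZipOutputStream, which sbt's packaging uses, rejects the second entry with exactly this ZipException:

```python
import io
import warnings
import zipfile

buf = io.BytesIO()
with warnings.catch_warnings():
    # Python's zipfile only warns about duplicate names; Java's
    # ZipOutputStream (behind sbt's packageBin) raises ZipException instead.
    warnings.simplefilter("ignore", UserWarning)
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("LICENSE", "copy from the project root")
        zf.writestr("LICENSE", "copy from a bundled dependency")

with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()

duplicates = sorted({n for n in names if names.count(n) > 1})
print(duplicates)  # expect ['LICENSE']
```

The usual fix on the sbt side is to de-duplicate the offending entry, e.g. with a merge strategy (as in sbt-assembly) or by excluding one of the sources that contributes LICENSE.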

> make sbt-publishM2
> --
>
> Key: TOREE-333
> URL: https://issues.apache.org/jira/browse/TOREE-333
> Project: TOREE
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: Brian Burns
>Priority: Critical
>  Labels: build
> Fix For: 0.1.0
>
>
> make sbt-publish* is throwing a ZipException.  Here is the stack trace.
> java.util.zip.ZipException: duplicate entry: LICENSE
> (full stack trace quoted in the [Updated] message above)
> [error] (toree/compile:packageBin) java.util.zip.ZipException: duplicate 
> entry: LICENSE





[jira] [Commented] (TOREE-362) How do I define ports specifically?

2017-01-27 Thread Jakob Odersky (JIRA)

[ 
https://issues.apache.org/jira/browse/TOREE-362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15843631#comment-15843631
 ] 

Jakob Odersky commented on TOREE-362:
-

I'm not sure I understand your question. The ports always fall back to the same 
default values if they are not overridden; there shouldn't be any randomness involved.

Custom ports can be specified by exporting TOREE_OPTS with the command-line 
options that set them, e.g.

{code}
export TOREE_OPTS="--stdin-port=10001 --control-port=10002 --hb-port=10003 --shell-port=10004 --iopub-port=10005"
{code}
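To check before launch that the chosen ports are actually usable on the machine (a sketch, not part of Toree; the port numbers just mirror the TOREE_OPTS example above), one can try binding each of them:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if the TCP port can be bound, i.e. nothing is using it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# The five ports from the TOREE_OPTS example above.
ports = [10001, 10002, 10003, 10004, 10005]
print({p: port_is_free(p) for p in ports})
```

Note this only detects local conflicts; whether a corporate proxy or firewall permits a given port is a separate question.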

> How do I define ports specifically?
> ---
>
> Key: TOREE-362
> URL: https://issues.apache.org/jira/browse/TOREE-362
> Project: TOREE
>  Issue Type: Bug
>Reporter: Richard Quan
>Priority: Blocker
>
> Hello,
> I am using https://github.com/apache/incubator-toree to connect to a Spark 
> instance.  For some reason, on my company's proxy, I can sometimes connect 
> and more often cannot.  It seems to depend on the port numbers i.e. the 533xx 
> numbers here: 
> 17/01/23 14:56:55 [INFO] o.a.t.Main$$anon$1 - Connection Profile: {
>   "stdin_port" : 53327,
>   "control_port" : 53328,
>   "hb_port" : 53329,
>   "shell_port" : 53325,
>   "iopub_port" : 53326,
>   "ip" : "127.0.0.1",
>   "transport" : "tcp",
>   "signature_scheme" : "hmac-sha256",
>   "key" : "0b022f16-eff6-4cd3-8056-3a81cd5f289a"
> }
> I am wondering if it is possible to specify the port numbers as it almost 
> seems random, using the run.sh file.  Perhaps the ${TOREE_OPTS}?


