So matplotlib isn't very well supported at the moment. Toree's pyspark
kernel is not ipykernel -- so the magics that depend on ipykernel (and
its matplotlib integration) sadly won't work.
You can do it by calling the kernel.display methods afaik.
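Since the kernel's matplotlib integration isn't available, one workaround is to render the figure to raw PNG bytes yourself and hand those to a display method. A minimal sketch of the rendering half only; the `kernel.display` call itself is omitted, since its exact signature isn't shown in this thread:

```python
# Render a matplotlib figure to raw PNG bytes with no notebook
# integration. Passing the bytes on to one of Toree's kernel.display
# methods is left out -- its exact signature isn't covered here.
import io

import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 4, 9])

buf = io.BytesIO()
fig.savefig(buf, format="png")
png_bytes = buf.getvalue()
# png_bytes starts with the PNG magic header b"\x89PNG"
```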
On Wed, Nov 2, 2016 at 15:22 wrote:
> Hi,
>
> I just notic
Dear Toree podling,
Please don't forget to post your board report.
John
Hi,
I just noticed that simple plots with matplotlib do not work in Toree. I get
this error in the UI:
Magic pyspark failed to execute with error:
null was reset!
In the logs I see:
16/11/02 14:12:15 ERROR PySparkProcessHandler: null process failed:
org.apache.commons.exec.ExecuteException:
Chip, you’re right, this did the trick:
%%pyspark
print kernel.data().get("x")
Thanks so much for the help!
On 11/2/16, 1:26 PM, "Chip Senkbeil" wrote:
>I just did that using the RC3 version of Toree for the 0.1.x branch. If
>you're on master, maybe it doesn't require _jvm_kernel. I ju
I just did that using the RC3 version of Toree for the 0.1.x branch. If
you're on master, maybe it doesn't require _jvm_kernel. I just saw that was
needed for our RC3.
On Wed, Nov 2, 2016 at 12:12 PM wrote:
> That is not working for me in the release I have 0.1.0…
>
> %%pyspark
> print dir(kerne
That is not working for me in the release I have 0.1.0…
%%pyspark
print dir(kernel._jvm_kernel)
['__call__', '__class__', '__delattr__', '__dict__', '__doc__',
'__format__', '__getattribute__', '__hash__', '__init__', '__module__',
'__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr
+1.
- Manually installed using the pip package; verified that the install
worked and the Scala option was available in the notebook.
- Ran Scala code on the notebook.
- Ran some simple parallelization, reduces, etc. using a local Spark
cluster.
- Verified that the PySpark interpreter worked correctly.
When running Scala as the default interpreter in a notebook:
Cell 1:
val x = 3
kernel.data.put("x", x)
Cell 2:
%%pyspark
x = kernel._jvm_kernel.data().get("x")
kernel._jvm_kernel.data().put("x", x + 1)
Cell 3:
println(kernel.data.get("x"))
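The three cells above rely on Toree's shared `kernel.data` map. Outside a notebook, the same round-trip can be sketched with a plain dict standing in for the shared map; the cell functions below are hypothetical stand-ins, not Toree API:

```python
# Simulate the shared kernel.data map with a plain dict.
# In Toree the map is a JVM object visible to every interpreter;
# here each "cell" is just a function touching the same dict.
shared_data = {}

def scala_cell_1(data):
    # Scala Cell 1: kernel.data.put("x", 3)
    data["x"] = 3

def pyspark_cell_2(data):
    # PySpark Cell 2: read "x", increment it, write it back
    data["x"] = data["x"] + 1

def scala_cell_3(data):
    # Scala Cell 3: println(kernel.data.get("x"))
    return data["x"]

scala_cell_1(shared_data)
pyspark_cell_2(shared_data)
print(scala_cell_3(shared_data))  # prints 4
```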
On Wed, Nov 2, 2016 at 11:27 AM wrote:
> Thanks Chip, n
Thanks Chip, now, I understand how to work with it from the JVM side. Any
chance you have a snippet of how to get a value from the map in python?
Ian Maloney
Platform Architect
Advanced Analytics
Internal: 828716
Office: (734) 623-8716
Mobile: (313) 910-9272
On 11/2/16, 11:39 AM, "Chip Sen
While it isn't supported (we don't test its use in this case), you can
store objects in a shared hashmap under the kernel object that is made
available in each interpreter. The map is exposed as `kernel.data`, but the
way you access and store data is different per language.
The signature of the da
Hi,
I’m working primarily with the default Scala/Spark interpreter. It works
great, except when I need to plot something. Is there a way I can take a Scala
object or Spark DataFrame I’ve created in a Scala cell and pass it off to a
pyspark cell for plotting?
This documentation issue might b
[
https://issues.apache.org/jira/browse/TOREE-349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629285#comment-15629285
]
Andrew Kerr commented on TOREE-349:
---
Using this execution context appears to work around
@hitesh Where would be a good place to put release instructions?
On Tue, Nov 1, 2016 at 6:21 PM Marius van Niekerk
wrote:
>
>
> On 2016-10-28 08:49 (-0400), Marius van Niekerk <
> marius.v.niek...@gmail.com> wrote:
> > +1
> > On Thu, Oct 27, 2016 at 10:35 Gino Bustelo wrote:
> >
> > > Please v