Github user rawkintrevo commented on the issue:
https://github.com/apache/zeppelin/pull/928
Currently failing to download deps.
```
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-remote-resources-plugin:1.4:process (default) on
project zeppelin: Error
```
Github user rawkintrevo commented on the issue:
https://github.com/apache/zeppelin/pull/928
All failures are due to Mahout not supporting Scala 2.11 yet. Adding logic to
detect this, similar to the existing detection of Spark < 1.5, and adding it to the testing suite.
@bzz, pyspark issues seemed to
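The runtime version check described above can be sketched in plain Scala. This is a hypothetical illustration, not the PR's actual code; the binary-version split and the "Mahout only on 2.10" assumption are mine:

```scala
// Sketch: detect the running Scala version so Mahout support can be
// skipped on 2.11, where it is not yet available.
// scala.util.Properties.versionNumberString is the standard way to read
// the library version at runtime (e.g. "2.10.6" or "2.11.8").
import scala.util.Properties

// Reduce the full version string to the binary version, e.g. "2.10".
def scalaBinaryVersion: String =
  Properties.versionNumberString.split('.').take(2).mkString(".")

// Assumption for illustration: Mahout is only usable on Scala 2.10.
def mahoutSupported: Boolean = scalaBinaryVersion == "2.10"
```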
Github user bzz commented on the issue:
https://github.com/apache/zeppelin/pull/928
Got it, thank you so much for digging into it!
Let me try to look into it more this week
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub
Github user rawkintrevo commented on the issue:
https://github.com/apache/zeppelin/pull/928
@bzz, I can't recreate the build failure.
I can say
- Spark, pySpark, and Mahout notebooks and paragraphs run as expected.
- Spark and pySpark tests pass. Also, integration
Github user rawkintrevo commented on the issue:
https://github.com/apache/zeppelin/pull/928
@bzz and @Leemoonsoo
A big part of the refactor was introducing no new dependencies; instead,
jars are loaded from Maven or MAHOUT_HOME at interpreter start-up via the
dependency resolver.
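The MAHOUT_HOME side of that approach could look roughly like the sketch below. It is not the PR's implementation; the `lib` directory layout is an assumption for illustration:

```scala
// Hypothetical sketch: enumerate Mahout jars under MAHOUT_HOME so they
// can be handed to the dependency resolver at interpreter start-up.
import java.io.File

// Collect absolute paths of all *.jar files under <mahoutHome>/lib.
// Returns an empty Seq if the directory is missing (listFiles -> null).
def mahoutJars(mahoutHome: String): Seq[String] = {
  val lib = new File(mahoutHome, "lib")
  Option(lib.listFiles())
    .getOrElse(Array.empty[File])
    .filter(_.getName.endsWith(".jar"))
    .map(_.getAbsolutePath)
    .toSeq
}
```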
Github user rawkintrevo commented on the issue:
https://github.com/apache/zeppelin/pull/928
@bzz not quite done. A little more testing and I realized jars aren't
being properly loaded when Spark is in cluster mode. Think you could take a
peek and give me a hint why that
Github user bzz commented on the issue:
https://github.com/apache/zeppelin/pull/928
Great work, @rawkintrevo ! It looks like a rebase on the latest master is
needed now
Github user rawkintrevo commented on the issue:
https://github.com/apache/zeppelin/pull/928
Per this:
http://stackoverflow.com/questions/32498891/spark-read-and-write-to-parquet-leads-to-outofmemoryerror-java-heap-space
and this:
Github user rawkintrevo commented on the issue:
https://github.com/apache/zeppelin/pull/928
As I am playing with this, things seem to stop/start working at random...
The Thrift Server error in the Zeppelin context, with Java heap space errors
related to the Kryo serializer in the
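For heap-space errors tied to the Kryo serializer, the Stack Overflow thread linked above points at Spark's serializer settings. A hedged example of the relevant `spark-defaults.conf` keys (the values are illustrative, not a recommendation from this PR):

```
# Illustrative Spark settings for Kryo-related heap errors.
spark.serializer                 org.apache.spark.serializer.KryoSerializer
spark.kryoserializer.buffer.max  512m
spark.driver.memory              4g
```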
Github user rawkintrevo commented on the issue:
https://github.com/apache/zeppelin/pull/928
UPDATE:
Sorry for the quick one-two punch. But the above error only occurs in Spark
cluster mode, not in Spark local mode, leading me to believe the jars aren't
getting loaded.
---
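One quick way to confirm the jars-not-loaded theory is to probe the classloader from a paragraph in both local and cluster mode. This diagnostic is my own sketch, and the Mahout class name in the usage note is an assumption:

```scala
// Hypothetical diagnostic: if the Mahout jars are not shipped to the
// cluster, their classes will not be loadable there. Probe by name.
import scala.util.Try

// True if the named class can be loaded on the current JVM's classpath.
def classAvailable(name: String): Boolean =
  Try(Class.forName(name)).isSuccess
```

Running something like `classAvailable("org.apache.mahout.math.Matrix")` (illustrative class name) in local vs. cluster mode would show whether the executors ever see the jars.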
Github user rawkintrevo commented on the issue:
https://github.com/apache/zeppelin/pull/928
Consider the code
```scala
%mahout
val mxRnd = Matrices.symmetricUniformView(5000, 2, 1234)
val drmRand = drmParallelize(mxRnd)
val drmSin = drmRand.mapBlock()
```
Github user dlyubimov commented on the issue:
https://github.com/apache/zeppelin/pull/928
what's the message?
`DRMLike.collect()`, eventually, is a translation to `RDD.collect()`
Github user rawkintrevo commented on the issue:
https://github.com/apache/zeppelin/pull/928
This appears to be working, but there is a bug when doing OLS regarding the
Thrift server. It is the same error message one normally gets when trying to use
an incompatible version of Spark. Is
Github user rawkintrevo commented on the issue:
https://github.com/apache/zeppelin/pull/928
If someone could help me out I'd appreciate it...
First of all, this works fine as expected in the notebooks (either way).
In MahoutSparkInterpreter.java line 89, there is a