On Mon, Aug 4, 2014 at 1:01 PM, Anand Avati av...@gluster.org wrote:
On Sun, Aug 3, 2014 at 9:09 PM, Patrick Wendell pwend...@gmail.com
wrote:
Hey Anand,
Thanks for looking into this - it's great to see momentum towards Scala
2.11, and I'd love it if this landed in Spark 1.2.
For the
Hi Xiangrui,
Maintaining another file will be a pain later, so I deployed Spark 1.0.1
without MLlib, and my application jar bundles MLlib 1.1.0-SNAPSHOT
along with the code changes for quadratic optimization...
Later the plan is to patch the snapshot MLlib with the deployed stable
MLlib...
One thing I'd like to clarify is that we do not support running a newer
version of a Spark component on top of an older version of Spark core.
I don't remember any code change in MLlib that requires Spark v1.1, but
I might have missed some PRs. There were changes to CoGroup, which may be
relevant:
Ok...let me look into it a bit more, and most likely I will deploy Spark
v1.1 and then use the MLlib 1.1-SNAPSHOT jar with it, so that we follow your
guideline of not running a newer Spark component on an older version of Spark
core...
That should solve this issue unless it is related to Java
One related question: is the MLlib jar independent of the Hadoop version
(i.e., it doesn't use the Hadoop API directly)? Can I use an MLlib jar
compiled for one version of Hadoop with another version of Hadoop?
Sent from my Google Nexus 5
On Aug 6, 2014 8:29 AM, Debasish Das debasish.da...@gmail.com wrote:
Hi
I did not play with Hadoop settings...everything is compiled with
2.3.0-cdh5.0.2 for me...
I did try to bump the version number of HBase from 0.94 to 0.96 or 0.98, but
there was no profile for CDH in the pom...but that's unrelated to this!
On Wed, Aug 6, 2014 at 9:45 AM, DB Tsai
Hi,
I'm trying to get the Apache Spark trunk compiling in my Eclipse, but I can't
seem to get it going. In particular, I've tried sbt/sbt eclipse, but it doesn't
seem to create the Eclipse pieces for YARN and other projects. Doing mvn
eclipse:eclipse on yarn seems to fail, as does sbt/sbt
I refreshed my workspace.
I got the following error with this command:
mvn -Pyarn -Phive -Phadoop-2.4 -DskipTests install
[ERROR] bad symbolic reference. A signature in package.class refers to term
scalalogging
in package com.typesafe which is not available.
It may be completely missing from the
Hi Ted,
By refreshing do you mean you have done 'mvn clean'?
On Wed, Aug 6, 2014 at 1:17 PM, Ted Yu yuzhih...@gmail.com wrote:
I refreshed my workspace.
I got the following error with this command:
mvn -Pyarn -Phive -Phadoop-2.4 -DskipTests install
[ERROR] bad symbolic reference. A
I think your best bet by far is to consume the Maven build as-is from
within Eclipse. I wouldn't try to export a project config from the
build, as there is plenty to get lost in translation.
Certainly this works well with IntelliJ, and by the by, if you have a
choice, I would strongly recommend
Forgot to do that step.
Now compilation passes.
On Wed, Aug 6, 2014 at 1:36 PM, Zongheng Yang zonghen...@gmail.com wrote:
Hi Ted,
By refreshing do you mean you have done 'mvn clean'?
On Wed, Aug 6, 2014 at 1:17 PM, Ted Yu yuzhih...@gmail.com wrote:
I refreshed my workspace.
I got the
I found the section on ordering categorical features really interesting,
but the A, B, C example seemed inconsistent. Am I interpreting this passage
wrong, or are there typos? Aren't the split candidates A | C, B and
A, C | B?
For example, for a binary classification problem with one categorical
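To make the ordering trick concrete, here is a small sketch of the idea (not MLlib's actual implementation; the label fractions for A, B, and C are made-up numbers): for binary classification, sort the categories by their fraction of positive labels, then only consider contiguous splits in that order.

```scala
// Toy illustration of ordering categorical feature values for binary
// classification: sort categories by P(label = 1), then candidate splits
// are only the contiguous prefixes of that ordering.
object CategoricalSplits {
  // labelFraction maps each category to its (hypothetical) positive-label rate.
  def orderedSplits(labelFraction: Map[String, Double]): Seq[(Set[String], Set[String])] = {
    val ordered = labelFraction.toSeq.sortBy(_._2).map(_._1)
    // A split after position i puts the first i+1 categories on the left.
    (0 until ordered.length - 1).map { i =>
      (ordered.take(i + 1).toSet, ordered.drop(i + 1).toSet)
    }
  }

  def main(args: Array[String]): Unit = {
    // Assumed fractions: A = 0.2, C = 0.5, B = 0.9, so the order is A, C, B.
    val fractions = Map("A" -> 0.2, "B" -> 0.9, "C" -> 0.5)
    orderedSplits(fractions).foreach { case (left, right) =>
      println(s"${left.mkString(",")} | ${right.mkString(",")}")
    }
    // Yields the two candidates A | C,B and A,C | B, instead of all
    // 2^(3-1) - 1 = 3 subsets an unordered search would consider.
  }
}
```

With these (assumed) fractions the candidates are exactly A | C, B and A, C | B, which is why only the contiguous splits need to be evaluated.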
Hi Spark devs,
I’ve posted an issue on JIRA (
https://issues.apache.org/jira/browse/SPARK-2878) which occurs when using
Kryo serialisation with a custom Kryo registrator to register custom
classes with Kryo. This is an insidious issue that non-deterministically
causes Kryo to have different ID
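For readers unfamiliar with the failure mode: Kryo assigns registered classes sequential integer IDs in registration order, so two JVMs that register classes in different orders disagree about the wire format. The sketch below is a toy registry that mimics that behaviour without depending on Kryo itself (ToyRegistry and the class names are made up for illustration):

```scala
// Toy model of order-dependent class registration: IDs are handed out
// sequentially, so differing registration order on two nodes produces
// conflicting IDs for the same class.
final class ToyRegistry {
  private var nextId = 0
  private val ids = scala.collection.mutable.LinkedHashMap.empty[String, Int]

  // Assigns the next sequential ID on first registration; idempotent after.
  def register(className: String): Int =
    ids.getOrElseUpdate(className, { val id = nextId; nextId += 1; id })

  def idOf(className: String): Option[Int] = ids.get(className)
}

object RegistrationOrder {
  def main(args: Array[String]): Unit = {
    val nodeA = new ToyRegistry
    Seq("Block", "Shuffle", "MyClass").foreach(nodeA.register)

    // On another node the custom registrator happened to run in a
    // different order (the non-deterministic part of the bug report).
    val nodeB = new ToyRegistry
    Seq("Block", "MyClass", "Shuffle").foreach(nodeB.register)

    println(nodeA.idOf("MyClass")) // Some(2)
    println(nodeB.idOf("MyClass")) // Some(1) -- mismatched IDs corrupt deserialization
  }
}
```

If nodeA serializes MyClass as ID 2 and nodeB tries to deserialize ID 2 as Shuffle, the bytes are misinterpreted, which is consistent with the non-deterministic corruption described in the JIRA.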
I don't think it was a conscious design decision to not include the
application classes in the connection manager serializer. We should fix
that. Where is it deserializing data in that thread?
4 might make sense in the long run, but it adds a lot of complexity to the
code base (whole separate
See my comment on https://issues.apache.org/jira/browse/SPARK-2878 for the
full stacktrace, but it's in the BlockManager/BlockManagerWorker where it's
trying to fulfil a getBlock request for another node. The objects that
would be in the block haven't yet been serialised, and that then causes the
Ok, I'll give it a little more time, and if I can't get it going, I'll switch. I
am indeed a little disappointed in the Scala IDE plugin for Eclipse, so I think
switching to IntelliJ might be my best bet.
Thanks,
Ron
Sent from my iPad
On Aug 6, 2014, at 1:43 PM, Sean Owen so...@cloudera.com
So I downloaded the Community Edition of IntelliJ and ran sbt/sbt gen-idea.
I then imported the pom.xml file.
I'm still getting all sorts of errors from IntelliJ about unresolved
dependencies.
Any suggestions?
Thanks,
Ron
On Wednesday, August 6, 2014 12:29 PM, Ron Gonzalez
After sbt gen-idea, you can open the IntelliJ project directly without
going through pom.xml.
If you want to compile inside IntelliJ, you have to remove one of the Mesos
jars. This is an open issue, and you can find the details in JIRA.
Sent from my Google Nexus 5
On Aug 6, 2014 8:54 PM, Ron Gonzalez