Re: Spark Implementation of XGBoost

2015-10-26 Thread YiZhi Liu
t;>> Meihua >>> >>> ----- >>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org >>> For additional commands, e-mail: user-h...@spark.apache.org >>> > > -

How to take user jars precedence over Spark jars

2015-10-19 Thread YiZhi Liu
…conflict, but I couldn't figure out which one caused this failure. Interestingly, when I ran mvn test in my project, which tests the Spark job in local mode, everything worked fine. So what is the right way to give user jars precedence over Spark jars? -- Yizhi Liu, Senior Software Engineer / Data Mining, www.mvad.

Re: How to take user jars precedence over Spark jars

2015-10-19 Thread YiZhi Liu
…userClassPathFirst=true --conf spark.executor.userClassPathFirst=true. Cheers. On Mon, Oct 19, 2015 at 5:07 AM, YiZhi Liu <javeli...@gmail.com> wrote: > I'm trying to read a Thrift object from a SequenceFile, using elephant-bird's ThriftWritable. My code looks li…
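The `--conf` flags suggested in the reply above can also be set programmatically. A minimal PySpark sketch (an assumption for illustration; note that the userClassPathFirst settings were still marked experimental in the Spark 1.5 era this thread discusses):

```python
from pyspark import SparkConf, SparkContext

# Ask Spark to load classes from the user's jars before its own copies,
# on both the driver and the executors. These are the settings targeted
# at dependency conflicts like the elephant-bird/Thrift one in this thread.
conf = (SparkConf()
        .setAppName("user-jars-first")
        .set("spark.driver.userClassPathFirst", "true")
        .set("spark.executor.userClassPathFirst", "true"))

sc = SparkContext(conf=conf)
```

Setting the same keys via `spark-submit --conf`, as in the reply, is equivalent; the programmatic form only works for settings read after the driver JVM starts.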

Re: What is the difference between ml.classification.LogisticRegression and mllib.classification.LogisticRegressionWithLBFGS

2015-10-12 Thread YiZhi Liu
…any problem. Thank you! 2015-10-08 1:15 GMT+08:00 Joseph Bradley <jos...@databricks.com>: > Hi YiZhi Liu, The spark.ml classes are part of the higher-level "Pipelines" API, which works with DataFrames. When creating this API, we decided to separate it from t…

Re: What is the difference between ml.classification.LogisticRegression and mllib.classification.LogisticRegressionWithLBFGS

2015-10-12 Thread YiZhi Liu
…point that we have working code now, so it's time to try to refactor that code to share more.) Sincerely, DB Tsai -- Blog: https://www.dbtsai.com PGP Key ID: 0xAF08DF8D On Mon, Oct 12, 2015 at…

What is the difference between ml.classification.LogisticRegression and mllib.classification.LogisticRegressionWithLBFGS

2015-10-07 Thread YiZhi Liu
…? Instead, it uses breeze.optimize.LBFGS and re-implements most of the procedures in mllib.optimization.{LBFGS,OWLQN}. Thank you. Best, -- Yizhi Liu, Senior Software Engineer / Data Mining, www.mvad.com, Shanghai, China
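The split described in this thread — spark.ml as the DataFrame-based "Pipelines" API, spark.mllib as the RDD-based API — can be seen side by side. A hedged PySpark sketch (the import paths below match Spark 2.x; in the 1.5-era code under discussion, ml vectors still lived under pyspark.mllib.linalg and the entry point was SQLContext rather than SparkSession):

```python
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression          # DataFrame-based Pipelines API
from pyspark.ml.linalg import Vectors
from pyspark.mllib.classification import LogisticRegressionWithLBFGS  # RDD-based API
from pyspark.mllib.regression import LabeledPoint

spark = SparkSession.builder.getOrCreate()

# spark.ml trains on a DataFrame of (label, features) rows
df = spark.createDataFrame([(0.0, Vectors.dense(0.0, 1.1)),
                            (1.0, Vectors.dense(2.0, 1.0))],
                           ["label", "features"])
ml_model = LogisticRegression(maxIter=10).fit(df)

# spark.mllib trains on an RDD[LabeledPoint]
rdd = spark.sparkContext.parallelize([LabeledPoint(0.0, [0.0, 1.1]),
                                      LabeledPoint(1.0, [2.0, 1.0])])
mllib_model = LogisticRegressionWithLBFGS.train(rdd, iterations=10)
```

Both fit a logistic regression with L-BFGS; the difference the thread is probing is the surrounding API and optimizer plumbing, not the model family.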

Re: SparkContext._active_spark_context returns None

2015-09-29 Thread YiZhi Liu
…if you want to pass some value to workers, you can use a broadcast variable. Cheers. On Mon, Sep 28, 2015 at 10:31 PM, YiZhi Liu <javeli...@gmail.com> wrote: > Hi Ted, thank you for the reply. The sc works at the driver, but how can I reach the…

Re: SparkContext._active_spark_context returns None

2015-09-28 Thread YiZhi Liu
Hi Ted, thank you for the reply. The sc works at the driver, but how can I reach the JVM in rdd.map? 2015-09-29 11:26 GMT+08:00 Ted Yu <yuzhih...@gmail.com>: >>> sc._jvm.java.lang.Integer.valueOf("12") 12 FYI On Mon, Sep 28, 2015 at 8:08 PM, YiZh…

SparkContext._active_spark_context returns None

2015-09-28 Thread YiZhi Liu
…driver end looks fine: >>> SparkContext._active_spark_context._jvm.java.lang.Integer.valueOf("123".strip()) 123 The program is trivial; I just wonder what the right way to reach the JVM in Python is. Any hel…
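The resolution of this thread can be sketched in one place: sc._jvm (a private Py4J handle, hence the leading underscore) is only reachable on the driver, while code inside rdd.map runs on workers with no gateway back to the driver JVM — so plain values should be shipped to workers, e.g. via a broadcast variable as Ted suggests. A minimal sketch, assuming a running PySpark session:

```python
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# On the driver, the Py4J gateway to the JVM is available:
n = sc._jvm.java.lang.Integer.valueOf("123")  # a Java Integer, auto-converted by Py4J

# Inside rdd.map the closure executes on workers, where sc and sc._jvm
# do not exist. Ship the value with a broadcast variable instead:
bv = sc.broadcast(int(n))
result = sc.parallelize([1, 2, 3]).map(lambda x: x + bv.value).collect()
```

The map closure only captures bv, a small serializable handle; each worker reads bv.value locally rather than calling back into the driver's JVM.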