Re: add spark-csv jar to IPython notebook without the --packages flag

2016-07-25 Thread Ndjido Ardo BAR
Hi Pseudo, try this: export SPARK_SUBMIT_OPTIONS="--jars spark-csv_2.10-1.4.0.jar,commons-csv-1.1.jar" This has been working for me for a long time ;-) both in Zeppelin (for Spark Scala) and IPython Notebook (for PySpark). Cheers, Ardo On Mon, Jul 25, 2016 at 1:28 PM, pseudo oduesp
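
Once those jars are on the classpath, loading a CSV from the notebook looks roughly like this (a minimal PySpark sketch for Spark 1.x; it assumes the notebook already provides sc, and the file path is hypothetical):

from pyspark.sql import SQLContext

sqlContext = SQLContext(sc)   # or reuse the sqlContext the notebook provides

df = (sqlContext.read
      .format("com.databricks.spark.csv")
      .option("header", "true")        # first row holds column names
      .option("inferSchema", "true")   # infer column types from the data
      .load("/path/to/data.csv"))

df.printSchema()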

Re: LabeledPoint

2016-06-21 Thread Ndjido Ardo BAR
To answer your question more accurately: the model.fit(df) method takes a DataFrame of Row(label=double, features=Vectors.dense([...])). Cheers, Ardo. On Tue, Jun 21, 2016 at 6:44 PM, Ndjido Ardo BAR <ndj...@gmail.com> wrote: > Hi, > > You can use an RDD of LabeledPoints to
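
A minimal sketch of that shape (assuming Spark 1.x, an existing sc/sqlContext from the notebook, and the default "label"/"features" column names; LogisticRegression is used here only to illustrate the fit(df) call):

from pyspark.sql import Row
from pyspark.mllib.linalg import Vectors
from pyspark.ml.classification import LogisticRegression

rows = sc.parallelize([
    Row(label=1.0, features=Vectors.dense([0.0, 1.1, 0.1])),
    Row(label=0.0, features=Vectors.dense([2.0, 1.0, -1.0])),
    Row(label=1.0, features=Vectors.dense([0.5, 0.3, 0.2])),
])
df = sqlContext.createDataFrame(rows)

model = LogisticRegression(maxIter=10).fit(df)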

Re: LabeledPoint

2016-06-21 Thread Ndjido Ardo BAR
Hi, You can use an RDD of LabeledPoints to fit your model. Check the docs for more examples: http://spark.apache.org/docs/latest/api/python/pyspark.ml.html?highlight=transform#pyspark.ml.classification.RandomForestClassificationModel.transform Cheers, Ardo. On Tue, Jun 21, 2016 at 6:12 PM, pseudo
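
Note that the linked page covers the newer pyspark.ml API; the RDD-of-LabeledPoint route goes through the older pyspark.mllib API, roughly like this (a sketch assuming an existing sc; the feature values are made up):

from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.tree import RandomForest

points = sc.parallelize([
    LabeledPoint(1.0, [0.0, 1.1, 0.1]),
    LabeledPoint(0.0, [2.0, 1.0, -1.0]),
    LabeledPoint(1.0, [0.5, 0.3, 0.2]),
])

model = RandomForest.trainClassifier(points, numClasses=2,
                                     categoricalFeaturesInfo={},
                                     numTrees=10)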

Re: prefix column Spark

2016-04-19 Thread Ndjido Ardo BAR
This can help:

import org.apache.spark.sql.DataFrame

// Prepend a prefix to every column name of a DataFrame.
def prefixDf(dataFrame: DataFrame, prefix: String): DataFrame = {
  val colNames = dataFrame.columns
  colNames.foldLeft(dataFrame) { (df, colName) =>
    df.withColumnRenamed(colName, s"${prefix}_${colName}")
  }
}
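
A rough PySpark equivalent for readers on the Python side (a sketch, not from the original thread):

# Rename every column of a DataFrame with the given prefix.
def prefix_df(df, prefix):
    for col_name in df.columns:
        df = df.withColumnRenamed(col_name, "{0}_{1}".format(prefix, col_name))
    return df

# prefixed = prefix_df(df, "raw")   # e.g. column "id" becomes "raw_id"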

Re: Calling Python code from Scala

2016-04-18 Thread Ndjido Ardo BAR
Hi Didier, I think with PySpark you can wrap your legacy Python functions into UDFs and use them in your DataFrames. But you have to use DataFrames instead of RDDs. Cheers, Ardo On Mon, Apr 18, 2016 at 7:13 PM, didmar wrote: > Hi, > > I have a Spark project in Scala and I
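
A minimal sketch of that pattern (the legacy function, the "name" column, and df are hypothetical):

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

def legacy_normalize(s):
    # stand-in for the existing legacy Python code
    return s.strip().lower() if s is not None else None

normalize_udf = udf(legacy_normalize, StringType())

df2 = df.withColumn("name_normalized", normalize_udf(df["name"]))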

Re: How to estimate the size of dataframe using pyspark?

2016-04-09 Thread Ndjido Ardo BAR
What's the memory size of your driver? On Sat, 9 Apr 2016 at 20:33, Buntu Dev wrote: > Actually, df.show() works, displaying 20 rows, but df.count() is the one > which is causing the driver to run out of memory. There are just 3 INT > columns. > > Any idea what could be the reason? >
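
If the driver is running with the default memory, it can be raised at submit time with spark-submit --driver-memory 4g (the 4g value is only an example); from inside the job the current setting can be checked roughly like this (assumes an existing SparkContext sc):

# Print the driver memory the application was launched with
# ("not set" means the default is in use).
print(sc.getConf().get("spark.driver.memory", "not set"))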

Re: Can't filter

2015-12-10 Thread Ndjido Ardo Bar
Please send your call stack with the full description of the exception. > On 10 Dec 2015, at 12:10, Бобров Виктор wrote: > > Hi, I can’t filter my rdd. > > def filter1(tp: ((Array[String], Int), (Array[String], Int))): Boolean = { > tp._1._2 > tp._2._2 > } > val mail_rdd =

Re: RDD functions

2015-12-04 Thread Ndjido Ardo BAR
Hi Michal, I think the following link could interest you. You'll find a lot of examples there! http://homepage.cs.latrobe.edu.au/zhe/ZhenHeSparkRDDAPIExamples.html Cheers, Ardo On Fri, Dec 4, 2015 at 2:31 PM, Michal Klos wrote: >

Re: Grid search with Random Forest

2015-12-01 Thread Ndjido Ardo BAR
wrong but no there isn't one that I am aware of. >> >> Unless someone is willing to explain how to obtain the raw prediction >> column with the GBTClassifier. In this case I'd be happy to work on a PR. >> On 1 Dec 2015 8:43 a.m., "Ndjido Ardo BAR" <ndj...@gmail.c

Re: Grid search with Random Forest

2015-11-30 Thread Ndjido Ardo BAR
hould work with 1.5+. > > On Thu, Nov 26, 2015 at 12:53 PM, Ndjido Ardo Bar <ndj...@gmail.com> > wrote: > >> >> Hi folks, >> >> Does anyone know whether the Grid Search capability is enabled since the >> issue spark-9011 of version 1.4.0 ? I'm

Re: Grid search with Random Forest

2015-11-30 Thread Ndjido Ardo BAR
dictionCol like > the. RandomForestClassifier has. > Cf: > http://spark.apache.org/docs/latest/ml-ensembles.html#output-columns-predictions-1 > On 1 Dec 2015 3:57 a.m., "Ndjido Ardo BAR" <ndj...@gmail.com> wrote: > >> Hi Joseph, >> >> Yes Ra

Re: Debug Spark

2015-11-29 Thread Ndjido Ardo BAR
ek Galstyan > > Նարեկ Գալստյան > > On 29 November 2015 at 20:51, Ndjido Ardo BAR <ndj...@gmail.com> wrote: > >> Masf, the following link sets the basics to start debugging your spark >> apps in local mode: >> >> >> https://medium.com/large-scal

Re: Debug Spark

2015-11-29 Thread Ndjido Ardo BAR
com> wrote: > Hi Ardo > > > Some tutorial to debug with Intellij? > > Thanks > > Regards. > Miguel. > > > On Sun, Nov 29, 2015 at 5:32 PM, Ndjido Ardo BAR <ndj...@gmail.com> wrote: > >> hi, >> >> IntelliJ is just great for that! >&

Re: Debug Spark

2015-11-29 Thread Ndjido Ardo BAR
Hi, IntelliJ is just great for that! Cheers, Ardo. On Sun, Nov 29, 2015 at 5:18 PM, Masf wrote: > Hi > > Is it possible to debug Spark locally with IntelliJ or another IDE? > > Thanks > > -- > Regards. > Miguel Ángel >
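
The usual prerequisite is to run the application with a local master so the whole job executes in one process and driver-side breakpoints are hit. A minimal PySpark sketch (names are arbitrary; the same local[*] idea applies to a Scala app launched from IntelliJ):

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local[*]").setAppName("debug-session")
sc = SparkContext(conf=conf)

squares = sc.parallelize(range(10)).map(lambda x: x * x)
print(squares.collect())   # e.g. set a breakpoint on this line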

Grid search with Random Forest

2015-11-26 Thread Ndjido Ardo Bar
Hi folks, Does anyone know whether the Grid Search capability is enabled since issue SPARK-9011 in version 1.4.0? I'm getting a "rawPredictionCol column doesn't exist" error when trying to perform a grid search with Spark 1.4.0. Cheers, Ardo
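
For reference, the grid search being attempted looks roughly like this in PySpark (a sketch assuming Spark 1.5+, where RandomForestClassifier emits a rawPrediction column, and an existing df with a features column and an indexed label column, e.g. produced by StringIndexer):

from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

rf = RandomForestClassifier(labelCol="label", featuresCol="features")

# The evaluator reads the rawPrediction column produced by the classifier.
evaluator = BinaryClassificationEvaluator(rawPredictionCol="rawPrediction")

grid = (ParamGridBuilder()
        .addGrid(rf.numTrees, [20, 50])
        .addGrid(rf.maxDepth, [5, 10])
        .build())

cv = CrossValidator(estimator=rf, estimatorParamMaps=grid,
                    evaluator=evaluator, numFolds=3)

cvModel = cv.fit(df)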

Re: can I use Spark as an alternative for GemFire cache?

2015-10-17 Thread Ndjido Ardo Bar
Hi Kali, If I understand you correctly, Tachyon (http://tachyon-project.org) can be a good alternative. You can use the Spark API to load and persist data into Tachyon. Hope that helps. Ardo > On 17 Oct 2015, at 15:28, "kali.tumm...@gmail.com" > wrote: > > Hi All, >
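
A sketch of what "load and persist data into Tachyon" can look like from Spark (assumes a running Tachyon master and the Tachyon client on the classpath; df, sqlContext, host, port and path are hypothetical):

# Write a DataFrame to Tachyon and read it back through the tachyon:// URI.
df.write.parquet("tachyon://tachyon-master:19998/warehouse/events")

events = sqlContext.read.parquet("tachyon://tachyon-master:19998/warehouse/events")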

Re: Scala api end points

2015-09-24 Thread Ndjido Ardo BAR
Hi Masoom Alam, I successfully experimented with the following project on GitHub: https://github.com/erisa85/WikiSparkJobServer . I recommend it. Cheers, Ardo. On Thu, Sep 24, 2015 at 5:20 PM, masoom alam wrote: > Hi everyone > > I am new to Scala. I have a

Re: Small File to HDFS

2015-09-03 Thread Ndjido Ardo Bar
Hi Nibiau, HBase seems to be a good solution to your problems. As you may know, storing your messages as key-value pairs in HBase saves you the overhead of manually resizing blocks of data using zip files. The added advantage, along with the fact that HBase uses HDFS for storage, is the