> error: ';' expected but '.' found.
> [INFO] }.toDF("user", "item", "rate")}
> [INFO] ^
>
> when I tried the below code:
>
> val ratings = purchase.map ( line =>
> line.split(',') match { case Array(user, item, rate) =>
> (user.toInt, item.toInt, rate.toFloat)
> }).toDF("user", "item", "rate")
>
> error: value toDF is not a member of org.apache.spark.rdd.RDD[(Int, Int,
> Float)]
> [INFO] possible cause: maybe a semicolon is missing before `value toDF'?
> [INFO]
>
> I have looked at the document that you have shared and tried the following
> code:
>
> case class Record(user: Int, item: Int, rate: Double)
> val ratings = purchase.map(_.split(',')).map(r => Record(r(0).toInt,
> r(1).toInt, r(2).toDouble)).toDF("user", "item", "rate")
>
> for this, I got the below error:
>
> error: value toDF is not a member of org.apache.spark.rdd.RDD[Record]
>
>
> Appreciate your help!
>
> Thanks,
> Jay
>
> On Mar 16, 2015, at 11:35 AM, Xiangrui Meng wrote:
>
> Try this:
>
> val ratings = purchase.map { line =>
> line.split(',') match { case Array(user, item, rate) =>
> (user.toInt, item.toInt, rate.toFloat)
> }
> }.toDF("user", "item", "rate")
>
> -Xiangrui
>
> On Mon, Mar 16, 2015 at 9:08 AM, jaykatukuri wrote:
>
> Hi all,
>
> I am trying to use the new ALS implementation under
> org.apache.spark.ml.recommendation.ALS.
>
> The new method to invoke for training seems to be override def fit(dataset:
> DataFrame, paramMap: ParamMap): ALSModel.
>
> How do I create a dataframe object from ratings data set that is on hdfs?
After this line:
val sc = new SparkContext(conf)
You need to add these lines (the toDF implicits live on SQLContext, not on
SparkContext):
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._ // this is used to implicitly convert an RDD to a
DataFrame.
Hope this helps
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/RDD-to-DataFrame-for-using-
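
For reference, a minimal self-contained sketch of this fix, assuming Spark 1.3
(the release that introduced DataFrames); the app name and HDFS path are
placeholders, and purchase/ratings follow the thread's naming:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object RatingsToDF {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("RatingsToDF")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    // Brings toDF into scope for RDDs of tuples and case classes.
    import sqlContext.implicits._

    // Placeholder path: a CSV of "user,item,rate" lines on HDFS.
    val purchase = sc.textFile("hdfs:///path/to/ratings.csv")
    val ratings = purchase.map { line =>
      line.split(',') match { case Array(user, item, rate) =>
        (user.toInt, item.toInt, rate.toFloat)
      }
    }.toDF("user", "item", "rate")

    ratings.show()
    sc.stop()
  }
}

The case class variant from the thread needs the same import, and Record must
be defined at the top level rather than inside a method, so that the
reflection-based schema inference can find its TypeTag.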
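
The original question also asks about training; a hedged sketch of that step
under the same Spark 1.3 assumption, where ratings is the DataFrame built
above and the parameter values are purely illustrative:

import org.apache.spark.ml.recommendation.ALS

val als = new ALS()
  .setUserCol("user")
  .setItemCol("item")
  .setRatingCol("rate")
  .setRank(10)     // illustrative hyperparameters, not tuned values
  .setMaxIter(10)
  .setRegParam(0.1)

// fit also has an overload taking just the DataFrame, in addition to the
// fit(dataset, paramMap) signature quoted in the question.
val model = als.fit(ratings)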