Forgot to add the more recent training material:
https://databricks-training.s3.amazonaws.com/index.html

On Fri, Feb 6, 2015 at 12:12 PM, Burak Yavuz <brk...@gmail.com> wrote:

> Hi Luca,
>
> You can tackle this using RowMatrix (spark-shell example):
> ```
> import org.apache.spark.mllib.linalg.Vector
> import org.apache.spark.mllib.linalg.distributed.RowMatrix
> import org.apache.spark.mllib.random._
> import org.apache.spark.rdd.RDD
>
> // sc is the spark context; n (rows) and k (columns) are the matrix
> // dimensions; numPartitions is the number of partitions you want the
> // RDD to be in. normalVectorRDD samples each element from the standard
> // normal distribution.
> val data: RDD[Vector] = RandomRDDs.normalVectorRDD(sc, n, k, numPartitions, seed)
> val matrix = new RowMatrix(data, n, k)
> ```
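>
> Since you asked for a uniform distribution specifically, here is a minimal
> variant of the same idea (a sketch, reusing the n, k, numPartitions and
> seed values above) that uses RandomRDDs.uniformVectorRDD, which draws each
> element independently from U(0, 1):
> ```
> import org.apache.spark.mllib.linalg.Vector
> import org.apache.spark.mllib.linalg.distributed.RowMatrix
> import org.apache.spark.mllib.random.RandomRDDs
> import org.apache.spark.rdd.RDD
>
> // Each element is drawn independently from the uniform distribution on [0, 1).
> val uniformData: RDD[Vector] = RandomRDDs.uniformVectorRDD(sc, n, k, numPartitions, seed)
> val uniformMatrix = new RowMatrix(uniformData, n, k)
> ```
> If you need a range other than [0, 1), you can rescale the values with a
> map over the vectors before building the RowMatrix.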
>
> You can find more tutorials here:
> https://spark-summit.org/2013/exercises/index.html
>
> Best,
> Burak
>
> On Fri, Feb 6, 2015 at 10:03 AM, Luca Puggini <lucapug...@gmail.com>
> wrote:
>
>> Hi all,
>> this is my first email to this mailing list, so I hope I am not doing
>> anything wrong.
>>
>> I am currently trying to define a distributed matrix with n rows and k
>> columns, where each element is randomly sampled from a uniform
>> distribution. How can I do that?
>>
>> It would also be nice if you could suggest a good guide that I can use to
>> start working with Spark. (The quick start tutorial is not enough for me.)
>>
>> Thanks a lot!
>>
>
>
