Hi,

I have a question about the block size specified in ALS.trainImplicit() in
pyspark (Spark 1.6.1). There is only one block size parameter, and I want to
know whether it results in partitioning both the users and the items axes.

For example, I am using the following call to ALS.trainImplicit() in my
code.

---------------

from pyspark.mllib.recommendation import ALS

RANK = 50
ITERATIONS = 2
BLOCKS = 1000
ALPHA = 1.0

model = ALS.trainImplicit(ratings, RANK, ITERATIONS, blocks=BLOCKS,
                          alpha=ALPHA)


----------------

Will this partition the users x items matrix into BLOCKS x BLOCKS sub-matrices,
or will it partition only the users axis, resulting in BLOCKS sub-matrices,
each with as many columns as there are unique items?
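
In case it is useful context: the only thing I have found to poke at from the
pyspark side is the partition counts of the factor RDDs of the trained model
(using the "model" from the snippet above), though I realize that shows how
the returned factors are partitioned rather than how the ratings matrix is
blocked internally during training.

---------------

# Partition counts of the factor RDDs returned by trainImplicit();
# these are the only partition numbers I can see from pyspark.
print(model.userFeatures().getNumPartitions())     # users axis
print(model.productFeatures().getNumPartitions())  # items axis

----------------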

Thanks,
Nik
