[jira] [Comment Edited] (SYSTEMML-512) DML Script With UDFs Results In Out Of Memory Error As Compared to Without UDFs

2016-02-17 Thread Mike Dusenberry (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15151623#comment-15151623
 ] 

Mike Dusenberry edited comment on SYSTEMML-512 at 2/18/16 2:52 AM:
---

[~mboehm7] I've added two Scala files with code that reproduces the issue.  
{{test1.scala}} works correctly, and {{test2.scala}} exhibits the issue described 
above.  The only difference is the PNMF script stored in {{val pnmf = ...}}.  
To replicate this, I used {{$SPARK_HOME/bin/spark-shell --master local[*] 
--driver-memory 1G --jars $SYSTEMML_HOME/target/SystemML.jar}}, and then 
{{:load test1.scala}} and {{:load test2.scala}} to run the scripts.  You will 
need the Amazon data in the same directory.

Also, with smaller data sizes (e.g., 2000), {{test2.scala}} will run to 
completion, but much more slowly than {{test1.scala}}.


was (Author: mwdus...@us.ibm.com):
[~mboehm7] I've added two Scala files with code that reproduces the issue.  
{{test1.scala}} works correctly, and {{test2.scala}} exhibits the issue described 
above.  The only difference is the PNMF script stored in {{val pnmf = ...}}.  
To replicate this, I used {{$SPARK_HOME/bin/spark-shell --master local[*] 
--driver-memory 1G --jars $SYSTEMML_HOME/target/SystemML.jar}}, and then 
{{:load test1.scala}} and {{:load test2.scala}} to run the scripts.  You will 
need the Amazon data in the same directory.

> DML Script With UDFs Results In Out Of Memory Error As Compared to Without 
> UDFs
> ---
>
> Key: SYSTEMML-512
> URL: https://issues.apache.org/jira/browse/SYSTEMML-512
> Project: SystemML
>  Issue Type: Bug
>Reporter: Mike Dusenberry
> Attachments: test1.scala, test2.scala
>
>
> Currently, the following script for running a simple version of Poisson 
> non-negative matrix factorization (PNMF) runs in linear time as desired:
> {code}
> # data & args
> X = read($X)
> X = X+1 # change product IDs to be 1-based, rather than 0-based
> V = table(X[,1], X[,2])
> V = V[1:$size,1:$size]
> max_iteration = as.integer($maxiter)
> rank = as.integer($rank)
> # run PNMF
> n = nrow(V)
> m = ncol(V)
> range = 0.01
> W = Rand(rows=n, cols=rank, min=0, max=range, pdf="uniform")
> H = Rand(rows=rank, cols=m, min=0, max=range, pdf="uniform")
> i=0
> while(i < max_iteration) {
>   H = (H * (t(W) %*% (V/(W%*%H))))/t(colSums(W))
>   W = (W * ((V/(W%*%H)) %*% t(H)))/t(rowSums(H))
>   i = i + 1;
> }
> # compute negative log-likelihood
> negloglik_temp = -1 * (sum(V*log(W%*%H)) - as.scalar(colSums(W)%*%rowSums(H)))
> # write outputs
> negloglik = matrix(negloglik_temp, rows=1, cols=1)
> write(negloglik, $negloglikout)
> write(W, $Wout)
> write(H, $Hout)
> {code}
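For reference, the multiplicative updates in the loop above are the standard PNMF update rules. A minimal dense NumPy sketch of the same computation (my own illustration, with hypothetical function names; no sparsity handling, and with a small epsilon guard that the DML script does not have):

```python
import numpy as np

def pnmf(V, rank, max_iteration, seed=42):
    """Poisson NMF via multiplicative updates (dense sketch of the DML loop)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.uniform(0.0, 0.01, size=(n, rank))
    H = rng.uniform(0.0, 0.01, size=(rank, m))
    eps = 1e-12  # guard against division by zero (not in the DML script)
    for _ in range(max_iteration):
        # H = (H * (t(W) %*% (V/(W%*%H))))/t(colSums(W))
        H = H * (W.T @ (V / (W @ H + eps))) / W.sum(axis=0)[:, None]
        # W = (W * ((V/(W%*%H)) %*% t(H)))/t(rowSums(H))
        W = W * ((V / (W @ H + eps)) @ H.T) / H.sum(axis=1)[None, :]
    return W, H

def negloglik(V, W, H, eps=1e-12):
    # -(sum(V*log(WH)) - sum(WH)); note sum(WH) == colSums(W) %*% rowSums(H)
    WH = W @ H
    return -(np.sum(V * np.log(WH + eps)) - np.sum(WH))
```

The `as.scalar(colSums(W)%*%rowSums(H))` term in the DML script is just `sum(W %*% H)` computed cheaply, since the total of a product `WH` factors into the column sums of `W` dotted with the row sums of `H`.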
> However, a small refactoring of this same script to pull the core PNMF 
> algorithm and the negative log-likelihood computation out into separate UDFs 
> results in non-linear runtime and a Java heap out-of-memory error on the same 
> dataset.  
> {code}
> pnmf = function(matrix[double] V, integer max_iteration, integer rank) return 
> (matrix[double] W, matrix[double] H) {
> n = nrow(V)
> m = ncol(V)
> 
> range = 0.01
> W = Rand(rows=n, cols=rank, min=0, max=range, pdf="uniform")
> H = Rand(rows=rank, cols=m, min=0, max=range, pdf="uniform")
> 
> i=0
> while(i < max_iteration) {
>   H = (H * (t(W) %*% (V/(W%*%H))))/t(colSums(W))
>   W = (W * ((V/(W%*%H)) %*% t(H)))/t(rowSums(H))
>   i = i + 1;
> }
> }
> negloglikfunc = function(matrix[double] V, matrix[double] W, matrix[double] 
> H) return (double negloglik) {
> negloglik = -1 * (sum(V*log(W%*%H)) - as.scalar(colSums(W)%*%rowSums(H)))
> }
> # data & args
> X = read($X)
> X = X+1 # change product IDs to be 1-based, rather than 0-based
> V = table(X[,1], X[,2])
> V = V[1:$size,1:$size]
> max_iteration = as.integer($maxiter)
> rank = as.integer($rank)
> # run PNMF and evaluate
> [W, H] = pnmf(V, max_iteration, rank)
> negloglik_temp = negloglikfunc(V, W, H)
> # write outputs
> negloglik = matrix(negloglik_temp, rows=1, cols=1)
> write(negloglik, $negloglikout)
> write(W, $Wout)
> write(H, $Hout)
> {code}
> The expectation would be that such modularization at the DML level should be 
> allowed without any impact on performance.
> Details:
> - Data: Amazon product co-purchasing dataset from Stanford: 
> [http://snap.stanford.edu/data/amazon0601.html]
> - Execution mode: Spark {{MLContext}}, but should be applicable to 
> command-line invocation as well. 
> - Error message:
> {code}
> java.lang.OutOfMemoryError: Java heap space
>   at 
> org.apache.sysml.runtime.matrix.data.MatrixBlock.allocateDenseBlock(MatrixBlock.java:415)
>   at 
> 

[jira] [Comment Edited] (SYSTEMML-512) DML Script With UDFs Results In Out Of Memory Error As Compared to Without UDFs

2016-02-17 Thread Matthias Boehm (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15150016#comment-15150016
 ] 

Matthias Boehm edited comment on SYSTEMML-512 at 2/17/16 7:09 AM:
--

[~mwdus...@us.ibm.com] could you please specify the configuration of size and 
rank that you used? I just tried it with size=10k and a memory budget of 2 GB, and 
everything worked fine. However, I could imagine an issue if the dense matrix 
fits in memory, because we would not mark the function for recompilation, and our 
IPA does not yet propagate literals into functions (required for rank, because 
the wdivmm/wcemm are dynamic rewrites); but this is already tracked in 
[SYSTEMML-427].  

With regard to the out-of-memory error, this is most likely an issue of the OpenJDK 
young-generation size, as explained in [SYSTEMML-455].
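As a quick back-of-envelope check (my own sketch, not SystemML's actual memory estimator): a dense double-precision 10,000 x 10,000 matrix takes roughly 0.75 GB, which fits in a 2 GB budget but is already most of the 1 GB driver heap used in the report above.

```python
def dense_matrix_gb(rows, cols, bytes_per_cell=8):
    """Raw size of a dense double-precision matrix in GB (ignores JVM overhead)."""
    return rows * cols * bytes_per_cell / (1024 ** 3)

# size=10k: about 0.75 GB dense, fine for a 2 GB budget,
# tight for a 1 GB driver once other overheads are counted.
print(round(dense_matrix_gb(10_000, 10_000), 2))  # → 0.75
```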


was (Author: mboehm7):
[~mwdus...@us.ibm.com] could you please specify the configuration of size and 
rank that you used? I just tried it with size=10k and a memory budget of 2 GB, and 
everything worked fine. However, I could imagine an issue if the dense matrix 
fits in memory, because we would not mark the function for recompilation, and our 
IPA does not yet propagate literals into functions, but this is already tracked in 
[SYSTEMML-427].  

With regard to the out-of-memory error, this is most likely an issue of the OpenJDK 
young-generation size, as explained in [SYSTEMML-455].
