[jira] [Commented] (SYSTEMML-693) Automatically invoke toString when user tries to print a matrix
[ https://issues.apache.org/jira/browse/SYSTEMML-693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15304914#comment-15304914 ]

Nakul Jindal commented on SYSTEMML-693:
---------------------------------------

After some looking around, I saw that there is more than one way to solve this.

1. Do what scalar printing does: change the "processInstruction" method and "ScalarMatrixArithmetic" to deal with printing. This is the easiest and most straightforward way.
2. Somehow wrap the "toString" HOP around the matrix HOP. This is somewhat more involved.
3. Insert the toString in the parsing phase. Type information is not available at parsing time, so I am not sure this is even feasible.

[~mboehm7], [~niketanpansare] - Any suggestions?

> Automatically invoke toString when user tries to print a matrix
> ---------------------------------------------------------------
>
> Key: SYSTEMML-693
> URL: https://issues.apache.org/jira/browse/SYSTEMML-693
> Project: SystemML
> Issue Type: Improvement
> Components: Parser
> Reporter: Nakul Jindal
> Priority: Minor
>
> The {{toString}} builtin function was added as [PR #120|https://github.com/apache/incubator-systemml/pull/120] and SYSTEMML-693.
> The way to print a matrix with this builtin function is
> {code}
> m = ... # Create Matrix
> print("matrix : " + toString(m))
> {code}
> To improve usability, the DML programmer should be able to say
> {code}
> m = ... # Create Matrix
> print("matrix : " + m)
> {code}
> The call to {{toString}} should be automatically inserted.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
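Option 1 above can be illustrated with a small sketch: coerce a matrix operand of string concatenation to its string form at execution time. This is plain Python with hypothetical names (`Matrix`, `concat`), purely illustrative of the coercion idea, not SystemML's actual `processInstruction` code.

```python
class Matrix:
    """Toy stand-in for a matrix value (not SystemML's MatrixBlock)."""
    def __init__(self, data):
        self.data = data

    def to_string(self):
        # Mimics what DML's toString() builtin would produce:
        # space-separated values, one row per line.
        return "\n".join(" ".join(str(v) for v in row) for row in self.data)

def concat(left, right):
    """String '+' that auto-inserts the toString conversion for matrices."""
    def as_str(x):
        return x.to_string() if isinstance(x, Matrix) else str(x)
    return as_str(left) + as_str(right)

m = Matrix([[1.0, 2.0], [3.0, 4.0]])
print(concat("matrix : ", m))
```

The same dispatch-on-operand-type check is what the scalar printing path already does for numeric scalars, which is why option 1 is the most straightforward.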
[jira] [Created] (SYSTEMML-734) Implement additional loss layers required by the GoogleNet proto
Niketan Pansare created SYSTEMML-734:
----------------------------------------

Summary: Implement additional loss layers required by the GoogleNet proto
Key: SYSTEMML-734
URL: https://issues.apache.org/jira/browse/SYSTEMML-734
Project: SystemML
Issue Type: Task
Reporter: Niketan Pansare

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (SYSTEMML-733) Create proto file for GoogleNet and test the generated DML on ImageNet dataset for accuracy
Niketan Pansare created SYSTEMML-733:
----------------------------------------

Summary: Create proto file for GoogleNet and test the generated DML on ImageNet dataset for accuracy
Key: SYSTEMML-733
URL: https://issues.apache.org/jira/browse/SYSTEMML-733
Project: SystemML
Issue Type: Task
Reporter: Niketan Pansare

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Comment Edited] (SYSTEMML-732) Explore different memory management policy for GPU
[ https://issues.apache.org/jira/browse/SYSTEMML-732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15304794#comment-15304794 ]

Niketan Pansare edited comment on SYSTEMML-732 at 5/27/16 9:17 PM:
-------------------------------------------------------------------

In the initial PR https://github.com/apache/incubator-systemml/pull/165, we are using a naive eviction policy.

was (Author: niketanpansare):
In the initial PR, we are using a naive eviction policy.

> Explore different memory management policy for GPU
> --------------------------------------------------
>
> Key: SYSTEMML-732
> URL: https://issues.apache.org/jira/browse/SYSTEMML-732
> Project: SystemML
> Issue Type: Task
> Reporter: Niketan Pansare
>
> The issues that need to be addressed are:
> 1. Eviction policy
> 2. Lazy/Eager synchronization between CP/GPU

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (SYSTEMML-732) Explore different memory management policy for GPU
[ https://issues.apache.org/jira/browse/SYSTEMML-732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15304794#comment-15304794 ]

Niketan Pansare commented on SYSTEMML-732:
------------------------------------------

In the initial PR, we are using a naive eviction policy.

> Explore different memory management policy for GPU
> --------------------------------------------------
>
> Key: SYSTEMML-732
> URL: https://issues.apache.org/jira/browse/SYSTEMML-732
> Project: SystemML
> Issue Type: Task
> Reporter: Niketan Pansare
>
> The issues that need to be addressed are:
> 1. Eviction policy
> 2. Lazy/Eager synchronization between CP/GPU

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
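A naive eviction policy of the kind mentioned in this comment can be sketched as a small LRU buffer pool. This is an illustrative sketch with hypothetical names, not the code in the PR; real eviction would also copy the evicted block back to host memory.

```python
from collections import OrderedDict

class NaiveBufferPool:
    """Illustrative LRU eviction policy for a fixed-capacity device
    buffer pool (hypothetical; not SystemML's actual GPU memory manager)."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.blocks = OrderedDict()  # name -> size, least recently used first

    def allocate(self, name, size):
        # Evict least-recently-used blocks until the request fits.
        while self.used + size > self.capacity and self.blocks:
            _evicted_name, evicted_size = self.blocks.popitem(last=False)
            self.used -= evicted_size  # in reality: copy data back to host first
        self.blocks[name] = size
        self.used += size

    def touch(self, name):
        # Mark a block as most recently used on every CP/GPU access.
        self.blocks.move_to_end(name)
```

The two open questions in the issue map directly onto this sketch: which block to pop in `allocate` (eviction policy), and when `touch`/eviction trigger host-device copies (lazy vs. eager synchronization).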
[jira] [Created] (SYSTEMML-731) Conduct initial performance experiments for mat mult
Niketan Pansare created SYSTEMML-731:
----------------------------------------

Summary: Conduct initial performance experiments for mat mult
Key: SYSTEMML-731
URL: https://issues.apache.org/jira/browse/SYSTEMML-731
Project: SystemML
Issue Type: Task
Reporter: Niketan Pansare
Assignee: Niketan Pansare

Before the PR https://github.com/apache/incubator-systemml/pull/165 gets merged, initial performance experiments need to be conducted for dense-dense mat mult.

[~nakul02] [~mboehm7]

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (SYSTEMML-730) Add fused GPU instructions for LSTM/RNN
Niketan Pansare created SYSTEMML-730:
----------------------------------------

Summary: Add fused GPU instructions for LSTM/RNN
Key: SYSTEMML-730
URL: https://issues.apache.org/jira/browse/SYSTEMML-730
Project: SystemML
Issue Type: Task
Reporter: Niketan Pansare

When we decide to move to CuDNN v5, this will call the respective CuDNN functions.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (SYSTEMML-729) Add GPU instructions that utilize CuDNN v4's conv2d and pooling related functions
Niketan Pansare created SYSTEMML-729:
----------------------------------------

Summary: Add GPU instructions that utilize CuDNN v4's conv2d and pooling related functions
Key: SYSTEMML-729
URL: https://issues.apache.org/jira/browse/SYSTEMML-729
Project: SystemML
Issue Type: Task
Reporter: Niketan Pansare

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (SYSTEMML-704) Host jcu*.jar libraries on mvn repo
[ https://issues.apache.org/jira/browse/SYSTEMML-704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15304772#comment-15304772 ]

Niketan Pansare commented on SYSTEMML-704:
------------------------------------------

As per the discussion on the dev mailing list, the PR https://github.com/apache/incubator-systemml/pull/165 will be merged when this issue is resolved.

> Host jcu*.jar libraries on mvn repo
> -----------------------------------
>
> Key: SYSTEMML-704
> URL: https://issues.apache.org/jira/browse/SYSTEMML-704
> Project: SystemML
> Issue Type: Task
> Reporter: Niketan Pansare
> Priority: Minor
>
> The PR https://github.com/apache/incubator-systemml/pull/165/ uses system scope for jcu*.jar as they are not published on Maven Central. Since we are planning to include them in SystemML, it would be good to host them in a repo we maintain and use provided scope instead. If for LICENSE or other reasons we are not able to host them, I am fine with rejecting this issue too. From jcuda's website: "JCuda is published under the terms of the MIT/X11 License".
> The current version depends on jcu*-0.7.5b.jar (except jcudnn-0.7.5.jar). The jars are available for download from http://www.jcuda.org/downloads/downloads.html. The source is available at https://github.com/jcuda
> [~nakul02] [~deron] [~luciano resende]

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (SYSTEMML-727) Add bufferpool integration logic to CUDA backend
Niketan Pansare created SYSTEMML-727:
----------------------------------------

Summary: Add bufferpool integration logic to CUDA backend
Key: SYSTEMML-727
URL: https://issues.apache.org/jira/browse/SYSTEMML-727
Project: SystemML
Issue Type: Task
Reporter: Niketan Pansare

This work is done in the PR: https://github.com/apache/incubator-systemml/pull/165

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (SYSTEMML-726) Explore additional solver generators (for example: L-BFGS, Conjugate gradient)
Niketan Pansare created SYSTEMML-726:
----------------------------------------

Summary: Explore additional solver generators (for example: L-BFGS, Conjugate gradient)
Key: SYSTEMML-726
URL: https://issues.apache.org/jira/browse/SYSTEMML-726
Project: SystemML
Issue Type: Task
Reporter: Niketan Pansare

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (SYSTEMML-725) Implement generator for the layers used in AlexNet proto
Niketan Pansare created SYSTEMML-725:
----------------------------------------

Summary: Implement generator for the layers used in AlexNet proto
Key: SYSTEMML-725
URL: https://issues.apache.org/jira/browse/SYSTEMML-725
Project: SystemML
Issue Type: Task
Reporter: Niketan Pansare

An example of such a layer is LRN.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (SYSTEMML-721) Integrate Barista api into DMLScript for direct invocation
[ https://issues.apache.org/jira/browse/SYSTEMML-721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15304745#comment-15304745 ]

Niketan Pansare commented on SYSTEMML-721:
------------------------------------------

This issue is dependent on the PR getting into master: https://github.com/apache/incubator-systemml/pull/158

> Integrate Barista api into DMLScript for direct invocation
> ----------------------------------------------------------
>
> Key: SYSTEMML-721
> URL: https://issues.apache.org/jira/browse/SYSTEMML-721
> Project: SystemML
> Issue Type: Task
> Reporter: Niketan Pansare
>
> Barista is the class that generates DML from a Caffe proto.
> Once this task is completed, the user should be able to invoke a Caffe proto file using the following command (as an example):
> hadoop jar SystemML.jar -f Caffe.proto -caffe

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (SYSTEMML-719) Create proto file for Autoencoder and test the generated DML on MNIST dataset for accuracy
Niketan Pansare created SYSTEMML-719:
----------------------------------------

Summary: Create proto file for Autoencoder and test the generated DML on MNIST dataset for accuracy
Key: SYSTEMML-719
URL: https://issues.apache.org/jira/browse/SYSTEMML-719
Project: SystemML
Issue Type: Task
Reporter: Niketan Pansare
Assignee: Niketan Pansare

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (SYSTEMML-720) Implement additional loss layers required by the Autoencoder proto
Niketan Pansare created SYSTEMML-720:
----------------------------------------

Summary: Implement additional loss layers required by the Autoencoder proto
Key: SYSTEMML-720
URL: https://issues.apache.org/jira/browse/SYSTEMML-720
Project: SystemML
Issue Type: Task
Reporter: Niketan Pansare
Assignee: Niketan Pansare

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (SYSTEMML-718) Implement generator for the layers used in Lenet proto
Niketan Pansare created SYSTEMML-718:
----------------------------------------

Summary: Implement generator for the layers used in Lenet proto
Key: SYSTEMML-718
URL: https://issues.apache.org/jira/browse/SYSTEMML-718
Project: SystemML
Issue Type: Task
Reporter: Niketan Pansare
Assignee: Niketan Pansare

Implemented in the PR https://github.com/niketanpansare/incubator-systemml/tree/d9e6efaf297b1a22fcbe3eb0b7f75f07e19969db/src/main/java/org/apache/sysml/api/dl/layer

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Assigned] (SYSTEMML-692) Create initial prototype for generating DML from Caffe solver/net proto files
[ https://issues.apache.org/jira/browse/SYSTEMML-692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Niketan Pansare reassigned SYSTEMML-692:
----------------------------------------
Assignee: Niketan Pansare

> Create initial prototype for generating DML from Caffe solver/net proto files
> -----------------------------------------------------------------------------
>
> Key: SYSTEMML-692
> URL: https://issues.apache.org/jira/browse/SYSTEMML-692
> Project: SystemML
> Issue Type: Task
> Reporter: Niketan Pansare
> Assignee: Niketan Pansare

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (SYSTEMML-717) Create proto file for Lenet and test the generated DML on MNIST dataset for accuracy
[ https://issues.apache.org/jira/browse/SYSTEMML-717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15304735#comment-15304735 ]

Niketan Pansare commented on SYSTEMML-717:
------------------------------------------

Created https://github.com/niketanpansare/incubator-systemml/blob/d9e6efaf297b1a22fcbe3eb0b7f75f07e19969db/samples/caffe/Lenet.proto, which generates a DML file with the same network/parameters as https://github.com/apache/incubator-systemml/blob/master/scripts/staging/lenet-train.dml ...

[~prithvi_r_s]

> Create proto file for Lenet and test the generated DML on MNIST dataset for accuracy
> ------------------------------------------------------------------------------------
>
> Key: SYSTEMML-717
> URL: https://issues.apache.org/jira/browse/SYSTEMML-717
> Project: SystemML
> Issue Type: Task
> Reporter: Niketan Pansare

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Created] (SYSTEMML-717) Create proto file for Lenet and test the generated DML on MNIST dataset for accuracy
Niketan Pansare created SYSTEMML-717:
----------------------------------------

Summary: Create proto file for Lenet and test the generated DML on MNIST dataset for accuracy
Key: SYSTEMML-717
URL: https://issues.apache.org/jira/browse/SYSTEMML-717
Project: SystemML
Issue Type: Task
Reporter: Niketan Pansare

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (SYSTEMML-692) Generate DML from Caffe solver/net proto files
[ https://issues.apache.org/jira/browse/SYSTEMML-692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Niketan Pansare updated SYSTEMML-692:
-------------------------------------
Issue Type: Task (was: Epic)

> Generate DML from Caffe solver/net proto files
> ----------------------------------------------
>
> Key: SYSTEMML-692
> URL: https://issues.apache.org/jira/browse/SYSTEMML-692
> Project: SystemML
> Issue Type: Task
> Reporter: Niketan Pansare

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (SYSTEMML-692) Generate DML from Caffe solver/net proto files
[ https://issues.apache.org/jira/browse/SYSTEMML-692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Niketan Pansare updated SYSTEMML-692:
-------------------------------------
Issue Type: Epic (was: Task)

> Generate DML from Caffe solver/net proto files
> ----------------------------------------------
>
> Key: SYSTEMML-692
> URL: https://issues.apache.org/jira/browse/SYSTEMML-692
> Project: SystemML
> Issue Type: Epic
> Reporter: Niketan Pansare

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Closed] (SYSTEMML-641) Performance features core block matrix multiply
[ https://issues.apache.org/jira/browse/SYSTEMML-641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Dusenberry closed SYSTEMML-641.
------------------------------------
Assignee: Matthias Boehm

> Performance features core block matrix multiply
> -----------------------------------------------
>
> Key: SYSTEMML-641
> URL: https://issues.apache.org/jira/browse/SYSTEMML-641
> Project: SystemML
> Issue Type: Task
> Components: Runtime
> Reporter: Matthias Boehm
> Assignee: Matthias Boehm
> Fix For: SystemML 0.10
>
> 1) Cache-conscious dense-dense with large skinny rhs (> L3 cache)
> 2) Scheduling improvements multi-threaded operations with short lhs
> 3) Column-wise parallelization with wide rhs

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
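Item 1 in the list above (cache-conscious dense-dense multiply) boils down to tiling the loops so the active panel of the right-hand side stays cache-resident instead of being streamed through for every output row. A minimal illustrative sketch in plain Python (not SystemML's actual kernel, which works on its own block representation):

```python
def matmul_blocked(A, B, bs=64):
    """Cache-blocked dense matrix multiply over row-major lists of lists.
    Illustrative only: tiles the k and j loops with block size bs so the
    touched panel of B fits in cache while it is reused across rows of A."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for kk in range(0, k, bs):          # tile over the shared dimension
        for jj in range(0, m, bs):      # tile over columns of B/C
            for i in range(n):
                Ci, Ai = C[i], A[i]
                for l in range(kk, min(kk + bs, k)):
                    a = Ai[l]
                    Bl = B[l]
                    for j in range(jj, min(jj + bs, m)):
                        Ci[j] += a * Bl[j]
    return C
```

With a skinny rhs larger than L3 cache (the case named in item 1), the untiled i-k-j loop order would re-fetch all of B from memory for every row of A; the kk/jj tiles bound the working set instead.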
[jira] [Updated] (SYSTEMML-547) Implement built-in functions for max and average pooling
[ https://issues.apache.org/jira/browse/SYSTEMML-547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Dusenberry updated SYSTEMML-547:
-------------------------------------
Assignee: Niketan Pansare (was: Nakul Jindal)

> Implement built-in functions for max and average pooling
> --------------------------------------------------------
>
> Key: SYSTEMML-547
> URL: https://issues.apache.org/jira/browse/SYSTEMML-547
> Project: SystemML
> Issue Type: New Feature
> Components: Parser, Runtime
> Reporter: Niketan Pansare
> Assignee: Niketan Pansare
> Priority: Minor
> Fix For: SystemML 0.10
> Original Estimate: 168h
> Remaining Estimate: 168h
>
> pool2d(input, pool_size, stride_length, border_mode="valid", pool_mode="max")
> Performs downscaling of the input matrix.
> The arguments to this function are:
> 1. input is a 2-dimensional matrix.
> 2. pool_size is a required integer parameter.
> 3. stride_length is an optional Int parameter. The default value is 1.
> 4. border_mode is an optional String parameter. The valid values are "same" and "valid".
> 5. pool_mode is an optional String parameter. The valid values are "max" and "avg". We can later add additional operators here (such as sum).
> For detailed documentation, see Theano's pool_2d function: https://github.com/Theano/Theano/blob/master/theano/tensor/signal/pool.py#L40
> As an example, our pool2d(input=X, pool_size=2, stride_length=1, border_mode="valid", pool_mode="avg") invocation is similar to Theano's pool_2d(X, ds=(2,2), st=(1,1), ignore_border=True, padding=(0, 0), mode="average_exc_pad")
> Since padding=(0,0) is the most common padding (probably the only one most people will use), I thought of simplifying the interface by borrowing concepts from TensorFlow's functions max_pool and avg_pool.
> See https://www.tensorflow.org/versions/r0.7/api_docs/python/nn.html#avg_pool
> The above example will translate into the following TensorFlow code: tf.nn.avg_pool(X, pool_size=(1,2,2,1), strides=(1,1,1,1), padding="VALID")
> Another good reference for understanding the pooling operation is http://cs231n.github.io/convolutional-networks/#pool
> [~mwdus...@us.ibm.com], [~nakul02], [~prithvi_r_s], [~reinw...@us.ibm.com]

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
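The pool2d semantics listed in the issue description can be sketched as a plain-Python reference, covering only border_mode="valid" (no padding); this is illustrative of the described interface, not SystemML's actual runtime kernel:

```python
def pool2d(inp, pool_size, stride_length=1, border_mode="valid", pool_mode="max"):
    """Reference sketch of the pool2d builtin described in SYSTEMML-547.
    inp: 2-D matrix as a list of lists; only border_mode="valid" is handled,
    i.e. windows that fall off the edge are dropped (no padding)."""
    rows, cols = len(inp), len(inp[0])
    out = []
    for i in range(0, rows - pool_size + 1, stride_length):
        row = []
        for j in range(0, cols - pool_size + 1, stride_length):
            # Gather the pool_size x pool_size window anchored at (i, j).
            window = [inp[i + di][j + dj]
                      for di in range(pool_size) for dj in range(pool_size)]
            if pool_mode == "max":
                row.append(max(window))
            else:  # "avg" (average excluding padding, matching Theano's
                   # mode="average_exc_pad" in the comparison above)
                row.append(sum(window) / len(window))
        out.append(row)
    return out
```

For a 2x2 input, pool2d(X, pool_size=2, pool_mode="avg") produces a single 1x1 output holding the mean of all four values, matching the Theano/TensorFlow invocations quoted above.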
[jira] [Closed] (SYSTEMML-547) Implement built-in functions for max and average pooling
[ https://issues.apache.org/jira/browse/SYSTEMML-547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Niketan Pansare closed SYSTEMML-547.
------------------------------------
Resolution: Fixed
Fix Version/s: SystemML 0.10

https://github.com/apache/incubator-systemml/commit/c334c2c85bc9cbb343e63b5b28ff3a1c5098c7fa

> Implement built-in functions for max and average pooling
> --------------------------------------------------------
>
> Key: SYSTEMML-547
> URL: https://issues.apache.org/jira/browse/SYSTEMML-547
> Project: SystemML
> Issue Type: New Feature
> Components: Parser, Runtime
> Reporter: Niketan Pansare
> Assignee: Nakul Jindal
> Priority: Minor
> Fix For: SystemML 0.10
> Original Estimate: 168h
> Remaining Estimate: 168h
>
> pool2d(input, pool_size, stride_length, border_mode="valid", pool_mode="max")
> Performs downscaling of the input matrix.
> The arguments to this function are:
> 1. input is a 2-dimensional matrix.
> 2. pool_size is a required integer parameter.
> 3. stride_length is an optional Int parameter. The default value is 1.
> 4. border_mode is an optional String parameter. The valid values are "same" and "valid".
> 5. pool_mode is an optional String parameter. The valid values are "max" and "avg". We can later add additional operators here (such as sum).
> For detailed documentation, see Theano's pool_2d function: https://github.com/Theano/Theano/blob/master/theano/tensor/signal/pool.py#L40
> As an example, our pool2d(input=X, pool_size=2, stride_length=1, border_mode="valid", pool_mode="avg") invocation is similar to Theano's pool_2d(X, ds=(2,2), st=(1,1), ignore_border=True, padding=(0, 0), mode="average_exc_pad")
> Since padding=(0,0) is the most common padding (probably the only one most people will use), I thought of simplifying the interface by borrowing concepts from TensorFlow's functions max_pool and avg_pool.
> See https://www.tensorflow.org/versions/r0.7/api_docs/python/nn.html#avg_pool
> The above example will translate into the following TensorFlow code: tf.nn.avg_pool(X, pool_size=(1,2,2,1), strides=(1,1,1,1), padding="VALID")
> Another good reference for understanding the pooling operation is http://cs231n.github.io/convolutional-networks/#pool
> [~mwdus...@us.ibm.com], [~nakul02], [~prithvi_r_s], [~reinw...@us.ibm.com]

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Updated] (SYSTEMML-714) Compile error on rewrite 'pushdown sum on binary' w/ multiple consumers
[ https://issues.apache.org/jira/browse/SYSTEMML-714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matthias Boehm updated SYSTEMML-714:
------------------------------------
Description:

The dynamic simplification rewrite 'pushdown sum on binary +' with multiple consumers creates a HOP DAG corruption leading to compilation errors. Consider the following script as an example:

{code}
A = rand(rows=10, cols=1);
B = rand(rows=10, cols=1);
C = rand(rows=10, cols=1);
D = rand(rows=10, cols=1);
r1 = sum(A*B + C*D);
r2 = r1;
print("ret1="+r1+", ret2="+r2);
{code}

The trace of applied rewrites is as follows:

{code}
DEBUG rewrite.RewriteAlgebraicSimplificationDynamic: Applied pushdownSumOnAdditiveBinary.
DEBUG rewrite.RewriteAlgebraicSimplificationDynamic: Applied simplifyDotProductSum.
DEBUG rewrite.RewriteAlgebraicSimplificationDynamic: Applied fuseDatagenReorgOperation.
DEBUG rewrite.RewriteAlgebraicSimplificationDynamic: Applied simplifyDotProductSum.
DEBUG rewrite.RewriteAlgebraicSimplificationDynamic: Applied fuseDatagenReorgOperation
{code}

Finally, this issue results in the following or similar exception on subsequent rewrites:

{code}
Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
	at java.util.ArrayList.rangeCheck(ArrayList.java:653)
	at java.util.ArrayList.get(ArrayList.java:429)
	at org.apache.sysml.hops.rewrite.RewriteAlgebraicSimplificationDynamic.simplifyColwiseAggregate(RewriteAlgebraicSimplificationDynamic.java:566)
	at org.apache.sysml.hops.rewrite.RewriteAlgebraicSimplificationDynamic.rule_AlgebraicSimplification(RewriteAlgebraicSimplificationDynamic.java:154)
	at org.apache.sysml.hops.rewrite.RewriteAlgebraicSimplificationDynamic.rule_AlgebraicSimplification(RewriteAlgebraicSimplificationDynamic.java:185)
	at org.apache.sysml.hops.rewrite.RewriteAlgebraicSimplificationDynamic.rewriteHopDAGs(RewriteAlgebraicSimplificationDynamic.java:91)
	at org.apache.sysml.hops.rewrite.ProgramRewriter.rewriteHopDAGs(ProgramRewriter.java:279)
	at org.apache.sysml.hops.rewrite.ProgramRewriter.rewriteStatementBlockHopDAGs(ProgramRewriter.java:263)
	at org.apache.sysml.hops.rewrite.ProgramRewriter.rewriteProgramHopDAGs(ProgramRewriter.java:206)
	at org.apache.sysml.parser.DMLTranslator.rewriteHopsDAG(DMLTranslator.java:273)
	at org.apache.sysml.api.DMLScript.execute(DMLScript.java:602)
	at org.apache.sysml.api.DMLScript.executeScript(DMLScript.java:337)
{code}

The issue is caused by incorrect handling of multiple parents in the rewrite 'pushdown sum on binary +'. The workaround is to disable rewrites (optimization level 1 instead of 2) or to create an "if(1==1){}" cut right after the sum (preferred workaround).

> Compile error on rewrite 'pushdown sum on binary' w/ multiple consumers
> -----------------------------------------------------------------------
>
> Key: SYSTEMML-714
> URL: https://issues.apache.org/jira/browse/SYSTEMML-714
> Project: SystemML
> Issue Type: Bug
> Components: Compiler
> Affects Versions: SystemML 0.10
> Reporter: Matthias Boehm
> Fix For: SystemML 0.11
>
> The dynamic simplification rewrite 'pushdown sum on binary +' with multiple consumers creates a HOP DAG corruption leading to compilation errors. Consider the following script as an example:
> {code}
> A = rand(rows=10, cols=1);
> B = rand(rows=10, cols=1);
> C = rand(rows=10, cols=1);
> D = rand(rows=10, cols=1);
> r1 = sum(A*B + C*D);
> r2 = r1;
> print("ret1="+r1+", ret2="+r2);
> {code}
> The trace of applied rewrites is as follows:
> {code}
> DEBUG rewrite.RewriteAlgebraicSimplificationDynamic: Applied pushdownSumOnAdditiveBinary.
> DEBUG rewrite.RewriteAlgebraicSimplificationDynamic: Applied simplifyDotProductSum.
> DEBUG rewrite.RewriteAlgebraicSimplificationDynamic: Applied fuseDatagenReorgOperation.
> DEBUG rewrite.RewriteAlgebraicSimplificationDynamic: Applied simplifyDotProductSum.
> DEBUG rewrite.RewriteAlgebraicSimplificationDynamic: Applied fuseDatagenReorgOperation
> {code}
> Finally, this issue results in the following or similar exception on subsequent rewrites:
> {code}
> Caused by: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:653)
> at java.util.ArrayList.get(ArrayList.java:429)
> at org.apache.sysml.hops.rewrite.RewriteAlgebraicSimplificationDynamic.simplifyColwiseAggregate(RewriteAlgebraicSimplificationDynamic.java:566)
> at org.apache.sysml.hops.rewrite.RewriteAlgebraicSimplificationDynamic.rule_AlgebraicSimplification(RewriteAlgebraicSimplificationDynamic.java:154)
> at org.apache.sysml.hops.rewrite.RewriteAlgebraicSimplificationDynamic.rule_AlgebraicSimplification(RewriteAlgebraicSimplificationDynamic.java:185)
> at
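The rewrite triggered by this script exploits the identity sum(A*B + C*D) == sum(A*B) + sum(C*D), which is algebraically sound; the bug reported above lies in the handling of multiple HOP parents, not in the math. A quick numeric check of the identity on random vectors (plain Python, mirroring the rand/sum calls in the example script):

```python
import random

# Generate four random column vectors, like the rand(rows=10, cols=1) calls
# in the example script.
random.seed(7)
A, B, C, D = ([random.random() for _ in range(10)] for _ in range(4))

# Left side: sum over the elementwise expression, as written in the script.
lhs = sum(a * b + c * d for a, b, c, d in zip(A, B, C, D))

# Right side: the pushed-down form the rewrite produces, where each product
# is then a candidate for simplifyDotProductSum (visible in the trace above).
rhs = sum(a * b for a, b in zip(A, B)) + sum(c * d for c, d in zip(C, D))

assert abs(lhs - rhs) < 1e-12
```

The trace above shows exactly this pipeline: pushdownSumOnAdditiveBinary splits the sum, then simplifyDotProductSum fires once per resulting product.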