[jira] [Assigned] (SYSTEMML-684) Update LICENSE and NOTICE for standalone jar artifact

2016-05-11 Thread Deron Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deron Eriksson reassigned SYSTEMML-684:
---

Assignee: Deron Eriksson

> Update LICENSE and NOTICE for standalone jar artifact
> -
>
> Key: SYSTEMML-684
> URL: https://issues.apache.org/jira/browse/SYSTEMML-684
> Project: SystemML
>  Issue Type: Task
>  Components: Build
>Reporter: Deron Eriksson
>Assignee: Deron Eriksson
>
> Update assembly for the LICENSE and NOTICE for the dependencies contained in 
> the standalone jar.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (SYSTEMML-684) Update LICENSE and NOTICE for standalone jar artifact

2016-05-11 Thread Deron Eriksson (JIRA)
Deron Eriksson created SYSTEMML-684:
---

 Summary: Update LICENSE and NOTICE for standalone jar artifact
 Key: SYSTEMML-684
 URL: https://issues.apache.org/jira/browse/SYSTEMML-684
 Project: SystemML
  Issue Type: Task
  Components: Build
Reporter: Deron Eriksson


Update assembly for the LICENSE and NOTICE for the dependencies contained in 
the standalone jar.





[jira] [Assigned] (SYSTEMML-683) Update LICENSE and NOTICE for in-memory jar artifact

2016-05-11 Thread Deron Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deron Eriksson reassigned SYSTEMML-683:
---

Assignee: Deron Eriksson

> Update LICENSE and NOTICE for in-memory jar artifact
> 
>
> Key: SYSTEMML-683
> URL: https://issues.apache.org/jira/browse/SYSTEMML-683
> Project: SystemML
>  Issue Type: Task
>  Components: Build
>Reporter: Deron Eriksson
>Assignee: Deron Eriksson
>
> Update assembly for the LICENSE and NOTICE for the dependencies contained in 
> the in-memory jar.





[jira] [Created] (SYSTEMML-683) Update LICENSE and NOTICE for in-memory jar artifact

2016-05-11 Thread Deron Eriksson (JIRA)
Deron Eriksson created SYSTEMML-683:
---

 Summary: Update LICENSE and NOTICE for in-memory jar artifact
 Key: SYSTEMML-683
 URL: https://issues.apache.org/jira/browse/SYSTEMML-683
 Project: SystemML
  Issue Type: Task
  Components: Build
Reporter: Deron Eriksson


Update assembly for the LICENSE and NOTICE for the dependencies contained in 
the in-memory jar.





[jira] [Updated] (SYSTEMML-682) Update LICENSE and NOTICE for in-memory jar artifact

2016-05-11 Thread Deron Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deron Eriksson updated SYSTEMML-682:

Component/s: Build

> Update LICENSE and NOTICE for in-memory jar artifact
> 
>
> Key: SYSTEMML-682
> URL: https://issues.apache.org/jira/browse/SYSTEMML-682
> Project: SystemML
>  Issue Type: Task
>  Components: Build
>Reporter: Deron Eriksson
>Assignee: Deron Eriksson
>
> LICENSE and NOTICE files need to be updated to reflect the library 
> dependencies packaged in the in-memory jar.





[jira] [Assigned] (SYSTEMML-682) Update LICENSE and NOTICE for in-memory jar artifact

2016-05-11 Thread Deron Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deron Eriksson reassigned SYSTEMML-682:
---

Assignee: Deron Eriksson

> Update LICENSE and NOTICE for in-memory jar artifact
> 
>
> Key: SYSTEMML-682
> URL: https://issues.apache.org/jira/browse/SYSTEMML-682
> Project: SystemML
>  Issue Type: Task
>  Components: Build
>Reporter: Deron Eriksson
>Assignee: Deron Eriksson
>
> LICENSE and NOTICE files need to be updated to reflect the library 
> dependencies packaged in the in-memory jar.





[jira] [Created] (SYSTEMML-682) Update LICENSE and NOTICE for in-memory jar artifact

2016-05-11 Thread Deron Eriksson (JIRA)
Deron Eriksson created SYSTEMML-682:
---

 Summary: Update LICENSE and NOTICE for in-memory jar artifact
 Key: SYSTEMML-682
 URL: https://issues.apache.org/jira/browse/SYSTEMML-682
 Project: SystemML
  Issue Type: Task
Reporter: Deron Eriksson


LICENSE and NOTICE files need to be updated to reflect the library dependencies 
packaged in the in-memory jar.





[jira] [Commented] (SYSTEMML-593) MLContext Redesign

2016-05-11 Thread Niketan Pansare (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15280832#comment-15280832
 ] 

Niketan Pansare commented on SYSTEMML-593:
--

Thanks [~deron] for creating the design document. It improves the usability of 
MLContext a lot.

I like the common interface "in" that allows users to pass both data and 
command-line arguments. I also like that we use the $ prefix for command-line 
variables in the "in" method. Thereby, in(String, RDD/DataFrame) maps to 
registerInput, and in(String, boolean/double/float/int/string) maps to 
command-line arguments. I also like that this design avoids the need to cast 
boolean/double/float/int into String.

I also like the Script abstraction, as it avoids overloaded execute methods 
(for example: PyDML, DML, ...).

A few thoughts/suggestions:
1. The current MLContext allows users to pass an RDD/DataFrame to the script 
using "registerInput". In the proposed document, we pass the RDD/DataFrame 
through ".in(...)". In addition, the registerInput method allows passing the 
format and the meta-data information. In some cases the format is required but 
the meta-data is optional, and in other cases both are required. We need to add 
appropriate guards in our new MLContext.
For example: we should not support `script.in("A", sc.textFile("m.csv"))`, as 
an RDD can refer to either "csv" or "text" format. Also, `script.in("A", 
sc.textFile("m.text"), "text")` should throw an error stating that meta-data is 
required.

2. The DML language semantics should be respected. For example: if the script 
contains the line `X = read($fileX)`, then providing .in("X", ...) but not 
.in("$fileX", ...) should throw an error.
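To make the two guards above concrete, here is a minimal sketch in plain Python (not the actual SystemML API; the class, method, and error messages are hypothetical) of how a fluent `in(...)`-style method could dispatch on the "$" prefix and enforce that data inputs carry format/meta-data and that every script variable is bound:

```python
# Hypothetical sketch of the proposed guards; the Script class, its
# input()/validate() methods, and the "text requires meta-data" rule are
# illustrative assumptions, not the real MLContext implementation.
import re

SCALAR_TYPES = (bool, int, float, str)

class Script:
    def __init__(self, source):
        self.source = source
        self.args = {}    # "$"-prefixed command-line style arguments
        self.inputs = {}  # data bindings (RDD/DataFrame stand-ins)

    def input(self, name, value, fmt=None, metadata=None):
        if name.startswith("$"):
            # Command-line variables: scalars only, no cast-to-String needed.
            if not isinstance(value, SCALAR_TYPES):
                raise TypeError(f"{name}: command-line args must be scalars")
            self.args[name] = value
        else:
            # A raw distributed dataset is ambiguous without a format,
            # and some formats additionally require meta-data.
            if fmt is None:
                raise ValueError(f"{name}: format is required for data inputs")
            if fmt == "text" and metadata is None:
                raise ValueError(f"{name}: meta-data is required for 'text'")
            self.inputs[name] = (value, fmt, metadata)
        return self  # allow chaining, like the proposed fluent API

    def validate(self):
        # Respect DML semantics: every $variable referenced by the script
        # must be bound before execution (rough textual check).
        for var in re.findall(r"\$\w+", self.source):
            if var not in self.args:
                raise ValueError(f"unbound script variable {var}")
        return self
```

For example, `Script('X = read($fileX)').input("$fileX", "m.csv").validate()` succeeds, while omitting the `$fileX` binding, or binding a dataset without a format, raises an error.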

3. Please remember that a DataFrame is an unordered collection, while the 
matrix we return is an ordered structure. So, please remember to return the 
DataFrame with an "ID" column, as we do in our current MLOutput class; 
otherwise we are potentially breaking the contract.

4. Please support the following different types of DataFrame:
- With an ID column and one DF column of type double for every column of the 
matrix. This is a safe way for the user to pass a DataFrame to SystemML and 
still be able to do pre-processing.
- Without an ID column, but with one DF column of type double for every column 
of the matrix. This is potentially unsafe, and the user must ensure that the 
rows are sorted.
- With an ID column and a DF column of Vector DataType. This is often used in 
MLPipeline wrappers.
- Without an ID column, but with a DF column of Vector DataType. This is often 
used in MLPipeline wrappers.
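The ordering concern behind the ID column can be sketched in a few lines of plain Python (a stand-in for a distributed DataFrame, not Spark code): shuffling simulates the unordered collection, and sorting on the attached ID recovers the original matrix row order.

```python
# Illustrative sketch: why an "ID" column lets a consumer reconstruct an
# ordered matrix from an unordered row collection. The shuffle stands in
# for a DataFrame's lack of ordering guarantees.
import random

matrix = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]

# Attach a 1-based ID column to each row before handing the rows off.
with_ids = [(i + 1, row) for i, row in enumerate(matrix)]
random.shuffle(with_ids)  # rows may come back in any order

# The consumer restores matrix order by sorting on the ID column.
restored = [row for _, row in sorted(with_ids)]
assert restored == matrix
```

Without the ID column, no sort key exists and the original row order is unrecoverable, which is exactly the broken contract described above.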

5. With the exception of DataFrame, all the RDDs that we pass map to the 
formats we support in read(): RDD/JavaRDD/JavaPairRDD/... for the csv and text 
formats, plus RDD/JavaPairRDD for binaryblock. For non-read formats, we 
implement RDDConverterUtils.

Please support all the read formats, either directly or via an abstraction 
(for example: the proposed BinaryBlockMatrix, which is a wrapper of a 
JavaPairRDD and MC). In particular, users might prefer to stick with 
BinaryBlockMatrix if they want to pass it to another DML script, but might 
want a DataFrame if they want to apply SQL. Why? For extremely wide matrices, 
a DataFrame is an extremely inefficient format.

An alternate suggestion: you could support registering only one type of 
DataFrame/RDD and have many constructors/factory methods for them. For 
example: please see org.apache.sysml.api.MLMatrix (a reference implementation 
of BinaryBlockMatrix), which is essentially a two-column DataFrame that 
supports simple matrix algebra. It also fits well into the Spark Datasource 
API: ml.read(sqlContext, "W_small.mtx", "binary").

[~reinwald] [~mboehm7] [~mwdus...@us.ibm.com]

> MLContext Redesign
> --
>
> Key: SYSTEMML-593
> URL: https://issues.apache.org/jira/browse/SYSTEMML-593
> Project: SystemML
>  Issue Type: Improvement
>  Components: APIs
>Reporter: Deron Eriksson
>Assignee: Deron Eriksson
> Attachments: Design Document - MLContext API Redesign.pdf
>
>
> This JIRA proposes a redesign of the Java MLContext API with several goals:
> • Simplify the user experience
> • Encapsulate primary entities using object-oriented concepts
> • Make API extensible for external users
> • Make API extensible for SystemML developers
> • Locate all user-interaction classes, interfaces, etc under a single API 
> package
> • Extensive Javadocs for all classes in the API
> • Potentially fold JMLC API into MLContext so as to have a single 
> programmatic API





[jira] [Closed] (SYSTEMML-665) Fix license, notice, and disclaimer for in-memory jar artifact

2016-05-11 Thread Deron Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deron Eriksson closed SYSTEMML-665.
---

I will close this issue and create another JIRA for the additional updates. 
Thank you.

> Fix license, notice, and disclaimer for in-memory jar artifact
> --
>
> Key: SYSTEMML-665
> URL: https://issues.apache.org/jira/browse/SYSTEMML-665
> Project: SystemML
>  Issue Type: Task
>  Components: Build
>Reporter: Deron Eriksson
>Assignee: Luciano Resende
> Fix For: SystemML 0.10
>
>
> The in-memory jar artifact (see "create-inmemory-jar" in pom.xml, which 
> delegates to "src/assembly/inmemory.xml") has issues with its license, 
> notice, and disclaimer files.
> #1) The DISCLAIMER file "should" be included (probably best located at base 
> level).
> #2) It contains a META-INF/NOTICE file for commons logging and a 
> META-INF/NOTICE.txt file for commons lang. However there is no NOTICE file 
> for SystemML (probably best located at base level).
> #3) It contains a META-INF/LICENSE file which is the general Apache license 
> and a META-INF/LICENSE.txt file which is also the general Apache license. 
> However, the LICENSE file most likely should include a listing of the license 
> information for all the projects contained in the artifact (similar to 
> src/assembly/standalone/LICENSE). Also, see SYSTEMML-659 and SYSTEMML-662. 
> This LICENSE for SystemML is probably best located at the base level of the 
> artifact.





[jira] [Commented] (SYSTEMML-662) Fix license, notice, and disclaimer for standalone jar artifact

2016-05-11 Thread Deron Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15280634#comment-15280634
 ] 

Deron Eriksson commented on SYSTEMML-662:
-

I will close this issue and create another JIRA for the additional updates. 
Thank you.

> Fix license, notice, and disclaimer for standalone jar artifact
> ---
>
> Key: SYSTEMML-662
> URL: https://issues.apache.org/jira/browse/SYSTEMML-662
> Project: SystemML
>  Issue Type: Task
>  Components: Build
>Reporter: Deron Eriksson
>Assignee: Luciano Resende
> Fix For: SystemML 0.10
>
>
> The standalone jar artifact (see "create-standalone-jar" in pom.xml, which 
> delegates to "src/assembly/standalone-jar.xml") has issues with its license, 
> notice, and disclaimer files.
> #1) There is no META-INF/DISCLAIMER incubator file, which "should" be there.
> #2) It contains a META-INF/NOTICE file for "Apache Wink :: JSON4J"  and a 
> META-INF/NOTICE.txt file for "Apache Commons CLI". However, there is no 
> META-INF/NOTICE file for SystemML.
> #3) It contains a META-INF/LICENSE file which is the general Apache license 
> and a META-INF/LICENSE.txt file which is also the general Apache license. 
> However, the LICENSE file most likely should include a listing of the license 
> information for all the projects contained in the project (similar to 
> src/assembly/standalone/LICENSE).





[jira] [Closed] (SYSTEMML-662) Fix license, notice, and disclaimer for standalone jar artifact

2016-05-11 Thread Deron Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deron Eriksson closed SYSTEMML-662.
---

> Fix license, notice, and disclaimer for standalone jar artifact
> ---
>
> Key: SYSTEMML-662
> URL: https://issues.apache.org/jira/browse/SYSTEMML-662
> Project: SystemML
>  Issue Type: Task
>  Components: Build
>Reporter: Deron Eriksson
>Assignee: Luciano Resende
> Fix For: SystemML 0.10
>
>
> The standalone jar artifact (see "create-standalone-jar" in pom.xml, which 
> delegates to "src/assembly/standalone-jar.xml") has issues with its license, 
> notice, and disclaimer files.
> #1) There is no META-INF/DISCLAIMER incubator file, which "should" be there.
> #2) It contains a META-INF/NOTICE file for "Apache Wink :: JSON4J"  and a 
> META-INF/NOTICE.txt file for "Apache Commons CLI". However, there is no 
> META-INF/NOTICE file for SystemML.
> #3) It contains a META-INF/LICENSE file which is the general Apache license 
> and a META-INF/LICENSE.txt file which is also the general Apache license. 
> However, the LICENSE file most likely should include a listing of the license 
> information for all the projects contained in the project (similar to 
> src/assembly/standalone/LICENSE).





[jira] [Comment Edited] (SYSTEMML-633) Improve Left-Indexing Performance with (Nested) Parfor Loops

2016-05-11 Thread Mike Dusenberry (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15280584#comment-15280584
 ] 

Mike Dusenberry edited comment on SYSTEMML-633 at 5/11/16 6:30 PM:
---

Attached the files for the TensorFlow vs SystemML tests:
* {{log.txt}} // look at date for correct file
* {{perf-dml.dml}}
* {{perf-tf.py}}
* {{perf.sh}}
* {{run.sh}}
* {{time.txt}}


was (Author: mwdus...@us.ibm.com):
Attached the files for the TensorFlow vs SystemML tests.

> Improve Left-Indexing Performance with (Nested) Parfor Loops
> 
>
> Key: SYSTEMML-633
> URL: https://issues.apache.org/jira/browse/SYSTEMML-633
> Project: SystemML
>  Issue Type: Improvement
>  Components: ParFor
>Reporter: Mike Dusenberry
>Priority: Critical
> Attachments: Im2colWrapper.java, log.txt, log.txt, perf-dml.dml, 
> perf-tf.py, perf.sh, run.sh, systemml-nn.zip, time.txt
>
>
> In the experimental deep learning DML library I've been building 
> ([https://github.com/dusenberrymw/systemml-nn|https://github.com/dusenberrymw/systemml-nn]),
>  I've experienced severe bottlenecks due to *left-indexing* in parfor loops.  
> Here, I will highlight a few particular instances with simplified examples, 
> but the same issue is shared across many areas of the library, particularly 
> in the convolution and max pooling layers, and is exacerbated in real 
> use-cases.
> *Quick note* on setup for any of the below experiments.  Please grab a copy 
> of the above repo (particularly the {{nn}} directory), and run any 
> experiments with the {{nn}} package available at the base directory of the 
> experiment.
> Scenario: *Convolution*
> * In the library above, the forward pass of the convolution function 
> ([{{conv::forward(...)}} | 
> https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/layers/conv.dml#L8]
>  in {{nn/layers/conv.dml}}) essentially accepts a matrix {{X}} of images, a 
> matrix of weights {{W}}, and several other parameters corresponding to image 
> sizes, filter sizes, etc.  It then loops through the images with a {{parfor}} 
> loop, and for each image it pads the image with {{util::pad_image}}, extracts 
> "patches" of the image into columns of a matrix in a sliding fashion across 
> the image with {{util::im2col}}, performs a matrix multiplication between the 
> matrix of patch columns and the weight matrix, and then saves the result into 
> a matrix defined outside of the parfor loop using left-indexing.
> * Left-indexing has been identified as the bottleneck by a wide margin.
> * Left-indexing is used in the main {{conv::forward(...)}} function in the 
> [last line in the parfor 
> loop|https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/layers/conv.dml#L61],
>  in the 
> [{{util::pad_image(...)}}|https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/util.dml#L196]
>  function used by {{conv::forward(...)}}, as well as in the 
> [{{util::im2col(...)}}|https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/util.dml#L96]
>  function used by {{conv::forward(...)}}.
> * Test script (assuming the {{nn}} package is available):
> ** {{speed-633.dml}} {code}
> source("nn/layers/conv.dml") as conv
> source("nn/util.dml") as util
> # Generate data
> N = 64  # num examples
> C = 30  # num channels
> Hin = 28  # input height
> Win = 28  # input width
> F = 20  # num filters
> Hf = 3  # filter height
> Wf = 3  # filter width
> stride = 1
> pad = 1
> X = rand(rows=N, cols=C*Hin*Win)
> # Create layer
> [W, b] = conv::init(F, C, Hf, Wf)
> # Forward
> [out, Hout, Wout] = conv::forward(X, W, b, C, Hin, Win, Hf, Wf, stride, 
> stride, pad, pad)
> print("Out: " + nrow(out) + "x" + ncol(out))
> print("Hout: " + Hout)
> print("Wout: " + Wout)
> print("")
> print(sum(out))
> {code}
> * Invocation:
> ** {{java -jar 
> $SYSTEMML_HOME/target/systemml-0.10.0-incubating-SNAPSHOT-standalone.jar -f 
> speed-633.dml -stats -explain -exec singlenode}}
> * Stats output (modified to output up to 100 instructions):
> ** {code}
> ...
> Total elapsed time:   26.834 sec.
> Total compilation time:   0.529 sec.
> Total execution time:   26.304 sec.
> Number of compiled MR Jobs: 0.
> Number of executed MR Jobs: 0.
> Cache hits (Mem, WB, FS, HDFS): 9196235/0/0/0.
> Cache writes (WB, FS, HDFS):  3070724/0/0.
> Cache times (ACQr/m, RLS, EXP): 1.474/1.120/26.998/0.000 sec.
> HOP DAGs recompiled (PRED, SB): 0/0.
> HOP DAGs recompile time:  0.268 sec.
> Functions recompiled:   129.
> Functions recompile time: 0.841 sec.
> ParFor loops optimized:   1.
> ParFor optimize time:   0.032 sec.
> ParFor initialize time:   0.015 sec.
> ParFor result merge time: 0.

[jira] [Updated] (SYSTEMML-633) Improve Left-Indexing Performance with (Nested) Parfor Loops

2016-05-11 Thread Mike Dusenberry (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Dusenberry updated SYSTEMML-633:
-
Attachment: time.txt
            run.sh
            perf.sh
            perf-tf.py
            perf-dml.dml
            log.txt

Attached the files for the TensorFlow vs SystemML tests.

> Improve Left-Indexing Performance with (Nested) Parfor Loops
> 
>
> Key: SYSTEMML-633
> URL: https://issues.apache.org/jira/browse/SYSTEMML-633
> Project: SystemML
>  Issue Type: Improvement
>  Components: ParFor
>Reporter: Mike Dusenberry
>Priority: Critical
> Attachments: Im2colWrapper.java, log.txt, log.txt, perf-dml.dml, 
> perf-tf.py, perf.sh, run.sh, systemml-nn.zip, time.txt
>
>
> In the experimental deep learning DML library I've been building 
> ([https://github.com/dusenberrymw/systemml-nn|https://github.com/dusenberrymw/systemml-nn]),
>  I've experienced severe bottlenecks due to *left-indexing* in parfor loops.  
> Here, I will highlight a few particular instances with simplified examples, 
> but the same issue is shared across many areas of the library, particularly 
> in the convolution and max pooling layers, and is exacerbated in real 
> use-cases.
> *Quick note* on setup for any of the below experiments.  Please grab a copy 
> of the above repo (particularly the {{nn}} directory), and run any 
> experiments with the {{nn}} package available at the base directory of the 
> experiment.
> Scenario: *Convolution*
> * In the library above, the forward pass of the convolution function 
> ([{{conv::forward(...)}} | 
> https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/layers/conv.dml#L8]
>  in {{nn/layers/conv.dml}}) essentially accepts a matrix {{X}} of images, a 
> matrix of weights {{W}}, and several other parameters corresponding to image 
> sizes, filter sizes, etc.  It then loops through the images with a {{parfor}} 
> loop, and for each image it pads the image with {{util::pad_image}}, extracts 
> "patches" of the image into columns of a matrix in a sliding fashion across 
> the image with {{util::im2col}}, performs a matrix multiplication between the 
> matrix of patch columns and the weight matrix, and then saves the result into 
> a matrix defined outside of the parfor loop using left-indexing.
> * Left-indexing has been identified as the bottleneck by a wide margin.
> * Left-indexing is used in the main {{conv::forward(...)}} function in the 
> [last line in the parfor 
> loop|https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/layers/conv.dml#L61],
>  in the 
> [{{util::pad_image(...)}}|https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/util.dml#L196]
>  function used by {{conv::forward(...)}}, as well as in the 
> [{{util::im2col(...)}}|https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/util.dml#L96]
>  function used by {{conv::forward(...)}}.
> * Test script (assuming the {{nn}} package is available):
> ** {{speed-633.dml}} {code}
> source("nn/layers/conv.dml") as conv
> source("nn/util.dml") as util
> # Generate data
> N = 64  # num examples
> C = 30  # num channels
> Hin = 28  # input height
> Win = 28  # input width
> F = 20  # num filters
> Hf = 3  # filter height
> Wf = 3  # filter width
> stride = 1
> pad = 1
> X = rand(rows=N, cols=C*Hin*Win)
> # Create layer
> [W, b] = conv::init(F, C, Hf, Wf)
> # Forward
> [out, Hout, Wout] = conv::forward(X, W, b, C, Hin, Win, Hf, Wf, stride, 
> stride, pad, pad)
> print("Out: " + nrow(out) + "x" + ncol(out))
> print("Hout: " + Hout)
> print("Wout: " + Wout)
> print("")
> print(sum(out))
> {code}
> * Invocation:
> ** {{java -jar 
> $SYSTEMML_HOME/target/systemml-0.10.0-incubating-SNAPSHOT-standalone.jar -f 
> speed-633.dml -stats -explain -exec singlenode}}
> * Stats output (modified to output up to 100 instructions):
> ** {code}
> ...
> Total elapsed time:   26.834 sec.
> Total compilation time:   0.529 sec.
> Total execution time:   26.304 sec.
> Number of compiled MR Jobs: 0.
> Number of executed MR Jobs: 0.
> Cache hits (Mem, WB, FS, HDFS): 9196235/0/0/0.
> Cache writes (WB, FS, HDFS):  3070724/0/0.
> Cache times (ACQr/m, RLS, EXP): 1.474/1.120/26.998/0.000 sec.
> HOP DAGs recompiled (PRED, SB): 0/0.
> HOP DAGs recompile time:  0.268 sec.
> Functions recompiled:   129.
> Functions recompile time: 0.841 sec.
> ParFor loops optimized:   1.
> ParFor optimize time:   0.032 sec.
> ParFor initialize time:   0.015 sec.
> ParFor result merge time: 0.028 sec.
> ParFor total update in-place: 0/0/1559360
> Total JIT compile time:   14.235 sec.
> Total JVM GC count:   94.
> Total JVM GC time:0.366 sec.
> Heavy hitter in

[jira] [Commented] (SYSTEMML-633) Improve Left-Indexing Performance with (Nested) Parfor Loops

2016-05-11 Thread Mike Dusenberry (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15280558#comment-15280558
 ] 

Mike Dusenberry commented on SYSTEMML-633:
--

[~niketanpansare], [~mboehm7], [~reinwald], [~reinw...@us.ibm.com]

> Improve Left-Indexing Performance with (Nested) Parfor Loops
> 
>
> Key: SYSTEMML-633
> URL: https://issues.apache.org/jira/browse/SYSTEMML-633
> Project: SystemML
>  Issue Type: Improvement
>  Components: ParFor
>Reporter: Mike Dusenberry
>Priority: Critical
> Attachments: Im2colWrapper.java, log.txt, systemml-nn.zip
>
>
> In the experimental deep learning DML library I've been building 
> ([https://github.com/dusenberrymw/systemml-nn|https://github.com/dusenberrymw/systemml-nn]),
>  I've experienced severe bottlenecks due to *left-indexing* in parfor loops.  
> Here, I will highlight a few particular instances with simplified examples, 
> but the same issue is shared across many areas of the library, particularly 
> in the convolution and max pooling layers, and is exacerbated in real 
> use-cases.
> *Quick note* on setup for any of the below experiments.  Please grab a copy 
> of the above repo (particularly the {{nn}} directory), and run any 
> experiments with the {{nn}} package available at the base directory of the 
> experiment.
> Scenario: *Convolution*
> * In the library above, the forward pass of the convolution function 
> ([{{conv::forward(...)}} | 
> https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/layers/conv.dml#L8]
>  in {{nn/layers/conv.dml}}) essentially accepts a matrix {{X}} of images, a 
> matrix of weights {{W}}, and several other parameters corresponding to image 
> sizes, filter sizes, etc.  It then loops through the images with a {{parfor}} 
> loop, and for each image it pads the image with {{util::pad_image}}, extracts 
> "patches" of the image into columns of a matrix in a sliding fashion across 
> the image with {{util::im2col}}, performs a matrix multiplication between the 
> matrix of patch columns and the weight matrix, and then saves the result into 
> a matrix defined outside of the parfor loop using left-indexing.
> * Left-indexing has been identified as the bottleneck by a wide margin.
> * Left-indexing is used in the main {{conv::forward(...)}} function in the 
> [last line in the parfor 
> loop|https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/layers/conv.dml#L61],
>  in the 
> [{{util::pad_image(...)}}|https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/util.dml#L196]
>  function used by {{conv::forward(...)}}, as well as in the 
> [{{util::im2col(...)}}|https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/util.dml#L96]
>  function used by {{conv::forward(...)}}.
> * Test script (assuming the {{nn}} package is available):
> ** {{speed-633.dml}} {code}
> source("nn/layers/conv.dml") as conv
> source("nn/util.dml") as util
> # Generate data
> N = 64  # num examples
> C = 30  # num channels
> Hin = 28  # input height
> Win = 28  # input width
> F = 20  # num filters
> Hf = 3  # filter height
> Wf = 3  # filter width
> stride = 1
> pad = 1
> X = rand(rows=N, cols=C*Hin*Win)
> # Create layer
> [W, b] = conv::init(F, C, Hf, Wf)
> # Forward
> [out, Hout, Wout] = conv::forward(X, W, b, C, Hin, Win, Hf, Wf, stride, 
> stride, pad, pad)
> print("Out: " + nrow(out) + "x" + ncol(out))
> print("Hout: " + Hout)
> print("Wout: " + Wout)
> print("")
> print(sum(out))
> {code}
> * Invocation:
> ** {{java -jar 
> $SYSTEMML_HOME/target/systemml-0.10.0-incubating-SNAPSHOT-standalone.jar -f 
> speed-633.dml -stats -explain -exec singlenode}}
> * Stats output (modified to output up to 100 instructions):
> ** {code}
> ...
> Total elapsed time:   26.834 sec.
> Total compilation time:   0.529 sec.
> Total execution time:   26.304 sec.
> Number of compiled MR Jobs: 0.
> Number of executed MR Jobs: 0.
> Cache hits (Mem, WB, FS, HDFS): 9196235/0/0/0.
> Cache writes (WB, FS, HDFS):  3070724/0/0.
> Cache times (ACQr/m, RLS, EXP): 1.474/1.120/26.998/0.000 sec.
> HOP DAGs recompiled (PRED, SB): 0/0.
> HOP DAGs recompile time:  0.268 sec.
> Functions recompiled:   129.
> Functions recompile time: 0.841 sec.
> ParFor loops optimized:   1.
> ParFor optimize time:   0.032 sec.
> ParFor initialize time:   0.015 sec.
> ParFor result merge time: 0.028 sec.
> ParFor total update in-place: 0/0/1559360
> Total JIT compile time:   14.235 sec.
> Total JVM GC count:   94.
> Total JVM GC time:0.366 sec.
> Heavy hitter instructions (name, time, count):
> -- 1)   leftIndex   41.670 sec  1559360
> -- 2)   forward   26.212 sec  1
> -- 3)   im2col_t45  25.919 sec  8
> -- 4

[jira] [Commented] (SYSTEMML-633) Improve Left-Indexing Performance with (Nested) Parfor Loops

2016-05-11 Thread Mike Dusenberry (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15280532#comment-15280532
 ] 

Mike Dusenberry commented on SYSTEMML-633:
--

Update: I ran a set of tests to compare *TensorFlow* with our *SystemML* 
engine using a forward convolution function. While the particular scenario is 
convolution and deep learning, in reality it is simply DML-bodied functions 
and our engine being compared against TensorFlow's functions and engine.

*TensorFlow vs. SystemML*
---
* Machine
** Massive single node from a research cluster:
*** 2 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz (6 cores each, hyperthreaded to 
24 total virtual cores)
*** 256GB RAM
* Tests:
** 4 setups of forward convolution for both SystemML & TensorFlow -- see 
attached files {{perf-dml.dml}}, {{perf-tf.py}}.
* Results: (Engine, Iterations, Setup #, Time (sec))
** SystemML-NN,1,1,6.963174578
** Tensorflow,1,1,2.409152860
** SystemML-NN,1,2,80.491425876
** Tensorflow,1,2,26.183729520
** SystemML-NN,1,3,7.187359147
** Tensorflow,1,3,2.656847511
** SystemML-NN,1,4,83.597674899
** Tensorflow,1,4,21.305138164
** SystemML-NN,10,1,27.405661815
** Tensorflow,10,1,2.096477385
** SystemML-NN,10,2,338.673049859
** Tensorflow,10,2,21.800133563
** SystemML-NN,10,3,35.109882799
** Tensorflow,10,3,2.028066097
** SystemML-NN,10,4,396.854964030
** Tensorflow,10,4,22.103966235
** SystemML-NN,100,1,245.515412673
** Tensorflow,100,1,2.511825601
** SystemML-NN,100,2,3150.047737891
** Tensorflow,100,2,26.059402039
** SystemML-NN,100,3,326.527263634
** Tensorflow,100,3,2.547926570
** SystemML-NN,100,4,3821.133584436
** Tensorflow,100,4,24.366318114
** SystemML-NN,1000,1,2435.717824721
** Tensorflow,1000,1,7.634583445
** SystemML-NN,1000,2,30354.415483735
** Tensorflow,1000,2,67.591256457
** SystemML-NN,1000,3,3161.059507123
** Tensorflow,1000,3,7.824721516
** SystemML-NN,1000,4,36051.256351073
** Tensorflow,1000,4,55.355726359
* Analysis:
** Rewrites, such as update-in-place, are not being applied to valid code if it 
is located within a DML-bodied function.
** See log file: {{log.txt}}

*Overall*: The SystemML engine is currently *far behind* TensorFlow when 
DML-bodied functions are used.
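The cost pattern behind the missing update-in-place rewrite can be sketched in plain Python (illustrative only, not DML or the SystemML runtime): left-indexing that clones the whole result matrix on every loop iteration does O(N*D) work per write, while an in-place update does O(D).

```python
# Illustrative sketch of copy-on-write left-indexing vs. update-in-place.
# The matrix sizes and the "clone the full matrix per write" model are
# assumptions made for demonstration, not measurements of SystemML.
import time

N, D = 100, 200  # rows, columns of the result matrix

def copy_on_write():
    out = [[0.0] * D for _ in range(N)]
    for i in range(N):
        row = [float(i)] * D
        # Copy-on-write left-indexing: clone the full matrix, then write
        # one row -- O(N*D) work on every iteration.
        out = [r[:] for r in out]
        out[i] = row
    return out

def update_in_place():
    out = [[0.0] * D for _ in range(N)]
    for i in range(N):
        # Update-in-place: write the row directly -- O(D) per iteration.
        out[i] = [float(i)] * D
    return out

t0 = time.time(); a = copy_on_write(); t_copy = time.time() - t0
t0 = time.time(); b = update_in_place(); t_inplace = time.time() - t0
assert a == b            # both strategies produce the same matrix
assert t_copy > t_inplace  # copying dominates as N and D grow
```

Under this model the gap widens linearly with the number of rows, which is consistent with the per-iteration left-indexing counts reported in the stats output above.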

> Improve Left-Indexing Performance with (Nested) Parfor Loops
> 
>
> Key: SYSTEMML-633
> URL: https://issues.apache.org/jira/browse/SYSTEMML-633
> Project: SystemML
>  Issue Type: Improvement
>  Components: ParFor
>Reporter: Mike Dusenberry
>Priority: Critical
> Attachments: Im2colWrapper.java, log.txt, systemml-nn.zip
>
>
> In the experimental deep learning DML library I've been building 
> ([https://github.com/dusenberrymw/systemml-nn|https://github.com/dusenberrymw/systemml-nn]),
>  I've experienced severe bottlenecks due to *left-indexing* in parfor loops.  
> Here, I will highlight a few particular instances with simplified examples, 
> but the same issue is shared across many areas of the library, particularly 
> in the convolution and max pooling layers, and is exacerbated in real 
> use-cases.
> *Quick note* on setup for any of the below experiments.  Please grab a copy 
> of the above repo (particularly the {{nn}} directory), and run any 
> experiments with the {{nn}} package available at the base directory of the 
> experiment.
> Scenario: *Convolution*
> * In the library above, the forward pass of the convolution function 
> ([{{conv::forward(...)}} | 
> https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/layers/conv.dml#L8]
>  in {{nn/layers/conv.dml}}) essentially accepts a matrix {{X}} of images, a 
> matrix of weights {{W}}, and several other parameters corresponding to image 
> sizes, filter sizes, etc.  It then loops through the images with a {{parfor}} 
> loop, and for each image it pads the image with {{util::pad_image}}, extracts 
> "patches" of the image into columns of a matrix in a sliding fashion across 
> the image with {{util::im2col}}, performs a matrix multiplication between the 
> matrix of patch columns and the weight matrix, and then saves the result into 
> a matrix defined outside of the parfor loop using left-indexing.
> * Left-indexing has been identified as the bottleneck by a wide margin.
> * Left-indexing is used in the main {{conv::forward(...)}} function in the 
> [last line in the parfor 
> loop|https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/layers/conv.dml#L61],
>  in the 
> [{{util::pad_image(...)}}|https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/util.dml#L196]
>  function used by {{conv::forward(...)}}, as well as in the 
> [{{util::im2col(...)}}|https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/util.dml#L96]
>  function used by {{conv::forward(...)}}.
> * Test script (assuming

[jira] [Commented] (SYSTEMML-680) eigen() fails with "Unsupported function EIGEN" but diag() works

2016-05-11 Thread Matthias Boehm (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15280521#comment-15280521
 ] 

Matthias Boehm commented on SYSTEMML-680:
-

No problem. Just to be clear, the statement above only applies to eigen, 
qr, lu, cholesky, and solve; all other operations are automatically compiled to 
distributed operations on MR/Spark.

> eigen() fails with "Unsupported function EIGEN" but diag() works
> 
>
> Key: SYSTEMML-680
> URL: https://issues.apache.org/jira/browse/SYSTEMML-680
> Project: SystemML
>  Issue Type: Bug
>  Components: APIs
>Affects Versions: SystemML 0.9
> Environment: Linux ip-172-20-42-170 3.13.0-61-generic #100-Ubuntu SMP 
> Wed Jul 29 11:21:34 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
> Scala code runner version 2.10.4 -- Copyright 2002-2013, LAMP/EPFL
> spark-core_2.10-1.3.0.jar
>Reporter: Golda Velez
>Priority: Minor
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> This could be some simple error, since I'm new to SystemML.
> I'm running a tiny DML script:
> X = read($Xin)
> ee = eigen(X)
> via some Scala code to set up the matrix, ending with
> val sysMlMatrix = RDDConverterUtils.dataFrameToBinaryBlock(sc, df, mc, false)
> val ml = new MLContext(sc)
> ml.reset()
> ml.registerInput("X", sysMlMatrix, numRows, numCols)
> ml.registerOutput("e")
> val nargs = Map("Xin" -> " ", "Eout" -> " ")
> val outputs = ml.execute("dum.dml", nargs)
> I could certainly be doing something wrong, but it does run if I replace 
> eigen() with diag() and both are listed similarly in the guide 
> http://apache.github.io/incubator-systemml/spark-mlcontext-programming-guide.html
> Is eigen() supported currently and does it require some installation of some 
> library?  I didn't see anything about that in the docs.
> Thanks, this looks super useful!
> This might just be a documentation bug, not a code bug, but I'm not sure how 
> else to contact people about it and get it resolved.  Are there forums?
> --Golda



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (SYSTEMML-649) JMLC/MLContext support for scalar output variables

2016-05-11 Thread Deron Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15280517#comment-15280517
 ] 

Deron Eriksson commented on SYSTEMML-649:
-

JMLC scalar output support fixed by 
[PR150|https://github.com/apache/incubator-systemml/pull/150].

I will address MLContext scalar support in the new MLContext API.

> JMLC/MLContext support for scalar output variables
> --
>
> Key: SYSTEMML-649
> URL: https://issues.apache.org/jira/browse/SYSTEMML-649
> Project: SystemML
>  Issue Type: Task
>  Components: APIs
>Reporter: Matthias Boehm
>Assignee: Deron Eriksson
>
> Right now neither JMLC nor MLContext supports scalar output variables. This 
> task aims to extend both APIs with the required primitives.
> The workaround is to cast any scalar output at script level to a 1x1 matrix 
> with as.matrix and handle it in the calling application. However, especially 
> with MLContext, this puts an unnecessary burden on the user, who must deal 
> with RDDs even for a simple scalar. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (SYSTEMML-633) Improve Left-Indexing Performance with (Nested) Parfor Loops

2016-05-11 Thread Mike Dusenberry (JIRA)

 [ 
https://issues.apache.org/jira/browse/SYSTEMML-633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Dusenberry updated SYSTEMML-633:
-
Priority: Critical  (was: Major)

> Improve Left-Indexing Performance with (Nested) Parfor Loops
> 
>
> Key: SYSTEMML-633
> URL: https://issues.apache.org/jira/browse/SYSTEMML-633
> Project: SystemML
>  Issue Type: Improvement
>  Components: ParFor
>Reporter: Mike Dusenberry
>Priority: Critical
> Attachments: Im2colWrapper.java, log.txt, systemml-nn.zip
>
>
> In the experimental deep learning DML library I've been building 
> ([https://github.com/dusenberrymw/systemml-nn|https://github.com/dusenberrymw/systemml-nn]),
>  I've experienced severe bottlenecks due to *left-indexing* in parfor loops.  
> Here, I will highlight a few particular instances with simplified examples, 
> but the same issue is shared across many areas of the library, particularly 
> in the convolution and max pooling layers, and is exaggerated in real 
> use-cases.
> *Quick note* on setup for any of the below experiments.  Please grab a copy 
> of the above repo (particularly the {{nn}} directory), and run any 
> experiments with the {{nn}} package available at the base directory of the 
> experiment.
> Scenario: *Convolution*
> * In the library above, the forward pass of the convolution function 
> ([{{conv::forward(...)}} | 
> https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/layers/conv.dml#L8]
>  in {{nn/layers/conv.dml}}) essentially accepts a matrix {{X}} of images, a 
> matrix of weights {{W}}, and several other parameters corresponding to image 
> sizes, filter sizes, etc.  It then loops through the images with a {{parfor}} 
> loop, and for each image it pads the image with {{util::pad_image}}, extracts 
> "patches" of the image into columns of a matrix in a sliding fashion across 
> the image with {{util::im2col}}, performs a matrix multiplication between the 
> matrix of patch columns and the weight matrix, and then saves the result into 
> a matrix defined outside of the parfor loop using left-indexing.
> * Left-indexing has been identified as the bottleneck by a wide margin.
> * Left-indexing is used in the main {{conv::forward(...)}} function in the 
> [last line in the parfor 
> loop|https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/layers/conv.dml#L61],
>  in the 
> [{{util::pad_image(...)}}|https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/util.dml#L196]
>  function used by {{conv::forward(...)}}, as well as in the 
> [{{util::im2col(...)}}|https://github.com/dusenberrymw/systemml-nn/blob/f6d3e077ae3c303eb8426b31329d3734e3483d5f/nn/util.dml#L96]
>  function used by {{conv::forward(...)}}.
> * Test script (assuming the {{nn}} package is available):
> ** {{speed-633.dml}} {code}
> source("nn/layers/conv.dml") as conv
> source("nn/util.dml") as util
> # Generate data
> N = 64  # num examples
> C = 30  # num channels
> Hin = 28  # input height
> Win = 28  # input width
> F = 20  # num filters
> Hf = 3  # filter height
> Wf = 3  # filter width
> stride = 1
> pad = 1
> X = rand(rows=N, cols=C*Hin*Win)
> # Create layer
> [W, b] = conv::init(F, C, Hf, Wf)
> # Forward
> [out, Hout, Wout] = conv::forward(X, W, b, C, Hin, Win, Hf, Wf, stride, 
> stride, pad, pad)
> print("Out: " + nrow(out) + "x" + ncol(out))
> print("Hout: " + Hout)
> print("Wout: " + Wout)
> print("")
> print(sum(out))
> {code}
> * Invocation:
> ** {{java -jar 
> $SYSTEMML_HOME/target/systemml-0.10.0-incubating-SNAPSHOT-standalone.jar -f 
> speed-633.dml -stats -explain -exec singlenode}}
> * Stats output (modified to output up to 100 instructions):
> ** {code}
> ...
> Total elapsed time:   26.834 sec.
> Total compilation time:   0.529 sec.
> Total execution time:   26.304 sec.
> Number of compiled MR Jobs: 0.
> Number of executed MR Jobs: 0.
> Cache hits (Mem, WB, FS, HDFS): 9196235/0/0/0.
> Cache writes (WB, FS, HDFS):  3070724/0/0.
> Cache times (ACQr/m, RLS, EXP): 1.474/1.120/26.998/0.000 sec.
> HOP DAGs recompiled (PRED, SB): 0/0.
> HOP DAGs recompile time:  0.268 sec.
> Functions recompiled:   129.
> Functions recompile time: 0.841 sec.
> ParFor loops optimized:   1.
> ParFor optimize time:   0.032 sec.
> ParFor initialize time:   0.015 sec.
> ParFor result merge time: 0.028 sec.
> ParFor total update in-place: 0/0/1559360
> Total JIT compile time:   14.235 sec.
> Total JVM GC count:   94.
> Total JVM GC time:0.366 sec.
> Heavy hitter instructions (name, time, count):
> -- 1)   leftIndex   41.670 sec  1559360
> -- 2)   forward   26.212 sec  1
> -- 3)   im2col_t45  25.919 sec  8
> -- 4)   im2col_t41  25.850 sec  8
> -- 5)   im2col_t48  25.831 sec  8
> -- 6)   im2col_t43  
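The {{im2col_t*}} heavy hitters above correspond to the sliding-window patch extraction described in the issue: each filter-sized patch of the image is flattened into one column of a matrix. A plain-Python, single-channel sketch of the idea (the `im2col` function below is an illustration of the technique, not the actual DML implementation in {{nn/util.dml}}):

```python
def im2col(img, hf, wf, stride=1):
    """Slide an hf x wf window over a 2-D image (list of lists);
    each extracted patch becomes one column of the result matrix."""
    h, w = len(img), len(img[0])
    patches = []
    for i in range(0, h - hf + 1, stride):
        for j in range(0, w - wf + 1, stride):
            # flatten the hf x wf patch row-major into one vector
            patches.append([img[i + di][j + dj]
                            for di in range(hf) for dj in range(wf)])
    # transpose so that patches are columns: (hf*wf) x num_patches
    return [list(c) for c in zip(*patches)]

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
cols = im2col(img, 2, 2)  # four 2x2 patches -> 4 columns of length 4
assert len(cols) == 4 and len(cols[0]) == 4
assert [cols[r][0] for r in range(4)] == [1, 2, 4, 5]  # top-left patch
```

Convolution then reduces to one matrix multiply between the weight matrix and this patch matrix, which is exactly why the surrounding left-indexing, rather than the math itself, shows up as the bottleneck.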

[jira] [Commented] (SYSTEMML-680) eigen() fails with "Unsupported function EIGEN" but diag() works

2016-05-11 Thread Golda Velez (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15280495#comment-15280495
 ] 

Golda Velez commented on SYSTEMML-680:
--

Oh - if it's only single-node in-memory, we probably won't pursue it
further; the purpose of porting our R code into Spark was to try to
get it to where it can be distributed.

Thanks for the quick response!

--Golda



> eigen() fails with "Unsupported function EIGEN" but diag() works
> 
>
> Key: SYSTEMML-680
> URL: https://issues.apache.org/jira/browse/SYSTEMML-680
> Project: SystemML
>  Issue Type: Bug
>  Components: APIs
>Affects Versions: SystemML 0.9
> Environment: Linux ip-172-20-42-170 3.13.0-61-generic #100-Ubuntu SMP 
> Wed Jul 29 11:21:34 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
> Scala code runner version 2.10.4 -- Copyright 2002-2013, LAMP/EPFL
> spark-core_2.10-1.3.0.jar
>Reporter: Golda Velez
>Priority: Minor
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>
> This could be some simple error, since I'm new to SystemML.
> I'm running a tiny DML script:
> X = read($Xin)
> ee = eigen(X)
> via some Scala code to set up the matrix, ending with
> val sysMlMatrix = RDDConverterUtils.dataFrameToBinaryBlock(sc, df, mc, false)
> val ml = new MLContext(sc)
> ml.reset()
> ml.registerInput("X", sysMlMatrix, numRows, numCols)
> ml.registerOutput("e")
> val nargs = Map("Xin" -> " ", "Eout" -> " ")
> val outputs = ml.execute("dum.dml", nargs)
> I could certainly be doing something wrong, but it does run if I replace 
> eigen() with diag() and both are listed similarly in the guide 
> http://apache.github.io/incubator-systemml/spark-mlcontext-programming-guide.html
> Is eigen() supported currently and does it require some installation of some 
> library?  I didn't see anything about that in the docs.
> Thanks, this looks super useful!
> This might just be a documentation bug, not a code bug, but I'm not sure how 
> else to contact people about it and get it resolved.  Are there forums?
> --Golda



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (SYSTEMML-296) Add elif (else if) to PyDML

2016-05-11 Thread Tatsuya Nishiyama (JIRA)

[ 
https://issues.apache.org/jira/browse/SYSTEMML-296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15279763#comment-15279763
 ] 

Tatsuya Nishiyama commented on SYSTEMML-296:


Hi [~deron], 
I submitted [PR #151 | https://github.com/apache/incubator-systemml/pull/151]. 
Please review.

> Add elif (else if) to PyDML
> ---
>
> Key: SYSTEMML-296
> URL: https://issues.apache.org/jira/browse/SYSTEMML-296
> Project: SystemML
>  Issue Type: Improvement
>  Components: Parser, PyDML
>Reporter: Deron Eriksson
>Priority: Minor
>
> "Else if" (elif) statements are useful since they avoid the need for nested 
> if/else statements.  Currently, DML has "else if" but PyDML does not have an 
> equivalent "elif".
> As an example, here is DML containing "else if" statements:
> {code}
> x = 6
> if (x == 1) {
>   print('A')
> } else if (x == 2) {
>   print('B')
> } else if (x == 3) {
>   print('C')
> } else if (x == 4) {
>   print('D')
> } else if (x == 5) {
>   print('E')
> } else {
>   print('F')
> }
> {code}
> Here is the logical equivalent in PyDML using nested "if else" statements:
> {code}
> x = 6
> if (x == 1):
> print('A')
> else:
> if (x == 2):
> print('B')
> else:
> if (x == 3):
> print('C')
> else:
> if (x == 4):
> print('D')
> else:
> if (x == 5):
> print('E')
> else:
> print('F')
> {code}
> The nesting becomes especially challenging in a Python-like language as the 
> number of nested "if else" statements grows, since whitespace indentation is 
> used to specify blocks of code.
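For comparison, standard Python (whose syntax PyDML resembles) keeps such a chain at a single indentation level with `elif`. A sketch of the flat form the proposed PyDML `elif` would enable, wrapped here in a hypothetical `label` helper so the branches can be exercised:

```python
def label(x):
    # Flat elif chain: one indentation level regardless of branch count,
    # in contrast to the deeply nested "if else" workaround above.
    if x == 1:
        return 'A'
    elif x == 2:
        return 'B'
    elif x == 3:
        return 'C'
    elif x == 4:
        return 'D'
    elif x == 5:
        return 'E'
    else:
        return 'F'

assert label(6) == 'F'  # matches the DML example above with x = 6
assert label(3) == 'C'
```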



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)