[jira] [Commented] (MADLIB-1102) Graph - Breadth First Search / Traversal

2017-07-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/MADLIB-1102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084890#comment-16084890
 ] 

ASF GitHub Bot commented on MADLIB-1102:


Github user rashmi815 closed the pull request at:

https://github.com/apache/incubator-madlib/pull/141


> Graph - Breadth First Search / Traversal
> ---
>
> Key: MADLIB-1102
> URL: https://issues.apache.org/jira/browse/MADLIB-1102
> Project: Apache MADlib
>  Issue Type: New Feature
>  Components: Module: Graph
>Reporter: Rashmi Raghu
>Assignee: Rashmi Raghu
> Fix For: v1.12
>
>
> Story
> As a MADlib user and developer, I want to implement Breadth First Search / 
> Traversal for a graph. BFS is also a core part of the connected components 
> graph algorithm.
> Acceptance:
> 1) Interface defined
> 2) Design doc updated
> 3) Documentation and on-line help
> 4) IC and functional tests
> 5) Scale tests
> References:
> [0] [https://en.wikipedia.org/wiki/Breadth-first_search] 
> "Breadth-first search (BFS) is an algorithm for traversing or searching tree 
> or graph data structures. It starts at the tree root (or some arbitrary node 
> of a graph, sometimes referred to as a 'search key'[1]) and explores the 
> neighbor nodes first, before moving to the next level neighbors."
> [1] [http://www.geeksforgeeks.org/breadth-first-traversal-for-a-graph/]
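The level-by-level traversal described in the quoted reference can be sketched in a few lines of Python (an illustrative sketch only; the actual MADlib module is implemented in SQL over edge tables, and the names here are hypothetical):

```python
from collections import deque

def bfs(adjacency, source):
    """Breadth-first traversal from `source`: visit all neighbors at the
    current level before moving to the next level. Returns the visit order
    and each reached node's distance (level) from the source."""
    dist = {source: 0}
    order = [source]
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency.get(node, []):
            if neighbor not in dist:          # first time seen: record level
                dist[neighbor] = dist[node] + 1
                order.append(neighbor)
                queue.append(neighbor)
    return order, dist
```

The FIFO queue is what makes this breadth-first: replacing it with a stack would give depth-first traversal instead.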



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (MADLIB-1135) Neural Networks - MLP - Phase 3

2017-07-12 Thread Frank McQuillan (JIRA)

 [ 
https://issues.apache.org/jira/browse/MADLIB-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank McQuillan reassigned MADLIB-1135:
---

Assignee: (was: Cooper Sloan)

> Neural Networks - MLP - Phase 3
> ---
>
> Key: MADLIB-1135
> URL: https://issues.apache.org/jira/browse/MADLIB-1135
> Project: Apache MADlib
>  Issue Type: Improvement
>  Components: Module: Neural Networks
>Reporter: Frank McQuillan
> Fix For: v1.12
>
>
> Follow on from https://issues.apache.org/jira/browse/MADLIB-413
> Story
> As a MADlib developer, I want to get the 2nd phase implementation of NN going 
> with training and prediction functions, so that I can use it to build toward 
> an MVP version for GA.
> Features to add:
> * weights for inputs
> * logic for n_tries
> * multiple loss functions
> * normalize inputs
> * L2 regularization
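One of the listed features, L2 regularization, amounts to adding a penalty term (lambda/2)*||w||^2 to the loss, which contributes lambda*w to the gradient. A minimal sketch (plain Python with hypothetical names; not MADlib's actual SQL/C++ implementation):

```python
def l2_regularized_gradient(w, grad_loss, lam):
    """Gradient of loss + (lam/2)*||w||^2:
    the penalty adds lam * w_i to each component of the data-loss gradient,
    shrinking weights toward zero during gradient descent."""
    return [g + lam * wi for g, wi in zip(grad_loss, w)]
```

During training, this replaces the raw gradient in the weight-update step; lam = 0 recovers the unregularized update.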





[jira] [Updated] (MADLIB-1134) Neural Networks - MLP - Phase 2

2017-07-12 Thread Frank McQuillan (JIRA)

 [ 
https://issues.apache.org/jira/browse/MADLIB-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank McQuillan updated MADLIB-1134:

Issue Type: Improvement  (was: New Feature)

> Neural Networks - MLP - Phase 2
> ---
>
> Key: MADLIB-1134
> URL: https://issues.apache.org/jira/browse/MADLIB-1134
> Project: Apache MADlib
>  Issue Type: Improvement
>  Components: Module: Neural Networks
>Reporter: Frank McQuillan
>Assignee: Cooper Sloan
> Fix For: v1.12
>
>
> Follow on from https://issues.apache.org/jira/browse/MADLIB-413
> Story
> As a MADlib developer, I want to get the 2nd phase implementation of NN going 
> with training and prediction functions, so that I can use it to build toward 
> an MVP version for GA.
> Features to add:
> * weights for inputs
> * logic for n_tries
> * multiple loss functions
> * normalize inputs
> * L2 regularization





[jira] [Updated] (MADLIB-1135) Neural Networks - MLP - Phase 3

2017-07-12 Thread Frank McQuillan (JIRA)

 [ 
https://issues.apache.org/jira/browse/MADLIB-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank McQuillan updated MADLIB-1135:

Description: 
Follow on from https://issues.apache.org/jira/browse/MADLIB-413 and 
https://issues.apache.org/jira/browse/MADLIB-1134

Story

As a MADlib developer, I want to get the 3rd phase implementation of NN going with 
training and prediction functions, so that I can have a more advanced and 
performant version of NN.

Features to add:

* other algos (e.g., resilient backpropagation)
* momentum

  was:
Follow on from https://issues.apache.org/jira/browse/MADLIB-413

Story

As a MADlib developer, I want to get the 2nd phase implementation of NN going with 
training and prediction functions, so that I can use it to build toward an MVP 
version for GA.

Features to add:

* weights for inputs
* logic for n_tries
* multiple loss functions
* normalize inputs
* L2 regularization


> Neural Networks - MLP - Phase 3
> ---
>
> Key: MADLIB-1135
> URL: https://issues.apache.org/jira/browse/MADLIB-1135
> Project: Apache MADlib
>  Issue Type: Improvement
>  Components: Module: Neural Networks
>Reporter: Frank McQuillan
> Fix For: v2.0
>
>
> Follow on from https://issues.apache.org/jira/browse/MADLIB-413 and 
> https://issues.apache.org/jira/browse/MADLIB-1134
> Story
> As a MADlib developer, I want to get the 3rd phase implementation of NN going 
> with training and prediction functions, so that I can have a more advanced 
> and performant version of NN.
> Features to add:
> * other algos (e.g., resilient backpropagation)
> * momentum
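The momentum feature listed above can be sketched as the classical momentum update: a velocity term accumulates an exponentially decaying average of past gradients, smoothing the descent direction. This is an illustrative Python sketch with hypothetical names, not MADlib's implementation:

```python
def momentum_step(w, v, grad, lr=0.1, beta=0.9):
    """One classical-momentum update:
    v <- beta * v - lr * grad   (velocity accumulates past gradients)
    w <- w + v                  (move along the smoothed direction).
    beta = 0 recovers plain gradient descent."""
    v = [beta * vi - lr * gi for vi, gi in zip(v, grad)]
    w = [wi + vi for wi, vi in zip(w, v)]
    return w, v
```

Resilient backpropagation (the other listed algorithm) differs in that it adapts a per-weight step size from the sign of the gradient rather than its magnitude.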





[jira] [Updated] (MADLIB-1135) Neural Networks - MLP - Phase 3

2017-07-12 Thread Frank McQuillan (JIRA)

 [ 
https://issues.apache.org/jira/browse/MADLIB-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank McQuillan updated MADLIB-1135:

Fix Version/s: (was: v1.12)
   v2.0

> Neural Networks - MLP - Phase 3
> ---
>
> Key: MADLIB-1135
> URL: https://issues.apache.org/jira/browse/MADLIB-1135
> Project: Apache MADlib
>  Issue Type: Improvement
>  Components: Module: Neural Networks
>Reporter: Frank McQuillan
> Fix For: v2.0
>
>
> Follow on from https://issues.apache.org/jira/browse/MADLIB-413 and 
> https://issues.apache.org/jira/browse/MADLIB-1134
> Story
> As a MADlib developer, I want to get the 3rd phase implementation of NN going 
> with training and prediction functions, so that I can have a more advanced 
> and performant version of NN.
> Features to add:
> * other algos (e.g., resilient backpropagation)
> * momentum





[jira] [Created] (MADLIB-1135) Neural Networks - MLP - Phase 3

2017-07-12 Thread Frank McQuillan (JIRA)
Frank McQuillan created MADLIB-1135:
---

 Summary: Neural Networks - MLP - Phase 3
 Key: MADLIB-1135
 URL: https://issues.apache.org/jira/browse/MADLIB-1135
 Project: Apache MADlib
  Issue Type: Improvement
  Components: Module: Neural Networks
Reporter: Frank McQuillan
Assignee: Cooper Sloan
 Fix For: v1.12


Follow on from https://issues.apache.org/jira/browse/MADLIB-413

Story

As a MADlib developer, I want to get the 2nd phase implementation of NN going with 
training and prediction functions, so that I can use it to build toward an MVP 
version for GA.

Features to add:

* weights for inputs
* logic for n_tries
* multiple loss functions
* normalize inputs
* L2 regularization
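The "normalize inputs" feature in the list above typically means column-wise z-score normalization of the feature array, so that each input dimension has zero mean and unit variance before training. A minimal sketch under that assumption (illustrative Python; MADlib would perform this in SQL over the source table):

```python
import statistics

def zscore_normalize(rows):
    """Column-wise z-score normalization: (x - mean) / std for each column.
    A constant column (std = 0) is left centered but unscaled."""
    cols = list(zip(*rows))
    means = [statistics.fmean(c) for c in cols]
    stds = [statistics.pstdev(c) or 1.0 for c in cols]  # guard against std = 0
    return [[(x - m) / s for x, m, s in zip(row, means, stds)]
            for row in rows]
```

Normalization keeps features on comparable scales, which generally improves gradient-descent convergence for MLP training.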





[jira] [Updated] (MADLIB-1134) Neural Networks - MLP - Phase 2

2017-07-12 Thread Frank McQuillan (JIRA)

 [ 
https://issues.apache.org/jira/browse/MADLIB-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank McQuillan updated MADLIB-1134:

Reporter: Frank McQuillan  (was: Caleb Welton)

> Neural Networks - MLP - Phase 2
> ---
>
> Key: MADLIB-1134
> URL: https://issues.apache.org/jira/browse/MADLIB-1134
> Project: Apache MADlib
>  Issue Type: New Feature
>  Components: Module: Neural Networks
>Reporter: Frank McQuillan
>Assignee: Cooper Sloan
> Fix For: v1.12
>
>
> Follow on from https://issues.apache.org/jira/browse/MADLIB-413
> Story
> As a MADlib developer, I want to get the 2nd phase implementation of NN going 
> with training and prediction functions, so that I can use it to build toward 
> an MVP version for GA.
> Features to add:
> * weights for inputs
> * logic for n_tries
> * multiple loss functions
> * normalize inputs
> * L2 regularization
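The "logic for n_tries" feature above refers to running the whole training loop several times from different random initializations and keeping the best result, to reduce the chance of stopping in a poor local minimum. A sketch of that control flow (illustrative Python with a hypothetical `train_once` callback; not MADlib's driver code):

```python
import random

def train_with_restarts(train_once, n_tries, seed=0):
    """Run `train_once(rng)` n_tries times from random initializations
    and keep the (model, loss) pair with the lowest final loss."""
    rng = random.Random(seed)
    best_loss, best_model = float('inf'), None
    for _ in range(n_tries):
        model, loss = train_once(rng)   # one full training cycle
        if loss < best_loss:
            best_loss, best_model = loss, model
    return best_model, best_loss
```

Each try runs for up to n_iterations (or until the tolerance criterion is met), matching the optimizer_params semantics described in the interface.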





[jira] [Updated] (MADLIB-1134) Neural Networks - MLP - Phase 2

2017-07-12 Thread Frank McQuillan (JIRA)

 [ 
https://issues.apache.org/jira/browse/MADLIB-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank McQuillan updated MADLIB-1134:

Description: 
Follow on from https://issues.apache.org/jira/browse/MADLIB-413

Story

As a MADlib developer, I want to get the 2nd phase implementation of NN going with 
training and prediction functions, so that I can use it to build toward an MVP 
version for GA.

Features to add:

* weights for inputs
* logic for n_tries
* multiple loss functions
* normalize inputs
* L2 regularization

  was:
Multilayer perceptron with backpropagation

Modules:
* mlp_classification
* mlp_regression

Interface

{code}
(
source_table VARCHAR,
output_table VARCHAR,
independent_varname VARCHAR, -- Column name for input features; should be a real-valued array
dependent_varname VARCHAR, -- Column name for target values; should be a real-valued array of size 1 or greater
hidden_layer_sizes INTEGER[], -- Number of units per hidden layer (can be empty or null, in which case no hidden layers)
optimizer_params VARCHAR, -- Specified below
weights VARCHAR, -- Column name for weights; weights the loss for each input vector. Column should contain positive real values
activation_function VARCHAR, -- One of 'sigmoid' (default), 'tanh', 'relu', or any prefix (e.g. 't', 's')
grouping_cols VARCHAR -- Columns to group by
)
{code}
where
{code}
optimizer_params: -- e.g. "step_size=0.5, n_tries=5"
{
step_size DOUBLE PRECISION, -- Learning rate
n_iterations INTEGER, -- Number of iterations per try
n_tries INTEGER, -- Total number of training cycles, with random initializations to avoid local minima
tolerance DOUBLE PRECISION, -- Training stops when the change in weights between iterations falls below this value (or when n_iterations is reached)
}
{code}
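The comma-separated optimizer_params string shown above can be parsed along these lines (an illustrative Python sketch; MADlib's actual parameter parsing lives in its Python driver layer and may differ):

```python
def parse_optimizer_params(s):
    """Parse a 'key=value, key=value' string such as
    "step_size=0.5, n_tries=5" into a dict, casting values to
    int where possible and float otherwise (numeric values assumed)."""
    params = {}
    for pair in s.split(','):
        if not pair.strip():
            continue
        key, value = (t.strip() for t in pair.split('='))
        try:
            params[key] = int(value)
        except ValueError:
            params[key] = float(value)
    return params
```

Unlisted keys would fall back to the defaults documented for each optimizer parameter.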


> Neural Networks - MLP - Phase 2
> ---
>
> Key: MADLIB-1134
> URL: https://issues.apache.org/jira/browse/MADLIB-1134
> Project: Apache MADlib
>  Issue Type: New Feature
>  Components: Module: Neural Networks
>Reporter: Caleb Welton
>Assignee: Cooper Sloan
> Fix For: v1.12
>
>
> Follow on from https://issues.apache.org/jira/browse/MADLIB-413
> Story
> As a MADlib developer, I want to get the 2nd phase implementation of NN going 
> with training and prediction functions, so that I can use it to build toward 
> an MVP version for GA.
> Features to add:
> * weights for inputs
> * logic for n_tries
> * multiple loss functions
> * normalize inputs
> * L2 regularization





[jira] [Updated] (MADLIB-1134) Neural Networks - MLP - Phase 2

2017-07-12 Thread Frank McQuillan (JIRA)

 [ 
https://issues.apache.org/jira/browse/MADLIB-1134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank McQuillan updated MADLIB-1134:

Summary: Neural Networks - MLP - Phase 2  (was: CLONE - Neural Networks - 
MLP - Phase 1)

> Neural Networks - MLP - Phase 2
> ---
>
> Key: MADLIB-1134
> URL: https://issues.apache.org/jira/browse/MADLIB-1134
> Project: Apache MADlib
>  Issue Type: New Feature
>  Components: Module: Neural Networks
>Reporter: Caleb Welton
>Assignee: Cooper Sloan
> Fix For: v1.12
>
>
> Multilayer perceptron with backpropagation
> Modules:
> * mlp_classification
> * mlp_regression
> Interface
> {code}
> (
> source_table VARCHAR,
> output_table VARCHAR,
> independent_varname VARCHAR, -- Column name for input features; should be a real-valued array
> dependent_varname VARCHAR, -- Column name for target values; should be a real-valued array of size 1 or greater
> hidden_layer_sizes INTEGER[], -- Number of units per hidden layer (can be empty or null, in which case no hidden layers)
> optimizer_params VARCHAR, -- Specified below
> weights VARCHAR, -- Column name for weights; weights the loss for each input vector. Column should contain positive real values
> activation_function VARCHAR, -- One of 'sigmoid' (default), 'tanh', 'relu', or any prefix (e.g. 't', 's')
> grouping_cols VARCHAR -- Columns to group by
> )
> {code}
> where
> {code}
> optimizer_params: -- e.g. "step_size=0.5, n_tries=5"
> {
> step_size DOUBLE PRECISION, -- Learning rate
> n_iterations INTEGER, -- Number of iterations per try
> n_tries INTEGER, -- Total number of training cycles, with random initializations to avoid local minima
> tolerance DOUBLE PRECISION, -- Training stops when the change in weights between iterations falls below this value (or when n_iterations is reached)
> }
> {code}





[jira] [Created] (MADLIB-1134) CLONE - Neural Networks - MLP - Phase 1

2017-07-12 Thread Frank McQuillan (JIRA)
Frank McQuillan created MADLIB-1134:
---

 Summary: CLONE - Neural Networks - MLP - Phase 1
 Key: MADLIB-1134
 URL: https://issues.apache.org/jira/browse/MADLIB-1134
 Project: Apache MADlib
  Issue Type: New Feature
  Components: Module: Neural Networks
Reporter: Caleb Welton
Assignee: Cooper Sloan
 Fix For: v1.12


Multilayer perceptron with backpropagation

Modules:
* mlp_classification
* mlp_regression

Interface

{code}
(
source_table VARCHAR,
output_table VARCHAR,
independent_varname VARCHAR, -- Column name for input features; should be a real-valued array
dependent_varname VARCHAR, -- Column name for target values; should be a real-valued array of size 1 or greater
hidden_layer_sizes INTEGER[], -- Number of units per hidden layer (can be empty or null, in which case no hidden layers)
optimizer_params VARCHAR, -- Specified below
weights VARCHAR, -- Column name for weights; weights the loss for each input vector. Column should contain positive real values
activation_function VARCHAR, -- One of 'sigmoid' (default), 'tanh', 'relu', or any prefix (e.g. 't', 's')
grouping_cols VARCHAR -- Columns to group by
)
{code}
where
{code}
optimizer_params: -- e.g. "step_size=0.5, n_tries=5"
{
step_size DOUBLE PRECISION, -- Learning rate
n_iterations INTEGER, -- Number of iterations per try
n_tries INTEGER, -- Total number of training cycles, with random initializations to avoid local minima
tolerance DOUBLE PRECISION, -- Training stops when the change in weights between iterations falls below this value (or when n_iterations is reached)
}
{code}





[jira] [Updated] (MADLIB-413) Neural Networks - MLP - Phase 1

2017-07-12 Thread Frank McQuillan (JIRA)

 [ 
https://issues.apache.org/jira/browse/MADLIB-413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Frank McQuillan updated MADLIB-413:
---
Summary: Neural Networks - MLP - Phase 1  (was: Neural Networks - MLP)

> Neural Networks - MLP - Phase 1
> ---
>
> Key: MADLIB-413
> URL: https://issues.apache.org/jira/browse/MADLIB-413
> Project: Apache MADlib
>  Issue Type: New Feature
>  Components: Module: Neural Networks
>Reporter: Caleb Welton
>Assignee: Cooper Sloan
> Fix For: v1.12
>
>
> Multilayer perceptron with backpropagation
> Modules:
> * mlp_classification
> * mlp_regression
> Interface
> {code}
> (
> source_table VARCHAR,
> output_table VARCHAR,
> independent_varname VARCHAR, -- Column name for input features; should be a real-valued array
> dependent_varname VARCHAR, -- Column name for target values; should be a real-valued array of size 1 or greater
> hidden_layer_sizes INTEGER[], -- Number of units per hidden layer (can be empty or null, in which case no hidden layers)
> optimizer_params VARCHAR, -- Specified below
> weights VARCHAR, -- Column name for weights; weights the loss for each input vector. Column should contain positive real values
> activation_function VARCHAR, -- One of 'sigmoid' (default), 'tanh', 'relu', or any prefix (e.g. 't', 's')
> grouping_cols VARCHAR -- Columns to group by
> )
> {code}
> where
> {code}
> optimizer_params: -- e.g. "step_size=0.5, n_tries=5"
> {
> step_size DOUBLE PRECISION, -- Learning rate
> n_iterations INTEGER, -- Number of iterations per try
> n_tries INTEGER, -- Total number of training cycles, with random initializations to avoid local minima
> tolerance DOUBLE PRECISION, -- Training stops when the change in weights between iterations falls below this value (or when n_iterations is reached)
> }
> {code}


