Re: Fwd: Re: RE: Join my network on LinkedIn

2013-12-19 Thread Isabel Drost-Fromm
On Thu, Dec 19, 2013 at 02:02:23AM -0800, Ted Dunning wrote:
 The confluence 4.0 migration broke our style sheets on the confluence web
 site.  Other projects have been advised to remove all fanciness from the
 styling and then re-add whatever is necessary.

So this might be why I can't even find a login button for it. Does 
anyone remember where the styles for our Confluence live?


Isabel


[jira] [Commented] (MAHOUT-1231) No input clusters found in error in kmeans

2013-12-19 Thread Zoraida Hidalgo Sanchez (JIRA)

[ 
https://issues.apache.org/jira/browse/MAHOUT-1231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852828#comment-13852828
 ] 

Zoraida Hidalgo Sanchez commented on MAHOUT-1231:
-

Lee, it happens to me when running the Reuters example (cluster-reuters.sh). 
The tfidf-vectors directory is empty, so kmeans fails to pick up the random 
clusters. It works if I run it in local mode. I am running version 0.7 with 
Cloudera 1.4.2.
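
One way to double-check whether tfidf-vectors really is empty is to count the
records in one of its part files. This is only a minimal sketch, assuming a
standard Hadoop client plus mahout-math on the classpath; the part-file path is
an example and should be adjusted to whatever seq2sparse actually produced.
-
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.mahout.math.VectorWritable;

public class CountTfidfVectors {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Example path only; point this at one of the part files under tfidf-vectors.
    Path part = new Path("reuters-out-seqdir-sparse-kmeans/tfidf-vectors/part-r-00000");
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, part, conf);
    Text key = new Text();
    VectorWritable value = new VectorWritable();
    int n = 0;
    while (reader.next(key, value)) {
      n++;
    }
    reader.close();
    // Zero records here would explain why kmeans cannot sample random seed clusters.
    System.out.println(n + " vectors in " + part);
  }
}
-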

 "No input clusters found in" error in kmeans
 -

 Key: MAHOUT-1231
 URL: https://issues.apache.org/jira/browse/MAHOUT-1231
 Project: Mahout
  Issue Type: Question
  Components: Clustering
Reporter: Summer Lee
 Fix For: 0.8


 1.seqdirectory
  mahout seqdirectory --input /user/hdfs/input/new1.csv --output
  /user/hdfs/new1/seqdirectory --tempDir
  /user/hdfs/new1/seqdirectory/tempDir
 2.seq2sparse 
  mahout seq2sparse --input /user/hdfs/new1/seqdirectory --output
  /user/hdfs/new1/seq2sparse -wt tfidf
 3.kmeans 
  mahout kmeans --input /user/hdfs/new1/seq2sparse/tfidf-vectors
  --output /user/hdfs/new1/kmeans -c /user/hdfs/new1/clusters/kmeans -x 3 -k 
  3 --tempDir /user/hdfs/new1/kmeans/tempDir
 and then this error occurs:
 Failing Oozie Launcher, Main class [org.apache.mahout.driver.MahoutDriver], 
 main() threw exception, No input clusters found in 
 /user/oozie/mahout/z3/kmeansCopy/clusters/part-randomSeed. Check your -c 
 argument.
 java.lang.IllegalStateException: No input clusters found in 
 /user/oozie/mahout/z3/kmeansCopy/clusters/part-randomSeed. Check your -c 
 argument.
   at 
 org.apache.mahout.clustering.kmeans.KMeansDriver.buildClusters(KMeansDriver.java:217)
   at 
 org.apache.mahout.clustering.kmeans.KMeansDriver.run(KMeansDriver.java:148)
   at 
 org.apache.mahout.clustering.kmeans.KMeansDriver.run(KMeansDriver.java:107)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
   at 
 org.apache.mahout.clustering.kmeans.KMeansDriver.main(KMeansDriver.java:48)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
   at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
   at org.apache.mahout.driver.MahoutDriver.main(MahoutDriver.java:195)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at 
 org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:467)
   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
   at org.apache.hadoop.mapred.Child.main(Child.java:249)
 Oozie Launcher failed, finishing Hadoop job gracefully
 Oozie Launcher ends
 ===
 Why can't the kmeans driver create the clusters when running on Hadoop under Oozie?
 When run on Hadoop without Oozie, it worked.
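
Not part of the original report, but a minimal diagnostic sketch: with -k given,
the driver samples the initial seeds into part-randomSeed under the -c path, so
listing that directory shows whether the seeding step produced anything. The
snippet assumes only a standard Hadoop client on the classpath; the path literal
simply mirrors the -c argument from the commands above.
-
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckInitialClusters {
  public static void main(String[] args) throws IOException {
    // Path handed to kmeans via -c; taken from the failing run above.
    Path clustersIn = new Path("/user/hdfs/new1/clusters/kmeans");
    FileSystem fs = FileSystem.get(new Configuration());
    if (!fs.exists(clustersIn)) {
      System.err.println("No such path: " + clustersIn);
      return;
    }
    // If part-randomSeed is missing or has length 0, seeding never produced
    // clusters, which matches the "No input clusters found" exception above.
    for (FileStatus status : fs.listStatus(clustersIn)) {
      System.out.println(status.getPath() + " " + status.getLen() + " bytes");
    }
  }
}
-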



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Updated] (MAHOUT-1265) Add Multilayer Perceptron

2013-12-19 Thread Yexi Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAHOUT-1265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yexi Jiang updated MAHOUT-1265:
---

Attachment: Mahout-1265-17.patch

The version 17.

 Add Multilayer Perceptron 
 --

 Key: MAHOUT-1265
 URL: https://issues.apache.org/jira/browse/MAHOUT-1265
 Project: Mahout
  Issue Type: New Feature
Reporter: Yexi Jiang
  Labels: machine_learning, neural_network
 Attachments: MAHOUT-1265.patch, Mahout-1265-13.patch, 
 Mahout-1265-17.patch


 Design of multilayer perceptron
 1. Motivation
 A multilayer perceptron (MLP) is a kind of feed-forward artificial neural 
 network, a mathematical model inspired by biological neural networks. The 
 multilayer perceptron can be used for various machine learning tasks such as 
 classification and regression, so it would be a useful addition to Mahout.
 2. API
 The design goal of the API is to make the MLP easy to use and to keep the 
 implementation details transparent to the user.
 The following example code shows how a user would use the MLP.
 -
 // set the parameters
 double learningRate = 0.5;
 double momentum = 0.1;
 int[] layerSizeArray = new int[] {2, 5, 1};
 String costFuncName = "SquaredError";
 String squashingFuncName = "Sigmoid";
 // the location to store the model; if there is already an existing model at
 // the specified location, the MLP will throw an exception
 URI modelLocation = ...
 MultilayerPerceptron mlp = new MultilayerPerceptron(layerSizeArray, 
 modelLocation);
 mlp.setLearningRate(learningRate).setMomentum(momentum).setRegularization(...).setCostFunction(...).setSquashingFunction(...);
 // the user can also load an existing model from a given URI and update the
 // model with new training data; if there is no existing model at the
 // specified location, an exception will be thrown
 /*
 MultilayerPerceptron mlp = new MultilayerPerceptron(learningRate, 
 regularization, momentum, squashingFuncName, costFuncName, modelLocation);
 */
 URI trainingDataLocation = …
 // the details of training are transparent to the user; it may run on a
 // single machine or in a distributed environment
 mlp.train(trainingDataLocation);
 // the user can also train the model with one training instance at a time,
 // in stochastic gradient descent fashion
 Vector trainingInstance = ...
 mlp.train(trainingInstance);
 // prepare the input feature
 Vector inputFeature = …
 // the semantic meaning of the output is defined by the user;
 // in the general case, the dimension of the output vector is 1 for
 // regression and two-class classification, and n for n-class
 // classification (n > 2)
 Vector outputVector = mlp.output(inputFeature); 
 -
 3. Methodology
 The output calculation can easily be implemented with a feed-forward pass, 
 and single-machine training is straightforward. The following describes how 
 to train the MLP in a distributed way with batch gradient descent. The 
 workflow is illustrated in the figure below.
 https://docs.google.com/drawings/d/1s8hiYKpdrP3epe1BzkrddIfShkxPrqSuQBH0NAawEM4/pub?w=960&h=720
 For distributed training, each training iteration is divided into two steps: 
 the weight-update calculation step and the weight-update step. The 
 distributed MLP can only be trained in a batch-update fashion.
 3.1 The partial weight update calculation step:
 This step trains the MLP in a distributed fashion. Each task gets a copy of 
 the MLP model and calculates the weight update from one partition of the data.
 Suppose the training error is E(w) = 1/2 \sum_{d \in D} cost(t_d, y_d), where 
 D denotes the training set, d denotes a training instance, t_d denotes the 
 class label and y_d denotes the output of the MLP. Also suppose the sigmoid 
 function is used as the squashing function, 
 squared error is used as the cost function, 
 t_i denotes the target value for the ith dimension of the output layer, 
 o_i denotes the actual output for the ith dimension of the output layer, 
 l denotes the learning rate, and 
 w_{ij} denotes the weight between the jth neuron in the previous layer and 
 the ith neuron in the next layer. 
 The weight of each edge is then updated as 
 \Delta w_{ij} = l * 1/m * \delta_j * o_i, 
 where \delta_j = - \sum_{m} o_j^{(m)} * (1 - o_j^{(m)}) * (t_j^{(m)} - 
 o_j^{(m)}) for the output layer, and \delta_j = - \sum_{m} o_j^{(m)} * (1 - 
 o_j^{(m)}) * \sum_k \delta_k * w_{jk} for a hidden layer. 
 It is easy to see that \delta_j can be rewritten as 
 \delta_j = - \sum_{i=1}^k \sum_{m_i} o_j^{(m_i)} * (1 - o_j^{(m_i)}) 
 * (t_j^{(m_i)} - o_j^{(m_i)}), 
 which shows that \delta_j can be divided into k parts.
 So for the implementation, each mapper can calculate its part of \delta_j 
 from its given partition of data, and then store the result into a 
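
As a tiny illustrative sketch of the per-partition computation in 3.1, under
the same assumptions (sigmoid squashing, squared error), the method below
accumulates the output-layer term o_j (1 - o_j) (t_j - o_j) over one data
partition; the class and method names are hypothetical and are not taken from
the attached patch, and the hidden-layer term would be accumulated analogously.
-
// Sketch of the mapper-side partial update from section 3.1; all names are
// made up for illustration and do not come from the MAHOUT-1265 patch.
public class PartialDeltaSketch {

  /**
   * Accumulate the output-layer delta contributions over one data partition.
   *
   * @param outputs o_j^{(m)} for each instance m in this partition
   * @param targets t_j^{(m)} for each instance m in this partition
   * @return partial sum of o_j * (1 - o_j) * (t_j - o_j) per output dimension
   */
  static double[] partialOutputDeltas(double[][] outputs, double[][] targets) {
    int dims = outputs[0].length;
    double[] delta = new double[dims];
    for (int m = 0; m < outputs.length; m++) {
      for (int j = 0; j < dims; j++) {
        double o = outputs[m][j];
        delta[j] += o * (1 - o) * (targets[m][j] - o);
      }
    }
    // A reducer would sum these partial vectors across mappers and then apply
    // \Delta w_{ij} = l * 1/m * \delta_j * o_i once per iteration.
    return delta;
  }
}
-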

[jira] [Created] (MAHOUT-1383) Download link on Mahout main page still points to Confluence

2013-12-19 Thread Isabel Drost-Fromm (JIRA)
Isabel Drost-Fromm created MAHOUT-1383:
--

 Summary: Download link on Mahout main page still points to 
Confluence
 Key: MAHOUT-1383
 URL: https://issues.apache.org/jira/browse/MAHOUT-1383
 Project: Mahout
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Isabel Drost-Fromm
 Fix For: 0.9


The download button on the Mahout main page (and all sub-pages as it is part of 
the template) still points to the Confluence Wiki page. Need to re-direct this 
to the new, fixed CMS page.

Code lives in svn under site/mahout-cms/templates/standard.html



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


Re: Mahout 0.9 release

2013-12-19 Thread Frank Scholten
I am looking at M-1329 (Support for Hadoop 2.x) as we speak. This change
requires quite some testing and I prefer to push this to 1.0. I am thinking
of creating a unit test that starts miniclusters for each version and runs
a job in them.
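
A rough sketch of what such a test could start from, assuming the Hadoop 2.x
hadoop-minicluster artifact and JUnit 4 on the test classpath; the class name
is made up and nothing here is taken from an existing Mahout test.
-
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class MiniClusterSmokeTest {

  @Test
  public void hdfsRoundTrip() throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      Path p = new Path("/tmp/smoke");
      fs.create(p).close();
      assertTrue(fs.exists(p));
      // A real test would submit a Mahout job against this filesystem here,
      // once per Hadoop profile (1.x and 2.x).
    } finally {
      cluster.shutdown();
    }
  }
}
-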




On Thu, Dec 19, 2013 at 12:28 AM, Suneel Marthi suneel_mar...@yahoo.com wrote:

 There's M-1329 that covers this. Hopefully it should make it for 0.9

 Sent from my iPhone

  On Dec 18, 2013, at 6:20 PM, Isabel Drost-Fromm isa...@apache.org
 wrote:
 
  On Mon, 16 Dec 2013 23:16:36 +0200
  Gokhan Capan gkhn...@gmail.com wrote:
 
  M-1354 (Support for Hadoop 2.x) - Patch available.
  Gokhan, any updates on this.
 
  Nope, still couldn't make it work.
 
 
  Should we push that for 1.0 then (if this is shortly before completion
  and there's too much in 1.0 to push for a release early next year, I'd
  also be happy to have a smaller release between now and Berlin
  Buzzwords that includes the fix...).
 
  Isabel



Re: Mahout 0.9 release

2013-12-19 Thread Suneel Marthi
+1

Sent from my iPhone

 On Dec 19, 2013, at 12:17 PM, Frank Scholten fr...@frankscholten.nl wrote:
 
 I am looking at M-1329 (Support for Hadoop 2.x) as we speak. This change
 requires quite some testing and I prefer to push this to 1.0. I am thinking
  of creating a unit test that starts miniclusters for each version and runs
 a job in them.
 
 
 
 
 On Thu, Dec 19, 2013 at 12:28 AM, Suneel Marthi 
 suneel_mar...@yahoo.com wrote:
 
 There's M-1329 that covers this. Hopefully it should make it for 0.9
 
 Sent from my iPhone
 
 On Dec 18, 2013, at 6:20 PM, Isabel Drost-Fromm isa...@apache.org
 wrote:
 
 On Mon, 16 Dec 2013 23:16:36 +0200
 Gokhan Capan gkhn...@gmail.com wrote:
 
 M-1354 (Support for Hadoop 2.x) - Patch available.
 Gokhan, any updates on this.
 
 Nope, still couldn't make it work.
 
 
 Should we push that for 1.0 then (if this is shortly before completion
 and there's too much in 1.0 to push for a release early next year, I'd
 also be happy to have a smaller release between now and Berlin
 Buzzwords that includes the fix...).
 
 Isabel
 


[jira] [Commented] (MAHOUT-1265) Add Multilayer Perceptron

2013-12-19 Thread Suneel Marthi (JIRA)

[ 
https://issues.apache.org/jira/browse/MAHOUT-1265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13853072#comment-13853072
 ] 

Suneel Marthi commented on MAHOUT-1265:
---

I'll be committing this code to trunk today. 

 Add Multilayer Perceptron 
 --

 Key: MAHOUT-1265
 URL: https://issues.apache.org/jira/browse/MAHOUT-1265
 Project: Mahout
  Issue Type: New Feature
Reporter: Yexi Jiang
  Labels: machine_learning, neural_network
 Attachments: MAHOUT-1265.patch, Mahout-1265-13.patch, 
 Mahout-1265-17.patch



Re: Mahout 0.9 release

2013-12-19 Thread Andrew Musselman
+1


On Thu, Dec 19, 2013 at 9:20 AM, Suneel Marthi suneel_mar...@yahoo.com wrote:

 +1

 Sent from my iPhone

  On Dec 19, 2013, at 12:17 PM, Frank Scholten fr...@frankscholten.nl
 wrote:
 
  I am looking at M-1329 (Support for Hadoop 2.x) as we speak. This change
  requires quite some testing and I prefer to push this to 1.0. I am
 thinking
  of creating a unit test that starts miniclusters for each version and
 runs
  a job in them.
 
 
 
 
  On Thu, Dec 19, 2013 at 12:28 AM, Suneel Marthi suneel_mar...@yahoo.com
 wrote:
 
  There's M-1329 that covers this. Hopefully it should make it for 0.9
 
  Sent from my iPhone
 
  On Dec 18, 2013, at 6:20 PM, Isabel Drost-Fromm isa...@apache.org
  wrote:
 
  On Mon, 16 Dec 2013 23:16:36 +0200
  Gokhan Capan gkhn...@gmail.com wrote:
 
  M-1354 (Support for Hadoop 2.x) - Patch available.
  Gokhan, any updates on this.
 
  Nope, still couldn't make it work.
 
 
  Should we push that for 1.0 then (if this is shortly before completion
  and there's too much in 1.0 to push for a release early next year, I'd
  also be happy to have a smaller release between now and Berlin
  Buzzwords that includes the fix...).
 
  Isabel
 



[jira] [Updated] (MAHOUT-1265) Add Multilayer Perceptron

2013-12-19 Thread Suneel Marthi (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAHOUT-1265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suneel Marthi updated MAHOUT-1265:
--

Attachment: (was: Mahout-1265-13.patch)

 Add Multilayer Perceptron 
 --

 Key: MAHOUT-1265
 URL: https://issues.apache.org/jira/browse/MAHOUT-1265
 Project: Mahout
  Issue Type: New Feature
Reporter: Yexi Jiang
  Labels: machine_learning, neural_network
 Attachments: MAHOUT-1265.patch, Mahout-1265-17.patch



[jira] [Updated] (MAHOUT-1265) Add Multilayer Perceptron

2013-12-19 Thread Suneel Marthi (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAHOUT-1265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suneel Marthi updated MAHOUT-1265:
--

   Resolution: Fixed
Fix Version/s: 0.9
 Assignee: Suneel Marthi
   Status: Resolved  (was: Patch Available)

 Add Multilayer Perceptron 
 --

 Key: MAHOUT-1265
 URL: https://issues.apache.org/jira/browse/MAHOUT-1265
 Project: Mahout
  Issue Type: New Feature
Reporter: Yexi Jiang
Assignee: Suneel Marthi
  Labels: machine_learning, neural_network
 Fix For: 0.9

 Attachments: MAHOUT-1265.patch, Mahout-1265-17.patch



[jira] [Commented] (MAHOUT-1265) Add Multilayer Perceptron

2013-12-19 Thread Suneel Marthi (JIRA)

[ 
https://issues.apache.org/jira/browse/MAHOUT-1265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13853188#comment-13853188
 ] 

Suneel Marthi commented on MAHOUT-1265:
---

Patch committed to trunk, great work Yexi.

 Add Multilayer Perceptron 
 --

 Key: MAHOUT-1265
 URL: https://issues.apache.org/jira/browse/MAHOUT-1265
 Project: Mahout
  Issue Type: New Feature
Reporter: Yexi Jiang
  Labels: machine_learning, neural_network
 Fix For: 0.9

 Attachments: MAHOUT-1265.patch, Mahout-1265-17.patch



[jira] [Commented] (MAHOUT-1265) Add Multilayer Perceptron

2013-12-19 Thread Yexi Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/MAHOUT-1265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13853303#comment-13853303
 ] 

Yexi Jiang commented on MAHOUT-1265:


Great. I am thinking about a MapReduce version of the MLP. It may take a 
non-trivial amount of time.

 Add Multilayer Perceptron 
 --

 Key: MAHOUT-1265
 URL: https://issues.apache.org/jira/browse/MAHOUT-1265
 Project: Mahout
  Issue Type: New Feature
Reporter: Yexi Jiang
Assignee: Suneel Marthi
  Labels: machine_learning, neural_network
 Fix For: 0.9

 Attachments: MAHOUT-1265.patch, Mahout-1265-17.patch



[jira] [Commented] (MAHOUT-1265) Add Multilayer Perceptron

2013-12-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/MAHOUT-1265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13853402#comment-13853402
 ] 

Hudson commented on MAHOUT-1265:


SUCCESS: Integrated in Mahout-Quality #2376 (See 
[https://builds.apache.org/job/Mahout-Quality/2376/])
MAHOUT-1265: Multilayer Perceptron (smarthi: rev 1552403)
* /mahout/trunk/CHANGELOG
* /mahout/trunk/core/src/main/java/org/apache/mahout/classifier/mlp
* 
/mahout/trunk/core/src/main/java/org/apache/mahout/classifier/mlp/MultilayerPerceptron.java
* 
/mahout/trunk/core/src/main/java/org/apache/mahout/classifier/mlp/NeuralNetwork.java
* 
/mahout/trunk/core/src/main/java/org/apache/mahout/classifier/mlp/NeuralNetworkFunctions.java
* /mahout/trunk/core/src/test/java/org/apache/mahout/classifier/mlp
* 
/mahout/trunk/core/src/test/java/org/apache/mahout/classifier/mlp/TestMultilayerPerceptron.java
* 
/mahout/trunk/core/src/test/java/org/apache/mahout/classifier/mlp/TestNeuralNetwork.java


 Add Multilayer Perceptron 
 --

 Key: MAHOUT-1265
 URL: https://issues.apache.org/jira/browse/MAHOUT-1265
 Project: Mahout
  Issue Type: New Feature
Reporter: Yexi Jiang
Assignee: Suneel Marthi
  Labels: machine_learning, neural_network
 Fix For: 0.9

 Attachments: MAHOUT-1265.patch, Mahout-1265-17.patch



[jira] [Commented] (MAHOUT-1265) Add Multilayer Perceptron

2013-12-19 Thread Ted Dunning (JIRA)

[ 
https://issues.apache.org/jira/browse/MAHOUT-1265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13853712#comment-13853712
 ] 

Ted Dunning commented on MAHOUT-1265:
-

{quote}
Great. I am thinking about a MapReduce version of the MLP. It may take a 
non-trivial amount of time.
{quote}
Let's talk on the mailing list.  I really think that a downpour architecture 
will not be much harder than a map-reduce implementation and will be orders of 
magnitude faster.

 Add Multilayer Perceptron 
 --

 Key: MAHOUT-1265
 URL: https://issues.apache.org/jira/browse/MAHOUT-1265
 Project: Mahout
  Issue Type: New Feature
Reporter: Yexi Jiang
Assignee: Suneel Marthi
  Labels: machine_learning, neural_network
 Fix For: 0.9

 Attachments: MAHOUT-1265.patch, Mahout-1265-17.patch



Wiki cleanup

2013-12-19 Thread Isabel Drost-Fromm
Hi,

seems like after the Confluence account lock-down I'm lacking
permissions to delete pages in our wiki (currently using the account
that is linked to my @apache.org address to log in; login name
mainec).

Could someone with enough karma please add me back (if there's another
account that looks like me and has permissions, please let me know.)


Isabel


[jira] [Resolved] (MAHOUT-1383) Download link on Mahout main page still points to Confluence

2013-12-19 Thread Isabel Drost-Fromm (JIRA)

 [ 
https://issues.apache.org/jira/browse/MAHOUT-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Isabel Drost-Fromm resolved MAHOUT-1383.


Resolution: Fixed

 Download link on Mahout main page still points to Confluence
 

 Key: MAHOUT-1383
 URL: https://issues.apache.org/jira/browse/MAHOUT-1383
 Project: Mahout
  Issue Type: Bug
Affects Versions: 0.8
Reporter: Isabel Drost-Fromm
 Fix For: 0.9


 The download button on the Mahout main page (and all sub-pages as it is part 
 of the template) still points to the Confluence Wiki page. Need to re-direct 
 this to the new, fixed CMS page.
 Code lives in svn under site/mahout-cms/templates/standard.html



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


Re: Wiki cleanup

2013-12-19 Thread Andrew Musselman
Likewise I'd like to help, if someone can grant me permission to edit please do.

 On Dec 19, 2013, at 11:11 PM, Isabel Drost-Fromm isa...@apache.org wrote:
 
 Hi,
 
 seems like after the Confluence account lock-down I'm lacking
 permissions to delete pages in our wiki (currently using the account
 that is linked to my @apache.org address to log-in, login name
 mainec).
 
 Could someone with enough karma please add me back (if there's another
 account that looks like me and has permissions, please let me know.)
 
 
 Isabel


Re: Wiki cleanup

2013-12-19 Thread Suneel Marthi
Grant has all the edit Grants!!





On Friday, December 20, 2013 2:28 AM, Andrew Musselman 
andrew.mussel...@gmail.com wrote:
 
Likewise I'd like to help, if someone can grant me permission to edit please do.


 On Dec 19, 2013, at 11:11 PM, Isabel Drost-Fromm isa...@apache.org wrote:
 
 Hi,
 
 seems like after the Confluence account lock-down I'm lacking
 permissions to delete pages in our wiki (currently using the account
 that is linked to my @apache.org address to log-in, login name
 mainec).
 
 Could someone with enough karma please add me back (if there's another
 account that looks like me and has permissions, please let me know.)
 
 
 Isabel

Re: Wiki cleanup

2013-12-19 Thread Suneel Marthi
I believe I have access to edit the Wiki; I had to request access from Grant.
Please send a message to Grant.





On Friday, December 20, 2013 2:12 AM, Isabel Drost-Fromm isa...@apache.org 
wrote:
 
Hi,

seems like after the Confluence account lock-down I'm lacking
permissions to delete pages in our wiki (currently using the account
that is linked to my @apache.org address to log-in, login name
mainec).

Could someone with enough karma please add me back (if there's another
account that looks like me and has permissions, please let me know.)


Isabel

Re: Wiki cleanup

2013-12-19 Thread Andrew Musselman
Perfect name :)

 On Dec 19, 2013, at 11:29 PM, Suneel Marthi suneel_mar...@yahoo.com wrote:
 
 Grant has all the edit Grants!!
 
 
 
 
 
 On Friday, December 20, 2013 2:28 AM, Andrew Musselman 
 andrew.mussel...@gmail.com wrote:
 
 Likewise I'd like to help, if someone can grant me permission to edit please 
 do.
 
 
 On Dec 19, 2013, at 11:11 PM, Isabel Drost-Fromm isa...@apache.org wrote:
 
 Hi,
 
 seems like after the Confluence account lock-down I'm lacking
 permissions to delete pages in our wiki (currently using the account
 that is linked to my @apache.org address to log-in, login name
 mainec).
 
 Could someone with enough karma please add me back (if there's another
 account that looks like me and has permissions, please let me know.)
 
 
 Isabel