[GitHub] spark issue #19152: [SPARK-21915][ML][PySpark] Model 1 and Model 2 ParamMaps...

2017-09-13 Thread marktab
Github user marktab commented on the issue:

https://github.com/apache/spark/pull/19152
  
@srowen  -- may I close this pull request? 


---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #19152: [SPARK-21915][ML][PySpark] Model 1 and Model 2 Pa...

2017-09-13 Thread marktab
GitHub user marktab reopened a pull request:

https://github.com/apache/spark/pull/19152

[SPARK-21915][ML][PySpark] Model 1 and Model 2 ParamMaps Missing

@dongjoon-hyun @HyukjinKwon

Error in the PySpark example code:
/examples/src/main/python/ml/estimator_transformer_param_example.py

The original Scala code says:
println("Model 2 was fit using parameters: " +
model2.parent.extractParamMap)

The parent is `lr`.

There is no method in PySpark for accessing the parent as is done in Scala.

This code has been tested in Python, and it returns values consistent with
Scala.

## What changes were proposed in this pull request?

Proposing to call the `lr` variable instead of `model1` or `model2`.

## How was this patch tested?

This patch was tested with Spark 2.1.0, comparing the Scala and PySpark
results. PySpark currently prints nothing for those two print lines.

The output for model2 in PySpark should be:

{Param(parent='LogisticRegression_4187be538f744d5a9090', name='tol', 
doc='the convergence tolerance for iterative algorithms (>= 0).'): 1e-06,
Param(parent='LogisticRegression_4187be538f744d5a9090', 
name='elasticNetParam', doc='the ElasticNet mixing parameter, in range [0, 1]. 
For alpha = 0, the penalty is an L2 penalty. For alpha = 1, it is an L1 
penalty.'): 0.0,
Param(parent='LogisticRegression_4187be538f744d5a9090', 
name='predictionCol', doc='prediction column name.'): 'prediction',
Param(parent='LogisticRegression_4187be538f744d5a9090', name='featuresCol', 
doc='features column name.'): 'features',
Param(parent='LogisticRegression_4187be538f744d5a9090', name='labelCol', 
doc='label column name.'): 'label',
Param(parent='LogisticRegression_4187be538f744d5a9090', 
name='probabilityCol', doc='Column name for predicted class conditional 
probabilities. Note: Not all models output well-calibrated probability 
estimates! These probabilities should be treated as confidences, not precise 
probabilities.'): 'myProbability',
Param(parent='LogisticRegression_4187be538f744d5a9090', 
name='rawPredictionCol', doc='raw prediction (a.k.a. confidence) column 
name.'): 'rawPrediction',
Param(parent='LogisticRegression_4187be538f744d5a9090', name='family', 
doc='The name of family which is a description of the label distribution to be 
used in the model. Supported options: auto, binomial, multinomial'): 'auto',
Param(parent='LogisticRegression_4187be538f744d5a9090', 
name='fitIntercept', doc='whether to fit an intercept term.'): True,
Param(parent='LogisticRegression_4187be538f744d5a9090', name='threshold', 
doc='Threshold in binary classification prediction, in range [0, 1]. If 
threshold and thresholds are both set, they must match.e.g. if threshold is p, 
then thresholds must be equal to [1-p, p].'): 0.55,
Param(parent='LogisticRegression_4187be538f744d5a9090', 
name='aggregationDepth', doc='suggested depth for treeAggregate (>= 2).'): 2,
Param(parent='LogisticRegression_4187be538f744d5a9090', name='maxIter', 
doc='max number of iterations (>= 0).'): 30,
Param(parent='LogisticRegression_4187be538f744d5a9090', name='regParam', 
doc='regularization parameter (>= 0).'): 0.1,
Param(parent='LogisticRegression_4187be538f744d5a9090', 
name='standardization', doc='whether to standardize the training features 
before fitting the model.'): True}

Please review http://spark.apache.org/contributing.html before opening a 
pull request.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/marktab/spark branch-2.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/19152.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #19152


commit a2ccb8a83d13d39c95f0ac1cac1c74dca064
Author: MarkTab marktab.net <mark...@users.noreply.github.com>
Date:   2017-09-07T02:20:59Z

Model 1 and Model 2 ParamMaps Missing

@dongjoon-hyun @HyukjinKwon

Error in PySpark example code:

[https://github.com/apache/spark/blob/master/examples/src/main/python/ml/estimator_transformer_param_example.py]

The original Scala code says
println("Model 2 was fit using parameters: " + 
model2.parent.extractParamMap)

The parent is lr

There is no method for accessing parent as is done in Scala.

This code has been tested in Python, and returns values consistent with 
Scala




---




[GitHub] spark pull request #19152: [SPARK-21915][ML][PySpark] Model 1 and Model 2 Pa...

2017-09-13 Thread marktab
Github user marktab closed the pull request at:

https://github.com/apache/spark/pull/19152


---




[GitHub] spark pull request #19152: [SPARK-21915][ML][PySpark] Model 1 and Model 2 Pa...

2017-09-06 Thread marktab
GitHub user marktab opened a pull request:

https://github.com/apache/spark/pull/19152

[SPARK-21915][ML][PySpark] Model 1 and Model 2 ParamMaps Missing

---




[GitHub] spark issue #19126: [SPARK-21915][ML][PySpark]Model 1 and Model 2 ParamMaps ...

2017-09-06 Thread marktab
Github user marktab commented on the issue:

https://github.com/apache/spark/pull/19126
  
@srowen since I am new to this review process, should I be seeing the
change at http://spark.apache.org/docs/latest/ml-pipeline.html ?


---




[GitHub] spark pull request #19126: [SPARK-21915][ML][PySpark]Model 1 and Model 2 Par...

2017-09-06 Thread marktab
Github user marktab closed the pull request at:

https://github.com/apache/spark/pull/19126


---




[GitHub] spark issue #19126: [SPARK-21915][ML][PySpark]Model 1 and Model 2 ParamMaps ...

2017-09-06 Thread marktab
Github user marktab commented on the issue:

https://github.com/apache/spark/pull/19126
  
Thanks @srowen 


---




[GitHub] spark pull request #19126: Model 1 and Model 2 ParamMaps Missing

2017-09-04 Thread marktab
GitHub user marktab opened a pull request:

https://github.com/apache/spark/pull/19126

Model 1 and Model 2 ParamMaps Missing

The original Scala code says:
println("Model 2 was fit using parameters: " +
model2.parent.extractParamMap)

The parent is `lr`.

There is no method in PySpark for accessing the parent as is done in Scala.

This code has been tested in Python, and it returns values consistent with
Scala.

## What changes were proposed in this pull request?

(Please fill in changes proposed in this fix)

## How was this patch tested?

(Please explain how this patch was tested. E.g. unit tests, integration 
tests, manual tests)
(If this patch involves UI changes, please attach a screenshot; otherwise, 
remove this)

Please review http://spark.apache.org/contributing.html before opening a 
pull request.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/marktab/spark master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/19126.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #19126


commit 76e5da7b14d71338cf82352e9cf5628640e732a2
Author: MarkTab marktab.net <mark...@users.noreply.github.com>
Date:   2017-09-05T03:26:07Z

Model 1 and Model 2 ParamMaps Missing

The original Scala code says
println("Model 2 was fit using parameters: " + 
model2.parent.extractParamMap)

The parent is lr

There is no method for accessing parent as is done in Scala.

This code has been tested in Python, and returns values consistent with 
Scala




---




Re: [R] trouble with uninstall

2011-08-07 Thread marktab
Hello

I had the same problem today.

Error Message:  This Installation can only be uninstalled on 64-bit
Windows. 
Computer System:  Windows 7 Enterprise SP1 32-bit
R Version attempted:  R 2.13.1 (installed as part of the R and Friends
package from http://rcom.univie.ac.at/download.html)

I have R 2.12.0 successfully running on another machine using Windows 7
64-bit.

The reason I tried to uninstall is that the installation hung at a
certain point.  I chose the basic R installation path only.  Since the
installation was unsuccessful, I logged off and tried again a few times.
Then, when I tried to uninstall, the software gave an error message.

I believe version 2.13.1 for Windows was packaged incorrectly, thus causing
both installation and uninstallation problems.

It is possible to uninstall manually.  If someone knows of a fix to this
installation or uninstallation problem, please post to this thread (which
might help other people who have the same problem).  

My resolution will be to drop back to an earlier version of R, and then wait
for a future version beyond 2.13.1.  It is sad that this particular build
has some flaws.



-

Mark Tabladillo
Data Mining Architect
Microsoft MVP; SAS Developer
http://www.marktab.net
--
View this message in context: 
http://r.789695.n4.nabble.com/trouble-with-uninstall-tp3694855p3725426.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [Freeswitch-users] Problem dialing out via E1

2009-03-18 Thread MarkTab

We're a couple more steps forward from yesterday. It turned out some of my
regex was incorrect, and example #9 in the Freeswitch Dialplan Wiki has an
extra space before one of the closing brackets in the default.xml example.
After staring at the screen all day, it's funny how you miss these things!

The situation now is that I can get the call into FS, but it rings the
extension for a fraction of a second and then the call drops. Here are the
contents of the public and default dialplans I'm using (as per the example
in the wiki) and the debug: http://pastebin.freeswitch.org/7819

I'm also seeing another issue when placing subsequent inbound calls: they
bounce if they hit the same channel the first call came in on (typically
/1:1). Again, I grabbed a debug of this: http://pastebin.freeswitch.org/7818

Getting there (slowly)

Mark.


mercutioviz wrote:
 
 On Tue, Mar 17, 2009 at 4:24 AM, Mark Tabron
 mark.tab...@rnid-typetalk.org.uk wrote:
 Another update - this time (part) good news! Decided to run wancfg_tdmapi
 again, using the same settings as we always did, and we can now make
 external calls. I suspect that whatever BT did yesterday kicked the
 circuit back into life.
 
 Good. I can't tell you how many times I've spoken to a telco when
 there's a problem and the circuit magically comes back to life. They
 frequently claim, "We didn't do anything." I think that's a euphemism
 for "we did a reset and prayed."
 

 However placing an external call into FS isn't as successful, looks like
 it can't assign a channel and terminates the call.

 
 Be sure that you have some routing mechanism in your public.xml file.
 Do you have a whole block of DID numbers? Anyway, pastebin your
 public.xml and a debug trace of an incoming call, including what phone
 number the caller dialed, and we'll take a look.
 
 -MC
 
 ___
 Freeswitch-users mailing list
 Freeswitch-users@lists.freeswitch.org
 http://lists.freeswitch.org/mailman/listinfo/freeswitch-users
 UNSUBSCRIBE:http://lists.freeswitch.org/mailman/options/freeswitch-users
 http://www.freeswitch.org
 
 

-- 
View this message in context: 
http://www.nabble.com/Problem-dialing-out-via-E1-tp22479047p22582281.html
Sent from the Freeswitch-users mailing list archive at Nabble.com.

