[jira] [Commented] (SPARK-9277) SparseVector constructor must throw an error when declared number of elements less than array length

2015-07-25 Thread Andrey Vykhodtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641787#comment-14641787
 ] 

Andrey Vykhodtsev commented on SPARK-9277:
--

By the way, here is a case showing that just checking len(array) < size is
unreliable:

In [4]:
x = SparseVector(2, {1:1, 1:2, 1:3, 1:4, 5:5})
In [5]:
l = LabeledPoint(0, x)
In [6]:
r = sc.parallelize([l])
In [7]:
m = LogisticRegressionWithSGD.train(r)

Result:

Py4JJavaError: An error occurred while calling 
o38.trainLogisticRegressionModelWithSGD.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in 
stage 5.0 failed 1 times, most recent failure: Lost task 0.0 in stage 5.0 (TID 
5, localhost): java.lang.ArrayIndexOutOfBoundsException: 5
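
A minimal plain-Python sketch of what happens to that dict before SparseVector
ever sees it (nothing here is Spark-specific): the duplicate keys in the
literal are collapsed, so a length-based check passes while the out-of-range
index 5 survives.

# Plain-Python illustration (no Spark needed): dict literals silently
# collapse duplicate keys, so a len()-based check cannot see the problem.
d = {1:1, 1:2, 1:3, 1:4, 5:5}
print(d)       # {1: 4, 5: 5} -- the duplicate key 1 keeps only its last value
print(len(d))  # 2, equal to the declared size, so a length check does not fire
print(max(d))  # 5, out of range for size 2, which later overflows on the JVM side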


[jira] [Commented] (SPARK-9277) SparseVector constructor must throw an error when declared number of elements less than array length

2015-07-25 Thread Andrey Vykhodtsev (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14641555#comment-14641555
 ] 

Andrey Vykhodtsev commented on SPARK-9277:
--

Hi Joseph,

Will it be too expensive, performance-wise, to add the following check:

max index in the array < size?

From the correctness perspective it is the better thing to do.
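
A minimal sketch of the suggested guard, assuming it would sit where the
constructor already extracts the indices; _validate_indices is a hypothetical
helper name, not an existing Spark API. The cost is one O(nnz) pass over the
indices, which should be negligible next to building the vector itself.

def _validate_indices(size, indices):
    # Hypothetical helper: reject any index outside [0, size).
    if indices and (max(indices) >= size or min(indices) < 0):
        raise ValueError("index out of range for a vector of size %d: %r"
                         % (size, indices))

# With the vector from the test case (indices 1..5, declared size 2):
_validate_indices(2, sorted({1:1, 2:2, 3:3, 4:4, 5:5}.keys()))  # raises ValueError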




[jira] [Updated] (SPARK-9277) SparseVector constructor must throw an error when declared number of elements less than array length

2015-07-23 Thread Andrey Vykhodtsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Vykhodtsev updated SPARK-9277:
-
Attachment: SparseVector test.ipynb
SparseVector test.html


Attached is the notebook with the scenario and the full message:



[jira] [Created] (SPARK-9277) SparseVector constructor must throw an error when declared number of elements less than array length

2015-07-23 Thread Andrey Vykhodtsev (JIRA)
Andrey Vykhodtsev created SPARK-9277:


 Summary: SparseVector constructor must throw an error when 
declared number of elements less than array length
 Key: SPARK-9277
 URL: https://issues.apache.org/jira/browse/SPARK-9277
 Project: Spark
  Issue Type: Bug
  Components: MLlib
Affects Versions: 1.3.1
Reporter: Andrey Vykhodtsev
Priority: Minor


I found that one can create a SparseVector inconsistently, and it will lead to a 
Java error at runtime, for example when training LogisticRegressionWithSGD.

Here is the test case:


In [2]:
sc.version
Out[2]:
u'1.3.1'
In [13]:
from pyspark.mllib.linalg import SparseVector
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.classification import LogisticRegressionWithSGD
In [3]:
x = SparseVector(2, {1:1, 2:2, 3:3, 4:4, 5:5})
In [10]:
l = LabeledPoint(0, x)
In [12]:
r = sc.parallelize([l])
In [14]:
m = LogisticRegressionWithSGD.train(r)

Error:


Py4JJavaError: An error occurred while calling 
o86.trainLogisticRegressionModelWithSGD.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 7 in 
stage 11.0 failed 1 times, most recent failure: Lost task 7.0 in stage 11.0 
(TID 47, localhost): java.lang.ArrayIndexOutOfBoundsException: 2


Attached is the notebook with the scenario and the full message
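
For comparison, a sketch of the same non-zero entries constructed consistently:
indices are 0-based, so the declared size has to be strictly greater than the
largest index used (6 here, not 2).

from pyspark.mllib.linalg import SparseVector

# The report's vector declares size 2 but uses indices up to 5.
# On the affected version (1.3.1) this constructs without an error,
# even though it is inconsistent, and only fails later during training.
bad = SparseVector(2, {1:1, 2:2, 3:3, 4:4, 5:5})

# A consistent construction for the same non-zero entries.
good = SparseVector(6, {1:1, 2:2, 3:3, 4:4, 5:5})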


