[GitHub] [spark] zhengruifeng commented on a change in pull request #25178: [SPARK-28421][ML] SparseVector.apply performance optimization

2019-07-21 Thread GitBox
zhengruifeng commented on a change in pull request #25178: [SPARK-28421][ML] 
SparseVector.apply performance optimization
URL: https://github.com/apache/spark/pull/25178#discussion_r305653654
 
 

 ##
 File path: mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala
 ##
 @@ -603,6 +603,19 @@ class SparseVector @Since("2.0.0") (
 
   private[spark] override def asBreeze: BV[Double] = new BSV[Double](indices, values, size)
 
+  override def apply(i: Int): Double = {
+    if (i < 0 || i >= size) {
+      throw new IndexOutOfBoundsException(s"Index $i out of bounds [0, $size)")
+    }
+
+    if (indices.isEmpty || i < indices(0) || i > indices(indices.length - 1)) {
 
 Review comment:
   The existing `SparseVector` does not override the `apply` method inherited from `Vector`:
   ```
 /**
  * Gets the value of the ith element.
  * @param i index
  */
 @Since("2.0.0")
 def apply(i: Int): Double = asBreeze(i)
   ```
   
   So a `spark.ml.linalg.SparseVector` will first be converted to a `breeze.collection.mutable.SparseArray` and then to a `breeze.linalg.SparseVector` on every call to `apply`.
   
   As for the range check, I think it is just a tiny optimization; a sketch of the full method follows.
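   For reference, a minimal standalone sketch of the same lookup (the diff above is truncated, so the exact body in the PR may differ; the helper name `sparseApply` is made up for illustration):
   ```
   import java.util.Arrays

   def sparseApply(indices: Array[Int], values: Array[Double], size: Int, i: Int): Double = {
     if (i < 0 || i >= size) {
       throw new IndexOutOfBoundsException(s"Index $i out of bounds [0, $size)")
     }
     if (indices.isEmpty || i < indices(0) || i > indices(indices.length - 1)) {
       0.0                              // index not stored, so the value is the implicit zero
     } else {
       val j = Arrays.binarySearch(indices, i)
       if (j >= 0) values(j) else 0.0   // negative result from binarySearch means "not found"
     }
   }
   ```
   Inside `SparseVector` the same logic would use the class's own `indices`, `values` and `size`, avoiding the Breeze conversion entirely.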


[GitHub] [spark] zhengruifeng commented on a change in pull request #25178: [SPARK-28421][ML] SparseVector.apply performance optimization

2019-07-18 Thread GitBox
zhengruifeng commented on a change in pull request #25178: [SPARK-28421][ML] 
SparseVector.apply performance optimization
URL: https://github.com/apache/spark/pull/25178#discussion_r305176615
 
 

 ##
 File path: mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala
 ##
 @@ -603,6 +603,19 @@ class SparseVector @Since("2.0.0") (
 
   private[spark] override def asBreeze: BV[Double] = new BSV[Double](indices, values, size)
 
+  override def apply(i: Int): Double = {
+    if (i < 0 || i >= size) {
+      throw new IndexOutOfBoundsException(s"Index $i out of bounds [0, $size)")
+    }
+
+    if (indices.isEmpty || i < indices(0) || i > indices(indices.length - 1)) {
 
 Review comment:
   @srowen I added the checks because the implementation of `findOffset` in `breeze.collection.mutable.SparseArray` carries the comment `// special case for end of list - this is a big win for growing sparse arrays`, and I think the same reasoning applies here.


[GitHub] [spark] zhengruifeng commented on a change in pull request #25178: [SPARK-28421][ML] SparseVector.apply performance optimization

2019-07-18 Thread GitBox
zhengruifeng commented on a change in pull request #25178: [SPARK-28421][ML] 
SparseVector.apply performance optimization
URL: https://github.com/apache/spark/pull/25178#discussion_r305176094
 
 

 ##
 File path: mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala
 ##
 @@ -603,6 +603,19 @@ class SparseVector @Since("2.0.0") (
 
   private[spark] override def asBreeze: BV[Double] = new BSV[Double](indices, values, size)
 
+  override def apply(i: Int): Double = {
+    if (i < 0 || i >= size) {
+      throw new IndexOutOfBoundsException(s"Index $i out of bounds [0, $size)")
+    }
+
+    if (indices.isEmpty || i < indices(0) || i > indices(indices.length - 1)) {
 
 Review comment:
   You can see that as `nnz` grows, the speed-up decreases. That is because with a large `nnz`, the `O(log(nnz))` search complexity dominates the whole process. However, when `nnz` is small (the most common case), the conversion to Breeze is the main cost; see the timing sketch below.
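   To make the trend concrete, here is a rough timing sketch, not a rigorous benchmark (it ignores JIT warm-up and GC), comparing the two code paths outside Spark so that no `private[spark]` members are needed; the `bench` helper and its parameters are made up for illustration, and Breeze is assumed to be on the classpath:
   ```
   import java.util.Arrays
   import breeze.linalg.{SparseVector => BSV}

   // Compares element lookup via a freshly built Breeze vector (the inherited path)
   // against a direct binary search over the index array (the patched path).
   def bench(size: Int, nnz: Int, calls: Int): (Double, Double) = {
     val indices = Array.tabulate(nnz)(k => k * (size / nnz))   // sorted, strictly increasing
     val values  = Array.fill(nnz)(1.0)

     var t0 = System.nanoTime(); var s = 0.0; var c = 0
     while (c < calls) {                  // inherited-style: rebuild the Breeze vector per call
       val bv = new BSV[Double](indices, values, size)
       s += bv(c % size); c += 1
     }
     val viaBreeze = (System.nanoTime() - t0) / 1e6

     t0 = System.nanoTime(); s = 0.0; c = 0
     while (c < calls) {                  // patched-style: search the index array directly
       val j = Arrays.binarySearch(indices, c % size)
       s += (if (j >= 0) values(j) else 0.0); c += 1
     }
     val direct = (System.nanoTime() - t0) / 1e6
     (viaBreeze, direct)
   }

   Seq(10, 100, 1000, 10000).foreach { nnz =>
     val (b, d) = bench(size = 100000, nnz = nnz, calls = 200000)
     println(f"nnz=$nnz%5d  viaBreeze=$b%8.1f ms  direct=$d%8.1f ms")
   }
   ```
   With small `nnz` the `viaBreeze` column should be dominated by the per-call allocation; as `nnz` grows, the two columns converge because the `O(log(nnz))` search dominates both paths.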


[GitHub] [spark] zhengruifeng commented on a change in pull request #25178: [SPARK-28421][ML] SparseVector.apply performance optimization

2019-07-18 Thread GitBox
zhengruifeng commented on a change in pull request #25178: [SPARK-28421][ML] 
SparseVector.apply performance optimization
URL: https://github.com/apache/spark/pull/25178#discussion_r305174098
 
 

 ##
 File path: mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala
 ##
 @@ -603,6 +603,19 @@ class SparseVector @Since("2.0.0") (
 
   private[spark] override def asBreeze: BV[Double] = new BSV[Double](indices, values, size)
 
+  override def apply(i: Int): Double = {
+    if (i < 0 || i >= size) {
+      throw new IndexOutOfBoundsException(s"Index $i out of bounds [0, $size)")
+    }
+
+    if (indices.isEmpty || i < indices(0) || i > indices(indices.length - 1)) {
 
 Review comment:
   @srowen @kiszk 
   On each call of `SparseVector.apply`, a conversion to `breeze.linalg.SparseVector` and `breeze.collection.mutable.SparseArray` is performed internally.
   The improvement comes from avoiding this conversion.
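   Putting the two inherited pieces quoted earlier in this thread side by side makes the per-call cost explicit:
   ```
   // Inherited from Vector: every element access delegates to the Breeze view ...
   def apply(i: Int): Double = asBreeze(i)

   // ... and asBreeze builds a brand-new Breeze vector (backed by a SparseArray) each time.
   private[spark] override def asBreeze: BV[Double] = new BSV[Double](indices, values, size)
   ```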


[GitHub] [spark] zhengruifeng commented on a change in pull request #25178: [SPARK-28421][ML] SparseVector.apply performance optimization

2019-07-17 Thread GitBox
zhengruifeng commented on a change in pull request #25178: [SPARK-28421][ML] 
SparseVector.apply performance optimization
URL: https://github.com/apache/spark/pull/25178#discussion_r304713452
 
 

 ##
 File path: mllib-local/src/main/scala/org/apache/spark/ml/linalg/Vectors.scala
 ##
 @@ -603,6 +603,19 @@ class SparseVector @Since("2.0.0") (
 
   private[spark] override def asBreeze: BV[Double] = new BSV[Double](indices, values, size)
 
+  override def apply(i: Int): Double = {
+    if (i < 0 || i >= size) {
+      throw new IndexOutOfBoundsException(s"Index $i out of bounds [0, $size)")
+    }
+
+    if (indices.isEmpty || i < indices(0) || i > indices(indices.length - 1)) {
 
 Review comment:
   1. The implementation of `Arrays.binarySearch` does not check the range:
   ```
   public static int binarySearch(int[] a, int key) {
       return binarySearch0(a, 0, a.length, key);
   }

   // Like public version, but without range checks.
   private static int binarySearch0(long[] a, int fromIndex, int toIndex,
                                    long key) {
       int low = fromIndex;
       int high = toIndex - 1;

       while (low <= high) {
           int mid = (low + high) >>> 1;
           long midVal = a[mid];

           if (midVal < key)
               low = mid + 1;
           else if (midVal > key)
               high = mid - 1;
           else
               return mid; // key found
       }
       return -(low + 1);  // key not found.
   }
   ```
   2. In `breeze.collection.mutable.SparseArray`, the `findOffset` function called by `apply` to perform the binary search takes the special case where the key is beyond the stored range into account:
   ```
   if (used == 0) {
     // empty list do nothing
     -1
   } else {
     val index = this.index
     if (i > index(used - 1)) {
       // special case for end of list - this is a big win for growing sparse arrays
       ~used
   ```
   
   So I added these simple checks ahead of the binary search; the sketch below shows how they compose with `Arrays.binarySearch`.
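   In other words, the added guards cover the cases that `findOffset` special-cases, and the remaining lookup just interprets the return value of `Arrays.binarySearch`. A small self-contained illustration (the arrays here are made up):
   ```
   import java.util.Arrays

   val indices = Array(2, 5, 9)          // stored positions of a tiny example sparse vector
   val values  = Array(1.0, 2.0, 3.0)

   // binarySearch returns the offset when the key is found, and -(insertionPoint + 1)
   // when it is not, so any negative result maps to the sparse vector's implicit zero.
   def valueAt(i: Int): Double = {
     val j = Arrays.binarySearch(indices, i)
     if (j >= 0) values(j) else 0.0
   }

   assert(valueAt(5) == 2.0)             // key found at offset 1
   assert(valueAt(4) == 0.0)             // key missing: binarySearch returned -(1 + 1) = -2
   ```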

