[GitHub] spark pull request: SPARK-5984: Fix TimSort bug causes ArrayOutOfB...

2015-02-28 Thread hotou
Github user hotou commented on the pull request:

https://github.com/apache/spark/pull/4804#issuecomment-76567920
  
Thanks for the review, guys.





[GitHub] spark pull request: SPARK-5984: Fix TimSort bug causes ArrayOutOfB...

2015-02-27 Thread hotou
Github user hotou commented on the pull request:

https://github.com/apache/spark/pull/4804#issuecomment-76509270
  
@srowen Sounds good, done.





[GitHub] spark pull request: SPARK-5984: Fix TimSort bug causes ArrayOutOfB...

2015-02-27 Thread hotou
Github user hotou commented on the pull request:

https://github.com/apache/spark/pull/4804#issuecomment-76508685
  
@srowen I did what you recommended here. This passed the RAT test on my
machine, at least.





[GitHub] spark pull request: SPARK-5984: Fix TimSort bug causes ArrayOutOfB...

2015-02-27 Thread hotou
Github user hotou commented on a diff in the pull request:

https://github.com/apache/spark/pull/4804#discussion_r25553389
  
--- Diff: core/src/test/java/org/apache/spark/util/collection/TestTimSort.java ---
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.util.collection;
+
+import java.util.*;
+
+/**
+ * This code generates an int array which fails the standard TimSort.
+ *
+ * The blog that reported the bug:
+ * http://www.envisage-project.eu/timsort-specification-and-verification/
+ *
+ * The algorithm to reproduce the bug is obtained from the reporter of the bug:
+ * https://github.com/abstools/java-timsort-bug
+ *
+ * Licensed under Apache License 2.0
+ * https://github.com/abstools/java-timsort-bug/blob/master/LICENSE
+ */
+public class TestTimSort {
+
+  private static final int MIN_MERGE = 32;
+
+  /**
+   * Returns an array of integers that demonstrates the bug in TimSort.
+   */
+  public static int[] getTimSortBugTestSet(int length) {
+    int minRun = minRunLength(length);
+    List<Long> runs = runsJDKWorstCase(minRun, length);
+    return createArray(runs, length);
+  }
+
+  private static int minRunLength(int n) {
+    int r = 0; // Becomes 1 if any 1 bits are shifted off
+    while (n >= MIN_MERGE) {
+      r |= (n & 1);
+      n >>= 1;
+    }
+    return n + r;
+  }
+
+  private static int[] createArray(List<Long> runs, int length) {
+    int[] a = new int[length];
+    Arrays.fill(a, 0);
+    int endRun = -1;
+    for (long len : runs)
+      a[endRun += len] = 1;
+    a[length - 1] = 0;
+    return a;
+  }
+
+  /**
+   * Fills runs with a sequence of run lengths of the form
+   * Y_n     x_{n,1}   x_{n,2}   ... x_{n,l_n}
+   * Y_{n-1} x_{n-1,1} x_{n-1,2} ... x_{n-1,l_{n-1}}
+   * ...
+   * Y_1     x_{1,1}   x_{1,2}   ... x_{1,l_1}
+   * The Y_i's are chosen to satisfy the invariant throughout execution,
+   * but the x_{i,j}'s are merged (by TimSort.mergeCollapse)
+   * into an X_i that violates the invariant.
+   *
+   * @param length The sum of all run lengths that will be added to runs.
+   */
+  private static List<Long> runsJDKWorstCase(int minRun, int length) {
+    List<Long> runs = new ArrayList<>();
+
+    long runningTotal = 0, Y = minRun + 4, X = minRun;
+
+    while (runningTotal + Y + X <= length) {
+      runningTotal += X + Y;
+      generateJDKWrongElem(runs, minRun, X);
+      runs.add(0, Y);
+      // X_{i+1} = Y_i + x_{i,1} + 1, since runs.get(1) = x_{i,1}
+      X = Y + runs.get(1) + 1;
+      // Y_{i+1} = X_{i+1} + Y_i + 1
+      Y += X + 1;
+    }
+
+    if (runningTotal + X <= length) {
+      runningTotal += X;
+      generateJDKWrongElem(runs, minRun, X);
+    }
+
+    runs.add(length - runningTotal);
+    return runs;
--- End diff --

In SorterSuite I added a test that uses TestTimSort.java; a hedged sketch of
that kind of check follows below.

Yes, TestTimSort just generates an int[], but the array has to be at least
67108864 elements long, so I thought just posting a huge int[] literal would be
less useful than showing how the array is generated.

The original code was written to demonstrate the bug, so it had a main() and
some other stuff; I got rid of those.

I am fine with fixing the license here, if you guys bear with me a bit. I am
not that experienced with open-source licenses.





[GitHub] spark pull request: SPARK-5984: Fix TimSort bug causes ArrayOutOfB...

2015-02-27 Thread hotou
Github user hotou commented on a diff in the pull request:

https://github.com/apache/spark/pull/4804#discussion_r25552968
  
--- Diff: core/src/test/java/org/apache/spark/util/collection/TestTimSort.java ---
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.util.collection;
+
+import java.util.*;
+
+/**
+ * This code generates an int array which fails the standard TimSort.
+ *
+ * The blog that reported the bug:
+ * http://www.envisage-project.eu/timsort-specification-and-verification/
+ *
+ * The algorithm to reproduce the bug is obtained from the reporter of the bug:
+ * https://github.com/abstools/java-timsort-bug
+ *
+ * Licensed under Apache License 2.0
+ * https://github.com/abstools/java-timsort-bug/blob/master/LICENSE
--- End diff --

Well, it's not an exact copy; I made changes to the original code.

Do you guys have an IntelliJ style that I can import?





[GitHub] spark pull request: SPARK-5984: Fix TimSort bug causes ArrayOutOfB...

2015-02-27 Thread hotou
Github user hotou commented on the pull request:

https://github.com/apache/spark/pull/4804#issuecomment-76505114
  
ok





[GitHub] spark pull request: SPARK-5984: Fix TimSort bug causes ArrayOutOfB...

2015-02-27 Thread hotou
Github user hotou commented on the pull request:

https://github.com/apache/spark/pull/4804#issuecomment-76504910
  
Ah. I guess I have to include the license header in the .java file itself, not
just link to it.





[GitHub] spark pull request: SPARK-5984: Fix TimSort bug causes ArrayOutOfB...

2015-02-27 Thread hotou
Github user hotou commented on the pull request:

https://github.com/apache/spark/pull/4804#issuecomment-76503891
  
@rxin @srowen Thanks for the review. I updated the comments, license info, etc.





[GitHub] spark pull request: SPARK-5984: Fix TimSort bug causes ArrayOutOfB...

2015-02-26 Thread hotou
GitHub user hotou opened a pull request:

https://github.com/apache/spark/pull/4804

SPARK-5984: Fix TimSort bug causes ArrayOutOfBoundsException

Fix TimSort bug which causes an ArrayOutOfBoundsException.

Using the proposed fix here

http://envisage-project.eu/proving-android-java-and-python-sorting-algorithm-is-broken-and-how-to-fix-it/

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hotou/spark SPARK-5984

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/4804.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4804


commit 479a106b6d699c299e5710a5b4dcdf7d45ceae65
Author: Evan Yu 
Date:   2015-02-27T04:23:28Z

SPARK-5984: Fix TimSort bug causes ArrayOutOfBoundsException







[GitHub] spark pull request: [SPARK-5860][CORE] JdbcRDD: overflow on large ...

2015-02-21 Thread hotou
Github user hotou commented on a diff in the pull request:

https://github.com/apache/spark/pull/4701#discussion_r25123040
  
--- Diff: core/src/test/scala/org/apache/spark/rdd/JdbcRDDSuite.scala ---
@@ -29,22 +29,42 @@ class JdbcRDDSuite extends FunSuite with BeforeAndAfter with LocalSparkContext {
     Class.forName("org.apache.derby.jdbc.EmbeddedDriver")
     val conn = DriverManager.getConnection("jdbc:derby:target/JdbcRDDSuiteDb;create=true")
     try {
-      val create = conn.createStatement
-      create.execute("""
-        CREATE TABLE FOO(
-          ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
-          DATA INTEGER
-        )""")
-      create.close()
-      val insert = conn.prepareStatement("INSERT INTO FOO(DATA) VALUES(?)")
-      (1 to 100).foreach { i =>
-        insert.setInt(1, i * 2)
-        insert.executeUpdate
+
+      try {
+        val create = conn.createStatement
+        create.execute("""
+          CREATE TABLE FOO(
+            ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
+            DATA INTEGER
+          )""")
+        create.close()
+        val insert = conn.prepareStatement("INSERT INTO FOO(DATA) VALUES(?)")
+        (1 to 100).foreach { i =>
+          insert.setInt(1, i * 2)
+          insert.executeUpdate
+        }
+        insert.close()
+      } catch {
+        case e: SQLException if e.getSQLState == "X0Y32" =>
+        // table exists
       }
-      insert.close()
-    } catch {
-      case e: SQLException if e.getSQLState == "X0Y32" =>
+
+      try {
+        val create = conn.createStatement
+        create.execute("CREATE TABLE BIGINT_TEST(ID BIGINT NOT NULL, DATA INTEGER)")
+        create.close()
+        val insert = conn.prepareStatement("INSERT INTO BIGINT_TEST VALUES(?,?)")
+        (1 to 100).foreach { i =>
+          insert.setLong(1, 10L + 4000L * i)
+          insert.setInt(2, i)
+          insert.executeUpdate
+        }
+        insert.close()
+      } catch {
--- End diff --

There was a problem when I tried to do that. The original author uses the inner
catch block to avoid re-creating the table:

    catch {
      case e: SQLException if e.getSQLState == "X0Y32" =>
      // table exists
    }

which means such a catch block has to exist for each table being created. I was
simply following that pattern.

An alternative would be to drop and re-create each table every time, which
produces cleaner code but may slow down the test suite a little; see the sketch
below.
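
For what it's worth, a minimal sketch of that drop-and-re-create alternative;
the helper name is mine, and "42Y55" is my understanding of Derby's SQL state
for dropping a table that does not exist:

    def recreateTable(conn: java.sql.Connection, name: String, ddl: String): Unit = {
      val stmt = conn.createStatement()
      try {
        // Drop the table if a previous run left it behind.
        try stmt.execute(s"DROP TABLE $name")
        catch { case e: java.sql.SQLException if e.getSQLState == "42Y55" => } // no such table
        // Re-create it so every run starts from a clean, empty table.
        stmt.execute(ddl)
      } finally {
        stmt.close()
      }
    }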






[GitHub] spark pull request: [SPARK-5860][CORE] JdbcRDD: overflow on large ...

2015-02-20 Thread hotou
Github user hotou commented on a diff in the pull request:

https://github.com/apache/spark/pull/4701#discussion_r25077261
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -64,8 +64,8 @@ class JdbcRDD[T: ClassTag](
     // bounds are inclusive, hence the + 1 here and - 1 on end
     val length = 1 + upperBound - lowerBound
     (0 until numPartitions).map(i => {
-      val start = lowerBound + ((i * length) / numPartitions).toLong
-      val end = lowerBound + (((i + 1) * length) / numPartitions).toLong - 1
+      val start = lowerBound + ((BigDecimal(i) * length) / numPartitions).toLong
--- End diff --

Done





[GitHub] spark pull request: [SPARK-5860][CORE] JdbcRDD: overflow on large ...

2015-02-20 Thread hotou
Github user hotou commented on a diff in the pull request:

https://github.com/apache/spark/pull/4701#discussion_r25076778
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -64,8 +64,8 @@ class JdbcRDD[T: ClassTag](
     // bounds are inclusive, hence the + 1 here and - 1 on end
     val length = 1 + upperBound - lowerBound
     (0 until numPartitions).map(i => {
-      val start = lowerBound + ((i * length) / numPartitions).toLong
-      val end = lowerBound + (((i + 1) * length) / numPartitions).toLong - 1
+      val start = lowerBound + ((BigDecimal(i) * length) / numPartitions).toLong
--- End diff --

I think there is a problem here: length = 1 + upperBound - lowerBound can
itself overflow, for example if lowerBound = 0 and upperBound = Long.MAX_VALUE.
I should fix this too; it actually makes the change smaller.
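
A quick, hedged illustration of that wrap-around:

    val lowerBound = 0L
    val upperBound = Long.MaxValue
    // The inclusive length no longer fits in a Long:
    // 1 + Long.MaxValue wraps around to Long.MinValue.
    val length = 1 + upperBound - lowerBound
    println(length) // -9223372036854775808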





[GitHub] spark pull request: [SPARK-5860][CORE] JdbcRDD: overflow on large ...

2015-02-20 Thread hotou
Github user hotou commented on a diff in the pull request:

https://github.com/apache/spark/pull/4701#discussion_r25075949
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -64,8 +64,8 @@ class JdbcRDD[T: ClassTag](
     // bounds are inclusive, hence the + 1 here and - 1 on end
     val length = 1 + upperBound - lowerBound
     (0 until numPartitions).map(i => {
-      val start = lowerBound + ((i * length) / numPartitions).toLong
-      val end = lowerBound + (((i + 1) * length) / numPartitions).toLong - 1
+      val start = lowerBound + ((BigDecimal(i) * length) / numPartitions).toLong
--- End diff --

@srowen Sorry, I don't understand how the last partition can get >= upperBound.

With this code:

    val length = 1 + upperBound - lowerBound
    (0 until numPartitions).map(i => {
      val start = lowerBound + ((BigInt(i) * length) / numPartitions).toLong
      val end = lowerBound + ((BigInt(i + 1) * length) / numPartitions).toLong - 1
      new JdbcPartition(i, start, end)
    })

the last iteration has i = numPartitions - 1, so:

    end = lowerBound + (numPartitions - 1 + 1) * (1 + upperBound - lowerBound) / numPartitions - 1
        = lowerBound + (numPartitions / numPartitions) * (1 + upperBound - lowerBound) - 1
        = lowerBound + 1 + upperBound - lowerBound - 1
        = upperBound
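
A small, hedged check of that algebra (length is computed in BigInt here as
well, per the earlier comment that it can itself overflow; lastEnd is just an
illustrative name):

    def lastEnd(lowerBound: Long, upperBound: Long, numPartitions: Int): Long = {
      // Inclusive length, kept in BigInt so the full Long range is safe.
      val length = BigInt(upperBound) - lowerBound + 1
      val i = numPartitions - 1 // index of the last partition
      (BigInt(lowerBound) + (BigInt(i + 1) * length) / numPartitions - 1).toLong
    }

    assert(lastEnd(1L, 100L, 8) == 100L)
    assert(lastEnd(0L, Long.MaxValue, 20) == Long.MaxValue)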







[GitHub] spark pull request: [SPARK-5860][CORE] JdbcRDD: overflow on large ...

2015-02-20 Thread hotou
Github user hotou commented on a diff in the pull request:

https://github.com/apache/spark/pull/4701#discussion_r25074390
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -64,8 +64,8 @@ class JdbcRDD[T: ClassTag](
     // bounds are inclusive, hence the + 1 here and - 1 on end
     val length = 1 + upperBound - lowerBound
     (0 until numPartitions).map(i => {
-      val start = lowerBound + ((i * length) / numPartitions).toLong
-      val end = lowerBound + (((i + 1) * length) / numPartitions).toLong - 1
+      val start = lowerBound + ((BigDecimal(i) * length) / numPartitions).toLong
--- End diff --

@srowen

1. BigInt in Scala actually works; I should change to that.
2. The overflow does not necessarily hit only the last partition; in my test
case it was the last 4 partitions.
3. I think the current algorithm gives us better partitions than the
fixed-interval algorithm, for the reason in the comment above. Would you agree
with that?
4. The current algorithm does +1 on the length and -1 on the end of each
interval; I think that works fine.






[GitHub] spark pull request: [SPARK-5860][CORE] JdbcRDD: overflow on large ...

2015-02-20 Thread hotou
Github user hotou commented on a diff in the pull request:

https://github.com/apache/spark/pull/4701#discussion_r25073802
  
--- Diff: core/src/test/scala/org/apache/spark/rdd/JdbcRDDSuite.scala ---
@@ -29,22 +29,42 @@ class JdbcRDDSuite extends FunSuite with BeforeAndAfter with LocalSparkContext {
     Class.forName("org.apache.derby.jdbc.EmbeddedDriver")
     val conn = DriverManager.getConnection("jdbc:derby:target/JdbcRDDSuiteDb;create=true")
     try {
-      val create = conn.createStatement
-      create.execute("""
-        CREATE TABLE FOO(
-          ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
-          DATA INTEGER
-        )""")
-      create.close()
-      val insert = conn.prepareStatement("INSERT INTO FOO(DATA) VALUES(?)")
-      (1 to 100).foreach { i =>
-        insert.setInt(1, i * 2)
-        insert.executeUpdate
+
--- End diff --

Oh, that's because I need to create a new table, and I put that and the old
table creation inside one single

    try {
      ...
    } finally {
      conn.close()
    }





[GitHub] spark pull request: [SPARK-5860][CORE] JdbcRDD: overflow on large ...

2015-02-20 Thread hotou
Github user hotou commented on a diff in the pull request:

https://github.com/apache/spark/pull/4701#discussion_r25072880
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -64,8 +64,8 @@ class JdbcRDD[T: ClassTag](
     // bounds are inclusive, hence the + 1 here and - 1 on end
     val length = 1 + upperBound - lowerBound
     (0 until numPartitions).map(i => {
-      val start = lowerBound + ((i * length) / numPartitions).toLong
-      val end = lowerBound + (((i + 1) * length) / numPartitions).toLong - 1
+      val start = lowerBound + ((BigDecimal(i) * length) / numPartitions).toLong
--- End diff --

@rxin I actually favor the current partitioning algorithm; it's pretty neat in
a way. For example:

    lowerBound = 1
    upperBound = 100
    numPartitions = 8

With a fixed-length increase you get

    [1,13],[14,26],[27,39],[40,52],[53,65],[66,78],[79,91],[92,100]

in which you always end up with one small partition at the end.

With the current algorithm you get

    [1,12],[13,25],[26,37],[38,50],[51,62],[63,75],[76,87],[88,100]

which gives more evenly distributed partitions. A sketch reproducing both
follows below.
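
A hedged sketch reproducing both layouts (the fixed-length variant is my
reading of the scheme above, stepping by ceil(length / numPartitions)):

    val (lower, upper, n) = (1L, 100L, 8)
    val length = 1 + upper - lower

    // Current JdbcRDD scheme: boundaries at lower + i * length / n.
    val proportional = (0 until n).map { i =>
      val start = lower + (BigInt(i) * length / n).toLong
      val end = lower + (BigInt(i + 1) * length / n).toLong - 1
      (start, end)
    }
    println(proportional)
    // Vector((1,12), (13,25), (26,37), (38,50), (51,62), (63,75), (76,87), (88,100))

    // Fixed-length scheme: each partition spans ceil(length / n) ids,
    // leaving one short partition at the end.
    val step = (length + n - 1) / n
    val fixed = (0 until n).map { i =>
      (lower + i * step, math.min(lower + (i + 1) * step - 1, upper))
    }
    println(fixed)
    // Vector((1,13), (14,26), (27,39), (40,52), (53,65), (66,78), (79,91), (92,100))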





[GitHub] spark pull request: [SPARK-5860][CORE] JdbcRDD: overflow on large ...

2015-02-20 Thread hotou
Github user hotou commented on a diff in the pull request:

https://github.com/apache/spark/pull/4701#discussion_r25071707
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/JdbcRDD.scala ---
@@ -64,8 +64,8 @@ class JdbcRDD[T: ClassTag](
     // bounds are inclusive, hence the + 1 here and - 1 on end
     val length = 1 + upperBound - lowerBound
     (0 until numPartitions).map(i => {
-      val start = lowerBound + ((i * length) / numPartitions).toLong
-      val end = lowerBound + (((i + 1) * length) / numPartitions).toLong - 1
+      val start = lowerBound + ((BigDecimal(i) * length) / numPartitions).toLong
--- End diff --

@srowen No, that won't work. The problem here is that length always fits in a
Long, but (i * length) can overflow a Long. In my test case, for example:

    567279357766147899L * 20 = -7101156918386593636

So we need a type that can represent larger numbers.
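
A hedged illustration with the numbers from that test case:

    val length = 567279357766147899L
    // Plain Long multiplication silently wraps around:
    println(length * 20) // -7101156918386593636
    // BigInt keeps the exact product, so the subsequent division by
    // numPartitions produces the correct boundary:
    println(BigInt(length) * 20) // 11345587155322957980
    println((BigInt(length) * 20 / 20).toLong) // 567279357766147899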





[GitHub] spark pull request: [SPARK-5860][CORE] JdbcRDD: overflow on large ...

2015-02-20 Thread hotou
Github user hotou commented on a diff in the pull request:

https://github.com/apache/spark/pull/4701#discussion_r25071164
  
--- Diff: core/src/test/scala/org/apache/spark/rdd/JdbcRDDSuite.scala ---
@@ -29,22 +29,42 @@ class JdbcRDDSuite extends FunSuite with BeforeAndAfter 
with LocalSparkContext {
 Class.forName("org.apache.derby.jdbc.EmbeddedDriver")
 val conn = 
DriverManager.getConnection("jdbc:derby:target/JdbcRDDSuiteDb;create=true")
 try {
-  val create = conn.createStatement
-  create.execute("""
-CREATE TABLE FOO(
-  ID INTEGER NOT NULL GENERATED ALWAYS AS IDENTITY (START WITH 1, 
INCREMENT BY 1),
-  DATA INTEGER
-)""")
-  create.close()
-  val insert = conn.prepareStatement("INSERT INTO FOO(DATA) VALUES(?)")
-  (1 to 100).foreach { i =>
-insert.setInt(1, i * 2)
-insert.executeUpdate
+
--- End diff --

I just want to add a new test for this bug. Is this not the right place to put
it?





[GitHub] spark pull request: [SPARK-5860][CORE] JdbcRDD: overflow on large ...

2015-02-19 Thread hotou
GitHub user hotou opened a pull request:

https://github.com/apache/spark/pull/4701

[SPARK-5860][CORE] JdbcRDD: overflow on large range with high number of partitions

Fix an overflow bug in JdbcRDD when calculating partitions for large BIGINT ids.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hotou/spark SPARK-5860

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/4701.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4701


commit 4e9ff4f34a56bbbdf65a7e79e546ecb7eb5729e9
Author: Evan Yu 
Date:   2015-02-19T14:57:55Z

[SPARK-5860][CORE] JdbcRDD overflow on large range with high number of 
partitions







[GitHub] spark pull request: [SPARK-5753][CORE] JdbcRDD overflow on large r...

2015-02-19 Thread hotou
Github user hotou closed the pull request at:

https://github.com/apache/spark/pull/4700





[GitHub] spark pull request: [SPARK-5753][CORE] JdbcRDD overflow on large r...

2015-02-19 Thread hotou
Github user hotou commented on the pull request:

https://github.com/apache/spark/pull/4700#issuecomment-75177174
  
Wrong ticket





[GitHub] spark pull request: [SPARK-5753][CORE] JdbcRDD overflow on large r...

2015-02-19 Thread hotou
GitHub user hotou opened a pull request:

https://github.com/apache/spark/pull/4700

[SPARK-5753][CORE] JdbcRDD overflow on large range with high number of partitions

Fix an overflow bug in JdbcRDD when calculating partitions for large BIGINT ids.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hotou/spark SPARK-5753

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/4700.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4700


commit c84fe49de3193bf60d2601b57fd051075b4dfa27
Author: Evan Yu 
Date:   2015-02-19T14:57:55Z

[SPARK-5753][CORE] JdbcRDD overflow on large range with high number of 
partitions



