Re: Stats on targets for 1.5.0
> I think it would be fantastic if this work was burned down before adding big new chunks of work. The stat is worth keeping an eye on.

+1, keeping in mind that burning down work also means just targeting it for a different release or closing it. :)

Nick

On Fri, Jun 19, 2015 at 3:18 PM Sean Owen so...@cloudera.com wrote:
> Quick point of reference for 1.5.0: 226 issues are Fixed for 1.5.0, and 388 are Targeted for 1.5.0. So maybe 36% of the things to be done for 1.5.0 are complete, and we're in theory 3 of 8 weeks into the merge window, or 37.5%. That's nicely on track -- assuming, of course, that nothing else is targeted for 1.5.0. History suggests that a lot more will be, since a minor release has usually had 1000+ JIRAs. However, lots of forward-looking JIRAs have already been filed, so it may be that most planned work is on the books this time around.
>
> I think it would be fantastic if this work was burned down before adding big new chunks of work. The stat is worth keeping an eye on.

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org
Re: [mllib] Refactoring some spark.mllib model classes in Python not inheriting JavaModelWrapper
Hi Xiangrui,

Got it. I will try to refactor the model classes that don't inherit JavaModelWrapper and show them to you.

Thanks,
Yu

--
Yu Ishikawa

View this message in context: http://apache-spark-developers-list.1001551.n3.nabble.com/mllib-Refactoring-some-spark-mllib-model-classes-in-Python-not-inheriting-JavaModelWrapper-tp12781p12803.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.
Re: Workaround for problems with OS X + JIRA Client
Hi Sean,

That sounds interesting. I didn't know about that client; I will try it later. Thank you for sharing the information.

Yu

--
Yu Ishikawa

View this message in context: http://apache-spark-developers-list.1001551.n3.nabble.com/Workaround-for-problems-with-OS-X-JIRA-Client-tp12799p12804.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.
Workaround for problems with OS X + JIRA Client
Not sure if many of you use JIRA Client (http://almworks.com/jiraclient/overview.html) to keep tabs on JIRA -- definitely worth it -- but if you're on OS X, I wonder if you too have suddenly been experiencing some kind of SSL / keypair error on syncing? It's something to do with a JIRA server update and the fact that this app only knows how to run on Apple's Java 6, which lacks support for larger key sizes.

Anyway, if so, and you have Java 7 / 8 available locally as 'java', this mostly works:

  cd /Applications/JIRA\ Client.app/Contents/Resources/Java/lib
  java -jar ../jiraclient.jar
[Tungsten] NPE in UnsafeShuffleWriter.java
Hi, I want to try the new tungsten-sort shuffle manager, but on one stage the executors start to die with an NPE:

15/06/19 17:53:35 WARN TaskSetManager: Lost task 38.0 in stage 41.0 (TID 3176, ip-10-50-225-214.ec2.internal): java.lang.NullPointerException
        at org.apache.spark.shuffle.unsafe.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:151)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:70)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

Any suggestions?

Thanks,
Peter Rudenko
Re: [Tungsten] NPE in UnsafeShuffleWriter.java
Hey Peter,

I think that this is actually due to an error-handling issue: if you look at the stack trace that you posted, the NPE is being thrown from an error-handling branch of a `finally` block:

  @Override
  public void write(scala.collection.Iterator<Product2<K, V>> records) throws IOException {
    boolean success = false;
    try {
      while (records.hasNext()) {
        insertRecordIntoSorter(records.next());
      }
      closeAndWriteOutput();
      success = true;
    } finally {
      if (!success) {
        sorter.cleanupAfterError(); // this is the line throwing the error
      }
    }
  }

I suspect that what's happening is that an exception is being thrown from user / upstream code in the initial call to records.next(), but the error-handling block then fails because sorter == null, since we haven't initialized it yet. I'm going to file a JIRA for this and will try to add a set of regression tests to the ShuffleSuite to make sure exceptions from user code aren't swallowed like this.

On Fri, Jun 19, 2015 at 11:36 AM, Peter Rudenko petro.rude...@gmail.com wrote:
> Hi, I want to try the new tungsten-sort shuffle manager, but on one stage the executors start to die with an NPE:
>
> 15/06/19 17:53:35 WARN TaskSetManager: Lost task 38.0 in stage 41.0 (TID 3176, ip-10-50-225-214.ec2.internal): java.lang.NullPointerException
>         at org.apache.spark.shuffle.unsafe.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:151)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>         at org.apache.spark.scheduler.Task.run(Task.scala:70)
>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
>
> Any suggestions?
>
> Thanks,
> Peter Rudenko
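The failure mode described above -- a cleanup step in a `finally` block masking the real exception -- can be sketched with a minimal, self-contained example. Note that `WriteSketch` and its `Sorter` are simplified hypothetical stand-ins, not the real Spark classes; the point is only that a null guard in the `finally` branch lets the original exception from user code propagate instead of being replaced by an NPE.

```java
import java.io.IOException;
import java.util.Iterator;

// Minimal sketch of the error-masking bug (hypothetical names). If
// records.next() throws before the sorter has been created, an unguarded
// finally block would NPE and hide the original exception; the null
// check below preserves it.
public class WriteSketch {
    static class Sorter {
        void cleanupAfterError() { /* release in-memory pages */ }
    }

    Sorter sorter = null; // not yet initialized, as in the reported failure

    void write(Iterator<String> records) throws IOException {
        boolean success = false;
        try {
            while (records.hasNext()) {
                records.next(); // may throw from user / upstream code
            }
            success = true;
        } finally {
            if (!success && sorter != null) { // guard avoids masking the real error
                sorter.cleanupAfterError();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Iterator<String> failing = new Iterator<String>() {
            public boolean hasNext() { return true; }
            public String next() { throw new RuntimeException("user code failed"); }
        };
        try {
            new WriteSketch().write(failing);
        } catch (RuntimeException e) {
            // With the guard, the original exception surfaces, not an NPE.
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

Running this prints `caught: user code failed`; without the `sorter != null` guard, the same scenario would instead surface a NullPointerException from the `finally` block, which matches the stack trace in the report.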
Stats on targets for 1.5.0
Quick point of reference for 1.5.0: 226 issues are Fixed for 1.5.0, and 388 are Targeted for 1.5.0. So maybe 36% (226 of 614) of the things to be done for 1.5.0 are complete, and we're in theory 3 of 8 weeks into the merge window, or 37.5%. That's nicely on track -- assuming, of course, that nothing else is targeted for 1.5.0.

History suggests that a lot more will be, since a minor release has usually had 1000+ JIRAs. However, lots of forward-looking JIRAs have already been filed, so it may be that most planned work is on the books this time around.

I think it would be fantastic if this work was burned down before adding big new chunks of work. The stat is worth keeping an eye on.