[GitHub] spark pull request #19935: Branch 0.6
Github user khanm002 closed the pull request at: https://github.com/apache/spark/pull/19935

---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
[GitHub] spark pull request #19936: Branch 0.5
Github user khanm002 commented on a diff in the pull request: https://github.com/apache/spark/pull/19936#discussion_r155924835

--- Diff: repl/src/main/scala/spark/repl/SparkILoop.scala ---
@@ -200,7 +200,7 @@ class SparkILoop(in0: Option[BufferedReader], val out: PrintWriter, val master:
       __  / __/__  ___ _/ /__
      _\ \/ _ \/ _ `/ __/ '_/
-    /___/ .__/\_,_/_/ /_/\_\   version 0.5.2-SNAPSHOT
--- End diff --

#-    /___/ .__/\_,_/_/ /_/\_\   version 0.5.2-SNAPSHOT
[GitHub] spark pull request #19936: Branch 0.5
GitHub user khanm002 opened a pull request: https://github.com/apache/spark/pull/19936

Branch 0.5

## What changes were proposed in this pull request?

(Please fill in changes proposed in this fix)

## How was this patch tested?

(Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
(If this patch involves UI changes, please attach a screenshot; otherwise, remove this)

Please review http://spark.apache.org/contributing.html before opening a pull request.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/apache/spark branch-0.5

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/19936.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #19936

commit 1b9bba0e9aea82c0390d4c43098c08940d3b0309
Author: Thomas Dudziak <tom...@gmail.com>
Date:   2012-10-09T22:21:38Z

    Support for Hadoop 2 distributions such as cdh4

    Conflicts:
        core/src/main/scala/spark/NewHadoopRDD.scala
        core/src/main/scala/spark/PairRDDFunctions.scala
        project/SparkBuild.scala

commit 8eec96fa5436902d2aa24cf8700b4424aff2005a
Author: Matei Zaharia <ma...@eecs.berkeley.edu>
Date:   2012-11-21T02:23:34Z

    Change version to 0.5.2

commit 5b021ce0990ec675afc6939cc2c06f041c973d17
Author: Matei Zaharia <ma...@eecs.berkeley.edu>
Date:   2012-11-23T00:26:15Z

    Change version to 0.5.3-SNAPSHOT
[GitHub] spark pull request #19935: Branch 0.6
GitHub user khanm002 opened a pull request: https://github.com/apache/spark/pull/19935

Branch 0.6

## What changes were proposed in this pull request?

(Please fill in changes proposed in this fix)

## How was this patch tested?

(Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
(If this patch involves UI changes, please attach a screenshot; otherwise, remove this)

Please review http://spark.apache.org/contributing.html before opening a pull request.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/apache/spark branch-0.6

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/19935.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #19935

commit ce143d64ed68e956248c3ba5cc310a63a79f33c8
Author: Matei Zaharia <ma...@eecs.berkeley.edu>
Date:   2012-10-25T04:52:13Z

    Strip leading mesos:// in URLs passed to Mesos

commit d3387427ce6608f53df371f9365c49062ae0dee5
Author: root <root@domu-12-31-39-05-3d-b1.compute-1.internal>
Date:   2012-10-26T07:31:08Z

    Don't throw an error in the block manager when a block is cached on the master due to a locally computed operation

commit d2b2fc229e488f961f029846c34660552468dda4
Author: Matei Zaharia <ma...@eecs.berkeley.edu>
Date:   2012-11-06T23:57:38Z

    Made Akka timeout and message frame size configurable, and upped the defaults

commit 222355e0584f52be2b0285257151f7c3f1f3f3fa
Author: Thomas Dudziak <tom...@gmail.com>
Date:   2012-10-22T20:10:47Z

    Tweaked run file to live more happily with typesafe's debian package

commit ef683d4e01bc0ff3fb783bd6b1308b5e4ecd7ece
Author: Josh Rosen <joshro...@eecs.berkeley.edu>
Date:   2012-10-23T20:49:52Z

    Fix minor typos in quick start guide.

commit cf0bf73d07600f92f24af7b97a2f60b12d1e4f96
Author: Josh Rosen <joshro...@eecs.berkeley.edu>
Date:   2012-10-18T17:01:38Z

    Allow EC2 script to stop/destroy cluster after master/slave failures.

commit 43465e92a934a7fc93154c97e397074707d8d803
Author: Josh Rosen <joshro...@eecs.berkeley.edu>
Date:   2012-11-04T00:02:47Z

    Fix check for existing instances during EC2 launch.

commit d20142b105acedc7074dc4edb743ea78cd851d7f
Author: Shivaram Venkataraman <shiva...@eecs.berkeley.edu>
Date:   2012-11-01T17:46:38Z

    Remove unnecessary hash-map put in MemoryStore

commit 171e97af5b67dd322f787655d70baa40318dbb87
Author: Josh Rosen <joshro...@eecs.berkeley.edu>
Date:   2012-10-31T06:32:38Z

    Cancel spot instance requests when exiting spark-ec2.

commit 5acd753876eab712a3e8fbf3ae33fb4c0b978abd
Author: Matei Zaharia <ma...@eecs.berkeley.edu>
Date:   2012-10-21T06:33:37Z

    Various fixes to standalone mode and web UI:
    - Don't report a job as finishing multiple times
    - Don't show state of workers as LOADING when they're running
    - Show start and finish times in web UI
    - Sort web UI tables by ID and time by default

commit 4fe0d808b0d211d7e00341a3ba95e83792c01681
Author: Imran Rashid <im...@quantifind.com>
Date:   2012-11-07T23:35:51Z

    fix bug in getting slave id out of mesos

commit a24540887c6968353db3bb9c28b23eb48a68da75
Author: Matei Zaharia <ma...@eecs.berkeley.edu>
Date:   2012-11-08T08:10:13Z

    Merge pull request #300 from enachb/mesos_slavelost

    fix bug in getting slave id out of mesos

commit b3b52c995a37385fc08af5837feea18bddee55a0
Author: Matei Zaharia <ma...@eecs.berkeley.edu>
Date:   2012-11-08T17:53:40Z

    Fix for connections not being reused (from Josh Rosen)

commit bb2b9ff37cd2503cc6ea82c5dd395187b0910af0
Author: Matei Zaharia <ma...@eecs.berkeley.edu>
Date:   2012-11-09T07:13:12Z

    Added an option to spread out jobs in the standalone mode.

commit e870ca50c6dbbc7bc951bb8432c4eb9b7c816c5e
Author: Tathagata Das <tathagata.das1...@gmail.com>
Date:   2012-11-09T22:09:37Z

    Fixed deadlock in BlockManager.
    1. Changed the lock structure of BlockManager by replacing the 337 coarse-grained locks to use BlockInfo objects as per-block fine-grained locks.
    2. Changed the MemoryStore lock structure by making the block-putting threads lock on a different object (not the memory store), thus ensuring that putting threads block the getting threads as little as possible.
    3. Added spark.storage.ThreadingTest to stress test the BlockManager using 5 block producer and 5 block consumer threads.

commit 9d5740f6bfdeb52747f70a4b6c7cb82c57b225d4
Author: Tathagata Das <tathagata.das1...@gmail.com>
Date:   2012-11-09T23:46:15Z

    Incorporated Matei's suggestions. Tested with 5 producer (consumer) threads each doing 50k puts (gets); took 15 minutes to run, no errors or deadlocks.

commit dc84ce72190f2910bced98a504fac20f305871a
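The deadlock fix above replaces one coarse store-wide lock with per-block locks, so a thread putting block A never blocks a thread getting block B. A minimal sketch of that pattern, in Python rather than Spark's Scala, with all names (BlockStore, put, get) purely illustrative and not Spark's actual BlockManager API:

```python
import threading

class BlockStore:
    """Illustrative per-block fine-grained locking, in the spirit of the
    BlockManager change described in the commit message above."""

    def __init__(self):
        # Guards only the lock table itself, never the block data:
        # this keeps the serialized critical section as short as possible.
        self._meta_lock = threading.Lock()
        self._block_locks = {}  # block id -> per-block lock
        self._data = {}

    def _lock_for(self, block_id):
        # Look up (or lazily create) the lock for this one block.
        with self._meta_lock:
            return self._block_locks.setdefault(block_id, threading.Lock())

    def put(self, block_id, value):
        # Only contends with other threads touching this same block.
        with self._lock_for(block_id):
            self._data[block_id] = value

    def get(self, block_id):
        with self._lock_for(block_id):
            return self._data.get(block_id)
```

With the coarse design, every put held the store-wide lock for the duration of the write; here the store-wide lock is held only long enough to fetch a per-block lock, which is the same effect the commit attributes to using BlockInfo objects as fine-grained locks.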