Re: Revisiting Online serving of Spark models?

2018-06-02 Thread Maximiliano Felice
Hi! We're already in San Francisco waiting for the summit. We even think that we spotted @holdenk this afternoon. @chris, we're really interested in the Meetup you're hosting. My team will probably join it from the beginning if you have room for us, and I'll join it later after discussing the

Re: [VOTE] Spark 2.3.1 (RC4)

2018-06-02 Thread Denny Lee
+1 On Sat, Jun 2, 2018 at 4:53 PM Nicholas Chammas wrote: > I'll give that a try, but I'll still have to figure out what to do if none > of the release builds work with hadoop-aws, since Flintrock deploys Spark > release builds to set up a cluster. Building Spark is slow, so we only do > it if

Re: [VOTE] Spark 2.3.1 (RC4)

2018-06-02 Thread Nicholas Chammas
I'll give that a try, but I'll still have to figure out what to do if none of the release builds work with hadoop-aws, since Flintrock deploys Spark release builds to set up a cluster. Building Spark is slow, so we only do it if the user specifically requests a Spark version by git hash. (This is
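
For context, the runtime approach under discussion looks roughly like the sketch below. This is not from the thread's resolution, just an illustration: the bucket name is hypothetical, and the key constraint is that the hadoop-aws version must match the Hadoop version the Spark release build was compiled against (2.7.x for the hadoop-2.7 release builds).

    # Pull the S3A connector at runtime instead of baking it into the build.
    # The hadoop-aws version should match the bundled Hadoop version.
    pyspark --packages org.apache.hadoop:hadoop-aws:2.7.3

    # Reading from S3 then uses the s3a:// scheme, e.g. in the shell:
    #   spark.read.text("s3a://my-bucket/some-file.txt")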

Re: [VOTE] Spark 2.3.1 (RC4)

2018-06-02 Thread Wenchen Fan
+1 On Sun, Jun 3, 2018 at 6:54 AM, Marcelo Vanzin wrote: > If you're building your own Spark, definitely try the hadoop-cloud > profile. Then you don't even need to pull anything at runtime, > everything is already packaged with Spark. > > On Fri, Jun 1, 2018 at 6:51 PM, Nicholas Chammas >

Re: [VOTE] Spark 2.3.1 (RC4)

2018-06-02 Thread Marcelo Vanzin
If you're building your own Spark, definitely try the hadoop-cloud profile. Then you don't even need to pull anything at runtime, everything is already packaged with Spark. On Fri, Jun 1, 2018 at 6:51 PM, Nicholas Chammas wrote: > pyspark --packages org.apache.hadoop:hadoop-aws:2.7.3 didn’t work
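
For reference, enabling the hadoop-cloud profile in a custom build looks roughly like this. A minimal sketch assuming a build from the Spark source tree; the distribution name is arbitrary, and -Phadoop-2.7 is one of the standard Hadoop version profiles.

    # Build a Spark distribution with the cloud connector modules bundled,
    # so nothing extra needs to be pulled with --packages at runtime.
    ./dev/make-distribution.sh --name hadoop-cloud --tgz \
        -Phadoop-2.7 -Phadoop-cloud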

Re: [VOTE] Spark 2.3.1 (RC4)

2018-06-02 Thread Sean Owen
+1 from me with the same comments as in the last RC. On Fri, Jun 1, 2018 at 5:29 PM Marcelo Vanzin wrote: > Please vote on releasing the following candidate as Apache Spark version > 2.3.1. > > Given that I expect at least a few people to be busy with Spark Summit next > week, I'm taking the