Jenkins build is back to normal : beam_Release_NightlySnapshot #359

2017-03-16 Thread Apache Jenkins Server
See

Re: Apache Beam (virtual) contributor meeting @ Tue Mar 7, 2017

2017-03-16 Thread Davor Bonaci
I'd like to thank everyone for coming -- notes and summary of the discussion are below. If there's any feedback, ideas for improvement, requests to do this again at some point, etc. -- please comment! --- Attendees: * Jason * Etienne * Kenn * Neleesh * Pramod * Raghu * Sergio * Amit * Aviem *

Re: Beam spark 2.x runner status

2017-03-16 Thread Jean-Baptiste Onofré
Hi guys, Yes, I started to experiment with the profiles a bit, and Amit and I plan to discuss that over the weekend. Give me some time to move forward a bit and I will get back to you with more details. Regards JB On 03/16/2017 05:15 PM, amarouni wrote: Yeah maintaining 2 RDD branches

Re: Beam spark 2.x runner status

2017-03-16 Thread amarouni
Yeah, maintaining 2 RDD branches (master + a 2.x branch) is doable but will add more merge and maintenance work. The Maven profiles solution is worth investigating, with the Spark 1.6 RDD translation as the default profile and an additional Spark 2.x profile. As JBO mentioned carbondata, I had a quick look and it looks
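The profile-based layout discussed in this thread could look roughly like the sketch below in the runner's pom.xml. This is a hypothetical illustration only: the profile ids, property name, and version numbers are assumptions, not the actual Beam build configuration.

```xml
<!-- Hypothetical sketch: profile ids and versions are illustrative. -->
<profiles>
  <!-- Spark 1.6 RDD-based translation as the default, as proposed above. -->
  <profile>
    <id>spark-1</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <properties>
      <spark.version>1.6.3</spark.version>
    </properties>
  </profile>
  <!-- Opt-in Spark 2.x profile, selected with: mvn clean install -Pspark-2 -->
  <profile>
    <id>spark-2</id>
    <properties>
      <spark.version>2.1.0</spark.version>
    </properties>
  </profile>
</profiles>
```

The trade-off the thread raises applies here too: a profile keeps one source tree, but only one Spark version is compiled against per build, so API differences still have to be isolated somewhere (e.g. behind adapters).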

Re: Performance Testing Next Steps

2017-03-16 Thread Ismaël Mejía
> .. if the provider we are bringing up also provides the data store, we can just omit the data store for that benchmark and use what we've already brought up. Does that answer your question, or have I misunderstood? Yes, and it is a perfect approach for this case, great idea. > Great point

Build failed in Jenkins: beam_Release_NightlySnapshot #358

2017-03-16 Thread Apache Jenkins Server
See Changes: [jbonofre] [BEAM-1660] Update JdbcIO JavaDoc about withCoder() use to ensure the [altay] [BEAM-547] Version should be accessed from pom file [tgroh] Add

Re: Beam spark 2.x runner status

2017-03-16 Thread Cody Innowhere
I'm personally in favor of maintaining one single branch, e.g., spark-runner, which supports both Spark 1.6 & 2.1. Since there's currently no DataFrame support in the Spark 1.x runner, there should be no conflicts if we put the two versions of Spark into one runner. I'm also +1 for adding adapters in the
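The adapter idea mentioned here could be sketched as follows: the runner codes against a small version-agnostic interface, and per-Spark-version modules supply implementations. Everything below is a hypothetical, self-contained illustration (a plain list stands in for an RDD); none of these type or method names come from Beam or Spark.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

/** Version-agnostic view of a distributed collection (illustrative only). */
interface DatasetAdapter<T> {
    <O> DatasetAdapter<O> map(Function<T, O> fn);
    List<T> collect();
}

/** Stand-in for a Spark 1.6 RDD-backed implementation; a 2.x module
 *  could provide a Dataset-backed implementation of the same interface. */
class Rdd16Adapter<T> implements DatasetAdapter<T> {
    private final List<T> data;

    Rdd16Adapter(List<T> data) {
        this.data = data;
    }

    @Override
    public <O> DatasetAdapter<O> map(Function<T, O> fn) {
        // Locally applies the function; a real adapter would delegate to rdd.map.
        return new Rdd16Adapter<>(data.stream().map(fn).collect(Collectors.toList()));
    }

    @Override
    public List<T> collect() {
        return data;
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        DatasetAdapter<Integer> ds = new Rdd16Adapter<>(Arrays.asList(1, 2, 3));
        System.out.println(ds.map(x -> x * 2).collect()); // prints [2, 4, 6]
    }
}
```

The runner-side translation code would only ever see `DatasetAdapter`, so the Spark 1.x and 2.x specifics stay confined to the adapter modules.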