Re: Live coding & reviewing adventures

2018-07-24 Thread Holden Karau
I'll be doing this again this week & next looking at a few different topics.

Tomorrow (July 25th @ 10am pacific) Gris & I will be updating the PR from
my last live stream (adding Python dependency handling) -
https://www.twitch.tv/events/P92irbgYR9Sx6nMQ-lGY3g /
https://www.youtube.com/watch?v=4xDsY5QL2zM

In the afternoon @ 3 pm pacific I'll be looking at the dev tools we've been
discussing with respect to reviews -
https://www.twitch.tv/events/vNzcZ7DdSuGFNYURW_9WEQ /
https://www.youtube.com/watch?v=6cTmC_fP9B0

Next week on Thursday August 1st @ 2pm pacific Gris & I will be setting up
Beam on her new laptop together, so for any new users looking to see how to
install Beam from source, this one is for you (or for devs looking to see
how painful setup is) -
https://www.twitch.tv/events/YAYvNp3tT0COkcpNBxnp6A /
https://www.youtube.com/watch?v=x8Wg7qCDA5k

P.S.

As always I'll be doing my regular Friday code reviews in Spark -
https://www.youtube.com/watch?v=O4rRx-3PTiM . You can see the other ones I
have planned on my twitch events and youtube.

On Fri, Jul 13, 2018 at 11:54 AM, Holden Karau  wrote:

> Hi folks! I've been doing some live coding in my other projects and I
> figured I'd do some with Apache Beam as well.
>
> Today @ 3pm pacific I'm going to be doing some impromptu exploration of
> better review tooling possibilities (looking at forking spark-pr-dashboard for
> other projects like beam and setting up mentionbot to work with ASF infra)
> - https://www.youtube.com/watch?v=ff8_jbzC8JI
>
> Next week (Thursday the 19th at 2pm pacific) I'm going to be working on
> trying to get easier dependency management for the Python portable runner
> in place - https://www.youtube.com/watch?v=Sv0XhS2pYqA
>
> If you're interested in seeing more of the development process I hope you
> will join me :)
>
> P.S.
>
> You can also follow on twitch, which does a better job of notifications:
> https://www.twitch.tv/holdenkarau
>
> Also, one of the other things I do is "live reviews" of PRs, but they are
> generally opt-in, and I don't have enough opt-ins from the Beam community to
> do live reviews in Beam. If you work on Beam and would be OK with me doing
> a live streamed review of your PRs, let me know (if you're curious what
> they look like, you can see some of them here in Spark land).
>
> --
> Twitter: https://twitter.com/holdenkarau
>



-- 
Twitter: https://twitter.com/holdenkarau


Re: Proof-of-concept Beam PR dashboard (based off of Spark's PR dashboard) to improve discoverability

2018-07-24 Thread Holden Karau
That one's probably going to be more work, but the code is open and I'd be
happy to help where I can.

On Tue, Jul 24, 2018, 12:58 PM Huygaa Batsaikhan  wrote:

> This is great. From a previous thread, the "whose turn" feature was a
> popular request for the dashboard, because it is hard to know whose
> attention is needed at any moment.
> How much effort is needed to implement such a feature on top of the
> dashboard?
>


Re: Issues with Beam SQL on Spark

2018-07-24 Thread Kai Jiang
Thank you, Andrew! I will take a look at whether it is feasible to rewrite
"jdbc:calcite:" in Beam's repackaged calcite.

Best,
Kai

On 2018/07/24 19:08:17, Andrew Pilloud  wrote: 
> I don't really think this is something that involves changes to
> DriverManager. Beam is causing the problem by relocating calcite's path but
> not also modifying the global state it creates.
> 
> Andrew
> 


Re: Proof-of-concept Beam PR dashboard (based off of Spark's PR dashboard) to improve discoverability

2018-07-24 Thread Huygaa Batsaikhan
This is great. From a previous thread, the "whose turn" feature was a popular
request for the dashboard, because it is hard to know whose attention is
needed at any moment.
How much effort is needed to implement such a feature on top of the dashboard?

On Fri, Jul 13, 2018 at 5:56 PM Holden Karau  wrote:

> Took me waaay longer than planned, and the regexes and components could use
> some work, but I've got a quick Beam PR dashboard up at
> https://boos-demo-projects-are-rad.appspot.com/. The code is a fork of
> the Spark one, and it's at
> https://github.com/holdenk/spark-pr-dashboard/tree/support-beam in the
> beam support branch. I don't know how useful this will be for folks, but
> given the discussion going on around CODEOWNERS I figured people were
> feeling the pain of trying to keep on top of reviews.
>
> I'm still working on trying to get mentionbot working (it's being a bit
> frustrating to upgrade to recent versions of the dependencies as a non-JS
> programmer), but hopefully I can do something there too.
>
> If anyone has thoughts about what good tags would be for the review
> dashboard, let me know; I just kicked it off with some tabs which I
> personally care about.
>
> Twitter: https://twitter.com/holdenkarau
>


Re: Issues with Beam SQL on Spark

2018-07-24 Thread Andrew Pilloud
I don't really think this is something that involves changes to
DriverManager. Beam is causing the problem by relocating calcite's path but
not also modifying the global state it creates.

Andrew
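To make "global state" concrete: java.sql.DriverManager keeps one
process-wide driver registry, and both the relocated and the un-relocated
calcite drivers end up in it. A minimal sketch of inspecting and pruning that
registry follows; whether deregistration is actually permitted from inside a
fat jar running on Spark is untested, so treat it as an illustration of the
registry, not a recommended fix:

    import java.sql.Driver;
    import java.sql.DriverManager;
    import java.util.Enumeration;

    public class PruneCalciteDrivers {
        public static void main(String[] args) throws Exception {
            // DriverManager's registry is global to the JVM; list every
            // registered driver, then drop Spark's un-relocated calcite one
            // so only the repackaged driver can claim "jdbc:calcite:" URLs.
            Enumeration<Driver> drivers = DriverManager.getDrivers();
            while (drivers.hasMoreElements()) {
                Driver d = drivers.nextElement();
                if ("org.apache.calcite.jdbc.Driver".equals(d.getClass().getName())) {
                    DriverManager.deregisterDriver(d);
                }
            }
        }
    }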

On Tue, Jul 24, 2018 at 12:03 PM Kai Jiang  wrote:

> Thanks Andrew! It's really helpful. I'll try shading calcite with the
> "jdbc:calcite:" rewrite.
> I also had a look at the doc for DriverManager. Do you think including a
> repackaged JDBC driver property setting like the one below would be helpful?
>  jdbc.drivers=org.apache.beam.repackaged.beam.
>
> Best,
> Kai
>


Re: Issues with Beam SQL on Spark

2018-07-24 Thread Kai Jiang
Thanks Andrew! It's really helpful. I'll try shading calcite with the
"jdbc:calcite:" rewrite.
I also had a look at the doc for DriverManager. Do you think including a
repackaged JDBC driver property setting like the one below would be helpful?
 jdbc.drivers=org.apache.beam.repackaged.beam.

Best,
Kai
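For reference, a minimal sketch of what the jdbc.drivers idea amounts to; the
relocated class name below is hypothetical (the real one is elided above).
DriverManager reads the property once, in its static initializer, so it only
pre-loads the listed classes; it does not change which registered driver wins
for a given URL:

    public class PreloadRepackagedDriver {
        public static void main(String[] args) throws Exception {
            // Hypothetical relocated driver class name, for illustration only.
            // Must be set before DriverManager is first touched, because the
            // jdbc.drivers property is read during its static initialization.
            System.setProperty("jdbc.drivers",
                "org.apache.beam.repackaged.beam_sdks_java_extensions_sql.org.apache.calcite.jdbc.Driver");
            // First use of DriverManager loads (and thereby registers) the class.
            java.sql.DriverManager.getDrivers();
        }
    }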

On 2018/07/24 16:56:50, Andrew Pilloud  wrote: 
> Looks like calcite isn't easily repackageable. This issue can be fixed
> either in our shading (by also rewriting the "jdbc:calcite:" string when we
> shade calcite) or in calcite (by not using the driver manager to connect
> between calcite modules).
> 
> Andrew
> 


Re: Java precommit and postcommit tests are failing

2018-07-24 Thread Alan Myrvold
There were two reasons for the failing Java precommit: there was a consistent
compile break that will be fixed by
https://github.com/apache/beam/pull/6018, and
there was a change, presumably due to the Jenkins change, in the number of
workers computed from the number of virtual processors detected, which
caused the Gradle JVM -Xmx flag to decrease from 3g to 1g.

After adjusting maxWorkers (and the memory settings), the precommit passed
for pull/6018 at
https://builds.apache.org/job/beam_PreCommit_Java_Commit/517/

Setting reasonable values for maxWorkers and Xms/Xmx is being tested in
https://github.com/apache/beam/pull/6046
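For anyone reproducing this locally, the knobs involved are ordinary Gradle
settings; a sketch of a gradle.properties with illustrative values (not
necessarily the exact ones chosen in pull/6018 or pull/6046):

    # Illustrative values only, not the ones from the PRs.
    # Cap the worker count instead of deriving it from detected processors.
    org.gradle.workers.max=4
    # Pin the Gradle JVM heap so it cannot silently drop from 3g to 1g.
    org.gradle.jvmargs=-Xms2g -Xmx3g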



On Tue, Jul 24, 2018 at 9:21 AM Alan Myrvold  wrote:

> I can take a look at it today
>


Re: Issues with Beam SQL on Spark

2018-07-24 Thread Andrew Pilloud
Looks like calcite isn't easily repackageable. This issue can be fixed
either in our shading (by also rewriting the "jdbc:calcite:" string when we
shade calcite) or in calcite (by not using the driver manager to connect
between calcite modules).

Andrew
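For the second option, a rough sketch of what bypassing the driver manager
could look like from the caller's side, assuming a hypothetical relocated
class name: instantiating the relocated driver and calling connect() on it
directly never consults DriverManager's registry, so Spark's older driver
cannot intercept the URL:

    import java.sql.Connection;
    import java.sql.Driver;
    import java.util.Properties;

    public class DirectCalciteConnect {
        public static Connection connect() throws Exception {
            // Hypothetical relocated class name, for illustration only.
            Driver driver = (Driver) Class
                .forName("org.apache.beam.repackaged.beam_sdks_java_extensions_sql.org.apache.calcite.jdbc.Driver")
                .getDeclaredConstructor().newInstance();
            // Driver.connect() goes straight to this instance; the global
            // registry (where Spark's calcite 1.2.0 driver also lives) is
            // never consulted, unlike DriverManager.getConnection(url).
            return driver.connect("jdbc:calcite:", new Properties());
        }
    }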

On Mon, Jul 23, 2018 at 11:18 PM Kai Jiang  wrote:

> Hi all,
>
> I met an issue when I ran Beam SQL on Spark, and I want to check whether
> anyone has seen the same issue. I believe getting Beam SQL running on Spark
> is important, so if you have encountered the same problem, any input would
> be really helpful.
>
> Context:
> I set up a TPC framework to run SQL on Spark. The code
> <https://github.com/vectorijk/beam/blob/tpch/sdks/java/extensions/tpc/src/main/java/org/apache/beam/sdk/extensions/tpc/BeamTpc.java>
> is simple: it just ingests CSV data and applies SQL to it. The Gradle setting
> <https://github.com/vectorijk/beam/blob/tpch/sdks/java/extensions/tpc/build.gradle>
> includes `runner-spark` and the necessary libraries. The exception stack
> trace shows some details. However, the same code runs on Flink and Dataflow
> successfully.
>
> Investigations:
> BEAM-3386 <https://issues.apache.org/jira/browse/BEAM-3386> also describes
> the issue I have. It took me some time to investigate it. I suspect a
> version conflict between the Calcite library in Spark and Beam SQL's
> repackaged Calcite. The Calcite version that Spark (* - 2.3.1) uses is very
> old (1.2.0-incubating).
>
> After packaging a fat jar and submitting it to Spark, Spark registered both
> the old version's calcite JDBC driver and Beam's repackaged JDBC driver in
> registeredDrivers (DriverManager.java#L294
> <https://github.com/JetBrains/jdk8u_jdk/blob/master/src/share/classes/java/sql/DriverManager.java#L294>).
> JDBC's DriverManager always connects to the old calcite JDBC driver from
> Spark instead of Beam's repackaged calcite.
>
> Looking into DriverManager.java#L556
> <https://github.com/JetBrains/jdk8u_jdk/blob/master/src/share/classes/java/sql/DriverManager.java#L556>
> and inserting a breakpoint at aClass =
> Class.forName(driver.getClass().getName(), true, classLoader);
>
> driver.getClass().getName() -> "org.apache.calcite.jdbc.Driver"
> classLoader only has classes 'org.apache.beam.**' and
> 'org.apache.beam.repackaged.beam_***'. (There is no path for class
> 'org.apache.calcite.*'.)
>
> Oddly, aClass is assigned the class "org.apache.calcite.jdbc.Driver". I
> think it should raise an exception and be skipped; actually, it did not. So
> Spark's calcite JDBC driver gets connected, and all logic afterwards goes
> to Spark's calcite classpath. I believe that's the pivot point.
>
> Potential solutions:
> *1.* Figure out why DriverManager.java#L556
> <https://github.com/JetBrains/jdk8u_jdk/blob/master/src/share/classes/java/sql/DriverManager.java#L556>
> does not throw an exception.
>
> I guess it is the best option.
>
> 2. Upgrade Spark's calcite.
>
> It is not a good option because the old calcite version affects many Spark
> versions.
>
> 3. Do not repackage the calcite library.
>
> I tried: I built a fat jar with non-repackaged calcite, but Spark is still
> using its own calcite.
>
> Plus, I am curious whether there is a specific reason we need the
> repackaging strategy for Calcite. @Mingmin Xu
>
> Thanks for reading!
>
> Best,
> Kai
>
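One plausible explanation for the puzzle above, offered as a sketch rather
than a confirmed diagnosis: Class.forName(name, true, classLoader) delegates
to parent class loaders before failing, so a loader that itself contains only
org.apache.beam.** classes can still resolve Spark's driver through its
parent. A minimal check (it needs calcite on the parent classpath to
reproduce):

    public class DelegationCheck {
        public static void main(String[] args) throws Exception {
            ClassLoader cl = DelegationCheck.class.getClassLoader();
            // Standard loading is parent-first: even if `cl` itself defines
            // only org.apache.beam.** classes, the lookup walks up the parent
            // chain, where Spark's calcite 1.2.0 driver lives, so no
            // ClassNotFoundException is thrown.
            Class<?> aClass = Class.forName("org.apache.calcite.jdbc.Driver", true, cl);
            // Prints the loader that actually defined the class (Spark's).
            System.out.println(aClass.getClassLoader());
        }
    }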


Re: Java precommit and postcommit tests are failing

2018-07-24 Thread Alan Myrvold
I can take a look at it today

On Mon, Jul 23, 2018 at 10:32 PM Rui Wang  wrote:

> It is at least frequent (and even consistent) based on the list of failed
> PRs which run the Java checks.
>
>
> Also, this issue is being tracked by
> https://issues.apache.org/jira/browse/BEAM-4847.
>
>
> -Rui
>
> On Mon, Jul 23, 2018 at 9:55 PM Jean-Baptiste Onofré 
> wrote:
>
>> Hi Rui
>>
>> I will. I guess it randomly happens, right?
>>
>> Regards
>> JB
>> On Jul 24, 2018, at 02:37, Rui Wang  wrote:
>>>
>>> Hi community,
>>>
>>> Seems like both Java precommit and postcommit tests are failing due to
>>> GC issues. I created a JIRA under test-failures component (
>>> https://issues.apache.org/jira/browse/BEAM-4848). Could someone who is
>>> familiar with Jenkins take a look at this?
>>>
>>>
>>> Thanks,
>>> Rui
>>>
>>


Re: Unsubscribe

2018-07-24 Thread Thomas Weise
To unsubscribe, please use the -unsubscribe addresses listed on
https://beam.apache.org/community/contact-us/



On Tue, Jul 24, 2018 at 6:34 AM Chandan Biswas 
wrote:

>
>


Build failed in Jenkins: beam_Release_Gradle_NightlySnapshot #113

2018-07-24 Thread Apache Jenkins Server
See 


Changes:

[boyuanz] Added '--continue' switches into nightly build

[amyrvold] [BEAM-4831] Ignore failures during :beam-sdks-go:vet to allow ./gradlew

[aaltay] Add bash script to automate "Preparation for GPG" (#6015)

--
[...truncated 17.63 MB...]
:beam-sdks-python-container:installDependencies (Thread[Task worker for ':' Thread 12,5,main]) started.

> Task :beam-sdks-python-container:installDependencies
Caching disabled for task ':beam-sdks-python-container:installDependencies': Caching has not been enabled for the task
Task ':beam-sdks-python-container:installDependencies' is not up-to-date because:
  Task has not declared any outputs despite executing actions.
Cache not found, skip.
:beam-sdks-python-container:installDependencies (Thread[Task worker for ':' Thread 12,5,main]) completed. Took 0.563 secs.
:beam-sdks-python-container:buildLinuxAmd64 (Thread[Task worker for ':' Thread 12,5,main]) started.

> Task :beam-sdks-python-container:buildLinuxAmd64
Build cache key for task ':beam-sdks-python-container:buildLinuxAmd64' is 3e31a02da0011ff4929871d0b8933414
Caching disabled for task ':beam-sdks-python-container:buildLinuxAmd64': Caching has not been enabled for the task
Task ':beam-sdks-python-container:buildLinuxAmd64' is not up-to-date because:
  No history is available.
:beam-sdks-python-container:buildLinuxAmd64 (Thread[Task worker for ':' Thread 12,5,main]) completed. Took 2.89 secs.
:beam-sdks-python-container:build (Thread[Task worker for ':' Thread 12,5,main]) started.

> Task :beam-sdks-python-container:build
Caching disabled for task ':beam-sdks-python-container:build': Caching has not been enabled for the task
Task ':beam-sdks-python-container:build' is not up-to-date because:
  Task has not declared any outputs despite executing actions.
:beam-sdks-python-container:build (Thread[Task worker for ':' Thread 12,5,main]) completed. Took 0.002 secs.
:beam-vendor-sdks-java-extensions-protobuf:jar (Thread[Task worker for ':' Thread 12,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:jar
Build cache key for task ':beam-vendor-sdks-java-extensions-protobuf:jar' is 318184f3acfe150a3a38d605826c992a
Caching disabled for task ':beam-vendor-sdks-java-extensions-protobuf:jar': Caching has not been enabled for the task
Task ':beam-vendor-sdks-java-extensions-protobuf:jar' is not up-to-date because:
  No history is available.
:beam-vendor-sdks-java-extensions-protobuf:jar (Thread[Task worker for ':' Thread 12,5,main]) completed. Took 0.013 secs.
:beam-vendor-sdks-java-extensions-protobuf:compileTestJava (Thread[Task worker for ':' Thread 12,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:compileTestJava NO-SOURCE
file or directory '' not found
Skipping task ':beam-vendor-sdks-java-extensions-protobuf:compileTestJava' as it has no source files and no previous output files.
:beam-vendor-sdks-java-extensions-protobuf:compileTestJava (Thread[Task worker for ':' Thread 12,5,main]) completed. Took 0.001 secs.
:beam-vendor-sdks-java-extensions-protobuf:processTestResources (Thread[Task worker for ':' Thread 12,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:processTestResources NO-SOURCE
file or directory '' not found
Skipping task ':beam-vendor-sdks-java-extensions-protobuf:processTestResources' as it has no source files and no previous output files.
:beam-vendor-sdks-java-extensions-protobuf:processTestResources (Thread[Task worker for ':' Thread 12,5,main]) completed. Took 0.0 secs.
:beam-vendor-sdks-java-extensions-protobuf:testClasses (Thread[Task worker for ':' Thread 15,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:testClasses UP-TO-DATE
Skipping task ':beam-vendor-sdks-java-extensions-protobuf:testClasses' as it has no actions.
:beam-vendor-sdks-java-extensions-protobuf:testClasses (Thread[Task worker for ':' Thread 15,5,main]) completed. Took 0.0 secs.
:beam-vendor-sdks-java-extensions-protobuf:packageTests (Thread[Task worker for ':' Thread 15,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:packageTests
Build cache key for task ':beam-vendor-sdks-java-extensions-protobuf:packageTests' is 725c57f0b1c0d6e4c572de9d9db06024
Caching disabled for task ':beam-vendor-sdks-java-extensions-protobuf:packageTests': Caching has not been enabled for the task
Task ':beam-vendor-sdks-java-extensions-protobuf:packageTests' is not up-to-date

unsubscribe

2018-07-24 Thread C Aravindh
unsubscribe