Build failed in Jenkins: beam_PreCommit_Java_Cron #506

2018-10-25 Thread Apache Jenkins Server
See 


Changes:

[kenn] [BEAM-5810] Jenkins emails to bui...@beam.apache.org instead of

[kenn] [BEAM-5810] Nightly release failures to bui...@beam.apache.org instead

[kenn] [BEAM-5810] Seed job failures to bui...@beam.apache.org instead of

[thw] [BEAM-5797] Ensure ExecutableStageDoFnOperator dispose is executed only

[mxm] [BEAM-2918] Add state support for batch in portable FlinkRunner

[mxm] [BEAM-4176] Enable StatefulParDo tests for PortableValidatesRunner

[25622840+adude3141] [BEAM-5176] remove preconfigured '-Xlint:deprecation' from 
compilerArgs

--
[...truncated 52.77 MB...]
  public static CoderProvider getCoderProvider() {
  ^
8 warnings
Packing task ':beam-sdks-java-io-hadoop-common:javadoc'
:beam-sdks-java-io-hadoop-common:javadoc (Thread[Task worker for ':',5,main]) 
completed. Took 1.935 secs.
:beam-sdks-java-io-hadoop-common:spotlessJava (Thread[Task worker for 
':',5,main]) started.

> Task :beam-sdks-java-io-hadoop-common:spotlessJava
Caching disabled for task ':beam-sdks-java-io-hadoop-common:spotlessJava': 
Caching has not been enabled for the task
Task ':beam-sdks-java-io-hadoop-common:spotlessJava' is not up-to-date because:
  No history is available.
All input files are considered out-of-date for incremental task 
':beam-sdks-java-io-hadoop-common:spotlessJava'.
:beam-sdks-java-io-hadoop-common:spotlessJava (Thread[Task worker for 
':',5,main]) completed. Took 0.052 secs.
:beam-sdks-java-io-hadoop-common:spotlessJavaCheck (Thread[Task worker for 
':',5,main]) started.

> Task :beam-sdks-java-io-hadoop-common:spotlessJavaCheck
Skipping task ':beam-sdks-java-io-hadoop-common:spotlessJavaCheck' as it has no 
actions.
:beam-sdks-java-io-hadoop-common:spotlessJavaCheck (Thread[Task worker for 
':',5,main]) completed. Took 0.0 secs.
:beam-sdks-java-io-hadoop-common:spotlessCheck (Thread[Task worker for 
':',5,main]) started.

> Task :beam-sdks-java-io-hadoop-common:spotlessCheck
Skipping task ':beam-sdks-java-io-hadoop-common:spotlessCheck' as it has no 
actions.
:beam-sdks-java-io-hadoop-common:spotlessCheck (Thread[Task worker for 
':',5,main]) completed. Took 0.0 secs.
:beam-sdks-java-io-hadoop-common:test (Thread[Task worker for ':',5,main]) 
started.
Gradle Test Executor 142 started executing tests.
Gradle Test Executor 142 finished executing tests.

> Task :beam-sdks-java-io-hadoop-common:test
Build cache key for task ':beam-sdks-java-io-hadoop-common:test' is 
7423a07ba8373b08d3f6a2bf5c64d815
Task ':beam-sdks-java-io-hadoop-common:test' is not up-to-date because:
  No history is available.
Starting process 'Gradle Test Executor 142'. Working directory: 

 Command: /usr/local/asfpackages/java/jdk1.8.0_172/bin/java 
-Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager
 -Dorg.gradle.native=false -Dfile.encoding=UTF-8 -Duser.country=US 
-Duser.language=en -Duser.variant -ea -cp 
/home/jenkins/.gradle/caches/4.10.2/workerMain/gradle-worker.jar 
worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test 
Executor 142'
Successfully started process 'Gradle Test Executor 142'

org.apache.beam.sdk.io.hadoop.WritableCoderTest > 
testAutomaticRegistrationOfCoderProvider STANDARD_ERROR
log4j:WARN No appenders could be found for logger 
(org.apache.beam.sdk.coders.CoderRegistry).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for 
more info.
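The three log4j warnings above mean no log4j 1.x configuration was found on the test classpath, so the CoderRegistry logger has nowhere to write. A minimal `log4j.properties` that would silence them, assuming it is placed on the test runtime classpath (this is an illustrative fragment, not Beam's actual test configuration):

```properties
# Route everything at INFO and above to the console.
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c - %m%n
```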
Finished generating test XML results (0.0 secs) into: 

Generating HTML test report...
Finished generating test html results (0.001 secs) into: 

Packing task ':beam-sdks-java-io-hadoop-common:test'
:beam-sdks-java-io-hadoop-common:test (Thread[Task worker for ':',5,main]) 
completed. Took 1.207 secs.
:beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses
 (Thread[Task worker for ':',5,main]) started.

> Task 
> :beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses
Caching disabled for task 
':beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses':
 Caching has not been enabled for the task
Task 
':beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses'
 is not up-to-date because:
  Task has not declared any outputs despite executing actions.
:beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses
 (Thread[Task worker for ':',5,main]) completed. Took 0.001 secs.
:beam-sdks-java-io-hadoop-common:check (Thr

Build failed in Jenkins: beam_PerformanceTests_ParquetIOIT #647

2018-10-25 Thread Apache Jenkins Server
See 


Changes:

[kenn] [BEAM-5810] Jenkins emails to bui...@beam.apache.org instead of

[kenn] [BEAM-5810] Nightly release failures to bui...@beam.apache.org instead

[kenn] [BEAM-5810] Seed job failures to bui...@beam.apache.org instead of

[thw] [BEAM-5797] Ensure ExecutableStageDoFnOperator dispose is executed only

[mxm] [BEAM-2918] Add state support for batch in portable FlinkRunner

[mxm] [BEAM-4176] Enable StatefulParDo tests for PortableValidatesRunner

[25622840+adude3141] [BEAM-5176] remove preconfigured '-Xlint:deprecation' from 
compilerArgs

--
[...truncated 276.60 KB...]
INFO: 2018-10-25T18:28:08.043Z: Fusing consumer Write Parquet 
files/WriteFiles/FinalizeTempFileBundles/Reshuffle.ViaRandomKey/Values/Values/Map
 into Write Parquet 
files/WriteFiles/FinalizeTempFileBundles/Reshuffle.ViaRandomKey/Reshuffle/ExpandIterable
Oct 25, 2018 6:28:21 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-25T18:28:08.091Z: Fusing consumer Write Parquet 
files/WriteFiles/GatherTempFileResults/Reshuffle/Window.Into()/Window.Assign 
into Write Parquet files/WriteFiles/GatherTempFileResults/Add void 
key/AddKeys/Map
Oct 25, 2018 6:28:21 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-25T18:28:08.134Z: Fusing consumer Write Parquet 
files/WriteFiles/GatherTempFileResults/Reshuffle/GroupByKey/Reify into Write 
Parquet 
files/WriteFiles/GatherTempFileResults/Reshuffle/Window.Into()/Window.Assign
Oct 25, 2018 6:28:21 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-25T18:28:08.175Z: Fusing consumer Write Parquet 
files/WriteFiles/FinalizeTempFileBundles/Reshuffle.ViaRandomKey/Pair with 
random key into Write Parquet files/WriteFiles/FinalizeTempFileBundles/Finalize
Oct 25, 2018 6:28:21 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-25T18:28:08.207Z: Fusing consumer Write Parquet 
files/WriteFiles/FinalizeTempFileBundles/Finalize into Write Parquet 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Values/Values/Map
Oct 25, 2018 6:28:21 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-25T18:28:08.247Z: Fusing consumer Calculate 
hashcode/Values/Values/Map into Calculate 
hashcode/Combine.perKey(Hashing)/Combine.GroupedValues/Extract
Oct 25, 2018 6:28:21 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-25T18:28:08.292Z: Fusing consumer Write Parquet 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Reshuffle/ExpandIterable
 into Write Parquet 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Reshuffle/GroupByKey/GroupByWindow
Oct 25, 2018 6:28:21 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-25T18:28:08.379Z: Fusing consumer Write Parquet 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Values/Values/Map 
into Write Parquet 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Reshuffle/ExpandIterable
Oct 25, 2018 6:28:21 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-25T18:28:08.511Z: Fusing consumer Calculate 
hashcode/Combine.perKey(Hashing)/Combine.GroupedValues/Extract into Calculate 
hashcode/Combine.perKey(Hashing)/Combine.GroupedValues
Oct 25, 2018 6:28:21 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-25T18:28:08.560Z: Fusing consumer Calculate 
hashcode/Combine.perKey(Hashing)/Combine.GroupedValues into Calculate 
hashcode/Combine.perKey(Hashing)/GroupByKey/Read
Oct 25, 2018 6:28:21 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-25T18:28:08.608Z: Fusing consumer Calculate 
hashcode/Combine.perKey(Hashing)/GroupByKey/Reify into Calculate 
hashcode/Combine.perKey(Hashing)/GroupByKey+Calculate 
hashcode/Combine.perKey(Hashing)/Combine.GroupedValues/Partial
Oct 25, 2018 6:28:21 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-25T18:28:08.658Z: Fusing consumer Calculate 
hashcode/Combine.perKey(Hashing)/GroupByKey/Write into Calculate 
hashcode/Combine.perKey(Hashing)/GroupByKey/Reify
Oct 25, 2018 6:28:21 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-25T18:28:08.711Z: Fusing consumer Write Parquet 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Reshuffle/GroupByKey/Write
 into Write Parquet 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Reshuffle/GroupByKey/Reify
Oct 25, 2018 6:28:21 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil

Build failed in Jenkins: beam_PostCommit_Python_Verify #6377

2018-10-25 Thread Apache Jenkins Server
See 


Changes:

[mxm] [BEAM-2918] Add state support for batch in portable FlinkRunner

[mxm] [BEAM-4176] Enable StatefulParDo tests for PortableValidatesRunner

--
[...truncated 1.27 MB...]
test_delete_dir (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_delete_error (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_delete_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_exists (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_directory 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_directory_trailing_slash 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_match_file_empty 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file_error 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file_with_limits 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
:276:
 DeprecationWarning: Please use assertEqual instead.
  self.assertEquals(len(files), 1)
ok
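The `DeprecationWarning` above comes from `unittest`, where `assertEquals` is a long-deprecated alias of `assertEqual`. A minimal sketch of the fix; the class name and matched-files value here are illustrative stand-ins, not the actual Beam test:

```python
import unittest


class MatchLimitTest(unittest.TestCase):
    """Illustrative stand-in for the hadoopfilesystem match test."""

    def test_match_file_with_limits(self):
        files = ['part-00000']  # placeholder for the real match result
        # Deprecated spelling that triggers the warning above:
        #   self.assertEquals(len(files), 1)
        # Preferred spelling, warning-free:
        self.assertEqual(len(files), 1)


if __name__ == '__main__':
    unittest.main()
```

The deprecated aliases still pass, which is why the test above reports `ok` despite the warning.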
test_match_file_with_zero_limit 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_mkdirs (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_mkdirs_failed (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_open (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_open_bad_path (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_rename_directory 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_rename_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_rename_file_error 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... SKIP: This test 
still needs to be fixed on Python 3. TODO: BEAM-5627
test_scheme (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_size (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_url_join (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_url_split (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_create_bq_dataset (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_delete_bq_dataset (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_delete_table_fails_not_found 
(apache_beam.io.gcp.tests.utils_test.UtilsTest) ... SKIP: Bigquery dependencies 
are not installed.
test_delete_table_succeeds (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_big_query_legacy_sql 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_big_query_new_types 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_big_query_standard_sql 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_bigquery_read_1M_python 
(apache_beam.io.gcp.bigquery_io_read_it_test.BigqueryIOReadIT) ... SKIP: IT is 
skipped because --test-pipeline-options is not specified
get_test_rows (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: 
GCP dependencies are not installed
test_read_from_query (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... 
SKIP: GCP dependencies are not installed
test_read_from_query_sql_format 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_query_unflatten_records 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... 
SKIP: GCP dependencies are not installed
test_read_from_table_and_job_complete_retry 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table_and_multiple_pages 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table_as_tablerows 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_table_schema_without_project 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_using_both_query_and_table_fails 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) .

Jenkins build is back to normal : beam_PostCommit_Py_VR_Dataflow #1507

2018-10-25 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PreCommit_Community Metrics_Cron #7

2018-10-25 Thread Apache Jenkins Server
See 


Changes:

[kenn] [BEAM-5810] Jenkins emails to bui...@beam.apache.org instead of

[kenn] [BEAM-5810] Nightly release failures to bui...@beam.apache.org instead

[kenn] [BEAM-5810] Seed job failures to bui...@beam.apache.org instead of

[thw] [BEAM-5797] Ensure ExecutableStageDoFnOperator dispose is executed only

[mxm] [BEAM-2918] Add state support for batch in portable FlinkRunner

[mxm] [BEAM-4176] Enable StatefulParDo tests for PortableValidatesRunner

[25622840+adude3141] [BEAM-5176] remove preconfigured '-Xlint:deprecation' from 
compilerArgs

--
[...truncated 106.29 KB...]
Waiting for TCP socket on 172.19.0.1:3000 of service 'beamgrafana' (Connection 
refused (Connection refused))
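The repeated "Waiting for TCP socket" lines are a readiness probe: the job polls port 3000 until the `beamgrafana` container accepts a connection, logging each refusal. A minimal sketch of such a probe, with hypothetical names and timings (this is not the actual tool Jenkins runs):

```python
import socket
import time


def wait_for_tcp(host, port, timeout=60.0, interval=1.0):
    """Poll until a TCP connect succeeds or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection raises OSError (e.g. ConnectionRefusedError)
            # while the service is still starting up.
            with socket.create_connection((host, port), timeout=interval):
                return True  # service accepted the connection
        except OSError as err:
            print(f"Waiting for TCP socket on {host}:{port} ({err})")
            time.sleep(interval)
    return False
```

In this log the probe never succeeds, so the build eventually gives up and fails.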

Jenkins build is back to normal : beam_PostCommit_Python_VR_Flink #509

2018-10-25 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Python_VR_Flink #508

2018-10-25 Thread Apache Jenkins Server
See 


Changes:

[25622840+adude3141] [BEAM-5176] remove preconfigured '-Xlint:deprecation' from 
compilerArgs

--
[...truncated 4.46 MB...]
[flink-runner-job-server] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher@7b083cab @ 
akka://flink/user/dispatcher44eb55c9-f5ed-4234-9b3e-b8bc033e9e0d
[flink-runner-job-server] INFO org.apache.flink.runtime.minicluster.MiniCluster 
- Flink Mini Cluster started successfully
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Dispatcher 
akka://flink/user/dispatcher44eb55c9-f5ed-4234-9b3e-b8bc033e9e0d was granted 
leadership with fencing token 0419c006-9e04-4a2d-a955-e0813086d978
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Recovering all 
persisted jobs.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader 
akka://flink/user/dispatcher44eb55c9-f5ed-4234-9b3e-b8bc033e9e0d , 
session=0419c006-9e04-4a2d-a955-e0813086d978
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Submitting job 
d65c219625bb75573a8624ce642c2b9b (test_windowing_1540490462.79).
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for 
org.apache.flink.runtime.jobmaster.JobMaster at akka://flink/user/jobmanager_41 
.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Initializing job 
test_windowing_1540490462.79 (d65c219625bb75573a8624ce642c2b9b).
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Using restart strategy 
NoRestartStrategy for test_windowing_1540490462.79 
(d65c219625bb75573a8624ce642c2b9b).
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool at 
akka://flink/user/bb43b7f4-f3d4-44f9-ad41-4e94d32e19e9 .
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job recovers via 
failover strategy: full graph restart
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Running initialization on master 
for job test_windowing_1540490462.79 (d65c219625bb75573a8624ce642c2b9b).
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Successfully ran initialization 
on master in 0 ms.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - No state backend has been 
configured, using default (Memory / JobManager) MemoryStateBackend (data in 
heap memory / checkpoints to JobManager) (checkpoints: 'null', savepoints: 
'null', asynchronous: TRUE, maxStateSize: 5242880)
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.jobmaster.JobManagerRunner@46cb25f3 @ 
akka://flink/user/jobmanager_41
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.jobmaster.JobManagerRunner - JobManager runner for job 
test_windowing_1540490462.79 (d65c219625bb75573a8624ce642c2b9b) was granted 
leadership with session id 962a5c21-2450-41b3-a66e-09d1e8fa2505 at 
akka://flink/user/jobmanager_41.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Starting execution of job 
test_windowing_1540490462.79 (d65c219625bb75573a8624ce642c2b9b)
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job 
test_windowing_1540490462.79 (d65c219625bb75573a8624ce642c2b9b) switched from 
state CREATED to RUNNING.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 14Create/Impulse.None/beam:env:docker:v1:0 -> ToKeyedWorkItem (1/1) 
(ec236c84729e8e772921f846e85badc4) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 26assert_that/Create/Impulse.None/beam:env:docker:v1:0 (1/1) 
(124fc208affc5e363ef00ccafe8077b7) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - 
Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey -> 
72Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey/GroupByWindow.None/beam:env:docker:v1:0
 -> ToKeyedWorkItem (1/1) (

Build failed in Jenkins: beam_PostCommit_Py_VR_Dataflow #1506

2018-10-25 Thread Apache Jenkins Server
See 


Changes:

[mxm] [BEAM-2918] Add state support for batch in portable FlinkRunner

[mxm] [BEAM-4176] Enable StatefulParDo tests for PortableValidatesRunner

--
[...truncated 54.71 KB...]
  File 
"
 line 346, in MakeRequest
check_response_func=check_response_func)
  File 
"
 line 396, in _MakeRequestNoRetry
redirections=redirections, connection_type=connection_type)
  File 
"
 line 175, in new_request
redirections, connection_type)
  File 
"
 line 282, in request
connection_type=connection_type)
  File 
"
 line 1694, in request
(response, content) = self._request(conn, authority, uri, request_uri, 
method, body, headers, redirections, cachekey)
  File 
"
 line 1434, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, 
headers)
  File 
"
 line 1390, in _conn_request
response = conn.getresponse()
  File "/usr/lib/python2.7/httplib.py", line 1136, in getresponse
response.begin()
  File "/usr/lib/python2.7/httplib.py", line 453, in begin
version, status, reason = self._read_status()
  File "/usr/lib/python2.7/httplib.py", line 409, in _read_status
line = self.fp.readline(_MAXLINE + 1)
  File "/usr/lib/python2.7/socket.py", line 480, in readline
data = self._sock.recv(self._rbufsize)
  File "/usr/lib/python2.7/ssl.py", line 756, in recv
return self.read(buflen)
  File "/usr/lib/python2.7/ssl.py", line 643, in read
v = self._sslobj.read(len)
  File 
"
 line 276, in signalhandler
raise TimedOutException()
TimedOutException: 'test_par_do_with_multiple_outputs_and_using_yield 
(apache_beam.transforms.ptransform_test.PTransformTest)'

--
XML: 

--
Ran 16 tests in 3002.393s

FAILED (errors=1)

> Task :beam-sdks-python:validatesRunnerBatchTests FAILED
:beam-sdks-python:validatesRunnerBatchTests (Thread[Task worker for 
':',5,main]) completed. Took 50 mins 4.093 secs.
:beam-sdks-python:validatesRunnerStreamingTests (Thread[Task worker for 
':',5,main]) started.

> Task :beam-sdks-python:validatesRunnerStreamingTests
Caching disabled for task ':beam-sdks-python:validatesRunnerStreamingTests': 
Caching has not been enabled for the task
Task ':beam-sdks-python:validatesRunnerStreamingTests' is not up-to-date 
because:
  Task has not declared any outputs despite executing actions.
Starting process 'command 'sh''. Working directory: 

 Command: sh -c . 

 && ./scripts/run_postcommit.sh ValidatesRunner,'!sickbay-streaming' streaming
Successfully started process 'command 'sh''


###
# Build tarball and set pipeline options.

# Check that the script is running in a known directory.
if [[ $PWD != *sdks/python* ]]; then
  echo 'Unable to locate Apache Beam Python SDK root directory'
  exit 1
fi

# Go to the Apache Beam Python SDK root
if [[ "*sdks/python" != $PWD ]]; then
  cd $(pwd | sed 's/sdks\/python.*/sdks\/python/')
fi
pwd | sed 's/sdks\/python.*/sdks\/python/'

RUNNER=${3:-TestDataflowRunner}

# Where to store integration test outputs.
GCS_LOCATION=${4:-gs://temp-storage-for-end-to-end-tests}

PROJECT=${5:-apache-beam-testing}

# Create a tarball
python 
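The `sed 's/sdks\/python.*/sdks\/python/'` expression in the script above trims the current working directory down to the SDK root by cutting everything after the first `sdks/python` segment. The same truncation sketched in Python for clarity (illustrative only; the real script does this in shell, and the sample paths are made up):

```python
import re


def sdk_root(cwd):
    """Trim everything after the first 'sdks/python' path segment."""
    # Paths that do not contain 'sdks/python' are returned unchanged,
    # matching sed's behavior when the pattern does not match.
    return re.sub(r'(sdks/python).*', r'\1', cwd)
```

For example, `sdk_root('/x/beam/sdks/python/apache_beam')` yields `/x/beam/sdks/python`.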

Jenkins build is back to normal : beam_PostCommit_Python_Verify #6376

2018-10-25 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : beam_PostCommit_Py_VR_Dataflow #1505

2018-10-25 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : beam_PostCommit_Python_VR_Flink #506

2018-10-25 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Python_Verify #6375

2018-10-25 Thread Apache Jenkins Server
See 


Changes:

[thw] [BEAM-5797] Ensure ExecutableStageDoFnOperator dispose is executed only

--
[...truncated 1.27 MB...]
ok
test_delete_dir (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_delete_error (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_delete_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_exists (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_directory 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_directory_trailing_slash 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_match_file_empty 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file_error 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file_with_limits 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
:276:
 DeprecationWarning: Please use assertEqual instead.
  self.assertEquals(len(files), 1)
ok
test_match_file_with_zero_limit 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_mkdirs (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_mkdirs_failed (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_open (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_open_bad_path (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_rename_directory 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_rename_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_rename_file_error 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... SKIP: This test 
still needs to be fixed on Python 3. TODO: BEAM-5627
test_scheme (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_size (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_url_join (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_url_split (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_create_bq_dataset (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_delete_bq_dataset (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_delete_table_fails_not_found 
(apache_beam.io.gcp.tests.utils_test.UtilsTest) ... SKIP: Bigquery dependencies 
are not installed.
test_delete_table_succeeds (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_big_query_legacy_sql 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_big_query_new_types 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_big_query_standard_sql 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_bigquery_read_1M_python 
(apache_beam.io.gcp.bigquery_io_read_it_test.BigqueryIOReadIT) ... SKIP: IT is 
skipped because --test-pipeline-options is not specified
get_test_rows (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: 
GCP dependencies are not installed
test_read_from_query (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... 
SKIP: GCP dependencies are not installed
test_read_from_query_sql_format 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_query_unflatten_records 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... 
SKIP: GCP dependencies are not installed
test_read_from_table_and_job_complete_retry 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table_and_multiple_pages 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table_as_tablerows 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_table_schema_without_project 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_using_both_query_and_table_fails 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_using_neither

Build failed in Jenkins: beam_PostCommit_Py_VR_Dataflow #1504

2018-10-25 Thread Apache Jenkins Server
See 


Changes:

[thw] [BEAM-5797] Ensure ExecutableStageDoFnOperator dispose is executed only

--
[...truncated 52.79 KB...]
OK
:beam-sdks-python:validatesRunnerBatchTests (Thread[Task worker for ':' Thread 
3,5,main]) completed. Took 17 mins 8.121 secs.
:beam-sdks-python:validatesRunnerStreamingTests (Thread[Task worker for ':' 
Thread 3,5,main]) started.

> Task :beam-sdks-python:validatesRunnerStreamingTests
Caching disabled for task ':beam-sdks-python:validatesRunnerStreamingTests': 
Caching has not been enabled for the task
Task ':beam-sdks-python:validatesRunnerStreamingTests' is not up-to-date 
because:
  Task has not declared any outputs despite executing actions.
Starting process 'command 'sh''. Working directory: 

 Command: sh -c . 

 && ./scripts/run_postcommit.sh ValidatesRunner,'!sickbay-streaming' streaming
Successfully started process 'command 'sh''


###
# Build tarball and set pipeline options.

# Check that the script is running in a known directory.
if [[ $PWD != *sdks/python* ]]; then
  echo 'Unable to locate Apache Beam Python SDK root directory'
  exit 1
fi

# Go to the Apache Beam Python SDK root
if [[ "*sdks/python" != $PWD ]]; then
  cd $(pwd | sed 's/sdks\/python.*/sdks\/python/')
fi
pwd | sed 's/sdks\/python.*/sdks\/python/'
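The sed substitution traced above trims everything after sdks/python to locate the SDK root from any subdirectory. A minimal standalone sketch of that idiom (the example path below is hypothetical, not taken from this build):

```shell
#!/usr/bin/env bash
# Trim everything after "sdks/python" from a working-directory path,
# mirroring the sed idiom used by run_postcommit.sh to find the SDK root.
path="/home/jenkins/workspace/beam/sdks/python/apache_beam/io"
root=$(echo "$path" | sed 's/sdks\/python.*/sdks\/python/')
echo "$root"   # /home/jenkins/workspace/beam/sdks/python
```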

RUNNER=${3:-TestDataflowRunner}

# Where to store integration test outputs.
GCS_LOCATION=${4:-gs://temp-storage-for-end-to-end-tests}

PROJECT=${5:-apache-beam-testing}

# Create a tarball
python setup.py -q sdist
:398:
 UserWarning: Normalizing '2.9.0.dev' to '2.9.0.dev0'
  normalized_version,
warning: no files found matching 'README.md'
warning: no files found matching 'NOTICE'
warning: no files found matching 'LICENSE'
warning: cmd: standard file not found: should have one of README, README.rst, 
README.txt, README.md


SDK_LOCATION=$(find dist/apache-beam-*.tar.gz)
find dist/apache-beam-*.tar.gz

# Install test dependencies for ValidatesRunner tests.
echo "pyhamcrest" > postcommit_requirements.txt
echo "mock" >> postcommit_requirements.txt

# Options used to run testing pipeline on Cloud Dataflow Service. Also used for
# running on DirectRunner (some options ignored).
PIPELINE_OPTIONS=(
  "--runner=$RUNNER"
  "--project=$PROJECT"
  "--staging_location=$GCS_LOCATION/staging-it"
  "--temp_location=$GCS_LOCATION/temp-it"
  "--output=$GCS_LOCATION/py-it-cloud/output"
  "--sdk_location=$SDK_LOCATION"
  "--requirements_file=postcommit_requirements.txt"
  "--num_workers=1"
  "--sleep_secs=20"
)
>>> Set test pipeline to streaming

# Add streaming flag if specified.
if [[ "$2" = "streaming" ]]; then
  echo ">>> Set test pipeline to streaming"
  PIPELINE_OPTIONS+=("--streaming")
else
  echo ">>> Set test pipeline to batch"
fi
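The if-block above appends --streaming to the options array only when the second script argument is "streaming". A self-contained sketch of that bash-array idiom (MODE and the option values here are hypothetical stand-ins for "$2" and the real options):

```shell
#!/usr/bin/env bash
# Conditionally extend a bash array, as the script does with --streaming.
MODE="streaming"   # stand-in for "$2"
PIPELINE_OPTIONS=("--runner=TestDataflowRunner" "--num_workers=1")
if [[ "$MODE" = "streaming" ]]; then
  PIPELINE_OPTIONS+=("--streaming")
fi
echo "${PIPELINE_OPTIONS[@]}"   # --runner=TestDataflowRunner --num_workers=1 --streaming
```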

TESTS=""
if [[ "$3" = "TestDirectRunner" ]]; then
  if [[ "$2" = "streaming" ]]; then
TESTS="--tests=\
apache_beam.examples.wordcount_it_test:WordCountIT.test_wordcount_it,\
apache_beam.io.gcp.pubsub_integration_test:PubSubIntegrationTest"
  else
TESTS="--tests=\
apache_beam.examples.wordcount_it_test:WordCountIT.test_wordcount_it,\
apache_beam.io.gcp.pubsub_integration_test:PubSubIntegrationTest,\
apache_beam.io.gcp.big_query_query_to_table_it_test:BigQueryQueryToTableIT,\
apache_beam.io.gcp.bigquery_io_read_it_test"
  fi
fi

###
# Run tests and validate that jobs finish successfully.

JOINED_OPTS=$(IFS=" " ; echo "${PIPELINE_OPTIONS[*]}")
IFS=" " ; echo "${PIPELINE_OPTIONS[*]}"

echo ">>> RUNNING $RUNNER $1 tests"
>>> RUNNING TestDataflowRunner ValidatesRunner,!sickbay-streaming tests
python setup.py nosetests \
  --attr $1 \
  --nologcapture \
  --processes=8 \
  --process-timeout=3000 \
  --test-pipeline-options="$JOINED_OPTS" \
  $TESTS
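The JOINED_OPTS assignment earlier in the script collapses the options array into one space-separated string so it can be passed as a single --test-pipeline-options value. A minimal sketch of that IFS-join idiom (option values hypothetical):

```shell
#!/usr/bin/env bash
# Join array elements with a space by setting IFS inside the $( ) subshell,
# so the caller's IFS is left untouched.
PIPELINE_OPTIONS=("--runner=TestDataflowRunner" "--num_workers=1")
JOINED_OPTS=$(IFS=" " ; echo "${PIPELINE_OPTIONS[*]}")
echo "$JOINED_OPTS"   # --runner=TestDataflowRunner --num_workers=1
```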
:398:
 UserWarning: Normalizing '2.9.0.dev' to '2.9.0.dev0'
  normalized_version,
running nosetests
running egg_info
writing requirements to apache_beam.egg-info/requires.txt
writing apache_beam.egg-info/PKG-INFO
writing top-level names to apache_beam.egg-info/top_level.txt
writing dependency_links to apache_beam.egg-info/dependency_links.txt
writing entry points to apache_beam.egg-info/entry_points.txt
reading manifest file 'apache_beam.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'READM

Build failed in Jenkins: beam_PostCommit_Python_VR_Flink #505

2018-10-25 Thread Apache Jenkins Server
See 


Changes:

[thw] [BEAM-5797] Ensure ExecutableStageDoFnOperator dispose is executed only

--
[...truncated 4.36 MB...]
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - No state backend has been 
configured, using default (Memory / JobManager) MemoryStateBackend (data in 
heap memory / checkpoints to JobManager) (checkpoints: 'null', savepoints: 
'null', asynchronous: TRUE, maxStateSize: 5242880)
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.jobmaster.JobManagerRunner@7299cdbf @ 
akka://flink/user/jobmanager_41
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobManagerRunner - JobManager runner for job 
test_windowing_1540476237.42 (b565b58ca1b76a3cd5e43444c5de2fe8) was granted 
leadership with session id 225699cd-e755-43ab-825e-599471f66e6b at 
akka://flink/user/jobmanager_41.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Starting execution of job 
test_windowing_1540476237.42 (b565b58ca1b76a3cd5e43444c5de2fe8)
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job 
test_windowing_1540476237.42 (b565b58ca1b76a3cd5e43444c5de2fe8) switched from 
state CREATED to RUNNING.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 14Create/Impulse.None/beam:env:docker:v1:0 -> ToKeyedWorkItem (1/1) 
(59f85a8ba50b42f876ab43ba40ff4e48) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 26assert_that/Create/Impulse.None/beam:env:docker:v1:0 (1/1) 
(ab504f3dc03d6d309ec090a3ffafb010) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - 
Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey -> 
72Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey/GroupByWindow.None/beam:env:docker:v1:0
 -> ToKeyedWorkItem (1/1) (ee756671bf088e72169673b65020a82e) switched from 
CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - GroupByKey -> 
24GroupByKey/GroupByWindow.None/beam:env:docker:v1:0 (1/1) 
(795104759c9cf32f78fe28a3f6d9f710) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - ToKeyedWorkItem (1/1) 
(f895dc5db955894a2fede185fcae1dbf) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - 
assert_that/Group/GroupByKey -> 
42assert_that/Group/GroupByKey/GroupByWindow.None/beam:env:docker:v1:0 (1/1) 
(3ced3cfde191083b7de6397d6cf2e47a) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool - Cannot serve slot 
request, no ResourceManager connected. Adding as pending request 
[SlotRequestId{d0337d2e629c042a8df9222664e9fb17}]
[jobmanager-future-thread-1] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader 
akka://flink/user/jobmanager_41 , session=225699cd-e755-43ab-825e-599471f66e6b
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Connecting to ResourceManager 
akka://flink/user/resourcemanager_5868269a-e523-46c7-851e-418fc62c3ecb(a7c4df8b6a08b7f72f1cd03641d34cba)
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Resolved ResourceManager 
address, beginning registration
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Registration at ResourceManager 
attempt 1 (timeout=100ms)
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - 
Registering job manager 
825e599471f66e6b225699cde75543ab@akka://flink/user/jobmanager_41 for job 
b565b58ca1b76a3cd5e43444c5de2fe8.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Registered 
job manager 825e599471f66e6b225699cde75543ab@akka://flink/user/jobmanager_41 
for job b565b58ca1b76a3cd5e43444c5de2fe8.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - JobManager successfully 
registered at ResourceManager, leader id: a7c4df8b6a08b7f72f1cd03641d34cba.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool - Requesting new slot 

Build failed in Jenkins: beam_PostCommit_Python_Verify #6374

2018-10-25 Thread Apache Jenkins Server
See 


--
[...truncated 1.27 MB...]
ok
test_delete_dir (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_delete_error (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_delete_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_exists (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_directory 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_directory_trailing_slash 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_match_file_empty 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file_error 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file_with_limits 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
:276:
 DeprecationWarning: Please use assertEqual instead.
  self.assertEquals(len(files), 1)
ok
test_match_file_with_zero_limit 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_mkdirs (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_mkdirs_failed (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_open (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_open_bad_path (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_rename_directory 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_rename_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_rename_file_error 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... SKIP: This test 
still needs to be fixed on Python 3. TODO: BEAM-5627
test_scheme (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_size (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_url_join (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_url_split (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_create_bq_dataset (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_delete_bq_dataset (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_delete_table_fails_not_found 
(apache_beam.io.gcp.tests.utils_test.UtilsTest) ... SKIP: Bigquery dependencies 
are not installed.
test_delete_table_succeeds (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_big_query_legacy_sql 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_big_query_new_types 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_big_query_standard_sql 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_bigquery_read_1M_python 
(apache_beam.io.gcp.bigquery_io_read_it_test.BigqueryIOReadIT) ... SKIP: IT is 
skipped because --test-pipeline-options is not specified
get_test_rows (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: 
GCP dependencies are not installed
test_read_from_query (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... 
SKIP: GCP dependencies are not installed
test_read_from_query_sql_format 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_query_unflatten_records 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... 
SKIP: GCP dependencies are not installed
test_read_from_table_and_job_complete_retry 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table_and_multiple_pages 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table_as_tablerows 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_table_schema_without_project 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_using_both_query_and_table_fails 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_using_neither_query_nor_table_fails 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependenc

Build failed in Jenkins: beam_PostCommit_Python_VR_Flink #504

2018-10-25 Thread Apache Jenkins Server
See 


--
[...truncated 4.35 MB...]
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader http://localhost:39963 , 
session=2712e094-7ce6-4d1c-b8c7-914214a9b422
[flink-runner-job-server] INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService 
- Starting RPC endpoint for 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher at 
akka://flink/user/dispatcher10696ae0-02bb-4748-8012-da14976ffe23 .
[flink-runner-job-server] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher@59449535 @ 
akka://flink/user/dispatcher10696ae0-02bb-4748-8012-da14976ffe23
[flink-runner-job-server] INFO org.apache.flink.runtime.minicluster.MiniCluster 
- Flink Mini Cluster started successfully
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Dispatcher 
akka://flink/user/dispatcher10696ae0-02bb-4748-8012-da14976ffe23 was granted 
leadership with fencing token 5109b41c-e187-40a8-9d57-4e6c45299d47
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Recovering all 
persisted jobs.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader 
akka://flink/user/dispatcher10696ae0-02bb-4748-8012-da14976ffe23 , 
session=5109b41c-e187-40a8-9d57-4e6c45299d47
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Submitting job 
eb6347f1b80f5c82b26541c3245316b3 (test_windowing_1540469445.37).
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for 
org.apache.flink.runtime.jobmaster.JobMaster at akka://flink/user/jobmanager_41 
.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Initializing job 
test_windowing_1540469445.37 (eb6347f1b80f5c82b26541c3245316b3).
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Using restart strategy 
NoRestartStrategy for test_windowing_1540469445.37 
(eb6347f1b80f5c82b26541c3245316b3).
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool at 
akka://flink/user/957bbdea-e52f-46b0-8371-0b109abae3bc .
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job recovers via 
failover strategy: full graph restart
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Running initialization on master 
for job test_windowing_1540469445.37 (eb6347f1b80f5c82b26541c3245316b3).
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Successfully ran initialization 
on master in 0 ms.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - No state backend has been 
configured, using default (Memory / JobManager) MemoryStateBackend (data in 
heap memory / checkpoints to JobManager) (checkpoints: 'null', savepoints: 
'null', asynchronous: TRUE, maxStateSize: 5242880)
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.jobmaster.JobManagerRunner@4e8da7b9 @ 
akka://flink/user/jobmanager_41
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobManagerRunner - JobManager runner for job 
test_windowing_1540469445.37 (eb6347f1b80f5c82b26541c3245316b3) was granted 
leadership with session id 0ebc06f7-b003-41c4-a0a5-89ac562f91ed at 
akka://flink/user/jobmanager_41.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Starting execution of job 
test_windowing_1540469445.37 (eb6347f1b80f5c82b26541c3245316b3)
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job 
test_windowing_1540469445.37 (eb6347f1b80f5c82b26541c3245316b3) switched from 
state CREATED to RUNNING.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 14Create/Impulse.None/beam:env:docker:v1:0 -> ToKeyedWorkItem (1/1) 
(d8ad573df7a2bc2e9b17b4f07621de35) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 26assert_that/Create/Impulse.None/beam:env:docker:v1:0 (1/1) 

Jenkins build is back to normal : beam_PostCommit_Python_Verify #6372

2018-10-25 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Python_Verify #6371

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[lcwik] [BEAM-5098] Add withFanout side input regression test (#6724)

--
[...truncated 1.28 MB...]
ok
test_delete_dir (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_delete_error (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_delete_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_exists (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_directory 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_directory_trailing_slash 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_match_file_empty 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file_error 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file_with_limits 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
:276:
 DeprecationWarning: Please use assertEqual instead.
  self.assertEquals(len(files), 1)
ok
test_match_file_with_zero_limit 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_mkdirs (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_mkdirs_failed (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_open (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_open_bad_path (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_rename_directory 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_rename_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_rename_file_error 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... SKIP: This test 
still needs to be fixed on Python 3. TODO: BEAM-5627
test_scheme (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_size (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_url_join (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_url_split (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_create_bq_dataset (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_delete_bq_dataset (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_delete_table_fails_not_found 
(apache_beam.io.gcp.tests.utils_test.UtilsTest) ... SKIP: Bigquery dependencies 
are not installed.
test_delete_table_succeeds (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_big_query_legacy_sql 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_big_query_new_types 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_big_query_standard_sql 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_bigquery_read_1M_python 
(apache_beam.io.gcp.bigquery_io_read_it_test.BigqueryIOReadIT) ... SKIP: IT is 
skipped because --test-pipeline-options is not specified
get_test_rows (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: 
GCP dependencies are not installed
test_read_from_query (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... 
SKIP: GCP dependencies are not installed
test_read_from_query_sql_format 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_query_unflatten_records 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... 
SKIP: GCP dependencies are not installed
test_read_from_table_and_job_complete_retry 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table_and_multiple_pages 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table_as_tablerows 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_table_schema_without_project 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_using_both_query_and_table_fails 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_using_neither_query_n

Jenkins build is back to normal : beam_PostCommit_Py_VR_Dataflow #1500

2018-10-24 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : beam_PostCommit_Python_VR_Flink #501

2018-10-24 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Python_Verify #6370

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[lcwik] Use Collections isEmpty() instead of size() == 0 (#6720)

[lcwik] [BEAM-5855] Remove duplicated code files from the Dataflow worker

[lcwik] [BEAM-5496] Fixes bug of MqttIO fails to deserialize checkpoint (#6701)

--
[...truncated 1.27 MB...]
ok
test_delete_dir (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_delete_error (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_delete_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_exists (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_directory 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_directory_trailing_slash 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_match_file_empty 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file_error 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file_with_limits 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
:276:
 DeprecationWarning: Please use assertEqual instead.
  self.assertEquals(len(files), 1)
ok
test_match_file_with_zero_limit 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_mkdirs (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_mkdirs_failed (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_open (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_open_bad_path (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_rename_directory 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_rename_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_rename_file_error 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... SKIP: This test 
still needs to be fixed on Python 3. TODO: BEAM-5627
test_scheme (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_size (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_url_join (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_url_split (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_create_bq_dataset (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_delete_bq_dataset (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_delete_table_fails_not_found 
(apache_beam.io.gcp.tests.utils_test.UtilsTest) ... SKIP: Bigquery dependencies 
are not installed.
test_delete_table_succeeds (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_big_query_legacy_sql 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_big_query_new_types 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_big_query_standard_sql 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_bigquery_read_1M_python 
(apache_beam.io.gcp.bigquery_io_read_it_test.BigqueryIOReadIT) ... SKIP: IT is 
skipped because --test-pipeline-options is not specified
get_test_rows (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: 
GCP dependencies are not installed
test_read_from_query (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... 
SKIP: GCP dependencies are not installed
test_read_from_query_sql_format 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_query_unflatten_records 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... 
SKIP: GCP dependencies are not installed
test_read_from_table_and_job_complete_retry 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table_and_multiple_pages 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table_as_tablerows 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_table_schema_without_project 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_using_bot

Build failed in Jenkins: beam_PostCommit_Py_VR_Dataflow #1499

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[lcwik] [BEAM-5496] Fixes bug of MqttIO fails to deserialize checkpoint (#6701)

--
[...truncated 51.32 KB...]
OK
:beam-sdks-python:validatesRunnerBatchTests (Thread[Task worker for 
':',5,main]) completed. Took 17 mins 30.632 secs.
:beam-sdks-python:validatesRunnerStreamingTests (Thread[Task worker for 
':',5,main]) started.

> Task :beam-sdks-python:validatesRunnerStreamingTests
Caching disabled for task ':beam-sdks-python:validatesRunnerStreamingTests': 
Caching has not been enabled for the task
Task ':beam-sdks-python:validatesRunnerStreamingTests' is not up-to-date 
because:
  Task has not declared any outputs despite executing actions.
Starting process 'command 'sh''. Working directory: 

 Command: sh -c . 

 && ./scripts/run_postcommit.sh ValidatesRunner,'!sickbay-streaming' streaming
Successfully started process 'command 'sh''


###
# Build tarball and set pipeline options.

# Check that the script is running in a known directory.
if [[ $PWD != *sdks/python* ]]; then
  echo 'Unable to locate Apache Beam Python SDK root directory'
  exit 1
fi

# Go to the Apache Beam Python SDK root
if [[ "*sdks/python" != $PWD ]]; then
  cd $(pwd | sed 's/sdks\/python.*/sdks\/python/')
fi
pwd | sed 's/sdks\/python.*/sdks\/python/'

RUNNER=${3:-TestDataflowRunner}

# Where to store integration test outputs.
GCS_LOCATION=${4:-gs://temp-storage-for-end-to-end-tests}

PROJECT=${5:-apache-beam-testing}

# Create a tarball
python setup.py -q sdist
:398:
 UserWarning: Normalizing '2.9.0.dev' to '2.9.0.dev0'
  normalized_version,
warning: no files found matching 'README.md'
warning: no files found matching 'NOTICE'
warning: no files found matching 'LICENSE'
warning: cmd: standard file not found: should have one of README, README.rst, 
README.txt, README.md


SDK_LOCATION=$(find dist/apache-beam-*.tar.gz)
find dist/apache-beam-*.tar.gz

# Install test dependencies for ValidatesRunner tests.
echo "pyhamcrest" > postcommit_requirements.txt
echo "mock" >> postcommit_requirements.txt

# Options used to run testing pipeline on Cloud Dataflow Service. Also used for
# running on DirectRunner (some options ignored).
PIPELINE_OPTIONS=(
  "--runner=$RUNNER"
  "--project=$PROJECT"
  "--staging_location=$GCS_LOCATION/staging-it"
  "--temp_location=$GCS_LOCATION/temp-it"
  "--output=$GCS_LOCATION/py-it-cloud/output"
  "--sdk_location=$SDK_LOCATION"
  "--requirements_file=postcommit_requirements.txt"
  "--num_workers=1"
  "--sleep_secs=20"
)
>>> Set test pipeline to streaming

# Add streaming flag if specified.
if [[ "$2" = "streaming" ]]; then
  echo ">>> Set test pipeline to streaming"
  PIPELINE_OPTIONS+=("--streaming")
else
  echo ">>> Set test pipeline to batch"
fi
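[Editor's note: the `PIPELINE_OPTIONS+=("--streaming")` branch above is the standard bash idiom for conditionally appending to an array. A small self-contained sketch, assuming bash and a hypothetical MODE variable standing in for "$2":]

```shell
# Conditionally append a flag to a bash array (requires bash, not plain sh).
MODE="streaming"                     # hypothetical stand-in for "$2"
PIPELINE_OPTIONS=("--num_workers=1")
if [[ "$MODE" == "streaming" ]]; then
  PIPELINE_OPTIONS+=("--streaming")  # append preserves existing elements
fi
echo "${PIPELINE_OPTIONS[@]}"   # --num_workers=1 --streaming
```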

TESTS=""
if [[ "$3" = "TestDirectRunner" ]]; then
  if [[ "$2" = "streaming" ]]; then
TESTS="--tests=\
apache_beam.examples.wordcount_it_test:WordCountIT.test_wordcount_it,\
apache_beam.io.gcp.pubsub_integration_test:PubSubIntegrationTest"
  else
TESTS="--tests=\
apache_beam.examples.wordcount_it_test:WordCountIT.test_wordcount_it,\
apache_beam.io.gcp.pubsub_integration_test:PubSubIntegrationTest,\
apache_beam.io.gcp.big_query_query_to_table_it_test:BigQueryQueryToTableIT,\
apache_beam.io.gcp.bigquery_io_read_it_test"
  fi
fi

###
# Run tests and validate that jobs finish successfully.

JOINED_OPTS=$(IFS=" " ; echo "${PIPELINE_OPTIONS[*]}")
IFS=" " ; echo "${PIPELINE_OPTIONS[*]}"
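[Editor's note: the JOINED_OPTS line above collapses the options array into one space-separated string before handing it to --test-pipeline-options. A sketch of the IFS-join idiom with hypothetical option values, assuming bash:]

```shell
# Join a bash array into a single space-separated string: setting IFS
# inside the command substitution controls the separator used by "${arr[*]}".
OPTS=("--runner=TestRunner" "--num_workers=1" "--streaming")
JOINED=$(IFS=" "; echo "${OPTS[*]}")
echo "$JOINED"   # --runner=TestRunner --num_workers=1 --streaming
```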

echo ">>> RUNNING $RUNNER $1 tests"
>>> RUNNING TestDataflowRunner ValidatesRunner,!sickbay-streaming tests
python setup.py nosetests \
  --attr $1 \
  --nologcapture \
  --processes=8 \
  --process-timeout=3000 \
  --test-pipeline-options="$JOINED_OPTS" \
  $TESTS
:398:
 UserWarning: Normalizing '2.9.0.dev' to '2.9.0.dev0'
  normalized_version,
running nosetests
running egg_info
writing requirements to apache_beam.egg-info/requires.txt
writing apache_beam.egg-info/PKG-INFO
writing top-level names to apache_beam.egg-info/top_level.txt
writing dependency_links to apache_beam.egg-info/dependency_links.txt
writing entry points to apache_beam.egg-info/entry_points.txt
reading manifest file 'apache_beam.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'README.md'
warning: 

Jenkins build is back to normal : beam_PostCommit_Java_GradleBuild #1750

2018-10-24 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Python_VR_Flink #500

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[lcwik] [BEAM-5496] Fixes bug of MqttIO fails to deserialize checkpoint (#6701)

--
[...truncated 4.39 MB...]
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Starting execution of job 
test_windowing_1540433380.9 (4f9c1d2c8caa5ce100213939f819b1fa)
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job 
test_windowing_1540433380.9 (4f9c1d2c8caa5ce100213939f819b1fa) switched from 
state CREATED to RUNNING.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 14Create/Impulse.None/beam:env:docker:v1:0 -> ToKeyedWorkItem (1/1) 
(35e1f822be0322ac303bd813bb38abb8) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 26assert_that/Create/Impulse.None/beam:env:docker:v1:0 (1/1) 
(5648baa078c03b4d460f23f9b4f9792b) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - 
Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey -> 
72Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey/GroupByWindow.None/beam:env:docker:v1:0
 -> ToKeyedWorkItem (1/1) (188749c7eddfba038e356550a878d76f) switched from 
CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - GroupByKey -> 
24GroupByKey/GroupByWindow.None/beam:env:docker:v1:0 (1/1) 
(edecc97ebe3a288ade5752bb96c498ba) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - ToKeyedWorkItem (1/1) 
(b71b684f93c2f07141176822492ea532) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - 
assert_that/Group/GroupByKey -> 
42assert_that/Group/GroupByKey/GroupByWindow.None/beam:env:docker:v1:0 (1/1) 
(0767a85266bd56af480463244e06787f) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool - Cannot serve slot 
request, no ResourceManager connected. Adding as pending request 
[SlotRequestId{28a35ae2c4d875f0d38a2cbd6e87b513}]
[jobmanager-future-thread-1] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader 
akka://flink/user/jobmanager_41 , session=01db02c1-17aa-4795-9258-7a8f87888944
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Connecting to ResourceManager 
akka://flink/user/resourcemanager_ca7d4406-a4eb-4ef0-9261-09eacf8d6205(9bdc25c47e5449a87141fc15f0d548a2)
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Resolved ResourceManager 
address, beginning registration
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Registration at ResourceManager 
attempt 1 (timeout=100ms)
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - 
Registering job manager 
92587a8f8788894401db02c117aa4795@akka://flink/user/jobmanager_41 for job 
4f9c1d2c8caa5ce100213939f819b1fa.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Registered 
job manager 92587a8f8788894401db02c117aa4795@akka://flink/user/jobmanager_41 
for job 4f9c1d2c8caa5ce100213939f819b1fa.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - JobManager successfully 
registered at ResourceManager, leader id: 9bdc25c47e5449a87141fc15f0d548a2.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool - Requesting new slot 
[SlotRequestId{28a35ae2c4d875f0d38a2cbd6e87b513}] and profile 
ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0, 
nativeMemoryInMB=0, networkMemoryInMB=0} from resource manager.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Request 
slot with profile ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, 
directMemoryInMB=0, nativeMemoryInMB=0, networkMemoryInMB=0} for job 
4f9c1d2c8caa5ce100213939f819b1fa with allocation id 
AllocationID{57b7f74ee3dfbc85af35702f5c10a7cc}.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.taskexecutor.TaskExecutor - Receive slot request 
AllocationID{57b7f74ee3dfbc85af35702f5c10a7cc} for job 
4f9c1d2c8caa5ce100213939f819b1fa from resource manager with leader id 
9bdc25c47e5449a87141fc15f0d548a2.
[flink-akka.actor.default-dispatcher-5] INF

Build failed in Jenkins: beam_PostCommit_Python_VR_Flink #499

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[lcwik] Use Collections isEmpty() instead of size() == 0 (#6720)

[lcwik] [BEAM-5855] Remove duplicated code files from the Dataflow worker

--
[...truncated 4.35 MB...]
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Starting execution of job 
test_windowing_1540432725.36 (be6de2d763f2be5046f2ec5cc2978b06)
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job 
test_windowing_1540432725.36 (be6de2d763f2be5046f2ec5cc2978b06) switched from 
state CREATED to RUNNING.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 14Create/Impulse.None/beam:env:docker:v1:0 -> ToKeyedWorkItem (1/1) 
(15ced1fd22b67e609f1b5fbd32375f1a) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 26assert_that/Create/Impulse.None/beam:env:docker:v1:0 (1/1) 
(c0b081ea6b79be536a1dd10b36431795) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool - Cannot serve slot 
request, no ResourceManager connected. Adding as pending request 
[SlotRequestId{c509110f5454c5107796910db89601c3}]
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - 
Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey -> 
72Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey/GroupByWindow.None/beam:env:docker:v1:0
 -> ToKeyedWorkItem (1/1) (cb0c54fdb24403944b32123f4d2963e3) switched from 
CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - GroupByKey -> 
24GroupByKey/GroupByWindow.None/beam:env:docker:v1:0 (1/1) 
(a4d5f1504393bff297202f4917650939) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - ToKeyedWorkItem (1/1) 
(be688c4c9c34e3ffa843458f65ad7330) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - 
assert_that/Group/GroupByKey -> 
42assert_that/Group/GroupByKey/GroupByWindow.None/beam:env:docker:v1:0 (1/1) 
(d79da1d331debe37acda9eff2dab9209) switched from CREATED to SCHEDULED.
[jobmanager-future-thread-1] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader 
akka://flink/user/jobmanager_41 , session=a0103fe2-a0a9-42b9-90a2-c08de799e470
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Connecting to ResourceManager 
akka://flink/user/resourcemanager_0907d595-cd46-4bb5-9b8d-b1bb94e4d960(a8806cd114b41fdfcd08a5119fc84a03)
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Resolved ResourceManager 
address, beginning registration
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Registration at ResourceManager 
attempt 1 (timeout=100ms)
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - 
Registering job manager 
90a2c08de799e470a0103fe2a0a942b9@akka://flink/user/jobmanager_41 for job 
be6de2d763f2be5046f2ec5cc2978b06.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Registered 
job manager 90a2c08de799e470a0103fe2a0a942b9@akka://flink/user/jobmanager_41 
for job be6de2d763f2be5046f2ec5cc2978b06.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - JobManager successfully 
registered at ResourceManager, leader id: a8806cd114b41fdfcd08a5119fc84a03.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool - Requesting new slot 
[SlotRequestId{c509110f5454c5107796910db89601c3}] and profile 
ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0, 
nativeMemoryInMB=0, networkMemoryInMB=0} from resource manager.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Request 
slot with profile ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, 
directMemoryInMB=0, nativeMemoryInMB=0, networkMemoryInMB=0} for job 
be6de2d763f2be5046f2ec5cc2978b06 with allocation id 
AllocationID{4515130fe4799061955778275db87c99}.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.taskexecutor.TaskExecutor - Receive slot request 
AllocationID{4515130fe4799061955778275db87c99} for job 
be6de2d763f2be5046f2ec5cc2978b06 from resource manager with leader id 
a8806cd114b41fd

Jenkins build is back to normal : beam_PostCommit_Py_VR_Dataflow #1497

2018-10-24 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : beam_PreCommit_Java_Cron #503

2018-10-24 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : beam_PerformanceTests_XmlIOIT #915

2018-10-24 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Py_VR_Dataflow #1496

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[bshu] Add random int to table name in big_query_query_to_table_it_test.py.

[github] Update big_query_query_to_table_it_test.py

--
[...truncated 54.62 KB...]
  File 
"
 line 346, in MakeRequest
check_response_func=check_response_func)
  File 
"
 line 396, in _MakeRequestNoRetry
redirections=redirections, connection_type=connection_type)
  File 
"
 line 175, in new_request
redirections, connection_type)
  File 
"
 line 282, in request
connection_type=connection_type)
  File 
"
 line 1694, in request
(response, content) = self._request(conn, authority, uri, request_uri, 
method, body, headers, redirections, cachekey)
  File 
"
 line 1434, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, 
headers)
  File 
"
 line 1390, in _conn_request
response = conn.getresponse()
  File "/usr/lib/python2.7/httplib.py", line 1136, in getresponse
response.begin()
  File "/usr/lib/python2.7/httplib.py", line 453, in begin
version, status, reason = self._read_status()
  File "/usr/lib/python2.7/httplib.py", line 409, in _read_status
line = self.fp.readline(_MAXLINE + 1)
  File "/usr/lib/python2.7/socket.py", line 480, in readline
data = self._sock.recv(self._rbufsize)
  File "/usr/lib/python2.7/ssl.py", line 756, in recv
return self.read(buflen)
  File "/usr/lib/python2.7/ssl.py", line 643, in read
v = self._sslobj.read(len)
  File 
"
 line 276, in signalhandler
raise TimedOutException()
TimedOutException: 'test_as_list_twice 
(apache_beam.transforms.sideinputs_test.SideInputsTest)'

--
XML: 

--
Ran 16 tests in 3443.345s

FAILED (errors=1)

> Task :beam-sdks-python:validatesRunnerBatchTests FAILED
:beam-sdks-python:validatesRunnerBatchTests (Thread[Task worker for 
':',5,main]) completed. Took 57 mins 25.161 secs.
:beam-sdks-python:validatesRunnerStreamingTests (Thread[Task worker for 
':',5,main]) started.

> Task :beam-sdks-python:validatesRunnerStreamingTests
Caching disabled for task ':beam-sdks-python:validatesRunnerStreamingTests': 
Caching has not been enabled for the task
Task ':beam-sdks-python:validatesRunnerStreamingTests' is not up-to-date 
because:
  Task has not declared any outputs despite executing actions.
Starting process 'command 'sh''. Working directory: 

 Command: sh -c . 

 && ./scripts/run_postcommit.sh ValidatesRunner,'!sickbay-streaming' streaming
Successfully started process 'command 'sh''


###
# Build tarball and set pipeline options.

# Check that the script is running in a known directory.
if [[ $PWD != *sdks/python* ]]; then
  echo 'Unable to locate Apache Beam Python SDK root directory'
  exit 1
fi

# Go to the Apache Beam Python SDK root
if [[ "*sdks/python" != $PWD ]]; then
  cd $(pwd | sed 's/sdks\/python.*/sdks\/python/')
fi
pwd | sed 's/sdks\/python.*/sdks\/python/'

RUNNER=${3:-TestDataflowRunner}

# Where to store integration test outputs.
GCS_LOCATION=${4:-gs://temp-storage-for-end-to-end-tests}

PROJECT=${5:-apache-beam-testing}

# Create a tarball
python setup.py -q sdist


Jenkins build is back to normal : beam_PostCommit_Java_ValidatesRunner_Dataflow_Gradle #1338

2018-10-24 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : beam_PostCommit_Java_ValidatesRunner_Spark_Gradle #1960

2018-10-24 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Java_ValidatesRunner_Spark_Gradle #1959

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[amyrvold] [BEAM-5288] Fix Go dataflow runner --worker_harness_container_image

--
[...truncated 30.14 MB...]
[dispatcher-event-loop-2] INFO org.apache.spark.scheduler.TaskSetManager - 
Starting task 0.0 in stage 547.0 (TID 463, localhost, executor driver, 
partition 0, PROCESS_LOCAL, 8308 bytes)
[dispatcher-event-loop-2] INFO org.apache.spark.scheduler.TaskSetManager - 
Starting task 1.0 in stage 547.0 (TID 464, localhost, executor driver, 
partition 1, PROCESS_LOCAL, 8308 bytes)
[dispatcher-event-loop-2] INFO org.apache.spark.scheduler.TaskSetManager - 
Starting task 2.0 in stage 547.0 (TID 465, localhost, executor driver, 
partition 2, PROCESS_LOCAL, 8308 bytes)
[dispatcher-event-loop-2] INFO org.apache.spark.scheduler.TaskSetManager - 
Starting task 3.0 in stage 547.0 (TID 466, localhost, executor driver, 
partition 3, PROCESS_LOCAL, 8308 bytes)
[Executor task launch worker for task 465] INFO 
org.apache.spark.executor.Executor - Running task 2.0 in stage 547.0 (TID 465)
[Executor task launch worker for task 464] INFO 
org.apache.spark.executor.Executor - Running task 1.0 in stage 547.0 (TID 464)
[Executor task launch worker for task 463] INFO 
org.apache.spark.executor.Executor - Running task 0.0 in stage 547.0 (TID 463)
[Executor task launch worker for task 466] INFO 
org.apache.spark.executor.Executor - Running task 3.0 in stage 547.0 (TID 466)
[Executor task launch worker for task 464] INFO 
org.apache.spark.storage.ShuffleBlockFetcherIterator - Getting 0 non-empty 
blocks out of 5 blocks
[Executor task launch worker for task 466] INFO 
org.apache.spark.storage.ShuffleBlockFetcherIterator - Getting 0 non-empty 
blocks out of 5 blocks
[Executor task launch worker for task 464] INFO 
org.apache.spark.storage.ShuffleBlockFetcherIterator - Started 0 remote fetches 
in 0 ms
[Executor task launch worker for task 465] INFO 
org.apache.spark.storage.ShuffleBlockFetcherIterator - Getting 0 non-empty 
blocks out of 5 blocks
[Executor task launch worker for task 466] INFO 
org.apache.spark.storage.ShuffleBlockFetcherIterator - Started 0 remote fetches 
in 0 ms
[Executor task launch worker for task 465] INFO 
org.apache.spark.storage.ShuffleBlockFetcherIterator - Started 0 remote fetches 
in 0 ms
[Executor task launch worker for task 466] INFO 
org.apache.spark.storage.BlockManager - Found block rdd_2441_3 locally
[Executor task launch worker for task 464] INFO 
org.apache.spark.storage.BlockManager - Found block rdd_2441_1 locally
[Executor task launch worker for task 465] INFO 
org.apache.spark.storage.BlockManager - Found block rdd_2441_2 locally
[Executor task launch worker for task 463] INFO 
org.apache.spark.storage.ShuffleBlockFetcherIterator - Getting 0 non-empty 
blocks out of 5 blocks
[Executor task launch worker for task 463] INFO 
org.apache.spark.storage.ShuffleBlockFetcherIterator - Started 0 remote fetches 
in 0 ms
[Executor task launch worker for task 463] INFO 
org.apache.spark.storage.BlockManager - Found block rdd_2441_0 locally
[Executor task launch worker for task 464] INFO 
org.apache.spark.storage.memory.MemoryStore - Block rdd_2756_1 stored as bytes 
in memory (estimated size 4.0 B, free 13.5 GB)
[Executor task launch worker for task 465] INFO 
org.apache.spark.storage.memory.MemoryStore - Block rdd_2756_2 stored as bytes 
in memory (estimated size 4.0 B, free 13.5 GB)
[Executor task launch worker for task 466] INFO 
org.apache.spark.storage.memory.MemoryStore - Block rdd_2756_3 stored as bytes 
in memory (estimated size 4.0 B, free 13.5 GB)
[Executor task launch worker for task 463] INFO 
org.apache.spark.storage.memory.MemoryStore - Block rdd_2756_0 stored as bytes 
in memory (estimated size 4.0 B, free 13.5 GB)
[dispatcher-event-loop-0] INFO org.apache.spark.storage.BlockManagerInfo - 
Added rdd_2756_2 in memory on localhost:45355 (size: 4.0 B, free: 13.5 GB)
[dispatcher-event-loop-0] INFO org.apache.spark.storage.BlockManagerInfo - 
Added rdd_2756_1 in memory on localhost:45355 (size: 4.0 B, free: 13.5 GB)
[dispatcher-event-loop-0] INFO org.apache.spark.storage.BlockManagerInfo - 
Added rdd_2756_3 in memory on localhost:45355 (size: 4.0 B, free: 13.5 GB)
[dispatcher-event-loop-0] INFO org.apache.spark.storage.BlockManagerInfo - 
Added rdd_2756_0 in memory on localhost:45355 (size: 4.0 B, free: 13.5 GB)
[Executor task launch worker for task 464] INFO 
org.apache.spark.executor.Executor - Finished task 1.0 in stage 547.0 (TID 
464). 59881 bytes result sent to driver
[Executor task launch worker for task 463] INFO 
org.apache.spark.executor.Executor - Finished task 0.0 in stage 547.0 (TID 
463). 59881 bytes result sent to driver
[Executor task launch worker for task 465] INFO 
org.apache.spark.

Build failed in Jenkins: beam_PostCommit_Java_GradleBuild #1749

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[scott] [BEAM-5741] Make 'Contact Us' link more visible from contributor guide.

[gleb] Upgrade gradle-spotless to 3.15.0

[gleb] Add missing package to WindowingTest

[gleb] Add licenseHeader to gradle-spotless

[mxm] [BEAM-2918] Add state support for streaming in portable FlinkRunner

[scott] Add missing build() statement to new RAT PreCommit.

[33067037+akedin] [BEAM-5807] Conversion from AVRO records to rows (#6777)

[kenn] [BEAM-5845] Split Dataflow run of examples precommit to its own job

[scott] [BEAM-5840] Validate 'docker-compose up'

--
[...truncated 51.05 MB...]
at 
io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at 
io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at 
io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:684)
at 
io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at 
io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at 
io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at 
io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:403)
at 
io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:459)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
at 
io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:546)
at 
io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:467)
at 
io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:584)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at 
io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
... 3 more

Oct 24, 2018 11:13:16 PM 
org.apache.beam.sdk.io.gcp.spanner.SpannerIO$WriteToSpannerFn processElement
WARNING: Failed to submit the mutation group
com.google.cloud.spanner.SpannerException: FAILED_PRECONDITION: 
io.grpc.StatusRuntimeException: FAILED_PRECONDITION: Value must not be NULL in 
table users.
at 
com.google.cloud.spanner.SpannerExceptionFactory.newSpannerExceptionPreformatted(SpannerExceptionFactory.java:119)
at 
com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:43)
at 
com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:80)
at 
com.google.cloud.spanner.spi.v1.GrpcSpannerRpc.get(GrpcSpannerRpc.java:456)
at 
com.google.cloud.spanner.spi.v1.GrpcSpannerRpc.commit(GrpcSpannerRpc.java:404)
at 
com.google.cloud.spanner.SpannerImpl$SessionImpl$2.call(SpannerImpl.java:797)
at 
com.google.cloud.spanner.SpannerImpl$SessionImpl$2.call(SpannerImpl.java:794)
at 
com.google.cloud.spanner.SpannerImpl.runWithRetries(SpannerImpl.java:227)
at 
com.google.cloud.spanner.SpannerImpl$SessionImpl.writeAtLeastOnce(SpannerImpl.java:793)
at 
com.google.cloud.spanner.SessionPool$PooledSession.writeAtLeastOnce(SessionPool.java:319)
at 
com.google.cloud.spanner.DatabaseClientImpl.writeAtLeastOnce(DatabaseClientImpl.java:60)
at 
org.apache.beam.sdk.io.gcp.spanner.SpannerIO$WriteToSpannerFn.processElement(SpannerIO.java:1108)
at 
org.apache.beam.sdk.io.gcp.spanner.SpannerIO$WriteToSpannerFn$DoFnInvoker.invokeProcessElement(Unknown
 Source)
at 
org.apache.beam.repackaged.beam_runners_direct_java.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:275)
at 
org.apache.beam.repackaged.beam_runners_direct_java.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:240)
at 
org.apache.beam.repackaged.beam_runners_direct_java.runners.core.SimplePushbackSideInputDoFnRunner.processElementInReadyWindows(SimplePushbackSideInputDoFnRunner.java:78)
at 
org.apache.beam.runners.direct.ParDoEvaluator.processElement(ParDoEvaluator.java:207)
at 
org.apache.beam.runners.direct.DoFnLifecycleManagerRemovingTransformEvaluator.processElement(DoFnLifecycleManagerRemovingTransformEvaluator.java:54)
at 
org.apache.beam.runners.direct.DirectTransformExecutor.processElements(DirectTransformExecutor.java:160)
at 
org.apache.beam.runners.direct.DirectTransformExecutor.run(DirectTransformExecutor.java:124)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(Thr

Jenkins build is back to normal : beam_PostCommit_Java_PVR_Flink #98

2018-10-24 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : beam_PostCommit_Python_VR_Flink #493

2018-10-24 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Java_PVR_Flink #97

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[amyrvold] [BEAM-4431] Add "Edit this Page" button to website

[scott] [BEAM-5837] Add health check prober for community metrics infrastructure

--
[...truncated 346.59 MB...]
[DataSink (org.apache.flink.api.java.io.DiscardingOutputFormat@7acd9684) 
(13/16)] INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem 
stream leak safety net for task DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@7acd9684) (13/16) 
(c069413564227d64f43f72e84c9bc6f7) [DEPLOYING]
[DataSink (org.apache.flink.api.java.io.DiscardingOutputFormat@7acd9684) 
(13/16)] INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for 
task DataSink (org.apache.flink.api.java.io.DiscardingOutputFormat@7acd9684) 
(13/16) (c069413564227d64f43f72e84c9bc6f7) [DEPLOYING].
[MapPartition (MapPartition at 
PAssert$148/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(1/16)] INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all 
FileSystem streams are closed for task MapPartition (MapPartition at 
PAssert$148/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(1/16) (9aa9d32a7072c00dacf19c4918408755) [FINISHED]
[DataSink (org.apache.flink.api.java.io.DiscardingOutputFormat@7acd9684) 
(4/16)] INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all 
FileSystem streams are closed for task DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@7acd9684) (4/16) 
(04269e7678aadb5d9b60aaa884a91a12) [FINISHED]
[jobmanager-future-thread-11] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@7acd9684) (1/16) 
(6f07af0adcb5df4a143a499748f5bfb7) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@7acd9684) (1/16) 
(6f07af0adcb5df4a143a499748f5bfb7) switched from SCHEDULED to DEPLOYING.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@7acd9684) (1/16) (attempt 
#0) to localhost
[DataSink (org.apache.flink.api.java.io.DiscardingOutputFormat@7acd9684) 
(13/16)] INFO org.apache.flink.runtime.taskmanager.Task - Registering task at 
network: DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@7acd9684) (13/16) 
(c069413564227d64f43f72e84c9bc6f7) [DEPLOYING].
[MapPartition (MapPartition at 
PAssert$148/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(5/16)] INFO org.apache.flink.runtime.taskmanager.Task - MapPartition 
(MapPartition at 
PAssert$148/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(5/16) (3d7826c03afbd65514c40e85d017de02) switched from RUNNING to FINISHED.
[MapPartition (MapPartition at 
PAssert$148/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(5/16)] INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources 
for MapPartition (MapPartition at 
PAssert$148/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(5/16) (3d7826c03afbd65514c40e85d017de02).
[DataSink (org.apache.flink.api.java.io.DiscardingOutputFormat@7acd9684) 
(13/16)] INFO org.apache.flink.runtime.taskmanager.Task - DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@7acd9684) (13/16) 
(c069413564227d64f43f72e84c9bc6f7) switched from DEPLOYING to RUNNING.
[jobmanager-future-thread-1] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@7acd9684) (5/16) 
(f208b961a304175b5624fa83d3fff8ac) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-7] INFO 
org.apache.flink.runtime.taskexecutor.TaskExecutor - Received task DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@7acd9684) (16/16).
[MapPartition (MapPartition at 
PAssert$148/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(5/16)] INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all 
FileSystem streams are closed for task MapPartition (MapPartition at 
PAssert$148/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(5/16) (3d7826c03afbd65514c40e85d017de02) [FINISHED]
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@7acd9684) (13/16) 
(c069413564227d64f43f72e84c9bc6f7) switched from DEPLOYING to RUNNING.
[flink-akka.actor.default-dispatcher-7] INFO 
org.apache.flink.runtime.taskexecutor.TaskExecutor - Un-registering task and 
sending final execution state FINISHED to JobManager for task MapPartition 
(MapPartition at 
PAssert$148/GroupGlobally/GroupDummyAndC

Build failed in Jenkins: beam_PostCommit_Python_VR_Flink #492

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[scott] [BEAM-5837] Add health check prober for community metrics infrastructure

--
[...truncated 262.59 KB...]
Note: Recompile with -Xlint:unchecked for details.
Created classpath snapshot for incremental compilation in 0.824 secs. 415 
duplicate classes found in classpath (see all with --debug).
Packing task ':beam-sdks-java-harness:compileJava'
:beam-sdks-java-harness:compileJava (Thread[Task worker for ':' Thread 
7,5,main]) completed. Took 8.014 secs.
:beam-sdks-java-harness:classes (Thread[Task worker for ':' Thread 7,5,main]) 
started.

> Task :beam-sdks-java-harness:classes
Skipping task ':beam-sdks-java-harness:classes' as it has no actions.
:beam-sdks-java-harness:classes (Thread[Task worker for ':' Thread 7,5,main]) 
completed. Took 0.0 secs.
:beam-sdks-java-harness:jar (Thread[Task worker for ':' Thread 7,5,main]) 
started.

> Task :beam-sdks-java-harness:jar
Build cache key for task ':beam-sdks-java-harness:jar' is 
05a8b416f59b8244e5b470e02702d2fb
Caching disabled for task ':beam-sdks-java-harness:jar': Caching has not been 
enabled for the task
Task ':beam-sdks-java-harness:jar' is not up-to-date because:
  No history is available.
:beam-sdks-java-harness:jar (Thread[Task worker for ':' Thread 7,5,main]) 
completed. Took 0.042 secs.
:beam-sdks-java-harness:shadowJar (Thread[Task worker for ':' Thread 7,5,main]) 
started.

> Task :beam-sdks-python-container:buildLinuxAmd64
Build cache key for task ':beam-sdks-python-container:buildLinuxAmd64' is 
8b3c1dec1f894d25b8482e148deb094a
Caching disabled for task ':beam-sdks-python-container:buildLinuxAmd64': 
Caching has not been enabled for the task
Task ':beam-sdks-python-container:buildLinuxAmd64' is not up-to-date because:
  No history is available.
:beam-sdks-python-container:buildLinuxAmd64 (Thread[Task worker for ':' Thread 
6,5,main]) completed. Took 3.025 secs.
:beam-sdks-python-container:build (Thread[Task worker for ':' Thread 6,5,main]) 
started.

> Task :beam-sdks-python-container:build
Caching disabled for task ':beam-sdks-python-container:build': Caching has not 
been enabled for the task
Task ':beam-sdks-python-container:build' is not up-to-date because:
  Task has not declared any outputs despite executing actions.
:beam-sdks-python-container:build (Thread[Task worker for ':' Thread 6,5,main]) 
completed. Took 0.0 secs.
:beam-sdks-python-container:copyDockerfileDependencies (Thread[Task worker for 
':' Thread 6,5,main]) started.

> Task :beam-sdks-python-container:copyDockerfileDependencies
Build cache key for task 
':beam-sdks-python-container:copyDockerfileDependencies' is 
1213041a9569e8a6a16f48838ddf1ef1
Caching disabled for task 
':beam-sdks-python-container:copyDockerfileDependencies': Caching has not been 
enabled for the task
Task ':beam-sdks-python-container:copyDockerfileDependencies' is not up-to-date 
because:
  No history is available.
:beam-sdks-python-container:copyDockerfileDependencies (Thread[Task worker for 
':' Thread 6,5,main]) completed. Took 0.037 secs.
:beam-sdks-python-container:dockerPrepare (Thread[Task worker for ':' Thread 
6,5,main]) started.

> Task :beam-sdks-python-container:dockerPrepare
Build cache key for task ':beam-sdks-python-container:dockerPrepare' is 
3eeea70c627d157069808e28aa42f609
Caching disabled for task ':beam-sdks-python-container:dockerPrepare': Caching 
has not been enabled for the task
Task ':beam-sdks-python-container:dockerPrepare' is not up-to-date because:
  No history is available.
:beam-sdks-python-container:dockerPrepare (Thread[Task worker for ':' Thread 
6,5,main]) completed. Took 0.099 secs.
:beam-sdks-python-container:docker (Thread[Task worker for ':' Thread 
6,5,main]) started.

> Task :beam-sdks-java-harness:shadowJar
Build cache key for task ':beam-sdks-java-harness:shadowJar' is 
b3ad4679192c5b0e9b8a8f13824e5895
Caching disabled for task ':beam-sdks-java-harness:shadowJar': Caching has not 
been enabled for the task
Task ':beam-sdks-java-harness:shadowJar' is not up-to-date because:
  No history is available.

> Task :beam-sdks-python-container:docker FAILED
Caching disabled for task ':beam-sdks-python-container:docker': Caching has not 
been enabled for the task
Task ':beam-sdks-python-container:docker' is not up-to-date because:
  Task has not declared any outputs despite executing actions.
Starting process 'command 'docker''. Working directory: 

 Command: docker build --no-cache -t 
jenkins-docker-apache.bintray.io/beam/python:latest .
Successfully started process 'command 'docker''
Sending build context to Docker daemon  17.65MB
Step 1/9 : FROM python:2-stretch
2-stretch: Pulling from library/python
Digest: sha256:629868162ed980d9b24bf3bf0699cb300b368e8b144be9fc43a6a4740cc02561
Status: Downloaded newer image

Build failed in Jenkins: beam_PostCommit_Python_VR_Flink #491

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[amyrvold] [BEAM-4431] Add "Edit this Page" button to website

--
[...truncated 4.38 MB...]
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - 
http://localhost:44555 was granted leadership with 
leaderSessionID=4d8c0ed4-77a8-4b64-9de1-5ee876c86dec
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader http://localhost:44555 , 
session=4d8c0ed4-77a8-4b64-9de1-5ee876c86dec
[flink-runner-job-server] INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService 
- Starting RPC endpoint for 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher at 
akka://flink/user/dispatcher7c055423-99cd-4f51-ac4a-152f21131641 .
[flink-runner-job-server] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher@3fe6450 @ 
akka://flink/user/dispatcher7c055423-99cd-4f51-ac4a-152f21131641
[flink-runner-job-server] INFO org.apache.flink.runtime.minicluster.MiniCluster 
- Flink Mini Cluster started successfully
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Dispatcher 
akka://flink/user/dispatcher7c055423-99cd-4f51-ac4a-152f21131641 was granted 
leadership with fencing token 41079882-3cb8-4abe-91ba-2faedf508138
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Recovering all 
persisted jobs.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader 
akka://flink/user/dispatcher7c055423-99cd-4f51-ac4a-152f21131641 , 
session=41079882-3cb8-4abe-91ba-2faedf508138
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Submitting job 
14ed18f54049a5a6aca3a22ebb062f52 (test_windowing_1540413442.58).
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for 
org.apache.flink.runtime.jobmaster.JobMaster at akka://flink/user/jobmanager_41 
.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Initializing job 
test_windowing_1540413442.58 (14ed18f54049a5a6aca3a22ebb062f52).
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Using restart strategy 
NoRestartStrategy for test_windowing_1540413442.58 
(14ed18f54049a5a6aca3a22ebb062f52).
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool at 
akka://flink/user/590aeb49-13df-4e55-ace0-0dc85e6a7ca0 .
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job recovers via 
failover strategy: full graph restart
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Running initialization on master 
for job test_windowing_1540413442.58 (14ed18f54049a5a6aca3a22ebb062f52).
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Successfully ran initialization 
on master in 0 ms.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - No state backend has been 
configured, using default (Memory / JobManager) MemoryStateBackend (data in 
heap memory / checkpoints to JobManager) (checkpoints: 'null', savepoints: 
'null', asynchronous: TRUE, maxStateSize: 5242880)
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.jobmaster.JobManagerRunner@617f2bd0 @ 
akka://flink/user/jobmanager_41
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobManagerRunner - JobManager runner for job 
test_windowing_1540413442.58 (14ed18f54049a5a6aca3a22ebb062f52) was granted 
leadership with session id 813b94a1-0cd4-435a-9058-854191377335 at 
akka://flink/user/jobmanager_41.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Starting execution of job 
test_windowing_1540413442.58 (14ed18f54049a5a6aca3a22ebb062f52)
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job 
test_windowing_1540413442.58 (14ed18f54049a5a6aca3a22ebb062f52) switched from 
state CREATED to RUNNING.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 14Create/Impulse.None/beam:env:do

Jenkins build is back to normal : beam_PostCommit_Python_Verify #6365

2018-10-24 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Java_ValidatesRunner_Dataflow_Gradle #1337

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[thw] [BEAM-5848] Fix coder for streaming impulse source.

[gleb] Upgrade gradle-spotless to 3.15.0

[gleb] Add missing package to WindowingTest

[gleb] Add licenseHeader to gradle-spotless

[mxm] [BEAM-2918] Add state support for streaming in portable FlinkRunner

[33067037+akedin] [BEAM-5807] Conversion from AVRO records to rows (#6777)

--
[...truncated 20.47 MB...]
INFO: Adding Create123/Read(CreateSource) as step s10
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding OutputSideInputs as step s11
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding PAssert$33/GroupGlobally/Window.Into()/Window.Assign as step 
s12
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding 
PAssert$33/GroupGlobally/GatherAllOutputs/Reify.Window/ParDo(Anonymous) as step 
s13
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding PAssert$33/GroupGlobally/GatherAllOutputs/WithKeys/AddKeys/Map 
as step s14
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding 
PAssert$33/GroupGlobally/GatherAllOutputs/Window.Into()/Window.Assign as step 
s15
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding PAssert$33/GroupGlobally/GatherAllOutputs/GroupByKey as step 
s16
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding PAssert$33/GroupGlobally/GatherAllOutputs/Values/Values/Map as 
step s17
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding PAssert$33/GroupGlobally/RewindowActuals/Window.Assign as step 
s18
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding PAssert$33/GroupGlobally/KeyForDummy/AddKeys/Map as step s19
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding 
PAssert$33/GroupGlobally/RemoveActualsTriggering/Flatten.PCollections as step 
s20
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding PAssert$33/GroupGlobally/Create.Values/Read(CreateSource) as 
step s21
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding PAssert$33/GroupGlobally/WindowIntoDummy/Window.Assign as step 
s22
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding 
PAssert$33/GroupGlobally/RemoveDummyTriggering/Flatten.PCollections as step s23
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding PAssert$33/GroupGlobally/FlattenDummyAndContents as step s24
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding PAssert$33/GroupGlobally/NeverTrigger/Flatten.PCollections as 
step s25
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding PAssert$33/GroupGlobally/GroupDummyAndContents as step s26
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding PAssert$33/GroupGlobally/Values/Values/Map as step s27
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding PAssert$33/GroupGlobally/ParDo(Concat) as step s28
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding PAssert$33/GetPane/Map as step s29
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding PAssert$33/RunChecks as step s30
Oct 24, 2018 8:25:06 PM 
org.apache.beam.runners.dataflow.DataflowPipelineTranslator$Translator addStep
INFO: Adding PAssert$33/VerifyAssertions/ParDo(DefaultConclude) as step s31
Oct 24, 2018 8:25:06 PM org.apache.beam.runners.dataflow.DataflowRunner run
INFO: Staging pipeline description to 
gs://temp-storage-for-validates-runner-tests//viewtest0testsingletonsideinput-jenkins-1024202458-9666bee8/output/results/staging/
Oct 24, 2018 8:25:06 PM org.apache.beam.runners.dataflow.util.PackageUtil 
tryStagePackage
INFO: Uploading <70577 bytes, hash 2xTMrYn-QRHxcVqHFaWRJg> to 
gs://temp-s

Jenkins build is back to normal : beam_PostCommit_Java_ValidatesRunner_Samza_Gradle #1045

2018-10-24 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : beam_PostCommit_Py_VR_Dataflow #1492

2018-10-24 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Java_ValidatesRunner_Samza_Gradle #1044

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[amyrvold] [BEAM-4431] Add "Edit this Page" button to website

--
[...truncated 82.93 KB...]
:beam-sdks-java-core:compileTestJava (Thread[Task worker for ':' Thread 
2,5,main]) started.

> Task :beam-sdks-java-extensions-google-cloud-platform-core:compileJava
Build cache key for task 
':beam-sdks-java-extensions-google-cloud-platform-core:compileJava' is 
5a82a22a282b01cea039c122e9fbee6f
Task ':beam-sdks-java-extensions-google-cloud-platform-core:compileJava' is not 
up-to-date because:
  No history is available.
Custom actions are attached to task 
':beam-sdks-java-extensions-google-cloud-platform-core:compileJava'.
All input files are considered out-of-date for incremental task 
':beam-sdks-java-extensions-google-cloud-platform-core:compileJava'.
Full recompilation is required because no incremental change information is 
available. This is usually caused by clean builds or changing compiler 
arguments.
Compiling with error-prone compiler

> Task :beam-model-job-management:compileJava
Created classpath snapshot for incremental compilation in 0.006 secs. 39 
duplicate classes found in classpath (see all with --debug).
Packing task ':beam-model-job-management:compileJava'
:beam-model-job-management:compileJava (Thread[Task worker for ':' Thread 
3,5,main]) completed. Took 9.783 secs.
:beam-model-job-management:classes (Thread[Task worker for ':' Thread 
3,5,main]) started.

> Task :beam-model-job-management:classes
Skipping task ':beam-model-job-management:classes' as it has no actions.
:beam-model-job-management:classes (Thread[Task worker for ':' Thread 
3,5,main]) completed. Took 0.0 secs.
:beam-model-job-management:shadowJar (Thread[Task worker for ':' Thread 
11,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:compileJava
file or directory 
'
 not found
Build cache key for task 
':beam-vendor-sdks-java-extensions-protobuf:compileJava' is 
077ff2170deb0b2ead62e57c07478bdc
Task ':beam-vendor-sdks-java-extensions-protobuf:compileJava' is not up-to-date 
because:
  No history is available.
Custom actions are attached to task 
':beam-vendor-sdks-java-extensions-protobuf:compileJava'.
All input files are considered out-of-date for incremental task 
':beam-vendor-sdks-java-extensions-protobuf:compileJava'.
Full recompilation is required because no incremental change information is 
available. This is usually caused by clean builds or changing compiler 
arguments.
file or directory 
'
 not found
Compiling with error-prone compiler

> Task :beam-model-job-management:shadowJar
Build cache key for task ':beam-model-job-management:shadowJar' is 
0ec83abee9d42248cd7671ab7893f5b5
Caching disabled for task ':beam-model-job-management:shadowJar': Caching has 
not been enabled for the task
Task ':beam-model-job-management:shadowJar' is not up-to-date because:
  No history is available.
***
GRADLE SHADOW STATS

Total Jars: 1 (includes project)
Total Time: 0.0s [0ms]
Average Time/Jar: 0.0s [0.0ms]
***
:beam-model-job-management:shadowJar (Thread[Task worker for ':' Thread 
11,5,main]) completed. Took 2.596 secs.
:beam-runners-core-construction-java:compileJava (Thread[Task worker for ':' 
Thread 11,5,main]) started.

> Task :beam-sdks-java-core:compileTestJava
Build cache key for task ':beam-sdks-java-core:compileTestJava' is 
7e97030d595baee675b53b0d6e35c722
Task ':beam-sdks-java-core:compileTestJava' is not up-to-date because:
  No history is available.
Custom actions are attached to task ':beam-sdks-java-core:compileTestJava'.
All input files are considered out-of-date for incremental task 
':beam-sdks-java-core:compileTestJava'.
Full recompilation is required because no incremental change information is 
available. This is usually caused by clean builds or changing compiler 
arguments.
Compiling with error-prone compiler

> Task :beam-model-fn-execution:compileJava
Created classpath snapshot for incremental compilation in 0.006 secs. 39 
duplicate classes found in classpath (see all with --debug).
Packing task ':beam-model-fn-execution:compileJava'
:beam-model-fn-execution:compileJava (Thread[Task worker for ':' Thread 
4,5,main]) completed. Took 13.693 secs.
:beam-model-fn-execution:classes (Thread[Task worker for ':' Thread 4,5,main]) 
started.

> Task :beam-model-fn-execution:classes
Skipping task ':beam-model-fn-execution:classes' as it has no actions.
:beam-model-fn-execution:classes (Thread[Task worker for ':' Thread 4,5,main]) 
completed. Took 0.0 secs.
:beam-model-fn-execution:shadowJar (Thread[Task worker for 

Build failed in Jenkins: beam_PreCommit_Java_Cron #502

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[scott] [BEAM-5741] Make 'Contact Us' link more visible from contributor guide.

[thw] [BEAM-5848] Fix coder for streaming impulse source.

[gleb] Upgrade gradle-spotless to 3.15.0

[gleb] Add missing package to WindowingTest

[gleb] Add licenseHeader to gradle-spotless

[mxm] [BEAM-2918] Add state support for streaming in portable FlinkRunner

[scott] Add missing build() statement to new RAT PreCommit.

[33067037+akedin] [BEAM-5807] Conversion from AVRO records to rows (#6777)

--
[...truncated 45.74 MB...]
  No history is available.
All input files are considered out-of-date for incremental task 
':beam-sdks-java-io-hadoop-common:spotlessJava'.
:beam-sdks-java-io-hadoop-common:spotlessJava (Thread[Task worker for ':' 
Thread 9,5,main]) completed. Took 0.039 secs.
:beam-sdks-java-io-hadoop-common:spotlessJavaCheck (Thread[Task worker for ':' 
Thread 9,5,main]) started.

> Task :beam-sdks-java-io-hadoop-common:spotlessJavaCheck
Skipping task ':beam-sdks-java-io-hadoop-common:spotlessJavaCheck' as it has no 
actions.
:beam-sdks-java-io-hadoop-common:spotlessJavaCheck (Thread[Task worker for ':' 
Thread 9,5,main]) completed. Took 0.0 secs.
:beam-sdks-java-io-hadoop-common:spotlessCheck (Thread[Task worker for ':' 
Thread 9,5,main]) started.

> Task :beam-sdks-java-io-hadoop-common:spotlessCheck
Skipping task ':beam-sdks-java-io-hadoop-common:spotlessCheck' as it has no 
actions.
:beam-sdks-java-io-hadoop-common:spotlessCheck (Thread[Task worker for ':' 
Thread 9,5,main]) completed. Took 0.0 secs.
:beam-sdks-java-io-hadoop-common:test (Thread[Task worker for ':' Thread 
9,5,main]) started.
Gradle Test Executor 144 started executing tests.
Gradle Test Executor 144 finished executing tests.

> Task :beam-sdks-java-io-hadoop-common:test
Build cache key for task ':beam-sdks-java-io-hadoop-common:test' is 
162ac4ae02cbd1d87a0d56a8752b909b
Task ':beam-sdks-java-io-hadoop-common:test' is not up-to-date because:
  No history is available.
Starting process 'Gradle Test Executor 144'. Working directory: 

 Command: /usr/local/asfpackages/java/jdk1.8.0_172/bin/java 
-Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager
 -Dorg.gradle.native=false -Dfile.encoding=UTF-8 -Duser.country=US 
-Duser.language=en -Duser.variant -ea -cp 
/home/jenkins/.gradle/caches/4.10.2/workerMain/gradle-worker.jar 
worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test 
Executor 144'
Successfully started process 'Gradle Test Executor 144'

org.apache.beam.sdk.io.hadoop.WritableCoderTest > 
testAutomaticRegistrationOfCoderProvider STANDARD_ERROR
log4j:WARN No appenders could be found for logger 
(org.apache.beam.sdk.coders.CoderRegistry).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for 
more info.
Finished generating test XML results (0.001 secs) into: 

Generating HTML test report...
Finished generating test html results (0.001 secs) into: 

Packing task ':beam-sdks-java-io-hadoop-common:test'
:beam-sdks-java-io-hadoop-common:test (Thread[Task worker for ':' Thread 
9,5,main]) completed. Took 1.285 secs.
:beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses
 (Thread[Task worker for ':' Thread 9,5,main]) started.

> Task 
> :beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses
Caching disabled for task 
':beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses':
 Caching has not been enabled for the task
Task 
':beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses'
 is not up-to-date because:
  Task has not declared any outputs despite executing actions.
:beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses
 (Thread[Task worker for ':' Thread 9,5,main]) completed. Took 0.001 secs.
:beam-sdks-java-io-hadoop-common:check (Thread[Task worker for ':' Thread 
9,5,main]) started.

> Task :beam-sdks-java-io-hadoop-common:check
Skipping task ':beam-sdks-java-io-hadoop-common:check' as it has no actions.
:beam-sdks-java-io-hadoop-common:check (Thread[Task worker for ':' Thread 
9,5,main]) completed. Took 0.0 secs.
:beam-sdks-java-io-hadoop-common:build (Thread[Task worker for ':' Thread 
9,5,main]) started.

> Task :beam-sdks-java-io-hadoop-common:build
Skipping task ':beam-sdks-java-io-hadoop-common:build' as it has no actions.
:beam-sdks-java-io-hadoop-common:build (Thread[Task worker for ':' Thread

Build failed in Jenkins: beam_PostCommit_Py_VR_Dataflow #1491

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[kenn] [BEAM-5845] Split Dataflow run of examples precommit to its own job

--
[...truncated 51.56 KB...]
  File 
"
 line 571, in test_flatten_multiple_pcollections_having_multiple_consumers
pipeline.run()
  File 
"
 line 107, in run
else test_runner_api))
  File 
"
 line 403, in run
self.to_runner_api(), self.runner, self._options).run(False)
  File 
"
 line 416, in run
return self.runner.run_pipeline(self)
  File 
"
 line 50, in run_pipeline
self.result = super(TestDataflowRunner, self).run_pipeline(pipeline)
  File 
"
 line 402, in run_pipeline
self.dataflow_client.create_job(self.job), self)
  File 
"
 line 184, in wrapper
return fun(*args, **kwargs)
  File 
"
 line 490, in create_job
self.create_job_description(job)
  File 
"
 line 519, in create_job_description
resources = self._stage_resources(job.options)
  File 
"
 line 452, in _stage_resources
staging_location=google_cloud_options.staging_location)
  File 
"
 line 161, in stage_job_resources
requirements_cache_path)
  File 
"
 line 419, in _populate_requirements_cache
processes.check_output(cmd_args)
  File 
"
 line 52, in check_output
return subprocess.check_output(*args, **kwargs)
  File "/usr/lib/python2.7/subprocess.py", line 574, in check_output
raise CalledProcessError(retcode, cmd, output=output)
CalledProcessError: Command 
'['
 '-m', 'pip', 'download', '--dest', '/tmp/dataflow-requirements-cache', '-r', 
'postcommit_requirements.txt', '--exists-action', 'i', '--no-binary', ':all:']' 
returned non-zero exit status 1
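For context, the `CalledProcessError` in the traceback above is the standard behavior of `subprocess.check_output`: when the child process (here, the `pip download` invocation) exits non-zero, the exception carries the return code. A minimal sketch of that mechanism (the `run_or_report` helper is illustrative, not Beam's actual `processes.check_output` wrapper):

```python
import subprocess
import sys

def run_or_report(cmd_args):
    """Run a command; return its output, or a message on non-zero exit.

    subprocess.check_output raises CalledProcessError (as in the
    traceback above) whenever the child process exits non-zero.
    """
    try:
        return subprocess.check_output(cmd_args).decode()
    except subprocess.CalledProcessError as exc:
        return "command failed with exit status %d" % exc.returncode

# A child that exits 1, like the failing `pip download` invocation:
print(run_or_report([sys.executable, "-c", "import sys; sys.exit(1)"]))
```

The wrapper seen in the log stack (`processes.check_output`) simply delegates to `subprocess.check_output`, so the exception surfaces unchanged to the Dataflow staging code.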

--
XML: 

--
Ran 16 tests in 1042.523s

FAILED (errors=1)

> Task :beam-sdks-python:validatesRunnerBatchTests FAILED
:beam-sdks-python:validatesRunnerBatchTests (Thread[Task worker for 
':',5,main]) completed. Took 17 mins 23.997 secs.
:beam-sdks-python:validatesRunnerStreamingTests (Thread[Task worker for 
':',5,main]) started.

> Task :beam-sdks-python:validatesRunnerStreamingTests
Caching disabled for task ':beam-sdks-python:validatesRunnerStreamingTests': 
Caching has not been enabled for the task
Task ':beam-sdks-python:validatesRunnerStreamingTests' is not up-to-date 
because:
  Task has not declared any outputs despite executing actions.
Starting process 'command 'sh''. Working directory: 

 Command: sh -c . 

 && ./scripts/run_postcommit.sh ValidatesRunner,'!sickbay-streaming' streaming
Successfully started process 'command 'sh''


###
# Build tarball and set pipeline options.

# Check that the script is running in a known directory.
if [[ $PWD != *sdks/python* ]]; then
  echo 'Unable to locate Apache Beam Python SDK root directory'
  e
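The truncated shell fragment above is a working-directory guard from `run_postcommit.sh`: it aborts unless the script runs under `sdks/python`. A hedged Python equivalent of that check (function name and paths are assumptions for illustration, not part of Beam's scripts):

```python
import os

def check_sdk_root(cwd):
    """Mirror of the shell guard above: fail fast unless the current
    directory is somewhere under sdks/python."""
    if "sdks/python" not in cwd.replace(os.sep, "/"):
        return "Unable to locate Apache Beam Python SDK root directory"
    return "ok"

# Example: a Jenkins workspace path that satisfies the guard.
print(check_sdk_root("/home/jenkins/workspace/src/sdks/python/container"))
```

Failing fast here keeps a misconfigured checkout from running the whole ValidatesRunner suite against the wrong tree.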

Build failed in Jenkins: beam_PostCommit_Python_Verify #6364

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[scott] [BEAM-5741] Make 'Contact Us' link more visible from contributor guide.

[gleb] Upgrade gradle-spotless to 3.15.0

[gleb] Add missing package to WindowingTest

[gleb] Add licenseHeader to gradle-spotless

[scott] Add missing build() statement to new RAT PreCommit.

[33067037+akedin] [BEAM-5807] Conversion from AVRO records to rows (#6777)

--
[...truncated 1.28 MB...]
ok
test_delete_dir (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_delete_error (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_delete_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_exists (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_directory 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_directory_trailing_slash 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_match_file_empty 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file_error 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file_with_limits 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
:276:
 DeprecationWarning: Please use assertEqual instead.
  self.assertEquals(len(files), 1)
ok
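The `DeprecationWarning` above is a one-word fix: `assertEquals` has been a deprecated alias of `assertEqual` since Python 2.7. A hypothetical reduction of the flagged test (the file list is a stand-in, not the real fixture):

```python
import unittest

class FileCountTest(unittest.TestCase):
    """Hypothetical reduction of the flagged HadoopFileSystem test."""

    def test_match_file_with_limits(self):
        files = ["part-00000"]           # stand-in for the matched files
        # was: self.assertEquals(len(files), 1)  -- deprecated alias
        self.assertEqual(len(files), 1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(FileCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("ok" if result.wasSuccessful() else "failed")
```

Renaming the call silences the warning without changing test behavior, since the alias dispatches to the same assertion.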
test_match_file_with_zero_limit 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_mkdirs (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_mkdirs_failed (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_open (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_open_bad_path (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_rename_directory 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_rename_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_rename_file_error 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... SKIP: This test 
still needs to be fixed on Python 3. TODO: BEAM-5627
test_scheme (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_size (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_url_join (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_url_split (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_create_bq_dataset (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_delete_bq_dataset (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_delete_table_fails_not_found 
(apache_beam.io.gcp.tests.utils_test.UtilsTest) ... SKIP: Bigquery dependencies 
are not installed.
test_delete_table_succeeds (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_big_query_legacy_sql 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_big_query_new_types 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_big_query_standard_sql 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_bigquery_read_1M_python 
(apache_beam.io.gcp.bigquery_io_read_it_test.BigqueryIOReadIT) ... SKIP: IT is 
skipped because --test-pipeline-options is not specified
get_test_rows (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: 
GCP dependencies are not installed
test_read_from_query (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... 
SKIP: GCP dependencies are not installed
test_read_from_query_sql_format 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_query_unflatten_records 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... 
SKIP: GCP dependencies are not installed
test_read_from_table_and_job_complete_retry 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table_and_multiple_pages 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table_as_tablerows 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_table_schema_wit

Build failed in Jenkins: beam_PerformanceTests_XmlIOIT #914

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[scott] [BEAM-5741] Make 'Contact Us' link more visible from contributor guide.

[thw] [BEAM-5848] Fix coder for streaming impulse source.

[gleb] Upgrade gradle-spotless to 3.15.0

[gleb] Add missing package to WindowingTest

[gleb] Add licenseHeader to gradle-spotless

[mxm] [BEAM-2918] Add state support for streaming in portable FlinkRunner

[scott] Add missing build() statement to new RAT PreCommit.

[33067037+akedin] [BEAM-5807] Conversion from AVRO records to rows (#6777)

--
[...truncated 277.83 KB...]
INFO: 2018-10-24T18:15:19.937Z: Fusing consumer Write xml 
files/WriteFiles/GatherTempFileResults/Reshuffle/GroupByKey/GroupByWindow into 
Write xml files/WriteFiles/GatherTempFileResults/Reshuffle/GroupByKey/Read
Oct 24, 2018 6:15:24 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-24T18:15:19.978Z: Fusing consumer Write xml 
files/WriteFiles/GatherTempFileResults/Reshuffle/GroupByKey/Reify into Write 
xml files/WriteFiles/GatherTempFileResults/Reshuffle/Window.Into()/Window.Assign
Oct 24, 2018 6:15:24 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-24T18:15:20.023Z: Fusing consumer Write xml 
files/WriteFiles/FinalizeTempFileBundles/Finalize into Write xml 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Values/Values/Map
Oct 24, 2018 6:15:24 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-24T18:15:20.067Z: Fusing consumer Write xml 
files/WriteFiles/FinalizeTempFileBundles/Reshuffle.ViaRandomKey/Pair with 
random key into Write xml files/WriteFiles/FinalizeTempFileBundles/Finalize
Oct 24, 2018 6:15:24 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-24T18:15:20.114Z: Fusing consumer Read xml 
files/ReadAllViaFileBasedSource/Read ranges into Read xml 
files/ReadAllViaFileBasedSource/Reshuffle/Values/Values/Map
Oct 24, 2018 6:15:24 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-24T18:15:20.166Z: Fusing consumer Get file names/Values/Map 
into Write xml 
files/WriteFiles/FinalizeTempFileBundles/Reshuffle.ViaRandomKey/Values/Values/Map
Oct 24, 2018 6:15:24 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-24T18:15:20.457Z: Fusing consumer Write xml 
files/WriteFiles/FinalizeTempFileBundles/Reshuffle.ViaRandomKey/Values/Values/Map
 into Write xml 
files/WriteFiles/FinalizeTempFileBundles/Reshuffle.ViaRandomKey/Reshuffle/ExpandIterable
Oct 24, 2018 6:15:24 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-24T18:15:20.523Z: Fusing consumer Write xml 
files/WriteFiles/GatherTempFileResults/Reshuffle/GroupByKey/Write into Write 
xml files/WriteFiles/GatherTempFileResults/Reshuffle/GroupByKey/Reify
Oct 24, 2018 6:15:24 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-24T18:15:20.572Z: Fusing consumer Read xml 
files/ReadAllViaFileBasedSource/Reshuffle/Reshuffle/GroupByKey/Write into Read 
xml files/ReadAllViaFileBasedSource/Reshuffle/Reshuffle/GroupByKey/Reify
Oct 24, 2018 6:15:24 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-24T18:15:20.633Z: Fusing consumer Read xml 
files/ReadAllViaFileBasedSource/Reshuffle/Reshuffle/GroupByKey/Reify into Read 
xml 
files/ReadAllViaFileBasedSource/Reshuffle/Reshuffle/Window.Into()/Window.Assign
Oct 24, 2018 6:15:24 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-24T18:15:20.683Z: Fusing consumer Read xml 
files/ReadAllViaFileBasedSource/Reshuffle/Values/Values/Map into Read xml 
files/ReadAllViaFileBasedSource/Reshuffle/Reshuffle/ExpandIterable
Oct 24, 2018 6:15:24 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-24T18:15:20.788Z: Fusing consumer Write xml 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Reshuffle/ExpandIterable
 into Write xml 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Reshuffle/GroupByKey/GroupByWindow
Oct 24, 2018 6:15:24 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-24T18:15:20.831Z: Fusing consumer Write xml 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Reshuffle/GroupByKey/GroupByWindow
 into Write xml 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Reshuffle/GroupByKey/Read
Oct 24, 2018 6:15:24 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-24T18:15:20.880Z: Fusing consumer Read xml 
files/ReadAllViaFileBasedSource/Reshuffle/Reshu

Jenkins build is back to normal : beam_PerformanceTests_Python #1598

2018-10-24 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Py_VR_Dataflow #1490

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[scott] [BEAM-5741] Make 'Contact Us' link more visible from contributor guide.

[scott] Add missing build() statement to new RAT PreCommit.

--
[...truncated 54.99 KB...]
  File 
"
 line 396, in _MakeRequestNoRetry
redirections=redirections, connection_type=connection_type)
  File 
"
 line 175, in new_request
redirections, connection_type)
  File 
"
 line 282, in request
connection_type=connection_type)
  File 
"
 line 1694, in request
(response, content) = self._request(conn, authority, uri, request_uri, 
method, body, headers, redirections, cachekey)
  File 
"
 line 1434, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, 
headers)
  File 
"
 line 1390, in _conn_request
response = conn.getresponse()
  File "/usr/lib/python2.7/httplib.py", line 1136, in getresponse
response.begin()
  File "/usr/lib/python2.7/httplib.py", line 453, in begin
version, status, reason = self._read_status()
  File "/usr/lib/python2.7/httplib.py", line 409, in _read_status
line = self.fp.readline(_MAXLINE + 1)
  File "/usr/lib/python2.7/socket.py", line 480, in readline
data = self._sock.recv(self._rbufsize)
  File "/usr/lib/python2.7/ssl.py", line 756, in recv
return self.read(buflen)
  File "/usr/lib/python2.7/ssl.py", line 643, in read
v = self._sslobj.read(len)
  File 
"
 line 276, in signalhandler
raise TimedOutException()
TimedOutException: 'test_par_do_with_multiple_outputs_and_using_yield 
(apache_beam.transforms.ptransform_test.PTransformTest)'

--
XML: 

--
Ran 16 tests in 3002.389s

FAILED (errors=1)

> Task :beam-sdks-python:validatesRunnerBatchTests FAILED
:beam-sdks-python:validatesRunnerBatchTests (Thread[Task worker for 
':',5,main]) completed. Took 50 mins 4.062 secs.
:beam-sdks-python:validatesRunnerStreamingTests (Thread[Task worker for 
':',5,main]) started.

> Task :beam-sdks-python:validatesRunnerStreamingTests
Caching disabled for task ':beam-sdks-python:validatesRunnerStreamingTests': 
Caching has not been enabled for the task
Task ':beam-sdks-python:validatesRunnerStreamingTests' is not up-to-date 
because:
  Task has not declared any outputs despite executing actions.
Starting process 'command 'sh''. Working directory: 

 Command: sh -c . 

 && ./scripts/run_postcommit.sh ValidatesRunner,'!sickbay-streaming' streaming
Successfully started process 'command 'sh''


###
# Build tarball and set pipeline options.

# Check that the script is running in a known directory.
if [[ $PWD != *sdks/python* ]]; then
  echo 'Unable to locate Apache Beam Python SDK root directory'
  exit 1
fi

# Go to the Apache Beam Python SDK root
if [[ "*sdks/python" != $PWD ]]; then
  cd $(pwd | sed 's/sdks\/python.*/sdks\/python/')
fi
pwd | sed 's/sdks\/python.*/sdks\/python/'

RUNNER=${3:-TestDataflowRunner}

# Where to store integration test outputs.
GCS_LOCATION=${4:-gs://temp-storage-for-end-to-end-tests}

PROJECT=${5:-apache-beam-testing}

# Create a tarball
python setup.py -q sdist
:398:
 UserWarning: Normalizing '2.9.0.dev' to '2.9.0.dev0'
  normalized_version,
w

Jenkins build is back to normal : beam_PostCommit_Python_VR_Flink #488

2018-10-24 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PreCommit_Java_Examples_Dataflow_Cron #3

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[scott] [BEAM-5741] Make 'Contact Us' link more visible from contributor guide.

[thw] [BEAM-5848] Fix coder for streaming impulse source.

[gleb] Upgrade gradle-spotless to 3.15.0

[gleb] Add missing package to WindowingTest

[gleb] Add licenseHeader to gradle-spotless

[mxm] [BEAM-2918] Add state support for streaming in portable FlinkRunner

[scott] Add missing build() statement to new RAT PreCommit.

[33067037+akedin] [BEAM-5807] Conversion from AVRO records to rows (#6777)

--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on beam3 (beam) in workspace 

Cloning the remote Git repository
Cloning repository https://github.com/apache/beam.git
 > git init 
 > 
 >  # timeout=10
Fetching upstream changes from https://github.com/apache/beam.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/beam.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/beam.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/beam.git # timeout=10
Fetching upstream changes from https://github.com/apache/beam.git
 > git fetch --tags --progress https://github.com/apache/beam.git 
 > +refs/heads/*:refs/remotes/origin/* 
 > +refs/pull/${ghprbPullId}/*:refs/remotes/origin/pr/${ghprbPullId}/*
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 3eeb688818fcfadf81a56e3fc6faeb2a91a99310 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 3eeb688818fcfadf81a56e3fc6faeb2a91a99310
Commit message: "Merge pull request #6733: [BEAM-5741] Make 'Contact Us' link 
more visible from contributor guide."
 > git rev-list --no-walk 7800c3078d8ecaee7d2e789f02b759e579263249 # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
[EnvInject] - Executing scripts and injecting environment variables after the 
SCM step.
[EnvInject] - Injecting as environment variables the properties content 
SPARK_LOCAL_IP=127.0.0.1

[EnvInject] - Variables injected successfully.
[Gradle] - Launching build.
[src] $ 
"
 --info --continue --max-workers=12 -Dorg.gradle.jvmargs=-Xms2g 
-Dorg.gradle.jvmargs=-Xmx4g :javaExamplesDataflowPreCommit
Initialized native services in: /home/jenkins/.gradle/native
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.10.2/userguide/gradle_daemon.html.
Starting process 'Gradle build daemon'. Working directory: 
/home/jenkins/.gradle/daemon/4.10.2 Command: 
/usr/local/asfpackages/java/jdk1.8.0_172/bin/java -Xmx4g -Dfile.encoding=UTF-8 
-Duser.country=US -Duser.language=en -Duser.variant -cp 
/home/jenkins/.gradle/wrapper/dists/gradle-4.10.2-bin/cghg6c4gf4vkiutgsab8yrnwv/gradle-4.10.2/lib/gradle-launcher-4.10.2.jar
 org.gradle.launcher.daemon.bootstrap.GradleDaemon 4.10.2
Successfully started process 'Gradle build daemon'
An attempt to start the daemon took 0.959 secs.
The client will now receive all logging from the daemon (pid: 32682). The 
daemon log file: /home/jenkins/.gradle/daemon/4.10.2/daemon-32682.out.log
Closing daemon's stdin at end of input.
The daemon will no longer process any standard input.
Daemon will be stopped at the end of the build stopping after processing
Using 12 worker leases.
Starting Build
Parallel execution is an incubating feature.

> Configure project :buildSrc
Evaluating project ':buildSrc' using build file 
'
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/4.10.2/fileHashes/fileHashes.bin
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/4.10.2/fileHashes/resourceHashesCache.bin
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/journal-1/file-access.bin
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/journal-1/file-access.bin
file or directory 
'
 not found
Selected primary task 'build' from project :
file or directory 
'
 not found
:buildSrc:compileJava (Thread[Task worker for ':buildSrc' Thread 2,5,main]) 
started.
Using local directory buil

Jenkins build is back to normal : beam_PostCommit_Py_VR_Dataflow #1489

2018-10-24 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Python_VR_Flink #487

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[scott] [BEAM-5741] Make 'Contact Us' link more visible from contributor guide.

--
[...truncated 4.35 MB...]
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.jobmaster.JobManagerRunner@39a43277 @ 
akka://flink/user/jobmanager_41
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobManagerRunner - JobManager runner for job 
test_windowing_1540400863.75 (4d97ffcbe559d6b6ec97839701fb3fcd) was granted 
leadership with session id bb7cbeec-61b7-4da9-b83c-403b1ed944d9 at 
akka://flink/user/jobmanager_41.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Starting execution of job 
test_windowing_1540400863.75 (4d97ffcbe559d6b6ec97839701fb3fcd)
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job 
test_windowing_1540400863.75 (4d97ffcbe559d6b6ec97839701fb3fcd) switched from 
state CREATED to RUNNING.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 14Create/Impulse.None/beam:env:docker:v1:0 -> ToKeyedWorkItem (1/1) 
(82ecb81373e2301d77a253f13a476151) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 26assert_that/Create/Impulse.None/beam:env:docker:v1:0 (1/1) 
(7a4d15ae13ae5ea9d91a00375ba1c851) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - 
Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey -> 
72Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey/GroupByWindow.None/beam:env:docker:v1:0
 -> ToKeyedWorkItem (1/1) (d27b9beb405a54507eff95df74d3428d) switched from 
CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool - Cannot serve slot 
request, no ResourceManager connected. Adding as pending request 
[SlotRequestId{f5696ca8235e0e74db02d3350b1e8cb8}]
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - GroupByKey -> 
24GroupByKey/GroupByWindow.None/beam:env:docker:v1:0 (1/1) 
(7d1e85405c753c7af1059d0007d2aefa) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - ToKeyedWorkItem (1/1) 
(c12cea15fa55fb72a9352ec1a23aaa11) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - 
assert_that/Group/GroupByKey -> 
42assert_that/Group/GroupByKey/GroupByWindow.None/beam:env:docker:v1:0 (1/1) 
(cda3007441aa5816d9a808957eed8874) switched from CREATED to SCHEDULED.
[jobmanager-future-thread-1] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader 
akka://flink/user/jobmanager_41 , session=bb7cbeec-61b7-4da9-b83c-403b1ed944d9
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Connecting to ResourceManager 
akka://flink/user/resourcemanager_254f6927-7974-4d96-9239-995d4e387615(9bd288979a72798362ac3c23039d449d)
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Resolved ResourceManager 
address, beginning registration
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Registration at ResourceManager 
attempt 1 (timeout=100ms)
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - 
Registering job manager 
b83c403b1ed944d9bb7cbeec61b74da9@akka://flink/user/jobmanager_41 for job 
4d97ffcbe559d6b6ec97839701fb3fcd.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Registered 
job manager b83c403b1ed944d9bb7cbeec61b74da9@akka://flink/user/jobmanager_41 
for job 4d97ffcbe559d6b6ec97839701fb3fcd.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - JobManager successfully 
registered at ResourceManager, leader id: 9bd288979a72798362ac3c23039d449d.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool - Requesting new slot 
[SlotRequestId{f5696ca8235e0e74db02d3350b1e8cb8}] and profile 
ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0, 
nativeMemoryInMB=0, networkMemoryInMB=0} from resource manager.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Request 

Build failed in Jenkins: beam_PostCommit_Py_VR_Dataflow #1488

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[mxm] [BEAM-2918] Add state support for streaming in portable FlinkRunner

--
[...truncated 51.33 KB...]
OK
:beam-sdks-python:validatesRunnerBatchTests (Thread[Task worker for 
':',5,main]) completed. Took 18 mins 38.943 secs.
:beam-sdks-python:validatesRunnerStreamingTests (Thread[Task worker for 
':',5,main]) started.

> Task :beam-sdks-python:validatesRunnerStreamingTests
Caching disabled for task ':beam-sdks-python:validatesRunnerStreamingTests': 
Caching has not been enabled for the task
Task ':beam-sdks-python:validatesRunnerStreamingTests' is not up-to-date 
because:
  Task has not declared any outputs despite executing actions.
Starting process 'command 'sh''. Working directory: 

 Command: sh -c . 

 && ./scripts/run_postcommit.sh ValidatesRunner,'!sickbay-streaming' streaming
Successfully started process 'command 'sh''


###
# Build tarball and set pipeline options.

# Check that the script is running in a known directory.
if [[ $PWD != *sdks/python* ]]; then
  echo 'Unable to locate Apache Beam Python SDK root directory'
  exit 1
fi

# Go to the Apache Beam Python SDK root
if [[ "*sdks/python" != $PWD ]]; then
  cd $(pwd | sed 's/sdks\/python.*/sdks\/python/')
fi
pwd | sed 's/sdks\/python.*/sdks\/python/'
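The step above trims the current working directory back to the Python SDK root using a sed substitution. A minimal standalone sketch of that substitution (the example path is illustrative, not taken from this build):

```shell
# Reproduce the path-trimming substitution used by run_postcommit.sh:
# everything after the first "sdks/python" segment is stripped, so the
# script can cd back to the SDK root from any subdirectory.
full_path="/home/jenkins/workspace/beam/sdks/python/apache_beam/io"
sdk_root=$(echo "$full_path" | sed 's/sdks\/python.*/sdks\/python/')
echo "$sdk_root"
```

The same pattern leaves a path untouched when it already ends at `sdks/python`, which is why the script can run the substitution unconditionally.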

RUNNER=${3:-TestDataflowRunner}

# Where to store integration test outputs.
GCS_LOCATION=${4:-gs://temp-storage-for-end-to-end-tests}

PROJECT=${5:-apache-beam-testing}

# Create a tarball
python setup.py -q sdist
:398:
 UserWarning: Normalizing '2.9.0.dev' to '2.9.0.dev0'
  normalized_version,
warning: no files found matching 'README.md'
warning: no files found matching 'NOTICE'
warning: no files found matching 'LICENSE'
warning: cmd: standard file not found: should have one of README, README.rst, 
README.txt, README.md


SDK_LOCATION=$(find dist/apache-beam-*.tar.gz)
find dist/apache-beam-*.tar.gz

# Install test dependencies for ValidatesRunner tests.
echo "pyhamcrest" > postcommit_requirements.txt
echo "mock" >> postcommit_requirements.txt

# Options used to run testing pipeline on Cloud Dataflow Service. Also used for
# running on DirectRunner (some options ignored).
PIPELINE_OPTIONS=(
  "--runner=$RUNNER"
  "--project=$PROJECT"
  "--staging_location=$GCS_LOCATION/staging-it"
  "--temp_location=$GCS_LOCATION/temp-it"
  "--output=$GCS_LOCATION/py-it-cloud/output"
  "--sdk_location=$SDK_LOCATION"
  "--requirements_file=postcommit_requirements.txt"
  "--num_workers=1"
  "--sleep_secs=20"
)

# Add streaming flag if specified.
if [[ "$2" = "streaming" ]]; then
>>> Set test pipeline to streaming
  echo ">>> Set test pipeline to streaming"
  PIPELINE_OPTIONS+=("--streaming")
else
  echo ">>> Set test pipeline to batch"
fi

TESTS=""
if [[ "$3" = "TestDirectRunner" ]]; then
  if [[ "$2" = "streaming" ]]; then
TESTS="--tests=\
apache_beam.examples.wordcount_it_test:WordCountIT.test_wordcount_it,\
apache_beam.io.gcp.pubsub_integration_test:PubSubIntegrationTest"
  else
TESTS="--tests=\
apache_beam.examples.wordcount_it_test:WordCountIT.test_wordcount_it,\
apache_beam.io.gcp.pubsub_integration_test:PubSubIntegrationTest,\
apache_beam.io.gcp.big_query_query_to_table_it_test:BigQueryQueryToTableIT,\
apache_beam.io.gcp.bigquery_io_read_it_test"
  fi
fi

###
# Run tests and validate that jobs finish successfully.

JOINED_OPTS=$(IFS=" " ; echo "${PIPELINE_OPTIONS[*]}")
IFS=" " ; echo "${PIPELINE_OPTIONS[*]}"
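The `JOINED_OPTS` step above flattens the `PIPELINE_OPTIONS` array into a single space-separated string before handing it to `--test-pipeline-options`. A self-contained sketch of that idiom, with illustrative option values:

```shell
# Join a bash array into one space-separated string, as run_postcommit.sh
# does with PIPELINE_OPTIONS. Setting IFS inside the $() subshell scopes
# the change, and "${arr[*]}" joins elements with the first IFS character.
PIPELINE_OPTIONS=(
  "--runner=TestDataflowRunner"
  "--num_workers=1"
)
JOINED_OPTS=$(IFS=" " ; echo "${PIPELINE_OPTIONS[*]}")
echo "$JOINED_OPTS"
```

Keeping the options in an array until the last moment avoids quoting problems that arise when building the string by concatenation.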

>>> RUNNING TestDataflowRunner ValidatesRunner,!sickbay-streaming tests
echo ">>> RUNNING $RUNNER $1 tests"
python setup.py nosetests \
  --attr $1 \
  --nologcapture \
  --processes=8 \
  --process-timeout=3000 \
  --test-pipeline-options="$JOINED_OPTS" \
  $TESTS
:398:
 UserWarning: Normalizing '2.9.0.dev' to '2.9.0.dev0'
  normalized_version,
running nosetests
running egg_info
writing requirements to apache_beam.egg-info/requires.txt
writing apache_beam.egg-info/PKG-INFO
writing top-level names to apache_beam.egg-info/top_level.txt
writing dependency_links to apache_beam.egg-info/dependency_links.txt
writing entry points to apache_beam.egg-info/entry_points.txt
reading manifest file 'apache_beam.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'README.md'
warning: no fil

Jenkins build is back to normal : beam_PostCommit_Python_VR_Flink #484

2018-10-24 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Python_VR_Flink #483

2018-10-24 Thread Apache Jenkins Server
See 


Changes:

[mxm] [BEAM-2918] Add state support for streaming in portable FlinkRunner

--
[...truncated 4.35 MB...]
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - 
http://localhost:34067 was granted leadership with 
leaderSessionID=5dcd7702-dc68-4ba1-8c4b-b852915f41bc
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader http://localhost:34067 , 
session=5dcd7702-dc68-4ba1-8c4b-b852915f41bc
[flink-runner-job-server] INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService 
- Starting RPC endpoint for 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher at 
akka://flink/user/dispatcher7b8f70f5-a07f-4b0f-92a1-f404ab5ddc8b .
[flink-runner-job-server] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher@758def6e @ 
akka://flink/user/dispatcher7b8f70f5-a07f-4b0f-92a1-f404ab5ddc8b
[flink-runner-job-server] INFO org.apache.flink.runtime.minicluster.MiniCluster 
- Flink Mini Cluster started successfully
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Dispatcher 
akka://flink/user/dispatcher7b8f70f5-a07f-4b0f-92a1-f404ab5ddc8b was granted 
leadership with fencing token 6a116885-4557-4979-bd11-4e0c7ed4b87c
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Recovering all 
persisted jobs.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader 
akka://flink/user/dispatcher7b8f70f5-a07f-4b0f-92a1-f404ab5ddc8b , 
session=6a116885-4557-4979-bd11-4e0c7ed4b87c
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Submitting job 
71962f0ef738b1bbcc7327da114c3dd8 (test_windowing_1540397388.75).
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for 
org.apache.flink.runtime.jobmaster.JobMaster at akka://flink/user/jobmanager_41 
.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Initializing job 
test_windowing_1540397388.75 (71962f0ef738b1bbcc7327da114c3dd8).
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Using restart strategy 
NoRestartStrategy for test_windowing_1540397388.75 
(71962f0ef738b1bbcc7327da114c3dd8).
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool at 
akka://flink/user/3175a2c2-d580-49e0-bf21-d7cd1f528fbb .
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job recovers via 
failover strategy: full graph restart
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Running initialization on master 
for job test_windowing_1540397388.75 (71962f0ef738b1bbcc7327da114c3dd8).
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Successfully ran initialization 
on master in 0 ms.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - No state backend has been 
configured, using default (Memory / JobManager) MemoryStateBackend (data in 
heap memory / checkpoints to JobManager) (checkpoints: 'null', savepoints: 
'null', asynchronous: TRUE, maxStateSize: 5242880)
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.jobmaster.JobManagerRunner@b2cd7c4 @ 
akka://flink/user/jobmanager_41
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobManagerRunner - JobManager runner for job 
test_windowing_1540397388.75 (71962f0ef738b1bbcc7327da114c3dd8) was granted 
leadership with session id 9bae1379-8e0c-4132-a5fe-491e2902f996 at 
akka://flink/user/jobmanager_41.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Starting execution of job 
test_windowing_1540397388.75 (71962f0ef738b1bbcc7327da114c3dd8)
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job 
test_windowing_1540397388.75 (71962f0ef738b1bbcc7327da114c3dd8) switched from 
state CREATED to RUNNING.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 14Create/Impulse.None

Jenkins build is back to normal : beam_PostCommit_Python_VR_Flink #482

2018-10-24 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PerformanceTests_Python #1597

2018-10-24 Thread Apache Jenkins Server
See 


--
[...truncated 101.48 KB...]
adding 'apache-beam-2.9.0.dev0/apache_beam/runners/worker/bundle_processor.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/runners/worker/log_handler.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/runners/worker/statesampler_fast.pxd'
adding 
'apache-beam-2.9.0.dev0/apache_beam/runners/worker/worker_id_interceptor.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/runners/worker/operations.pxd'
adding 'apache-beam-2.9.0.dev0/apache_beam/runners/worker/sdk_worker_main.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/runners/worker/__init__.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/runners/worker/data_plane_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/internal/util_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/internal/pickler.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/internal/util.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/internal/module_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/internal/__init__.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/internal/pickler_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/internal/gcp/json_value_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/internal/gcp/json_value.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/internal/gcp/__init__.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/internal/gcp/auth.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/metrics/execution.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/metrics/execution.pxd'
adding 'apache-beam-2.9.0.dev0/apache_beam/metrics/cells_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/metrics/execution_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/metrics/metricbase.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/metrics/cells.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/metrics/metric.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/metrics/__init__.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/metrics/metric_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/metrics/monitoring_infos.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/tools/coders_microbenchmark.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/tools/sideinput_microbenchmark.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/tools/map_fn_microbenchmark.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/tools/microbenchmarks_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/tools/__init__.py'
adding 
'apache-beam-2.9.0.dev0/apache_beam/tools/distribution_counter_microbenchmark.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/tools/utils.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/examples/wordcount_debugging_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/examples/windowed_wordcount.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/examples/wordcount_debugging.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/examples/avro_bitcoin.py'
adding 
'apache-beam-2.9.0.dev0/apache_beam/examples/streaming_wordcount_debugging.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/examples/wordcount_minimal.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/examples/streaming_wordcount.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/examples/wordcount.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/examples/wordcount_it_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/examples/__init__.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/examples/fastavro_it_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/examples/wordcount_minimal_test.py'
adding 
'apache-beam-2.9.0.dev0/apache_beam/examples/streaming_wordcount_it_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/examples/wordcount_test.py'
adding 
'apache-beam-2.9.0.dev0/apache_beam/examples/cookbook/bigquery_side_input_test.py'
adding 
'apache-beam-2.9.0.dev0/apache_beam/examples/cookbook/bigquery_side_input.py'
adding 
'apache-beam-2.9.0.dev0/apache_beam/examples/cookbook/bigquery_tornadoes_test.py'
adding 
'apache-beam-2.9.0.dev0/apache_beam/examples/cookbook/multiple_output_pardo_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/examples/cookbook/filters_test.py'
adding 
'apache-beam-2.9.0.dev0/apache_beam/examples/cookbook/datastore_wordcount.py'
adding 
'apache-beam-2.9.0.dev0/apache_beam/examples/cookbook/bigquery_tornadoes.py'
adding 
'apache-beam-2.9.0.dev0/apache_beam/examples/cookbook/custom_ptransform_test.py'
adding 
'apache-beam-2.9.0.dev0/apache_beam/examples/cookbook/group_with_coder_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/examples/cookbook/bigquery_schema.py'
adding 
'apache-beam-2.9.0.dev0/apache_beam/examples/cookbook/bigquery_tornadoes_it_test.py'
adding 
'apache-beam-2.9.0.dev0/apache_beam/examples/cookbook/datastore_wordcount_it_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/examples/cookbook/mergecontacts.py'
adding 
'apache-beam-2.9.0.dev0/apache_beam/examples/cookbook/mergecontacts_test.py'
adding 'apache-beam-2.9.0.dev0/apache_beam/examples/cookbook/__init__.py'
adding 
'apache-beam-2.9.0.dev

Build failed in Jenkins: beam_PostCommit_Python_VR_Flink #481

2018-10-24 Thread Apache Jenkins Server
See 


--
[...truncated 4.29 MB...]
[flink-runner-job-server] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher@24441c64 @ 
akka://flink/user/dispatcher8cdfd65a-562b-43c0-96f9-9120fccf2d9a
[flink-runner-job-server] INFO org.apache.flink.runtime.minicluster.MiniCluster 
- Flink Mini Cluster started successfully
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Dispatcher 
akka://flink/user/dispatcher8cdfd65a-562b-43c0-96f9-9120fccf2d9a was granted 
leadership with fencing token 4fd9a6d4-fb15-48f4-a0b3-29f4daa293ae
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Recovering all 
persisted jobs.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader 
akka://flink/user/dispatcher8cdfd65a-562b-43c0-96f9-9120fccf2d9a , 
session=4fd9a6d4-fb15-48f4-a0b3-29f4daa293ae
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Submitting job 
25872f138a710b278c766ec3debfd51d (test_windowing_1540383145.36).
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for 
org.apache.flink.runtime.jobmaster.JobMaster at akka://flink/user/jobmanager_39 
.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Initializing job 
test_windowing_1540383145.36 (25872f138a710b278c766ec3debfd51d).
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Using restart strategy 
NoRestartStrategy for test_windowing_1540383145.36 
(25872f138a710b278c766ec3debfd51d).
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool at 
akka://flink/user/b2e3b2d7-a705-4198-8745-d7606a038b7c .
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job recovers via 
failover strategy: full graph restart
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Running initialization on master 
for job test_windowing_1540383145.36 (25872f138a710b278c766ec3debfd51d).
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Successfully ran initialization 
on master in 0 ms.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - No state backend has been 
configured, using default (Memory / JobManager) MemoryStateBackend (data in 
heap memory / checkpoints to JobManager) (checkpoints: 'null', savepoints: 
'null', asynchronous: TRUE, maxStateSize: 5242880)
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.jobmaster.JobManagerRunner@739fdba3 @ 
akka://flink/user/jobmanager_39
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobManagerRunner - JobManager runner for job 
test_windowing_1540383145.36 (25872f138a710b278c766ec3debfd51d) was granted 
leadership with session id b6d22aee-2b74-4eea-bfdf-d07954effc22 at 
akka://flink/user/jobmanager_39.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Starting execution of job 
test_windowing_1540383145.36 (25872f138a710b278c766ec3debfd51d)
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job 
test_windowing_1540383145.36 (25872f138a710b278c766ec3debfd51d) switched from 
state CREATED to RUNNING.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 14Create/Impulse.None/beam:env:docker:v1:0 -> ToKeyedWorkItem (1/1) 
(2e177ff267d5f768ef5055a77f86233b) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 26assert_that/Create/Impulse.None/beam:env:docker:v1:0 (1/1) 
(421429c6f7cb76a54ffe7a2a456c545e) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - 
Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey -> 
72Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey/GroupByWindow.None/beam:env:docker:v1:0
 -> ToKeyedWorkItem (1/1) (17495161ea7f5adfa43fc9486e342ccf) switched from 
CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-2] INFO 
o

Build failed in Jenkins: beam_PreCommit_Java Examples Dataflow_Cron #2

2018-10-24 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on beam6 (beam) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/beam.git # timeout=10
Fetching upstream changes from https://github.com/apache/beam.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/beam.git 
 > +refs/heads/*:refs/remotes/origin/* 
 > +refs/pull/${ghprbPullId}/*:refs/remotes/origin/pr/${ghprbPullId}/*
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 7800c3078d8ecaee7d2e789f02b759e579263249 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7800c3078d8ecaee7d2e789f02b759e579263249
Commit message: "Merge pull request #6807: [BEAM-5833] Fix java-harness build 
by adding flush() to BeamFnDataWriteRunnerTest"
 > git rev-list --no-walk 7800c3078d8ecaee7d2e789f02b759e579263249 # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
[EnvInject] - Executing scripts and injecting environment variables after the 
SCM step.
[EnvInject] - Injecting as environment variables the properties content 
SPARK_LOCAL_IP=127.0.0.1

[EnvInject] - Variables injected successfully.
[Gradle] - Launching build.
[src] $ 
"
 --info --continue --max-workers=12 -Dorg.gradle.jvmargs=-Xms2g 
-Dorg.gradle.jvmargs=-Xmx4g :javaExamplesDataflowPreCommit
Initialized native services in: /home/jenkins/.gradle/native
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.10.2/userguide/gradle_daemon.html.
Starting process 'Gradle build daemon'. Working directory: 
/home/jenkins/.gradle/daemon/4.10.2 Command: 
/usr/local/asfpackages/java/jdk1.8.0_172/bin/java -Xmx4g -Dfile.encoding=UTF-8 
-Duser.country=US -Duser.language=en -Duser.variant -cp 
/home/jenkins/.gradle/wrapper/dists/gradle-4.10.2-bin/cghg6c4gf4vkiutgsab8yrnwv/gradle-4.10.2/lib/gradle-launcher-4.10.2.jar
 org.gradle.launcher.daemon.bootstrap.GradleDaemon 4.10.2
Successfully started process 'Gradle build daemon'
An attempt to start the daemon took 0.892 secs.
The client will now receive all logging from the daemon (pid: 15358). The 
daemon log file: /home/jenkins/.gradle/daemon/4.10.2/daemon-15358.out.log
Closing daemon's stdin at end of input.
The daemon will no longer process any standard input.
Daemon will be stopped at the end of the build stopping after processing
Using 12 worker leases.
Starting Build
Parallel execution is an incubating feature.

> Configure project :buildSrc
Evaluating project ':buildSrc' using build file 
'
file or directory 
'
 not found
Selected primary task 'build' from project :
file or directory 
'
 not found
:buildSrc:compileJava (Thread[Task worker for ':buildSrc',5,main]) started.
Using local directory build cache for build ':buildSrc' (location = 
/home/jenkins/.gradle/caches/build-cache-1, removeUnusedEntriesAfter = 7 days).

> Task :buildSrc:compileJava NO-SOURCE
file or directory 
'
 not found
Skipping task ':buildSrc:compileJava' as it has no source files and no previous 
output files.
:buildSrc:compileJava (Thread[Task worker for ':buildSrc',5,main]) completed. 
Took 0.113 secs.
:buildSrc:compileGroovy (Thread[Task worker for ':buildSrc',5,main]) started.

> Task :buildSrc:compileGroovy FROM-CACHE
Build cache key for task ':buildSrc:compileGroovy' is 
c110a1a460af3ccf05b2d11c0048ee40
Task ':buildSrc:compileGroovy' is not up-to-date because:
  No history is available.
Origin for task ':buildSrc:compileGroovy': {executionTime=2425, 
hostName=apache-beam-jenkins-slave-group-t4pj, operatingSystem=Linux, 
buildInvocationId=veqwkgbn2zcibec2ce4fgkev4u, creationTime=1540231225862, 
type=org.gradle.api.tasks.compile.GroovyCompile_Decorated, userName=jenkins, 
gradleVersion=4.10.2, 
rootPath=/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Java_Cron/src/buildSrc,
 path=:compileGroovy}
Unpacked output for task ':buildSrc:compileGroovy' from cache.
:buildSrc:compileGroovy (Thread[Task worker for ':buildSrc',5

Build failed in Jenkins: beam_PreCommit_Java FnApi_Cron #3

2018-10-24 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on beam13 (beam) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/beam.git # timeout=10
Fetching upstream changes from https://github.com/apache/beam.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/beam.git 
 > +refs/heads/*:refs/remotes/origin/* 
 > +refs/pull/${ghprbPullId}/*:refs/remotes/origin/pr/${ghprbPullId}/*
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 7800c3078d8ecaee7d2e789f02b759e579263249 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7800c3078d8ecaee7d2e789f02b759e579263249
Commit message: "Merge pull request #6807: [BEAM-5833] Fix java-harness build 
by adding flush() to BeamFnDataWriteRunnerTest"
 > git rev-list --no-walk 7800c3078d8ecaee7d2e789f02b759e579263249 # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
[EnvInject] - Executing scripts and injecting environment variables after the 
SCM step.
[EnvInject] - Injecting as environment variables the properties content 
SPARK_LOCAL_IP=127.0.0.1

[EnvInject] - Variables injected successfully.
[Gradle] - Launching build.
[src] $ 
"
 --info --continue --max-workers=12 -Dorg.gradle.jvmargs=-Xms2g 
-Dorg.gradle.jvmargs=-Xmx4g :javaPreCommitFnApi
Initialized native services in: /home/jenkins/.gradle/native
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.10.2/userguide/gradle_daemon.html.
Starting process 'Gradle build daemon'. Working directory: 
/home/jenkins/.gradle/daemon/4.10.2 Command: 
/usr/local/asfpackages/java/jdk1.8.0_172/bin/java -Xmx4g -Dfile.encoding=UTF-8 
-Duser.country=US -Duser.language=en -Duser.variant -cp 
/home/jenkins/.gradle/wrapper/dists/gradle-4.10.2-bin/cghg6c4gf4vkiutgsab8yrnwv/gradle-4.10.2/lib/gradle-launcher-4.10.2.jar
 org.gradle.launcher.daemon.bootstrap.GradleDaemon 4.10.2
Successfully started process 'Gradle build daemon'
An attempt to start the daemon took 0.986 secs.
The client will now receive all logging from the daemon (pid: 13818). The 
daemon log file: /home/jenkins/.gradle/daemon/4.10.2/daemon-13818.out.log
Closing daemon's stdin at end of input.
The daemon will no longer process any standard input.
Daemon will be stopped at the end of the build stopping after processing
Using 12 worker leases.
Starting Build
Parallel execution is an incubating feature.

> Configure project :buildSrc
Evaluating project ':buildSrc' using build file 
'
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/4.10.2/fileHashes/fileHashes.bin
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/4.10.2/fileHashes/resourceHashesCache.bin
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/journal-1/file-access.bin
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/4.10.2/fileHashes/fileHashes.bin
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/4.10.2/fileHashes/resourceHashesCache.bin
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/journal-1/file-access.bin
file or directory 
'
 not found
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/journal-1/file-access.bin
Selected primary task 'build' from project :
file or directory 
'
 not found
:buildSrc:compileJava (Thread[Task worker for ':buildSrc' Thread 5,5,main]) 
started.
Using local directory build cache for build ':buildSrc' (location = 
/home/jenkins/.gradle/caches/build-cache-1, removeUnusedEntriesAfter = 7 days).

> Task :buildSrc:compileJava NO-SOURCE
file or directory 
'
 not found
Skipping task ':buildSrc:compileJava' as it has no source files and no previous 
output files.
:buildSrc:compileJava (Thread[Task worker for ':buildSrc' Thread 5,5,main]) 
completed. Took 0.091 secs.
:buildSrc:compileGroovy (Thread[Task worker for ':buildSrc' Thread 5,5,main]) 
started.

> Task :buildSrc:compileGroovy FROM-CACHE
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/4.10.2/fileHashes/fileHashes.bin
Invalidating in-memory cache of 
/home/

Jenkins build is back to normal : beam_PreCommit_Java_Cron #500

2018-10-24 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : beam_PostCommit_Java_GradleBuild #1745

2018-10-23 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PreCommit_Java FnApi_Cron #2

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[kedin] Fix java-harness build by adding flush() to

--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on beam13 (beam) in workspace 

Cloning the remote Git repository
Cloning repository https://github.com/apache/beam.git
 > git init 
 >  # 
 > timeout=10
Fetching upstream changes from https://github.com/apache/beam.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/beam.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/beam.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/beam.git # timeout=10
Fetching upstream changes from https://github.com/apache/beam.git
 > git fetch --tags --progress https://github.com/apache/beam.git 
 > +refs/heads/*:refs/remotes/origin/* 
 > +refs/pull/${ghprbPullId}/*:refs/remotes/origin/pr/${ghprbPullId}/*
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 7800c3078d8ecaee7d2e789f02b759e579263249 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7800c3078d8ecaee7d2e789f02b759e579263249
Commit message: "Merge pull request #6807: [BEAM-5833] Fix java-harness build 
by adding flush() to BeamFnDataWriteRunnerTest"
 > git rev-list --no-walk 5e603ad4c642cfba0a6db70abd05ed8e9d89c7d6 # timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
[EnvInject] - Executing scripts and injecting environment variables after the 
SCM step.
[EnvInject] - Injecting as environment variables the properties content 
SPARK_LOCAL_IP=127.0.0.1

[EnvInject] - Variables injected successfully.
[Gradle] - Launching build.
[src] $ 
"
 --info --continue --max-workers=12 -Dorg.gradle.jvmargs=-Xms2g 
-Dorg.gradle.jvmargs=-Xmx4g :javaPreCommitFnApi
Initialized native services in: /home/jenkins/.gradle/native
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.10.2/userguide/gradle_daemon.html.
Starting process 'Gradle build daemon'. Working directory: 
/home/jenkins/.gradle/daemon/4.10.2 Command: 
/usr/local/asfpackages/java/jdk1.8.0_172/bin/java -Xmx4g -Dfile.encoding=UTF-8 
-Duser.country=US -Duser.language=en -Duser.variant -cp 
/home/jenkins/.gradle/wrapper/dists/gradle-4.10.2-bin/cghg6c4gf4vkiutgsab8yrnwv/gradle-4.10.2/lib/gradle-launcher-4.10.2.jar
 org.gradle.launcher.daemon.bootstrap.GradleDaemon 4.10.2
Successfully started process 'Gradle build daemon'
An attempt to start the daemon took 1.024 secs.
The client will now receive all logging from the daemon (pid: 16890). The 
daemon log file: /home/jenkins/.gradle/daemon/4.10.2/daemon-16890.out.log
Closing daemon's stdin at end of input.
The daemon will no longer process any standard input.
Daemon will be stopped at the end of the build stopping after processing
Using 12 worker leases.
Starting Build
Parallel execution is an incubating feature.

> Configure project :buildSrc
Evaluating project ':buildSrc' using build file 
'
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/4.10.2/fileHashes/fileHashes.bin
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/4.10.2/fileHashes/resourceHashesCache.bin
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/4.10.2/fileHashes/fileHashes.bin
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/4.10.2/fileHashes/resourceHashesCache.bin
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/journal-1/file-access.bin
file or directory 
'
 not found
Selected primary task 'build' from project :
file or directory 
'
 not found
:buildSrc:compileJava (Thread[Task worker for ':buildSrc' Thread 8,5,main]) 
started.
Using local directory build cache for build ':buildSrc' (location = 
/home/jenkins/.gradle/caches/build-cache-1, removeUnusedEntriesAfter = 7 days).

> Task :buildSrc:compileJava NO-SOURCE
file or directory 
'
 not found
Skipping task ':buildSrc:compileJava' as it has no source files and no previous 
output files.
:buildSrc:compileJava (Thread[Ta

Build failed in Jenkins: beam_PreCommit_Java Examples Dataflow_Cron #1

2018-10-23 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on beam6 (beam) in workspace 

Cloning the remote Git repository
Cloning repository https://github.com/apache/beam.git
 > git init 
 > 
 >  # timeout=10
Fetching upstream changes from https://github.com/apache/beam.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/beam.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/beam.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/beam.git # timeout=10
Fetching upstream changes from https://github.com/apache/beam.git
 > git fetch --tags --progress https://github.com/apache/beam.git 
 > +refs/heads/*:refs/remotes/origin/* 
 > +refs/pull/${ghprbPullId}/*:refs/remotes/origin/pr/${ghprbPullId}/*
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 7800c3078d8ecaee7d2e789f02b759e579263249 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7800c3078d8ecaee7d2e789f02b759e579263249
Commit message: "Merge pull request #6807: [BEAM-5833] Fix java-harness build 
by adding flush() to BeamFnDataWriteRunnerTest"
First time build. Skipping changelog.
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
[EnvInject] - Executing scripts and injecting environment variables after the 
SCM step.
[EnvInject] - Injecting as environment variables the properties content 
SPARK_LOCAL_IP=127.0.0.1

[EnvInject] - Variables injected successfully.
[Gradle] - Launching build.
[src] $ 
"
 --info --continue --max-workers=12 -Dorg.gradle.jvmargs=-Xms2g 
-Dorg.gradle.jvmargs=-Xmx4g :javaExamplesDataflowPreCommit
Initialized native services in: /home/jenkins/.gradle/native
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.10.2/userguide/gradle_daemon.html.
Starting process 'Gradle build daemon'. Working directory: 
/home/jenkins/.gradle/daemon/4.10.2 Command: 
/usr/local/asfpackages/java/jdk1.8.0_172/bin/java -Xmx4g -Dfile.encoding=UTF-8 
-Duser.country=US -Duser.language=en -Duser.variant -cp 
/home/jenkins/.gradle/wrapper/dists/gradle-4.10.2-bin/cghg6c4gf4vkiutgsab8yrnwv/gradle-4.10.2/lib/gradle-launcher-4.10.2.jar
 org.gradle.launcher.daemon.bootstrap.GradleDaemon 4.10.2
Successfully started process 'Gradle build daemon'
An attempt to start the daemon took 1.13 secs.
The client will now receive all logging from the daemon (pid: 30479). The 
daemon log file: /home/jenkins/.gradle/daemon/4.10.2/daemon-30479.out.log
Closing daemon's stdin at end of input.
The daemon will no longer process any standard input.
Daemon will be stopped at the end of the build stopping after processing
Using 12 worker leases.
Starting Build
Parallel execution is an incubating feature.

> Configure project :buildSrc
Evaluating project ':buildSrc' using build file 
'
file or directory 
'
 not found
Selected primary task 'build' from project :
file or directory 
'
 not found
:buildSrc:compileJava (Thread[Task worker for ':buildSrc',5,main]) started.
Using local directory build cache for build ':buildSrc' (location = 
/home/jenkins/.gradle/caches/build-cache-1, removeUnusedEntriesAfter = 7 days).

> Task :buildSrc:compileJava NO-SOURCE
file or directory 
'
 not found
Skipping task ':buildSrc:compileJava' as it has no source files and no previous 
output files.
:buildSrc:compileJava (Thread[Task worker for ':buildSrc',5,main]) completed. 
Took 0.096 secs.
:buildSrc:compileGroovy (Thread[Task worker for ':buildSrc',5,main]) started.

> Task :buildSrc:compileGroovy FROM-CACHE
Build cache key for task ':buildSrc:compileGroovy' is 
c110a1a460af3ccf05b2d11c0048ee40
Task ':buildSrc:compileGroovy' is not up-to-date because:
  No history is available.
Origin for task ':buildSrc:compileGroovy': {executionTime=2425, 
hostName=apache-beam-jenkins-slave-group-t4pj, operatingSystem=Linux, 

Jenkins build is back to normal : beam_PostCommit_Py_VR_Dataflow #1484

2018-10-23 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PreCommit_Java_Cron #499

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[kedin] [SQL] Move builtin aggregations creation to a map of factories

[kedin] [SQL] Simplify AggregationRel

[kedin] [SQL] Add AggregationCall wrapper

[kedin] [SQL] Inline aggregation rel helper transforms

[kedin] [SQL] Move CombineFn creation to AggregationCall constructor

[kedin] [SQL] Split and rename Aggregation CombineFn wrappers

[kedin] [SQL] Make AggregationCombineFnAdapter non-AutoValue

[kedin] [SQL] Convert ifs to guard statements in AggregationCombineFnAdapter

[kedin] [SQL] Convert Covariance to accept rows instead of KVs

[kedin] [SQL] Split Args Adapters from AggregationCombineFnAdapter

[kedin] [SQL] Extract MultipleAggregationFn from BeamAggregationTransforms

[kedin] [SQL] Clean up, comment aggregation transforms

[kenn] [BEAM-5833] Fix checkstyle breakage

[scott] [BEAM-5837] Add initial jenkins job to verify community metrics infra.

[scott] Create separate :rat precommit and remove it from others.

--
[...truncated 45.77 MB...]

> Task :beam-sdks-java-io-hadoop-common:spotlessJava
Caching disabled for task ':beam-sdks-java-io-hadoop-common:spotlessJava': 
Caching has not been enabled for the task
Task ':beam-sdks-java-io-hadoop-common:spotlessJava' is not up-to-date because:
  No history is available.
All input files are considered out-of-date for incremental task 
':beam-sdks-java-io-hadoop-common:spotlessJava'.
:beam-sdks-java-io-hadoop-common:spotlessJava (Thread[Task worker for ':' 
Thread 9,5,main]) completed. Took 0.033 secs.
:beam-sdks-java-io-hadoop-common:spotlessJavaCheck (Thread[Task worker for ':' 
Thread 9,5,main]) started.

> Task :beam-sdks-java-io-hadoop-common:spotlessJavaCheck
Skipping task ':beam-sdks-java-io-hadoop-common:spotlessJavaCheck' as it has no 
actions.
:beam-sdks-java-io-hadoop-common:spotlessJavaCheck (Thread[Task worker for ':' 
Thread 9,5,main]) completed. Took 0.0 secs.
:beam-sdks-java-io-hadoop-common:spotlessCheck (Thread[Task worker for ':' 
Thread 9,5,main]) started.

> Task :beam-sdks-java-io-hadoop-common:spotlessCheck
Skipping task ':beam-sdks-java-io-hadoop-common:spotlessCheck' as it has no 
actions.
:beam-sdks-java-io-hadoop-common:spotlessCheck (Thread[Task worker for ':' 
Thread 9,5,main]) completed. Took 0.0 secs.
:beam-sdks-java-io-hadoop-common:test (Thread[Task worker for ':' Thread 
9,5,main]) started.
Gradle Test Executor 142 started executing tests.
Gradle Test Executor 142 finished executing tests.

> Task :beam-sdks-java-io-hadoop-common:test
Build cache key for task ':beam-sdks-java-io-hadoop-common:test' is 
aad14f6b503dff002ee092860343bbba
Task ':beam-sdks-java-io-hadoop-common:test' is not up-to-date because:
  No history is available.
Starting process 'Gradle Test Executor 142'. Working directory: 

 Command: /usr/local/asfpackages/java/jdk1.8.0_172/bin/java 
-Djava.security.manager=worker.org.gradle.process.internal.worker.child.BootstrapSecurityManager
 -Dorg.gradle.native=false -Dfile.encoding=UTF-8 -Duser.country=US 
-Duser.language=en -Duser.variant -ea -cp 
/home/jenkins/.gradle/caches/4.10.2/workerMain/gradle-worker.jar 
worker.org.gradle.process.internal.worker.GradleWorkerMain 'Gradle Test 
Executor 142'
Successfully started process 'Gradle Test Executor 142'

org.apache.beam.sdk.io.hadoop.WritableCoderTest > 
testAutomaticRegistrationOfCoderProvider STANDARD_ERROR
log4j:WARN No appenders could be found for logger 
(org.apache.beam.sdk.coders.CoderRegistry).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for 
more info.
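The log4j warnings above mean no log4j configuration file was found on the test classpath, so the root logger has no appender. For reference, a minimal log4j.properties that would initialize logging to the console (standard log4j 1.2 syntax; this is an illustrative file, not one taken from the Beam repository) looks like:

```properties
# Route everything at INFO or above to a console appender.
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c - %m%n
```

Placing such a file on the test runtime classpath (e.g. under src/test/resources) silences the three log4j:WARN lines.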
Finished generating test XML results (0.0 secs) into: 

Generating HTML test report...
Finished generating test html results (0.001 secs) into: 

Packing task ':beam-sdks-java-io-hadoop-common:test'
:beam-sdks-java-io-hadoop-common:test (Thread[Task worker for ':' Thread 
9,5,main]) completed. Took 1.119 secs.
:beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses
 (Thread[Task worker for ':' Thread 9,5,main]) started.

> Task :beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses
Caching disabled for task 
':beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses':
 Caching has not been enabled for the task
Task 
':beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses'
 is not up-to-date because:
  Task has not declared any outputs despite executing actions.
:beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamCl

Build failed in Jenkins: beam_PostCommit_Py_VR_Dataflow #1483

2018-10-23 Thread Apache Jenkins Server
See 


--
[...truncated 53.16 KB...]
  File 
"
 line 241, in test_par_do_with_multiple_outputs_and_using_yield
pipeline.run()
  File 
"
 line 107, in run
else test_runner_api))
  File 
"
 line 403, in run
self.to_runner_api(), self.runner, self._options).run(False)
  File 
"
 line 416, in run
return self.runner.run_pipeline(self)
  File 
"
 line 50, in run_pipeline
self.result = super(TestDataflowRunner, self).run_pipeline(pipeline)
  File 
"
 line 402, in run_pipeline
self.dataflow_client.create_job(self.job), self)
  File 
"
 line 184, in wrapper
return fun(*args, **kwargs)
  File 
"
 line 490, in create_job
self.create_job_description(job)
  File 
"
 line 519, in create_job_description
resources = self._stage_resources(job.options)
  File 
"
 line 452, in _stage_resources
staging_location=google_cloud_options.staging_location)
  File 
"
 line 161, in stage_job_resources
requirements_cache_path)
  File 
"
 line 419, in _populate_requirements_cache
processes.check_output(cmd_args)
  File 
"
 line 52, in check_output
return subprocess.check_output(*args, **kwargs)
  File "/usr/lib/python2.7/subprocess.py", line 574, in check_output
raise CalledProcessError(retcode, cmd, output=output)
CalledProcessError: Command 
'['
 '-m', 'pip', 'download', '--dest', '/tmp/dataflow-requirements-cache', '-r', 
'postcommit_requirements.txt', '--exists-action', 'i', '--no-binary', ':all:']' 
returned non-zero exit status 1
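The error at the bottom of this traceback is ordinary subprocess behavior: `subprocess.check_output` raises `CalledProcessError` whenever the child process exits with a non-zero status, and Beam's `processes.check_output` simply delegates to it. A minimal self-contained sketch of that failure mode (the failing command below is a stand-in for the `pip download` invocation, and the wrapper is illustrative, not Beam's actual module):

```python
import subprocess
import sys

def check_output(*args, **kwargs):
    # Thin wrapper in the spirit of the one shown in the traceback:
    # it delegates to subprocess.check_output, which raises
    # CalledProcessError when the child exits non-zero.
    return subprocess.check_output(*args, **kwargs)

try:
    # Stand-in command that always exits with status 1.
    check_output([sys.executable, "-c", "import sys; sys.exit(1)"])
except subprocess.CalledProcessError as exc:
    print("returned non-zero exit status %d" % exc.returncode)
```

Running this prints `returned non-zero exit status 1`, the same shape of message that ends the log above.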

--
XML: 

--
Ran 16 tests in 1007.328s

FAILED (errors=1)

> Task :beam-sdks-python:validatesRunnerBatchTests FAILED
:beam-sdks-python:validatesRunnerBatchTests (Thread[Task worker for 
':',5,main]) completed. Took 16 mins 48.979 secs.
:beam-sdks-python:validatesRunnerStreamingTests (Thread[Task worker for 
':',5,main]) started.

> Task :beam-sdks-python:validatesRunnerStreamingTests
Caching disabled for task ':beam-sdks-python:validatesRunnerStreamingTests': 
Caching has not been enabled for the task
Task ':beam-sdks-python:validatesRunnerStreamingTests' is not up-to-date 
because:
  Task has not declared any outputs despite executing actions.
Starting process 'command 'sh''. Working directory: 

 Command: sh -c . 

 && ./scripts/run_postcommit.sh ValidatesRunner,'!sickbay-streaming' streaming
Successfully started process 'command 'sh''


###
# Build tarball and set pipeline options.

# Check that the script is running in a known directory.
if [[ $PWD != *sdks/python* ]]; then
  echo 'Unable to locate Apache Beam Python SDK root directory'
  exit 1
fi

# Go to the Apache Beam Python SDK root
if [[ "*sdks/python" != $PWD ]]; then
  cd $(pwd | sed 's/sd
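The script output above opens with a directory guard: it refuses to run unless the current directory is somewhere under sdks/python. A portable sketch of that pattern (using a `case` statement rather than the script's bash-only `[[ ]]` test, and with a hypothetical function name; the truncated `sed` expression is not reconstructed here):

```shell
# Fail fast when not invoked from inside the Python SDK tree.
check_sdk_root() {
  case "$1" in
    *sdks/python*) return 0 ;;
    *) echo 'Unable to locate Apache Beam Python SDK root directory'
       return 1 ;;
  esac
}

check_sdk_root "/tmp/ws/src/sdks/python/scripts" && echo "guard passed"
```

The same check, inverted, is what makes the real script `exit 1` when run from the wrong directory.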

Jenkins build is back to normal : beam_PerformanceTests_XmlIOIT #911

2018-10-23 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PreCommit_Java FnApi_Cron #1

2018-10-23 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on beam10 (beam) in workspace 

Cloning the remote Git repository
Cloning repository https://github.com/apache/beam.git
 > git init 
 >  # 
 > timeout=10
Fetching upstream changes from https://github.com/apache/beam.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/beam.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/beam.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/beam.git # timeout=10
Fetching upstream changes from https://github.com/apache/beam.git
 > git fetch --tags --progress https://github.com/apache/beam.git 
 > +refs/heads/*:refs/remotes/origin/* 
 > +refs/pull/${ghprbPullId}/*:refs/remotes/origin/pr/${ghprbPullId}/*
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 5e603ad4c642cfba0a6db70abd05ed8e9d89c7d6 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5e603ad4c642cfba0a6db70abd05ed8e9d89c7d6
Commit message: "Merge pull request #6805:  Create separate :rat precommit and 
remove it from others"
First time build. Skipping changelog.
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
[EnvInject] - Executing scripts and injecting environment variables after the 
SCM step.
[EnvInject] - Injecting as environment variables the properties content 
SPARK_LOCAL_IP=127.0.0.1

[EnvInject] - Variables injected successfully.
[Gradle] - Launching build.
[src] $ 
"
 --info --continue --max-workers=12 -Dorg.gradle.jvmargs=-Xms2g 
-Dorg.gradle.jvmargs=-Xmx4g :javaPreCommitFnApi
Initialized native services in: /home/jenkins/.gradle/native
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.10.2/userguide/gradle_daemon.html.
Starting process 'Gradle build daemon'. Working directory: 
/home/jenkins/.gradle/daemon/4.10.2 Command: 
/usr/local/asfpackages/java/jdk1.8.0_172/bin/java -Xmx4g -Dfile.encoding=UTF-8 
-Duser.country=US -Duser.language=en -Duser.variant -cp 
/home/jenkins/.gradle/wrapper/dists/gradle-4.10.2-bin/cghg6c4gf4vkiutgsab8yrnwv/gradle-4.10.2/lib/gradle-launcher-4.10.2.jar
 org.gradle.launcher.daemon.bootstrap.GradleDaemon 4.10.2
Successfully started process 'Gradle build daemon'
An attempt to start the daemon took 0.926 secs.
The client will now receive all logging from the daemon (pid: 20264). The 
daemon log file: /home/jenkins/.gradle/daemon/4.10.2/daemon-20264.out.log
Closing daemon's stdin at end of input.
The daemon will no longer process any standard input.
Daemon will be stopped at the end of the build stopping after processing
Using 12 worker leases.
Starting Build
Parallel execution is an incubating feature.

> Configure project :buildSrc
Evaluating project ':buildSrc' using build file 
'
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/4.10.2/fileHashes/fileHashes.bin
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/4.10.2/fileHashes/resourceHashesCache.bin
file or directory 
'
 not found
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/journal-1/file-access.bin
Selected primary task 'build' from project :
file or directory 
'
 not found
:buildSrc:compileJava (Thread[Task worker for ':buildSrc',5,main]) started.
Using local directory build cache for build ':buildSrc' (location = 
/home/jenkins/.gradle/caches/build-cache-1, removeUnusedEntriesAfter = 7 days).

> Task :buildSrc:compileJava NO-SOURCE
file or directory 
'
 not found
Skipping task ':buildSrc:compileJava' as it has no source files and no previous 
output files.
:buildSrc:compileJava (Thread[Task worker for ':buildSrc',5,main]) completed. 
Took 0.087 secs.
:buildSrc:compileGroovy (Thread[Task worker for ':buildSrc',5,main]) started.

> Task :buildSrc:compileGroovy FROM-CACHE
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/4.10.2/fileHashes/fileHashes.bin
Invalidating in-memory cache of 
/home/jenkins/.gradle/caches/4.10.2/fi

Jenkins build is back to normal : beam_PostCommit_Java_PVR_Flink #85

2018-10-23 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : beam_PostCommit_Java_ValidatesRunner_Flink_Gradle #1953

2018-10-23 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : beam_PostCommit_Python_VR_Flink #476

2018-10-23 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : beam_PostCommit_Py_VR_Dataflow #1480

2018-10-23 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Java_GradleBuild #1743

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[ehudm] Upgrade BigQuery client from 0.25.0 to 1.6.0

[klk] [BEAM-5829] Convert tests to use DECIMAL for price field

[klk] [BEAM-5830] Use the word LANGUAGE instead of SDK on site

[tweise] Replace deprecated StateTag.StateBinder in FlinkStateInternals  (#6754)

[aaltay] [beam-5818] update docs to specify 7 days for RC bugs (#6782)

[github] [BEAM-3612] Make function registration idempotent

[github] [BEAM-3612] Make type registration idempotent

--
[...truncated 51.08 MB...]
at 
com.google.cloud.spanner.SpannerImpl.runWithRetries(SpannerImpl.java:227)
at 
com.google.cloud.spanner.SpannerImpl$SessionImpl.writeAtLeastOnce(SpannerImpl.java:793)
at 
com.google.cloud.spanner.SessionPool$PooledSession.writeAtLeastOnce(SessionPool.java:319)
at 
com.google.cloud.spanner.DatabaseClientImpl.writeAtLeastOnce(DatabaseClientImpl.java:60)
at 
org.apache.beam.sdk.io.gcp.spanner.SpannerIO$WriteToSpannerFn.processElement(SpannerIO.java:1108)
at 
org.apache.beam.sdk.io.gcp.spanner.SpannerIO$WriteToSpannerFn$DoFnInvoker.invokeProcessElement(Unknown
 Source)
at 
org.apache.beam.repackaged.beam_runners_direct_java.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:275)
at 
org.apache.beam.repackaged.beam_runners_direct_java.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:240)
at 
org.apache.beam.repackaged.beam_runners_direct_java.runners.core.SimplePushbackSideInputDoFnRunner.processElementInReadyWindows(SimplePushbackSideInputDoFnRunner.java:78)
at 
org.apache.beam.runners.direct.ParDoEvaluator.processElement(ParDoEvaluator.java:207)
at 
org.apache.beam.runners.direct.DoFnLifecycleManagerRemovingTransformEvaluator.processElement(DoFnLifecycleManagerRemovingTransformEvaluator.java:55)
at 
org.apache.beam.runners.direct.DirectTransformExecutor.processElements(DirectTransformExecutor.java:160)
at 
org.apache.beam.runners.direct.DirectTransformExecutor.run(DirectTransformExecutor.java:124)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: 
io.grpc.StatusRuntimeException: FAILED_PRECONDITION: Value must not be NULL in 
table users.
at 
com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:500)
at 
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:479)
at 
com.google.cloud.spanner.spi.v1.GrpcSpannerRpc.get(GrpcSpannerRpc.java:450)
... 21 more
Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: Value must 
not be NULL in table users.
at io.grpc.Status.asRuntimeException(Status.java:526)
at 
io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:468)
at 
io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at 
io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at 
io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at 
com.google.cloud.spanner.spi.v1.SpannerErrorInterceptor$1$1.onClose(SpannerErrorInterceptor.java:100)
at 
io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at 
io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at 
io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at 
com.google.cloud.spanner.spi.v1.WatchdogInterceptor$MonitoredCall$1.onClose(WatchdogInterceptor.java:190)
at 
io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at 
io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at 
io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at 
io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:684)
at 
io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at 
io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at 
io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at 
io.grpc.internal.

Build failed in Jenkins: beam_PostCommit_Java_PVR_Flink #84

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[kedin] [SQL] Move builtin aggregations creation to a map of factories

[kedin] [SQL] Simplify AggregationRel

[kedin] [SQL] Add AggregationCall wrapper

[kedin] [SQL] Inline aggregation rel helper transforms

[kedin] [SQL] Move CombineFn creation to AggregationCall constructor

[kedin] [SQL] Split and rename Aggregation CombineFn wrappers

[kedin] [SQL] Make AggregationCombineFnAdapter non-AutoValue

[kedin] [SQL] Convert ifs to guard statements in AggregationCombineFnAdapter

[kedin] [SQL] Convert Covariance to accept rows instead of KVs

[kedin] [SQL] Split Args Adapters from AggregationCombineFnAdapter

[kedin] [SQL] Extract MultipleAggregationFn from BeamAggregationTransforms

[kedin] [SQL] Clean up, comment aggregation transforms

--
[...truncated 219.18 MB...]
[MapPartition (MapPartition at 
PAssert$98/WindowToken/Window.Assign.out/beam:env:docker:v1:0) (13/16)] INFO 
org.apache.flink.runtime.taskmanager.Task - MapPartition (MapPartition at 
PAssert$98/WindowToken/Window.Assign.out/beam:env:docker:v1:0) (13/16) 
(9f700b449ca161f32206cd27847602a6) switched from RUNNING to FINISHED.
[MapPartition (MapPartition at 
PAssert$98/WindowToken/Window.Assign.out/beam:env:docker:v1:0) (13/16)] INFO 
org.apache.flink.runtime.taskmanager.Task - Freeing task resources for 
MapPartition (MapPartition at 
PAssert$98/WindowToken/Window.Assign.out/beam:env:docker:v1:0) (13/16) 
(9f700b449ca161f32206cd27847602a6).
[jobmanager-future-thread-12] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@6df99931) (13/16) 
(ee3c9010a60ba2761c6b7dd3fbbb38f8) switched from CREATED to SCHEDULED.
[MapPartition (MapPartition at 
PAssert$98/WindowToken/Window.Assign.out/beam:env:docker:v1:0) (13/16)] INFO 
org.apache.flink.runtime.taskmanager.Task - Ensuring all FileSystem streams are 
closed for task MapPartition (MapPartition at 
PAssert$98/WindowToken/Window.Assign.out/beam:env:docker:v1:0) (13/16) 
(9f700b449ca161f32206cd27847602a6) [FINISHED]
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.taskexecutor.TaskExecutor - Un-registering task and 
sending final execution state FINISHED to JobManager for task MapPartition 
(MapPartition at PAssert$98/WindowToken/Window.Assign.out/beam:env:docker:v1:0) 
9f700b449ca161f32206cd27847602a6.
[flink-akka.actor.default-dispatcher-7] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@6df99931) (13/16) 
(ee3c9010a60ba2761c6b7dd3fbbb38f8) switched from SCHEDULED to DEPLOYING.
[flink-akka.actor.default-dispatcher-7] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@6df99931) (13/16) (attempt 
#0) to localhost
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - MapPartition 
(MapPartition at PAssert$98/WindowToken/Window.Assign.out/beam:env:docker:v1:0) 
(13/16) (9f700b449ca161f32206cd27847602a6) switched from RUNNING to FINISHED.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.taskexecutor.TaskExecutor - Received task DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@6df99931) (13/16).
[DataSink (org.apache.flink.api.java.io.DiscardingOutputFormat@6df99931) 
(13/16)] INFO org.apache.flink.runtime.taskmanager.Task - DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@6df99931) (13/16) 
(ee3c9010a60ba2761c6b7dd3fbbb38f8) switched from CREATED to DEPLOYING.
[DataSink (org.apache.flink.api.java.io.DiscardingOutputFormat@6df99931) 
(13/16)] INFO org.apache.flink.runtime.taskmanager.Task - Creating FileSystem 
stream leak safety net for task DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@6df99931) (13/16) 
(ee3c9010a60ba2761c6b7dd3fbbb38f8) [DEPLOYING]
[DataSink (org.apache.flink.api.java.io.DiscardingOutputFormat@6df99931) 
(13/16)] INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for 
task DataSink (org.apache.flink.api.java.io.DiscardingOutputFormat@6df99931) 
(13/16) (ee3c9010a60ba2761c6b7dd3fbbb38f8) [DEPLOYING].
[DataSink (org.apache.flink.api.java.io.DiscardingOutputFormat@6df99931) 
(13/16)] INFO org.apache.flink.runtime.taskmanager.Task - Registering task at 
network: DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@6df99931) (13/16) 
(ee3c9010a60ba2761c6b7dd3fbbb38f8) [DEPLOYING].
[DataSink (org.apache.flink.api.java.io.DiscardingOutputFormat@6df99931) 
(13/16)] INFO org.apache.flink.runtime.taskmanager.Task - DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@6df99931) (13/16) 
(ee3c9010a60ba2761c6b7dd3fbbb38f8) switched from DEPLOYING to RUNNING.
[flink-akka.actor.default-dis

Build failed in Jenkins: beam_PostCommit_Java_ValidatesRunner_Flink_Gradle #1952

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[kedin] [SQL] Move builtin aggregations creation to a map of factories

[kedin] [SQL] Simplify AggregationRel

[kedin] [SQL] Add AggregationCall wrapper

[kedin] [SQL] Inline aggregation rel helper transforms

[kedin] [SQL] Move CombineFn creation to AggregationCall constructor

[kedin] [SQL] Split and rename Aggregation CombineFn wrappers

[kedin] [SQL] Make AggregationCombineFnAdapter non-AutoValue

[kedin] [SQL] Convert ifs to guard statements in AggregationCombineFnAdapter

[kedin] [SQL] Convert Covariance to accept rows instead of KVs

[kedin] [SQL] Split Args Adapters from AggregationCombineFnAdapter

[kedin] [SQL] Extract MultipleAggregationFn from BeamAggregationTransforms

[kedin] [SQL] Clean up, comment aggregation transforms

--
[...truncated 53.92 MB...]
INFO: No state backend has been configured, using default (Memory / 
JobManager) MemoryStateBackend (data in heap memory / checkpoints to 
JobManager) (checkpoints: 'null', savepoints: 'null', asynchronous: TRUE, 
maxStateSize: 5242880)
Oct 23, 2018 10:35:32 PM 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
updateLeader
INFO: Proposing leadership to contender 
org.apache.flink.runtime.jobmaster.JobManagerRunner@26dc7c5 @ 
akka://flink/user/jobmanager_269
Oct 23, 2018 10:35:32 PM 
org.apache.flink.runtime.jobmaster.JobManagerRunner 
verifyJobSchedulingStatusAndStartJobManager
INFO: JobManager runner for job 
metricspushertest0test-jenkins-1023223531-b4747a58 
(e7b3eac20f6df1fa2cd4946f5d9e09c2) was granted leadership with session id 
0d0b76aa-ef88-429d-bc33-85f4e73ba518 at akka://flink/user/jobmanager_269.
Oct 23, 2018 10:35:32 PM org.apache.flink.runtime.jobmaster.JobMaster 
startJobExecution
INFO: Starting execution of job 
metricspushertest0test-jenkins-1023223531-b4747a58 
(e7b3eac20f6df1fa2cd4946f5d9e09c2)
Oct 23, 2018 10:35:32 PM 
org.apache.flink.runtime.executiongraph.ExecutionGraph transitionState
INFO: Job metricspushertest0test-jenkins-1023223531-b4747a58 
(e7b3eac20f6df1fa2cd4946f5d9e09c2) switched from state CREATED to RUNNING.
Oct 23, 2018 10:35:32 PM org.apache.flink.runtime.executiongraph.Execution 
transitionState
INFO: Source: 
GenerateSequence/Read(UnboundedCountingSource)/Create/Read(CreateSource) -> 
GenerateSequence/Read(UnboundedCountingSource)/Split/ParMultiDo(Split) -> 
GenerateSequence/Read(UnboundedCountingSource)/Reshuffle/Pair with random 
key/ParMultiDo(AssignShard) -> 
GenerateSequence/Read(UnboundedCountingSource)/Reshuffle/Reshuffle/Window.Into()/Window.Assign.out
 -> 
GenerateSequence/Read(UnboundedCountingSource)/Reshuffle/Reshuffle/ReifyOriginalTimestamps/ParDo(Anonymous)/ParMultiDo(Anonymous)
 -> ToKeyedWorkItem (1/1) (601419308020fa6a4167b6d9b27c37b5) switched from 
CREATED to SCHEDULED.
Oct 23, 2018 10:35:32 PM org.apache.flink.runtime.executiongraph.Execution 
transitionState
INFO: 
GenerateSequence/Read(UnboundedCountingSource)/Reshuffle/Reshuffle/GroupByKey 
-> 
GenerateSequence/Read(UnboundedCountingSource)/Reshuffle/Reshuffle/ExpandIterable/ParMultiDo(Anonymous)
 -> 
GenerateSequence/Read(UnboundedCountingSource)/Reshuffle/Reshuffle/RestoreOriginalTimestamps/ReifyTimestamps.RemoveWildcard/ParDo(Anonymous)/ParMultiDo(Anonymous)
 -> 
GenerateSequence/Read(UnboundedCountingSource)/Reshuffle/Reshuffle/RestoreOriginalTimestamps/Reify.ExtractTimestampsFromValues/ParDo(Anonymous)/ParMultiDo(Anonymous)
 -> 
GenerateSequence/Read(UnboundedCountingSource)/Reshuffle/Values/Values/Map/ParMultiDo(Anonymous)
 -> GenerateSequence/Read(UnboundedCountingSource)/Read/ParMultiDo(Read) -> 
GenerateSequence/Read(UnboundedCountingSource)/StripIds/ParMultiDo(StripIds) -> 
ParDo(Counting)/ParMultiDo(Counting) (1/1) (829e6d6d8efd4311ce1a5dd4de0cfc2a) 
switched from CREATED to SCHEDULED.
Oct 23, 2018 10:35:32 PM 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool 
stashRequestWaitingForResourceManager
INFO: Cannot serve slot request, no ResourceManager connected. Adding as 
pending request [SlotRequestId{d8a5023a4bcf5b7175bf6cb75a271fec}]
Oct 23, 2018 10:35:32 PM 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
confirmLeader
INFO: Received confirmation of leadership for leader 
akka://flink/user/jobmanager_269 , session=0d0b76aa-ef88-429d-bc33-85f4e73ba518
Oct 23, 2018 10:35:32 PM org.apache.flink.runtime.jobmaster.JobMaster 
connectToResourceManager
INFO: Connecting to ResourceManager 
akka://flink/user/resourcemanager_ab181320-2243-4ecf-af5b-76cd39571277(996bc109e96905e746ada5d503514b93)
Oct 23, 2018 10:35:32 PM 
org.apache.flink.runtime.registration.RetryingRegistration 
lambda$startRegistration$0
INFO: Resolved ResourceManager address, beginning registration
Oct 23, 2018 10:35:32 PM 
org.apache.flink.r

Build failed in Jenkins: beam_PostCommit_Python_VR_Flink #475

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[kedin] [SQL] Move builtin aggregations creation to a map of factories

[kedin] [SQL] Simplify AggregationRel

[kedin] [SQL] Add AggregationCall wrapper

[kedin] [SQL] Inline aggregation rel helper transforms

[kedin] [SQL] Move CombineFn creation to AggregationCall constructor

[kedin] [SQL] Split and rename Aggregation CombineFn wrappers

[kedin] [SQL] Make AggregationCombineFnAdapter non-AutoValue

[kedin] [SQL] Convert ifs to guard statements in AggregationCombineFnAdapter

[kedin] [SQL] Convert Covariance to accept rows instead of KVs

[kedin] [SQL] Split Args Adapters from AggregationCombineFnAdapter

[kedin] [SQL] Extract MultipleAggregationFn from BeamAggregationTransforms

[kedin] [SQL] Clean up, comment aggregation transforms

--
[...truncated 4.30 MB...]
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - 
http://localhost:35587 was granted leadership with 
leaderSessionID=b711c037-8a3f-40e8-acbe-726b62375853
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader http://localhost:35587 , 
session=b711c037-8a3f-40e8-acbe-726b62375853
[flink-runner-job-server] INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService 
- Starting RPC endpoint for 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher at 
akka://flink/user/dispatcher37484d6a-1088-4b7f-8b3c-c686d2d047c8 .
[flink-runner-job-server] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher@28f7b82e @ 
akka://flink/user/dispatcher37484d6a-1088-4b7f-8b3c-c686d2d047c8
[flink-runner-job-server] INFO org.apache.flink.runtime.minicluster.MiniCluster 
- Flink Mini Cluster started successfully
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Dispatcher 
akka://flink/user/dispatcher37484d6a-1088-4b7f-8b3c-c686d2d047c8 was granted 
leadership with fencing token e4b24e9a-8a0f-4353-b946-dc2b38104a09
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Recovering all 
persisted jobs.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader 
akka://flink/user/dispatcher37484d6a-1088-4b7f-8b3c-c686d2d047c8 , 
session=e4b24e9a-8a0f-4353-b946-dc2b38104a09
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Submitting job 
ce540a98882c9dde14e2e2a392291f72 (test_windowing_1540333889.14).
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for 
org.apache.flink.runtime.jobmaster.JobMaster at akka://flink/user/jobmanager_39 
.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Initializing job 
test_windowing_1540333889.14 (ce540a98882c9dde14e2e2a392291f72).
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Using restart strategy 
NoRestartStrategy for test_windowing_1540333889.14 
(ce540a98882c9dde14e2e2a392291f72).
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool at 
akka://flink/user/e79ceb44-832c-47e6-8494-6df4fd44a512 .
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job recovers via 
failover strategy: full graph restart
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Running initialization on master 
for job test_windowing_1540333889.14 (ce540a98882c9dde14e2e2a392291f72).
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Successfully ran initialization 
on master in 0 ms.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - No state backend has been 
configured, using default (Memory / JobManager) MemoryStateBackend (data in 
heap memory / checkpoints to JobManager) (checkpoints: 'null', savepoints: 
'null', asynchronous: TRUE, maxStateSize: 5242880)
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.jobmaster.JobManagerRunner@272adf55 @ 
akka://flink/user/jobmanager_39
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobManagerRunner - JobManager runner for job 
test_windowing_1540333889.14 (ce540a98882c9dde14e2e2

Jenkins build is back to normal : beam_PostCommit_Java_PVR_Flink #83

2018-10-23 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Py_VR_Dataflow #1479

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[kenn] [BEAM-5833] Fix checkstyle breakage

--
[...truncated 62.10 KB...]
WARNING:root:Waiting indefinitely for streaming job.
test_undeclared_outputs (apache_beam.transforms.ptransform_test.PTransformTest) 
... FAIL
WARNING:root:Waiting indefinitely for streaming job.
test_par_do_with_multiple_outputs_and_using_yield 
(apache_beam.transforms.ptransform_test.PTransformTest) ... FAIL
WARNING:root:Waiting indefinitely for streaming job.
test_as_dict_twice (apache_beam.transforms.sideinputs_test.SideInputsTest) ... 
FAIL
WARNING:root:Waiting indefinitely for streaming job.
test_default_value_singleton_side_input 
(apache_beam.transforms.sideinputs_test.SideInputsTest) ... FAIL
WARNING:root:Waiting indefinitely for streaming job.
test_as_singleton_with_different_defaults 
(apache_beam.transforms.sideinputs_test.SideInputsTest) ... FAIL
WARNING:root:Waiting indefinitely for streaming job.
WARNING:root:Waiting indefinitely for streaming job.
test_iterable_side_input 
(apache_beam.transforms.sideinputs_test.SideInputsTest) ... FAIL
test_empty_singleton_side_input 
(apache_beam.transforms.sideinputs_test.SideInputsTest) ... FAIL
WARNING:root:Waiting indefinitely for streaming job.
test_as_singleton_without_unique_labels 
(apache_beam.transforms.sideinputs_test.SideInputsTest) ... FAIL
WARNING:root:Waiting indefinitely for streaming job.
test_flattened_side_input 
(apache_beam.transforms.sideinputs_test.SideInputsTest) ... FAIL
WARNING:root:Waiting indefinitely for streaming job.
test_as_list_and_as_dict_side_inputs 
(apache_beam.transforms.sideinputs_test.SideInputsTest) ... FAIL

==
FAIL: test_multiple_empty_outputs 
(apache_beam.transforms.ptransform_test.PTransformTest)
--
Traceback (most recent call last):
  File 
"
 line 284, in test_multiple_empty_outputs
pipeline.run()
  File 
"
 line 111, in run
"Pipeline execution failed."
AssertionError: Pipeline execution failed.
 >> begin captured stdout << -
Found: 
https://console.cloud.google.com/dataflow/jobsDetail/locations/us-central1/jobs/2018-10-23_14_14_22-8890116831213913788?project=apache-beam-testing.

- >> end captured stdout << --

==
FAIL: test_par_do_with_multiple_outputs_and_using_return 
(apache_beam.transforms.ptransform_test.PTransformTest)
--
Traceback (most recent call last):
  File 
"
 line 257, in test_par_do_with_multiple_outputs_and_using_return
pipeline.run()
  File 
"
 line 111, in run
"Pipeline execution failed."
AssertionError: Pipeline execution failed.
 >> begin captured stdout << -
Found: 
https://console.cloud.google.com/dataflow/jobsDetail/locations/us-central1/jobs/2018-10-23_14_14_22-7660480714087854148?project=apache-beam-testing.

- >> end captured stdout << --

==
FAIL: test_as_list_twice (apache_beam.transforms.sideinputs_test.SideInputsTest)
--
Traceback (most recent call last):
  File 
"
 line 274, in test_as_list_twice
pipeline.run()
  File 
"
 line 111, in run
"Pipeline execution failed."
AssertionError: Pipeline execution failed.
 >> begin captured stdout << -
Found: 
https://console.cloud.google.com/dataflow/jobsDetail/locations/us-central1/jobs/2018-10-23_14_14_22-3596647888048251815?project=apache-beam-testing.

- >> end captured stdout << --

==
FAIL: test_flatten_multiple_pcollections_having_multiple_consumers 
(apache_beam.transforms.ptransform_test.PTransformTest)
--

Jenkins build is back to normal : beam_PostCommit_Python_Verify #6355

2018-10-23 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PreCommit_Java_Cron #498

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[ehudm] Upgrade BigQuery client from 0.25.0 to 1.6.0

[klk] [BEAM-5829] Convert tests to use DECIMAL for price field

[klk] [BEAM-5830] Use the word LANGUAGE instead of SDK on site

[robertwb] [BEAM-5791] Implement time-based pushback in the dataflow harness 
data

[katarzyna.kucharczyk] [BEAM-5758] Load tests of Python Synthetic Sources: 
GroupByKey,

[thw] [BEAM-5467] Remove createProcessWorker from docker execution path.

[tweise] Replace deprecated StateTag.StateBinder in FlinkStateInternals  (#6754)

[aaltay] [beam-5818] update docs to specify 7 days for RC bugs (#6782)

[github] [BEAM-3612] Make function registration idempotent

[github] [BEAM-3612] Make type registration idempotent

--
[...truncated 46.03 MB...]
:beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses
 (Thread[Task worker for ':' Thread 2,5,main]) started.

> Task 
> :beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses
Caching disabled for task 
':beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses':
 Caching has not been enabled for the task
Task 
':beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses'
 is not up-to-date because:
  Task has not declared any outputs despite executing actions.
:beam-sdks-java-io-hadoop-common:validateShadedJarDoesntLeakNonOrgApacheBeamClasses
 (Thread[Task worker for ':' Thread 2,5,main]) completed. Took 0.001 secs.
:beam-sdks-java-io-hadoop-common:check (Thread[Task worker for ':' Thread 
2,5,main]) started.

> Task :beam-sdks-java-io-hadoop-common:check
Skipping task ':beam-sdks-java-io-hadoop-common:check' as it has no actions.
:beam-sdks-java-io-hadoop-common:check (Thread[Task worker for ':' Thread 
2,5,main]) completed. Took 0.0 secs.
:beam-sdks-java-io-hadoop-common:build (Thread[Task worker for ':' Thread 
8,5,main]) started.

> Task :beam-sdks-java-io-hadoop-common:build
Skipping task ':beam-sdks-java-io-hadoop-common:build' as it has no actions.
:beam-sdks-java-io-hadoop-common:build (Thread[Task worker for ':' Thread 
8,5,main]) completed. Took 0.0 secs.
:beam-sdks-java-io-hadoop-common:buildDependents (Thread[Task worker for ':' 
Thread 8,5,main]) started.

> Task :beam-sdks-java-io-hadoop-common:buildDependents
Caching disabled for task ':beam-sdks-java-io-hadoop-common:buildDependents': 
Caching has not been enabled for the task
Task ':beam-sdks-java-io-hadoop-common:buildDependents' is not up-to-date 
because:
  Task has not declared any outputs despite executing actions.
:beam-sdks-java-io-hadoop-common:buildDependents (Thread[Task worker for ':' 
Thread 8,5,main]) completed. Took 0.0 secs.
:beam-vendor-sdks-java-extensions-protobuf:jar (Thread[Task worker for ':' 
Thread 4,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:jar
Build cache key for task ':beam-vendor-sdks-java-extensions-protobuf:jar' is 
5041dc99151d9d3ea85904a15452161d
Caching disabled for task ':beam-vendor-sdks-java-extensions-protobuf:jar': 
Caching has not been enabled for the task
Task ':beam-vendor-sdks-java-extensions-protobuf:jar' is not up-to-date because:
  No history is available.
:beam-vendor-sdks-java-extensions-protobuf:jar (Thread[Task worker for ':' 
Thread 4,5,main]) completed. Took 0.007 secs.
:beam-vendor-sdks-java-extensions-protobuf:compileTestJava (Thread[Task worker 
for ':' Thread 4,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:compileTestJava NO-SOURCE
file or directory 
'
 not found
Skipping task ':beam-vendor-sdks-java-extensions-protobuf:compileTestJava' as 
it has no source files and no previous output files.
:beam-vendor-sdks-java-extensions-protobuf:compileTestJava (Thread[Task worker 
for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:beam-vendor-sdks-java-extensions-protobuf:processTestResources (Thread[Task 
worker for ':' Thread 4,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:processTestResources NO-SOURCE
file or directory 
'
 not found
Skipping task ':beam-vendor-sdks-java-extensions-protobuf:processTestResources' 
as it has no source files and no previous output files.
:beam-vendor-sdks-java-extensions-protobuf:processTestResources (Thread[Task 
worker for ':' Thread 4,5,main]) completed. Took 0.0 secs.
:beam-vendor-sdks-java-extensions-protobuf:testClasses (Thread[Task worker for 
':' Thread 9,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:testClasses UP-TO-DATE
Skipping task ':beam-vendor-sdks-java-extensions-protobuf:testClasses' as it 
has no actions.
:beam-vendor-sdks-java-extensions-protobuf:te

Build failed in Jenkins: beam_PostCommit_Java_GradleBuild #1742

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[robertwb] [BEAM-5791] Implement time-based pushback in the dataflow harness 
data

[katarzyna.kucharczyk] [BEAM-5758] Load tests of Python Synthetic Sources: 
GroupByKey,

[thw] [BEAM-5467] Remove createProcessWorker from docker execution path.

--
[...truncated 51.01 MB...]
at 
org.apache.beam.runners.direct.DoFnLifecycleManagerRemovingTransformEvaluator.processElement(DoFnLifecycleManagerRemovingTransformEvaluator.java:55)
at 
org.apache.beam.runners.direct.DirectTransformExecutor.processElements(DirectTransformExecutor.java:160)
at 
org.apache.beam.runners.direct.DirectTransformExecutor.run(DirectTransformExecutor.java:124)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: 
io.grpc.StatusRuntimeException: FAILED_PRECONDITION: Value must not be NULL in 
table users.
at 
com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:500)
at 
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:479)
at 
com.google.cloud.spanner.spi.v1.GrpcSpannerRpc.get(GrpcSpannerRpc.java:450)
... 21 more
Caused by: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: Value must 
not be NULL in table users.
at io.grpc.Status.asRuntimeException(Status.java:526)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:468)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.cloud.spanner.spi.v1.SpannerErrorInterceptor$1$1.onClose(SpannerErrorInterceptor.java:100)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.cloud.spanner.spi.v1.WatchdogInterceptor$MonitoredCall$1.onClose(WatchdogInterceptor.java:190)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:684)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:403)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:459)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:546)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:467)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:584)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
... 3 more

Oct 23, 2018 6:59:36 PM org.apache.beam.sdk.io.gcp.spanner.SpannerIO$WriteToSpannerFn processElement
WARNING: Failed to submit the mutation group
com.google.cloud.spanner.SpannerException: FAILED_PRECONDITION: io.grpc.StatusRuntimeException: FAILED_PRECONDITION: Value must not be NULL in table users.
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerExceptionPreformatted(SpannerExceptionFactory.java:119)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:43)
at com.google.cloud.spanner.SpannerExceptionFactory.newSpannerExceptio
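The FAILED_PRECONDITION above is Cloud Spanner rejecting a row mutation that leaves a NOT NULL column unset. A minimal pre-submit check is one way to surface such rows before the write step; the `users` schema, column names, and mutation shape below are hypothetical illustrations, not the Beam or Spanner API:

```python
def find_null_violations(row, not_null_columns):
    """Return the NOT NULL columns that a row mutation leaves NULL or unset."""
    return [col for col in not_null_columns if row.get(col) is None]

# A row destined for a hypothetical `users` table, missing `email`:
row = {"id": 42, "name": "alice", "email": None}
violations = find_null_violations(row, not_null_columns=("id", "name", "email"))
# Spanner would reject this mutation with FAILED_PRECONDITION; checking first
# lets a pipeline route the bad row to a dead-letter output instead of failing.
```

In a Beam pipeline such a check would typically live in a DoFn ahead of the Spanner write, emitting violating rows to a side output.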

Build failed in Jenkins: beam_PostCommit_Java_PVR_Flink #82

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[github] [BEAM-3612] Make function registration idempotent

[github] [BEAM-3612] Make type registration idempotent

--
[...truncated 468.16 MB...]
[flink-akka.actor.default-dispatcher-7] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@f03ee20) (10/16) (attempt 
#0) to localhost
[MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(10/16)] INFO org.apache.flink.runtime.taskmanager.Task - MapPartition 
(MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(10/16) (e445d273c1486f56a13d96ecbedef76b) switched from RUNNING to FINISHED.
[MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(10/16)] INFO org.apache.flink.runtime.taskmanager.Task - Freeing task 
resources for MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(10/16) (e445d273c1486f56a13d96ecbedef76b).
[MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(10/16)] INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all 
FileSystem streams are closed for task MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(10/16) (e445d273c1486f56a13d96ecbedef76b) [FINISHED]
[MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(5/16)] INFO org.apache.flink.runtime.taskmanager.Task - MapPartition 
(MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(5/16) (a5fff9d6d62aeef718e8fcf4af9882a8) switched from RUNNING to FINISHED.
[MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(5/16)] INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources 
for MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(5/16) (a5fff9d6d62aeef718e8fcf4af9882a8).
[jobmanager-future-thread-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@f03ee20) (5/16) 
(67b82726c8203439684f4a20cabbd32e) switched from CREATED to SCHEDULED.
[MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(5/16)] INFO org.apache.flink.runtime.taskmanager.Task - Ensuring all 
FileSystem streams are closed for task MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(5/16) (a5fff9d6d62aeef718e8fcf4af9882a8) [FINISHED]
[MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(6/16)] INFO org.apache.flink.runtime.taskmanager.Task - MapPartition 
(MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(6/16) (76afb103ec968b23d7b3d52a0e122a2b) switched from RUNNING to FINISHED.
[DataSink (org.apache.flink.api.java.io.DiscardingOutputFormat@f03ee20) 
(12/16)] INFO org.apache.flink.runtime.taskmanager.Task - Registering task at 
network: DataSink (org.apache.flink.api.java.io.DiscardingOutputFormat@f03ee20) 
(12/16) (40cb3dc3894aed569ac2bf1ea8695bef) [DEPLOYING].
[MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(6/16)] INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources 
for MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(6/16) (76afb103ec968b23d7b3d52a0e122a2b).
[MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(4/16)] INFO org.apache.flink.runtime.taskmanager.Task - MapPartition 
(MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(4/16) (5c4fe4db4406448aaba5ee62218a5b9d) switched from RUNNING to FINISHED.
[MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(4/16)] INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources 
for MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(4/16) (5c4fe4db4406448aaba5ee62218a5b9d).
[jobmanager-future-thread-2] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - DataSink 
(org.apache.flink.api.java.io.DiscardingOutputFormat@f03ee20) (4/16) 
(b76a3d8bb0e412c7317dcb5fff59e665) switched from CREATED to SCHEDULED.
[MapPartition (MapPartition at 
PAssert$125/GroupGlobally/GroupDummyAndContents.out/beam:env:docker:v1:0) 
(6/16)] INFO org.apache.

Build failed in Jenkins: beam_PerformanceTests_XmlIOIT #910

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[ehudm] Upgrade BigQuery client from 0.25.0 to 1.6.0

[klk] [BEAM-5829] Convert tests to use DECIMAL for price field

[klk] [BEAM-5830] Use the word LANGUAGE instead of SDK on site

[robertwb] [BEAM-5791] Implement time-based pushback in the dataflow harness 
data

[katarzyna.kucharczyk] [BEAM-5758] Load tests of Python Synthetic Sources: 
GroupByKey,

[thw] [BEAM-5467] Remove createProcessWorker from docker execution path.

[tweise] Replace deprecated StateTag.StateBinder in FlinkStateInternals  (#6754)

[aaltay] [beam-5818] update docs to specify 7 days for RC bugs (#6782)

[github] [BEAM-3612] Make function registration idempotent

[github] [BEAM-3612] Make type registration idempotent

--
[...truncated 270.57 KB...]
INFO: 2018-10-23T18:13:13.606Z: Fusing consumer Write xml 
files/WriteFiles/GatherTempFileResults/Reshuffle/GroupByKey/GroupByWindow into 
Write xml files/WriteFiles/GatherTempFileResults/Reshuffle/GroupByKey/Read
Oct 23, 2018 6:13:19 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-23T18:13:13.654Z: Fusing consumer Write xml 
files/WriteFiles/GatherTempFileResults/Reshuffle/GroupByKey/Reify into Write 
xml files/WriteFiles/GatherTempFileResults/Reshuffle/Window.Into()/Window.Assign
Oct 23, 2018 6:13:19 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-23T18:13:13.701Z: Fusing consumer Write xml 
files/WriteFiles/FinalizeTempFileBundles/Finalize into Write xml 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Values/Values/Map
Oct 23, 2018 6:13:19 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-23T18:13:13.750Z: Fusing consumer Write xml 
files/WriteFiles/FinalizeTempFileBundles/Reshuffle.ViaRandomKey/Pair with 
random key into Write xml files/WriteFiles/FinalizeTempFileBundles/Finalize
Oct 23, 2018 6:13:19 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-23T18:13:13.797Z: Fusing consumer Read xml 
files/ReadAllViaFileBasedSource/Read ranges into Read xml 
files/ReadAllViaFileBasedSource/Reshuffle/Values/Values/Map
Oct 23, 2018 6:13:19 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-23T18:13:13.838Z: Fusing consumer Get file names/Values/Map 
into Write xml 
files/WriteFiles/FinalizeTempFileBundles/Reshuffle.ViaRandomKey/Values/Values/Map
Oct 23, 2018 6:13:19 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-23T18:13:13.903Z: Fusing consumer Write xml 
files/WriteFiles/FinalizeTempFileBundles/Reshuffle.ViaRandomKey/Values/Values/Map
 into Write xml 
files/WriteFiles/FinalizeTempFileBundles/Reshuffle.ViaRandomKey/Reshuffle/ExpandIterable
Oct 23, 2018 6:13:19 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-23T18:13:13.941Z: Fusing consumer Write xml 
files/WriteFiles/GatherTempFileResults/Reshuffle/GroupByKey/Write into Write 
xml files/WriteFiles/GatherTempFileResults/Reshuffle/GroupByKey/Reify
Oct 23, 2018 6:13:19 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-23T18:13:13.987Z: Fusing consumer Read xml 
files/ReadAllViaFileBasedSource/Reshuffle/Reshuffle/GroupByKey/Write into Read 
xml files/ReadAllViaFileBasedSource/Reshuffle/Reshuffle/GroupByKey/Reify
Oct 23, 2018 6:13:19 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-23T18:13:14.030Z: Fusing consumer Read xml 
files/ReadAllViaFileBasedSource/Reshuffle/Reshuffle/GroupByKey/Reify into Read 
xml 
files/ReadAllViaFileBasedSource/Reshuffle/Reshuffle/Window.Into()/Window.Assign
Oct 23, 2018 6:13:19 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-23T18:13:14.079Z: Fusing consumer Read xml 
files/ReadAllViaFileBasedSource/Reshuffle/Values/Values/Map into Read xml 
files/ReadAllViaFileBasedSource/Reshuffle/Reshuffle/ExpandIterable
Oct 23, 2018 6:13:19 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-23T18:13:14.125Z: Fusing consumer Write xml 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Reshuffle/ExpandIterable
 into Write xml 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Reshuffle/GroupByKey/GroupByWindow
Oct 23, 2018 6:13:19 PM 
org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler process
INFO: 2018-10-23T18:13:14.169Z: Fusing consumer Write xml 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Reshuffle/GroupByKey/GroupByWindow
 into Write xml 
files/WriteFiles/GatherTempFileResults/Reshuffle.ViaRandomKey/Reshuffle/GroupByKey/Read

Build failed in Jenkins: beam_PostCommit_Python_Verify #6354

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[klk] [BEAM-5829] Convert tests to use DECIMAL for price field

[tweise] Replace deprecated StateTag.StateBinder in FlinkStateInternals  (#6754)

[aaltay] [beam-5818] update docs to specify 7 days for RC bugs (#6782)

--
[...truncated 1.27 MB...]
  self.assertNotEquals(data, self._fake_hdfs.files[path].getvalue())
ok
test_delete_dir (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_delete_error (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_delete_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_exists (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_directory 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_directory_trailing_slash 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
test_match_file_empty 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file_error 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_match_file_with_limits 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
:276:
 DeprecationWarning: Please use assertEqual instead.
  self.assertEquals(len(files), 1)
ok
test_match_file_with_zero_limit 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_mkdirs (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_mkdirs_failed (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_open (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_open_bad_path (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_rename_directory 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_rename_file (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) 
... ok
test_rename_file_error 
(apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... SKIP: This test 
still needs to be fixed on Python 3TODO: BEAM-5627
test_scheme (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_size (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_url_join (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... ok
test_url_split (apache_beam.io.hadoopfilesystem_test.HadoopFileSystemTest) ... 
ok
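The DeprecationWarnings in the test run above come from the legacy `assertEquals`/`assertNotEquals` aliases; the supported spellings are `assertEqual`/`assertNotEqual`. A small self-contained illustration (the test name and match result are hypothetical, not the Beam test itself):

```python
import unittest

class MatchLimitTest(unittest.TestCase):
    def test_match_file_with_limits(self):
        files = ["part-00000"]  # hypothetical match result
        # assertEquals still passes but emits DeprecationWarning;
        # assertEqual is the canonical name.
        self.assertEqual(len(files), 1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(MatchLimitTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Replacing the deprecated aliases is a mechanical rename and silences the warnings without changing test behavior.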
test_delete_table_fails_dataset_not_exist 
(apache_beam.io.gcp.tests.utils_test.UtilsTest) ... SKIP: Bigquery dependencies 
are not installed.
test_delete_table_fails_service_error 
(apache_beam.io.gcp.tests.utils_test.UtilsTest) ... SKIP: Bigquery dependencies 
are not installed.
test_delete_table_fails_table_not_exist 
(apache_beam.io.gcp.tests.utils_test.UtilsTest) ... SKIP: Bigquery dependencies 
are not installed.
test_delete_table_succeeds (apache_beam.io.gcp.tests.utils_test.UtilsTest) ... 
SKIP: Bigquery dependencies are not installed.
test_big_query_legacy_sql 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_big_query_new_types 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
test_big_query_standard_sql 
(apache_beam.io.gcp.big_query_query_to_table_it_test.BigQueryQueryToTableIT) 
... SKIP: IT is skipped because --test-pipeline-options is not specified
get_test_rows (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: 
GCP dependencies are not installed
test_read_from_query (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... 
SKIP: GCP dependencies are not installed
test_read_from_query_sql_format 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_query_unflatten_records 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table (apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... 
SKIP: GCP dependencies are not installed
test_read_from_table_and_job_complete_retry 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table_and_multiple_pages 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_read_from_table_as_tablerows 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_table_schema_without_project 
(apache_beam.io.gcp.bigquery_test.TestBigQueryReader) ... SKIP: GCP 
dependencies are not installed
test_using_both_query_and_table_fails 
(apache_beam.io.gcp.bigquery_test.T

Jenkins build is back to normal : beam_PostCommit_Python_VR_Flink #473

2018-10-23 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : beam_PostCommit_Java_ValidatesRunner_Spark_Gradle #1935

2018-10-23 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Python_VR_Flink #472

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[github] [BEAM-3612] Make function registration idempotent

[github] [BEAM-3612] Make type registration idempotent

--
[...truncated 4.30 MB...]
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobManagerRunner - JobManager runner for job 
test_windowing_1540317115.36 (461ea00fe1e996fd7473ceecaa908c02) was granted 
leadership with session id f13f8c69-26f5-414e-bbc0-094fbc7f60a4 at 
akka://flink/user/jobmanager_39.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Starting execution of job 
test_windowing_1540317115.36 (461ea00fe1e996fd7473ceecaa908c02)
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job 
test_windowing_1540317115.36 (461ea00fe1e996fd7473ceecaa908c02) switched from 
state CREATED to RUNNING.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 14Create/Impulse.None/beam:env:docker:v1:0 -> ToKeyedWorkItem (1/1) 
(68ad1b23dec74b883e3636fd2394df37) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 26assert_that/Create/Impulse.None/beam:env:docker:v1:0 (1/1) 
(9ed85b7096b4a77001c8b157e3be246d) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - 
Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey -> 
72Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey/GroupByWindow.None/beam:env:docker:v1:0
 -> ToKeyedWorkItem (1/1) (f7b825c3261e95925ed4587dfd868559) switched from 
CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - GroupByKey -> 
24GroupByKey/GroupByWindow.None/beam:env:docker:v1:0 (1/1) 
(2e0a47738c16e47b6be3f60e168937ea) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - ToKeyedWorkItem (1/1) 
(7ddd7224eae5ababdc1a9b8d4ffee622) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool - Cannot serve slot 
request, no ResourceManager connected. Adding as pending request 
[SlotRequestId{4c82c9556a5d3ca6ca02ceda153a08ee}]
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - 
assert_that/Group/GroupByKey -> 
42assert_that/Group/GroupByKey/GroupByWindow.None/beam:env:docker:v1:0 (1/1) 
(bdaeb79121944b9a1748cd3b0467f787) switched from CREATED to SCHEDULED.
[jobmanager-future-thread-1] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader 
akka://flink/user/jobmanager_39 , session=f13f8c69-26f5-414e-bbc0-094fbc7f60a4
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Connecting to ResourceManager 
akka://flink/user/resourcemanager_755d967a-d20e-4cbf-a2b6-eb0b780dbb3a(891fc9f7490ae3fdc679fb9070c14588)
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Resolved ResourceManager 
address, beginning registration
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Registration at ResourceManager 
attempt 1 (timeout=100ms)
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - 
Registering job manager 
bbc0094fbc7f60a4f13f8c6926f5414e@akka://flink/user/jobmanager_39 for job 
461ea00fe1e996fd7473ceecaa908c02.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Registered 
job manager bbc0094fbc7f60a4f13f8c6926f5414e@akka://flink/user/jobmanager_39 
for job 461ea00fe1e996fd7473ceecaa908c02.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - JobManager successfully 
registered at ResourceManager, leader id: 891fc9f7490ae3fdc679fb9070c14588.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool - Requesting new slot 
[SlotRequestId{4c82c9556a5d3ca6ca02ceda153a08ee}] and profile 
ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, directMemoryInMB=0, 
nativeMemoryInMB=0, networkMemoryInMB=0} from resource manager.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.resourcemanager.StandaloneResourceManager - Request 
slot with profile ResourceProfile{cpuCores=-1.0, heapMemoryInMB=-1, 
directMemoryInMB=0, nativeMemoryInMB=0, networkMemoryInMB=0} for job 
461ea00fe1e996fd7473ceecaa908c02 with allocation id 
AllocationID{be1bf17ee7b9b6d8bb

Build failed in Jenkins: beam_PostCommit_Java_ValidatesRunner_Spark_Gradle #1934

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[ehudm] Upgrade BigQuery client from 0.25.0 to 1.6.0

--
[...truncated 30.08 MB...]
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 9980
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10814
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10186
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10360
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10498
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10781
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10876
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10180
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 11053
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 11043
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10651
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10221
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10387
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 9910
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 9959
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10334
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10399
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10115
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10339
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10750
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10139
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10545
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10130
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10095
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10332
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10528
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10182
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10945
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10942
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10541
[dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - 
ShuffleMapStage 568 (mapToPair at GroupCombineFunctions.java:56) finished in 
0.040 s
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10338
[dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - 
looking for newly runnable stages
[dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - 
running: Set()
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10487
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10034
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10031
[dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - 
waiting: Set(ResultStage 575)
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10132
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10748
[dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - 
failed: Set()
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10052
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10782
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 9942
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10585
[Spark Context Cleaner] INFO org.apache.spark.ContextCleaner - Cleaned 
accumulator 10534
[dispatcher-event-loop-1] INFO org.apache.spark.storage.BlockManagerInfo - 
Removed broadcast_90_piece0 on localhost:35145 in memory (size: 34.1 KB, free: 
13.5 GB)
[dag-scheduler-event-loop] INFO org.apache.spark.scheduler.DAGScheduler - 
Submitting ResultStage 575 (MapPartitionsRDD[2844] at map at 
Translat

Jenkins build is back to normal : beam_PostCommit_Python_VR_Flink #471

2018-10-23 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : beam_PostCommit_Java_ValidatesRunner_Samza_Gradle #1025

2018-10-23 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Java_ValidatesRunner_Samza_Gradle #1024

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[ehudm] Upgrade BigQuery client from 0.25.0 to 1.6.0

--
[...truncated 49.12 MB...]
INFO: Starting stores in task instance Partition 0
Oct 23, 2018 5:34:51 PM org.apache.samza.util.Logging$class info
INFO: Got non logged storage partition directory as 
/tmp/beam-samza-test/beamStore/Partition_0
Oct 23, 2018 5:34:51 PM org.apache.samza.util.Logging$class info
INFO: Got logged storage partition directory as 
/tmp/beam-samza-test/beamStore/Partition_0
Oct 23, 2018 5:34:51 PM org.apache.samza.util.Logging$class info
INFO: Deleting logged storage partition directory 
/tmp/beam-samza-test/beamStore/Partition_0.
Oct 23, 2018 5:34:51 PM org.apache.samza.util.Logging$class info
INFO: Using non logged storage partition directory: 
/tmp/beam-samza-test/beamStore/Partition_0 for store: beamStore.
Oct 23, 2018 5:34:51 PM org.apache.samza.util.Logging$class info
INFO: Validating change log streams: Map()
Oct 23, 2018 5:34:51 PM org.apache.samza.util.Logging$class info
INFO: Got change log stream metadata: Map()
Oct 23, 2018 5:34:51 PM org.apache.samza.util.Logging$class info
INFO: Assigning oldest change log offsets for taskName Partition 0: Map()
Oct 23, 2018 5:34:51 PM org.apache.samza.util.Logging$class info
INFO: Starting table manager in task instance Partition 0
Oct 23, 2018 5:34:51 PM org.apache.samza.util.Logging$class info
INFO: Starting host statistics monitor
Oct 23, 2018 5:34:51 PM org.apache.samza.util.Logging$class info
INFO: Registering task instances with producers.
Oct 23, 2018 5:34:51 PM org.apache.samza.util.Logging$class info
INFO: Starting producer multiplexer.
Oct 23, 2018 5:34:51 PM org.apache.samza.util.Logging$class info
INFO: Initializing stream tasks.
Oct 23, 2018 5:34:51 PM org.apache.samza.operators.StreamGraphImpl 
getKVSerdes
INFO: Using NoOpSerde as the key serde for stream 
0-split0_out__PCollection_. Keys will not be (de)serialized
Oct 23, 2018 5:34:51 PM org.apache.samza.operators.StreamGraphImpl 
getKVSerdes
INFO: Using NoOpSerde as the value serde for stream 
0-split0_out__PCollection_. Values will not be (de)serialized
Oct 23, 2018 5:34:51 PM org.apache.samza.operators.StreamGraphImpl 
getKVSerdes
INFO: Using NoOpSerde as the key serde for stream 
1-split1_out__PCollection_. Keys will not be (de)serialized
Oct 23, 2018 5:34:51 PM org.apache.samza.operators.StreamGraphImpl 
getKVSerdes
INFO: Using NoOpSerde as the value serde for stream 
1-split1_out__PCollection_. Values will not be (de)serialized
Oct 23, 2018 5:34:51 PM org.apache.samza.operators.StreamGraphImpl 
getKVSerdes
INFO: Using NoOpSerde as the key serde for stream 
2-split2_out__PCollection_. Keys will not be (de)serialized
Oct 23, 2018 5:34:51 PM org.apache.samza.operators.StreamGraphImpl 
getKVSerdes
INFO: Using NoOpSerde as the value serde for stream 
2-split2_out__PCollection_. Values will not be (de)serialized
Oct 23, 2018 5:34:51 PM org.apache.samza.operators.StreamGraphImpl 
getKVSerdes
INFO: Using NoOpSerde as the key serde for stream 
3-split3_out__PCollection_. Keys will not be (de)serialized
Oct 23, 2018 5:34:51 PM org.apache.samza.operators.StreamGraphImpl 
getKVSerdes
INFO: Using NoOpSerde as the value serde for stream 
3-split3_out__PCollection_. Values will not be (de)serialized
Oct 23, 2018 5:34:51 PM org.apache.samza.operators.StreamGraphImpl 
getKVSerdes
INFO: Using NoOpSerde as the key serde for stream 
4-split4_out__PCollection_. Keys will not be (de)serialized
Oct 23, 2018 5:34:51 PM org.apache.samza.operators.StreamGraphImpl 
getKVSerdes
INFO: Using NoOpSerde as the value serde for stream 
4-split4_out__PCollection_. Values will not be (de)serialized
Oct 23, 2018 5:34:51 PM org.apache.samza.operators.StreamGraphImpl 
getKVSerdes
INFO: Using NoOpSerde as the key serde for stream 
5-split5_out__PCollection_. Keys will not be (de)serialized
Oct 23, 2018 5:34:51 PM org.apache.samza.operators.StreamGraphImpl 
getKVSerdes
INFO: Using NoOpSerde as the value serde for stream 
5-split5_out__PCollection_. Values will not be (de)serialized
Oct 23, 2018 5:34:51 PM org.apache.samza.operators.StreamGraphImpl 
getKVSerdes
INFO: Using NoOpSerde as the key serde for stream 
6-split6_out__PCollection_. Keys will not be (de)serialized
Oct 23, 2018 5:34:51 PM org.apache.samza.operators.StreamGraphImpl 
getKVSerdes
INFO: Using NoOpSerde as the value serde for stream 
6-split6_out__PCollection_. Values will not be (de)serialized
Oct 23, 2018 5:34:51 PM org.apache.samza.operators.StreamGraphImpl 
getKVSerdes
INFO: Using NoOpSerde as the key serde for stream 
7-split7_out__PCollection_. Keys will not be (de)serialized
Oct 23, 2018 5:3

Build failed in Jenkins: beam_PostCommit_Python_VR_Flink #470

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[klk] [BEAM-5830] Use the word LANGUAGE instead of SDK on site

--
[...truncated 4.30 MB...]
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint - 
http://localhost:43125 was granted leadership with 
leaderSessionID=d0e9575c-fc86-465e-9e16-6abb2490588e
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader http://localhost:43125 , 
session=d0e9575c-fc86-465e-9e16-6abb2490588e
[flink-runner-job-server] INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService 
- Starting RPC endpoint for 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher at 
akka://flink/user/dispatcherdad39ced-491e-46b3-8ee3-97a214eaa3b9 .
[flink-runner-job-server] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher@becdb3b @ 
akka://flink/user/dispatcherdad39ced-491e-46b3-8ee3-97a214eaa3b9
[flink-runner-job-server] INFO org.apache.flink.runtime.minicluster.MiniCluster 
- Flink Mini Cluster started successfully
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Dispatcher 
akka://flink/user/dispatcherdad39ced-491e-46b3-8ee3-97a214eaa3b9 was granted 
leadership with fencing token b79b29db-2353-4878-8aed-14367945f372
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Recovering all 
persisted jobs.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader 
akka://flink/user/dispatcherdad39ced-491e-46b3-8ee3-97a214eaa3b9 , 
session=b79b29db-2353-4878-8aed-14367945f372
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Submitting job 
8bd4864fd8662f76249b0ab263946508 (test_windowing_1540313761.14).
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for 
org.apache.flink.runtime.jobmaster.JobMaster at akka://flink/user/jobmanager_39 
.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Initializing job 
test_windowing_1540313761.14 (8bd4864fd8662f76249b0ab263946508).
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Using restart strategy 
NoRestartStrategy for test_windowing_1540313761.14 
(8bd4864fd8662f76249b0ab263946508).
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool at 
akka://flink/user/360179da-ece1-4157-8301-a28894fe1963 .
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job recovers via 
failover strategy: full graph restart
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Running initialization on master 
for job test_windowing_1540313761.14 (8bd4864fd8662f76249b0ab263946508).
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Successfully ran initialization 
on master in 0 ms.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - No state backend has been 
configured, using default (Memory / JobManager) MemoryStateBackend (data in 
heap memory / checkpoints to JobManager) (checkpoints: 'null', savepoints: 
'null', asynchronous: TRUE, maxStateSize: 5242880)
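The "No state backend has been configured" message above means the job fell back to Flink's heap-based MemoryStateBackend. As a hedged illustration (the key names follow Flink's documented configuration; the directory is hypothetical), a durable backend could instead be selected in flink-conf.yaml:

```yaml
# Use a filesystem state backend instead of the in-memory default.
state.backend: filesystem
# Hypothetical checkpoint directory; any durable URI would do here.
state.checkpoints.dir: file:///tmp/flink-checkpoints
```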
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.jobmaster.JobManagerRunner@725856c4 @ 
akka://flink/user/jobmanager_39
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobManagerRunner - JobManager runner for job 
test_windowing_1540313761.14 (8bd4864fd8662f76249b0ab263946508) was granted 
leadership with session id f991a7c2-7af1-4fc7-803f-fb9eef615b6c at 
akka://flink/user/jobmanager_39.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Starting execution of job 
test_windowing_1540313761.14 (8bd4864fd8662f76249b0ab263946508)
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job 
test_windowing_1540313761.14 (8bd4864fd8662f76249b0ab263946508) switched from 
state CREATED to RUNNING.
[flink-akka.actor.default-dispatcher-2] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 14Create/Impulse.None/beam:env:d

Build failed in Jenkins: beam_PostCommit_Java_GradleBuild #1741

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[mxm] [BEAM-5793] Disable pylint 'self-argument' check

--
[...truncated 50.59 MB...]
at io.grpc.Status.asRuntimeException(Status.java:526)
at 
io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:468)
at 
io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at 
io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at 
io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at 
com.google.cloud.spanner.spi.v1.SpannerErrorInterceptor$1$1.onClose(SpannerErrorInterceptor.java:100)
at 
io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at 
io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at 
io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at 
com.google.cloud.spanner.spi.v1.WatchdogInterceptor$MonitoredCall$1.onClose(WatchdogInterceptor.java:190)
at 
io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at 
io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at 
io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at 
io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:684)
at 
io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at 
io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at 
io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at 
io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:403)
at 
io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:459)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:63)
at 
io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:546)
at 
io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$600(ClientCallImpl.java:467)
at 
io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:584)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at 
io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
... 3 more

Oct 23, 2018 3:04:54 PM 
org.apache.beam.sdk.io.gcp.spanner.SpannerIO$WriteToSpannerFn processElement
WARNING: Failed to submit the mutation group
com.google.cloud.spanner.SpannerException: FAILED_PRECONDITION: 
io.grpc.StatusRuntimeException: FAILED_PRECONDITION: Value must not be NULL in 
table users.
at 
com.google.cloud.spanner.SpannerExceptionFactory.newSpannerExceptionPreformatted(SpannerExceptionFactory.java:119)
at 
com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:43)
at 
com.google.cloud.spanner.SpannerExceptionFactory.newSpannerException(SpannerExceptionFactory.java:80)
at 
com.google.cloud.spanner.spi.v1.GrpcSpannerRpc.get(GrpcSpannerRpc.java:456)
at 
com.google.cloud.spanner.spi.v1.GrpcSpannerRpc.commit(GrpcSpannerRpc.java:404)
at 
com.google.cloud.spanner.SpannerImpl$SessionImpl$2.call(SpannerImpl.java:797)
at 
com.google.cloud.spanner.SpannerImpl$SessionImpl$2.call(SpannerImpl.java:794)
at 
com.google.cloud.spanner.SpannerImpl.runWithRetries(SpannerImpl.java:227)
at 
com.google.cloud.spanner.SpannerImpl$SessionImpl.writeAtLeastOnce(SpannerImpl.java:793)
at 
com.google.cloud.spanner.SessionPool$PooledSession.writeAtLeastOnce(SessionPool.java:319)
at 
com.google.cloud.spanner.DatabaseClientImpl.writeAtLeastOnce(DatabaseClientImpl.java:60)
at 
org.apache.beam.sdk.io.gcp.spanner.SpannerIO$WriteToSpannerFn.processElement(SpannerIO.java:1108)
at 
org.apache.beam.sdk.io.gcp.spanner.SpannerIO$WriteToSpannerFn$DoFnInvoker.invokeProcessElement(Unknown
 Source)
at 
org.apache.beam.repackaged.beam_runners_direct_java.runners.core.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:275)
at 
org.apache.beam.repackaged.beam_runners_direct_java.runners.core.SimpleDoFnRunner.processElement(SimpleDoFnRunner.java:240)
at 
org.apache.beam.repackaged.beam_runners_direct_java.runners.core.SimplePushbackSideInputDoFnRunner.processElementInReadyWindows(SimplePushbackSideInputDoFnRunner.java:78)
at 
org.apache.beam.runners.

Jenkins build is back to normal : beam_PostCommit_Python_VR_Flink #466

2018-10-23 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PostCommit_Python_VR_Flink #465

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[thw] [BEAM-5467] Remove createProcessWorker from docker execution path.

--
[...truncated 4.29 MB...]
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader http://localhost:43369 , 
session=5eaef88b-031a-4b72-a048-c7dc5fcb54e2
[flink-runner-job-server] INFO org.apache.flink.runtime.rpc.akka.AkkaRpcService 
- Starting RPC endpoint for 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher at 
akka://flink/user/dispatcherd4182979-4e0a-4f31-a713-052a4e5cf083 .
[flink-runner-job-server] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher@5dd194d @ 
akka://flink/user/dispatcherd4182979-4e0a-4f31-a713-052a4e5cf083
[flink-runner-job-server] INFO org.apache.flink.runtime.minicluster.MiniCluster 
- Flink Mini Cluster started successfully
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Dispatcher 
akka://flink/user/dispatcherd4182979-4e0a-4f31-a713-052a4e5cf083 was granted 
leadership with fencing token fd907930-ebe9-4cb7-a6a5-6b87ed70ca04
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Recovering all 
persisted jobs.
[flink-akka.actor.default-dispatcher-5] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Received confirmation of leadership for leader 
akka://flink/user/dispatcherd4182979-4e0a-4f31-a713-052a4e5cf083 , 
session=fd907930-ebe9-4cb7-a6a5-6b87ed70ca04
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.dispatcher.StandaloneDispatcher - Submitting job 
1ac60580950256f2559e27cce3de7f1b (test_windowing_1540305936.36).
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for 
org.apache.flink.runtime.jobmaster.JobMaster at akka://flink/user/jobmanager_39 
.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Initializing job 
test_windowing_1540305936.36 (1ac60580950256f2559e27cce3de7f1b).
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Using restart strategy 
NoRestartStrategy for test_windowing_1540305936.36 
(1ac60580950256f2559e27cce3de7f1b).
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.rpc.akka.AkkaRpcService - Starting RPC endpoint for 
org.apache.flink.runtime.jobmaster.slotpool.SlotPool at 
akka://flink/user/a3037403-dd6e-42ca-8270-a35973a10444 .
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job recovers via 
failover strategy: full graph restart
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Running initialization on master 
for job test_windowing_1540305936.36 (1ac60580950256f2559e27cce3de7f1b).
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Successfully ran initialization 
on master in 0 ms.
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - No state backend has been 
configured, using default (Memory / JobManager) MemoryStateBackend (data in 
heap memory / checkpoints to JobManager) (checkpoints: 'null', savepoints: 
'null', asynchronous: TRUE, maxStateSize: 5242880)
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.highavailability.nonha.embedded.EmbeddedLeaderService 
- Proposing leadership to contender 
org.apache.flink.runtime.jobmaster.JobManagerRunner@2f7e2b75 @ 
akka://flink/user/jobmanager_39
[flink-akka.actor.default-dispatcher-4] INFO 
org.apache.flink.runtime.jobmaster.JobManagerRunner - JobManager runner for job 
test_windowing_1540305936.36 (1ac60580950256f2559e27cce3de7f1b) was granted 
leadership with session id 37c68566-1ec6-491a-8702-1de4f811e3a8 at 
akka://flink/user/jobmanager_39.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.jobmaster.JobMaster - Starting execution of job 
test_windowing_1540305936.36 (1ac60580950256f2559e27cce3de7f1b)
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job 
test_windowing_1540305936.36 (1ac60580950256f2559e27cce3de7f1b) switched from 
state CREATED to RUNNING.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.ExecutionGraph - Source: Custom Source 
-> 14Create/Impulse.None/beam:env:docker:v1:0 -> ToKeyedWorkItem (1/1) 
(391adb9e2af91798f5534bd19b10d02d) switched from CREATED to SCHEDULED.
[flink-akka.actor.default-dispatcher-3] INFO 
org.apache.flink.runtime.executiongraph.Execution

Jenkins build is back to normal : beam_PostCommit_Py_VR_Dataflow #1471

2018-10-23 Thread Apache Jenkins Server
See 




Build failed in Jenkins: beam_PreCommit_Java_Cron #497

2018-10-23 Thread Apache Jenkins Server
See 


Changes:

[mxm] [BEAM-5793] Disable pylint 'self-argument' check

--
[...truncated 45.56 MB...]
:beam-sdks-java-io-hadoop-common:build (Thread[Task worker for ':' Thread 
10,5,main]) started.

> Task :beam-sdks-java-io-hadoop-common:build
Skipping task ':beam-sdks-java-io-hadoop-common:build' as it has no actions.
:beam-sdks-java-io-hadoop-common:build (Thread[Task worker for ':' Thread 
10,5,main]) completed. Took 0.0 secs.
:beam-sdks-java-io-hadoop-common:buildDependents (Thread[Task worker for ':' 
Thread 10,5,main]) started.

> Task :beam-sdks-java-io-hadoop-common:buildDependents
Caching disabled for task ':beam-sdks-java-io-hadoop-common:buildDependents': 
Caching has not been enabled for the task
Task ':beam-sdks-java-io-hadoop-common:buildDependents' is not up-to-date 
because:
  Task has not declared any outputs despite executing actions.
:beam-sdks-java-io-hadoop-common:buildDependents (Thread[Task worker for ':' 
Thread 10,5,main]) completed. Took 0.0 secs.
:beam-vendor-sdks-java-extensions-protobuf:jar (Thread[Task worker for ':' 
Thread 10,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:jar
Build cache key for task ':beam-vendor-sdks-java-extensions-protobuf:jar' is 
5041dc99151d9d3ea85904a15452161d
Caching disabled for task ':beam-vendor-sdks-java-extensions-protobuf:jar': 
Caching has not been enabled for the task
Task ':beam-vendor-sdks-java-extensions-protobuf:jar' is not up-to-date because:
  No history is available.
:beam-vendor-sdks-java-extensions-protobuf:jar (Thread[Task worker for ':' 
Thread 10,5,main]) completed. Took 0.009 secs.
:beam-vendor-sdks-java-extensions-protobuf:compileTestJava (Thread[Task worker 
for ':' Thread 10,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:compileTestJava NO-SOURCE
file or directory 
'
 not found
Skipping task ':beam-vendor-sdks-java-extensions-protobuf:compileTestJava' as 
it has no source files and no previous output files.
:beam-vendor-sdks-java-extensions-protobuf:compileTestJava (Thread[Task worker 
for ':' Thread 10,5,main]) completed. Took 0.001 secs.
:beam-vendor-sdks-java-extensions-protobuf:processTestResources (Thread[Task 
worker for ':' Thread 10,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:processTestResources NO-SOURCE
file or directory 
'
 not found
Skipping task ':beam-vendor-sdks-java-extensions-protobuf:processTestResources' 
as it has no source files and no previous output files.
:beam-vendor-sdks-java-extensions-protobuf:processTestResources (Thread[Task 
worker for ':' Thread 10,5,main]) completed. Took 0.0 secs.
:beam-vendor-sdks-java-extensions-protobuf:testClasses (Thread[Task worker for 
':' Thread 10,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:testClasses UP-TO-DATE
Skipping task ':beam-vendor-sdks-java-extensions-protobuf:testClasses' as it 
has no actions.
:beam-vendor-sdks-java-extensions-protobuf:testClasses (Thread[Task worker for 
':' Thread 10,5,main]) completed. Took 0.0 secs.
:beam-vendor-sdks-java-extensions-protobuf:packageTests (Thread[Task worker for 
':' Thread 10,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:packageTests
Build cache key for task 
':beam-vendor-sdks-java-extensions-protobuf:packageTests' is 
2b50c5905885674a697a325a077c7fa7
Caching disabled for task 
':beam-vendor-sdks-java-extensions-protobuf:packageTests': Caching has not been 
enabled for the task
Task ':beam-vendor-sdks-java-extensions-protobuf:packageTests' is not 
up-to-date because:
  No history is available.
:beam-vendor-sdks-java-extensions-protobuf:packageTests (Thread[Task worker for 
':' Thread 10,5,main]) completed. Took 0.005 secs.
:beam-vendor-sdks-java-extensions-protobuf:assemble (Thread[Task worker for ':' 
Thread 10,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:assemble
Skipping task ':beam-vendor-sdks-java-extensions-protobuf:assemble' as it has 
no actions.
:beam-vendor-sdks-java-extensions-protobuf:assemble (Thread[Task worker for ':' 
Thread 10,5,main]) completed. Took 0.0 secs.
:beam-vendor-sdks-java-extensions-protobuf:checkstyleMain (Thread[Task worker 
for ':' Thread 10,5,main]) started.

> Task :beam-vendor-sdks-java-extensions-protobuf:checkstyleMain NO-SOURCE
file or directory 
'
 not found
Skipping task ':beam-vendor-sdks-java-extensions-protobuf:checkstyleMain' as it 
has no source files and no previous output files.
:beam-vendor-sdks-java-extensions-protobuf:checkstyleMain (Thread[Task worker 

Build failed in Jenkins: beam_PostCommit_Py_VR_Dataflow #1470

2018-10-23 Thread Apache Jenkins Server
See 


--
[...truncated 59.07 KB...]
# Check that the script is running in a known directory.
if [[ $PWD != *sdks/python* ]]; then
  echo 'Unable to locate Apache Beam Python SDK root directory'
  exit 1
fi

# Go to the Apache Beam Python SDK root
if [[ "*sdks/python" != $PWD ]]; then
  cd $(pwd | sed 's/sdks\/python.*/sdks\/python/')
fi
pwd | sed 's/sdks\/python.*/sdks\/python/'
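The `sed` substitution traced above trims everything after `sdks/python`, recovering the SDK root from any subdirectory beneath it. A standalone sketch of the same idiom, using a hypothetical workspace path (the real one is elided in the log):

```shell
# Hypothetical Jenkins workspace path; the actual path is elided above.
path="/home/jenkins/workspace/beam/src/sdks/python/apache_beam/io"
# Strip everything after "sdks/python" to recover the SDK root.
echo "$path" | sed 's/sdks\/python.*/sdks\/python/'
# -> /home/jenkins/workspace/beam/src/sdks/python
```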

RUNNER=${3:-TestDataflowRunner}

# Where to store integration test outputs.
GCS_LOCATION=${4:-gs://temp-storage-for-end-to-end-tests}

PROJECT=${5:-apache-beam-testing}

# Create a tarball
python setup.py -q sdist
:398:
 UserWarning: Normalizing '2.9.0.dev' to '2.9.0.dev0'
  normalized_version,
warning: no files found matching 'README.md'
warning: no files found matching 'NOTICE'
warning: no files found matching 'LICENSE'
warning: cmd: standard file not found: should have one of README, README.rst, 
README.txt, README.md


SDK_LOCATION=$(find dist/apache-beam-*.tar.gz)
find dist/apache-beam-*.tar.gz

# Install test dependencies for ValidatesRunner tests.
echo "pyhamcrest" > postcommit_requirements.txt
echo "mock" >> postcommit_requirements.txt

# Options used to run testing pipeline on Cloud Dataflow Service. Also used for
# running on DirectRunner (some options ignored).
PIPELINE_OPTIONS=(
  "--runner=$RUNNER"
  "--project=$PROJECT"
  "--staging_location=$GCS_LOCATION/staging-it"
  "--temp_location=$GCS_LOCATION/temp-it"
  "--output=$GCS_LOCATION/py-it-cloud/output"
  "--sdk_location=$SDK_LOCATION"
  "--requirements_file=postcommit_requirements.txt"
  "--num_workers=1"
  "--sleep_secs=20"
)

# Add streaming flag if specified.
if [[ "$2" = "streaming" ]]; then
  echo ">>> Set test pipeline to streaming"
  PIPELINE_OPTIONS+=("--streaming")
else
  echo ">>> Set test pipeline to batch"
fi
>>> Set test pipeline to streaming

TESTS=""
if [[ "$3" = "TestDirectRunner" ]]; then
  if [[ "$2" = "streaming" ]]; then
TESTS="--tests=\
apache_beam.examples.wordcount_it_test:WordCountIT.test_wordcount_it,\
apache_beam.io.gcp.pubsub_integration_test:PubSubIntegrationTest"
  else
TESTS="--tests=\
apache_beam.examples.wordcount_it_test:WordCountIT.test_wordcount_it,\
apache_beam.io.gcp.pubsub_integration_test:PubSubIntegrationTest,\
apache_beam.io.gcp.big_query_query_to_table_it_test:BigQueryQueryToTableIT"
  fi
fi

###
# Run tests and validate that jobs finish successfully.

JOINED_OPTS=$(IFS=" " ; echo "${PIPELINE_OPTIONS[*]}")
IFS=" " ; echo "${PIPELINE_OPTIONS[*]}"
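The `JOINED_OPTS` assignment above uses a subshell with a temporary `IFS` to flatten the options array into a single space-separated string. A standalone sketch of the same idiom, with hypothetical option values:

```shell
# Join a bash array into one space-separated string, as done for PIPELINE_OPTIONS.
OPTS=("--runner=TestDataflowRunner" "--num_workers=1")
# "${OPTS[*]}" expands the array joined by the first character of IFS.
JOINED=$(IFS=" "; echo "${OPTS[*]}")
echo "$JOINED"
# -> --runner=TestDataflowRunner --num_workers=1
```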

echo ">>> RUNNING $RUNNER $1 tests"
>>> RUNNING TestDataflowRunner ValidatesRunner,!sickbay-streaming tests
python setup.py nosetests \
  --attr $1 \
  --nologcapture \
  --processes=8 \
  --process-timeout=3000 \
  --test-pipeline-options="$JOINED_OPTS" \
  $TESTS
:398:
 UserWarning: Normalizing '2.9.0.dev' to '2.9.0.dev0'
  normalized_version,
running nosetests
running egg_info
writing requirements to apache_beam.egg-info/requires.txt
writing apache_beam.egg-info/PKG-INFO
writing top-level names to apache_beam.egg-info/top_level.txt
writing dependency_links to apache_beam.egg-info/dependency_links.txt
writing entry points to apache_beam.egg-info/entry_points.txt
reading manifest file 'apache_beam.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'README.md'
warning: no files found matching 'NOTICE'
warning: no files found matching 'LICENSE'
writing manifest file 'apache_beam.egg-info/SOURCES.txt'
WARNING:root:snappy is not installed; some tests will be skipped.
WARNING:root:Tensorflow is not installed, so skipping some tests.
:808:
 DeprecationWarning: options is deprecated since First stable release. 
References to .options will not be supported
  options = pbegin.pipeline.options.view_as(DebugOptions)
:808:
 DeprecationWarning: options is deprecated since First stable release. 
References to .options will not be supported
  options = pbegin.pipeline.options.view_as(DebugOptions)
:808:
 DeprecationWarning: options is deprecated since First stable release. 
References to .options will not be supported
  options = pbegin.pipeline.options.view_as(DebugOptions)

