[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-14 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r379543557
 
 

 ##
 File path: tools/azure-pipelines/jobs-template.yml
 ##
 @@ -0,0 +1,137 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+parameters:
+  test_pool_definition: # defines the hardware pool for compilation and unit test execution
+  e2e_pool_definion: # defines the hardware pool for end-to-end test execution
+  stage_name: # defines a unique identifier for all jobs in a stage (in case the jobs are added multiple times to a stage)
+  environment: # defines environment variables for downstream scripts
+
+jobs:
+- job: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
+  pool: ${{parameters.test_pool_definition}}
+  container: flink-build-container
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
+  workspace:
+    clean: all # this cleans the entire workspace directory before running a new job.
+    # It is necessary because the custom build machines are reused for tests.
+    # See also https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml#workspace
+
+  steps:
+  # The Cache task persists the .m2 directory between builds, so that we do
+  # not have to re-download all dependencies from Maven Central for each
+  # build. The hope is that downloading the cache is faster than downloading
+  # all dependencies individually.
+  # In this configuration, we use a hash over all committed (not generated) .pom files
+  # as the key for the build cache (CACHE_KEY). If we have a cache miss on the hash
+  # (usually because a pom file has changed), we'll fall back to a key without
+  # the pom files (CACHE_FALLBACK_KEY).
+  # Official documentation of the Cache task: https://docs.microsoft.com/en-us/azure/devops/pipelines/caching/?view=azure-devops
+  - task: Cache@2
+    inputs:
+      key: $(CACHE_KEY)
+      restoreKeys: $(CACHE_FALLBACK_KEY)
+      path: $(MAVEN_CACHE_FOLDER)
+    continueOnError: true # continue the build even if the cache fails.
+    displayName: Cache Maven local repo
+
+  # Compile
+  - script: STAGE=compile ${{parameters.environment}} ./tools/azure_controller.sh compile
+    displayName: Build
+
+  # upload artifacts for next stage
+  - task: PublishPipelineArtifact@1
+    inputs:
+      path: $(CACHE_FLINK_DIR)
+      artifact: FlinkCompileCacheDir-${{parameters.stage_name}}
+
+- job: test_${{parameters.stage_name}}
+  dependsOn: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
+  pool: ${{parameters.test_pool_definition}}
+  container: flink-build-container
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
+  workspace:
+    clean: all
+  strategy:
+    matrix:
+      core:
+        module: core
+      python:
+        module: python
+      libraries:
+        module: libraries
+      blink_planner:
+        module: blink_planner
+      connectors:
+        module: connectors
+      kafka_gelly:
+        module: kafka/gelly
+      tests:
+        module: tests
+      legacy_scheduler_core:
+        module: legacy_scheduler_core
+      legacy_scheduler_tests:
+        module: legacy_scheduler_tests
+      misc:
+        module: misc
+  steps:
+
+  # download artifacts
+  - task: DownloadPipelineArtifact@2
+    inputs:
+      path: $(CACHE_FLINK_DIR)
+      artifact: FlinkCompileCacheDir-${{parameters.stage_name}}
+
+  # recreate "build-target" symlink for python tests
+  - script: |
+      ls -lisah $(CACHE_FLINK_DIR)
+      ls -lisah .
+      ln -snf $(CACHE_FLINK_DIR)/flink-dist/target/flink-*-SNAPSHOT-bin/flink-*-SNAPSHOT $(CACHE_FLINK_DIR)/build-target
+    displayName: Recreate 'build-target' symlink
+  # Test
+  - script: STAGE=test ${{parameters.environment}} ./tools/azure_controller.sh $(module)
+    displayName: Test - $(module)
+
+  - task: PublishTestResults@2
+    inputs:
+      testResultsFormat: 'JUnit'
+
+
+- job: e2e_${{parameters.stage_name}}
+  condition: eq(variables['MODE'], 'e2e')
+  # We are not running this job on a container, but in a VM.
 
 Review comment:
   I fixed the downloading of 
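The CACHE_KEY/CACHE_FALLBACK_KEY scheme described in the quoted comment can be sketched as follows; the demo directory layout and the "maven" key prefix are assumptions for illustration, not the actual Flink tooling:

```shell
# Minimal sketch of the cache-key scheme: CACHE_KEY hashes all committed
# pom.xml files, and CACHE_FALLBACK_KEY is a prefix key without the pom hash,
# so a changed pom still restores the most recent cache instead of starting
# cold. The demo directory and "maven" prefix are assumptions.
mkdir -p /tmp/cache-key-demo/module-a
echo '<project/>' > /tmp/cache-key-demo/pom.xml
echo '<project/>' > /tmp/cache-key-demo/module-a/pom.xml

# Hash the concatenation of all pom files, in a stable (sorted) order.
POM_HASH=$(find /tmp/cache-key-demo -name pom.xml -type f | sort | xargs cat | md5sum | cut -d' ' -f1)
CACHE_KEY="maven | $POM_HASH"   # exact key: cache hit only if no pom changed
CACHE_FALLBACK_KEY="maven"      # restore key: best-effort hit after pom changes
echo "$CACHE_KEY"
echo "$CACHE_FALLBACK_KEY"
```

With Cache@2 semantics, `key` must match exactly for a full hit, while `restoreKeys` allows a prefix match on an older cache.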

[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-14 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r379420605
 
 

 ##
 File path: tools/azure-pipelines/jobs-template.yml
 ##
 @@ -0,0 +1,137 @@
+# [… same jobs-template.yml hunk as quoted above …]
+- job: e2e_${{parameters.stage_name}}
+  condition: eq(variables['MODE'], 'e2e')
+  # We are not running this job on a container, but in a VM.
 
 Review comment:
   I will look into the problems 
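The "Recreate 'build-target' symlink" step in the quoted jobs-template.yml relies on `ln -snf` being safe to repeat; a minimal sketch with hypothetical paths:

```shell
# Illustrative sketch of the `ln -snf` idiom: -s makes a symbolic link,
# -n treats an existing destination link as a plain file (instead of
# descending into it), -f overwrites it. Paths below are hypothetical.
mkdir -p /tmp/lnf-demo/target/dist-1 /tmp/lnf-demo/target/dist-2
ln -snf /tmp/lnf-demo/target/dist-1 /tmp/lnf-demo/build-target
# Re-running with a new target replaces the link rather than failing or
# creating a nested link inside the old target directory.
ln -snf /tmp/lnf-demo/target/dist-2 /tmp/lnf-demo/build-target
readlink /tmp/lnf-demo/build-target   # → /tmp/lnf-demo/target/dist-2
```

Without `-n`, the second call would dereference the existing link and create the new link *inside* dist-1, which is why the template can rerun safely on reused build machines.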

[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-14 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r379419236
 
 

 ##
 File path: flink-end-to-end-tests/test-scripts/test_streaming_elasticsearch.sh
 ##
 @@ -39,10 +39,31 @@ on_exit test_cleanup
 
TEST_ES_JAR=${END_TO_END_DIR}/flink-elasticsearch${ELASTICSEARCH_VERSION}-test/target/Elasticsearch${ELASTICSEARCH_VERSION}SinkExample.jar
 
 # run the Flink job
-$FLINK_DIR/bin/flink run -p 1 $TEST_ES_JAR \
+JOB_ID=$($FLINK_DIR/bin/flink run -d -p 1 $TEST_ES_JAR \
   --numRecords 20 \
   --index index \
-  --type type
+  --type type | awk '{print $NF}' | tail -n 1)
 
+
+# wait for 10 seconds
+wait_job_submitted ${JOB_ID}
 
 Review comment:
  I will undo my changes. I don't want to spend more time on this.
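The `awk '{print $NF}' | tail -n 1` pipeline in the quoted change extracts the job id as the last field of the last output line; a small sketch, where the sample submission line is an assumed stand-in for the real `flink run -d` output:

```shell
# Sketch of the JOB_ID extraction in the quoted diff: take the last
# whitespace-separated field of the last output line. The sample output
# line below is an assumption, not captured Flink output.
OUTPUT='Job has been submitted with JobID 8c2c4dda68f25204864b9cdab384f2e0'
JOB_ID=$(echo "$OUTPUT" | awk '{print $NF}' | tail -n 1)
echo "$JOB_ID"   # → 8c2c4dda68f25204864b9cdab384f2e0
```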


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services



[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-14 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r379325288
 
 

 ##
 File path: tools/azure-pipelines/jobs-template.yml
 ##
 @@ -0,0 +1,137 @@
+# [… same jobs-template.yml hunk as quoted above …]
+- job: e2e_${{parameters.stage_name}}
+  condition: eq(variables['MODE'], 'e2e')
+  # We are not running this job on a container, but in a VM.
 
 Review comment:
   I kicked off this hacky build 

[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-13 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r379291684
 
 

 ##
 File path: tools/azure-pipelines/jobs-template.yml
 ##
 @@ -0,0 +1,137 @@
+# [… same jobs-template.yml hunk as quoted above …]
+- job: e2e_${{parameters.stage_name}}
+  condition: eq(variables['MODE'], 'e2e')
+  # We are not running this job on a container, but in a VM.
 
 Review comment:
   Running the pre commit on a 

[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-13 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378978694
 
 

 ##
 File path: tools/azure_controller.sh
 ##
 @@ -0,0 +1,219 @@
+#!/usr/bin/env bash
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+echo $M2_HOME
+echo $PATH
+echo $MAVEN_OPTS
+
+mvn -version
+echo "Commit: $(git rev-parse HEAD)"
+
+# Set up a custom Maven settings file, configuring a Google-hosted Maven Central
+# mirror.
+cat << EOF > /tmp/az_settings.xml
+<settings>
+  <mirrors>
+    <mirror>
+      <id>google-maven-central</id>
+      <name>GCS Maven Central mirror</name>
+      <url>https://maven-central.storage-download.googleapis.com/repos/central/data/</url>
 
 Review comment:
   Testing now 
https://dev.azure.com/rmetzger/Flink/_build/results?buildId=5147=results




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-13 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378927106
 
 

 ##
 File path: tools/azure-pipelines/jobs-template.yml
 ##
 @@ -0,0 +1,137 @@
+# [… same jobs-template.yml hunk as quoted above …]
+- job: e2e_${{parameters.stage_name}}
+  condition: eq(variables['MODE'], 'e2e')
+  # We are not running this job on a container, but in a VM.
 
 Review comment:
   I'm trying out running the 

[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-13 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378868088
 
 

 ##
 File path: tools/azure_controller.sh
 ##
 @@ -0,0 +1,220 @@
+#!/usr/bin/env bash
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+echo $M2_HOME
+echo $PATH
+echo $MAVEN_OPTS
+
+mvn -version
+echo "Commit: $(git rev-parse HEAD)"
+
+# Set up a custom Maven settings file, configuring a Google-hosted Maven Central
+# mirror.
+cat << EOF > /tmp/az_settings.xml
+<settings>
+  <mirrors>
+    <mirror>
+      <id>google-maven-central</id>
+      <name>GCS Maven Central mirror</name>
+      <url>https://maven-central.storage-download.googleapis.com/repos/central/data/</url>
+      <mirrorOf>central</mirrorOf>
+    </mirror>
+  </mirrors>
+</settings>
+EOF
+
+
+HERE="`dirname \"$0\"`"          # relative
+HERE="`( cd \"$HERE\" && pwd )`" # absolutized and normalized
+if [ -z "$HERE" ] ; then
+    # error; for some reason, the path is not accessible
+    # to the script (e.g. permissions re-evaled after suid)
+    exit 1  # fail
+fi
+
+source "${HERE}/travis/stage.sh"
+source "${HERE}/travis/shade.sh"
+
+print_system_info() {
+    echo "CPU information"
+    lscpu
+
+    echo "Memory information"
+    cat /proc/meminfo
+
+    echo "Disk information"
+    df -hH
+
+    echo "Running build as"
+    whoami
+}
+
+print_system_info
+
+
+STAGE=$1
+echo "Current stage: \"$STAGE\""
+
+EXIT_CODE=0
+
+# adding -Dmaven.wagon.http.pool=false (see https://developercommunity.visualstudio.com/content/problem/851041/microsoft-hosted-agents-run-into-maven-central-tim.html)
+# --settings /tmp/az_settings.xml
 
 Review comment:
   Setting up a maven proxy is indeed pretty simple. Got something working 
locally in a few minutes.
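For reference, wiring such a proxy in is just a mirror entry in a settings.xml; the id, host, and port below are hypothetical placeholders, not the configuration that was actually tested:

```
<settings>
  <mirrors>
    <mirror>
      <!-- hypothetical local proxy; replace host/port with the real one -->
      <id>local-maven-proxy</id>
      <url>http://localhost:8081/repository/maven-central/</url>
      <mirrorOf>central</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```

This is passed to builds via `mvn --settings <file>`, the same way the script above wires in the GCS mirror.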


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-13 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378852616
 
 

 ##
 File path: tools/azure_controller.sh
 ##
 @@ -0,0 +1,220 @@
+#!/usr/bin/env bash
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+echo $M2_HOME
+echo $PATH
+echo $MAVEN_OPTS
+
+mvn -version
+echo "Commit: $(git rev-parse HEAD)"
+
+# Set up a custom Maven settings file, configuring a Google-hosted Maven
+# Central mirror.
+cat << EOF > /tmp/az_settings.xml
+<settings>
+  <mirrors>
+    <mirror>
+      <id>google-maven-central</id>
+      <name>GCS Maven Central mirror</name>
+      <url>https://maven-central.storage-download.googleapis.com/repos/central/data/</url>
+      <mirrorOf>central</mirrorOf>
+    </mirror>
+  </mirrors>
+</settings>
+EOF
+
+
+HERE="`dirname \"$0\"`"             # relative
+HERE="`( cd \"$HERE\" && pwd )`"    # absolutized and normalized
+if [ -z "$HERE" ] ; then
+    # error; for some reason, the path is not accessible
+    # to the script (e.g. permissions re-evaled after suid)
+    exit 1  # fail
+fi
+
+source "${HERE}/travis/stage.sh"
+source "${HERE}/travis/shade.sh"
+
+print_system_info() {
+echo "CPU information"
+lscpu
+
+echo "Memory information"
+cat /proc/meminfo
+
+echo "Disk information"
+df -hH
+
+echo "Running build as"
+whoami
+}
+
+print_system_info
+
+
+STAGE=$1
+echo "Current stage: \"$STAGE\""
+
+EXIT_CODE=0
+
+#adding -Dmaven.wagon.http.pool=false (see 
https://developercommunity.visualstudio.com/content/problem/851041/microsoft-hosted-agents-run-into-maven-central-tim.html)
+# --settings /tmp/az_settings.xml 
 
 Review comment:
   The cherry-picked build now fails with 
https://maven-central.storage-download.googleapis.com/repos/central/data/org/apache/beam/beam-runners-java-fn-execution/2.19.0/beam-runners-java-fn-execution-2.19.0.pom
 missing. The artifact has been released on Feb 03.
   I just sent an email to the people running the Google mirror to see what they have to say :) 
   
   I'm starting to wonder whether it makes sense to set up our own Maven mirror on one of our build machines: 
https://www.sonatype.com/download-oss-sonatype




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-13 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378777994
 
 

 ##
 File path: flink-end-to-end-tests/test-scripts/test_streaming_elasticsearch.sh
 ##
 @@ -39,10 +39,31 @@ on_exit test_cleanup
 
TEST_ES_JAR=${END_TO_END_DIR}/flink-elasticsearch${ELASTICSEARCH_VERSION}-test/target/Elasticsearch${ELASTICSEARCH_VERSION}SinkExample.jar
 
 # run the Flink job
-$FLINK_DIR/bin/flink run -p 1 $TEST_ES_JAR \
+JOB_ID=$($FLINK_DIR/bin/flink run -d -p 1 $TEST_ES_JAR \
   --numRecords 20 \
   --index index \
-  --type type
+  --type type | awk '{print $NF}' | tail -n 1)
 
+
+# wait for 10 seconds
+wait_job_submitted ${JOB_ID}
 
 Review comment:
   This call just waits for the job to be submitted (somehow). The second loop 
expects the job to be in state "RUNNING".
   It would fail if the job status was "CREATED", which I assume is a state 
that we might see.
   
   To be honest, I can also undo the changes to the elasticsearch script + the common.sh script. I just left them in, in case the tests fail in the future.
   
   With my changes, the tests will at least fail after a few seconds, instead 
of hanging indefinitely, and they will print the Flink logs to make debugging 
easier.
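
A sketch of the wait-then-check pattern described here: poll a job's state until it reaches the expected one or a timeout expires, instead of hanging indefinitely. The function and the `job_state` probe are assumptions for illustration, not the actual common.sh implementation (which would query `$FLINK_DIR/bin/flink list`):

```shell
#!/usr/bin/env bash
# Poll until `job_state <job_id>` (a caller-supplied probe) reports the
# expected state, or give up after a timeout and print the last seen state.
wait_for_job_state() {
    local job_id="$1" expected="$2" timeout="${3:-10}"
    local start=$(date +%s)
    while true; do
        local state
        state=$(job_state "$job_id")
        if [ "$state" = "$expected" ]; then
            return 0
        fi
        if [ $(( $(date +%s) - start )) -ge "$timeout" ]; then
            echo "Timed out waiting for job $job_id to reach $expected (last seen: $state)" >&2
            return 1
        fi
        sleep 1
    done
}
```

With this shape, a job stuck in "CREATED" fails the test after a few seconds with a useful message rather than blocking the build forever.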




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-13 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378772192
 
 

 ##
 File path: tools/azure-pipelines/jobs-template.yml
 ##
 @@ -0,0 +1,137 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+parameters:
+  test_pool_definition: # defines the hardware pool for compilation and unit 
test execution.
+  e2e_pool_definion: # defines the hardware pool for end-to-end test execution
+  stage_name: # defines a unique identifier for all jobs in a stage (in case 
the jobs are added multiple times to a stage)
+  environment: # defines environment variables for downstream scripts
+
+jobs:
+- job: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
+  pool: ${{parameters.test_pool_definition}}
+  container: flink-build-container
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
+  workspace:
+clean: all # this cleans the entire workspace directory before running a 
new job
+# It is necessary because the custom build machines are reused for tests.
+# See also 
https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml#workspace
 
+
+  steps:
+  # The cache task persists the .m2 directory between builds, so that
+  # we do not have to re-download all dependencies from Maven Central for
+  # each build. The hope is that downloading the cache is faster than
+  # downloading all dependencies individually.
+  # In this configuration, we use a hash over all committed (not generated)
+  # .pom files as the key for the build cache (CACHE_KEY). If we have a cache
+  # miss on the hash (usually because a pom file has changed), we fall back
+  # to a key without the pom files (CACHE_FALLBACK_KEY).
+  # Official documentation of the Cache task:
+  # https://docs.microsoft.com/en-us/azure/devops/pipelines/caching/?view=azure-devops
+  - task: Cache@2
+inputs:
+  key: $(CACHE_KEY)
+  restoreKeys: $(CACHE_FALLBACK_KEY)
 
 Review comment:
   Yes
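
The cache-key scheme described in the quoted comment can be sketched as a stable hash over all committed pom files, so the key changes exactly when a pom changes. This is an illustration only; the actual CACHE_KEY is computed by the pipeline, and `maven_cache_key` is a hypothetical name:

```shell
#!/usr/bin/env bash
# Hash every pom.xml under the given root in a stable (sorted) order.
# sha256sum output includes file paths, so renames also change the key.
maven_cache_key() {
    local root="$1"
    (cd "$root" && find . -type f -name 'pom.xml' | sort \
        | xargs sha256sum | sha256sum | cut -d' ' -f1)
}
```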




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-13 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378769414
 
 

 ##
 File path: tools/azure-pipelines/jobs-template.yml
 ##
 @@ -0,0 +1,137 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+parameters:
+  test_pool_definition: # defines the hardware pool for compilation and unit 
test execution.
+  e2e_pool_definion: # defines the hardware pool for end-to-end test execution
+  stage_name: # defines a unique identifier for all jobs in a stage (in case 
the jobs are added multiple times to a stage)
+  environment: # defines environment variables for downstream scripts
+
+jobs:
+- job: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
+  pool: ${{parameters.test_pool_definition}}
+  container: flink-build-container
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
+  workspace:
+clean: all # this cleans the entire workspace directory before running a 
new job
+# It is necessary because the custom build machines are reused for tests.
+# See also 
https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml#workspace
 
+
+  steps:
+  # The cache task persists the .m2 directory between builds, so that
+  # we do not have to re-download all dependencies from Maven Central for
+  # each build. The hope is that downloading the cache is faster than
+  # downloading all dependencies individually.
+  # In this configuration, we use a hash over all committed (not generated)
+  # .pom files as the key for the build cache (CACHE_KEY). If we have a cache
+  # miss on the hash (usually because a pom file has changed), we fall back
+  # to a key without the pom files (CACHE_FALLBACK_KEY).
+  # Official documentation of the Cache task:
+  # https://docs.microsoft.com/en-us/azure/devops/pipelines/caching/?view=azure-devops
+  - task: Cache@2
+inputs:
+  key: $(CACHE_KEY)
+  restoreKeys: $(CACHE_FALLBACK_KEY)
+  path: $(MAVEN_CACHE_FOLDER)
+continueOnError: true # continue the build even if the cache fails.
+displayName: Cache Maven local repo
+
+  # Compile
+  - script: STAGE=compile ${{parameters.environment}} 
./tools/azure_controller.sh compile
+displayName: Build
+
+  # upload artifacts for next stage
+  - task: PublishPipelineArtifact@1
+inputs:
+  path: $(CACHE_FLINK_DIR)
+  artifact: FlinkCompileCacheDir-${{parameters.stage_name}}
+
+- job: test_${{parameters.stage_name}}
+  dependsOn: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
+  pool: ${{parameters.test_pool_definition}}
+  container: flink-build-container
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
+  workspace:
+clean: all
+  strategy:
+matrix:
+  core:
+module: core
+  python:
+module: python
+  libraries:
+module: libraries
+  blink_planner:
+module: blink_planner
+  connectors:
+module: connectors
+  kafka_gelly:
+module: kafka/gelly
+  tests:
+module: tests
+  legacy_scheduler_core:
+module: legacy_scheduler_core
+  legacy_scheduler_tests:
+module: legacy_scheduler_tests
+  misc:
+module: misc
+  steps:
+
+  # download artifacts
+  - task: DownloadPipelineArtifact@2
+inputs:
+  path: $(CACHE_FLINK_DIR)
+  artifact: FlinkCompileCacheDir-${{parameters.stage_name}}
+
+  # recreate "build-target" symlink for python tests
+  - script: |
+  ls -lisah $(CACHE_FLINK_DIR)
+  ls -lisah .
+  ln -snf 
$(CACHE_FLINK_DIR)/flink-dist/target/flink-*-SNAPSHOT-bin/flink-*-SNAPSHOT 
$(CACHE_FLINK_DIR)/build-target
+displayName: Recreate 'build-target' symlink
+  # Test
+  - script: STAGE=test ${{parameters.environment}} ./tools/azure_controller.sh 
$(module)
+displayName: Test - $(module)
+
+  - task: PublishTestResults@2
+inputs:
+  testResultsFormat: 'JUnit'
+
+
+- job: e2e_${{parameters.stage_name}}
+  condition: eq(variables['MODE'], 'e2e')
+  # We are not running this job on a container, but in a VM.
 
 Review comment:
   Correct, the pre-commit tests 

[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-13 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378768263
 
 

 ##
 File path: tools/azure_controller.sh
 ##
 @@ -0,0 +1,220 @@
+#!/usr/bin/env bash
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+echo $M2_HOME
+echo $PATH
+echo $MAVEN_OPTS
+
+mvn -version
+echo "Commit: $(git rev-parse HEAD)"
+
+# Set up a custom Maven settings file, configuring a Google-hosted Maven
+# Central mirror.
+cat << EOF > /tmp/az_settings.xml
+<settings>
+  <mirrors>
+    <mirror>
+      <id>google-maven-central</id>
+      <name>GCS Maven Central mirror</name>
+      <url>https://maven-central.storage-download.googleapis.com/repos/central/data/</url>
+      <mirrorOf>central</mirrorOf>
+    </mirror>
+  </mirrors>
+</settings>
+EOF
+
+
+HERE="`dirname \"$0\"`"             # relative
+HERE="`( cd \"$HERE\" && pwd )`"    # absolutized and normalized
+if [ -z "$HERE" ] ; then
+    # error; for some reason, the path is not accessible
+    # to the script (e.g. permissions re-evaled after suid)
+    exit 1  # fail
+fi
+
+source "${HERE}/travis/stage.sh"
+source "${HERE}/travis/shade.sh"
+
+print_system_info() {
+echo "CPU information"
+lscpu
+
+echo "Memory information"
+cat /proc/meminfo
+
+echo "Disk information"
+df -hH
+
+echo "Running build as"
+whoami
+}
+
+print_system_info
+
+
+STAGE=$1
+echo "Current stage: \"$STAGE\""
+
+EXIT_CODE=0
+
+#adding -Dmaven.wagon.http.pool=false (see 
https://developercommunity.visualstudio.com/content/problem/851041/microsoft-hosted-agents-run-into-maven-central-tim.html)
+# --settings /tmp/az_settings.xml 
 
 Review comment:
   Thanks a lot. I'll rebase.




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378447373
 
 

 ##
 File path: tools/azure_controller.sh
 ##
 @@ -0,0 +1,220 @@
+#!/usr/bin/env bash
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+echo $M2_HOME
+echo $PATH
+echo $MAVEN_OPTS
+
+mvn -version
+echo "Commit: $(git rev-parse HEAD)"
+
+# Set up a custom Maven settings file, configuring a Google-hosted Maven
+# Central mirror.
+cat << EOF > /tmp/az_settings.xml
+<settings>
+  <mirrors>
+    <mirror>
+      <id>google-maven-central</id>
+      <name>GCS Maven Central mirror</name>
+      <url>https://maven-central.storage-download.googleapis.com/repos/central/data/</url>
+      <mirrorOf>central</mirrorOf>
+    </mirror>
+  </mirrors>
+</settings>
+EOF
+
+
+HERE="`dirname \"$0\"`" # relative
+HERE="`( cd \"$HERE\" && pwd )`"# absolutized and normalized
+if [ -z "$HERE" ] ; then
+# error; for some reason, the path is not accessible
+# to the script (e.g. permissions re-evaled after suid)
+exit 1  # fail
+fi
+
+source "${HERE}/travis/stage.sh"
+source "${HERE}/travis/shade.sh"
+
+print_system_info() {
+echo "CPU information"
+lscpu
+
+echo "Memory information"
+cat /proc/meminfo
+
+echo "Disk information"
+df -hH
+
+echo "Running build as"
+whoami
+}
+
+print_system_info
+
+
+STAGE=$1
+echo "Current stage: \"$STAGE\""
+
+EXIT_CODE=0
+
+#adding -Dmaven.wagon.http.pool=false (see 
https://developercommunity.visualstudio.com/content/problem/851041/microsoft-hosted-agents-run-into-maven-central-tim.html)
+# --settings /tmp/az_settings.xml 
 
 Review comment:
   I tried that already :( 
   Maven does not seem to try different mirrors when something is not available 
somewhere :(
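
That matches how mirror selection works: a `<mirrorOf>` entry routes every request for the matched repository to a single mirror, and Maven does not fall back to another mirror (or to the original repository) when an artifact is 404 there. A sketch of the matching semantics (the patterns in the trailing comment are the standard ones):

```
<mirrors>
  <!-- all requests for "central" go here, and only here; a 404 from this
       mirror is a 404 for the build -->
  <mirror>
    <id>google-maven-central</id>
    <url>https://maven-central.storage-download.googleapis.com/repos/central/data/</url>
    <mirrorOf>central</mirrorOf>
  </mirror>
  <!-- other mirrorOf patterns: "*" (everything), "external:*" (everything
       not on localhost or file://), "*,!internal-repo" (everything except
       one repository) -->
</mirrors>
```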




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378394935
 
 

 ##
 File path: tools/azure_controller.sh
 ##
 @@ -0,0 +1,220 @@
+#!/usr/bin/env bash
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+echo $M2_HOME
+echo $PATH
+echo $MAVEN_OPTS
+
+mvn -version
+echo "Commit: $(git rev-parse HEAD)"
+
+# Set up a custom Maven settings file, configuring a Google-hosted Maven
+# Central mirror.
+cat << EOF > /tmp/az_settings.xml
+<settings>
+  <mirrors>
+    <mirror>
+      <id>google-maven-central</id>
+      <name>GCS Maven Central mirror</name>
+      <url>https://maven-central.storage-download.googleapis.com/repos/central/data/</url>
+      <mirrorOf>central</mirrorOf>
+    </mirror>
+  </mirrors>
+</settings>
+EOF
+
+
+HERE="`dirname \"$0\"`"             # relative
+HERE="`( cd \"$HERE\" && pwd )`"    # absolutized and normalized
+if [ -z "$HERE" ] ; then
+    # error; for some reason, the path is not accessible
+    # to the script (e.g. permissions re-evaled after suid)
+    exit 1  # fail
+fi
+
+source "${HERE}/travis/stage.sh"
+source "${HERE}/travis/shade.sh"
+
+print_system_info() {
+echo "CPU information"
+lscpu
+
+echo "Memory information"
+cat /proc/meminfo
+
+echo "Disk information"
+df -hH
+
+echo "Running build as"
+whoami
+}
+
+print_system_info
+
+
+STAGE=$1
+echo "Current stage: \"$STAGE\""
+
+EXIT_CODE=0
+
+#adding -Dmaven.wagon.http.pool=false (see 
https://developercommunity.visualstudio.com/content/problem/851041/microsoft-hosted-agents-run-into-maven-central-tim.html)
+# --settings /tmp/az_settings.xml 
 
 Review comment:
   Yeah .. 
   The google mirror caused me some nice headache last night :( 
   ```
   2020-02-11T15:40:36.2013808Z [INFO] --- gmavenplus-plugin:1.8.1:execute 
(merge-categories) @ flink-end-to-end-tests ---
   2020-02-11T15:40:36.2032562Z [INFO] Downloading: 
https://maven-central.storage-download.googleapis.com/repos/central/data/org/codehaus/groovy/groovy-all/2.5.9/groovy-all-2.5.9.pom
   2020-02-11T15:40:36.4201087Z [WARNING] The POM for 
org.codehaus.groovy:groovy-all:pom:2.5.9 is missing, no dependency information 
available
   ```
   It seems that this file is really not available on the Google mirror. I 
guess we have to rely on the maven central mirrors :( 
   
   
https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_apis/build/builds/5069/logs/14




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378393653
 
 

 ##
 File path: tools/azure_controller.sh
 ##
 @@ -0,0 +1,220 @@
+#!/usr/bin/env bash
+
+#  Licensed to the Apache Software Foundation (ASF) under one
+#  or more contributor license agreements.  See the NOTICE file
+#  distributed with this work for additional information
+#  regarding copyright ownership.  The ASF licenses this file
+#  to you under the Apache License, Version 2.0 (the
+#  "License"); you may not use this file except in compliance
+#  with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+echo $M2_HOME
+echo $PATH
+echo $MAVEN_OPTS
+
+mvn -version
+echo "Commit: $(git rev-parse HEAD)"
+
+# Set up a custom Maven settings file, configuring a Google-hosted Maven
+# Central mirror.
+cat << EOF > /tmp/az_settings.xml
+<settings>
+  <mirrors>
+    <mirror>
+      <id>google-maven-central</id>
+      <name>GCS Maven Central mirror</name>
+      <url>https://maven-central.storage-download.googleapis.com/repos/central/data/</url>
+      <mirrorOf>central</mirrorOf>
+    </mirror>
+  </mirrors>
+</settings>
+EOF
+
+
+HERE="`dirname \"$0\"`"             # relative
+HERE="`( cd \"$HERE\" && pwd )`"    # absolutized and normalized
+if [ -z "$HERE" ] ; then
+    # error; for some reason, the path is not accessible
+    # to the script (e.g. permissions re-evaled after suid)
+    exit 1  # fail
+fi
+
+source "${HERE}/travis/stage.sh"
+source "${HERE}/travis/shade.sh"
+
+print_system_info() {
+echo "CPU information"
+lscpu
+
+echo "Memory information"
+cat /proc/meminfo
+
+echo "Disk information"
+df -hH
+
+echo "Running build as"
+whoami
+}
+
+print_system_info
+
+
+STAGE=$1
+echo "Current stage: \"$STAGE\""
+
+EXIT_CODE=0
+
+#adding -Dmaven.wagon.http.pool=false (see 
https://developercommunity.visualstudio.com/content/problem/851041/microsoft-hosted-agents-run-into-maven-central-tim.html)
+# --settings /tmp/az_settings.xml 
+MVN="mvn clean install $MAVEN_OPTS -nsu -Dflink.convergence.phase=install 
-Pcheck-convergence -Dflink.forkCount=2 -Dflink.forkCountTestPackage=2 
-Dmaven.wagon.http.pool=false -Dmaven.javadoc.skip=true -B -U -DskipTests 
$PROFILE"
+
+# Run actual compile steps
+if [ $STAGE == "$STAGE_COMPILE" ]; then
+# run mvn clean install:
+$MVN
+EXIT_CODE=$?
+
+if [ $EXIT_CODE == 0 ]; then
+echo 
"\n\n==\n"
+echo "Checking scala suffixes\n"
+echo 
"==\n"
+
+./tools/verify_scala_suffixes.sh "${PROFILE}"
+EXIT_CODE=$?
+else
+echo 
"\n==\n"
+echo "Previous build failure detected, skipping scala-suffixes 
check.\n"
+echo 
"==\n"
+fi
+
+if [ $EXIT_CODE == 0 ]; then
+check_shaded_artifacts
+EXIT_CODE=$(($EXIT_CODE+$?))
+check_shaded_artifacts_s3_fs hadoop
+EXIT_CODE=$(($EXIT_CODE+$?))
+check_shaded_artifacts_s3_fs presto
+EXIT_CODE=$(($EXIT_CODE+$?))
+check_shaded_artifacts_connector_elasticsearch 2
+EXIT_CODE=$(($EXIT_CODE+$?))
+check_shaded_artifacts_connector_elasticsearch 5
+EXIT_CODE=$(($EXIT_CODE+$?))
+check_shaded_artifacts_connector_elasticsearch 6
+EXIT_CODE=$(($EXIT_CODE+$?))
+else
+echo 
"=="
+echo "Previous build failure detected, skipping shaded dependency 
check."
+echo 
"=="
+fi
+
+if [ $EXIT_CODE == 0 ]; then
+echo "Creating cache build directory $CACHE_FLINK_DIR"
+
+cp -r . "$CACHE_FLINK_DIR"
+
+function minimizeCachedFiles() {
+# reduces the size of the cached directory to speed up
+# the packing / download process
+# by removing files not required for subsequent stages
+
+# jars are re-built in subsequent stages, so no need to cache them 
(cannot be avoided)
+find "$CACHE_FLINK_DIR" -maxdepth 8 -type f -name '*.jar' \
+ 

[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378393191
 
 

 ##
 File path: tools/travis_watchdog.sh
 ##
 @@ -166,7 +171,7 @@ print_stacktraces () {
 put_yarn_logs_to_artifacts() {
# Make sure to be in project root
cd $HERE/../
-   for file in `find ./flink-yarn-tests/target/flink-yarn-tests* -type f 
-name '*.log'`; do
+   for file in `find ./flink-yarn-tests/target -type f -name '*.log'`; do
 
 Review comment:
   I was debugging failing YARN tests as part of the migration. IIRC, some of the newer YARN tests log into a different directory. It doesn't hurt to search a bit more broadly for the log files to include in the debugging archive.




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378392492
 
 

 ##
 File path: tools/azure-pipelines/jobs-template.yml
 ##
 @@ -0,0 +1,137 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+parameters:
+  test_pool_definition: # defines the hardware pool for compilation and unit 
test execution.
+  e2e_pool_definion: # defines the hardware pool for end-to-end test execution
+  stage_name: # defines a unique identifier for all jobs in a stage (in case 
the jobs are added multiple times to a stage)
+  environment: # defines environment variables for downstream scripts
+
+jobs:
+- job: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
+  pool: ${{parameters.test_pool_definition}}
+  container: flink-build-container
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
+  workspace:
+clean: all # this cleans the entire workspace directory before running a 
new job
+# It is necessary because the custom build machines are reused for tests.
+# See also 
https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml#workspace
 
+
+  steps:
+  # The cache task persists the .m2 directory between builds, so that
+  # we do not have to re-download all dependencies from Maven Central for
+  # each build. The hope is that downloading the cache is faster than
+  # downloading all dependencies individually.
+  # In this configuration, we use a hash over all committed (not generated)
+  # .pom files as the key for the build cache (CACHE_KEY). If we have a cache
+  # miss on the hash (usually because a pom file has changed), we fall back
+  # to a key without the pom files (CACHE_FALLBACK_KEY).
+  # Official documentation of the Cache task:
+  # https://docs.microsoft.com/en-us/azure/devops/pipelines/caching/?view=azure-devops
+  - task: Cache@2
+inputs:
+  key: $(CACHE_KEY)
+  restoreKeys: $(CACHE_FALLBACK_KEY)
+  path: $(MAVEN_CACHE_FOLDER)
+continueOnError: true # continue the build even if the cache fails.
+displayName: Cache Maven local repo
+
+  # Compile
+  - script: STAGE=compile ${{parameters.environment}} 
./tools/azure_controller.sh compile
+displayName: Build
+
+  # upload artifacts for next stage
+  - task: PublishPipelineArtifact@1
+inputs:
+  path: $(CACHE_FLINK_DIR)
+  artifact: FlinkCompileCacheDir-${{parameters.stage_name}}
+
+- job: test_${{parameters.stage_name}}
+  dependsOn: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
+  pool: ${{parameters.test_pool_definition}}
+  container: flink-build-container
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
+  workspace:
+clean: all
+  strategy:
+matrix:
+  core:
+module: core
+  python:
+module: python
+  libraries:
+module: libraries
+  blink_planner:
+module: blink_planner
+  connectors:
+module: connectors
+  kafka_gelly:
+module: kafka/gelly
+  tests:
+module: tests
+  legacy_scheduler_core:
+module: legacy_scheduler_core
+  legacy_scheduler_tests:
+module: legacy_scheduler_tests
+  misc:
+module: misc
+  steps:
+
+  # download artifacts
+  - task: DownloadPipelineArtifact@2
+inputs:
+  path: $(CACHE_FLINK_DIR)
+  artifact: FlinkCompileCacheDir-${{parameters.stage_name}}
+
+  # recreate "build-target" symlink for python tests
+  - script: |
+  ls -lisah $(CACHE_FLINK_DIR)
+  ls -lisah .
+  ln -snf $(CACHE_FLINK_DIR)/flink-dist/target/flink-*-SNAPSHOT-bin/flink-*-SNAPSHOT $(CACHE_FLINK_DIR)/build-target
+displayName: Recreate 'build-target' symlink
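The `-n` flag in the step above matters: without it, re-linking onto an existing directory symlink would create the new link *inside* the old target instead of replacing the link. A small illustration (hypothetical directory names):

```shell
# Why `ln -snf` rather than `ln -sf`: if build-target is already a symlink
# to a directory, `ln -sf` would put the new link *inside* that directory;
# `-n` treats the existing link as a plain file, so it gets replaced.
workdir=$(mktemp -d) && cd "$workdir"
mkdir -p dist-v1 dist-v2
ln -s dist-v1 build-target        # first build
ln -snf dist-v2 build-target      # later build: repoint the link itself
readlink build-target             # prints: dist-v2
```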
+  # Test
+  - script: STAGE=test ${{parameters.environment}} ./tools/azure_controller.sh $(module)
+displayName: Test - $(module)
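Each matrix entry above only sets `module`; the controller script then dispatches on that value. A hypothetical sketch of such a dispatch (the real `tools/azure_controller.sh` may split Maven modules differently):

```shell
# Hypothetical dispatch on the matrix-provided "module" value; the split
# names are assumptions for illustration only.
run_split() {
  case "$1" in
    core)      echo "running tests for split: core" ;;
    python)    echo "running tests for split: python" ;;
    misc)      echo "running tests for split: misc" ;;
    *)         echo "unknown test split: $1" >&2; return 1 ;;
  esac
}
run_split core   # prints: running tests for split: core
```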
+
+  - task: PublishTestResults@2
+inputs:
+  testResultsFormat: 'JUnit'
+
+
+- job: e2e_${{parameters.stage_name}}
+  condition: eq(variables['MODE'], 'e2e')
+  # We are not running this job on a container, but in a VM.
 
 Review comment:
   the end to end tests are 

[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378392237
 
 

 ##
 File path: tools/azure-pipelines/jobs-template.yml
 ##
 @@ -0,0 +1,137 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+parameters:
+  test_pool_definition: # defines the hardware pool for compilation and unit test execution.
+  e2e_pool_definition: # defines the hardware pool for end-to-end test execution
+  stage_name: # defines a unique identifier for all jobs in a stage (in case the jobs are added multiple times to a stage)
+  environment: # defines environment variables for downstream scripts
+
+jobs:
+- job: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
+  pool: ${{parameters.test_pool_definition}}
+  container: flink-build-container
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
+  workspace:
+clean: all # this cleans the entire workspace directory before running a new job
+# It is necessary because the custom build machines are reused for tests.
+# See also https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml#workspace
 
+
+  steps:
+  # The cache task persists the .m2 directory between builds, so that
+  # we do not have to re-download all dependencies from Maven Central for
+  # each build. The hope is that downloading the cache is faster than
+  # downloading all dependencies individually.
+  # In this configuration, we use a hash over all committed (not generated) .pom files
+  # as a key for the build cache (CACHE_KEY). If we have a cache miss on the hash
+  # (usually because a pom file has changed), we'll fall back to a key without
+  # the pom files (CACHE_FALLBACK_KEY).
+  # Official documentation of the Cache task: https://docs.microsoft.com/en-us/azure/devops/pipelines/caching/?view=azure-devops
+  - task: Cache@2
+inputs:
+  key: $(CACHE_KEY)
+  restoreKeys: $(CACHE_FALLBACK_KEY)
+  path: $(MAVEN_CACHE_FOLDER)
+continueOnError: true # continue the build even if the cache fails.
+displayName: Cache Maven local repo
+
+  # Compile
+  - script: STAGE=compile ${{parameters.environment}} ./tools/azure_controller.sh compile
+displayName: Build
+
+  # upload artifacts for next stage
+  - task: PublishPipelineArtifact@1
 
 Review comment:
  I think it's the version of the task


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378391274
 
 

 ##
 File path: tools/azure-pipelines/setup_kubernetes.sh
 ##
 @@ -0,0 +1,26 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+echo "Replace moby by docker"
+docker version
+sudo apt-get remove -y moby-engine
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
+sudo add-apt-repository \
+   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
+   $(lsb_release -cs) \
+   stable"
+sudo apt-get update
+sudo apt-get install -y docker-ce docker-ce-cli containerd.io
 
 Review comment:
  The end-to-end tests are not executed in our custom Docker image, but on the build machines provided by Azure, because the end-to-end tests are fairly invasive on the underlying system; that's why I wanted to use ephemeral machines. Also, since the e2e tests use Docker extensively, we would have much more docker-in-docker trouble.




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378389823
 
 

 ##
 File path: flink-end-to-end-tests/test-scripts/test_mesos_multiple_submissions.sh
 ##
 @@ -29,7 +29,7 @@ TEST_PROGRAM_JAR=$END_TO_END_DIR/flink-cli-test/target/PeriodicStreamingJob.jar
 
 function submit_job {
 local output_path=$1
-docker exec -it mesos-master bash -c "${FLINK_DIR}/bin/flink run -d -p 1 ${TEST_PROGRAM_JAR} --durationInSecond ${DURATION} --outputPath ${output_path}" \
+docker exec mesos-master bash -c "${FLINK_DIR}/bin/flink run -d -p 1 ${TEST_PROGRAM_JAR} --durationInSecond ${DURATION} --outputPath ${output_path}" \
 
 Review comment:
  `docker exec -it` is basically for using Docker interactively (e.g. you want to run `bash` inside a container and talk to bash).
  `-i   Keep STDIN open even if not attached`
  `-t   Allocate a pseudo-TTY`

  It's just not necessary to pass `-it` (and IIRC it even logs a warning)
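The TTY check behind that warning can be reproduced with the shell's own `test -t`; a small sketch (not Docker's actual implementation) of why a CI step never has a terminal attached:

```shell
# In an interactive shell stdin is a terminal; in a CI step or a pipe it
# is not, which is exactly why docker's -t flag warns there.
if [ -t 0 ]; then
  echo "stdin is a TTY - interactive use, -it would be fine"
else
  echo "stdin is not a TTY - drop -it"
fi
```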




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378171401
 
 

 ##
 File path: tools/azure-pipelines/build-apache-repo.yml
 ##
 @@ -0,0 +1,94 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+#
+# This file defines the Flink build for the "apache/flink" repository, including
+# the following:
+#  - PR builds
+#  - custom triggered e2e tests
+#  - nightly builds
+
+
+
+schedules:
+- cron: "0 0 * * *"
+  displayName: Daily midnight build
+  branches:
+include:
+- master
+  always: true # run even if there were no changes to the mentioned branches
+
+resources:
+  containers:
+  # Container with Maven 3.2.5, SSL to have the same environment everywhere.
+  - container: flink-build-container
+image: rmetzger/flink-ci:ubuntu-jdk8-amd64-e005e00
+
+
+variables:
 
 Review comment:
  No, I have not found a way. There are sadly some duplications between the two build definitions, but I've tried to keep the `azure-pipelines.yml` file as simple as possible (because that's the one most people will use and look at).




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378170728
 
 

 ##
 File path: tools/azure-pipelines/build-apache-repo.yml
 ##
 @@ -0,0 +1,94 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+#
+# This file defines the Flink build for the "apache/flink" repository, including
+# the following:
+#  - PR builds
+#  - custom triggered e2e tests
+#  - nightly builds
+
+
+
+schedules:
+- cron: "0 0 * * *"
+  displayName: Daily midnight build
+  branches:
+include:
+- master
+  always: true # run even if there were no changes to the mentioned branches
+
+resources:
+  containers:
+  # Container with Maven 3.2.5, SSL to have the same environment everywhere.
+  - container: flink-build-container
+image: rmetzger/flink-ci:ubuntu-jdk8-amd64-e005e00
+
+
+variables:
+  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository
+  MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
+  CACHE_KEY: maven | $(Agent.OS) | **/pom.xml, !**/target/**
+  CACHE_FALLBACK_KEY: maven | $(Agent.OS)
+  CACHE_FLINK_DIR: $(Pipeline.Workspace)/flink_cache
+
+stages:
+  # CI / PR triggered stage:
+  - stage: ci_build
+displayName: "CI Build (custom builders)"
+condition: not(in(variables['Build.Reason'], 'Schedule', 'Manual'))
+jobs:
+  - template: jobs-template.yml
+parameters:
+  stage_name: ci_build
+  test_pool_definition:
+name: Default
+  e2e_pool_definition:
+vmImage: 'ubuntu-latest'
+  environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.11"
 
 Review comment:
   As jobs.
   
  Stages are different triggers, such as "build on push / pr", "end-to-end tests", "nightly cron".
  Jobs (included from the jobs-template) are parameterized through the `environment` for different Scala / Hadoop / whatnot.
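That split can be sketched as follows (illustrative fragment; the stage name and parameter values are assumptions, modeled on the `ci_build` stage shown above):

```yaml
stages:
  - stage: cron_build                 # another trigger, reusing the same jobs
    jobs:
      - template: jobs-template.yml   # jobs parameterized per environment
        parameters:
          stage_name: cron_build_hadoop241   # keeps job names unique per stage
          test_pool_definition:
            name: Default
          e2e_pool_definition:
            vmImage: 'ubuntu-latest'
          environment: PROFILE="-Dhadoop.version=2.4.1"
```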




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378169655
 
 

 ##
 File path: tools/azure-pipelines/jobs-template.yml
 ##
 @@ -0,0 +1,129 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+parameters:
+  test_pool_definition: # where is compilation and unit test execution happening?
+  e2e_pool_definition: # where is e2e test execution happening?
+  stage_name: # needed to make job names unique if they are included multiple times
+  environment: # used to pass environment variables into the downstream scripts
+
+jobs:
+- job: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
+  pool: ${{parameters.test_pool_definition}}
+  container: flink-build-container
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
+  workspace:
+clean: all
+  steps:
+
+  # Preparation
+  - task: CacheBeta@1
+inputs:
+  key: $(CACHE_KEY)
+  restoreKeys: $(CACHE_FALLBACK_KEY)
+  path: $(MAVEN_CACHE_FOLDER)
+  cacheHitVar: CACHE_RESTORED
+continueOnError: true # continue the build even if the cache fails.
+displayName: Cache Maven local repo
+
+  # Compile
+  - script: STAGE=compile ${{parameters.environment}} ./tools/azure_controller.sh compile
+displayName: Build
+
+  # upload artifacts for next stage
+  - task: PublishPipelineArtifact@1
+inputs:
+  path: $(CACHE_FLINK_DIR)
+  artifact: FlinkCompileCacheDir-${{parameters.stage_name}}
+
+- job: test_${{parameters.stage_name}}
+  dependsOn: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
+  pool: ${{parameters.test_pool_definition}}
+  container: flink-build-container
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
+  workspace:
+clean: all
+  strategy:
+matrix:
+  core:
+module: core
+  python:
+module: python
+  libraries:
+module: libraries
+  blink_planner:
+module: blink_planner
+  connectors:
+module: connectors
+  kafka_gelly:
+module: kafka/gelly
+  tests:
+module: tests
+  legacy_scheduler_core:
+module: legacy_scheduler_core
+  legacy_scheduler_tests:
+module: legacy_scheduler_tests
+  misc:
+module: misc
+  steps:
+
+  # download artifacts
+  - task: DownloadPipelineArtifact@2
+inputs:
+  path: $(CACHE_FLINK_DIR)
+  artifact: FlinkCompileCacheDir-${{parameters.stage_name}}
+
+  # recreate "build-target" symlink for python tests
+  - script: |
+  ls -lisah $(CACHE_FLINK_DIR)
+  ls -lisah .
+  ln -snf $(CACHE_FLINK_DIR)/flink-dist/target/flink-*-SNAPSHOT-bin/flink-*-SNAPSHOT $(CACHE_FLINK_DIR)/build-target
+displayName: Recreate 'build-target' symlink
+  # Test
+  - script: STAGE=test ${{parameters.environment}} ./tools/azure_controller.sh $(module)
+displayName: Test - $(module)
+
+  - task: PublishTestResults@2
+inputs:
+  testResultsFormat: 'JUnit'
+
+
+- job: e2e_${{parameters.stage_name}}
+  condition: eq(variables['MODE'], 'e2e')
+  # We are not running this job on a container, but in a VM.
+  pool: ${{parameters.e2e_pool_definition}}
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
+  workspace:
+clean: all
+  steps:
+- task: CacheBeta@1
 
 Review comment:
  In this case, yes. However, the Cache definition is only 5 lines of code. I'm afraid that people might have a hard time understanding the build definition if there are too many indirections.
  I vote to keep it as-is.




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378167572
 
 

 ##
 File path: tools/azure-pipelines/jobs-template.yml
 ##
 @@ -0,0 +1,129 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+parameters:
+  test_pool_definition: # where is compilation and unit test execution happening?
+  e2e_pool_definition: # where is e2e test execution happening?
+  stage_name: # needed to make job names unique if they are included multiple times
+  environment: # used to pass environment variables into the downstream scripts
+
+jobs:
+- job: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
+  pool: ${{parameters.test_pool_definition}}
+  container: flink-build-container
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
+  workspace:
+clean: all
+  steps:
+
+  # Preparation
+  - task: CacheBeta@1
+inputs:
+  key: $(CACHE_KEY)
+  restoreKeys: $(CACHE_FALLBACK_KEY)
+  path: $(MAVEN_CACHE_FOLDER)
+  cacheHitVar: CACHE_RESTORED
+continueOnError: true # continue the build even if the cache fails.
+displayName: Cache Maven local repo
+
+  # Compile
+  - script: STAGE=compile ${{parameters.environment}} ./tools/azure_controller.sh compile
+displayName: Build
+
+  # upload artifacts for next stage
+  - task: PublishPipelineArtifact@1
+inputs:
+  path: $(CACHE_FLINK_DIR)
+  artifact: FlinkCompileCacheDir-${{parameters.stage_name}}
+
+- job: test_${{parameters.stage_name}}
+  dependsOn: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
+  pool: ${{parameters.test_pool_definition}}
+  container: flink-build-container
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
 
 Review comment:
  See: https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml
  It sounds like this makes Azure try harder to cancel a running task. Sadly, task cancellation is not very reliable on the custom machines (they usually continue running till the current task has finished).




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378166539
 
 

 ##
 File path: tools/azure-pipelines/jobs-template.yml
 ##
 @@ -0,0 +1,129 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+parameters:
+  test_pool_definition: # where is compilation and unit test execution happening?
+  e2e_pool_definition: # where is e2e test execution happening?
+  stage_name: # needed to make job names unique if they are included multiple times
+  environment: # used to pass environment variables into the downstream scripts
+
+jobs:
+- job: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
+  pool: ${{parameters.test_pool_definition}}
+  container: flink-build-container
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
+  workspace:
+clean: all
+  steps:
+
+  # Preparation
+  - task: CacheBeta@1
+inputs:
+  key: $(CACHE_KEY)
+  restoreKeys: $(CACHE_FALLBACK_KEY)
+  path: $(MAVEN_CACHE_FOLDER)
+  cacheHitVar: CACHE_RESTORED
+continueOnError: true # continue the build even if the cache fails.
+displayName: Cache Maven local repo
+
+  # Compile
+  - script: STAGE=compile ${{parameters.environment}} ./tools/azure_controller.sh compile
+displayName: Build
+
+  # upload artifacts for next stage
+  - task: PublishPipelineArtifact@1
+inputs:
+  path: $(CACHE_FLINK_DIR)
+  artifact: FlinkCompileCacheDir-${{parameters.stage_name}}
+
+- job: test_${{parameters.stage_name}}
+  dependsOn: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
+  pool: ${{parameters.test_pool_definition}}
+  container: flink-build-container
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
+  workspace:
+clean: all
 
 Review comment:
   changed to
   ```
 workspace:
   clean: all # this cleans the entire workspace directory before running a new job
   # It is necessary because the custom build machines are reused for tests.
   # See also https://docs.microsoft.com/en-us/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml#workspace
 
   ``` 




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378164978
 
 

 ##
 File path: tools/azure-pipelines/jobs-template.yml
 ##
 @@ -0,0 +1,129 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+parameters:
+  test_pool_definition: # where is compilation and unit test execution happening?
+  e2e_pool_definition: # where is e2e test execution happening?
+  stage_name: # needed to make job names unique if they are included multiple times
+  environment: # used to pass environment variables into the downstream scripts
+
+jobs:
+- job: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
 
 Review comment:
  At the time of your review, I would have agreed. But I have spent the past few days fixing the end-to-end test execution, and I would suggest keeping the e2e mode in.
  This would allow people to run the end-to-end tests at least in their private forks, and hopefully soon through the CI bot as well.




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378165147
 
 

 ##
 File path: tools/azure-pipelines/jobs-template.yml
 ##
 @@ -0,0 +1,129 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+parameters:
+  test_pool_definition: # where is compilation and unit test execution happening?
+  e2e_pool_definition: # where is e2e test execution happening?
+  stage_name: # needed to make job names unique if they are included multiple times
+  environment: # used to pass environment variables into the downstream scripts
+
+jobs:
+- job: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
+  pool: ${{parameters.test_pool_definition}}
+  container: flink-build-container
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
+  workspace:
+clean: all
+  steps:
+
+  # Preparation
+  - task: CacheBeta@1
+inputs:
+  key: $(CACHE_KEY)
+  restoreKeys: $(CACHE_FALLBACK_KEY)
+  path: $(MAVEN_CACHE_FOLDER)
+  cacheHitVar: CACHE_RESTORED
+continueOnError: true # continue the build even if the cache fails.
+displayName: Cache Maven local repo
+
+  # Compile
+  - script: STAGE=compile ${{parameters.environment}} ./tools/azure_controller.sh compile
+displayName: Build
+
+  # upload artifacts for next stage
+  - task: PublishPipelineArtifact@1
+inputs:
+  path: $(CACHE_FLINK_DIR)
+  artifact: FlinkCompileCacheDir-${{parameters.stage_name}}
+
+- job: test_${{parameters.stage_name}}
+  dependsOn: compile_${{parameters.stage_name}}
+  condition: not(eq(variables['MODE'], 'e2e'))
+  pool: ${{parameters.test_pool_definition}}
+  container: flink-build-container
+  timeoutInMinutes: 240
+  cancelTimeoutInMinutes: 1
+  workspace:
+clean: all
+  strategy:
+matrix:
+  core:
+module: core
+  python:
+module: python
+  libraries:
+module: libraries
+  blink_planner:
+module: blink_planner
+  connectors:
+module: connectors
+  kafka_gelly:
+module: kafka/gelly
+  tests:
+module: tests
+  legacy_scheduler_core:
+module: legacy_scheduler_core
+  legacy_scheduler_tests:
+module: legacy_scheduler_tests
+  misc:
+module: misc
+  steps:
+
+  # download artifacts
+  - task: DownloadPipelineArtifact@2
+inputs:
+  path: $(CACHE_FLINK_DIR)
+  artifact: FlinkCompileCacheDir-${{parameters.stage_name}}
+
+  # recreate "build-target" symlink for python tests
+  - script: |
+  ls -lisah $(CACHE_FLINK_DIR)
+  ls -lisah .
+  ln -snf $(CACHE_FLINK_DIR)/flink-dist/target/flink-*-SNAPSHOT-bin/flink-*-SNAPSHOT $(CACHE_FLINK_DIR)/build-target
+displayName: Recreate 'build-target' symlink
+  # Test
+  - script: STAGE=test ${{parameters.environment}} ./tools/azure_controller.sh $(module)
+displayName: Test - $(module)
+
+  - task: PublishTestResults@2
+inputs:
+  testResultsFormat: 'JUnit'
+
+
+- job: e2e_${{parameters.stage_name}}
 
 Review comment:
   It is not wip anymore. End to end tests are working ✅ 




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378164317
 
 

 ##
 File path: tools/azure-pipelines/build-apache-repo.yml
 ##
 @@ -0,0 +1,94 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+#
+# This file defines the Flink build for the "apache/flink" repository, including
+# the following:
+#  - PR builds
+#  - custom triggered e2e tests
+#  - nightly builds
+
+
+
+schedules:
+- cron: "0 0 * * *"
+  displayName: Daily midnight build
+  branches:
+include:
+- master
+  always: true # run even if there were no changes to the mentioned branches
+
+resources:
+  containers:
+  # Container with Maven 3.2.5, SSL to have the same environment everywhere.
+  - container: flink-build-container
+image: rmetzger/flink-ci:ubuntu-jdk8-amd64-e005e00
+
+
+variables:
+  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository
+  MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
+  CACHE_KEY: maven | $(Agent.OS) | **/pom.xml, !**/target/**
+  CACHE_FALLBACK_KEY: maven | $(Agent.OS)
+  CACHE_FLINK_DIR: $(Pipeline.Workspace)/flink_cache
+
+stages:
+  # CI / PR triggered stage:
+  - stage: ci_build
+displayName: "CI Build (custom builders)"
+condition: not(eq(variables['Build.Reason'], in('Schedule', 'Manual')))
+jobs:
+  - template: jobs-template.yml
+parameters:
+  stage_name: ci_build
 
 Review comment:
  I added the following documentation to the `jobs-template.yml`:
  `defines a unique identifier for all jobs in a stage (in case the jobs are added multiple times to a stage)`
  It is a bit confusing that the stage name is under `jobs`. We can rename that parameter if we have a better name for it.
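
A hedged sketch (illustrative names, not the actual Flink templates) of why such a parameter is needed: Azure Pipelines requires job identifiers to be unique, so instantiating the same job template twice in one stage only works if something like `stage_name` is baked into the job names.

```yaml
# jobs-template.yml (sketch)
parameters:
  stage_name: ''   # unique suffix for all jobs this template defines

jobs:
- job: compile_${{ parameters.stage_name }}   # e.g. compile_ci_build
  steps:
  - script: echo "Compiling for ${{ parameters.stage_name }}"

# A consuming pipeline can then add the template twice without a name clash:
#
#   jobs:
#   - template: jobs-template.yml
#     parameters:
#       stage_name: ci_build
#   - template: jobs-template.yml
#     parameters:
#       stage_name: cron_build_default
```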




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378162555
 
 

 ##
 File path: azure-pipelines.yml
 ##
 @@ -13,23 +13,44 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+#
+# This file defines an Azure Pipeline build for testing Flink. It is intended to be used
+# with a free Azure Pipelines account.
+# It has the following features:
+#  - default builds for pushes / pull requests to a Flink fork and custom AZP account
 
 Review comment:
   I clarified the language.




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378161731
 
 

 ##
 File path: tools/azure-pipelines/build-apache-repo.yml
 ##
 @@ -0,0 +1,94 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+#
+# This file defines the Flink build for the "apache/flink" repository, including
+# the following:
+#  - PR builds
+#  - custom triggered e2e tests
+#  - nightly builds
+
+
+
+schedules:
+- cron: "0 0 * * *"
+  displayName: Daily midnight build
+  branches:
+include:
+- master
 
 Review comment:
   I will fix this as part of FLINK-15834




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-12 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r378161079
 
 

 ##
 File path: azure-pipelines.yml
 ##
 @@ -13,23 +13,44 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+#
+# This file defines an Azure Pipeline build for testing Flink. It is intended to be used
+# with a free Azure Pipelines account.
+# It has the following features:
+#  - default builds for pushes / pull requests to a Flink fork and custom AZP account
+#  - end2end tests
+#
+#
+# For the "apache/flink" repository, we are using the pipeline definition located in
+#   tools/azure-pipelines/build-apache-repo.yml
+# That file points to custom, self-hosted build agents for faster pull request build processing and
+# integration with Flinkbot.
+#
 
-trigger:
-  branches:
-include:
-- '*' 
 
 resources:
   containers:
-  # Container with Maven 3.2.5 to have the same environment everywhere.
+  # Container with Maven 3.2.5, SSL to have the same environment everywhere.
   - container: flink-build-container
-image: rmetzger/flink-ci:3
-  repositories:
-- repository: templates
-  type: github
-  name: flink-ci/flink-azure-builds
-  endpoint: flink-ci
+image: rmetzger/flink-ci:ubuntu-jdk8-amd64-e005e00
+
+variables:
+  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository
+  MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
+  CACHE_KEY: maven | $(Agent.OS) | **/pom.xml, !**/target/**
+  CACHE_FALLBACK_KEY: maven | $(Agent.OS)
+  CACHE_FLINK_DIR: $(Pipeline.Workspace)/flink_cache
+
 
 jobs:
-- template: flink-build-jobs.yml@templates
+  - template: tools/azure-pipelines/jobs-template.yml
+parameters:
+  stage_name: ci_build
+  test_pool_definition:
 
 Review comment:
  This defines the hardware pool for compilation and unit test execution in the `jobs-template.yml`.
   I will add some clarifying comments into the source.
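
As a sketch of how such a parameter works (assumed shape, mirroring the parameter names in this PR): the pool object the caller passes is spliced verbatim into the template's `pool` key.

```yaml
# Caller side (sketch): either a self-hosted pool or a Microsoft-hosted image
# - template: jobs-template.yml
#   parameters:
#     test_pool_definition:
#       name: Default              # self-hosted agent pool
#     e2e_pool_definition:
#       vmImage: 'ubuntu-latest'   # Microsoft-hosted pool

# Template side: the object is inserted as-is under `pool`
parameters:
  test_pool_definition: {}

jobs:
- job: test
  pool: ${{ parameters.test_pool_definition }}
  steps:
  - script: echo "Running on the configured pool"
```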




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-06 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r375878864
 
 

 ##
 File path: azure-pipelines.yml
 ##
 @@ -13,23 +13,44 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+#
+# This file defines an Azure Pipeline build for testing Flink. It is intended to be used
+# with a free Azure Pipelines account.
+# It has the following features:
+#  - default builds for pushes / pull requests to a Flink fork and custom AZP account
+#  - end2end tests
+#
+#
+# For the "apache/flink" repository, we are using the pipeline definition located in
+#   tools/azure-pipelines/build-apache-repo.yml
+# That file points to custom, self-hosted build agents for faster pull request build processing and
+# integration with Flinkbot.
+#
 
-trigger:
-  branches:
-include:
-- '*' 
 
 resources:
   containers:
-  # Container with Maven 3.2.5 to have the same environment everywhere.
+  # Container with Maven 3.2.5, SSL to have the same environment everywhere.
   - container: flink-build-container
-image: rmetzger/flink-ci:3
-  repositories:
-- repository: templates
-  type: github
-  name: flink-ci/flink-azure-builds
-  endpoint: flink-ci
+image: rmetzger/flink-ci:ubuntu-jdk8-amd64-e005e00
+
+variables:
+  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository
+  MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
+  CACHE_KEY: maven | $(Agent.OS) | **/pom.xml, !**/target/**
 
 Review comment:
   I will add some comments to clarify.




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-06 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r375878754
 
 

 ##
 File path: azure-pipelines.yml
 ##
 @@ -13,23 +13,44 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+#
+# This file defines an Azure Pipeline build for testing Flink. It is intended to be used
+# with a free Azure Pipelines account.
+# It has the following features:
+#  - default builds for pushes / pull requests to a Flink fork and custom AZP account
+#  - end2end tests
+#
+#
+# For the "apache/flink" repository, we are using the pipeline definition located in
+#   tools/azure-pipelines/build-apache-repo.yml
+# That file points to custom, self-hosted build agents for faster pull request build processing and
+# integration with Flinkbot.
+#
 
-trigger:
-  branches:
-include:
-- '*' 
 
 resources:
   containers:
-  # Container with Maven 3.2.5 to have the same environment everywhere.
+  # Container with Maven 3.2.5, SSL to have the same environment everywhere.
   - container: flink-build-container
-image: rmetzger/flink-ci:3
-  repositories:
-- repository: templates
-  type: github
-  name: flink-ci/flink-azure-builds
-  endpoint: flink-ci
+image: rmetzger/flink-ci:ubuntu-jdk8-amd64-e005e00
+
+variables:
+  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository
+  MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
+  CACHE_KEY: maven | $(Agent.OS) | **/pom.xml, !**/target/**
 
 Review comment:
  Since we are using a fallback key, the new cache file will be based on a previous cache.




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-02-06 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r375878025
 
 

 ##
 File path: azure-pipelines.yml
 ##
 @@ -13,23 +13,44 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+#
+# This file defines an Azure Pipeline build for testing Flink. It is intended to be used
+# with a free Azure Pipelines account.
+# It has the following features:
+#  - default builds for pushes / pull requests to a Flink fork and custom AZP account
+#  - end2end tests
+#
+#
+# For the "apache/flink" repository, we are using the pipeline definition located in
+#   tools/azure-pipelines/build-apache-repo.yml
+# That file points to custom, self-hosted build agents for faster pull request build processing and
+# integration with Flinkbot.
+#
 
-trigger:
-  branches:
-include:
-- '*' 
 
 resources:
   containers:
-  # Container with Maven 3.2.5 to have the same environment everywhere.
+  # Container with Maven 3.2.5, SSL to have the same environment everywhere.
   - container: flink-build-container
-image: rmetzger/flink-ci:3
-  repositories:
-- repository: templates
-  type: github
-  name: flink-ci/flink-azure-builds
-  endpoint: flink-ci
+image: rmetzger/flink-ci:ubuntu-jdk8-amd64-e005e00
+
+variables:
+  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository
+  MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
+  CACHE_KEY: maven | $(Agent.OS) | **/pom.xml, !**/target/**
+  CACHE_FALLBACK_KEY: maven | $(Agent.OS)
 
 Review comment:
   When the `CACHE_KEY` has a miss, it'll use the fallback key.
  What this means in practice is that we are downloading the cache, even if there were changes to the pom files.
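
A sketch of that behavior (the variable names come from this PR's config; the task wiring here is assembled for illustration, not quoted from the diff): the Cache task tries the exact key first, and on a miss restores the newest cache matching the fallback prefix, so only the delta has to be downloaded from Maven Central.

```yaml
- task: Cache@2
  inputs:
    # Exact key: misses whenever any pom.xml changes
    key: $(CACHE_KEY)
    # Prefix fallback: on a miss, restore the newest cache matching this prefix
    restoreKeys: $(CACHE_FALLBACK_KEY)
    path: $(MAVEN_CACHE_FOLDER)
    cacheHitVar: CACHE_RESTORED
  continueOnError: true # continue the build even if the cache fails
  displayName: Cache Maven local repo
```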




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-01-31 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r373460583
 
 

 ##
 File path: tools/azure-pipelines/build-apache-repo.yml
 ##
 @@ -0,0 +1,94 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+#
+# This file defines the Flink build for the "apache/flink" repository, including
+# the following:
+#  - PR builds
+#  - custom triggered e2e tests
+#  - nightly builds
+
+
+
+schedules:
+- cron: "0 0 * * *"
+  displayName: Daily midnight build
+  branches:
+include:
+- master
+  always: true # run even if there were no changes to the mentioned branches
+
+resources:
+  containers:
+  # Container with Maven 3.2.5, SSL to have the same environment everywhere.
+  - container: flink-build-container
+image: rmetzger/flink-ci:ubuntu-jdk8-amd64-e005e00
+
+
+variables:
+  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository
+  MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
+  CACHE_KEY: maven | $(Agent.OS) | **/pom.xml, !**/target/**
+  CACHE_FALLBACK_KEY: maven | $(Agent.OS)
+  CACHE_FLINK_DIR: $(Pipeline.Workspace)/flink_cache
+
+stages:
+  # CI / PR triggered stage:
+  - stage: ci_build
+displayName: "CI Build (custom builders)"
+condition: not(eq(variables['Build.Reason'], in('Schedule', 'Manual')))
+jobs:
+  - template: jobs-template.yml
+parameters:
+  stage_name: ci_build
+  test_pool_definition:
+name: Default
+  e2e_pool_definition:
+vmImage: 'ubuntu-latest'
+  environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.11"
+
+  # Special stage for midnight builds:
+  - stage: cron_build_on_azure_os_free_pool
+displayName: "Cron build on free Azure Resource Pool"
+dependsOn: [] # depending on an empty array makes the stages run in parallel
+condition: or(eq(variables['Build.Reason'], 'Schedule'), eq(variables['MODE'], 'nightly'))
+jobs:
+  - template: jobs-template.yml
+parameters:
+  stage_name: cron_build_default
+  test_pool_definition:
+vmImage: 'ubuntu-latest'
+  e2e_pool_definition:
+vmImage: 'ubuntu-latest'
+  environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.11"
+  - template: jobs-template.yml
+parameters:
+  stage_name: cron_build_scala2_12
+  test_pool_definition:
+vmImage: 'ubuntu-latest'
+  e2e_pool_definition:
+vmImage: 'ubuntu-latest'
+  environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.12 -Phive-1.2.1"
+  - template: jobs-template.yml
+parameters:
+  stage_name: cron_build_jdk11
 
 Review comment:
   https://issues.apache.org/jira/browse/FLINK-15834




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-01-31 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r373459195
 
 

 ##
 File path: flink-end-to-end-tests/run-nightly-tests.sh
 ##
 @@ -88,8 +88,11 @@ run_test "Resuming Externalized Checkpoint after terminal failure (rocks, increm
 # Docker
 

 
-run_test "Running Kerberized YARN on Docker test (default input)" "$END_TO_END_DIR/test-scripts/test_yarn_kerberos_docker.sh"
-run_test "Running Kerberized YARN on Docker test (custom fs plugin)" "$END_TO_END_DIR/test-scripts/test_yarn_kerberos_docker.sh dummy-fs"
+# Ignore these tests on Azure
 
 Review comment:
   Ok --> https://issues.apache.org/jira/browse/FLINK-15833




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-01-31 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r373457638
 
 

 ##
 File path: tools/azure-pipelines/build-apache-repo.yml
 ##
 @@ -0,0 +1,94 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+#
+# This file defines the Flink build for the "apache/flink" repository, including
+# the following:
+#  - PR builds
+#  - custom triggered e2e tests
+#  - nightly builds
+
+
+
+schedules:
+- cron: "0 0 * * *"
+  displayName: Daily midnight build
+  branches:
+include:
+- master
+  always: true # run even if there were no changes to the mentioned branches
+
+resources:
+  containers:
+  # Container with Maven 3.2.5, SSL to have the same environment everywhere.
+  - container: flink-build-container
+image: rmetzger/flink-ci:ubuntu-jdk8-amd64-e005e00
+
+
+variables:
+  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository
+  MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
+  CACHE_KEY: maven | $(Agent.OS) | **/pom.xml, !**/target/**
+  CACHE_FALLBACK_KEY: maven | $(Agent.OS)
+  CACHE_FLINK_DIR: $(Pipeline.Workspace)/flink_cache
+
+stages:
+  # CI / PR triggered stage:
+  - stage: ci_build
+displayName: "CI Build (custom builders)"
+condition: not(eq(variables['Build.Reason'], in('Schedule', 'Manual')))
+jobs:
+  - template: jobs-template.yml
+parameters:
+  stage_name: ci_build
+  test_pool_definition:
+name: Default
+  e2e_pool_definition:
+vmImage: 'ubuntu-latest'
+  environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.11"
+
+  # Special stage for midnight builds:
+  - stage: cron_build_on_azure_os_free_pool
+displayName: "Cron build on free Azure Resource Pool"
+dependsOn: [] # depending on an empty array makes the stages run in parallel
+condition: or(eq(variables['Build.Reason'], 'Schedule'), eq(variables['MODE'], 'nightly'))
+jobs:
+  - template: jobs-template.yml
+parameters:
+  stage_name: cron_build_default
+  test_pool_definition:
+vmImage: 'ubuntu-latest'
+  e2e_pool_definition:
+vmImage: 'ubuntu-latest'
+  environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.11"
+  - template: jobs-template.yml
+parameters:
+  stage_name: cron_build_scala2_12
+  test_pool_definition:
+vmImage: 'ubuntu-latest'
+  e2e_pool_definition:
+vmImage: 'ubuntu-latest'
+  environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.12 -Phive-1.2.1"
+  - template: jobs-template.yml
+parameters:
+  stage_name: cron_build_jdk11
 
 Review comment:
   ok, thx for the fast response.




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-01-31 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r373387519
 
 

 ##
 File path: tools/azure-pipelines/build-apache-repo.yml
 ##
 @@ -0,0 +1,94 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+#
+# This file defines the Flink build for the "apache/flink" repository, including
+# the following:
+#  - PR builds
+#  - custom triggered e2e tests
+#  - nightly builds
+
+
+
+schedules:
+- cron: "0 0 * * *"
+  displayName: Daily midnight build
+  branches:
+include:
+- master
+  always: true # run even if there were no changes to the mentioned branches
+
+resources:
+  containers:
+  # Container with Maven 3.2.5, SSL to have the same environment everywhere.
+  - container: flink-build-container
+image: rmetzger/flink-ci:ubuntu-jdk8-amd64-e005e00
+
+
+variables:
+  MAVEN_CACHE_FOLDER: $(Pipeline.Workspace)/.m2/repository
+  MAVEN_OPTS: '-Dmaven.repo.local=$(MAVEN_CACHE_FOLDER)'
+  CACHE_KEY: maven | $(Agent.OS) | **/pom.xml, !**/target/**
+  CACHE_FALLBACK_KEY: maven | $(Agent.OS)
+  CACHE_FLINK_DIR: $(Pipeline.Workspace)/flink_cache
+
+stages:
+  # CI / PR triggered stage:
+  - stage: ci_build
+displayName: "CI Build (custom builders)"
+condition: not(eq(variables['Build.Reason'], in('Schedule', 'Manual')))
+jobs:
+  - template: jobs-template.yml
+parameters:
+  stage_name: ci_build
+  test_pool_definition:
+name: Default
+  e2e_pool_definition:
+vmImage: 'ubuntu-latest'
+  environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.11"
+
+  # Special stage for midnight builds:
+  - stage: cron_build_on_azure_os_free_pool
+displayName: "Cron build on free Azure Resource Pool"
+dependsOn: [] # depending on an empty array makes the stages run in parallel
+condition: or(eq(variables['Build.Reason'], 'Schedule'), eq(variables['MODE'], 'nightly'))
+jobs:
+  - template: jobs-template.yml
+parameters:
+  stage_name: cron_build_default
+  test_pool_definition:
+vmImage: 'ubuntu-latest'
+  e2e_pool_definition:
+vmImage: 'ubuntu-latest'
+  environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.11"
+  - template: jobs-template.yml
+parameters:
+  stage_name: cron_build_scala2_12
+  test_pool_definition:
+vmImage: 'ubuntu-latest'
+  e2e_pool_definition:
+vmImage: 'ubuntu-latest'
+  environment: PROFILE="-Dhadoop.version=2.8.3 -Dinclude_hadoop_aws -Dscala-2.12 -Phive-1.2.1"
+  - template: jobs-template.yml
+parameters:
+  stage_name: cron_build_jdk11
 
 Review comment:
  Yeah, no. This is some leftover work in progress.
  What do you prefer: adding proper jdk11 support as part of this PR, or removing it now and doing it in a separate PR? (I prefer the latter)




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-01-31 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r373384117
 
 

 ##
 File path: flink-end-to-end-tests/run-nightly-tests.sh
 ##
 @@ -88,8 +88,11 @@ run_test "Resuming Externalized Checkpoint after terminal failure (rocks, increm
 # Docker
 

 
-run_test "Running Kerberized YARN on Docker test (default input)" "$END_TO_END_DIR/test-scripts/test_yarn_kerberos_docker.sh"
-run_test "Running Kerberized YARN on Docker test (custom fs plugin)" "$END_TO_END_DIR/test-scripts/test_yarn_kerberos_docker.sh dummy-fs"
+# Ignore these tests on Azure
 
 Review comment:
  In these tests, the TaskManagers are not starting on YARN, probably due to memory constraints.
   Do you agree to file a JIRA ticket to fix this?




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-01-31 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r373382927
 
 

 ##
 File path: tools/travis_watchdog.sh
 ##
 @@ -273,27 +280,27 @@ cd ../../
 case $TEST in
 (misc)
 if [ $EXIT_CODE == 0 ]; then
-printf "\n\n==\n"
-printf "Running bash end-to-end tests\n"
-printf "==\n"
+echo "\n\n==\n"
+echo "Running bash end-to-end tests\n"
+echo "==\n"
 
 FLINK_DIR=build-target flink-end-to-end-tests/run-pre-commit-tests.sh
 
 EXIT_CODE=$?
 else
-printf "\n==\n"
-printf "Previous build failure detected, skipping bash end-to-end tests.\n"
-printf "==\n"
+echo "\n==\n"
+echo "Previous build failure detected, skipping bash end-to-end tests.\n"
+echo "==\n"
 fi
 if [ $EXIT_CODE == 0 ]; then
-printf "\n\n==\n"
-printf "Running java end-to-end tests\n"
 
 Review comment:
  I did not know that there are intentions to drop this file.
  If I'm not mistaken, printf did not produce any output on Azure?! I don't exactly remember. If this is a big problem for you, I can invest time into this.
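
For context, a small illustration of the behavioral difference the diff above touches (this is generic bash behavior, not taken from the Flink scripts): bash's builtin `echo` without `-e` does not interpret `\n` escapes, while `printf` does, so swapping `printf` for `echo` changes how those separator strings render.

```shell
#!/usr/bin/env bash
# printf interprets backslash escapes in its format string:
printf 'a\nb\n'        # prints "a" and "b" on separate lines

# bash's builtin echo (without -e) prints the escape literally:
echo 'a\nb'            # prints a\nb verbatim

# echo -e restores escape interpretation:
echo -e 'a\nb'         # prints "a" and "b" on separate lines
```

This is one reason CI scripts often standardize on `printf` for portable formatted output.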




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-01-31 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build 
system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r373380548
 
 

 ##
 File path: tools/travis_watchdog.sh
 ##
 @@ -68,7 +69,7 @@ MVN_TEST_MODULES=$(get_test_modules_for_stage ${TEST})
 # Flink, which however should all be built locally. see FLINK-7230
 #
 MVN_LOGGING_OPTIONS="-Dlog.dir=${ARTIFACTS_DIR} -Dlog4j.configuration=file://$LOG4J_PROPERTIES -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn"
-MVN_COMMON_OPTIONS="-nsu -Dflink.forkCount=2 -Dflink.forkCountTestPackage=2 
-Dfast -B -Pskip-webui-build $MVN_LOGGING_OPTIONS"
+MVN_COMMON_OPTIONS="-nsu -Dflink.forkCount=2 -Dflink.forkCountTestPackage=2 
-Dfast -Dmaven.wagon.http.pool=false -B -Pskip-webui-build $MVN_LOGGING_OPTIONS"
 
 Review comment:
   The number of connection timeouts was significantly reduced.
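As an illustrative sketch of where such a flag lands (the variable names mirror the hunk above, but the invocation is an assumption, not taken verbatim from the script): `-Dmaven.wagon.http.pool=false` disables Wagon's HTTP connection pooling, so each artifact download opens a fresh connection instead of reusing one that a flaky CI network may have silently dropped.

```shell
# Illustrative only: assembling a shared Maven options string and a command
# from it, the way a CI wrapper script typically does.
MVN_COMMON_OPTIONS="-nsu -B -Dmaven.wagon.http.pool=false"
MVN="mvn clean install $MVN_COMMON_OPTIONS"
echo "$MVN"
```

The trade-off is throughput for resilience: without pooling every request pays the connection setup cost, but stale pooled connections can no longer cause spurious timeouts.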




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-01-31 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r373379409
 
 

 ##
 File path: flink-end-to-end-tests/test-scripts/test_streaming_kinesis.sh
 ##
 @@ -27,12 +27,16 @@ export AWS_ACCESS_KEY_ID=flinkKinesisTestFakeAccessKeyId
 export AWS_SECRET_KEY=flinkKinesisTestFakeAccessKey
 
 KINESALITE_PORT=4567
+KINESALITE_HOST=kinesalite-container
+KINESALITE_NETWORK=some
 
 Review comment:
   I'll revert this change. These changes are leftovers of my attempt to make 
this test pass in a docker-in-docker scenario.




[GitHub] [flink] rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines

2020-01-31 Thread GitBox
rmetzger commented on a change in pull request #10976: [FLINK-13978][build system] Add experimental support for building on Azure Pipelines
URL: https://github.com/apache/flink/pull/10976#discussion_r373374860
 
 

 ##
 File path: azure-pipelines.yml
 ##
 @@ -13,23 +13,44 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+#
+# This file defines an Azure Pipeline build for testing Flink. It is intended to be used
+# with a free Azure Pipelines account.
+# It has the following features:
+#  - default builds for pushes / pull requests to a Flink fork and custom AZP account
+#  - end2end tests
+#
+#
+# For the "apache/flink" repository, we are using the pipeline definition located in
+#   tools/azure-pipelines/build-apache-repo.yml
+# That file points to custom, self-hosted build agents for faster pull request build processing and
+# integration with Flinkbot.
+#
 
-trigger:
-  branches:
-include:
-- '*' 
 
 resources:
   containers:
-  # Container with Maven 3.2.5 to have the same environment everywhere.
+  # Container with Maven 3.2.5, SSL to have the same environment everywhere.
   - container: flink-build-container
-image: rmetzger/flink-ci:3
-  repositories:
-- repository: templates
-  type: github
-  name: flink-ci/flink-azure-builds
-  endpoint: flink-ci
+image: rmetzger/flink-ci:ubuntu-jdk8-amd64-e005e00
 
 Review comment:
   This is not a repository reference. It refers to a utility docker image.
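To illustrate the distinction the comment draws (a sketch assembled from the hunk above, not a complete pipeline file): an Azure Pipelines `containers` resource names a Docker image that jobs execute in, whereas a `repositories` resource, which this change removes, would point at another git repository, for example one holding shared templates.

```yaml
resources:
  containers:
    # A container resource: a utility Docker image the build jobs run inside,
    # not a reference to another repository.
    - container: flink-build-container
      image: rmetzger/flink-ci:ubuntu-jdk8-amd64-e005e00
```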

