This is an automated email from the ASF dual-hosted git repository.

jincheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
     new 40fe63c  [FLINK-11972] [docs] Add necessary notes about running streaming bucketing end-to-end test in README
40fe63c is described below

commit 40fe63c5057a9388559c90b75e45ac24a1a387d0
Author: Yu Li <l...@apache.org>
AuthorDate: Thu Mar 21 06:39:09 2019 +0800

    [FLINK-11972] [docs] Add necessary notes about running streaming bucketing end-to-end test in README
    
    This closes #8027.
---
 flink-end-to-end-tests/README.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/flink-end-to-end-tests/README.md b/flink-end-to-end-tests/README.md
index e575408..d7f3b6d 100644
--- a/flink-end-to-end-tests/README.md
+++ b/flink-end-to-end-tests/README.md
@@ -33,6 +33,12 @@ $ FLINK_DIR=<flink dir> flink-end-to-end-tests/run-single-test.sh your_test.sh a
 
 **NOTICE**: Please _DON'T_ run the scripts with explicit command like ```sh run-nightly-tests.sh``` since ```#!/usr/bin/env bash``` is specified as the header of the scripts to assure flexibility on different systems.
 
+### Streaming bucketing test
+
+Before running this nightly test case (test_streaming_bucketing.sh), please make sure to run `mvn -DskipTests install` in the `flink-end-to-end-tests` directory, so that the jar files the test needs, such as `BucketingSinkTestProgram.jar`, are generated.
+In addition, starting from the 1.8.0 release you need to make sure that `HADOOP_CLASSPATH` is [correctly set](https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/hadoop.html) or that the pre-bundled Hadoop jar has been put into the `lib` folder of `FLINK_DIR` (you can find the binaries on the [Downloads page](http://flink.apache.org/downloads.html) of the Flink project site).
+
 ### Kubernetes test
 
 Kubernetes test (test_kubernetes_embedded_job.sh) assumes a running minikube cluster.
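
For illustration (an editorial sketch, not part of the committed README), the build step described in the added note boils down to something like the following; the `mvn -DskipTests install` command and the `BucketingSinkTestProgram.jar` name come from the note itself, while the `find` sanity check and the exact output location under `target/` are assumptions.

```sh
# Build the end-to-end test modules without running unit tests, so the
# jars the bucketing test needs (e.g. BucketingSinkTestProgram.jar) get built.
cd flink-end-to-end-tests
mvn -DskipTests install

# Optional sanity check (output path under target/ is an assumption):
find . -name 'BucketingSinkTestProgram*.jar'
```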

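Likewise, a hedged sketch of the two Hadoop options the note mentions for 1.8.0 and later: export `HADOOP_CLASSPATH` from an existing Hadoop installation, or copy a pre-bundled Hadoop jar from the Flink downloads page into `$FLINK_DIR/lib`; the exact jar file name below is an assumption and varies by version.

```sh
# Option 1: derive HADOOP_CLASSPATH from a local Hadoop installation
# (the `hadoop classpath` command ships with Hadoop).
export HADOOP_CLASSPATH=$(hadoop classpath)

# Option 2: drop a pre-bundled Hadoop jar into Flink's lib folder
# (file name is an assumption; use the bundle matching your setup).
cp flink-shaded-hadoop2-uber-2.8.3-1.8.0.jar "$FLINK_DIR/lib/"
```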