This is an automated email from the ASF dual-hosted git repository.

twalthr pushed a commit to branch release-1.6
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.6 by this push:
     new 8601555  [FLINK-10532] [docs] Fix broken links in documentation
8601555 is described below

commit 8601555896b2016804303ce7baa6a8b49dd3b07e
Author: Timo Walther <twal...@apache.org>
AuthorDate: Fri Oct 12 22:53:08 2018 +0200

    [FLINK-10532] [docs] Fix broken links in documentation
---
 docs/dev/api_concepts.md      | 2 +-
 docs/dev/stream/python.md     | 6 +++---
 docs/dev/table/sourceSinks.md | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/dev/api_concepts.md b/docs/dev/api_concepts.md
index c421507..d1e6100 100644
--- a/docs/dev/api_concepts.md
+++ b/docs/dev/api_concepts.md
@@ -510,7 +510,7 @@ data.map(new MapFunction<String, Integer> () {
 
 #### Java 8 Lambdas
 
-Flink also supports Java 8 Lambdas in the Java API. Please see the full [Java 8 Guide]({{ site.baseurl }}/dev/java8.html).
+Flink also supports Java 8 Lambdas in the Java API. Please see the full [Java 8 Guide]({{ site.baseurl }}/dev/java_lambdas.html).
 
 {% highlight java %}
 data.filter(s -> s.startsWith("http://"));
diff --git a/docs/dev/stream/python.md b/docs/dev/stream/python.md
index 887d983..29b2f30 100644
--- a/docs/dev/stream/python.md
+++ b/docs/dev/stream/python.md
@@ -227,7 +227,7 @@ Data transformations transform one or more DataStreams into a new DataStream. Pr
 multiple transformations into sophisticated assemblies.
 
 This section gives a brief overview of the available transformations. The [transformations
-documentation](dataset_transformations.html) has a full description of all transformations with
+documentation](./operators/index.html) has a full description of all transformations with
 examples.
 
 <br />
@@ -322,7 +322,7 @@ data.reduce(Sum())
       <td>
        <p>Windows can be defined on already partitioned KeyedStreams. Windows group the data in each
        key according to some characteristic (e.g., the data that arrived within the last 5 seconds).
-        See <a href="windows.html">windows</a> for a complete description of windows.
+        See <a href="./operators/windows.html">windows</a> for a complete description of windows.
     {% highlight python %}
 keyed_stream.count_window(10, 5)  # Last 10 elements, sliding (jumping) by 5 elements
 
@@ -624,7 +624,7 @@ env.execute()
 
 A system-wide default parallelism for all execution environments can be defined by setting the
 `parallelism.default` property in `./conf/flink-conf.yaml`. See the
-[Configuration]({{ site.baseurl }}/setup/config.html) documentation for details.
+[Configuration]({{ site.baseurl }}/ops/config.html) documentation for details.
 
 {% top %}
 
diff --git a/docs/dev/table/sourceSinks.md b/docs/dev/table/sourceSinks.md
index d0cc78e..7b831b7 100644
--- a/docs/dev/table/sourceSinks.md
+++ b/docs/dev/table/sourceSinks.md
@@ -664,7 +664,7 @@ connector.debug=true
 
 ### Use a TableFactory in the Table & SQL API
 
-For a type-safe, programmatic approach with explanatory Scaladoc/Javadoc, the Table & SQL API offers descriptors in `org.apache.flink.table.descriptors` that translate into string-based properties. See the [built-in descriptors](connect.md) for sources, sinks, and formats as a reference.
+For a type-safe, programmatic approach with explanatory Scaladoc/Javadoc, the Table & SQL API offers descriptors in `org.apache.flink.table.descriptors` that translate into string-based properties. See the [built-in descriptors](connect.html) for sources, sinks, and formats as a reference.
 
 A connector for `MySystem` in our example can extend `ConnectorDescriptor` as shown below:
 
