spark git commit: [SPARK-20844] Remove experimental from Structured Streaming APIs

2017-05-26 Thread zsxwing
Repository: spark
Updated Branches:
  refs/heads/branch-2.2 92837aeb4 -> 2b59ed4f1


[SPARK-20844] Remove experimental from Structured Streaming APIs

Now that Structured Streaming has been out for several Spark releases and has 
large production use cases, the `Experimental` label is no longer appropriate. 
I've left `InterfaceStability.Evolving`, however, as I think we may make a few 
changes to the pluggable Source & Sink API in Spark 2.3.

Author: Michael Armbrust 

Closes #18065 from marmbrus/streamingGA.
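
The change is mechanical across the files listed below. A representative sketch of what each hunk looks like (illustrative only, not a literal hunk from this commit; the exact doc comments and import lines vary per file):

```diff
-import org.apache.spark.annotation.Experimental;
 import org.apache.spark.annotation.InterfaceStability;

-/**
- * :: Experimental ::
- * ...
- */
-@Experimental
 @InterfaceStability.Evolving
 public class OutputMode {
```

The `@Experimental` annotation (and its `:: Experimental ::` doc marker) is dropped, while `@InterfaceStability.Evolving` stays in place to signal that the API may still change in minor releases.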


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2b59ed4f
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/2b59ed4f
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/2b59ed4f

Branch: refs/heads/branch-2.2
Commit: 2b59ed4f1d4e859d5987b6eaaee074260b2a12f8
Parents: 92837ae
Author: Michael Armbrust 
Authored: Fri May 26 13:33:23 2017 -0700
Committer: Shixiong Zhu 
Committed: Fri May 26 13:34:33 2017 -0700

--
 docs/structured-streaming-programming-guide.md  |  4 +-
 python/pyspark/sql/context.py   |  4 +-
 python/pyspark/sql/dataframe.py |  6 +--
 python/pyspark/sql/session.py   |  4 +-
 python/pyspark/sql/streaming.py | 42 ++--
 .../apache/spark/sql/streaming/OutputMode.java  |  3 --
 .../org/apache/spark/sql/streaming/Trigger.java |  7 -------
 .../scala/org/apache/spark/sql/Dataset.scala    |  2 -
 .../org/apache/spark/sql/ForeachWriter.scala|  4 +-
 .../scala/org/apache/spark/sql/SQLContext.scala |  2 -
 .../org/apache/spark/sql/SparkSession.scala |  2 -
 .../scala/org/apache/spark/sql/functions.scala  |  8 +---
 .../spark/sql/streaming/DataStreamReader.scala  |  3 +-
 .../spark/sql/streaming/DataStreamWriter.scala  |  4 +-
 .../spark/sql/streaming/ProcessingTime.scala    |  6 +--
 .../spark/sql/streaming/StreamingQuery.scala|  4 +-
 .../sql/streaming/StreamingQueryException.scala |  4 +-
 .../sql/streaming/StreamingQueryListener.scala  | 14 +--
 .../sql/streaming/StreamingQueryManager.scala   |  6 +--
 .../sql/streaming/StreamingQueryStatus.scala    |  4 +-
 .../apache/spark/sql/streaming/progress.scala   | 10 +
 21 files changed, 42 insertions(+), 101 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/2b59ed4f/docs/structured-streaming-programming-guide.md
--
diff --git a/docs/structured-streaming-programming-guide.md 
b/docs/structured-streaming-programming-guide.md
index bd01be9..6a25c99 100644
--- a/docs/structured-streaming-programming-guide.md
+++ b/docs/structured-streaming-programming-guide.md
@@ -1,6 +1,6 @@
 ---
 layout: global
-displayTitle: Structured Streaming Programming Guide [Experimental]
+displayTitle: Structured Streaming Programming Guide
 title: Structured Streaming Programming Guide
 ---
 
@@ -10,7 +10,7 @@ title: Structured Streaming Programming Guide
 # Overview
 Structured Streaming is a scalable and fault-tolerant stream processing engine 
built on the Spark SQL engine. You can express your streaming computation the 
same way you would express a batch computation on static data. The Spark SQL 
engine will take care of running it incrementally and continuously and updating 
the final result as streaming data continues to arrive. You can use the 
[Dataset/DataFrame API](sql-programming-guide.html) in Scala, Java, Python or R 
to express streaming aggregations, event-time windows, stream-to-batch joins, 
etc. The computation is executed on the same optimized Spark SQL engine. 
Finally, the system ensures end-to-end exactly-once fault-tolerance guarantees 
through checkpointing and Write Ahead Logs. In short, *Structured Streaming 
provides fast, scalable, fault-tolerant, end-to-end exactly-once stream 
processing without the user having to reason about streaming.*
 
-**Structured Streaming is still ALPHA in Spark 2.1** and the APIs are still 
experimental. In this guide, we are going to walk you through the programming 
model and the APIs. First, let's start with a simple example - a streaming word 
count.
+In this guide, we are going to walk you through the programming model and the 
APIs. First, let's start with a simple example - a streaming word count.
 
 # Quick Example
 Let’s say you want to maintain a running word count of text data received 
from a data server listening on a TCP socket. Let’s see how you can express 
this using Structured Streaming. You can see the full code in

http://git-wip-us.apache.org/repos/asf/spark/blob/2b59ed4f/python/pyspark/sql/context.py
--
diff --git a/python/pyspark/sql/context.py 

spark git commit: [SPARK-20844] Remove experimental from Structured Streaming APIs

2017-05-26 Thread zsxwing
Repository: spark
Updated Branches:
  refs/heads/master 0fd84b05d -> d935e0a9d


[SPARK-20844] Remove experimental from Structured Streaming APIs

Now that Structured Streaming has been out for several Spark releases and has 
large production use cases, the `Experimental` label is no longer appropriate. 
I've left `InterfaceStability.Evolving`, however, as I think we may make a few 
changes to the pluggable Source & Sink API in Spark 2.3.

Author: Michael Armbrust 

Closes #18065 from marmbrus/streamingGA.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/d935e0a9
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/d935e0a9
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/d935e0a9

Branch: refs/heads/master
Commit: d935e0a9d9bb3d3c74e9529e161648caa50696b7
Parents: 0fd84b0
Author: Michael Armbrust 
Authored: Fri May 26 13:33:23 2017 -0700
Committer: Shixiong Zhu 
Committed: Fri May 26 13:33:23 2017 -0700
