viirya commented on a change in pull request #33683:
URL: https://github.com/apache/spark/pull/33683#discussion_r689245759



##########
File path: docs/structured-streaming-programming-guide.md
##########
@@ -1792,7 +1792,85 @@ hence the number is not same as the number of original input rows. You'd like to
 There's a known workaround: split your streaming query into multiple queries per stateful operator, and ensure
 end-to-end exactly once per query. Ensuring end-to-end exactly once for the last query is optional.
 
-### State Store and task locality
+### State Store
+
+State store is a versioned key-value store which provides both read and write operations. In
+structured streaming, we use the state store provider to handle the stateful operations across
+batches. There are two built-in state store provider implementations. End users can also implement
+their own state store provider by extending StateStoreProvider interface.
+
+#### HDFS state store provider
+
+The HDFS backend state store provider is the default implementation of [[StateStoreProvider]] and
+[[StateStore]] in which all the data is stated in memory map in the first stage, and then backed
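
For context on the provider selection described in the quoted docs: a minimal sketch of switching
between the built-in providers via configuration, assuming Spark 3.2+ where the RocksDB provider
class is available. The config key spark.sql.streaming.stateStore.providerClass picks the
implementation; the default is the HDFS-backed provider.

    import org.apache.spark.sql.SparkSession

    // Sketch only: builds a session that uses the RocksDB state store provider
    // instead of the default HDFS-backed one (class names as of Spark 3.2+).
    val spark = SparkSession.builder()
      .appName("StateStoreProviderExample")
      .config("spark.sql.streaming.stateStore.providerClass",
        "org.apache.spark.sql.execution.streaming.state.RocksDBStateStoreProvider")
      .getOrCreate()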

Review comment:
       stated? stored?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


