GitHub user tdas commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13286#discussion_r64976764
  
    --- Diff: python/pyspark/sql/readwriter.py ---
    @@ -500,6 +500,26 @@ def mode(self, saveMode):
                 self._jwrite = self._jwrite.mode(saveMode)
             return self
     
    +    @since(2.0)
    +    def outputMode(self, outputMode):
    +        """Specifies how data of a streaming DataFrame/Dataset is written 
to a streaming sink.
    +
    +        Options include:
    +
    +        * `append`: Only the new rows in the streaming DataFrame/Dataset will be written to
    +           the sink
    +        * `complete`: All the rows in the streaming DataFrame/Dataset will be written to the sink
    +           every time there are some updates
    --- End diff ---
    
I want to write something that makes sense generally, without requiring an
understanding of triggers and the like. As it stands, since the trigger is
optional, one does not need to know about triggers at all to start running
queries in Structured Streaming; see the sketch below.
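
For illustration, here is a minimal sketch of that point: a streaming query that
sets an output mode but never configures a trigger. It uses the present-day
`readStream`/`writeStream` API with the built-in rate source and console sink,
so those method names are assumptions relative to the `DataFrameWriter` method
added in this diff:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("output-mode-sketch").getOrCreate()

    # The built-in "rate" source continuously emits (timestamp, value) rows.
    stream = spark.readStream.format("rate").load()

    # A running aggregation, so "complete" mode is meaningful: the full result
    # table is rewritten to the sink every time there are updates.
    counts = stream.groupBy("value").count()

    # outputMode is set, but no trigger is ever configured; the query starts
    # with the default trigger, so the user never has to know triggers exist.
    query = (counts.writeStream
                   .outputMode("complete")
                   .format("console")
                   .start())
    query.awaitTermination()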

