[ 
https://issues.apache.org/jira/browse/BEAM-8470?focusedWorklogId=346154&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-346154
 ]

ASF GitHub Bot logged work on BEAM-8470:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 19/Nov/19 18:14
            Start Date: 19/Nov/19 18:14
    Worklog Time Spent: 10m 
      Work Description: aromanenko-dev commented on issue #9866: [BEAM-8470] 
Create a new Spark runner based on Spark Structured streaming framework
URL: https://github.com/apache/beam/pull/9866#issuecomment-555636907
 
 
   Run Python PreCommit
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 346154)
    Time Spent: 10h 10m  (was: 10h)

> Create a new Spark runner based on Spark Structured streaming framework
> -----------------------------------------------------------------------
>
>                 Key: BEAM-8470
>                 URL: https://issues.apache.org/jira/browse/BEAM-8470
>             Project: Beam
>          Issue Type: Improvement
>          Components: runner-spark
>            Reporter: Etienne Chauchot
>            Assignee: Etienne Chauchot
>            Priority: Major
>          Time Spent: 10h 10m
>  Remaining Estimate: 0h
>
> h1. Why is it worth creating a new runner based on Structured Streaming?
> Because this new framework brings:
>  * Unified batch and streaming semantics (see the sketch after this list):
>  ** no more RDD/DStream distinction, as in Beam (only PCollection)
>  * Better state management:
>  ** incremental state updates instead of saving the whole state each time
>  ** no more synchronous saving delaying computation: per-batch and 
> per-partition delta files are saved asynchronously, with synchronous 
> put/get against an in-memory hashmap
>  * Schemas in Datasets:
>  ** the Dataset knows the structure of the data (fields) and can optimize 
> later on
>  ** this matches the schemas in Beam PCollections
>  * New Source API:
>  ** very close to Beam bounded and unbounded sources
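> A minimal sketch of the unified Dataset API and schema awareness described 
> above (illustration only, not runner code from this PR; the class name, 
> input path, and field names are hypothetical): the same relational logic 
> applies to the batch and the streaming read path, and the declared schema 
> lets Catalyst optimize the plan.
> {code:java}
> import org.apache.spark.sql.Dataset;
> import org.apache.spark.sql.Row;
> import org.apache.spark.sql.SparkSession;
> import org.apache.spark.sql.types.DataTypes;
> import org.apache.spark.sql.types.StructType;
>
> import static org.apache.spark.sql.functions.col;
>
> public class StructuredStreamingSketch {
>   public static void main(String[] args) {
>     SparkSession spark = SparkSession.builder()
>         .appName("structured-streaming-sketch")
>         .master("local[*]")
>         .getOrCreate();
>
>     // The schema is declared up front, so the engine knows the fields
>     // and Catalyst can optimize column access and the query plan.
>     StructType schema = new StructType()
>         .add("user", DataTypes.StringType)
>         .add("score", DataTypes.IntegerType);
>
>     // Streaming path: incremental source, same Dataset<Row> abstraction.
>     Dataset<Row> streamed = spark.readStream()
>         .schema(schema)
>         .json("/tmp/events");          // hypothetical input directory
>
>     // Batch path: identical transformation code, no RDD/DStream split.
>     Dataset<Row> batched = spark.read()
>         .schema(schema)
>         .json("/tmp/events");
>
>     // The same relational logic applies to either Dataset.
>     Dataset<Row> batchTotals = totalsPerUser(batched);
>     Dataset<Row> streamTotals = totalsPerUser(streamed);
>     batchTotals.show();
>     // streamTotals would be started as an incremental query via
>     // writeStream() instead of being materialized here.
>   }
>
>   private static Dataset<Row> totalsPerUser(Dataset<Row> events) {
>     return events.groupBy(col("user")).sum("score");
>   }
> }
> {code}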
> h1. Why make a new runner from scratch?
>  * The Structured Streaming framework is very different from the 
> RDD/DStream framework
> h1. We hope to gain
>  * A more up-to-date runner in terms of libraries: leverage new features
>  * Leverage lessons learned from the previous runners
>  * Better performance thanks to the DAG optimizer (Catalyst) and to 
> simplified code
>  * Simpler code that is easier to maintain
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
