GitHub user glaksh100 opened a pull request:

    https://github.com/apache/flink/pull/6408

    [FLINK-9897] Make adaptive reads depend on run loop time instead of fetchIntervalMillis

    ## What is the purpose of the change
    [FLINK-9692](https://github.com/apache/flink/pull/6300) introduced the
feature of adapting `maxNumberOfRecordsPerFetch` based on the average size of
Kinesis records. That PR assumed a maximum of `1/fetchIntervalMillis`
reads per second. However, when the run loop of the `ShardConsumer` takes
longer than `fetchIntervalMillis` to process records,
`maxNumberOfRecordsPerFetch` is still sub-optimal. The purpose of this change
is to make adaptive reads more efficient by using the actual run loop
frequency to determine the number of reads per second and, from that,
`maxNumberOfRecordsPerFetch`. The change also refactors the run loop to be
more modular.
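
    A minimal, self-contained sketch of the frequency-based sizing (not the
actual diff; the class name, constant names, and the `averageRecordSizeBytes`
parameter are illustrative, while the 2 MB/sec per-shard read limit and the
10,000-record `GetRecords` cap are standard Kinesis limits):
    ```java
    public class AdaptiveFetchSizeSketch {

        // Kinesis per-shard read limit: 2 MB/sec
        private static final long SHARD_BYTES_PER_SECOND_LIMIT = 2 * 1024 * 1024;
        // Upper bound on records returned by a single GetRecords call
        private static final int GETRECORDS_MAX = 10_000;

        /** Sizes the next fetch from the measured run loop time instead of 1/fetchIntervalMillis. */
        public static int adaptRecordsToRead(long runLoopTimeNanos, long averageRecordSizeBytes) {
            // Reads/second the loop actually achieves
            double loopFrequencyHz = 1_000_000_000.0d / runLoopTimeNanos;
            // Bytes a single read may consume without exceeding the shard limit
            double bytesPerRead = SHARD_BYTES_PER_SECOND_LIMIT / loopFrequencyHz;
            int records = (int) (bytesPerRead / averageRecordSizeBytes);
            // Clamp to the valid GetRecords range
            return Math.max(1, Math.min(records, GETRECORDS_MAX));
        }
    }
    ```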
    
    
    ## Brief change log
    
      - `processingStartTimeNanos` records the start time of the run loop.
      - `processingEndTimeNanos` records the end time of the run loop.
      - `adjustRunLoopFrequency()` adjusts the end time to account for `sleepTimeMillis` (if any).
      - `runLoopTimeNanos` records the actual run loop time.
      - `adaptRecordsToRead()` calculates `maxNumberOfRecordsPerFetch` based on `runLoopTimeNanos` (see the sketch below).
      - The unused method `getAdaptiveMaxRecordsPerFetch()` is removed.
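
    The restructured loop, in outline (a hedged sketch using the names above;
the fetch/deserialize/emit step is elided, and the value of
`fetchIntervalMillis` and the average record size passed to
`adaptRecordsToRead()` are illustrative):
    ```java
    import java.util.concurrent.TimeUnit;

    public class RunLoopSketch {

        private final long fetchIntervalMillis = 200L; // illustrative interval
        private volatile boolean running = true;
        private int maxNumberOfRecordsPerFetch = 10_000;

        public void run() throws InterruptedException {
            long processingStartTimeNanos = System.nanoTime();
            while (running) {
                // getRecords() + deserialize + emit would happen here
                long processingEndTimeNanos = System.nanoTime();

                // Sleep out any remainder of fetchIntervalMillis and fold the
                // sleep into the measured end time
                processingEndTimeNanos =
                    adjustRunLoopFrequency(processingStartTimeNanos, processingEndTimeNanos);

                long runLoopTimeNanos = processingEndTimeNanos - processingStartTimeNanos;
                maxNumberOfRecordsPerFetch = AdaptiveFetchSizeSketch.adaptRecordsToRead(
                    runLoopTimeNanos, 1_024L /* illustrative average record size */);

                // The end of this iteration is the start of the next
                processingStartTimeNanos = processingEndTimeNanos;
            }
        }

        private long adjustRunLoopFrequency(long startNanos, long endNanos)
                throws InterruptedException {
            long sleepTimeMillis =
                fetchIntervalMillis - TimeUnit.NANOSECONDS.toMillis(endNanos - startNanos);
            if (sleepTimeMillis > 0) {
                Thread.sleep(sleepTimeMillis);
                endNanos += TimeUnit.MILLISECONDS.toNanos(sleepTimeMillis);
            }
            return endNanos;
        }
    }
    ```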
    
    ## Verifying this change
    
    This change is already covered by existing tests, such as
`ShardConsumerTest`. It has also been tested against a stream with the
following configuration:
    ```
    Number of Shards: 512
    Parallelism: 128
    ```
    
    ## Does this pull request potentially affect one of the following parts:
    
      - Dependencies (does it add or upgrade a dependency): (yes / **no**)
      - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
      - The serializers: (yes / **no** / don't know)
      - The runtime per-record code paths (performance sensitive): (yes / 
**no** / don't know)
      - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
      - The S3 file system connector: (yes / **no** / don't know)
    
    ## Documentation
    
      - Does this pull request introduce a new feature? (yes / **no**)
      - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/lyft/flink FLINK-9897.AdaptiveReadsRunLoop

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/6408.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #6408
    
----
commit 786556b9a9a509051a14772fbbd282db73e65252
Author: Lakshmi Gururaja Rao <glaksh100@...>
Date:   2018-07-24T18:44:08Z

    [FLINK-9897] Make adaptive reads depend on run loop time instead of fetch interval millis

----

