[ 
https://issues.apache.org/jira/browse/NIFI-1251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16528175#comment-16528175
 ] 

Peter Wicks commented on NIFI-1251:
-----------------------------------

I have a version just about ready. Unit tests are passing, but I still need to do 
real testing, because unit tests don't cover commits happening mid-run... :)

Will hopefully have a PR next week. Branch: 
[https://github.com/patricker/nifi/tree/NIFI-1251]

 

> Allow ExecuteSQL to send out large result sets in chunks
> --------------------------------------------------------
>
>                 Key: NIFI-1251
>                 URL: https://issues.apache.org/jira/browse/NIFI-1251
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: Extensions
>            Reporter: Mark Payne
>            Assignee: Peter Wicks
>            Priority: Major
>
> Currently, when using ExecuteSQL, if a result set is very large, it can take 
> quite a long time to pull back all of the results. It would be nice to have 
> the ability to specify the maximum number of records to put into a FlowFile, 
> so that if we pull back, say, 1,000,000 records, we can configure it to create 
> 1,000 FlowFiles, each with 1,000 records. This way, we can begin processing the 
> first 1,000 records while the next 1,000 are being pulled from the remote 
> database.
> This suggestion comes from Vinay via the dev@ mailing list:
> Is there a way to have a streaming feature when a large result set is fetched
> from the database, basically to read data from the database in chunks of
> records instead of loading the full result set into memory?
> As part of ExecuteSQL, can a property be specified called "FetchSize" which
> indicates how many rows should be fetched from the result set?
> Since I am a bit new to using NiFi, can anyone guide me on the above?
> Thanks in advance
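
The chunked-fetch idea described above can be sketched in a few lines. This is an 
illustrative Python/sqlite3 sketch of the general technique, not NiFi code: the 
function name fetch_in_chunks and the chunk size are hypothetical, and each yielded 
batch stands in for what ExecuteSQL would write to a separate FlowFile.

```python
# Sketch: fetch rows from a database in fixed-size batches so the full
# result set is never held in memory at once. Each batch would become
# one FlowFile in the proposed ExecuteSQL behavior.
import sqlite3

def fetch_in_chunks(conn, query, fetch_size):
    """Yield lists of at most fetch_size rows from the query's result set."""
    cur = conn.cursor()
    cur.execute(query)
    while True:
        chunk = cur.fetchmany(fetch_size)  # pulls at most fetch_size rows
        if not chunk:
            break
        yield chunk

# Demo: 10 rows read in chunks of 3 produce batches of sizes 3, 3, 3, 1.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])
chunks = list(fetch_in_chunks(conn, "SELECT id FROM t ORDER BY id", 3))
print([len(c) for c in chunks])  # [3, 3, 3, 1]
```

Downstream processing can start on the first batch while later batches are still 
being fetched, which is the latency benefit the issue describes.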



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
