[ https://issues.apache.org/jira/browse/NIFI-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16252781#comment-16252781 ]

ASF GitHub Bot commented on NIFI-4428:
--------------------------------------

Github user vakshorton commented on the issue:

    https://github.com/apache/nifi/pull/2181
  
    @mattyb149 Since Druid is optimized for OLAP, it does not work as an OLTP
    datastore. Druid's REST API is for querying and obtaining metadata (such as
    segment schemas and time intervals); it does not support writes. That is
    mainly because writes in Druid can only happen after a configured quantity
    of data (the Segment Granularity Spec) has been indexed and organized into
    a storage segment (schema, bitmap indexes, aggregate metrics, compression).
    Only an indexing job can create a segment, either batch or realtime.
    Realtime indexing jobs can be created in one of two ways:
    1.) Pull via a Firehose (Druid-aware pull API) created to read the stream
    source, controlled by a Druid Realtime Node (like the Kafka Indexing
    Service).
    2.) Push via Tranquility (Druid indexing API) from the stream source to the
    Druid Overlord and then MiddleManager nodes, which make the data
    immediately queryable while indexing it for storage as a segment.
    Since the goal is to push data from NiFi into Druid, Tranquility seems like
    the best option.
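
    To make the push path concrete, here is a rough sketch of a Tranquility
    push from plain Java, modeled on the Tranquility core Java example. The
    config file name ("example.json"), the "wikipedia" datasource and the event
    fields are placeholders, and class/method names may differ between
    Tranquility versions:

    import com.google.common.collect.ImmutableMap;
    import com.metamx.tranquility.config.DataSourceConfig;
    import com.metamx.tranquility.config.PropertiesBasedConfig;
    import com.metamx.tranquility.config.TranquilityConfig;
    import com.metamx.tranquility.druid.DruidBeams;
    import com.metamx.tranquility.tranquilizer.MessageDroppedException;
    import com.metamx.tranquility.tranquilizer.Tranquilizer;
    import com.twitter.util.FutureEventListener;
    import org.joda.time.DateTime;
    import scala.runtime.BoxedUnit;

    import java.io.InputStream;
    import java.util.Map;

    public class TranquilityPushSketch {
        public static void main(String[] args) {
            // Placeholder config: overlord location, timestampSpec, dimensions,
            // segmentGranularity and windowPeriod all come from this file.
            InputStream in = TranquilityPushSketch.class.getClassLoader()
                    .getResourceAsStream("example.json");
            TranquilityConfig<PropertiesBasedConfig> config = TranquilityConfig.read(in);
            DataSourceConfig<PropertiesBasedConfig> dsConfig = config.getDataSource("wikipedia");

            // The Tranquilizer batches events and pushes them to the indexing tasks
            // running on the MiddleManagers; data is queryable immediately and is
            // handed off as a segment once the segment granularity interval closes.
            Tranquilizer<Map<String, Object>> sender =
                    DruidBeams.fromConfig(dsConfig).buildTranquilizer(dsConfig.tranquilizerBuilder());
            sender.start();
            try {
                Map<String, Object> event = ImmutableMap.<String, Object>of(
                        "timestamp", new DateTime().toString(),
                        "page", "foo",
                        "added", 1
                );
                sender.send(event).addEventListener(new FutureEventListener<BoxedUnit>() {
                    @Override
                    public void onSuccess(BoxedUnit value) {
                        // Event accepted by the indexing task.
                    }

                    @Override
                    public void onFailure(Throwable e) {
                        if (e instanceof MessageDroppedException) {
                            // Late event: its timestamp is outside segmentGranularity
                            // plus windowPeriod, so Tranquility dropped it.
                        }
                    }
                });
            } finally {
                sender.flush();
                sender.stop();
            }
        }
    }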


> Implement PutDruid Processor and Controller
> -------------------------------------------
>
>                 Key: NIFI-4428
>                 URL: https://issues.apache.org/jira/browse/NIFI-4428
>             Project: Apache NiFi
>          Issue Type: New Feature
>    Affects Versions: 1.3.0
>            Reporter: Vadim Vaks
>
> Implement a PutDruid Processor and Controller using the Tranquility API. This 
> will enable NiFi to index the contents of flow files in Druid. The 
> implementation should also be able to handle late-arriving data (an event 
> timestamp that points to a Druid indexing task that has already closed 
> because the segment granularity and grace window period have expired). 
> Late-arriving data is typically dropped. NiFi should allow late-arriving 
> data to be diverted to a FAILED or DROPPED relationship. That would allow 
> late-arriving data to be stored on HDFS or S3 until a re-indexing task can 
> merge it into the correct segment in deep storage.
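
Given the description above, a rough sketch of how the FAILED/DROPPED routing
could look in a PutDruid processor's onTrigger() follows. The relationship
names, the sendToDruid() helper, and the assumption that a late event surfaces
synchronously as Tranquility's MessageDroppedException are illustrative only,
not the actual implementation:

    import com.metamx.tranquility.tranquilizer.MessageDroppedException;
    import org.apache.nifi.flowfile.FlowFile;
    import org.apache.nifi.processor.AbstractProcessor;
    import org.apache.nifi.processor.ProcessContext;
    import org.apache.nifi.processor.ProcessSession;
    import org.apache.nifi.processor.Relationship;
    import org.apache.nifi.processor.exception.ProcessException;

    import java.util.HashSet;
    import java.util.Set;

    public class PutDruidSketch extends AbstractProcessor {

        static final Relationship REL_SUCCESS = new Relationship.Builder()
                .name("success").description("Events indexed by Druid").build();
        static final Relationship REL_DROPPED = new Relationship.Builder()
                .name("dropped").description("Late events outside segment granularity + window period").build();
        static final Relationship REL_FAILURE = new Relationship.Builder()
                .name("failure").description("Events Druid could not index for any other reason").build();

        @Override
        public Set<Relationship> getRelationships() {
            Set<Relationship> rels = new HashSet<>();
            rels.add(REL_SUCCESS);
            rels.add(REL_DROPPED);
            rels.add(REL_FAILURE);
            return rels;
        }

        @Override
        public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
            FlowFile flowFile = session.get();
            if (flowFile == null) {
                return;
            }
            try {
                // Hypothetical helper: read the flow file content, send it through a
                // Tranquility sender, and wait on the returned future so failures
                // surface here as exceptions.
                sendToDruid(session, flowFile);
                session.transfer(flowFile, REL_SUCCESS);
            } catch (MessageDroppedException e) {
                // Late-arriving event: its timestamp maps to an indexing task that has
                // already closed. Route to DROPPED so it can be parked on HDFS/S3
                // until a re-indexing task merges it into the right segment.
                session.transfer(flowFile, REL_DROPPED);
            } catch (Exception e) {
                session.transfer(flowFile, REL_FAILURE);
            }
        }

        private void sendToDruid(ProcessSession session, FlowFile flowFile) throws Exception {
            // Placeholder for the Tranquility push; see the Tranquilizer sketch above.
        }
    }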



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
