#general


@romualdo.gobbo: @romualdo.gobbo has joined the channel

#random


@romualdo.gobbo: @romualdo.gobbo has joined the channel

#troubleshooting


@gamparohit: Data ingestion stops in a realtime table after the segment flush threshold time. When I check, no new segments are created, and the status of the one and only segment is shown as CONSUMING.
  @gamparohit: this is the table:
    {
      "tableName": "eventhandler_pinot_REALTIME",
      "tableType": "REALTIME",
      "segmentsConfig": {
        "timeColumnName": "createdOn",
        "replicasPerPartition": "1",
        "schemaName": "eventhandler_pinot",
        "replication": "1",
        "segmentPushType": "APPEND",
        "segmentPushFrequency": "HOURLY"
      },
      "tenants": {
        "broker": "DefaultTenant",
        "server": "DefaultTenant",
        "tagOverrideConfig": {
          "realtimeConsuming": "DefaultTenant_REALTIME",
          "realtimeCompleted": "DefaultTenant_OFFLINE"
        }
      },
      "tableIndexConfig": {
        "invertedIndexColumns": [],
        "rangeIndexColumns": [],
        "autoGeneratedInvertedIndex": false,
        "createInvertedIndexDuringSegmentGeneration": false,
        "bloomFilterColumns": [],
        "loadMode": "MMAP",
        "streamConfigs": {
          "streamType": "kafka",
          "stream.kafka.topic.name": "eventhandler_pinot",
          "stream.kafka.broker.list": "**.***.**.***:9092",
          "stream.kafka.consumer.type": "lowlevel",
          "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
          "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
          "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
          "realtime.segment.flush.threshold.rows": "0",
          "realtime.segment.flush.threshold.time": "24h",
          "realtime.segment.flush.segment.size": "100M"
        },
        "noDictionaryColumns": [],
        "onHeapDictionaryColumns": [],
        "varLengthDictionaryColumns": [],
        "enableDefaultStarTree": false,
        "sortedColumn": [],
        "enableDynamicStarTreeCreation": false,
        "aggregateMetrics": false,
        "nullHandlingEnabled": false
      },
      "metadata": {},
      "quota": {},
      "routing": {},
      "query": {},
      "ingestionConfig": {}
    }
  @g.kishore: Check the logs; there might be some exception while flushing the segment.
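
  A minimal diagnostic sketch for anyone chasing the same symptom: poll the controller REST API to see the Helix state of the segment and the live consuming status. Assumptions: the controller is reachable at localhost:9000 with no auth, and the consumingSegmentsInfo endpoint is present (it exists on recent Pinot releases but may be absent on older ones).

    # Minimal diagnostic sketch, not a fix.
    # Assumptions: controller at localhost:9000, no auth, Python 3.
    import json
    import urllib.request

    CONTROLLER = "http://localhost:9000"  # assumption: adjust to your deployment
    TABLE = "eventhandler_pinot"

    def get(path):
        with urllib.request.urlopen(CONTROLLER + path) as resp:
            return json.loads(resp.read())

    # External view: per-segment, per-server Helix state (CONSUMING / ONLINE / ERROR).
    # A segment stuck in CONSUMING long past the flush threshold usually means the
    # commit failed on the server, and the server log carries the exception.
    print(json.dumps(get("/tables/" + TABLE + "/externalview"), indent=2))

    # Live consuming status per partition (current offset, last consumed time).
    # Endpoint assumption: available on recent releases as consumingSegmentsInfo.
    print(json.dumps(get("/tables/" + TABLE + "/consumingSegmentsInfo"), indent=2))

  If the external view shows ERROR, or the server log has an exception around the segment commit, that is the flush failure to chase.
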
@romualdo.gobbo: @romualdo.gobbo has joined the channel

#pinot-s3


@pabraham.usa: Also, is it a good idea to use deep storage for failover purposes? I assume there will be a little bit of delay in pulling segments from, say, S3 and starting the server back up?
@g.kishore: What do you mean by failover purpose?
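
On the restart-delay concern, a rough back-of-envelope sketch; the size and throughput below are assumptions, not measurements, and real startup time also includes untarring segments and loading indexes:

    # Rough estimate of time to re-pull all segments from S3 after losing local data.
    total_segment_bytes = 200 * 1024**3  # assumption: ~200 GiB of segments on the server
    s3_throughput = 100 * 1024**2        # assumption: ~100 MiB/s effective download rate
    seconds = total_segment_bytes / s3_throughput
    print(f"~{seconds / 60:.0f} min just to download segments")  # ~34 min at these numbers
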