#general


@qoega: @qoega has joined the channel
@nicolasdelffon: @nicolasdelffon has joined the channel

#random


@qoega: @qoega has joined the channel
@nicolasdelffon: @nicolasdelffon has joined the channel

#troubleshooting


@nadeemsadim: @mayanks @xiangfu0 @jackie.jxt @ssubrama Time series databases like InfluxDB or Cortex can be connected to Grafana and charts can be built with aggregations. Is there support for connecting Grafana and building visualizations on top of data in Pinot tables, or do we need to go with something like Superset?
  @xiangfu0: It's not supported yet; you can contribute a Grafana connector if you are interested.
  @nadeemsadim: ok @xiangfu0
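For reference, whichever tool ends up in front (a Grafana datasource plugin or Superset), it ultimately issues SQL to the Pinot broker's REST endpoint. A minimal sketch, assuming a broker at localhost:8099 and a hypothetical table `myTable`:
```
import json
import urllib.request

# Hypothetical broker address and table name -- adjust for your cluster.
BROKER_URL = "http://localhost:8099/query/sql"


def query_pinot(sql: str) -> dict:
    """POST a SQL query to the Pinot broker and return the parsed JSON response."""
    payload = json.dumps({"sql": sql}).encode("utf-8")
    request = urllib.request.Request(
        BROKER_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())


if __name__ == "__main__":
    # The kind of aggregation a dashboard panel would run.
    print(query_pinot("SELECT COUNT(*) FROM myTable").get("resultTable"))
```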
@msoni6226: Hi Team, we are running a hybrid table setup, with data present in the REALTIME table and no data in the OFFLINE table. We have configured a task on the REALTIME table so that data can be moved from REALTIME to OFFLINE. However, I am seeing the warning below continuously in the broker logs. Is this because there is no data in the OFFLINE table, and will it go away once the OFFLINE table has data? `2021-10-07 06:41:20.000 WARN [BaseBrokerRequestHandler] [jersey-server-managed-async-executor-15] Failed to find time boundary info for hybrid table:`
  @xiangfu0: if your offline table is empty, then you can ignore this
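For context, the REALTIME-to-OFFLINE movement described above is typically driven by the `RealtimeToOfflineSegmentsTask` configured on the REALTIME table. A minimal sketch of that config fragment, rendered here as a Python dict mirroring the JSON; the period values are placeholders, not recommendations:
```
import json

# Sketch of the relevant fragment of a REALTIME table config (values are
# placeholders, not recommendations).
realtime_table_task_config = {
    "task": {
        "taskTypeConfigsMap": {
            "RealtimeToOfflineSegmentsTask": {
                # Time window covered by each OFFLINE segment the minion builds.
                "bucketTimePeriod": "1d",
                # How far behind "now" the task stays so late data can still arrive.
                "bufferTimePeriod": "1d",
            }
        }
    }
}

print(json.dumps(realtime_table_task_config, indent=2))
```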
@anu110195: Somehow ingestion from Kafka stopped in a REALTIME table...
@anu110195: getting this error in logs
@anu110195: Fetch offset 8220025 is out of range for partition pinot_request_table-0, resetting offset
  @adireddijagadesh: @anu110195, the reason could be that partition `pinot_request_table-0` does not have enough messages, or that offset `8220025` was deleted due to the retention period. The Kafka broker logs should have the exact reason. When an offset is out of range, the consumer (Pinot stream ingestion) resets its position according to `"stream.kafka.consumer.prop.auto.offset.reset"`, which can be smallest, largest, or a timestamp in ms.
  @mayanks: Thanks @adireddijagadesh
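For reference, the reset behavior mentioned above lives in the table's `streamConfigs`. A hedged sketch of that fragment (topic and broker values are illustrative), again as a Python dict mirroring the JSON:
```
import json

# Illustrative fragment of a REALTIME table's streamConfigs; only the
# offset-reset property is the point here.
stream_configs = {
    "streamType": "kafka",
    "stream.kafka.topic.name": "pinot_request_table",
    "stream.kafka.broker.list": "kafka:9092",
    # Where to resume when the recorded offset is out of range:
    # "smallest" (earliest retained), "largest" (newest), or a timestamp in ms.
    "stream.kafka.consumer.prop.auto.offset.reset": "smallest",
}

print(json.dumps(stream_configs, indent=2))
```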
@bajpai.arpita746462: Hi Team, we have hybrid tables in our Pinot 0.8.0 and we have deleted some of them, but we are still able to see minion metadata for the deleted tables in the Pinot Incubator UI. Will it be deleted automatically, or is this a bug? Can we safely delete the existing metadata for the deleted tables from the Pinot explorer?
@qoega: @qoega has joined the channel
@qoega: Hi! I'm trying to ingest data into Pinot 0.8.0 and get an unclear result. I can't see any errors in this output and did not find other logs with related output, but the output looks different from the sample output in the docs and no segments are pushed to Pinot. Can you please help me find where to look for the error so I can fix it? ```Trying to create instance for class org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner Initializing PinotFS for scheme file, classname org.apache.pinot.spi.filesystem.LocalPinotFS Creating an executor service with 1 threads(Job parallelism: 1, available cores: 80.) Trying to create instance for class org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner Initializing PinotFS for scheme file, classname org.apache.pinot.spi.filesystem.LocalPinotFS Start pushing segments: []... to locations: [org.apache.pinot.spi.ingestion.batch.spec.PinotClusterSpec@78de58ea] for table trips_OFFLINE```
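One common cause of the empty `Start pushing segments: []` line is that the segment-generation step matched no input files, e.g. the job spec's `inputDirURI` or `includeFileNamePattern` points at the wrong place. A quick sanity check along those lines (paths and pattern are hypothetical):
```
import glob
import os

# Hypothetical values -- substitute the inputDirURI and includeFileNamePattern
# from your ingestion job spec.
input_dir = "/tmp/pinot/rawdata"
include_pattern = "*.csv"

matches = glob.glob(os.path.join(input_dir, "**", include_pattern), recursive=True)
if not matches:
    print(f"No files match {include_pattern!r} under {input_dir!r}; "
          "the job will generate and push zero segments.")
else:
    print(f"{len(matches)} input file(s) found:")
    for path in matches:
        print("  " + path)
```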
@nicolasdelffon: @nicolasdelffon has joined the channel

#getting-started


@qoega: @qoega has joined the channel
@dadelcas: Hey, I've seen somewhere that Pinot has some special columns with metadata about the row, such as the segment it came from and other stuff. I can't seem to find them anywhere and I wonder if someone could kindly point me to where they are documented.
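The columns being asked about are Pinot's built-in virtual columns (`$docId`, `$hostName`, `$segmentName`), which can be selected like regular columns. A small sketch querying them through the broker's SQL endpoint (broker address and table name are hypothetical):
```
import json
import urllib.request

# Hypothetical broker and table; $docId, $hostName and $segmentName are
# Pinot's built-in virtual columns.
BROKER_URL = "http://localhost:8099/query/sql"
sql = "SELECT $segmentName, $hostName, $docId FROM myTable LIMIT 5"

payload = json.dumps({"sql": sql}).encode("utf-8")
request = urllib.request.Request(
    BROKER_URL, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()).get("resultTable"))
```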

#kinesis_help


@npawar: @npawar has joined the channel
@abhijeet.kushe: @abhijeet.kushe has joined the channel
@kharekartik: @kharekartik has joined the channel
@npawar: Hey Kartik, please meet Abhijeet
@npawar: he's facing some issues when using Kinesis
@npawar: I wanted to verify one behavior with you
@npawar: he's set shardIteratorType `LATEST` and `realtime.segment.flush.threshold.time: 6h`
@npawar: now say a segment was CONSUMING, and then the server got restarted. When the server comes back up, will the KinesisConsumer consume from the startOffset recorded in the segment metadata, or will it consume from the latest sequence number?
@npawar: I have a hunch the latter is happening. If so, we have a bug: after a restart, we should always consume from the startOffset of the segment.
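To make the expected behavior concrete, here is a sketch of the resume decision being described; the names are illustrative, not Pinot's actual classes. On restart, a recorded startOffset in the segment metadata should win over the table-level shardIteratorType, which should apply only when no checkpoint exists yet:
```
from dataclasses import dataclass
from typing import Optional

# Illustrative types only -- not Pinot's actual classes.
@dataclass
class SegmentMetadata:
    start_sequence_number: Optional[str]  # checkpoint recorded for the CONSUMING segment

def choose_start_position(metadata: SegmentMetadata, shard_iterator_type: str) -> str:
    """Where consumption should resume after a server restart.

    A recorded startOffset must win over the table-level shardIteratorType
    (LATEST); otherwise records published while the server was down are skipped.
    """
    if metadata.start_sequence_number is not None:
        # Resume exactly where the CONSUMING segment left off.
        return "AT_SEQUENCE_NUMBER:" + metadata.start_sequence_number
    # No checkpoint yet (brand-new segment): honor the configured iterator type.
    return shard_iterator_type  # e.g. "LATEST"

# After a restart with a checkpoint present, LATEST should not be used.
print(choose_start_position(SegmentMetadata("1234567890"), "LATEST"))
```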