#general


@subin.tp: Hello :raised_hands: I would like to track the average waiting period of messages in the Pinot input topic. Do we have any metric for that?
  @ssubrama: You will need Kafka support for this. You need to get the ingestion time (into Kafka) of each message, and then use that to compute the elapsed time from that point until ingestion into Pinot.
  @ssubrama: Pinot provides the `RowMetadata` interface, which the underlying plugin can populate with this information.
  @ssubrama: For each query, Pinot reports the largest such time difference seen across all segments, in the metadata sent out by the broker.
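  For reference, a minimal sketch of where this appears in a broker query response. The field name `minConsumingFreshnessTimeMs` is what recent Pinot releases expose for this (an assumption here, not stated in the thread), and all values below are made up:
  ```
  {
    "resultTable": {
      "dataSchema": {"columnNames": ["count(*)"], "columnDataTypes": ["LONG"]},
      "rows": [[1234]]
    },
    "numSegmentsQueried": 8,
    "minConsumingFreshnessTimeMs": 1620000000000
  }
  ```
  A rough estimate of the waiting period at query time is then `now - minConsumingFreshnessTimeMs`.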
@mbshrikanth: @mbshrikanth has joined the channel
@jsegall: @jsegall has joined the channel
@dongxiaoman: @dongxiaoman has joined the channel

#random


@mbshrikanth: @mbshrikanth has joined the channel
@jsegall: @jsegall has joined the channel
@dongxiaoman: @dongxiaoman has joined the channel

#troubleshooting


@cechovsky.jozef: Hi there, any help please? I'm struggling to connect an external Kafka to Pinot. I have Kafka deployed in one Kubernetes cluster and Pinot in a separate one. I'm 100% sure that communication between the two clusters works. Pinot was deployed following this tutorial, and I created the Kafka topics, Pinot table, and schema according to it, only changing the config to point to our Kafka brokers:
```
"tableIndexConfig": {
  "loadMode": "MMAP",
  "streamConfigs": {
    "streamType": "kafka",
    "stream.kafka.consumer.type": "lowlevel",
    "stream.kafka.topic.name": "transcript-topic",
    "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
    "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
    "stream.kafka.broker.list": "our_kafka_url:9092",
    "realtime.segment.flush.threshold.size": "0",
    "realtime.segment.flush.threshold.time": "24h",
    "realtime.segment.flush.desired.size": "50M",
    "stream.kafka.consumer.prop.auto.offset.reset": "smallest"
  }
}
```
But I still do not see any data in my tables. I'm using Kafka version 2.6.2. Or does Pinot only work with a Kafka deployed together with it, using the same Zookeeper? I tried setting stream.kafka.zk.broker.url to our Zookeeper, but still without success. Thanks a lot
  @mayanks: No, Pinot does not need Kafka to be on the same ZK. What do you see in the server logs?
@tanmay.movva: Hello, we are trying to connect Pinot with Trino and we are getting this error: ```No valid brokers found for backendentityview'``` We found out it is because the trino-pinot connector doesn't support mixed-case table names. Is anything planned to support mixed-case table names in the connector?
  @tanmay.movva: It is failing at this point. While trying to fetch the routing table from the broker, it uses the tableName from the tableHandle. In the tableHandle, the table name is lowercased (`backendentityview`), because of which Trino is not able to find the routing table in Pinot.
  @g.kishore: @elon.azoulay ^^
  @elon.azoulay: Which version of trino and pinot are you using?
  @elon.azoulay: Ah, we have a PR to address this:
  @elon.azoulay:
@mbshrikanth: @mbshrikanth has joined the channel
@jsegall: @jsegall has joined the channel
@kchavda: Hi all, I'm pushing a table from Postgres to Kafka (using Debezium) to Pinot. The table has a few geography columns. When creating the realtime table, however, I am getting an error on Pinot (below).
```
java.lang.IllegalStateException: Cannot read single-value from Collection: [AQEAACDmEAAA5no2BviTXcB1T2ijhAxBQA==, 4326] for column: point
    at shaded.com.google.common.base.Preconditions.checkState(Preconditions.java:721) ~[pinot-all-0.7.1-jar-with-dependencies.jar:0.7.1-afa4b252ab1c424ddd6c859bb305b2aa342b66ed]
    at org.apache.pinot.core.data.recordtransformer.DataTypeTransformer.standardizeCollection(DataTypeTransformer.java:193) ~[pinot-all-0.7.1-jar-with-dependencies.jar:0.7.1-afa4b252ab1c424ddd6c859bb305b2aa342b66ed]
    at org.apache.pinot.core.data.recordtransformer.DataTypeTransformer.standardize(DataTypeTransformer.java:138) ~[pinot-all-0.7.1-jar-with-dependencies.jar:0.7.1-afa4b252ab1c424ddd6c859bb305b2aa342b66ed]
    at org.apache.pinot.core.data.recordtransformer.DataTypeTransformer.transform(DataTypeTransformer.java:88) ~[pinot-all-0.7.1-jar-with-dependencies.jar:0.7.1-afa4b252ab1c424ddd6c859bb305b2aa342b66ed]
    at org.apache.pinot.core.data.recordtransformer.CompositeTransformer.transform(CompositeTransformer.java:82) ~[pinot-all-0.7.1-jar-with-dependencies.jar:0.7.1-afa4b252ab1c424ddd6c859bb305b2aa342b66ed]
    at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.processStreamEvents(LLRealtimeSegmentDataManager.java:491) [pinot-all-0.7.1-jar-with-dependencies.jar:0.7.1-afa4b252ab1c424ddd6c859bb305b2aa342b66ed]
    at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.consumeLoop(LLRealtimeSegmentDataManager.java:402) [pinot-all-0.7.1-jar-with-dependencies.jar:0.7.1-afa4b252ab1c424ddd6c859bb305b2aa342b66ed]
    at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager$PartitionConsumer.run(LLRealtimeSegmentDataManager.java:538) [pinot-all-0.7.1-jar-with-dependencies.jar:0.7.1-afa4b252ab1c424ddd6c859bb305b2aa342b66ed]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_282]
```
The point column has this as its value:
```
"point" : {
  "wkb" : "AQEAACDmEAAA5no2BviTXcB1T2ijhAxBQA==",
  "srid" : 4326
},
```
Any suggestions on how to resolve this? I have the column as a string in the Pinot table schema.
  @jackie.jxt: Can you please share the schema? Are you planning to add a geo index to this column?
  @jackie.jxt: Adding @yupeng to the discussion
  @yupeng: try `ST_GeogFromWKB` from
  @jackie.jxt: You should be able to decode the input string using the built-in `base64Decode()` function. See the docs on how to use ingestion transforms.
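  Putting the two suggestions together, a minimal sketch of the ingestion-transform snippet in the table config. This assumes the Debezium payload lands in a JSON field named `point`; the destination column `point_geog` and the `jsonPathString` call used to reach the nested `wkb` field are assumptions, and exact function availability depends on the Pinot version:
  ```
  "ingestionConfig": {
    "transformConfigs": [
      {
        "columnName": "point_geog",
        "transformFunction": "ST_GeogFromWKB(base64Decode(jsonPathString(point, '$.wkb')))"
      }
    ]
  }
  ```
  The schema would then presumably declare `point_geog` as a BYTES column (with a geo index if desired) instead of ingesting the raw JSON map into a string column.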
@dongxiaoman: @dongxiaoman has joined the channel

#pinot-dev


@atri.sharma: Please review:

#getting-started


@cechovsky.jozef: @cechovsky.jozef has joined the channel