#general


@kizilirmak.mustafamer: @kizilirmak.mustafamer has joined the channel
@chethanu.tech: Thread: Please drop the logo of your company if you are using Pinot. I will update this section in the docs.
  @ken: Hi @chethanu.tech, if we’re a consulting company, do you want our logo, or (if we get the OK) the logo of the client?
@zjinwei: Hi team, I'm new to Pinot and trying to get logs for troubleshooting using logz. I deployed Pinot using k8s and want to confirm: do the logs of the different components exist in the corresponding pods, like `gc-pinot-broker.log` and `pinotBroker.log`? What's the difference between them? How do I change the log levels? Is the log seen in `kubectl logs` the same as the log file inside the pod?
  @dlavoie: Hi Jinwei, there are indeed three different levels of logs:
  * `gc-pinot-broker.log` contains garbage collection statistics. Pinot being memory intensive, it is useful to have insight into the history of garbage collection.
  * `pinotBroker.log` is configured with an `info` level on the root logger.
  * `kubectl logs` outputs the console appender, which is configured with a `warn` level on the root logger.
  @dlavoie: As for overriding this configuration, it would involve mounting a configmap volume to the container and defining the `<component>.log4j2ConfFile` helm value.
  @dlavoie: If you are not using the helm chart but creating the pod yourself, you can use `JAVA_OPTS` to define the path to your customized log4j2.xml file with the `-Dlog4j2.configurationFile` JVM option.
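  A minimal sketch of that second approach (the config path here is a placeholder, not an actual deployment path):
  ```
  JAVA_OPTS="-Dlog4j2.configurationFile=/opt/pinot/conf/log4j2-custom.xml"
  ```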
  @zjinwei: Hi Daniel, thanks for your reply! Yes, I'm using the helm chart and trying to ship the logs to logz. So does it mean I need to mount a configmap volume to the container and define the `<component>.log4j2ConfFile` helm value? Do we have some examples of this? Thanks!
  @dlavoie: What are you using to ship logs to logz?
@apandhi: @apandhi has joined the channel
@wrbriggs: For realtime ingestion, does `tableIndexConfig.sortedColumn` actually sort the segments, or is it intended to tell Pinot what column the data is already sorted by when ingesting from Kafka?
@g.kishore: It automatically sorts it while ingesting from Kafka
  @g.kishore: Whereas in batch mode, we require it to be pre-sorted. This is something we should fix; can you please file an issue for this?
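  For reference, the relevant fragment of the table config looks something like this (the column name is hypothetical; `sortedColumn` accepts a list, though only a single sorted column is supported):
  ```
  "tableIndexConfig": {
    "sortedColumn": ["memberId"],
    ...
  }
  ```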
  @mayanks: You mean auto sort for offline as well?
  @g.kishore: yes
@wrbriggs: Thanks, Kishore. I have another question - is the murmur hash used for broker-side partition pruning the same as the default murmur3 hash Kafka uses for partition assignment, or will I need to do anything special with my keys when producing to Kafka for partitioned real-time ingestion in Pinot?
@mayanks: I believe we use murmur2 hash
@mayanks: The expectation is to have the partition function used by the producer match the one defined in Pinot.
  @wrbriggs: Right - the default producer partitioner in Kafka is murmur3, I think, which is why I asked
  @mayanks: You can set the Kafka producer partitioner to a custom function that matches ^^. Alternatively, you can also file an issue to have Pinot support the murmur3 implementation as used by Kafka.
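  To illustrate the hashing being discussed, here is a rough Python port of Kafka's `Utils.murmur2` together with Kafka's positive-mask-then-modulo partition formula. Treat it as a sketch for comparing partition assignments between the producer side and Pinot, not as an authoritative implementation:

```python
def murmur2(data: bytes) -> int:
    """Rough Python port of Kafka's Utils.murmur2 (32-bit, seed 0x9747b28c)."""
    length = len(data)
    seed = 0x9747B28C
    m = 0x5BD1E995
    r = 24
    mask = 0xFFFFFFFF

    h = (seed ^ length) & mask
    # Mix the body in 4-byte little-endian chunks.
    for i in range(0, length - length % 4, 4):
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * m) & mask
        k ^= k >> r
        k = (k * m) & mask
        h = (h * m) & mask
        h ^= k

    # Handle the remaining 1-3 tail bytes (mirrors Java's switch fallthrough).
    tail = length & ~3
    rem = length % 4
    if rem == 3:
        h ^= (data[tail + 2] & 0xFF) << 16
    if rem >= 2:
        h ^= (data[tail + 1] & 0xFF) << 8
    if rem >= 1:
        h ^= data[tail] & 0xFF
        h = (h * m) & mask

    # Final avalanche.
    h ^= h >> 13
    h = (h * m) & mask
    h ^= h >> 15
    return h  # unsigned 32-bit value


def partition_for(key: bytes, num_partitions: int) -> int:
    # Kafka's formula: drop the sign bit, then modulo the partition count.
    return (murmur2(key) & 0x7FFFFFFF) % num_partitions
```

  If the partition computed this way for a key matches what Pinot's `Murmur` partition function computes for the same key, broker-side pruning should line up; if not, a custom producer partitioner is the workaround described above.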

#random


@kizilirmak.mustafamer: @kizilirmak.mustafamer has joined the channel
@apandhi: @apandhi has joined the channel

#troubleshooting


@kizilirmak.mustafamer: @kizilirmak.mustafamer has joined the channel
@yash.agarwal: We get the following exception when starting our Pinot nodes with the config `-Dplugins.dir=/opt/pinot/plugins`:
```
Failed to load plugin [pinot-hdfs] from dir [/opt/pinot/plugins/pinot-file-system/pinot-hdfs]
java.lang.IllegalArgumentException: object is not an instance of declaring class
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at org.apache.pinot.spi.plugin.PluginClassLoader.<init>(PluginClassLoader.java:50) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-85cf696be3decb75fd8e0d9e5ec5ef0a32d8dd9b]
    at org.apache.pinot.spi.plugin.PluginManager.createClassLoader(PluginManager.java:171) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-85cf696be3decb75fd8e0d9e5ec5ef0a32d8dd9b]
    at org.apache.pinot.spi.plugin.PluginManager.load(PluginManager.java:162) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-85cf696be3decb75fd8e0d9e5ec5ef0a32d8dd9b]
    at org.apache.pinot.spi.plugin.PluginManager.init(PluginManager.java:137) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-85cf696be3decb75fd8e0d9e5ec5ef0a32d8dd9b]
    at org.apache.pinot.spi.plugin.PluginManager.init(PluginManager.java:103) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-85cf696be3decb75fd8e0d9e5ec5ef0a32d8dd9b]
    at org.apache.pinot.spi.plugin.PluginManager.<init>(PluginManager.java:84) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-85cf696be3decb75fd8e0d9e5ec5ef0a32d8dd9b]
    at org.apache.pinot.spi.plugin.PluginManager.<clinit>(PluginManager.java:46) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-85cf696be3decb75fd8e0d9e5ec5ef0a32d8dd9b]
    at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:166) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-85cf696be3decb75fd8e0d9e5ec5ef0a32d8dd9b]
```
@apandhi: @apandhi has joined the channel
@elon.azoulay: Happy new year everyone! We are experiencing a server that seems to be "stuck": it can process raw server queries, but in `QueryScheduler` it appears unable to acquire a permit. At a rate of 10000 queries/second, it never enters this block:
```
if (queryLogRateLimiter.tryAcquire() || forceLog(schedulerWaitMs, numDocsScanned)) {
  LOGGER.info("Processed requestId={},table={},segments(queried/processed/matched/consuming)={}/{}/{}/{},"
      + "schedulerWaitMs={},reqDeserMs={},totalExecMs={},resSerMs={},totalTimeMs={},minConsumingFreshnessMs={},broker={},"
      + "numDocsScanned={},scanInFilter={},scanPostFilter={},sched={}",
      requestId, tableNameWithType,
```
  @elon.azoulay: Anyone else experience this? I am adding some more debug logs to see if we can reproduce. It only seems to happen on 1 server, after ~1 week of being up.
  @elon.azoulay: i.e. only n-1 out of n servers respond and the stuck server is what is holding up the query