#general


@mourad.dlia: @mourad.dlia has joined the channel
@ilirsuloti: @ilirsuloti has joined the channel
@g.kishore: Happening now - Pinot Year In Review
@xiangfu0: ```Hello Community,

We are pleased to announce that Apache Pinot 0.9.1 is released!

Apache Pinot is a realtime distributed OLAP datastore, designed to answer OLAP queries with low latency. This is a bug fix release that includes the upgrade to the latest log4j library, v2.15.0, in response to CVE-2021-44228.

The release can be downloaded at
The release note is available at

Additional resources
- Project website:
- Getting started:
- Pinot developer blogs:
- Intro to Pinot Video:

Join Pinot Community
- Twitter:
- Meetup:
- Slack channel:

Best Regards,
Apache Pinot Team```

#random


@mourad.dlia: @mourad.dlia has joined the channel
@ilirsuloti: @ilirsuloti has joined the channel

#feat-presto-connector


@prashant.korade: @prashant.korade has joined the channel

#feat-better-schema-evolution


@prashant.korade: @prashant.korade has joined the channel

#troubleshooting


@bsa0393: Hello, I got an error `zookeeper.request.timeout value is 0. feature enabled=` `Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)` `Socket error occurred: localhost/127.0.0.1:2181: Connection refused` while running the command `./pinot-admin.sh StartController` Any idea what could go wrong?
  @jackie.jxt: You need to first start a ZK instance before starting the controller
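  A minimal sketch of that startup order, assuming a local quickstart-style setup with ZooKeeper on port 2181 and the default controller port (adjust addresses and ports for your environment):
  ```
  # Start a local ZooKeeper instance first; the controller needs it to register itself
  ./pinot-admin.sh StartZookeeper -zkPort 2181

  # Then start the controller, pointing it at that ZooKeeper address
  ./pinot-admin.sh StartController -zkAddress localhost:2181 -controllerPort 9000
  ```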
@luisfernandez: has anyone run into this issue before? I’m getting the following exception in one of the brokers: ```2021-12-13 11:31:40 java.lang.OutOfMemoryError: Direct buffer memory 2021-12-13 11:31:40 Caught exception while handling response from server: pinot-server-1_R``` we currently have 2 brokers, and one of them is doing a lot of garbage collection, though I’m not sure why. latency from broker to server has gotten a lot worse, but I’m not sure what happened since we haven’t been touching the pinot cluster lately. we did stop one of our apps from streaming, but that doesn’t line up with the spikes in response times.
  @richard892: broker OOM?
  @luisfernandez: time spent on GC on one of the brokers, the other broker seems to not have the issue but I’m unsure why
  @luisfernandez: that’s for G1 Old Generation
  @richard892: this could relate to something I've seen before, a netty contributor warned me about OOM
  @richard892: this isn't related to heap memory, so the GC metrics are a red herring
  @richard892: [attachment]
  @richard892: how much direct memory have you given your brokers? (`-XX:MaxDirectMemorySize`)
  @luisfernandez: ```value: "-Xms2G -Xmx2G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xlog:gc*:file=/opt/pinot/gc-pinot-broker.log -Dlog4j2.configurationFile=/opt/pinot/conf/log4j2.xml -Dplugins.dir=/opt/pinot/plugins -javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent-0.12.0.jar=8008:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml" ```
  @luisfernandez: [attached screenshot]
  @luisfernandez: seems like i haven’t set that up
  @luisfernandez: but it’s not using all the heap i’m confused
  @luisfernandez: that’s jvm heap ^
  @richard892: ok, so unless you set it, it should default to 2G because that's how you set Xmx
  @richard892: Do you have off heap memory metrics?
  @luisfernandez: [attached screenshot]
  @luisfernandez: so that’s the pod memory
  @luisfernandez: def more than 2g :smile:
  @luisfernandez: but the jvm metrics show something else, why is it off heap?
  @luisfernandez: so not the best but i restarted the pod and then we are good
  @luisfernandez: but i’m concerned that i don’t know how or why it happened :smile:
  @richard892: no it's not the best, let me figure out if there is a broker metric that sheds more light on what happened
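  As a follow-up sketch: when `-XX:MaxDirectMemorySize` is not set, the JVM caps direct buffers at roughly the `-Xmx` value, which matches the 2G default mentioned above. One option is to make that cap explicit in the broker JVM opts shown earlier in the thread; the 2G value below is only an illustrative assumption, size it for your workload:
  ```
  # Same broker JAVA_OPTS as above, plus an explicit cap on direct (off-heap) buffer memory
  value: "-Xms2G -Xmx2G -XX:MaxDirectMemorySize=2G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xlog:gc*:file=/opt/pinot/gc-pinot-broker.log -Dlog4j2.configurationFile=/opt/pinot/conf/log4j2.xml -Dplugins.dir=/opt/pinot/plugins -javaagent:/opt/pinot/etc/jmx_prometheus_javaagent/jmx_prometheus_javaagent-0.12.0.jar=8008:/opt/pinot/etc/jmx_prometheus_javaagent/configs/pinot.yml"
  ```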
@mourad.dlia: @mourad.dlia has joined the channel
@ilirsuloti: @ilirsuloti has joined the channel

#pinot-dev


@saulo.sobreiro: @saulo.sobreiro has joined the channel

#announcements


@saulo.sobreiro: @saulo.sobreiro has joined the channel

#presto-pinot-connector


@lrhadoop143: @lrhadoop143 has joined the channel

#getting-started


@luisfernandez: can we somehow stop ingestion of records to tables in pinot? other than shutting down the kafka streaming app?
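A hedged sketch of one way to do this without touching the Kafka app: disable the realtime table through the controller REST API so consumption for it is turned off. The endpoint below is an assumption based on newer Pinot versions; verify the exact path and parameters in your controller's Swagger UI (usually on port 9000), and re-enable with `state=enable` when you want ingestion back.
```
# Hypothetical example: disable a realtime table named myTable via the controller API
# (check your controller's Swagger UI for the exact endpoint in your Pinot version)
curl -X PUT "http://localhost:9000/tables/myTable/state?state=disable&type=realtime"
```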