#general
@djwang: @djwang has joined the channel
@djwang: Hi Pinot community members, I’m from StreamNative, now working at organizing the
@djwang: If you have any concerns, feel free to ask me. :laughing:
@mayanks: Thanks @djwang , I’ll ping you shortly
@djwang: Hi @mayanks Thanks for your help.
@ravikumar.m: @ravikumar.m has joined the channel
@rajasekhar.m: @rajasekhar.m has joined the channel
@dixit: @dixit has joined the channel
@sathi.tadi: @sathi.tadi has joined the channel
@alexandre: @alexandre has joined the channel
@avinashup45: @avinashup45 has joined the channel
@srini: hello from the Apache Superset community! :wave: We’re hosting a fun event with @brianolsen87 on April 13th on using Trino <> Superset to join data from Pinot and Mongo :pinot: Would love to see y’all there!
@g.kishore: We need to get an emoticon for superset. We have the bunny :rabbit2::rabbit: for trino and :wine_glass: for pinot
@srini: that would be amazing Kishore :smile:
@srini: I have one lying around if someone wants to upload it to this Slack
@harsur_12: @harsur_12 has joined the channel
@rams357: @rams357 has joined the channel
@tingchen: @npawar for column transformation,
@npawar: yes ingestion config is required in both tables
@npawar: yes for reload. Jackie recently extended the transform configs to support derived columns
@npawar: this is assuming that the arguments to the transform function are already part of the segment
@tingchen: ```
{
  "tableName": "myTable",
  ...
  "ingestionConfig": {
    "transformConfigs": [
      {
        "columnName": "hoursSinceEpoch",
        "transformFunction": "toEpochHours(timestamp)"  // inbuilt function
      }
    ]
  }
}
```
@tingchen: we have a table and I planned to apply a similar transformation function like above to it.
@tingchen: so what I need to do is to (1) add a new column (hoursSinceEpoch) to the table schema (2)add ingestionConfig to the table config and (3) reload the table?
@npawar: yes
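To make the three steps concrete: assuming `hoursSinceEpoch` is added as a date-time column, step (1) might look like the schema fragment below (table and column names taken from the snippet above; the exact field spec depends on your schema):

```
{
  "schemaName": "myTable",
  "dateTimeFieldSpecs": [
    {
      "name": "hoursSinceEpoch",
      "dataType": "LONG",
      "format": "1:HOURS:EPOCH",
      "granularity": "1:HOURS"
    }
  ]
}
```

Steps (2) and (3) would then be updating the table config with the `transformConfigs` shown earlier and triggering a segment reload through the controller REST API (`POST /segments/{tableName}/reload`).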
#random
@djwang: @djwang has joined the channel
@ravikumar.m: @ravikumar.m has joined the channel
@rajasekhar.m: @rajasekhar.m has joined the channel
@dixit: @dixit has joined the channel
@sathi.tadi: @sathi.tadi has joined the channel
@alexandre: @alexandre has joined the channel
@avinashup45: @avinashup45 has joined the channel
@harsur_12: @harsur_12 has joined the channel
@rams357: @rams357 has joined the channel
#troubleshooting
@djwang: @djwang has joined the channel
@ravikumar.m: @ravikumar.m has joined the channel
@rajasekhar.m: @rajasekhar.m has joined the channel
@dixit: @dixit has joined the channel
@elon.azoulay: Does pinot have an issue parsing floating point literals with an exponent? i.e. ```select count(*) from mytable where (( DATETRUNC( 'hour', created_at_seconds, 'seconds')) - ( DATETRUNC( 'hour', CAST( 1.610354466173E9 as long), 'seconds'))) >= 0``` does not work, but if you take the `E9` away it works. Looks like the grammar only recognizes ```FLOATING_POINT_LITERAL : SIGN? DIGIT+ '.' DIGIT* | SIGN? DIGIT* '.' DIGIT+;``` This is for pinot 0.6.0; did this change in 0.7.0?
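For reference, an exponent-aware version of that lexer rule could look roughly like this (a sketch in ANTLR-style notation, not the actual Pinot grammar):

```
FLOATING_POINT_LITERAL
    : SIGN? DIGIT+ '.' DIGIT* EXPONENT?
    | SIGN? DIGIT* '.' DIGIT+ EXPONENT?
    | SIGN? DIGIT+ EXPONENT
    ;
fragment EXPONENT : ('e' | 'E') SIGN? DIGIT+ ;
```

Until the grammar accepts scientific notation, writing the literal out in full (here `1.610354466173E9` is `1610354466.173`) sidesteps the parse error, as noted above.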
@sathi.tadi: @sathi.tadi has joined the channel
@alexandre: @alexandre has joined the channel
@avinashup45: @avinashup45 has joined the channel
@jmeyer: Hello :wave: Can you confirm Pinot Controller uses `/opt/pinot/conf/pinot-controller-log4j2.xml` for logging configuration? (without override from JAVA_OPTS) Thanks!
@dlavoie: In which context? With the helm chart?
@jmeyer: Yes
@dlavoie: It’s going to use `.Values.controller.log4j2ConfFile`
@dlavoie: Which is defaulted to `/opt/pinot/conf/pinot-controller-log4j2.xml`, so yes
@jmeyer: Great, thanks for the confirmation @dlavoie !
@dlavoie: this will output everything to a `pinotController.log` inside the home of the pod.
@jmeyer: Yep, our logging system is picking them up :slightly_smiling_face: What about the default log level? Seems like it is WARN as I can't see any INFO level logs
@dlavoie: FYI, that’s not ideal for multiple reasons: first, the default flush behavior seems off, and we’ll want everything redirected to stdout by default at some point.
@dlavoie: WARN is redirected to stdout, INFO to the internal file.
@jmeyer: So we need to tail both stdout and the internal file to get all logs ?
@dlavoie: Yeah
@dlavoie: Which isn’t ideal, looking at the chart and there isn’t much room for customizing the log4j configs
@dlavoie: Might be helpful to have the log4j config mounted as editable configmaps
@jmeyer: I see, thanks for all the info and suggestions - I'll see how I can work around that :slightly_smiling_face:
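For anyone wanting a single log stream, a minimal log4j2 configuration that routes everything at INFO and above to stdout might look like this (a sketch only; the shipped `pinot-controller-log4j2.xml` defines its own appenders and levels):

```
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <!-- Single console appender so the pod's log collector sees all levels -->
    <Console name="console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{ISO8601} %p [%t] %c{1} - %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="console"/>
    </Root>
  </Loggers>
</Configuration>
```

Mounting a file like this and pointing `.Values.controller.log4j2ConfFile` at it would avoid tailing both stdout and the internal file.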
@ravikumar.m: Hi All, the documentation says Pinot cannot support joins in queries. Is there any alternative to achieve that? I have to implement derived stats, which will query multiple Pinot tables (schemas) and get the data.
@dlavoie: Presto can help with that. If your lookup data is reasonably small, it can also be achieved by your querying application joining the results of independent queries
@srini: I’ve been thinking about this a bunch recently. Few different options: • Load data into a data sink that supports JOINS. Like Rockset (
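With the Presto/Trino route mentioned above, the join is written in standard SQL against the Pinot connector (catalog, schema, table, and column names below are illustrative, not from the thread):

```
-- Join two Pinot tables through Trino's Pinot connector
SELECT o.user_id,
       SUM(o.amount) AS total_spend,
       MAX(u.tier)   AS tier
FROM pinot.default.orders o
JOIN pinot.default.users  u
  ON o.user_id = u.user_id
GROUP BY o.user_id;
```

The trade-off is an extra hop through the Trino coordinator, but it keeps the join out of the querying application.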
@harsur_12: @harsur_12 has joined the channel
@rams357: @rams357 has joined the channel
#pinot-dev
@npawar: was there some recent version change made to hadoop/parquet dependencies? I’m unable to upload a Parquet format file via this API anymore.
#getting-started
@harsur_12: @harsur_12 has joined the channel
#pinot-flow
@ravi.maddi: @ravi.maddi has renamed the channel from "pinot-startup" to "pinot-flow"
@ravikumar.m: @ravikumar.m has joined the channel
@rajasekhar.m: @rajasekhar.m has joined the channel
@ravi.maddi: @ravi.maddi has left the channel
@dixit: @dixit has joined the channel
@g.kishore: @g.kishore has joined the channel
@sathi.tadi: @sathi.tadi has joined the channel
@vallamsetty: Hey Ravi.. Thanks for creating the channel...
@vallamsetty: Welcome everyone to the Pinot community
#pinot-rack-awareness
@jaydesai.jd: @ssubrama @g.kishore Can u review the changes and sign off today if possible. Thanks :slightly_smiling_face:
@g.kishore: done
@ssubrama: @jaydesai.jd What is pinot env provider supposed to do?
@jaydesai.jd: Replying in the document.
