#general


@djwang: @djwang has joined the channel
@djwang: Hi Pinot community members, I’m from StreamNative, currently helping organize the Pulsar Summit. I’m trying to get in touch with the Pinot community and invite Pinot as a community partner of our Summit. As a community partner, the Pinot logo will be featured on the Pulsar Summit website and in promotional materials for the event, and will also appear in the Opening Keynote at the Summit. I think Pinot would gain valuable mindshare with a targeted audience. You’re also welcome to submit talks. I’m not sure this is the right place to discuss this, so looking forward to your reply. Thanks a lot!
  @djwang: If you have any concerns, feel free to ask me. :laughing:
  @mayanks: Thanks @djwang , I’ll ping you shortly
  @djwang: Hi @mayanks Thanks for your help.
@ravikumar.m: @ravikumar.m has joined the channel
@rajasekhar.m: @rajasekhar.m has joined the channel
@dixit: @dixit has joined the channel
@sathi.tadi: @sathi.tadi has joined the channel
@alexandre: @alexandre has joined the channel
@avinashup45: @avinashup45 has joined the channel
@srini: hello from the Apache Superset community! :wave: We’re hosting a fun event with @brianolsen87 on April 13th on using Trino <> Superset to join data from Pinot and Mongo :pinot: Would love to see y’all there!
  @g.kishore: We need to get an emoticon for Superset. We have the bunnies :rabbit2::rabbit: for Trino and :wine_glass: for Pinot
  @srini: that would be amazing Kishore :smile:
  @srini: I have one lying around if someone wants to upload it to this Slack
@harsur_12: @harsur_12 has joined the channel
@rams357: @rams357 has joined the channel
@tingchen: @npawar for column transformation: (1) for a hybrid table, is the ingestionConfig required in both the realtime and offline table configs? (2) Can the transformation be applied to existing data? Can we reload the table to do it?
  @npawar: yes ingestion config is required in both tables
  @npawar: yes for reload. Jackie recently extended the transform configs to support derived columns
  @npawar: this is assuming that the arguments to the transform function are already part of the segment
  @tingchen: ```{
  "tableName": "myTable",
  ...
  "ingestionConfig": {
    "transformConfigs": [
      {
        "columnName": "hoursSinceEpoch",
        "transformFunction": "toEpochHours(timestamp)" // inbuilt function
      }
    ]
  }
}```
  @tingchen: we have a table and I plan to apply a similar transform function to it, like the one above.
  @tingchen: so what I need to do is: (1) add a new column (hoursSinceEpoch) to the table schema, (2) add the ingestionConfig to the table config, and (3) reload the table?
  @npawar: yes
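The three-step flow above can be sketched against the controller REST API. This is a minimal sketch: the controller address, schema field spec, and column/table names are illustrative, and nothing is sent until you call `put_json` yourself:

```python
import json
import urllib.request

CONTROLLER = "http://localhost:9000"  # illustrative controller address

def put_json(path: str, payload: dict) -> None:
    """PUT a JSON payload to the controller (call at your own discretion)."""
    req = urllib.request.Request(
        f"{CONTROLLER}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)

# (1) add the derived column to the schema (illustrative dateTime field spec)
schema_update = {
    "schemaName": "myTable",
    "dateTimeFieldSpecs": [{
        "name": "hoursSinceEpoch",
        "dataType": "LONG",
        "format": "1:HOURS:EPOCH",
        "granularity": "1:HOURS",
    }],
}

# (2) add the transform to the table config's ingestionConfig
ingestion_config = {
    "transformConfigs": [{
        "columnName": "hoursSinceEpoch",
        "transformFunction": "toEpochHours(timestamp)",
    }],
}

# (3) reload the segments so the derived column gets materialized,
# e.g. POST {CONTROLLER}/segments/myTable/reload
```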

#random


@djwang: @djwang has joined the channel
@ravikumar.m: @ravikumar.m has joined the channel
@rajasekhar.m: @rajasekhar.m has joined the channel
@dixit: @dixit has joined the channel
@sathi.tadi: @sathi.tadi has joined the channel
@alexandre: @alexandre has joined the channel
@avinashup45: @avinashup45 has joined the channel
@harsur_12: @harsur_12 has joined the channel
@rams357: @rams357 has joined the channel

#troubleshooting


@djwang: @djwang has joined the channel
@ravikumar.m: @ravikumar.m has joined the channel
@rajasekhar.m: @rajasekhar.m has joined the channel
@dixit: @dixit has joined the channel
@elon.azoulay: Does Pinot have an issue parsing floating point literals with an exponent (scientific notation)? i.e. ```select count(*) from mytable where (( DATETRUNC( 'hour', created_at_seconds, 'seconds')) - ( DATETRUNC( 'hour', CAST( 1.610354466173E9 as long), 'seconds'))) >= 0``` does not work, but if you take the `E9` away it works. It looks like the grammar only recognizes ```FLOATING_POINT_LITERAL : SIGN? DIGIT+ '.' DIGIT* | SIGN? DIGIT* '.' DIGIT+;``` This is for Pinot 0.6.0; did this change in 0.7.0?
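If the 0.6.0 grammar indeed rejects the exponent form, one client-side workaround (a sketch, not a Pinot fix) is to expand the scientific notation into a plain decimal before building the query string:

```python
from decimal import Decimal

def expand_sci(literal: str) -> str:
    """Rewrite a scientific-notation literal (e.g. '1.610354466173E9')
    as a plain decimal string the older grammar accepts."""
    return format(Decimal(literal), "f")

# Rebuild the failing query with the expanded literal.
query = (
    "select count(*) from mytable where "
    "((DATETRUNC('hour', created_at_seconds, 'seconds')) - "
    f"(DATETRUNC('hour', CAST({expand_sci('1.610354466173E9')} as long), 'seconds'))) >= 0"
)
# expand_sci('1.610354466173E9') -> '1610354466.173'
```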
@sathi.tadi: @sathi.tadi has joined the channel
@alexandre: @alexandre has joined the channel
@avinashup45: @avinashup45 has joined the channel
@jmeyer: Hello :wave: Can you confirm Pinot Controller uses `/opt/pinot/conf/pinot-controller-log4j2.xml` for logging configuration ? (without override from JAVA_OPTS) Thanks !
  @dlavoie: In which context? With the helm chart?
  @jmeyer: Yes
  @dlavoie: It’s going to use `.Values.controller.log4j2ConfFile`
  @dlavoie: Which is defaulted to `/opt/pinot/conf/pinot-controller-log4j2.xml`, so yes
  @jmeyer: Great, thanks for the confirmation @dlavoie !
  @dlavoie: this will output all INFO logs to a `pinotController.log` file inside the home directory of the pod.
  @jmeyer: Yep, our logging system is picking them up :slightly_smiling_face: What about the default log level ? Seems like it is WARN as I can't see any INFO level logs
  @dlavoie: FYI, that’s not ideal for multiple reasons: first, the default flush behavior seems off, and we’ll want everything redirected to stdout by default at some point.
  @dlavoie: WARN is redirected to stdout, INFO to the internal file.
  @jmeyer: So we need to tail both stdout and the internal file to get all logs ?
  @dlavoie: Yeah
  @dlavoie: Which isn’t ideal; looking at the chart, there isn’t much room for customizing the log4j configs
  @dlavoie: Might be helpful to have the log4j config mounted as editable configmaps
  @jmeyer: I see, thanks for all the info and suggestions - I'll see how I can work around that :slightly_smiling_face:
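A minimal log4j2 configuration that routes everything to stdout (the direction dlavoie suggests the chart should move in) could look like the fragment below, assuming you can mount it over `/opt/pinot/conf/pinot-controller-log4j2.xml` (the pattern layout is illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <!-- Single console appender: all levels go to stdout -->
    <Console name="console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{yyyy/MM/dd HH:mm:ss.SSS} %p [%c{1}] [%t] %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="console"/>
    </Root>
  </Loggers>
</Configuration>
```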
@ravikumar.m: Hi all, the documentation says Pinot cannot support joins in queries. Is there an alternative to achieve that? I have to implement derived stats, which will query multiple Pinot tables (schemas) and combine the data.
  @dlavoie: Presto can help with that. If your lookup data is reasonably small, it can also be achieved by your querying application joining the results of independent queries
  @srini: I’ve been thinking about this a bunch recently. A few different options: • Load the data into a data sink that supports JOINs, like Rockset • Use a query engine to do JOINs, like Trino, PrestoDB, or Apache Drill. No opinions here, I’ll side-step the politics :pray:
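The "join in the application" route dlavoie mentions can be as simple as an in-memory hash join over the two result sets. A sketch with illustrative rows (in practice each list would come from an independent Pinot query):

```python
# Results of two independent Pinot queries (illustrative data).
orders = [
    {"user_id": 1, "total": 30.0},
    {"user_id": 2, "total": 12.5},
]
users = [
    {"user_id": 1, "country": "FR"},
    {"user_id": 2, "country": "US"},
]

# Build a lookup from the smaller result set, then probe it (hash join).
by_user = {u["user_id"]: u for u in users}
joined = [
    {**o, "country": by_user[o["user_id"]]["country"]}
    for o in orders
    if o["user_id"] in by_user
]
```

This works well when one side fits comfortably in memory; beyond that, a query engine like Trino or Presto is the better fit.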
@harsur_12: @harsur_12 has joined the channel
@rams357: @rams357 has joined the channel

#pinot-dev


@npawar: were there some recent version changes made to the hadoop/parquet dependencies? I’m unable to upload a Parquet format file via this API anymore. This was working a few weeks ago. Now I get this exception during segment creation:
```
java.lang.NoClassDefFoundError: org/apache/hadoop/mapreduce/lib/input/FileInputFormat
	at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.8.0_282]
	at java.lang.ClassLoader.defineClass(ClassLoader.java:756) ~[?:1.8.0_282]
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) ~[?:1.8.0_282]
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:468) ~[?:1.8.0_282]
	at java.net.URLClassLoader.access$100(URLClassLoader.java:74) ~[?:1.8.0_282]
	at java.net.URLClassLoader$1.run(URLClassLoader.java:369) ~[?:1.8.0_282]
	at java.net.URLClassLoader$1.run(URLClassLoader.java:363) ~[?:1.8.0_282]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_282]
	at java.net.URLClassLoader.findClass(URLClassLoader.java:362) ~[?:1.8.0_282]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[?:1.8.0_282]
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) ~[?:1.8.0_282]
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_282]
	at org.apache.parquet.HadoopReadOptions$Builder.<init>(HadoopReadOptions.java:95) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-89a22f097c5ff26396e58950c90d764066a56121]
	at org.apache.parquet.HadoopReadOptions.builder(HadoopReadOptions.java:79) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-89a22f097c5ff26396e58950c90d764066a56121]
	at org.apache.parquet.hadoop.ParquetReader$Builder.<init>(ParquetReader.java:198) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-89a22f097c5ff26396e58950c90d764066a56121]
	at org.apache.parquet.avro.AvroParquetReader$Builder.<init>(AvroParquetReader.java:107) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-89a22f097c5ff26396e58950c90d764066a56121]
	at org.apache.parquet.avro.AvroParquetReader$Builder.<init>(AvroParquetReader.java:99) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-89a22f097c5ff26396e58950c90d764066a56121]
	at org.apache.parquet.avro.AvroParquetReader.builder(AvroParquetReader.java:48) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-89a22f097c5ff26396e58950c90d764066a56121]
	at org.apache.pinot.plugin.inputformat.parquet.ParquetUtils.getParquetAvroReader(ParquetUtils.java:51) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-89a22f097c5ff26396e58950c90d764066a56121]
	at org.apache.pinot.plugin.inputformat.parquet.ParquetAvroRecordReader.init(ParquetAvroRecordReader.java:52) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-89a22f097c5ff26396e58950c90d764066a56121]
	at org.apache.pinot.plugin.inputformat.parquet.ParquetRecordReader.init(ParquetRecordReader.java:47) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-89a22f097c5ff26396e58950c90d764066a56121]
	at org.apache.pinot.spi.data.readers.RecordReaderFactory.getRecordReaderByClass(RecordReaderFactory.java:149) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-89a22f097c5ff26396e58950c90d764066a56121]
	at org.apache.pinot.core.segment.creator.impl.SegmentIndexCreationDriverImpl.getRecordReader(SegmentIndexCreationDriverImpl.java:122) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-89a22f097c5ff26396e58950c90d764066a56121]
	at org.apache.pinot.core.segment.creator.impl.SegmentIndexCreationDriverImpl.init(SegmentIndexCreationDriverImpl.java:98) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-89a22f097c5ff26396e58950c90d764066a56121]
	at org.apache.pinot.controller.util.FileIngestionUtils.buildSegment(FileIngestionUtils.java:129) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-89a22f097c5ff26396e58950c90d764066a56121]
	at org.apache.pinot.controller.util.FileIngestionHelper.buildSegmentAndPush(FileIngestionHelper.java:101) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-89a22f097c5ff26396e58950c90d764066a56121]
	at org.apache.pinot.controller.api.resources.PinotIngestionRestletResource.ingestData(PinotIngestionRestletResource.java:197) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-89a22f097c5ff26396e58950c90d764066a56121]
	at org.apache.pinot.controller.api.resources.PinotIngestionRestletResource.ingestFromFile(PinotIngestionRestletResource.java:127) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-89a22f097c5ff26396e58950c90d764066a56121]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_282]
```
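The missing class `org.apache.hadoop.mapreduce.lib.input.FileInputFormat` normally ships in the `hadoop-mapreduce-client-core` artifact, so one thing to check is whether that dependency was dropped or excluded from the shaded jar in a recent change. Adding it back explicitly would look like the fragment below (the version is illustrative; it should match the Hadoop version Pinot builds against):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-core</artifactId>
  <!-- illustrative version; align with the Hadoop version in the Pinot build -->
  <version>2.10.1</version>
</dependency>
```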

#getting-started


@harsur_12: @harsur_12 has joined the channel

#pinot-flow


@ravi.maddi: @ravi.maddi has renamed the channel from "pinot-startup" to "pinot-flow"
@ravikumar.m: @ravikumar.m has joined the channel
@rajasekhar.m: @rajasekhar.m has joined the channel
@ravi.maddi: @ravi.maddi has left the channel
@dixit: @dixit has joined the channel
@g.kishore: @g.kishore has joined the channel
@sathi.tadi: @sathi.tadi has joined the channel
@vallamsetty: Hey Ravi.. Thanks for creating the channel...
@vallamsetty: Welcome everyone to the Pinot community

#pinot-rack-awareness


@jaydesai.jd: @ssubrama @g.kishore Can u review the changes and sign off today if possible. Thanks :slightly_smiling_face:
@g.kishore: done
@ssubrama: @jaydesai.jd What is pinot env provider supposed to do?
  @jaydesai.jd: Replying in the document.