#general


@balazs.francsics: @balazs.francsics has joined the channel
@nico: @nico has joined the channel

#random


@balazs.francsics: @balazs.francsics has joined the channel
@nico: @nico has joined the channel

#feat-text-search


@nico: @nico has joined the channel

#feat-presto-connector


@aaron: @aaron has joined the channel

#troubleshooting


@laxman: Hi Team, we are facing an issue with realtime consumption (from kafka). We have 4 pinot servers and 7 realtime tables. Every few hours, one of the pinot servers stops consuming from one kafka topic. In the logs, the server is unable to connect to the controller while finalising the segment. However, the controller is up and running.
• Should the pinot server retry on such recoverable errors?
• Are there any config levers (retries on error, etc.) to fix this?
Any thoughts?
  @fx19880617: so it’s not connecting to the controller? do you have a stack trace for the controller call error?
  @singalravi: The readiness probe of the pinot controller pod is failing. As a result, k8s does not allow connections to the controller leader pod. The pinot server connection fails with the following error message:
  @singalravi:
```
2021/02/01 17:34:58.141 ERROR [ServerSegmentCompletionProtocolHandler] [spanEventView__24__625__20210201T1714Z] Could not send request
java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method) ~[?:?]
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:115) ~[?:?]
    at java.net.SocketInputStream.read(SocketInputStream.java:168) ~[?:?]
    at java.net.SocketInputStream.read(SocketInputStream.java:140) ~[?:?]
    at shaded.org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at shaded.org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at shaded.org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at shaded.org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at shaded.org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at shaded.org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at shaded.org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at shaded.org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:157) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at shaded.org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at shaded.org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at shaded.org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at shaded.org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at shaded.org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at shaded.org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at shaded.org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at shaded.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at shaded.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.common.utils.FileUploadDownloadClient.sendRequest(FileUploadDownloadClient.java:357) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.common.utils.FileUploadDownloadClient.sendSegmentCompletionProtocolRequest(FileUploadDownloadClient.java:631) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.server.realtime.ServerSegmentCompletionProtocolHandler.sendRequest(ServerSegmentCompletionProtocolHandler.java:207) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.server.realtime.ServerSegmentCompletionProtocolHandler.segmentConsumed(ServerSegmentCompletionProtocolHandler.java:174) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.postSegmentConsumedMsg(LLRealtimeSegmentDataManager.java:930) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager$PartitionConsumer.run(LLRealtimeSegmentDataManager.java:557) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at java.lang.Thread.run(Thread.java:834) [?:?]
```
  @singalravi: Following is the readiness probe on the pinot controller:
```
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /pinot-controller/admin
    port: 9000
    scheme: HTTP
  initialDelaySeconds: 60
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
```
  @fx19880617: I see, have you tried going into the pod and checking whether the http endpoint is up?
  @fx19880617: you can also use the `/health` endpoint to check
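  For reference, a readiness probe pointed at the `/health` endpoint might look like the sketch below; the values are illustrative, but a 1s `timeoutSeconds` (as configured above) is easy to trip during GC pauses:
```
readinessProbe:
  httpGet:
    path: /health
    port: 9000
    scheme: HTTP
  initialDelaySeconds: 60
  periodSeconds: 10
  timeoutSeconds: 5        # more forgiving than the original 1s
  failureThreshold: 3
  successThreshold: 1
```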
@sagar.khangan: Hi Team, I am running a job to import parquet data from s3 and am getting this error: ```2021/02/11 06:27:24.999 ERROR [SegmentGenerationJobRunner] [pool-2-thread-1] Failed to generate Pinot segment for file - java.lang.IllegalArgumentException: INT96 not yet implemented.```
  @fx19880617: this is because int96 support is missing in the parquet-avro lib; we are adding native parquet support here:
  @fx19880617: should be available soon
  @fx19880617: for now, you can transform this field to int64. I think it’s a timestamp in nanoseconds, right?
  @sagar.khangan: ok thanx
  @sagar.khangan: yes its epoch time
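  Once the field is transformed to an int64 epoch value, the Pinot schema can declare it directly. A minimal sketch, assuming a hypothetical column name `eventTimeNs` holding epoch nanoseconds:
```
"dateTimeFieldSpecs": [
  {
    "name": "eventTimeNs",
    "dataType": "LONG",
    "format": "1:NANOSECONDS:EPOCH",
    "granularity": "1:NANOSECONDS"
  }
]
```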
  @sagar.khangan: when can we expect this parquet support to be available?
  @fx19880617: Should be soon, this week or next week. @steotia ^^
@sagar.khangan: Is there any config in the jobspec to ignore files by prefix or suffix? e.g. the folder has csv files and other formats, but we want it to ignore the others?
  @npawar: check out the includeFileNamePattern:
  @fx19880617:
```
# inputDirURI: Root directory of input data, expected to have scheme configured in PinotFS.
inputDirURI: 'examples/batch/baseballStats/rawdata'
# includeFileNamePattern: include file name pattern, supported glob pattern.
# Sample usage:
#   'glob:*.avro' will include all avro files just under the inputDirURI, not sub directories;
#   'glob:**/*.avro' will include all the avro files under inputDirURI recursively.
includeFileNamePattern: 'glob:**/*.csv'
```
  @sagar.khangan: thanx
@sagar.khangan: I am getting an error while uploading s3 data:
```
Failed to generate Pinot segment for file s3:xxx/xxx/1234.csv
Illegal character in scheme name at index 2: table_OFFLINE_2021-02-01 09:39:00.000_2021-02-01 11:59:00.000_2.tar.gz
    at java.net.URI.create(URI.java:852) ~[?:1.8.0_282]
    at java.net.URI.resolve(URI.java:1036) ~[?:1.8.0_282]
    at org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner.lambda$run$0(SegmentGenerationJobRunner.java:212) ~[pinot-batch-ingestion-standalone-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-162d0e61b6b1c3d51f915f7ad3e151a4fb24110a]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_282]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_282]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_282]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_282]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_282]
```
  @fx19880617: what’s your `inputDirURI` and `outputDirURI`?
  @fx19880617: seems there is a space in the segment name
  @fx19880617: not sure if that’s allowed for uri
  @sagar.khangan:
```
inputDirURI: ''
includeFileNamePattern: 'glob:**/*.csv'
outputDirURI: ''
```
  @fx19880617: that should be ok
  @fx19880617: I think the issue is segment name
  @sagar.khangan: is that space because of the datetime field?
  @fx19880617: I think so
  @sagar.khangan: format is "yyyy-MM-dd HH:mm:ss.SSS"
  @fx19880617: ah i c
  @sagar.khangan: is this format not allowed with pinot ingestion?
  @fx19880617: this time format is fine; it’s just that we put the time min/max values in the segment name, and they contain the space that triggered the bug
  @fx19880617: in your ingestion job
  @fx19880617: can you change the segmentNameGenerator
  @sagar.khangan: what should I use for segmentNameGenerator
  @sagar.khangan: ?
  @fx19880617: just try this:
```
segmentNameGeneratorSpec:
  type: simple
  configs:
```
  @fx19880617: meanwhile we will fix this bug
  @sagar.khangan: ok sure
  @sagar.khangan: also, for controllerURI, do I need to put the AWS Load Balancer uri?
  @fx19880617: if you run it from the k8s cluster, then you can use the service name `pinot-controller:9000`
  @fx19880617: if it’s outside the k8s
  @fx19880617: like if you run the job from your laptop, then the AWS lb is required
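  In jobspec terms this is the `pinotClusterSpecs` section; a sketch with illustrative hostnames:
```
pinotClusterSpecs:
  # from inside the k8s cluster, the service name resolves:
  - controllerURI: 'http://pinot-controller:9000'
  # from outside k8s (e.g. a laptop), use the AWS load balancer hostname instead:
  # - controllerURI: 'http://<aws-lb-hostname>:9000'
```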
  @sagar.khangan: I am running from the pinot-server pod; with the same it gave the error ```Failed to read from Schema URI - 'pinot-controller:9000```
  @sagar.khangan: tried this
```
segmentNameGeneratorSpec:
  type: simple
  configs:
```
Illegal character in scheme name at index 2: tabloe_OFFLINE_2021-02-01 08:28:00.000_2021-02-01 10:59:00.000_1.tar.gz *same error*
  @fx19880617: ``?
  @sagar.khangan: this uri worked, but same error: Illegal character in scheme name at index 2:
  @fx19880617: hmm
  @fx19880617: can you try this:
  @fx19880617:
```
segmentNameGeneratorSpec:
  type: fixed
  configs:
    segment.name: myTable_segment_0
```
  @sagar.khangan: Also, I am adding a field as Boolean; it gets converted to string and the value is null, and INT fields are showing negative values
  @fx19880617: pinot internally uses string to store boolean
  @fx19880617: I think that is the null value?
  @sagar.khangan: yes its showing null
  @sagar.khangan: and in schema I see format as STRING
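  If the nulls are unwanted, a `defaultNullValue` can be set on the field spec. A minimal sketch with a hypothetical field name; since Pinot 0.6 stores BOOLEAN as STRING internally, the default is expressed as a string too:
```
{
  "name": "myFlag",
  "dataType": "BOOLEAN",
  "defaultNullValue": "false"
}
```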
  @sagar.khangan: Also, I have uploaded data to Pinot and I can see the tar files in S3, but in the query editor only the 10000 records which I uploaded initially are showing up
  @sagar.khangan: is this expected, that it would have some delay? because there was no error on the console from the script
  @fx19880617: hmm
  @fx19880617: so you mean you have 10000 csv files?
  @fx19880617: how many segments and total documents?
  @sagar.khangan: no, i mean for testing i tried with 3 csvs and it uploaded 10000 records; now I ran on a bunch more csvs
  @sagar.khangan: now it is not showing new records
  @sagar.khangan: still showing 10000 records and 1 segment
  @fx19880617: ic
  @fx19880617: ah
  @fx19880617: for this you need to run 3 jobs each with different segment.name
  @fx19880617: and point to each file
  @fx19880617: pinot uses the segment name to distinguish segments
  @fx19880617: and segments will override each other if the segment name is the same
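  In other words, with the `fixed` generator each job run needs its own name; a sketch (names illustrative):
```
segmentNameGeneratorSpec:
  type: fixed
  configs:
    segment.name: myTable_segment_0   # the second job would use myTable_segment_1, and so on
```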
  @sagar.khangan: So I need to run this a number of times equal to the number of folders in s3, with different values for configs: segment.name: ae_consent_flags_segment_0
  @sagar.khangan: is this correct?
  @fx19880617: yes
  @fx19880617: each file should become one segment
  @fx19880617: you can use inputPattern to pick the only file inside a directory
  @sagar.khangan: the inputPattern option in the jobspec? I didn’t find it in the documentation
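  The documented spelling is `includeFileNamePattern` (shown earlier in this thread); pointing `inputDirURI` at the directory and narrowing the glob picks up a single file. A sketch with hypothetical paths:
```
inputDirURI: 's3://my-bucket/data/dir1/'
includeFileNamePattern: 'glob:1234.csv'
```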
  @sagar.khangan: is there any option to set this automatically to use 1 segment for each file ?
  @fx19880617: not for fixed segmentNameGenerator
  @fx19880617: what’s your first `segmentNameGenerator` config?
  @sagar.khangan:
```
segmentNameGeneratorSpec:
  type: fixed
  configs:
    segment.name: table_segment_0
```
  @sagar.khangan: i think "exclude.sequence.id" would work?
  @fx19880617: no
  @fx19880617: it’s for different usage
  @sagar.khangan: ok
  @fx19880617: have you tried this
```
# segmentNameGeneratorSpec: defines how to init a SegmentNameGenerator.
segmentNameGeneratorSpec:
  type: normalizedDate
  configs:
    exclude.sequence.id: true
```
  @fx19880617: also can I see your table config
  @fx19880617: did you set ```"segmentPushType": "APPEND",```
  @sagar.khangan: yes
```
{
  "OFFLINE": {
    "tableName": "name_OFFLINE",
    "tableType": "OFFLINE",
    "segmentsConfig": {
      "timeColumnName": "uploaded",
      "segmentPushFrequency": "HOURLY",
      "segmentPushType": "APPEND",
      "schemaName": "name",
      "replication": "1",
      "replicasPerPartition": "1"
    },
    "tenants": {
      "broker": "DefaultTenant",
      "server": "DefaultTenant"
    },
    "tableIndexConfig": {
      "invertedIndexColumns": [],
      "rangeIndexColumns": [],
      "autoGeneratedInvertedIndex": false,
      "createInvertedIndexDuringSegmentGeneration": false,
      "sortedColumn": [],
      "bloomFilterColumns": [],
      "loadMode": "MMAP",
      "noDictionaryColumns": [],
      "onHeapDictionaryColumns": [],
      "varLengthDictionaryColumns": [],
      "enableDefaultStarTree": false,
      "enableDynamicStarTreeCreation": false,
      "aggregateMetrics": false,
      "nullHandlingEnabled": false
    },
    "metadata": {},
    "quota": {},
    "routing": {},
    "query": {},
    "ingestionConfig": {},
    "isDimTable": false
  }
}
```
  @fx19880617: ok
  @fx19880617: this should be good
  @sagar.khangan: ok let me try what you shared
  @fx19880617: can you use this `normalizedDate` type
  @sagar.khangan: yes trying with that
  @sagar.khangan: I cleaned my s3 output folder and ran it again; the tar files are created and there is no error in the script, but in the query editor the doc count is still the same, 10834
  @fx19880617: hmm
  @fx19880617: how many segments were created in your output s3 directory?
  @sagar.khangan: 24 tar files in the output dir, 1 corresponding to 1 csv
  @sagar.khangan: now it loaded when I clicked on reload segments; this reload didn’t work last time
  @sagar.khangan: is there a way to auto-reload?
  @sagar.khangan: we have hierarchical s3 folders; I just ran it for the 2nd-last level, where the folder contains the files
  @fx19880617: can you check table idealstates
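  The ideal state is also visible through the controller REST API; a sketch, assuming the standard controller port:
```
curl "http://<controller-host>:9000/tables/<tableName>/idealstate"
```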
  @fx19880617: from the log, do you see that all the segments were pushed?
  @sagar.khangan: logs just show
```
2021/02/11 10:33:29.244 WARN [SegmentIndexCreationDriverImpl] [pool-2-thread-1] Using class: org.apache.pinot.plugin.inputformat.csv.CSVRecordReader to read segment, ignoring configured file format: AVRO
2021/02/11 10:33:30.040 WARN [SegmentIndexCreationDriverImpl] [pool-2-thread-1] Using class: org.apache.pinot.plugin.inputformat.csv.CSVRecordReader to
```
  @fx19880617: can you try to change `jobType: SegmentCreationAndMetadataPush`
  @fx19880617: to `jobType: SegmentMetadataPush`
  @fx19880617: then rerun it
  @fx19880617: it will just push segments from the output directory to pinot
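  i.e. only the `jobType` line in the job spec changes; a sketch:
```
# jobType: SegmentCreationAndMetadataPush   # previous value: creates segments, then pushes
jobType: SegmentMetadataPush                # pushes the segments already sitting in outputDirURI
```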
  @sagar.khangan: ok
@sagar.khangan: I have uploaded data to Pinot and I can see the tar files in S3, but in the query editor only the 10000 records which I uploaded initially are showing up. Is this expected, that it would have some delay? Because there was no error on the console from the script.
  @bowlesns: If your jobType was `SegmentCreationAndTarPush`, I believe it would exhibit that behavior. I’ve been using `SegmentCreationAndUriPush` and the data shows up when querying from the controller. Let me know if this works for you.
@laxman: Hi Team, what are the steps to restore deleted segments back to a REALTIME table? We did the following steps, but to no avail:
• Copied back the deleted segments to their respective tables
• Restored the zookeeper metadata from the zk backup to this path `/pinot/<datasource-name>/PROPERTYSTORE/SEGMENTS/<table-name>_REALTIME/…`
• Restarted servers and controllers
We don’t see these segments in IDEAL_STATE. Has anyone done this? What’s the right way to restore deleted segments for a REALTIME table?
  @fx19880617: the idealstate requires explicit modification for this
  @fx19880617: Ting is adding support for this :
  @laxman: > the idealstate requires explicit modification for this
  Thanks Xiang. We are trying this now.
  @fx19880617: ok, do you have an offline table? if so, you can update the offline side
  @pabraham.usa: @fx19880617 is real-time segment upload still wip? Or is there any way I can do it?
  @fx19880617: it’s still wip, you can pull that PR into your repo to build and test
  @laxman: @fx19880617: We don’t have offline tables.
@sagar.khangan: Hi Team, I have an INT column in an offline table that I am loading from s3 csv. The value is '1' in the csv but it's loaded as '-2147483648' in the table. What could be the reason?
  @fx19880617: i think this typically means a csv parsing error (-2147483648 is Integer.MIN_VALUE, the default value Pinot fills in for an INT it could not read); can you check if your column name matches the pinot column? also the casing
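  A hedged sketch of the CSV reader spec with an explicit header override, in case the file's header doesn't match the schema (column names hypothetical; they must match the Pinot schema exactly, including case):
```
recordReaderSpec:
  dataFormat: 'csv'
  className: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReader'
  configClassName: 'org.apache.pinot.plugin.inputformat.csv.CSVRecordReaderConfig'
  configs:
    header: 'id,uploaded,myIntColumn'
```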
@falexvr: Guys, I merged the latest changes into our fork, rebuilt the docker image, and deployed it. Now when we try to create a table we see this error in the controller: `ClassNotFoundException: org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory` Any suggestions?
  @dlavoie: Are you running a customized PluginDir value?
  @falexvr: Nope, actually we're using almost everything as it was in the first place
  @falexvr: `pluginsDir: /opt/pinot/plugins`
  @dlavoie: what about `plugins.include` ?
  @falexvr: This is the portion in values.yaml used to deploy in a kubernetes environment using helm:
  @dlavoie: can you list the content of `/opt/pinot/plugins` within your forked image?
  @dlavoie: Also, can you extract your controller logs using this command? ```kubectl exec <controller-pod-name> -- cat pinotController.log > controller.log``` The startup log lists all plugins being loaded
  @falexvr: This is what I see in `/opt/pinot/plugins`
  @dlavoie: If you can share your controller logs, we’ll have a better understanding of what is going on
  @dlavoie: Somehow, you have provided `pinot-gcs` as `plugins.include`. The plugin is loaded by default, so you don’t need to specify it. Overriding `plugins.include` will disable all other plugins, including the kafka one.
  @falexvr: Ah... I did it as we saw here:
  @dlavoie: Yes, the documentation may lead you astray, since it doesn’t mention that 1) other plugins will be disabled, and 2) it’s already part of the docker image.
  @dlavoie: I don’t see anything in your `values.yaml`, so I guess it is part of your docker image fork?
  @falexvr: It's in the `jvmOpts` section of `values.yaml`
  @dlavoie: Not for the controller
  @dlavoie: You only shared the controller values
  @falexvr: Sorry, wrong file
  @falexvr: This is the one
  @falexvr:
  @dlavoie: anyways, just remove `-Dplugins.dir=/opt/pinot/plugins -Dplugins.include=pinot-gcs` from all your jvmArgs
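  After that cleanup, the controller section of `values.yaml` would look something like this (a sketch; heap sizes illustrative):
```
controller:
  jvmOpts: "-Xms256M -Xmx1G"   # no -Dplugins.dir / -Dplugins.include: the image defaults load all bundled plugins
```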
  @falexvr: Okay, I'll try that
@balazs.francsics: @balazs.francsics has joined the channel
@aaron: When I open the pinot controller web UI it becomes visible very briefly and then goes blank. I see a bunch of warnings in the controller logs, are these relevant?
  @aaron:
```
2021/02/11 14:33:10.615 WARN [Reflections] [main] could not get type for name org.eclipse.jetty.npn.NextProtoNego$ClientProvider from any class loader
org.reflections.ReflectionsException: could not get type for name org.eclipse.jetty.npn.NextProtoNego$ClientProvider
    at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:390) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.reflections.Reflections.expandSuperTypes(Reflections.java:381) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.reflections.Reflections.<init>(Reflections.java:126) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.swagger.jaxrs.config.BeanConfig.classes(BeanConfig.java:276) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.swagger.jaxrs.config.BeanConfig.scanAndRead(BeanConfig.java:240) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at io.swagger.jaxrs.config.BeanConfig.setScan(BeanConfig.java:221) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.controller.api.ControllerAdminApiApplication.setupSwagger(ControllerAdminApiApplication.java:154) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.controller.api.ControllerAdminApiApplication.start(ControllerAdminApiApplication.java:128) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.controller.ControllerStarter.setUpPinotController(ControllerStarter.java:416) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.controller.ControllerStarter.start(ControllerStarter.java:287) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.tools.service.PinotServiceManager.startController(PinotServiceManager.java:116) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.tools.service.PinotServiceManager.startRole(PinotServiceManager.java:91) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.lambda$startBootstrapServices$0(StartServiceManagerCommand.java:234) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.startPinotService(StartServiceManagerCommand.java:286) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.startBootstrapServices(StartServiceManagerCommand.java:233) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.execute(StartServiceManagerCommand.java:183) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.tools.admin.command.StartControllerCommand.execute(StartControllerCommand.java:130) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.tools.admin.PinotAdministrator.execute(PinotAdministrator.java:154) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:166) [pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
Caused by: java.lang.ClassNotFoundException: org.eclipse.jetty.npn.NextProtoNego$ClientProvider
    at jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581) ~[?:?]
    at jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178) ~[?:?]
    at java.lang.ClassLoader.loadClass(ClassLoader.java:522) ~[?:?]
    at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:388) ~[pinot-all-0.6.0-jar-with-dependencies.jar:0.6.0-bb646baceafcd9b849a1ecdec7a11203c7027e21]
    ... 18 more
```
  @aaron: I'm running with `-Dplugins.include=pinot-s3`
  @fx19880617: i think it’s not related to the plugin; this warning should be ignorable. I saw it when opening the swagger ui
  @fx19880617: is this java 8 or 11
  @aaron: The controller is java 11
  @aaron: I have the server running java 8 though, because I couldn't get it working with java 11
  @fx19880617: got it, I will double check; meanwhile, can you try java 8?
  @fx19880617: also, I think the bin is built with java8, so a java11 runtime might cause that issue. you can also try to build from src with java11
  @fx19880617: or just try the java11 docker image
  @aaron: Ok thanks -- I'll try the docker image
  @aaron: I actually see the same issue running with Java 8, btw
  @fx19880617: oh? is it the downloaded bin file?
  @fx19880617: I just checked the latest master branch, that works fine
  @aaron: I'm running *`apache-pinot-incubating-0.6.0-bin`*
  @aaron: Is running from master more stable?
  @fx19880617: there are just more fixes there, but I don’t think any UI change has been made since 0.6.0
  @fx19880617: I’ve tried 0.6.0-bin, it works on my side
  @fx19880617: i tried: `bin/pinot-admin.sh QuickStart -type batch`
  @fx19880617: hmm, I also tried java11, it also works
  @fx19880617: I’m running from my own mac though
  @fx19880617: have you tried a different browser
  @aaron: I'm just confused because it was working before and then I changed the configuration options to include s3 and it stopped working. I'm using google chrome and I tried safari too
  @fx19880617: I will check on that
@nico: @nico has joined the channel

#pinot-dev


@nico: @nico has joined the channel
@tingchen: Bumping up the need to solve the issue . We have seen a lot of issues with the outdated yammer metrics lib. For example, it has the stale metrics issue as reported here: . This makes some alerts not work properly. The new lib should allow configuring a more recent metrics implementation.
  @jlli: Thanks for the heads up! Let me take a look
  @tingchen: thank you.

#community


@nico: @nico has joined the channel

#announcements


@gergely.lendvai93: @gergely.lendvai93 has joined the channel
@nico: @nico has joined the channel

#getting-started


@nico: @nico has joined the channel