#general


@1705ayush: Hi everyone, I am trying to deploy ThirdEye on custom data and a custom cluster. I have completed the setup of Pinot on minikube in _namespace my-pinot-kube_ using ```helm install pinot ~/incubator-pinot/kubernetes/helm/pinot -n my-pinot-kube --set replicas=1``` When I try to install ThirdEye using ```/incubator-pinot/kubernetes/helm/thirdeye $ ./install.sh thirdeye -n my-pinot-kube``` the ThirdEye backend and frontend pods crash (CrashLoopBackOff) after starting. Attached are the log files for both backend and frontend. There are mentions of pinot-quickstart in the log files, so it seems like ThirdEye is still expecting the pinot-quickstart namespace. @pyne.suvodeep I celebrated early, I think :upside_down_face:
@pyne.suvodeep: Hey @1705ayush, seems like a dependency issue. I'll have to look into this. BTW, if you are running into ThirdEye issues, it might be good to check out the ThirdEye Slack.
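A likely fix is to find where the ThirdEye chart hardcodes the quickstart namespace and point it at the real Pinot controller instead. A rough sketch, assuming the chart exposes the controller host as a value (the value name, and whether install.sh forwards helm flags, are assumptions):

```
# Locate the hardcoded namespace in the chart's templates/values
grep -rn "pinot-quickstart" ~/incubator-pinot/kubernetes/helm/thirdeye

# Hypothetical override: point ThirdEye at the controller service in
# my-pinot-kube (edit values.yaml instead if install.sh doesn't forward --set)
./install.sh thirdeye -n my-pinot-kube \
  --set pinot.controller.host=pinot-controller.my-pinot-kube.svc.cluster.local
```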

#troubleshooting


@phuchdh: Hello, I'm having some trouble with the minion, based on the following docs: I want to convert a realtime table to an offline table, but the minions show errors. Here are the error logs:
  @fx19880617: @npawar
  @npawar: can you show me the table config?
  @npawar: your processing window is only 5 minutes: `windowStartMs=1614297600000, segmentName=RuleLogs__0__0__20210301T0452Z,RuleLogs__1__0__20210301T0452Z,RuleLogs__2__0__20210301T0452Z, windowEndMs=1614297900000`
  @npawar: have you set a bucketTimePeriod of 5 mins?
  @npawar: I would suggest you remove that
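For reference, the window size comes from the `RealtimeToOfflineSegmentsTask` section of the table config; removing `bucketTimePeriod` falls back to the 1d default. A sketch with illustrative values:

```
"task": {
  "taskTypeConfigsMap": {
    "RealtimeToOfflineSegmentsTask": {
      "bucketTimePeriod": "1d",
      "bufferTimePeriod": "2d"
    }
  }
}
```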
@phuchdh:
@phuchdh: logs
@joshhighley: KafkaStreamLevelStreamConfig ("highlevel") copies stream properties 'kafka.consumer.prop.*' to the Kafka consumer properties but KafkaPartitionLevelStreamConfig ("lowlevel") does not. Is there a reason for this?
@joshhighley: I believe it's the reason I can't connect to a Kafka server that uses SASL_SSL: the needed properties aren't being passed from the lowlevel realtime table config to the KafkaConsumer
@joshhighley: After looking at the source some more, I found that highlevel tables copy properties at a very different level than lowlevel tables. For example, the highlevel table property "stream.kafka.consumer.prop.security.protocol":"SASL_SSL" has to be "security.protocol":"SASL_SSL" in streamConfigs { } for lowlevel.
  @wrbriggs: This works for me:
  @wrbriggs: Also, I have seen @npawar say that high level streams are deprecated
  @joshhighley: I got it to work, but the same config settings for lowlevel and highlevel should use the same keys. I saw 'kafka.consumer.prop' somewhere in the documentation, but now I can't find it.
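To illustrate the difference: for a lowlevel table the SASL properties sit directly in `streamConfigs`, without the `stream.kafka.consumer.prop.` prefix used by highlevel. A sketch (topic, broker, mechanism, and JAAS values are placeholders):

```
"streamConfigs": {
  "streamType": "kafka",
  "stream.kafka.consumer.type": "lowlevel",
  "stream.kafka.topic.name": "myTopic",
  "stream.kafka.broker.list": "broker-1:9093",
  "stream.kafka.consumer.factory.class.name": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
  "stream.kafka.decoder.class.name": "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
  "security.protocol": "SASL_SSL",
  "sasl.mechanism": "PLAIN",
  "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"user\" password=\"secret\";"
}
```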

#pinot-dev


@ken: Hi all - I’m back to working on , where I was trying to put all of the code used for Pinot integration tests into a tests-qualified jar. But I’m wondering if it would work better to have a separate Pinot sub-project/jar that can be used to spin up a stand-alone “mini” Pinot cluster, and have that jar be used by pinot-integration-tests (and 3rd parties wanting to do the same). This would provide cleaner separation for what’s used internally by Pinot, versus what projects using Pinot would need to use. Thoughts?
@g.kishore: > But I’m wondering if it would work better to have a separate Pinot sub-project/jar that can be used to spin up a stand-alone “mini” Pinot cluster, and have that jar be used by pinot-integration-tests (and 3rd parties wanting to do the same)
+100 for this
@g.kishore: this is the right thing to do
@g.kishore: in fact, our quickstart is very similar to this
@g.kishore: it just brings up everything in one process
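To make the proposal concrete, here is a purely hypothetical sketch of what such a standalone artifact's API could look like; none of these names exist in Pinot today:

```
// Hypothetical API sketch for a reusable "mini cluster" jar; all names are
// illustrative, not actual Pinot classes.
import java.io.Closeable;
import java.io.IOException;

public interface PinotMiniCluster extends Closeable {
  void start() throws IOException;  // bring up ZK, controller, broker, server in one process
  String controllerUrl();           // for REST calls (table/schema creation) in tests
  String brokerUrl();               // for issuing queries against the cluster
  @Override
  void close() throws IOException;  // tear everything down
}
```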

#segment-write-api


@yupeng: @npawar @fx19880617 what's the next step for the segment writer API?
@yupeng: do we need a call with Subbu, or can we proceed with the impl?
@npawar: I don't think we need a call; we can continue chatting with him on the doc
@npawar: Xiang and I have started thinking about and discussing the implementation
@yupeng: cool
@yupeng: have you decided between `collect` vs `write` for the method name?
@npawar: I'll change it back to `write` and keep it the way it is in the doc
@npawar: I agree with your reasoning
@yupeng: sg
@yupeng: also, i’m building a POC of the flink connector
@yupeng: i wonder how i can try the impl out
@yupeng: will you push it to some branch as you implement it?
@npawar: sure, will share the branch when I have something
@npawar: but just a heads up that I won't have something ready for you immediately. I'll need a week or so to wrap up some other things I'm working on
  @yupeng: sure thing.
@fx19880617: I prefer collect. Collect and flush may give the user the impression that this thing won't be persisted until you call flush
  @npawar: that's the right impression, right?
  @fx19880617: I feel this is the impression we should give to the user
  @npawar: cool… then let's keep it `collect`
  @yupeng: hmm
  @yupeng: that's the API, right?
  @yupeng: Xiang, the actual behavior could depend on the implementation
  @yupeng: for example, we want MutableSegmentImpl to implement this
  @yupeng: then it can serve right after `write` is called, right?
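For readers following the thread, a minimal sketch of the writer contract being debated (method names per the discussion above; the exact signatures are assumptions, not the final API):

```
import java.io.IOException;
import org.apache.pinot.spi.data.readers.GenericRow;

// collect() buffers rows; flush() is the point where buffered rows are built
// into a segment and persisted. Whether an implementation (e.g. one backed by
// MutableSegmentImpl) can serve rows before flush() is an implementation
// detail, as noted in the thread.
public interface SegmentWriter extends AutoCloseable {
  void collect(GenericRow row) throws IOException;
  void flush() throws IOException;
}
```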