#general


@deenadhayalan.dd7: Hi team, I was comparing the segment time difference when using a Groovy function against the best-case scenario for my topics. Looking at segment.start.time, segment.end.time, and segment.creation.time, I am confused by the start and end times. I have provided the time column, segments, and config for my tables. Kindly help me resolve this.
  @mark.needham: The times are based on the contents of the segment, so presumably the Groovy filter function is removing some records?
  @deenadhayalan.dd7: @mark.needham Yes, Groovy is filtering out some records. I will look into the timestamps of the records I have generated.
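For reference, a minimal sketch of where such a filter lives in a Pinot table config; records for which the Groovy expression evaluates to true are dropped at ingestion time, so they never contribute to the segment's start/end times (the column name and threshold here are hypothetical):
```json
{
  "ingestionConfig": {
    "filterConfig": {
      "filterFunction": "Groovy({eventTime < 1650000000000}, eventTime)"
    }
  }
}
```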
@randika: @randika has joined the channel
@xiangfu0: Thanks to @kam! We’ve added a searchable archive of the Pinot community’s public Slack channels (Google has indexed it too!). You can search past discussions and join the threads from here:
@zotyarex: @zotyarex has joined the channel
@sowmya.gowda: @sowmya.gowda has joined the channel
@nizar.hejazi: Hey team, I deployed two different 0.11.0 nightly builds and found that *an equality* filter predicate on a field with a *sorted column* index isn’t working as expected when the segment is in *CONSUMING* state. Ex:
```select distinct (company) from role_with_company limit 1000000
-- answer: 51```
Queries with less-than or greater-than predicates always return the correct results:
```select count(distinct company) from role_with_company where company < '6269223774083d800011fd95' limit 1000000
-- answer: 36
select count(distinct company) from role_with_company where company > '6269223774083d800011fd95' limit 1000000
-- answer: 14```
On the other hand, equality predicates do not return the correct results while the segment is in CONSUMING state:
```select count(distinct company) from role_with_company where company = '6269223774083d800011fd95' limit 1000000
-- answer: 0, when segment is in CONSUMING state```
When the segment is COMMITTED, the query returns the correct results:
```select count(distinct company) from role_with_company where company = '6269223774083d800011fd95' limit 1000000
-- answer: 1, when segment is COMMITTED```
Anyone aware of a change in behaviour that was introduced recently? @richard892 @jackie.jxt
@jackie.jxt: @nizar.hejazi Thanks for reporting the issue. Can you share the commit hash for the 2 builds?
@jackie.jxt: I don't recall any recent change on the equals predicate. You may also query the virtual column `$segmentName` to identify which segments contain the value
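As a hedged illustration of that suggestion, using the table and value from the report above, grouping by the `$segmentName` virtual column shows which segments (if any) contain the value:
```sql
-- Count matches per segment; segments with no match simply will not appear
select $segmentName, count(*)
from role_with_company
where company = '6269223774083d800011fd95'
group by $segmentName
limit 1000000
```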
@jackie.jxt: Let's move the conversation to <#C011C9JHN7R|troubleshooting>
@karangisreekanth: @karangisreekanth has joined the channel
@dave.deep: @dave.deep has joined the channel
@xiaoyzhu: @xiaoyzhu has joined the channel
@jaimin: @jaimin has joined the channel
@mehmet.tasan: @mehmet.tasan has joined the channel
@jag959: @jag959 has joined the channel
@pj.kovanen: @pj.kovanen has joined the channel
@gunnar.enserro: @gunnar.enserro has joined the channel
@hareesh.lakshminaraya: @hareesh.lakshminaraya has joined the channel
@rafael.moreno: @rafael.moreno has joined the channel
@acching: @acching has joined the channel

#random


@randika: @randika has joined the channel
@zotyarex: @zotyarex has joined the channel
@sowmya.gowda: @sowmya.gowda has joined the channel
@karangisreekanth: @karangisreekanth has joined the channel
@dave.deep: @dave.deep has joined the channel
@xiaoyzhu: @xiaoyzhu has joined the channel
@jaimin: @jaimin has joined the channel
@mehmet.tasan: @mehmet.tasan has joined the channel
@jag959: @jag959 has joined the channel
@pj.kovanen: @pj.kovanen has joined the channel
@gunnar.enserro: @gunnar.enserro has joined the channel
@hareesh.lakshminaraya: @hareesh.lakshminaraya has joined the channel
@rafael.moreno: @rafael.moreno has joined the channel
@acching: @acching has joined the channel

#troubleshooting


@randika: @randika has joined the channel
@octchristmas: Hi team. Some brokers do not work. The response code is 500, and the message in the response is in HTML format, not JSON. ```<html><head><title>Grizzly 2.4.4</title><style><!--div.header {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#003300;font-size:22px;-moz-border-radius-topleft: 10px;border-top-left-radius: 10px;-moz-border-radius-topright: 10px;border-top-right-radius: 10px;padding-left: 5px}div.body {font-family:Tahoma,Arial,sans-serif;color:black;background-color:#FFFFCC;font-size:16px;padding-top:10px;padding-bottom:10px;padding-left:10px}div.footer {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#666633;font-size:14px;-moz-border-radius-bottomleft: 10px;border-bottom-left-radius: 10px;-moz-border-radius-bottomright: 10px;border-bottom-right-radius: 10px;padding-left: 5px}BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;}B {font-family:Tahoma,Arial,sans-serif;color:black;}A {color : black;}HR {color : #999966;}--></style> </head><body><div class="header">Request failed.</div><div class="body">Request failed.</div><div class="footer">Grizzly 2.4.4</div></body></html>``` I compared the logs of the brokers that are working well with those of the brokers that are failing. On a failing broker, there seems to be a problem with communication over the NIO channel with the server, as shown below. Broker log: ```Trying to acquire token for table: starbucksStores
-- Broker logs where the query succeeds
2022/05/24 13:12:26.926 INFO [DataTableHandler] [nioEventLoopGroup-2-1] Channel for server: pinot_test_broker01_O is now active
2022/05/24 13:12:27.320 DEBUG [BaseBrokerRequestHandler] [jersey-server-managed-async-executor-1] Broker Response: org.apache.pinot.common.response.broker.BrokerResponseNative@656efaa8 ...
-- Broker logs where queries fail
2022/05/24 12:42:22.252 INFO [DataTableHandler] [nioEventLoopGroup-2-1] Channel for server: [!!!org.apache.pinot.core.transport.ServerRoutingInstance@1479d4e9=>java.lang.NoSuchFieldError:EXACT!!!] is now active```
Perhaps the error is related to this.
@zotyarex: @zotyarex has joined the channel
@sowmya.gowda: @sowmya.gowda has joined the channel
@karangisreekanth: @karangisreekanth has joined the channel
@dave.deep: @dave.deep has joined the channel
@xiaoyzhu: @xiaoyzhu has joined the channel
@jaimin: @jaimin has joined the channel
@mehmet.tasan: @mehmet.tasan has joined the channel
@jag959: @jag959 has joined the channel
@pj.kovanen: @pj.kovanen has joined the channel
@nizar.hejazi: Hey team, I deployed two different 0.11.0 nightly builds and found that *an equality* filter predicate on a field with a *sorted column* index isn’t working as expected when the segment is in *CONSUMING* state. Ex:
```select distinct (company) from role_with_company limit 1000000
-- answer: 51```
Queries with less-than or greater-than predicates always return the correct results:
```select count(distinct company) from role_with_company where company < '6269223774083d800011fd95' limit 1000000
-- answer: 36
select count(distinct company) from role_with_company where company > '6269223774083d800011fd95' limit 1000000
-- answer: 14```
On the other hand, equality predicates do not return the correct results while the segment is in CONSUMING state:
```select count(distinct company) from role_with_company where company = '6269223774083d800011fd95' limit 1000000
-- answer: 0, when segment is in CONSUMING state```
When the segment is COMMITTED, the query returns the correct results:
```select count(distinct company) from role_with_company where company = '6269223774083d800011fd95' limit 1000000
-- answer: 1, when segment is COMMITTED```
Anyone aware of a change in behaviour that was introduced recently? @richard892 @jackie.jxt
Latest nightly build commit: *0.11.0-SNAPSHOT-438c53b-20220520*
Previous nightly build commit: *0.11.0-SNAPSHOT-3403619-20220507*
  @jackie.jxt: Can you try ```select count(distinct company), $segmentName from role_with_company where company = '6269223774083d800011fd95' group by $segmentName limit 1000000```
  @nizar.hejazi: Run the following command to find the segment name for this company: ```select distinct ($segmentName), company from role_with_company limit 1000000``` Result:
  @nizar.hejazi: @jackie.jxt your query returns no results
  @jackie.jxt: Shall we have a quick zoom?
  @nizar.hejazi: yes @jackie.jxt
  @nizar.hejazi: Debugged w/ Jackie and the issue most likely is that our Kafka data partitioning schema is different from Pinot partitioning schema. Will double confirm and update the thread.
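For anyone following along: partition info is declared in the table config's `segmentPartitionConfig`, and Pinot can prune segments for equality predicates based on it; if the Kafka producer partitions the stream differently from what is declared there, the consuming segment can be pruned incorrectly. A minimal sketch of that config section, with a hypothetical partition function and count:
```json
{
  "tableIndexConfig": {
    "segmentPartitionConfig": {
      "columnPartitionMap": {
        "company": {
          "functionName": "Murmur",
          "numPartitions": 4
        }
      }
    }
  }
}
```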
@gunnar.enserro: @gunnar.enserro has joined the channel
@horaymond6: Hello, how can I round decimals in a Pinot query? For example, if the result is 1.674321, I want it rounded to 1.7.
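A hedged sketch for the question above, assuming Pinot's `ROUND(value, roundToNearest)` signature, which rounds to the nearest multiple of the second argument (the table and column names are hypothetical):
```sql
-- Scale to one decimal place, round to the nearest integer, then scale back:
-- 1.674321 * 10 = 16.74321 -> rounded to nearest 1 = 17 -> 17 / 10 = 1.7
select round(metricValue * 10, 1) / 10 as roundedValue
from myTable
```
If your Pinot version's `ROUND` has different semantics, the same scale/round/rescale idea can be applied with whatever rounding function is available.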
@hareesh.lakshminaraya: @hareesh.lakshminaraya has joined the channel
@rafael.moreno: @rafael.moreno has joined the channel
@acching: @acching has joined the channel

#thirdeye-pinot


@zhixun.hong: Hi @cyril, I could load the Pinot dataset in ThirdEye based on your advice. I can see the dataset on the thirdeye-admin/Dataset page now, but can't load it on the RCA or Anomaly Detection page. A quick question: to configure a simple alert, should I configure it in Pinot or in ThirdEye?
@cyril: Awesome @zhixun.hong! Please read here to configure a simple alert:
  @zhixun.hong: @cyril what does this error mean?
  @cyril: can you give more context?
  @zhixun.hong: Could you check and configure one simple alert here? dataset name is "alert_system_v1" and metric is "avg_milliwatts_by_min"
  @zhixun.hong: ```detectionName: '1_min_alert'
description: 'energy data alert detection'

# Tip: Type a few characters and look ahead (ctrl + space) to auto-fill.
metric: avg_milliwatts_by_min
dataset: alert_system_v1

# Configure multiple rules with "OR" relationship.
rules:
- detection:
  - name: detection_min_1
    type: ALGORITHM              # Configure the detection type here. See doc for more details.
    params:                      # The parameters for this rule. Different rules have different params.
      configuration:
        bucketPeriod: P1D        # Use P1D for daily; PT1H for hourly; PT5M for minutely data.
        pValueThreshold: 0.05    # Higher value means more sensitive to small changes.
        mlConfig: true           # Automatically maintain configuration with the best performance.
  filter:                        # Filter out anomalies detected by rules to reduce noise.
  - name: filter_rule_1
    type: PERCENTAGE_CHANGE_FILTER
    params:
      pattern: UP_OR_DOWN        # Other patterns: "UP","DOWN".
      threshold: 0.05            # Filter out all changes less than 5% compared to baseline.
  quality:                       # Configure the data quality rules
  - name: data_sla_rule_1
    type: DATA_SLA               # Alert if data is missing.
    params:
      sla: 1_DAYS                # Data is missing for 3 days since last availability```
  @zhixun.hong: I tried this alert.
  @zhixun.hong: ```There was an error generating the preview. Need support? Share the content below with the Thirdeye team at Error: There was no metric or anomaly data returned for the detection```
  @cyril: Can you try this one and tell me what you get in the thirdeye *logs* *(ie in the logs of the docker container)* ```detectionName: 'test'
description: 'this is a test'

# Tip: Type a few characters and look ahead (ctrl + space) to auto-fill.
metric: avg_milliwatts_by_min
dataset: alert_system_v1

# Configure multiple rules with "OR" relationship.
rules:
- detection:
  - name: detect_threshold
    type: THRESHOLD  # Configure the detection type here. See doc for more details.
    params:          # The parameters for this rule. Different rules have different params.
      max: 7000```
  @zhixun.hong: gle.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: java.util.concurrent.ExecutionException: com.google.common.util.concurrent.UncheckedExecutionException: org.apache.pinot.client.PinotClientException: Error when running pql:SELECT SUM(avg_milliwatts_by_min) FROM alert_system_v1 WHERE datetime_as_epoch >= 1650902400000 AND datetime_as_epoch < 1651680000000 GROUP BY dateTimeConvert(datetime_as_epoch,'1:MILLISECONDS:EPOCH','1:MILLISECONDS:EPOCH','1:MINUTES') TOP 100000\n\tat com.google.common.cache.LocalCache.loadAll(LocalCache.java:4151)\n\tat com.google.common.cache.LocalCache.getAll(LocalCache.java:4104)\n\tat com.google.common.cache.LocalCache$LocalLoadingCache.getAll(LocalCache.java:5000)\n\tat org.apache.pinot.thirdeye.detection.cache.builder.TimeSeriesCacheBuilder.fetchSlices(TimeSeriesCacheBuilder.java:107)\n\tat org.apache.pinot.thirdeye.detection.DefaultDataProvider.fetchTimeseries(DefaultDataProvider.java:104)\n\t... 13 common frames omitted\nWrapped by: java.lang.RuntimeException: fetch time series failed\n\tat org.apache.pinot.thirdeye.detection.DefaultDataProvider.fetchTimeseries(DefaultDataProvider.java:112)\n\tat org.apache.pinot.thirdeye.detection.DefaultInputDataFetcher.fetchData(DefaultInputDataFetcher.java:59)\n\tat org.apache.pinot.thirdeye.detection.components.ThresholdRuleDetector.runDetection(ThresholdRuleDetector.java:68)\n\tat org.apache.pinot.thirdeye.detection.wrapper.AnomalyDetectorWrapper.run(AnomalyDetectorWrapper.java:184)\n\t... 
10 common frames omitted\nWrapped by: org.apache.pinot.thirdeye.detection.DetectionPipelineException: Detection failed for all windows for detection config id 9223372036854775807 detector detect_threshold:THRESHOLD for monitoring window 1650902400000 to 1651680000000.\n\tat org.apache.pinot.thirdeye.detection.wrapper.AnomalyDetectorWrapper.checkMovingWindowDetectionStatus(AnomalyDetectorWrapper.java:246)\n\tat org.apache.pinot.thirdeye.detection.wrapper.AnomalyDetectorWrapper.run(AnomalyDetectorWrapper.java:202)\n\tat org.apache.pinot.thirdeye.detection.DetectionPipeline.runNested(DetectionPipeline.java:266)\n\tat org.apache.pinot.thirdeye.detection.algorithm.MergeWrapper.run(MergeWrapper.java:126)\n\tat org.apache.pinot.thirdeye.detection.DetectionPipeline.runNested(DetectionPipeline.java:266)\n\tat org.apache.pinot.thirdeye.detection.algorithm.DimensionWrapper.run(DimensionWrapper.java:333)\n\t... 6 common frames omitted\nWrapped by: org.apache.pinot.thirdeye.detection.DetectionPipelineException: Detection failed for all nested dimensions for detection config id 9223372036854775807 for monitoring window 1650902400000 to 1651680000000.\n\tat org.apache.pinot.thirdeye.detection.algorithm.DimensionWrapper.checkNestedMetricsStatus(DimensionWrapper.java:431)\n\tat org.apache.pinot.thirdeye.detection.algorithm.DimensionWrapper.run(DimensionWrapper.java:352)\n\tat org.apache.pinot.thirdeye.detection.DetectionPipeline.runNested(DetectionPipeline.java:266)\n\tat org.apache.pinot.thirdeye.detection.algorithm.MergeWrapper.run(MergeWrapper.java:126)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\t... 
1 common frames omitted\nWrapped by: java.util.concurrent.ExecutionException: org.apache.pinot.thirdeye.detection.DetectionPipelineException: Detection failed for all nested dimensions for detection config id 9223372036854775807 for monitoring window 1650902400000 to 1651680000000.\n\tat java.util.concurrent.FutureTask.report(FutureTask.java:122)\n\tat java.util.concurrent.FutureTask.get(FutureTask.java:206)\n\tat org.apache.pinot.thirdeye.detection.yaml.YamlResource.runPreview(YamlResource.java:881)\n\tat org.apache.pinot.thirdeye.detection.yaml.YamlResource.yamlPreviewApi(YamlResource.java:842)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:498)\n\tat org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)\n\tat org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)\n\tat org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:161)\n\tat org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:160)\n\tat org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:99)\n\tat org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:389)\n\tat org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:347)\n\tat org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:102)\n\tat 
org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:326)\n\tat org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)\n\tat org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)\n\tat org.glassfish.jersey.internal.Errors.process(Errors.java:315)\n\tat org.glassfish.jersey.internal.Errors.process(Errors.java:297)\n\tat org.glassfish.jersey.internal.Errors.process(Errors.java:267)\n\tat org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)\n\tat org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)\n\tat org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)\n\tat org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:473)\n\tat org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)\n\tat org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)\n\tat org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)\n\tat org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)\n\tat io.dropwizard.jetty.NonblockingServletHolder.handle(NonblockingServletHolder.java:49)\n\tat org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1623)\n\tat io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35)\n\tat org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)\n\tat io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:45)\n\tat io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:39)\n\tat org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)\n\tat io.dropwizard.bundles.redirect.RedirectBundle$1.doFilter(RedirectBundle.java:52)\n\tat org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)\n\tat 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)\n\tat org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:239)\n\tat io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52)\n\tat org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:703)\n\tat io.dropwizard.jetty.BiDiGzipHandler.handle(BiDiGzipHandler.java:67)\n\tat org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:56)\n\tat org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:174)\n\tat org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat org.eclipse.jetty.server.Server.handle(Server.java:505)\n\tat org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)\n\tat org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267)\n\tat .AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)\n\tat .FillInterest.fillable(FillInterest.java:103)\n\tat .ChannelEndPoint$2.run(ChannelEndPoint.java:117)\n\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)\n\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)\n\tat 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)\n\tat org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)\n\tat org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)\n\tat org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:698)\n\tat org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:804)\n\tat java.lang.Thread.run(Thread.java:750)\n","level":"ERROR","logger":"org.apache.pinot.thirdeye.detection.yaml.YamlResource","thread":"dw-616 - POST /yaml/preview?start=1650902400000&end=1651680000000&tuningStart=0&tuningEnd=0","message":"Error running preview with payload detectionName: 'test'\ndescription: 'this is a test'\n\n# Tip: Type a few characters and look ahead (ctrl + space) to auto-fill.\nmetric: avg_milliwatts_by_min\ndataset: alert_system_v1\n\n# Configure multiple rules with \"OR\" relationship.\nrules:\n- detection:\n - name: detect_threshold\n type: THRESHOLD # Configure the detection type here. See doc for more details.\n params: # The parameters for this rule. Different rules have different params.\n max: 7000","timestamp":1653402066749} front_1 | {"method":"POST","userAgent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.64 Safari/537.36","uri":"/yaml/preview","requestTime":5056,"protocol":"HTTP/1.1","contentLength":1480,"remoteAddress":"208.110.84.210","timestamp":1653402066750,"status":500}
  @cyril: ok, the error happens because this version of ThirdEye is legacy and uses the PQL query language; Pinot will only use SQL in the future. TOP in the query should be replaced by LIMIT. I tried a hotfix. You can rebuild the image, relaunch docker-compose, and see if it’s better. Please know that we don’t support this version of TE anymore, so I won’t make more changes to the codebase.
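Concretely, the failing PQL query from the logs above would become the following in SQL; only the trailing clause changes (assuming the rest of the query is valid in your Pinot version):
```sql
SELECT SUM(avg_milliwatts_by_min)
FROM alert_system_v1
WHERE datetime_as_epoch >= 1650902400000 AND datetime_as_epoch < 1651680000000
GROUP BY dateTimeConvert(datetime_as_epoch, '1:MILLISECONDS:EPOCH', '1:MILLISECONDS:EPOCH', '1:MINUTES')
LIMIT 100000  -- was: TOP 100000 in PQL
```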
  @zhixun.hong: then what is the best way to use ThirdEye data alerts with our custom CSV dataset?
  @cyril: If you want to use this OSS ThirdEye version, you will have to do a small amount of development to maintain the system yourself. Not much, I think, but it requires looking at the code a bit and doing some debugging. StarTree is now offering a managed platform with both Pinot and a fully rewritten ThirdEye, in SaaS or hosted in your cloud. What is your use case for ThirdEye? Disclaimer: I’m working at StarTree.
  @cyril: But please do try to rebuild the docker image and relaunch - this should work.
  @zhixun.hong: thirdeye docker rebuilding?
  @cyril: yes
  @zhixun.hong: I rebuilt thirdeye docker, but still same error.
  @cyril: hum, strange. Maybe you hit a layer cache when the image was rebuilt? Could you try rebuilding without the cache?
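A sketch of a no-cache rebuild, assuming a standard docker-compose setup with a service named `thirdeye` (the service name is hypothetical):
```shell
# Rebuild the image ignoring cached layers, then recreate the containers
docker-compose build --no-cache thirdeye
docker-compose up -d --force-recreate
```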
  @zhixun.hong: I tried to remove docker image, and then rebuilt it.
  @cyril: ok did you restart the docker-compose? What is the exact error message?
  @zhixun.hong: give me a sec
  @zhixun.hong: Docker is failing to run now.
  @zhixun.hong: any thought on that?

#getting-started


@randika: @randika has joined the channel
@zotyarex: @zotyarex has joined the channel
@sowmya.gowda: @sowmya.gowda has joined the channel
@karangisreekanth: @karangisreekanth has joined the channel
@dave.deep: @dave.deep has joined the channel
@xiaoyzhu: @xiaoyzhu has joined the channel
@jaimin: @jaimin has joined the channel
@mehmet.tasan: @mehmet.tasan has joined the channel
@jag959: @jag959 has joined the channel
@pj.kovanen: @pj.kovanen has joined the channel
@gunnar.enserro: @gunnar.enserro has joined the channel
@hareesh.lakshminaraya: @hareesh.lakshminaraya has joined the channel
@rafael.moreno: @rafael.moreno has joined the channel
@acching: @acching has joined the channel

#flink-pinot-connector


@sowmya.gowda: @sowmya.gowda has joined the channel

#introductions


@randika: @randika has joined the channel
@zotyarex: @zotyarex has joined the channel
@zhixun.hong: Hi team. I'm working on data engineering for real-time big-data analysis. I've worked on backends with audio/image data processing for many years, and now I want to learn how to monitor data analysis and detect anomalies in a data pipeline. I heard ThirdEye and Pinot are a good way to do it. I'm still struggling to instantiate this framework in my environment, but will try to learn more through this channel. Looking forward to lots of help. Thanks.
  @mayanks: Welcome to the community.
@sowmya.gowda: @sowmya.gowda has joined the channel
@karangisreekanth: @karangisreekanth has joined the channel
@dave.deep: @dave.deep has joined the channel
@xiaoyzhu: @xiaoyzhu has joined the channel
@jaimin: @jaimin has joined the channel
@mehmet.tasan: @mehmet.tasan has joined the channel
@jag959: @jag959 has joined the channel
@pj.kovanen: @pj.kovanen has joined the channel
@gunnar.enserro: @gunnar.enserro has joined the channel
@hareesh.lakshminaraya: @hareesh.lakshminaraya has joined the channel
@rafael.moreno: @rafael.moreno has joined the channel
@acching: @acching has joined the channel