#general


@g.kishore: Using Apache Kafka and Pinot for User-Facing Real-Time Analytics -- Happening now
@tariqahmed.farhan: @tariqahmed.farhan has joined the channel
@yuzhug: @yuzhug has joined the channel
@ratna: @ratna has joined the channel
@rabeeb.rahman.225: @rabeeb.rahman.225 has joined the channel
@sashastic: @sashastic has joined the channel
@karinwolok1: Heyyy to the newbies that probably came from the meetup! :slightly_smiling_face:
@srini: “Apache Pinot solves it, and the two together are like chocolate and peanut butter, peaches and cream, and Steve Rogers and Peggy Carter.” amazing :laughing:

#random


@tariqahmed.farhan: @tariqahmed.farhan has joined the channel
@yuzhug: @yuzhug has joined the channel
@ratna: @ratna has joined the channel
@rabeeb.rahman.225: @rabeeb.rahman.225 has joined the channel
@sashastic: @sashastic has joined the channel
@rabeeb.rahman.225: Is anyone here a Data Analyst? If you are, please let me know...I am a student in Springboard's Data Analyst course and I'm trying to network...

#feat-presto-connector


@kha.nguyen: @kha.nguyen has joined the channel

#troubleshooting


@humengyuk18: Hi team, is the `FixedSegmentNameGenerator` supported? From the docs, only the simple and normalizedDate name generators are mentioned.
  @ken: I’d noticed that too, when tracking down an issue with segment name generation. Looking at the code, it seems like it should be supported. `SegmentGenerationTaskRunner.getSegmentNameGenerator()` maps from the “fixed” type in the config to `return new FixedSegmentNameGenerator(segmentNameGeneratorConfigs.get(SEGMENT_NAME));`
  @humengyuk18: thanks, I will try.
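  Based on the code path mentioned above, the `fixed` type can be selected in the batch ingestion job spec. A minimal sketch (table and segment names here are hypothetical placeholders):

```yaml
# Excerpt from a Pinot batch ingestion job spec.
# type "fixed" maps to FixedSegmentNameGenerator in SegmentGenerationTaskRunner;
# "segment.name" supplies the fixed name via the SEGMENT_NAME config key.
segmentNameGeneratorSpec:
  type: fixed
  configs:
    segment.name: myTable_segment_0
```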
@falexvr: Guys, is there any way we can manually ingest some data from BigQuery into an already-created table? For some reason ZooKeeper stopped working and entered a reboot loop, and that messed up everything: the segments were no longer accessible and our broker is still restarting. We tried several things and none of them seemed to work. It was working yesterday, we didn't make any changes to the infrastructure configuration, and we can't find the source of it all yet.
@falexvr: It just keeps writing this to the logs non-stop: ERROR [MessageGenerationPhase] [HelixController-pipeline-default-mls-(eb0a8635_DEFAULT)] Event eb0a8635_DEFAULT : Unable to find a next state for resource: dpt_video_event_captured_v2_REALTIME partition: dpt_video_event_captured_v2__0__24203__20210124T1614Z from stateModelDefinitionclass org.apache.helix.model.StateModelDefinition from:ERROR to:ONLINE
@falexvr: It's no longer consuming data from kafka :disappointed:
@g.kishore: what happened?
@g.kishore: why did zk go down?
@falexvr: We still don't know, today we saw our analytics dashboards in bad shape and when we had a look at the infra we saw this
@g.kishore: btw, the segments will be in segment store
@g.kishore: you can always bring everything back up
@pabraham.usa: @falexvr I had issues with ZooKeeper, and running 5 instances of it seems to help a lot. This allows 2 instances to go down safely. Also I had to bump Xmx to match the load.
@dlavoie: You should start with investigating the errors from Zookeeper.
@dlavoie: Get to the root cause of the pod crashing.
@falexvr: Yeah, it was only one pod that started rebooting like crazy; we used to run with 3 instances
@dlavoie: What is the root cause of the restart?
@pabraham.usa: maybe try replacing the zookeeper snapshot folder on instance 1 with the working one from 0 or 2 and restart; it may work
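  The 5-node ensemble suggested above can be sketched as a minimal `zoo.cfg` (hostnames and paths here are hypothetical; adapt to your deployment):

```properties
# zoo.cfg for a 5-node ZooKeeper ensemble.
# Quorum is 3 of 5, so the cluster tolerates 2 nodes going down.
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk-0:2888:3888
server.2=zk-1:2888:3888
server.3=zk-2:2888:3888
server.4=zk-3:2888:3888
server.5=zk-4:2888:3888
```

  Each node also needs a `myid` file in `dataDir` matching its `server.N` entry, and the JVM heap (`-Xmx`) should be sized so the snapshot fits in memory without heavy GC.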
@tariqahmed.farhan: @tariqahmed.farhan has joined the channel
@yuzhug: @yuzhug has joined the channel
@ratna: @ratna has joined the channel
@rabeeb.rahman.225: @rabeeb.rahman.225 has joined the channel
@sashastic: @sashastic has joined the channel

#announcements


@g.kishore: Using Apache Kafka and Pinot for User-Facing Real-Time Analytics happening now
@sashastic: @sashastic has joined the channel

#getting-started


@wooodini: @wooodini has joined the channel