#general
@tsajay101: @tsajay101 has joined the channel
@lvs.pjx: @lvs.pjx has joined the channel
@sri: @sri has joined the channel
@valentin: @valentin has joined the channel
@aliouamardev: @aliouamardev has joined the channel
@zjinwei: Hi @npawar I'm working with @amitchopra and trying to use our new Pinot Kinesis support. Do we have any images built from that branch? We can't use the branch directly without one. Thanks
@npawar: we haven’t built any images. You could build one from the branch:
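For anyone following along, building an image from a feature branch roughly looks like the sketch below. The branch name and image tag are placeholders, not from this thread; the Dockerfile path follows the layout of the Pinot repo.

```
# Sketch: build a Pinot Docker image from a feature branch
git clone https://github.com/apache/pinot.git
cd pinot
git checkout <kinesis-branch>

# Build the Pinot distribution (skip tests for speed)
mvn clean install -DskipTests -Pbin-dist

# Build the Docker image from the repo's Dockerfile
cd docker/images/pinot
docker build -t pinot-kinesis:latest .
```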
@zjinwei: Thanks Neha, I tried it according to the doc and got the error:
```
[ERROR] Failed to execute goal com.mycila:license-maven-plugin:2.8:check (default) on project pinot-kinesis: Some files do not have the expected license header -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1]
```
@npawar: looks like some files are missing license headers. I’ll add them and update the branch
@npawar: you can try again
#random
@tsajay101: @tsajay101 has joined the channel
@lvs.pjx: @lvs.pjx has joined the channel
@sri: @sri has joined the channel
@valentin: @valentin has joined the channel
@aliouamardev: @aliouamardev has joined the channel
#troubleshooting
@tsajay101: @tsajay101 has joined the channel
@lvs.pjx: @lvs.pjx has joined the channel
@sri: @sri has joined the channel
@valentin: @valentin has joined the channel
@aliouamardev: @aliouamardev has joined the channel
#pinot-s3
@pabraham.usa: At the moment I have only a single disk per Pinot node in an AZ. If that disk fails I will lose all segments for that Pinot node
@pabraham.usa: I assume the normal solution would be to add disks in multiple AZs and replicate segments. However I'm also thinking about the possibility of keeping a segment copy in S3 as deep storage.
@g.kishore: yes, that happens by default if you configure s3 as the deep storage
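For reference, wiring S3 in as the deep store is a controller-config change along these lines. The bucket, path, and region are placeholders; the key names follow the Pinot S3 deep-storage docs, so double-check them against the docs for your version.

```properties
# Controller config sketch: S3 as deep storage (placeholder bucket/region)
controller.data.dir=s3://<bucket>/pinot/controller-data
controller.local.temp.dir=/tmp/pinot-tmp
pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.controller.storage.factory.s3.region=us-west-2
pinot.controller.segment.fetcher.protocols=file,http,s3
pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
```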
@g.kishore: you have only one replica
@pabraham.usa: Ahh great, so I assume sync back will happen normally during a disk failure?
@pabraham.usa: I mean happen automatically
@g.kishore: yes
@pabraham.usa: Trying to understand the recovery time
@g.kishore: it's the time for k8s to launch a new container + time to pull the segments from s3
@pabraham.usa: As this approach is cost effective
@g.kishore: you can use replication factor=2
@pabraham.usa: That means 2 copies on same disk
@g.kishore: then there will be two copies within the same AZ
@g.kishore: across multiple nodes
@pabraham.usa: ok great, I think I have to do that. Will try with deep storage first and see if the recovery time meets RPO
@g.kishore: note all of these can be done on demand
@g.kishore: just change the replication factor in table config to 2 and invoke rebalance
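The change being described is a one-field edit in the table config (table name and surrounding fields here are illustrative):

```json
{
  "tableName": "myTable",
  "segmentsConfig": {
    "replication": "2"
  }
}
```

After updating the config, a rebalance can be triggered through the controller REST API, e.g. `POST /tables/myTable/rebalance?type=OFFLINE` (swap in `REALTIME` for a realtime table); see the controller API docs for the exact parameters on your version.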
@g.kishore: btw, we should have this conversation in troubleshooting so others can also benefit
@pabraham.usa: Sure, thanks for the details Kishore. You've been very helpful as always..!!
#discuss-validation
@chinmay.cerebro: @mohammedgalalen056: let's go ahead with that strategy - let's make the JSON-schema-based validation configurable, so by default we don't enforce it. Let's finish up the remaining items in the schema and open a PR?
@mohammedgalalen056: Ok
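The "configurable, off by default" idea above can be sketched as below. All names here are illustrative, not from the actual codebase, and the required-field check stands in for whatever real JSON-schema validation is used.

```python
def check_record(record, required=("name",), enforce=False):
    """Validate a record against a set of required fields.

    With enforce=False (the default) an invalid record is merely
    reported; with enforce=True it raises, i.e. validation is enforced.
    """
    missing = [key for key in required if key not in record]
    if missing and enforce:
        raise ValueError(f"record missing required fields: {missing}")
    return not missing

# Default: invalid records are flagged but allowed through
print(check_record({"name": "x"}))  # True
print(check_record({}))             # False
```

Flipping `enforce=True` turns the same check into a hard failure, which keeps the enforcement decision in one place.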
#getting-started
@krishna: @krishna has joined the channel
