#general


@sleepythread: How does Pinot behave when there is a huge number of tables for events (say 2k)? I have a feeling that ZK could be a bottleneck in this situation?
  @g.kishore: We haven’t seen any issues with a large number of tables with ZK.
@jmeyer: @jmeyer has joined the channel
@c.ravikiran123: @c.ravikiran123 has joined the channel

#random


@jmeyer: @jmeyer has joined the channel
@c.ravikiran123: @c.ravikiran123 has joined the channel

#troubleshooting


@elon.azoulay: The following query compiles with the calcite compiler but the broker request still uses the pql2 compiler which fails: ```select foo from table where and case bar when 1,2,3 then 4 when 4,5,6 then 5 when 7 then 6 when 8 then 7 else 17 end = 6 limit 10```
  @elon.azoulay: Should `org.apache.pinot.common.request.transform.TransformExpressionTree` use the calcite compiler instead of the pql2 compiler if the endpoint is the "sql" endpoint?
  @g.kishore: Good point.. we should make that change. @mayanks ^^
  @mayanks: Yes, we should make that change within the scope of deprecating PQL. Last time I had looked at it, it didn't seem like a trivial change.
  @elon.azoulay: that's the only form of case statement that calcite compiles which doesn't work w pql it seems.
  @jackie.jxt: `TransformExpressionTree` is for PQL `BrokerRequest` only. Once we deprecate PQL, we can remove this class
  @jackie.jxt: Actually, after releasing `0.7`, we can already compile only `PinotQuery` for SQL queries and leave `BrokerRequest` empty
  @elon.azoulay: We're still on v6 - it seems like no matter what, it uses the pql compiler in BrokerRequestHandler, but only for the filter.
  @elon.azoulay: We will try out v7
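For reference, a minimal sketch of the two broker query endpoints being discussed (the broker host/port and table name are assumptions); the SQL endpoint is parsed with the Calcite-based SQL parser, while the legacy PQL endpoint goes through the PQL2 compiler:
```
# SQL endpoint - parsed with the Calcite-based SQL parser
curl -X POST -H "Content-Type: application/json" \
  http://localhost:8099/query/sql \
  -d '{"sql": "SELECT foo FROM myTable WHERE bar = 6 LIMIT 10"}'

# legacy PQL endpoint - goes through the PQL2 compiler
curl -X POST -H "Content-Type: application/json" \
  http://localhost:8099/query \
  -d '{"pql": "SELECT foo FROM myTable WHERE bar = 6 LIMIT 10"}'
```
Per the thread above, on 0.6 even requests sent to the SQL endpoint still had their filter compiled through the PQL2 path in the BrokerRequestHandler.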
@khushbu.agarwal: Hi all, while reloading status for a table, I get the following error message. How do we resolve this?
  @jackie.jxt: Reload status is for checking the table status after running `Reload All Segments`, and it is not supported for realtime tables yet
  @jackie.jxt: You can ignore this error message as it is just a status check; it has no actual effect on the table
  @khushbu.agarwal: Okay
@ravi.maddi: Can somebody help me? I created a schema (table) and am trying to push data to it, but the data (rows) are not appearing in the Query Console, and I am not able to find any exceptions in the pinotBroker/Server/Controller logs. How can I find my data issue? :slightly_smiling_face:
@ravi.maddi: *I am trying JSON indexing:* *I am able to see the data in the Query Console now, and I am following the "githubEvents" example in the Pinot Quick Start applications provided with "incubator-pinot".*
I run this query from the query console: ```select exp from my_table WHERE JSON_MATCH(colunm1, 'att1=''Lab''')```
But I am getting this error: ```[ { "errorCode": 200, "message": "QueryExecutionError:\njava.lang.IllegalStateException: Cannot apply JSON_MATCH on column: column1 without json index\n\tat shaded.com.google.common.base.Preconditions.checkState(Preconditions.java:518)\n\tat org.apache.pinot.core.plan.FilterPlanNode.constructPhysicalOperator(FilterPlanNode.java:206)\n\tat org.apache.pinot.core.plan.FilterPlanNode.run(FilterPlanNode.java:80)\n\tat org.apache.pinot.core.plan.DocIdSetPlanNode.run(DocIdSetPlanNode.java:41)\n\tat org.apache.pinot.core.plan.ProjectionPlanNode.run(ProjectionPlanNode.java:52)\n\tat org.apache.pinot.core.plan.TransformPlanNode.run(TransformPlanNode.java:52)\n\tat org.apache.pinot.core.plan.SelectionPlanNode.run(SelectionPlanNode.java:83)\n\tat org.apache.pinot.core.plan.CombinePlanNode.run(CombinePlanNode.java:100)\n\tat org.apache.pinot.core.plan.InstanceResponsePlanNode.run(InstanceResponsePlanNode.java:33)\n\tat org.apache.pinot.core.plan.GlobalPlanImplV0.execute(GlobalPlanImplV0.java:45)\n\tat org.apache.pinot.core.query.executor.ServerQueryExecutorV1Impl.processQuery(ServerQueryExecutorV1Impl.java:294)\n\tat org.apache.pinot.core.query.executor.ServerQueryExecutorV1Impl.processQuery(ServerQueryExecutorV1Impl.java:215)\n\tat org.apache.pinot.core.query.executor.QueryExecutor.processQuery(QueryExecutor.java:60)\n\tat org.apache.pinot.core.query.scheduler.QueryScheduler.processQueryAndSerialize(QueryScheduler.java:157)" } ]```
*My table config looks like this:* ```. . "tableIndexConfig": { "loadMode": "MMAP", }, "jsonIndexColumns": [ "colunm1", "column2", "column3" ], "metadata": { "customConfigs": {} } }```
*I followed the quick start project; there it is declared the way shown below, and I tried that too, but this time I don't even see data in my table in the query console:* ```. . "tableIndexConfig": { "loadMode": "MMAP", "jsonIndexColumns": [ "colunm1", "column2", "column3" ] }, "metadata": { "customConfigs": {} } }```
*Schema:* ```. . . "dimensionFieldSpecs": [ { "name": "colunm1", "dataType": "STRING", "maxLength": 2147483647 }, { "name": "colunm2", "dataType": "STRING", "maxLength": 2147483647 }, { "name": "colunm3", "dataType": "STRING", "maxLength": 2147483647 }, . . .```
*Need your help please* :innocent::innocent:
@g.kishore: you have a typo in the field name
  @ravi.maddi: Sorry, the typo is only in the post.
@g.kishore: It's colunm1 in the schema, but you specify column1 in the query.
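For anyone hitting the same error, a sketch of the relevant table config section with the JSON index columns nested inside `tableIndexConfig` (as in the quick start example); the column names here are the placeholders from the post above and must match the schema and the query exactly:
```
"tableIndexConfig": {
  "loadMode": "MMAP",
  "jsonIndexColumns": [
    "colunm1",
    "colunm2",
    "colunm3"
  ]
},
"metadata": {
  "customConfigs": {}
}
```
Depending on the Pinot version, existing segments may also need a reload before the JSON index is actually built and `JSON_MATCH` stops failing.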
@falexvr: Guys, I need some help with our realtime tables. I added two directories within /opt/pinot/deployment/ for two different kafka cluster certs, one of those work as expected, but the second one even following the same steps and configuring everything the same way as the first one making sure it picks up its own kafka certs is not working, it drops this error: ```org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 514 Caused by: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:1.8.0_282] ( . . . ) at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:212) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e] at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:256) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e] at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:486) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e] at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:479) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e] at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:177) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e] at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getBySubjectAndId(CachedSchemaRegistryClient.java:256) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e] at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getById(CachedSchemaRegistryClient.java:235) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e] at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:107) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e] at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:79) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e] at io.confluent.kafka.serializers.KafkaAvroDeserializer.deserialize(KafkaAvroDeserializer.java:55) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e] at org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder.decode(KafkaConfluentSchemaRegistryAvroMessageDecoder.java:114) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e] at org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder.decode(KafkaConfluentSchemaRegistryAvroMessageDecoder.java:120) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e] at 
org.apache.pinot.plugin.inputformat.avro.confluent.KafkaConfluentSchemaRegistryAvroMessageDecoder.decode(KafkaConfluentSchemaRegistryAvroMessageDecoder.java:53) ~[pinot-confluent-avro-0.7.0-SNAPSHOT-shaded.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e] at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.processStreamEvents(LLRealtimeSegmentDataManager.java:471) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e] at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager.consumeLoop(LLRealtimeSegmentDataManager.java:402) ~[pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e] at org.apache.pinot.core.data.manager.realtime.LLRealtimeSegmentDataManager$PartitionConsumer.run(LLRealtimeSegmentDataManager.java:538) [pinot-all-0.7.0-SNAPSHOT-jar-with-dependencies.jar:0.7.0-SNAPSHOT-a44d0b1bb64d00d851ea6f2d8bc46ff0ab080d3e] at java.lang.Thread.run(Thread.java:748) [?:1.8.0_282] Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target``` I had a look at the pods in our server instances and all of them had their certs in place, all the table configs have the right path in their properties and even so the second table (the one we need to consume form a different kafka cluster) doesn't work
@falexvr: From my perspective it has to be something to do with the Schema Registry client, because the Kafka consumer is able to actually retrieve data from Kafka; the step that is failing is the validation of the Avro schema against the schema registry. But I am very sure it is picking up the right certs for that schema registry
  @dlavoie: Your analysis is on point, kafka consumers are fine but the schema registry deserializer is not taking your truststore into consideration. Would have to look into the details of confluent’s deserializer.
  @falexvr: Just to make sure I wasn't providing bad certs, I went into Aiven, got the user/key certs from a user with full access to both the topics and the schema registry, and performed the same operation; the results were the same
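One possible workaround to try here (an assumption, not something confirmed in this thread): since the Confluent schema registry REST client falls back to the JVM-wide TLS defaults when no registry-specific SSL settings are passed through, importing the second cluster's CA into the truststore used by the Pinot server JVMs may get the deserializer past the PKIX error. Paths and passwords below are placeholders:
```
# Sketch only: make the schema registry CAs for BOTH clusters available via a
# JVM default truststore on every Pinot server, then restart the servers.
keytool -importcert -alias kafka-cluster2-ca \
  -file /opt/pinot/deployment/cluster2/ca.pem \
  -keystore /opt/pinot/deployment/combined-truststore.jks -storepass changeit

export JAVA_OPTS="$JAVA_OPTS \
  -Djavax.net.ssl.trustStore=/opt/pinot/deployment/combined-truststore.jks \
  -Djavax.net.ssl.trustStorePassword=changeit"
```
Note that the JVM default truststore is shared by the whole process, so the CAs for both Kafka clusters need to be imported into it.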
@laxman: Updated this thread with more info. Any thoughts?
@jmeyer: @jmeyer has joined the channel
@jmeyer: Hello :wave: What could be the reasons for an external view to be missing? (Specifically for a REALTIME table) Thanks
  @mayanks: Missing or empty? Also, was it there before and got deleted?
  @mayanks: Maybe ZK issues?
  @jmeyer: (For more context, it's a new table) It really is missing, I'm currently setting up multiple tables and all of them are missing an external view
  @jmeyer: For instance `curl -s -X GET pinot-controller.infra-dev.svc.cluster.local:9000/tables/user_events` -> Returns the table definition. But `curl -s -X GET pinot-controller.infra-dev.svc.cluster.local:9000/tables/user_events/externalview` -> Returns `{"code": 404,"error": "Table not found"}`
  @mayanks: Hmm, how are you creating the table? Do you have access to controller logs to see what happens when you create a table? Also is ideal state created?
  @mayanks: Can you check whether the idealstate exists?
  @jmeyer: Yep, idealstate is there. I've got access to logs - I'll take a closer look at the moment the tables are created. Tables are created via the API
  @mayanks: Yeah controller log should say something
  @jmeyer: Probably unrelated because I've had these warnings on a working table (other cluster), but I'm seeing multiple `The configuration 'stream.kafka.topic.name' was supplied but isn't a known config.` warnings (and the same for other related fields) [On 0.7]
  @mayanks: Any other errors or logs stating whether table was created or not?
  @jmeyer:
  @jmeyer: I can't seem to find a log saying that the table is indeed created, but via the API I can get the table config (and schema) back. However, while creating a new table (after deleting the schema & definition), I'm getting quite a few warnings (see snippet above)
  @jmeyer: I find `[ZkBaseDataAccessor] [HelixController-pipeline-task-pinot-dev-(0dac30d3_TASK)] Fail to read record for paths: {/pinot-dev/INSTANCES/Server_pinot-server-0.pinot-server-headless.infra-dev.svc.cluster.local_8098/MESSAGES/e5d83f70-3ee9-4837-a9bd-58f06729ccb8=-101}` especially suspicious
  @jmeyer: While deleting the table beforehand, I got some `[ZkClient] [grizzly-http-server-1] Failed to delete path /pinot-dev/PROPERTYSTORE/SEGMENTS/document_open_REALTIME! org.I0Itec.zkclient.exception.ZkException: org.apache.zookeeper.KeeperException$NotEmptyException: KeeperErrorCode = Directory not empty for /pinot-dev/PROPERTYSTORE/SEGMENTS/document_open_REALTIME`
  @mayanks: I think that might just mean you need to first delete all segments of a table before deleting the table
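  (A sketch of that ordering against the controller API - the host and table name are the ones from this thread, and exact paths can vary a bit by version:)
  ```
  # delete all segments of the realtime table first...
  curl -X DELETE "pinot-controller.infra-dev.svc.cluster.local:9000/segments/user_events?type=REALTIME"
  # ...then delete the table itself
  curl -X DELETE "pinot-controller.infra-dev.svc.cluster.local:9000/tables/user_events?type=realtime"
  ```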
  @mayanks: It might be easier to collaborate on a zoom call if you are up for it
  @jmeyer: > I think that might just mean you need to first delete all segments of a table before deleting the table
  That's actually one thing I've been wondering about for a bit :slightly_smiling_face:
  @mayanks: I’d say do a clean start for a table and let’s look at logs
  @mayanks: If you can create a new test cluster altogether, even better - to eliminate any weird state your current cluster might be in
  @jmeyer: Thanks, I'll try that. If that doesn't solve it, we could do that:
  > It might be easier to collaborate on a zoom call if you are up for it
  @mayanks: Sure, ping me when you think zoom might be better, we can continue here until then
  @jmeyer: After deleting the segments (while keeping the table around), the external view still doesn't exist (not very surprising). But the UI (Cluster Manager / Tables / Tables / <table_name>) now opens up instead of being stuck in an infinite loop, and the status goes to Green - but still no external view (checked in ZK - it's missing), nor segments (expecting 1 consuming). I've also tried creating the table from the UI - same result:
  • Status: Bad (from UI)
  • external view: :x: (missing)
  • ideal state: :white_check_mark:
  @jmeyer: The table I'm creating is very simple. From the UI, I've basically only changed / input the table name, kafka broker & topic name
  @mayanks: Can you paste the ideal state?
  @jmeyer: ```{ "OFFLINE": null, "REALTIME": { "user_events__0__0__20210331T1617Z": { "Server_pinot-server-0.pinot-server-headless.infra-dev.svc.cluster.local_8098": "CONSUMING" } } }```
  @jmeyer: ```$ curl -s -X GET pinot-controller.infra-dev.svc.cluster.local:9000/segments/user_events [{"REALTIME":["user_events__0__0__20210331T1617Z $ curl -s -X GET pinot-controller.infra-dev.svc.cluster.local:9000/segments/user_events/user_events__0__0__20210331T1617Z {"code":404,"error":"Segment user_events__0__0__20210331T1617Z or table user_events not found in /var/pinot/controller/data/user_events/user_events__0__0__20210331T1617Z"}```
  @mayanks: Any logs in the server `Server_pinot-server-0.pinot-server-headless.infra-dev.svc.cluster.local_8098`? Although, I don't expect there might be (since no external view), but would help to check
  @jmeyer: I don't seem to find new logs vs the logs I've sent above
  @mayanks:
  @mayanks: In the zk explorer, can you check the znodes under INSTANCES/<server> to see if there are any errors?
  @jmeyer: node `server-.../ERRORS` is `{}` same for `MESSAGES` & `STATUSUPDATE`
  @mayanks: and currentstates?
  @jmeyer:
  @mayanks: And you don't see any logs in the server that suggest it was told by controller to start consuming?
  @jmeyer: Sorry for the wall of text, maybe you can grep some interesting logs in there
  @mayanks: The errors I see in the log seem to suggest they are from the time you tried to delete the table. And the fact that I see a log line below seems to suggest the table was not properly created to begin with. Wondering if we can start with a fresh table creation to check the logs (if fresh pinot cluster, even better): ```2021-03-31 18:00:56 2021/03/31 16:00:56.576 WARN [SegmentDeletionManager] [PinotHelixResourceManagerExecutorService] Resource: user_events_REALTIME is not set up in idealState or ExternalView, won't do anything```
  @jmeyer: Yes, the tables are missing the external view right from the start (after creation via the API). I've sent you the logs surrounding table creation from a fresh start (no tables, but not a fresh cluster)
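To recap the checks used in this thread, here is a sketch of the read-only controller calls for comparing what exists for a table (the host and table name are the ones from the conversation; adjust for your cluster):
```
curl -s -X GET pinot-controller.infra-dev.svc.cluster.local:9000/tables/user_events               # table config
curl -s -X GET pinot-controller.infra-dev.svc.cluster.local:9000/tables/user_events/idealstate    # Helix ideal state
curl -s -X GET pinot-controller.infra-dev.svc.cluster.local:9000/tables/user_events/externalview  # Helix external view
curl -s -X GET pinot-controller.infra-dev.svc.cluster.local:9000/segments/user_events             # segment list
```
If the ideal state exists but the external view never appears and the server shows no CONSUMING segment, the controller logs around table creation (as suggested above) are the next place to look.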
@elon.azoulay: Hi, we added a column to a schema via the schema PUT API (it's in the kafka topic) but still do not see data, i.e. the select list does not contain that column. Do we need to restart the servers or make some other API call?
  @elon.azoulay: Or do we have to wait for new segments?
  @fx19880617: the segment created after the schema call should have the new column
  @fx19880617: Or you can reload the consuming segments or just restart the pinot server
  @elon.azoulay: Oh nice! How do you reload consuming segments?
  @elon.azoulay: Ah, is it the "reload table if backwards compatible" checkbox in the swagger page?
  @fx19880617: there is a swagger api to reload a segment. I haven’t used the UI button :p
  @elon.azoulay: thx!
  @elon.azoulay: one question - does that have a significant performance impact? i.e. does it regenerate the segments?
  @fx19880617: yes, it will drop current segment and try to recreate and re-consume it
  @elon.azoulay: Oh, so older segments should not be reloaded, i.e. if the kafka topic no longer contains the old data, right?
  @fx19880617: you can specify the segment name in that call to reload
  @elon.azoulay: sg, thanks @fx19880617!
  @fx19880617: if reloading the table, it will touch all segments
  @elon.azoulay: oh, so we could lose data if the kafka topic no longer contains the offsets from that segment, right?
  @fx19880617: yes, if your current segment is very large
  @fx19880617: the same issue will happen when you restart the server though
  @elon.azoulay: this is all really good to know, thanks! It looks like restarting the servers worked for us.
  @elon.azoulay: Much appreciated @fx19880617! you helped us again:)
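For reference, a sketch of the schema-update and reload calls discussed above (controller host and the schema/table/segment names are placeholders; exact paths can vary a bit by version):
```
# 1. update the schema
curl -X PUT -H "Content-Type: application/json" \
  localhost:9000/schemas/myTable -d @myTable-schema.json

# 2. reload a single segment (e.g. the consuming one)
curl -X POST "localhost:9000/segments/myTable/myTable__0__0__20210331T1617Z/reload"

# 3. or reload every segment of the table (touches all segments)
curl -X POST "localhost:9000/segments/myTable/reload?type=REALTIME"
```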
@c.ravikiran123: @c.ravikiran123 has joined the channel

#pinot-s3


@nachiket.kate: @nachiket.kate has joined the channel

#pinot-dev


@nachiket.kate: @nachiket.kate has joined the channel
@fx19880617: I’m seeing an issue with Helix when running integration tests on *`JDK 11`*. After the controller disconnects from Helix, it seems that the Helix timer task is not stopped as expected and it hangs the test forever. It might not be an issue for prod as this behavior doesn’t happen in the normal workflow. ```17:37:59.009 [Timer-22] ERROR org.apache.helix.controller.GenericHelixController - Time task failed. Rebalance task type: PeriodicalRebalance, cluster: PinotBrokerRestletResourceStatelessTest org.apache.helix.HelixException: HelixManager (ZkClient) is not connected. Call HelixManager#connect() at org.apache.helix.manager.zk.ZKHelixManager.checkConnected(ZKHelixManager.java:363) ~[helix-core-0.9.8.jar:0.9.8] at org.apache.helix.manager.zk.ZKHelixManager.getHelixDataAccessor(ZKHelixManager.java:593) ~[helix-core-0.9.8.jar:0.9.8] at org.apache.helix.controller.GenericHelixController$RebalanceTask.run(GenericHelixController.java:247) [helix-core-0.9.8.jar:0.9.8] at java.util.TimerThread.mainLoop(Timer.java:556) [?:?] at java.util.TimerThread.run(Timer.java:506) [?:?] 17:37:59.176 [Timer-83] ERROR org.apache.helix.controller.GenericHelixController - Time task failed. Rebalance task type: PeriodicalRebalance, cluster: PinotControllerModeStatelessTest org.apache.helix.HelixException: HelixManager (ZkClient) is not connected. Call HelixManager#connect() at org.apache.helix.manager.zk.ZKHelixManager.checkConnected(ZKHelixManager.java:363) ~[helix-core-0.9.8.jar:0.9.8] at org.apache.helix.manager.zk.ZKHelixManager.getHelixDataAccessor(ZKHelixManager.java:593) ~[helix-core-0.9.8.jar:0.9.8] at org.apache.helix.controller.GenericHelixController$RebalanceTask.run(GenericHelixController.java:247) [helix-core-0.9.8.jar:0.9.8] at java.util.TimerThread.mainLoop(Timer.java:556) [?:?] at java.util.TimerThread.run(Timer.java:506) [?:?] 17:37:59.752 [Timer-125] ERROR org.apache.helix.controller.GenericHelixController - Time task failed. Rebalance task type: PeriodicalRebalance, cluster: PinotHelixResourceManagerStatelessTest org.apache.helix.HelixException: HelixManager (ZkClient) is not connected. Call HelixManager#connect() at org.apache.helix.manager.zk.ZKHelixManager.checkConnected(ZKHelixManager.java:363) ~[helix-core-0.9.8.jar:0.9.8] at org.apache.helix.manager.zk.ZKHelixManager.getHelixDataAccessor(ZKHelixManager.java:593) ~[helix-core-0.9.8.jar:0.9.8] at org.apache.helix.controller.GenericHelixController$RebalanceTask.run(GenericHelixController.java:247) [helix-core-0.9.8.jar:0.9.8] at java.util.TimerThread.mainLoop(Timer.java:556) [?:?] at java.util.TimerThread.run(Timer.java:506) [?:?]```
  @fx19880617: As a workaround, I’m trying to put the controller stop into a timeout task and make all the zk and controller tests use dynamic ports.
  @jackie.jxt: I feel the issue is we don't disconnect properly?
  @jackie.jxt: Why is helix trying to reconnect if we explicitly ask it to disconnect?
  @fx19880617: it’s the timer task inside Helix
  @fx19880617: we called helix.disconnect in controller.stop()
  @fx19880617: the first thing in helix.disconnect() is to cancel the timer tasks, but somehow that doesn’t happen
  @jackie.jxt: I see. Can we conclude that the Helix version we are using is not `java 11` ready?
  @jackie.jxt:
  @jackie.jxt: The key note is `Helix is now on Java 8!` :man-facepalming:
  @jackie.jxt: We are way ahead
  @fx19880617: :rolling_on_the_floor_laughing:
  @fx19880617: we are on 0.9.8
  @jackie.jxt: What? They published the `0.9.0` release note on 2021-02-01
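A rough sketch of the timeout-task workaround mentioned earlier in this thread (the `controller` object and the 60-second bound are placeholders, not the actual test code): run the controller shutdown on its own thread and bound it, so a Helix timer task that never gets cancelled cannot hang the test forever.
```
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

ExecutorService executor = Executors.newSingleThreadExecutor();
Future<?> stopTask = executor.submit(() -> controller.stop());
try {
  // Bound the shutdown so a leaked Helix timer task cannot block teardown forever.
  stopTask.get(60, TimeUnit.SECONDS);
} catch (TimeoutException e) {
  stopTask.cancel(true); // give up on the hung disconnect and continue teardown
} catch (InterruptedException | ExecutionException e) {
  throw new RuntimeException("Controller stop failed", e);
} finally {
  executor.shutdownNow();
}
```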

#community


@malcomb00: @malcomb00 has joined the channel

#announcements


@malcomb00: @malcomb00 has joined the channel
@nachiket.kate: @nachiket.kate has joined the channel

#presto-pinot-streaming


@nachiket.kate: @nachiket.kate has joined the channel

#pinot-docs


@nachiket.kate: @nachiket.kate has joined the channel

#presto-pinot-connector


@nachiket.kate: @nachiket.kate has joined the channel

#config-tuner


@nachiket.kate: @nachiket.kate has joined the channel

#getting-started


@nachiket.kate: @nachiket.kate has joined the channel

#releases


@jmeyer: @jmeyer has joined the channel

#pinot-rack-awareness


@jaydesai.jd: Hey @g.kishore Can you have one more look at the doc: and sign off today? Thanks :blush:
@g.kishore: added few comments
  @jaydesai.jd: @g.kishore Responded to the comments

#minion-improvements


@laxman: @jackie.jxt @fx19880617: Can I pick this up? Please let me know if we have a git issue for this