Hi,

Thank you for your reply.

We're using Warp 10 version 2.0.2. The reason we run the distributed version is that we used to have a more complex infrastructure, with two separate nodes for reading and writing, and we basically kept that layout. Both Warp 10 instances have 32 GB of memory allocated.

Is it possible that the increase in CPU usage comes from 32 GB not being enough, so that the GC cannot operate properly? If so, how can I monitor it? It doesn't seem to be a problem on the HBase side.
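So far the only idea I have is to watch GC activity directly on the Warp 10 JVMs, for example with jstat (the PID below is just a placeholder for the ingress/store or egress/directory process):

    # sample heap occupancy and GC counts/times every 5 seconds
    jstat -gcutil <warp10-pid> 5000

or to enable GC logging in the JVM options of the Warp 10 startup script (example flags for a Java 8 JVM, the log path is only an example):

    -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/warp10/gc.log

Would that be the right way to check, or is there something exposed by Warp 10 itself that we should look at instead?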
Regards,

On Thursday, February 27, 2020 at 15:47:13 UTC+1, Mathias Herberts wrote:
>
> Hi,
>
> what version of Warp 10 are you running?
>
> Also why have you chosen the distributed version rather than the
> standalone one? It seems your distributed infrastructure is rather limited,
> a standalone setup would reduce the overall complexity and probably make
> better use of your resources.
>
> On Wednesday, February 26, 2020 at 11:16:28 AM UTC+1, MaFF wrote:
>>
>> Hi,
>>
>> We've been experiencing some issues with Warp10 and/or one of our HBase
>> Region Servers.
>> We have a 2 node cluster:
>> - Warp10 2.0.2
>> - HDP 2.6.5
>> - HBase 1.1.2
>> - Kafka 0.8
>> - Zookeeper 3.4.6
>>
>> We have installed on node 1:
>> - HBase master
>> - HBase RS (with issues)
>> - Warp10: ingress and store
>>
>> node 2:
>> - HBase RS
>> - Warp10: egress and directory
>>
>> There is enough RAM for all services to run, there is no GC issue and yet
>> HBase RS and Warp10 take 800% CPU.
>> continuum's regions are mostly on node 1.
>> We ran a major compaction on continuum and it didn't change anything.
>> HDFS and Zookeeper run just fine.
>>
>> Any input would be greatly appreciated.
>>
>> Regards,
>>
>> MF
