Vikas Gorur wrote:
Gordan Bobic wrote:
Hi,

I'm noticing something that could be considered weird. I have a 3-node rootfs-on-gluster cluster, and I'm doing a big yum update on one of the nodes. On that node and one of the other nodes, glusterfsd is using about 10-15% CPU. But on the 3rd node, glusterfs is using < 0.3% CPU.

I'm guessing this has something to do with the way lock backup and fail-over are implemented (IIRC there is a master and a backup lock/state server, rather than this information being replicated to all the servers). Can anyone hazard a guess and confirm or deny the possible cause of this discrepancy?
Hm, this is indeed a little strange. The first subvolume is used as the lock server, so that could explain why one of the servers has higher CPU usage. However, it doesn't explain why *two* of the servers have high CPU usage.
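
For reference, the lock server is simply whichever subvolume is listed first in the replicate (AFR) translator. A minimal sketch of a client-side volume spec for a setup like yours (host names and volume names here are hypothetical):

    volume node1
      type protocol/client
      option transport-type tcp
      option remote-host node1.example.com
      option remote-subvolume brick
    end-volume

    volume node2
      type protocol/client
      option transport-type tcp
      option remote-host node2.example.com
      option remote-subvolume brick
    end-volume

    volume node3
      type protocol/client
      option transport-type tcp
      option remote-host node3.example.com
      option remote-subvolume brick
    end-volume

    volume replicate
      type cluster/replicate
      # node1 is first in this list, so it acts as the lock server
      # for this replica set, as long as it is up
      subvolumes node1 node2 node3
    end-volume

So any lock-related CPU load should land on node1 while it is reachable.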

Are you sure it is not the case that the *client* on one node (I'm assuming all three of your nodes act as both servers and clients) and the server on another node are the processes using CPU?
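
If the client and server do run as separate processes, a rough check like the following on each node would show which of the two is actually burning CPU (process names as they appear in ps; they may vary by version):

    ps -eo pid,pcpu,comm,args | grep '[g]luster'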

That is plausible, since I am running the client and server as a single process. Is there a way to tell, on a running glfs cluster, which node is the current lock master? The process creating the load was running on the node whose subvolume is listed first, so I would have expected that node to be the primary lock server.

Gordan

