[Gluster-users] Gluster 3.8.15 - Major CPU Load
Hi folks,

Can someone give me an idea why the gluster process and glusterd would each be taking almost 25% CPU when under a read-only load (no writing)? I have set up a simple three-node system, and when I put it under load from a number of clients, Gluster really hammers the CPU. Anything I should specifically look at? Using v3.8.15.

Basically all three machines replicate each other; the idea is that each machine can be a member of the Apache serving group for scalability. The load test requested basically the same 40 or so files over and over again.

If anyone has pointers on what to look at, I can ramp up the load on the machine on demand and test things; but having it take ~45% of the total CPU under what I consider a fairly light load worries me a lot about Gluster's scalability. Based on the bug reports and other newsgroup messages it seems I might be hitting some corner case, as I believe this is an unusual result; so any pointers are welcome, and I can provide any data needed.

Nathanael A.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
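[For anyone debugging a similar read-heavy CPU spike, a first diagnostic pass could use Gluster's built-in profiling; a sketch against a live cluster, where the volume name `gv0` is a placeholder:]

```shell
# Enable per-brick latency/throughput counters on the volume
gluster volume profile gv0 start

# ...run the read load, then dump cumulative per-brick stats
# (high LOOKUP/READDIR counts often point at metadata churn rather than reads)
gluster volume profile gv0 info

# Show the most frequently read files; a small hot set like the
# "same 40 or so files" case may benefit from io-cache/md-cache tuning
gluster volume top gv0 read

# Identify which threads inside the brick process are burning CPU
top -H -p "$(pidof glusterfsd | awk '{print $1}')"
```

[These commands only gather data; they do not change volume behavior, so they are safe to run on a loaded system.]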
[Gluster-users] Gluster Monthly Newsletter, November 2017
Gluster Monthly Newsletter, November 2017

Come find us at KubeCon/CloudNativeCon in Austin, December 6-8! Special sessions around Storage include:

Thursday, December 7, 11:55am - 12:30pm
Kubernetes Feature Prototyping with External Controllers and Custom Resource Definitions - Tomas Smetana, Red Hat
https://kccncna17.sched.com/event/CU7O/kubernetes-feature-prototyping-with-external-controllers-and-custom-resource-definitions-i-tomas-smetana-red-hat

Friday, December 8, 11:10am - 11:45am
You Have Stateful Apps - What if Kubernetes Would Also Run Your Storage? - Annette Clewett & Sudhir Prasad, Red Hat
https://kccncna17.sched.com/event/CU8g/you-have-stateful-apps-what-if-kubernetes-would-also-run-your-storage-annette-clewett-sudhir-prasad-red-hat

Friday, December 8, 2:45pm - 3:20pm
Providing Containerized Cinder Services to Baremetal Kubernetes Clusters - John Griffith, NetApp & Huamin Chen, Red Hat
https://kccncna17.sched.com/event/CU7s/providing-containerized-cinder-services-to-baremetal-kubernetes-clusters-i-john-griffith-netapp-huamin-chen-red-hat

Friday, December 8, 4:25pm - 5:00pm
Kubernetes Storage Evolution: Enabling High Performance Distributed Datastores - Erin A Boyd, Red Hat & Michelle Au, Google
https://kccncna17.sched.com/event/CU7R/kubernetes-storage-evolution-enabling-high-performance-distributed-datastores-a-erin-a-boyd-red-hat-michelle-au-google

Gluster Developer Conversations! We held our first Gluster Developer Conversations on November 28th! Our next one will be on December 12th!
Noteworthy Threads:

[Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+
http://lists.gluster.org/pipermail/gluster-users/2017-November/032806.html

[Gluster-users] Gluster Summit BOF - Rebalance
http://lists.gluster.org/pipermail/gluster-users/2017-November/032830.html

[Gluster-users] Gluster Summit BOF - Encryption
http://lists.gluster.org/pipermail/gluster-users/2017-November/032834.html

[Gluster-users] Gluster Summit BOF - Testing
http://lists.gluster.org/pipermail/gluster-users/2017-November/032844.html

[Gluster-users] Adding a slack for communication?
http://lists.gluster.org/pipermail/gluster-users/2017-November/032854.html

[Gluster-devel] RFC: FUSE kernel features to be adopted by GlusterFS
http://lists.gluster.org/pipermail/gluster-devel/2017-November/053948.html

[Gluster-devel] Release 4.0: Schedule and scope clarity (responses needed)
http://lists.gluster.org/pipermail/gluster-devel/2017-November/053995.html

[Gluster-devel] Abandoning the oldest reviews on Gerrit
http://lists.gluster.org/pipermail/gluster-devel/2017-November/054000.html

Top Contributing Companies: Red Hat, Gluster, Inc., Facebook

Upcoming CFPs:

Embedded Linux Conference - January 7, 2018
http://events.linuxfoundation.org/events/embedded-linux-conference/program/callforproposals

KubeCon Europe - CFP Close: Friday, January 12, 2018 - 11:59 PM PST
http://events.linuxfoundation.org/events/kubecon-and-cloudnativecon-europe/program/cfpguide

LinuxCon China - March 4, 2018
https://www.lfasiallc.com/linuxcon-containercon-cloudopen-china/cfp

--
Amye Scavarda | a...@redhat.com | Gluster Community Lead
Re: [Gluster-users] gluster and nfs-ganesha
Hi Jiffin,

I looked at the document, and there are two things:

1. In Gluster 3.8 it seems you don't need to do that at all, it creates this automatically, so why not in 3.10?
2. The step-by-step guide, in the last item, doesn't say where exactly I need to create the nfs-ganesha directory. The copy/paste seems irrelevant, as enabling nfs-ganesha automatically creates ganesha.conf and a subdirectory (called "exports") with the volume share configuration file.

Also, could someone tell me what's up with no Ganesha on 3.12?

Thanks

On Mon, Dec 4, 2017 at 11:47 AM, Jiffin Tony Thottan wrote:
> On Saturday 02 December 2017 07:00 PM, Hetz Ben Hamo wrote:
>
> Hi,
>
> I'm using CentOS 7.4 with Gluster 3.10.7 and Ganesha NFS 2.4.5.
>
> I'm trying to create a very simple 2-node cluster to be used with
> NFS-Ganesha. I've created the bricks and the volume. Here's the output:
>
> # gluster volume info
>
> Volume Name: cluster-demo
> Type: Replicate
> Volume ID: 9c835a8e-c0ec-494c-a73b-cca9d77871c5
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: glnode1:/data/brick1/gv0
> Brick2: glnode2:/data/brick1/gv0
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> cluster.enable-shared-storage: enable
>
> Volume Name: gluster_shared_storage
> Type: Replicate
> Volume ID: caf36f36-0364-4ab9-a158-f0d1205898c4
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: glnode2:/var/lib/glusterd/ss_brick
> Brick2: 192.168.0.95:/var/lib/glusterd/ss_brick
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> cluster.enable-shared-storage: enable
>
> However, when I try to run gluster nfs-ganesha enable, it creates a
> wrong symbolic link and fails:
>
> # gluster nfs-ganesha enable
> Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the
> trusted pool. Do you still want to continue?
> (y/n) y
> This will take a few minutes to complete. Please wait ..
> nfs-ganesha: failed: creation of symlink ganesha.conf in /etc/ganesha failed
>
> wrong link: ganesha.conf -> /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
>
> # ls -l /var/run/gluster/shared_storage/
> total 0
>
> I've seen some reports (and fixes) in Red Hat's Bugzilla and looked at the
> Red Hat solutions (https://access.redhat.com/solutions/3099581) but this
> doesn't help.
>
> Suggestions?
>
> Hi,
>
> It seems you have not created the directory nfs-ganesha under shared storage,
> plus copied/created ganesha.conf and ganesha-ha.conf inside it.
> Please follow this document:
> http://docs.gluster.org/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/
>
> Regards,
> Jiffin
>
> I tried to upgrade to Gluster 3.12 and it seems Ganesha support was kicked
> out? What's replacing it?
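[For readers hitting the same symlink failure: Jiffin's advice boils down to pre-creating the nfs-ganesha directory on the shared-storage mount and seeding the two config files before enabling. A minimal sketch for this two-node setup; the ganesha-ha.conf values below are illustrative placeholders, not the poster's actual settings:]

```shell
# The shared-storage volume is mounted cluster-wide at this path;
# the enable step expects an nfs-ganesha directory inside it
mkdir -p /var/run/gluster/shared_storage/nfs-ganesha

# Seed the main Ganesha config; gluster nfs-ganesha enable symlinks
# /etc/ganesha/ganesha.conf to the copy on shared storage
cp /etc/ganesha/ganesha.conf /var/run/gluster/shared_storage/nfs-ganesha/

# HA settings; cluster name and VIPs here are placeholders
cat > /var/run/gluster/shared_storage/nfs-ganesha/ganesha-ha.conf <<'EOF'
HA_NAME="ganesha-ha-demo"
HA_CLUSTER_NODES="glnode1,glnode2"
VIP_glnode1="192.168.0.201"
VIP_glnode2="192.168.0.202"
EOF

# Then retry
gluster nfs-ganesha enable
```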
[Gluster-users] Single node distributed volume
Hi,

Is it possible to create a single-node distributed volume? Later, as the storage capacity starts to fill up, add another node, and so on? There are many articles on creating a two-node distributed volume, but I could not find any pointed answer or HOWTOs for creating a single-node distributed volume.

The idea here is to create a single-node distributed volume as the primary server and another single-node distributed volume as the secondary server, and geo-replicate between them.

Thanks,
Sameer
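[A single-brick distribute volume grown later with add-brick is the usual way to do this; a sketch of the sequence described above, where all hostnames, volume names, and brick paths are placeholders:]

```shell
# A distribute volume starts as a single brick on one node
gluster volume create primary-vol primary1:/data/brick1/pv
gluster volume start primary-vol

# Later, when capacity runs low, peer a new node and grow the same volume
gluster peer probe newnode
gluster volume add-brick primary-vol newnode:/data/brick1/pv
gluster volume rebalance primary-vol start

# Geo-replicate to a similarly created single-brick volume on the secondary
gluster system:: execute gsec_create
gluster volume geo-replication primary-vol secondary1::secondary-vol create push-pem
gluster volume geo-replication primary-vol secondary1::secondary-vol start
```

[The rebalance step redistributes existing files across the new brick; without it, only newly created files land on the added capacity.]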
Re: [Gluster-users] What’s the purpose of /var/lib/glusterd/nfs/secret.pem.pub ?
On Friday 01 December 2017 03:04 AM, Adam Ru wrote:

Some time ago I read and followed this guide for installing and configuring Gluster:
http://blog.gluster.org/linux-scale-out-nfsv4-using-nfs-ganesha-and-glusterfs-one-step-at-a-time/

with steps to create a certificate:

/var/lib/glusterd/nfs/secret.pem
/var/lib/glusterd/nfs/secret.pem.pub

and distribute the public and private cert files among the nodes.

I've just tried the new Gluster 3.12 and I forgot to create the certificate, and I created a new cluster and it worked:

sudo gluster peer probe SecondNode
peer probe: success.
sudo gluster peer probe ThirdNode
peer probe: success.

After I mounted the Gluster volumes everything seems to work and Gluster replicates files. So why do I need the certificate?

Hi,

If you are not using nfs-ganesha then those files are not required. Those certificates are used for internal communication when setting up or modifying the nfs-ganesha HA configuration; they have nothing to do with Gluster itself.

Regards,
Jiffin

Thank you.

Kind regards,
Adam
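[For context, the secret.pem pair is a passphrase-less SSH key that the ganesha-ha tooling uses to run commands on the other HA nodes. A sketch of its creation as described in guides like the one linked above; the node names are placeholders:]

```shell
# Generate a passphrase-less SSH key pair for the ganesha-ha scripts
ssh-keygen -f /var/lib/glusterd/nfs/secret.pem -t rsa -N ''

# Distribute the pair to every node in the HA cluster and authorize it,
# so the HA scripts can reach each node non-interactively as root
for node in node2 node3; do
  scp /var/lib/glusterd/nfs/secret.pem* "${node}:/var/lib/glusterd/nfs/"
  ssh-copy-id -i /var/lib/glusterd/nfs/secret.pem.pub "root@${node}"
done
```

[This matches Jiffin's point: peer probing and volume replication never touch this key; only the nfs-ganesha HA setup does.]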
Re: [Gluster-users] gluster and nfs-ganesha
On Saturday 02 December 2017 07:00 PM, Hetz Ben Hamo wrote:

Hi,

I'm using CentOS 7.4 with Gluster 3.10.7 and Ganesha NFS 2.4.5.

I'm trying to create a very simple 2-node cluster to be used with NFS-Ganesha. I've created the bricks and the volume. Here's the output:

# gluster volume info

Volume Name: cluster-demo
Type: Replicate
Volume ID: 9c835a8e-c0ec-494c-a73b-cca9d77871c5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glnode1:/data/brick1/gv0
Brick2: glnode2:/data/brick1/gv0
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
cluster.enable-shared-storage: enable

Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: caf36f36-0364-4ab9-a158-f0d1205898c4
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glnode2:/var/lib/glusterd/ss_brick
Brick2: 192.168.0.95:/var/lib/glusterd/ss_brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
cluster.enable-shared-storage: enable

However, when I try to run gluster nfs-ganesha enable, it creates a wrong symbolic link and fails:

# gluster nfs-ganesha enable
Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the trusted pool. Do you still want to continue? (y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha: failed: creation of symlink ganesha.conf in /etc/ganesha failed

wrong link: ganesha.conf -> /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf

# ls -l /var/run/gluster/shared_storage/
total 0

I've seen some reports (and fixes) in Red Hat's Bugzilla and looked at the Red Hat solutions (https://access.redhat.com/solutions/3099581) but this doesn't help.

Suggestions?
Hi,

It seems you have not created the directory nfs-ganesha under shared storage, plus copied/created ganesha.conf and ganesha-ha.conf inside it.
Please follow this document:
http://docs.gluster.org/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Integration/

Regards,
Jiffin

I tried to upgrade to Gluster 3.12 and it seems Ganesha support was kicked out? What's replacing it?