Re: [Gluster-users] Can a gluster server be an NFS client ?
- Original Message -
From: Prasun Gera prasun.g...@gmail.com
To: Jason Brooks jbro...@redhat.com
Cc: gluster-users@gluster.org
Sent: Monday, May 18, 2015 3:26:02 PM
Subject: Re: [Gluster-users] Can a gluster server be an NFS client ?

> Thanks. Can you tell me what this achieves, and what the side effects, if
> any, are? By the way, the NFS mounts on the gluster server are unrelated
> to what the gluster server itself is exporting; they are regular NFS
> mounts from a different NFS server.

Your NFS client and the gluster NFS server are both vying for portmap;
turning off locking for the client resolves this. The side effects will
depend on your environment -- in my converged oVirt + Gluster setup it
isn't causing an issue, but I'm looking forward to dropping this
workaround when oVirt gets support for hosting its engine VM from native
gluster. For you, this may or may not be a workable workaround.

Jason

> On Mon, May 18, 2015 at 5:35 PM, Jason Brooks jbro...@redhat.com wrote:
> > - Original Message -
> > From: Prasun Gera prasun.g...@gmail.com
> > To: gluster-users@gluster.org
> > Sent: Monday, May 18, 2015 1:47:32 PM
> > Subject: [Gluster-users] Can a gluster server be an NFS client ?
> >
> > > I am seeing some erratic behavior w.r.t. the NFS service on the
> > > gluster servers (RHS 3.0). The NFS service fails to start
> > > occasionally and randomly with:
> > >
> > >     Could not register with portmap
> > >     100021 4 38468
> > >     Program NLM4 registration failed
> >
> > I've encountered this before -- I had to disable file locking by
> > adding Lock=False to /etc/nfsmount.conf.
> >
> > > This appears to be related to
> > > http://www.gluster.org/pipermail/gluster-users/2014-October/019215.html ,
> > > although I'm not sure what the resolution is. The gluster servers
> > > use autofs to mount user home directories and other sundry
> > > directories. I could verify that stopping autofs and then starting
> > > the gluster volume seems to solve the problem. Starting autofs
> > > after gluster seems to work fine too. What's the right way to
> > > handle this?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
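The workaround Jason describes amounts to a one-line change in the NFS mount defaults file. A minimal sketch of the stanza, assuming the global-options section name from nfsmount.conf(5) (verify against your distribution's copy of the file):

```ini
# /etc/nfsmount.conf -- sketch of the workaround from the thread:
# disable NLM locking for the server's NFS *client* mounts so the
# kernel lock manager does not compete with Gluster's built-in NFS
# server for the portmap registration (program 100021, i.e. NLM4).
[ NFSMount_Global_Options ]
Lock=False
```

As Jason notes, whether losing client-side locking is acceptable depends on the workload; it was fine for his converged oVirt setup but may not be for yours.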
Re: [Gluster-users] Virt-store use case - HA failure issue - suggestions needed
- Original Message -
From: Vince Loschiavo vloschi...@gmail.com
To: gluster-users@gluster.org
Sent: Thursday, July 31, 2014 9:22:16 AM
Subject: [Gluster-users] Virt-store use case - HA failure issue - suggestions needed

> I'm currently testing Gluster 3.5.1 in a two-server QEMU/KVM environment.
>
> CentOS 6.5: two servers (KVM07, KVM08), two-brick (one brick per server)
> replicated volume. I've tuned the volume per the documentation here:
> http://gluster.org/documentation/use_cases/Virt-store-usecase/
>
> I have the gluster volume fuse-mounted on KVM07 and KVM08 and am using it
> to store raw disk images. KVM is using the fuse-mounted volume as a
> "Filesystem Directory" storage pool. With dynamic_ownership = 0 set in
> /etc/libvirt/qemu.conf and the files chown-ed to qemu:qemu, live
> migration works great.
>
> Problem: If I need to take down one of these servers for maintenance, I
> live-migrate the VMs to the other server, run "service gluster stop",
> then kill all the remaining gluster and brick processes.
>
> The guide says that quorum-type=auto sets a rule such that at least half
> of the bricks in the replica group should be up and running; if not, the
> replica group becomes read-only.

I think the rule is actually 51%, so bringing down one of the two servers
makes your volume read-only. If you want two servers, you need to unset
this rule. Better to add a third server and a third replica, though.

Regards, Jason

> At this point, the VMs die. The fuse mount recovers and remains attached
> to the volume via the other server, but the virt disk images are not
> fully synced. This causes the VMs to go into a read-only file system
> state, then kernel panic. Reboots/restarts of the VMs just cause kernel
> panics. This effectively brings down the two-node cluster. Bringing the
> gluster node/bricks/etc. back up prompts a self-heal; once the self-heal
> is completed, the VMs can boot normally.
>
> Question: is there a better way to accomplish HA with live/running virt
> images? The goal is to be able to bring down any one server in the pair
> and perform maintenance without interrupting the VMs. I assume my
> shutdown process is flawed but haven't been able to find a better
> process. Any suggestions are welcome.
>
> --
> -Vince Loschiavo
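Jason's point about the quorum rule can be sketched numerically. Under the simplified reading that quorum-type=auto requires strictly more than half the bricks in the replica set to be up (the actual implementation also treats exactly half as quorate when the first brick of the set is among those up), a small shell check shows why a two-brick replica goes read-only the moment one server leaves:

```shell
# Simplified sketch of the auto quorum rule from the thread: writes
# are allowed only when more than half of the bricks in the replica
# set are up. (The real rule also counts exactly half as quorate when
# the first brick in the set is up.)
has_quorum() {
  if [ $(( $1 * 2 )) -gt "$2" ]; then echo quorate; else echo read-only; fi
}

has_quorum 1 2   # one of two bricks up: replica goes read-only
has_quorum 2 3   # two of three bricks up: writes still allowed
```

Hence Jason's suggestion: with a third server and a third replica, one node can go down while 2 of 3 bricks keep the volume writable; with only two servers you would have to unset the rule (e.g. set cluster.quorum-type to none), at the cost of split-brain risk for the VM images.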
[Gluster-users] Call for User Stories Feature Feedback
Greetings, Gluster Users:

We're starting up a new Case Studies section on the Gluster project web site, and we need your help populating it with tales of your triumphs and travails. Share with us a bit about how you're using Gluster today, and about what's most important to you as Gluster development moves forward, and you'll help bring Gluster to the attention of new users while bringing your needs and use cases to the attention of the project developers.

Tell us about your deployment:

* How many Gluster nodes are you running, and what's the hardware and OS you're using to host them?
* How much data are you hosting on Gluster?
* What Gluster version are you using, and what sort of Gluster volumes (replicated, distributed, etc.) are you using?
* What protocol(s) do you use to access your Gluster storage?
* Looking ahead to the Gluster roadmap, what features/enhancements would you like to see?
* Which features on the Gluster 3.5 Proposed Feature List are you most interested in? (http://www.gluster.org/community/documentation/index.php/Planning35)
* Are there any pet bugs or backport requests you're particularly keen to see resolved? (http://www.gluster.org/community/documentation/index.php/Backport_Wishlist)
* Anything else you'd like to add, or would like to hear about from other Gluster community members?

Please feel free to reply on-list (so we can all learn from your deployment) or off-list (if you'd like your feedback to be more discreet). If we intend to turn your feedback into a case study for the site, we'll follow up with you for any further details or clarification.

Thank you!

Jason

---
Jason Brooks
Red Hat Open Source and Standards
@jasonbrooks | @redhatopen
http://community.redhat.com
Re: [Gluster-users] How to enable WORM feature?
It should be:

    gluster volume set test features.worm enable

- Original Message -
From: Nux! n...@li.nux.ro
To: gluster-users@gluster.org
Sent: Thursday, May 9, 2013 9:07:22 AM
Subject: [Gluster-users] How to enable WORM feature?

> Hi, I'm running 3.4-beta1. I tried to enable WORM for a volume but got
> this:
>
>     gluster volume set test feature.worm enable
>     volume set: failed: option : feature.worm does not exist
>
> As instructed here:
> http://www.gluster.org/community/documentation/index.php/Features/worm
>
> So... how do I enable WORM?
>
> --
> Sent from the Delta quadrant using Borg technology!
> Nux! www.nux.ro
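In short, the wiki page's option key is missing the plural "s". A quick sketch of enabling and then checking the option (assumes a running volume named "test", as in the thread; needs a live gluster deployment):

```shell
# The volume option key is "features.worm" (plural); the singular
# "feature.worm" from the wiki page fails with
# "option : feature.worm does not exist".
gluster volume set test features.worm enable

# Verify it took effect -- the option should appear under
# "Options Reconfigured" in the volume info output:
gluster volume info test
```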
Re: [Gluster-users] Coming Up Next Week: GlusterFS 3.4 Beta GlusterFest Test Day
Hi all --

With glusterfs-3.4beta1 released [1], we're all set for some GlusterFest testing action. You'll find pointers to Gluster Test Framework tests, basic functionality tests, and test guidance for some of the new features in the release linked from the GlusterFest page [2] on the project site. If you have suggestions for improving the test materials, please feel free to make edits/additions/improvements.

Happy testing, and I'll see you in #gluster --

Jason (jbrooks)

[1] http://lists.nongnu.org/archive/html/gluster-devel/2013-05/msg00050.html
[2] http://www.gluster.org/community/documentation/index.php/GlusterFest

- Original Message -
From: Jason Brooks jbro...@redhat.com
To: gluster-users@gluster.org, gluster-de...@nongnu.org
Sent: Thursday, May 2, 2013 4:50:45 PM
Subject: Coming Up Next Week: GlusterFS 3.4 Beta GlusterFest Test Day

> Greetings,
>
> The first beta of glusterfs 3.4 is scheduled for release on Tuesday,
> May 7th. In conjunction with the beta, we're running a GlusterFest: a
> 24-hour test day, starting at 8pm PDT May 7 / 03:00 UTC May 8:
> http://www.gluster.org/community/documentation/index.php/GlusterFest
>
> Please join us for all the GlusterFest excitement, including:
>
> - Testing the software. Install the new beta (when it's released) and
>   put it through its paces. You'll find some basic testing procedures
>   linked from the GlusterFest page. Also, check out this video
>   introducing our new testing framework:
>   http://www.gluster.org/2013/04/gluster-testing-framework/
>
> - Finding bugs. See the current list of bugs targeted for this release:
>   https://bugzilla.redhat.com/showdependencytree.cgi?id=895528
>
> - Fixing bugs. For information about contributing to development, see:
>   http://www.gluster.org/community/documentation/index.php/Developers
>
> If you need assistance, join us in #gluster on Freenode, or send a
> message to the gluster-users mailing list for general usage questions,
> and gluster-devel for anything related to building, patching, and
> bug-fixing.
>
> Please spread the word to other potentially-interested Gluster testers!
>
> Happy testing and bug-hunting,
>
> -- Jason
>
> Jason Brooks
> Open Source and Standards, Red Hat
> http://community.redhat.com
> @jasonbrooks
Re: [Gluster-users] Fedora 17 GlusterFS 3.3 and Firefox
On 09/10/2012 01:06 AM, Yannik Lieblinger wrote:
> Hi Joe,
>
> I know you have much to do, but have you already opened a bug report?

Looks like this is it: https://bugzilla.redhat.com/show_bug.cgi?id=852224

--
@jasonbrooks
Re: [Gluster-users] Troubleshooting Unified Object and File Storage in 3.3
On Wed 06 Jun 2012 10:25:38 PM PDT, Vijay Bellur wrote:
> On 06/07/2012 03:22 AM, Jason Brooks wrote:
> > I've been testing on CentOS 6.2. The only command from the Admin guide
> > I've run successfully has been:
> >
> >     curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass:testing' -k http://127.0.0.1:8080/auth/v1.0
> >
> > I started out with a centos machine running gluster-swift, which I was
> > connecting to a four-node gluster cluster. It wasn't clear to me from
> > the admin guide where I was supposed to mount my gluster volume,
>
> You will need to mount the gluster volume at /mnt/gluster-object/account.
> For the example in the admin guide, /mnt/gluster-object/AUTH_test needs
> to be the mountpoint for your gluster volume.

Thanks -- that helps a lot.

Another Q on the admin guide: under "12.4.4. Configuring Authentication
System," the guide says "Proxy server must be configured to authenticate
using tempauth." Is this the only supported auth method? I'm experimenting
with keystone.

Thanks, Jason
Re: [Gluster-users] Troubleshooting Unified Object and File Storage in 3.3
On 06/06/2012 10:25 PM, Vijay Bellur wrote:
> On 06/07/2012 03:22 AM, Jason Brooks wrote:
> > I've been testing on CentOS 6.2. The only command from the Admin guide
> > I've run successfully has been:
> >
> >     curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass:testing' -k http://127.0.0.1:8080/auth/v1.0
> >
> > I started out with a centos machine running gluster-swift, which I was
> > connecting to a four-node gluster cluster. It wasn't clear to me from
> > the admin guide where I was supposed to mount my gluster volume,
>
> You will need to mount the gluster volume at /mnt/gluster-object/account.
> For the example in the admin guide, /mnt/gluster-object/AUTH_test needs
> to be the mountpoint for your gluster volume.

There's something I'm confused about -- if I mount my gluster volume at
AUTH_test, I am able to work with it, but is the idea that users should
manually create a gluster volume and mountpoint for every account?

I've been working through this Fedora 17 OpenStack howto:
https://fedoraproject.org/wiki/Getting_started_with_OpenStack_on_Fedora_17#Configure_swift_with_keystone.
I thought I'd bring gluster into the mix, but it's not clear to me how the
setup directions I see here and elsewhere for swift ought to interact with
the gluster-swift packages. The gluster-swift-plugin package places a set
of configuration files into /etc/swift -- the 1.conf files and the ring
configurations. The admin guide doesn't mention any swift-ring-builder
operations -- are these not required with UFO?

Thanks, Jason
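Vijay's mount-point answer boils down to a couple of commands. A sketch, assuming a volume named test-volume served from a host named server1 (both names hypothetical) backing the admin guide's "test" account:

```shell
# One gluster volume per Swift account, mounted under
# /mnt/gluster-object/ and named for the account -- the tempauth
# "test" account maps to AUTH_test:
mkdir -p /mnt/gluster-object/AUTH_test
mount -t glusterfs server1:/test-volume /mnt/gluster-object/AUTH_test
```

This also suggests the answer to the per-account question above: each account the proxy serves needs a gluster volume mounted at the matching /mnt/gluster-object/ path.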
[Gluster-users] Troubleshooting Unified Object and File Storage in 3.3
Hi everyone,

I've been working on testing UFO in Gluster 3.3, but I've had a hard time
getting it working. I've been following along with chapter 12 of the admin
guide at
http://www.gluster.org/wp-content/uploads/2012/05/Gluster_File_System-3.3.0-Administration_Guide-en-US.pdf.
I've been testing on CentOS 6.2.

The only command from the admin guide I've run successfully has been:

    curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass:testing' -k http://127.0.0.1:8080/auth/v1.0

I started out with a centos machine running gluster-swift, which I was
connecting to a four-node gluster cluster. It wasn't clear to me from the
admin guide where I was supposed to mount my gluster volume, so I've tried
various permutations, including mounting it at the location specified in
the /etc/swift/fs.conf file, with a symlink to the /srv/1/node location
cited in the 1.conf files provided by the gluster-swift-plugin package. I
also tried creating a gluster volume on the same server as gluster-swift,
but that didn't work for me, either.

Is there a better source of docs than the admin guide? I'll be happy to
help write this up once I get everything working.

One more thing: the 3.3 admin guide talks about downloading the 3.2
version of gluster-swift and gluster-swift-plugin. I'd assumed that was a
typo, and I've been working with the packages from 3.3, but maybe I was
wrong there -- are the 3.2 packages the right ones?

Thanks, Jason
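For reference, the one command that does work above is the tempauth handshake; the token it returns is what every subsequent request must carry. A sketch of the follow-on step (the /v1/AUTH_test account URL and the awk-based header extraction are assumptions on my part, not from the guide; it needs the proxy from the thread running on 127.0.0.1:8080):

```shell
# Authenticate against tempauth and capture the returned token, then
# list the containers in the account:
TOKEN=$(curl -s -i \
          -H 'X-Storage-User: test:tester' \
          -H 'X-Storage-Pass: testing' \
          -k http://127.0.0.1:8080/auth/v1.0 \
        | awk 'tolower($1) == "x-auth-token:" { print $2 }' | tr -d '\r')

curl -k -H "X-Auth-Token: $TOKEN" http://127.0.0.1:8080/v1/AUTH_test
```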