Re: [Gluster-devel] Two consistent regression failures in release-3.6 HEAD
Hi, I had a look at the test case and the logs. A mount command is failing in the testcase, where we try to mount a snapshot to /mnt/glusterfs/2:

[2015-02-17 19:28:24.291801] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-glusterfs: Started running glusterfs version 3.6.3beta1 (args: glusterfs -s slave30.cloud.gluster.org --volfile-id=/snaps/patchy_single_gluster_volume_is_accessible_by_multiple_clients_offline_snapshot_is_a_long_name/patchy /mnt/glusterfs/2)
[2015-02-17 19:28:24.292848] E [fuse-bridge.c:5334:init] 0-fuse: mountpoint /mnt/glusterfs/2 does not exist
[2015-02-17 19:28:24.292871] E [xlator.c:425:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again

The mount fails with the error that it is unable to find /mnt/glusterfs/2, whereas this directory is created as part of the basic include.rc. The only reason I can think of is that the same machine might be used for other runs, or some other activity independent of the test case is removing /mnt/glusterfs/2 while this test is running. Is there any way we can confirm that?

Regards, Avra

On 02/18/2015 05:19 AM, Justin Clift wrote:
We seem to have two non-spurious regression failures in release-3.6 HEAD:

./tests/bugs/bug-1045333.t (Wstat: 0 Tests: 16 Failed: 1) Failed test: 15
./tests/features/ssl-authz.t (Wstat: 0 Tests: 18 Failed: 1) Failed test: 18

Found while running a burn-in regression run on some new Rackspace nodes, prior to swapping out some of our busted ones. They happened on both runs:

slave30 (will become slave20) http://build.gluster.org/job/regression-test-burn-in/3/console
slave31 (may or may not be kept) http://build.gluster.org/job/regression-test-burn-in/4/console

Avra, git blame says you're the primary author for bug-1045333.t; do you want to take a look? Jeff, ditto for ssl-authz.t?

+ Justin

-- GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several petabytes, and handling thousands of clients.
My personal twitter: twitter.com/realjustinclift

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
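If the suspicion above is right (something outside the test removing /mnt/glusterfs/2), the testcase could guard against it by re-checking the mountpoint just before mounting. A minimal sketch, assuming a hypothetical helper that is not part of include.rc:

```shell
# Hypothetical guard, not from include.rc: make sure the mountpoint still
# exists immediately before mounting, and recreate it (with a warning) if
# something external removed it in the meantime.
ensure_mountpoint() {
    dir="$1"
    if [ ! -d "$dir" ]; then
        echo "mountpoint $dir vanished, recreating" >&2
        mkdir -p "$dir" || return 1
    fi
    echo "mountpoint $dir ready"
}

ensure_mountpoint /tmp/demo-mnt-$$
rmdir /tmp/demo-mnt-$$   # clean up the demo directory
```

This would not fix the underlying interference, but it would turn a hard FUSE init failure into a logged, recoverable event and confirm whether the directory really is disappearing mid-run.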
Re: [Gluster-devel] missing files
I have not made any progress on the internal systems, following Pranith's investigation of the inode release causing this slowness on an aged volume, due to other priorities. I need to get back on track with this one; let me discuss it with Pranith and see how best to move ahead.

Shyam

On 02/17/2015 04:50 PM, David F. Robinson wrote:
Any updates on this issue? Thanks in advance... David

-- Original Message --
From: Shyam srang...@redhat.com
To: David F. Robinson david.robin...@corvidtec.com; Justin Clift jus...@gluster.org
Cc: Gluster Devel gluster-devel@gluster.org
Sent: 2/11/2015 10:02:09 PM
Subject: Re: [Gluster-devel] missing files

On 02/11/2015 08:28 AM, David F. Robinson wrote:
My base filesystem has 40 TB and the tar takes 19 minutes. I copied over 10 TB and it took the tar extraction from 1 minute to 7 minutes. My suspicion is that it is related to the number of files and not necessarily file size. Shyam is looking into reproducing this behavior on a Red Hat system.

I am able to reproduce the issue on a similar setup internally (at least on the surface it seems to be similar to what David is facing). I will continue the investigation for the root cause.

Shyam
[Gluster-devel] uss.t in master doing bad things to our regression test VM's
Hi Vijaikumar,

As part of investigating what is going wrong with our VM's in Rackspace, I created several new VM's (11 of them) and started a full regression test run on them. They're all hitting a major problem with uss.t. Part of it does a cat on /dev/urandom... which is taking several hours at 100% of a cpu. :(

Here is output from ps -ef f on one of them:

root 12094  1287  0 13:23 ? S 0:00  \_ /bin/bash /opt/qa/regression.sh
root 12101 12094  0 13:23 ? S 0:00    \_ /bin/bash ./run-tests.sh
root 12116 12101  0 13:23 ? S 0:01      \_ /usr/bin/perl /usr/bin/prove -rf --timer ./tests
root   382 12116  0 14:13 ? S 0:00        \_ /bin/bash ./tests/basic/uss.t
root  1713   382  0 14:14 ? S 0:00          \_ /bin/bash ./tests/basic/uss.t
root  1714  1713 96 14:14 ? R 166:31          \_ cat /dev/urandom
root  1715  1713  2 14:14 ? S 5:04            \_ tr -dc a-zA-Z
root  1716  1713  9 14:14 ? S 16:31           \_ fold -w 8

And from top:

top - 17:09:19 up 3:50, 1 user, load average: 1.04, 1.03, 1.00
Tasks: 240 total, 3 running, 237 sleeping, 0 stopped, 0 zombie
Cpu0 : 4.3%us, 95.7%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 8.1%us, 15.9%sy, 0.0%ni, 76.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 1916672k total, 1119544k used, 797128k free, 114976k buffers
Swap: 0k total, 0k used, 0k free, 427032k cached

 PID USER PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
1714 root 20  0 98.6m  620  504 R 96.0  0.0 169:00.94 cat
 137 root 20  0 36100 1396 1140 S 15.9  0.1  37:01.55 plymouthd
1716 root 20  0 98.6m  712  616 S 10.0  0.0  16:46.55 fold
1715 root 20  0 98.6m  636  540 S  2.7  0.0   5:08.95 tr
   9 root 20  0     0    0    0 S  0.3  0.0   0:00.59 ksoftirqd/1
   1 root 20  0 19232 1128  860 S  0.0  0.1   0:00.93 init
   2 root 20  0     0    0    0 S  0.0  0.0   0:00.00 kthreadd

Your name is on the commit which added the code, but that was months ago. No idea why it's suddenly being a problem. Do you have any idea?

I am going to shut down all of these new test VM's except one, which I can give you (or anyone) access to, if that would help find and fix the problem. Btw, this is pretty important.
;)

+ Justin
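For what it's worth, a pipeline like the one stuck above can be made to always terminate by bounding the read from /dev/urandom itself, rather than relying on the downstream consumers to stop it. A sketch (illustrative only — not the actual uss.t code or its eventual fix):

```shell
# Sketch of a bounded random-string generator: head -c caps how many bytes
# are read from /dev/urandom, so the pipeline cannot spin at 100% CPU even
# if the downstream stages fail to terminate it.
# (Illustrative only -- not taken from uss.t.)
rand_str() {
    head -c 1024 /dev/urandom | tr -dc 'a-zA-Z' | fold -w 8 | head -n 1
}
rand_str
```

If the hang turns out to be locale-related (tr misinterpreting random bytes as multibyte characters is a known failure mode on some systems), prefixing the pipeline with LC_ALL=C is another common mitigation.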
Re: [Gluster-devel] [Gluster-users] Gluster Design Summit
On 02/18/2015 10:12 AM, Krishnan Parthasarathi wrote:
Right now, we need people interested in the Gluster Design Summit to fill out a Google Form so we can track that interest and start to nail down the specifics about the event. If you'd like to attend the Gluster Design Summit, please go here: http://goo.gl/forms/NYRFz5aaop

Is this survey for non-Red Hat community members only? I'd like to know if I should fill in this form, as a community member and a Red Hat employee, to let the organisers know that I am keen on being there.

This is a community event, but certainly something that Red Hat supports our team taking part in :) I would suggest filling in the form.

Ric
Re: [Gluster-devel] Sqlite3 dependency for Data Tiering
Then there will be two things:

1) Update this page http://www.gluster.org/community/documentation/index.php/CompilingRPMS#Common_Steps i.e. add yum install sqlite3-devel

The *real* canonical way to list dependencies is with BuildRequires lines in glusterfs.spec.in (in your source tree). IMO the instructions should refer people there. Also, yum-builddep will pull in all of the required packages based on an SRPM, but then there's a bit of a chicken-and-egg problem: how do you build an SRPM from source if you don't already have those packages? I guess it's meant for rebuilding or minor changes, not for true building from scratch.

2) Modify configure.ac to check if the sqlite3-devel packages are installed before make.

Is there something I am missing?

Other than making sure glusterfs.spec.in is updated along with configure.ac, I can't think of anything.
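The configure-time check in point 2 boils down to asking pkg-config whether the sqlite3 development files are present (configure.ac would typically do this via the standard PKG_CHECK_MODULES([SQLITE], [sqlite3]) macro). The same probe can be sketched from the shell — a hedged approximation, not the actual glusterfs configure logic:

```shell
# Approximation of the configure-time dependency check: ask pkg-config (if
# installed) whether sqlite3 development files are available, and suggest
# the right package name per distro family if not.
# (Sketch only; the real configure.ac check may differ.)
if command -v pkg-config >/dev/null 2>&1 && pkg-config --exists sqlite3; then
    echo "sqlite3 found: $(pkg-config --modversion sqlite3)"
else
    echo "sqlite3 development files not found; install sqlite3-devel (yum) or libsqlite3-dev (apt)"
fi
```

Failing early here, with a clear message, is exactly what the configure.ac change would buy over a cryptic compile error partway through make.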
Re: [Gluster-devel] Upcalls Infrastructure
Hi,

We recently uncovered a few issues with respect to lease-locks support and had discussions around them. Thanks to everyone involved. The new changes proposed in the design (in addition to the ones discussed in the earlier mail) are:

* Earlier, in case a client takes a lease-lock and a conflicting fop is requested by another client, a RECALL_LEASE CBK event will be sent to the first client, and till the first client unlocks the LEASE_LOCK, we send an EDELAY/ERETRY error to the conflicting fops. This works for protocol clients (like NFS/SMB), which keep retrying on receiving that error, but not for FUSE clients or any of the other auxiliary services (like rebalance/self-heal/quota), which will error out immediately. To resolve that, we choose to block the fops based on the flags passed (by default 'BLOCK', or 'NON_BLOCK' in the case of protocol clients). The blocking will be done the same way the current locks xlator blocks lock requests (maintain a queue of call stubs and wake them up once the LEASE_LOCK is released/recalled).

* Earlier, when a lease_lk request comes in, the upcall xlator maps it to a POSIX lock for the entire file before granting it. In case the same client then takes an fcntl lock, it will be merged with the earlier lock, and unlock of either of the locks will result in the loss of lock state. To avoid that, we plan to define a new lk_entry (LEASE_LOCK) in the 'locks' xlator to store lease-locks and add support to not merge it with locks of any other type.

* In addition, before granting a lease-lock, we now check if there are existing open fds on the file with the conflicting access requested. If yes, the lease-lock will not be granted.

* While sending the RECALL_LEASE CBK event, a new timer event will be registered to notify in case of recall timeout, so that we can purge lease-locks forcefully and wake up blocked fops.

* A few enhancements which may be considered:
  * To start with, upcall entries are maintained in a linked list.
We may change it to an RB tree for a performance improvement.
  * Store upcall entries in inode/fd ctx for faster lookup.

Thanks,
Soumya

On 01/22/2015 02:31 PM, Soumya Koduri wrote:
Hi,

I have updated the feature page with more design details and the dependencies/limitations this support has: http://www.gluster.org/community/documentation/index.php/Features/Upcall-infrastructure#Dependencies Kindly check it and provide your inputs. A few of them which may be addressed for the 3.7 release are:

*AFR/EC*
- In the case of replica bricks maintained by AFR, the upcalls state is maintained and processed on all the replica bricks. This will result in duplicate notifications sent by all those bricks in the case of non-idempotent fops.
- Hence we need support in AFR to filter out such duplicate callback notifications. Similar support is needed for EC as well.
- One of the approaches suggested by the AFR team is to cache the upcall notifications received for around 1 min (their current lifetime) to detect and filter out the duplicate notifications sent by the replica bricks.

*Cleanup during network disconnect - protocol/server*
- At present, in the case of network disconnects between the glusterfs server and the client, the protocol/server looks up the fd table associated with that client and sends a 'flush' op for each of those fds to clean up the locks associated with it.
- We need similar support to flush the lease-locks taken. Hence, while granting the lease-lock, we plan to associate that upcall_entry with the corresponding fd_ctx or inode_ctx so that they can be easily tracked if they need to be cleaned up. It will also help in faster lookup of the upcall entries while trying to process fops using the same fd/inode.

Note: The above cleanup is done for the upcall state associated with only lease-locks.
For the other entries maintained (e.g. for cache-invalidations), the reaper thread (which will be used to clean up the expired entries in this xlator) will clean up those states as well once they expire.

*Replay of the lease-locks state*
- At present, replay of locks by the client xlator (after network disconnect and reconnect) seems to have been disabled.
- But when it is enabled, we need to add support to replay the lease-locks taken as well.
- Till then, this will be considered a limitation and will be documented, as suggested by KP.

Thanks,
Soumya

On 12/16/2014 09:36 AM, Krishnan Parthasarathi wrote:
- Is there a new connection from glusterfsd (upcall xlator) to a client accessing a file? If so, how does the upcall xlator reuse connections when the same client accesses multiple files, or does it?

No. We are using the same connection which the client initiates to send in fops. Thanks to you for pointing me initially to the 'client_t' structure. As these connection details are available only in the server xlator, I am passing these to the upcall xlator by storing them in
[Gluster-devel] Meeting minutes of todays Gluster Community meeting
On Wed, Feb 18, 2015 at 12:41:40PM +0100, Niels de Vos wrote:
Hi all,

In about 20 minutes from now we will have the regular weekly Gluster Community meeting.

Meeting details:
- location: #gluster-meeting on Freenode IRC (webchat: http://webchat.freenode.net/?channels=gluster-meeting)
- date: every Wednesday
- time: 7:00 EST, 12:00 UTC, 13:00 CET, 17:30 IST (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-community-meetings

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* GlusterFS 3.6
* GlusterFS 3.5
* GlusterFS 3.4
* GlusterFS Next
* Open Floor

The last topic has space for additions. If you have a suitable topic to discuss, please add it to the agenda.

Thanks, Niels

Minutes: http://meetbot.fedoraproject.org/gluster-meeting/2015-02-18/gluster-meeting.2015-02-18-12.03.html
Minutes (text): http://meetbot.fedoraproject.org/gluster-meeting/2015-02-18/gluster-meeting.2015-02-18-12.03.txt
Log: http://meetbot.fedoraproject.org/gluster-meeting/2015-02-18/gluster-meeting.2015-02-18-12.03.log.html

Meeting summary
---
* LINK: https://public.pad.fsfe.org/p/gluster-community-meetings (ndevos, 12:03:31)
* Roll Call (ndevos, 12:03:37)
* Last weeks action items (ndevos, 12:05:09)
  * subtopic: ndevos should publish an article on his blog (ndevos, 12:05:21)
  * subtopic: hchiramm will try to fix the duplicate syndication of posts from ndevos (ndevos, 12:05:58)
  * subtopic: hchiramm will share the outcome of the non-mailinglist packaging discussions on the mailinglist (including the Board) (ndevos, 12:06:29)
  * subtopic: hagarth to open a feature page for (k)vm hyperconvergence (ndevos, 12:08:02)
  * subtopic: spot to reach out to community about website messaging (ndevos, 12:09:22)
  * ACTION: ndevos will contact spot about open standing action items on the weekly agenda (ndevos, 12:10:44)
  * subtopic: hagarth to carry forward discussion on automated builds for various platforms in gluster-infra ML (ndevos,
12:10:57)
  * subtopic: ndevos should send out a reminder about Maintainer responsibilities to the -devel list (ndevos, 12:11:43)
  * subtopic: telmich will send an email to the gluster-users list about Gluster support in QEMU on Debian/Ubuntu (ndevos, 12:12:18)
  * subtopic: jimjag to engage the board, asking for their direction and input for both the 3.7 and 4.0 releases (ndevos, 12:13:57)
* GlusterFS 3.6 (ndevos, 12:16:14)
  * the release of 3.6 is getting delayed because of regression test failures (ndevos, 12:23:29)
  * ACTION: JustinClift keeps on investigating the regression test failures (ndevos, 12:25:16)
* GlusterFS 3.5 (ndevos, 12:29:06)
  * beta1 for 3.5.4 might be delayed a week due to regression test issues (ndevos, 12:33:05)
* GlusterFS 3.4 (ndevos, 12:33:23)
* Gluster.next (ndevos, 12:39:39)
  * subtopic: 3.7 (ndevos, 12:39:48)
  * ACTION: overclk will send out initial BitRot patches for review this week (ndevos, 12:44:14)
  * subtopic: 4.0 (ndevos, 12:48:15)
* Other Agenda Items (ndevos, 12:52:12)
  * REMINDER: Upcoming talks at conferences: https://public.pad.fsfe.org/p/gluster-events (ndevos, 12:52:24)
  * subtopic: GSOC 2015 (ndevos, 12:52:58)
* Open Floor (ndevos, 12:57:20)
  * ACTION: overclk will schedule a BitRot Google Hangout with some technical details and a small demo (ndevos, 13:03:00)
  * kkeithley and ndevos will be visiting Bangalore, 6-10 April (ndevos, 13:06:31)

Meeting ended at 13:07:44 UTC.
Action Items
---
* ndevos will contact spot about open standing action items on the weekly agenda
* JustinClift keeps on investigating the regression test failures
* overclk will send out initial BitRot patches for review this week
* overclk will schedule a BitRot Google Hangout with some technical details and a small demo

Action Items, by person
---
* JustinClift
  * JustinClift keeps on investigating the regression test failures
* ndevos
  * ndevos will contact spot about open standing action items on the weekly agenda
* overclk
  * overclk will send out initial BitRot patches for review this week
  * overclk will schedule a BitRot Google Hangout with some technical details and a small demo
* **UNASSIGNED**
  * (none)

People Present (lines said)
---
* ndevos (113)
* JustinClift (39)
* kkeithley (21)
* raghu (18)
* jdarcy (16)
* overclk (16)
* hchiramm (10)
* gothos (5)
* bene2 (5)
* kshlm (4)
* msvbhat (3)
* jimjag (2)
* zodbot (2)
* Debloper (1)
* partner (1)

Generated by `MeetBot`_ 0.1.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
Re: [Gluster-devel] [Gluster-users] Looking for volunteer to write up official How to do GlusterFS in the Cloud: The Right Way for Rackspace...
I could probably chip in too. I've run tons of my own science experiments on Rackspace instead of our own hardware, because that makes my results more reproducible by others. If we can enable more people to do likewise, that benefits everyone.

P.S. Hi Jesse. Small world, huh?
Re: [Gluster-devel] Looking for volunteer to write up official How to do GlusterFS in the Cloud: The Right Way for Rackspace...
On 18 Feb 2015, at 13:50, Jesse Noller jesse.nol...@rackspace.com wrote:
Sorry, this is what I get for logging off for the night: short answer is YES. At the end of the day I want to showcase Gluster's awesomeness and also be able to show users how to do it right in the cloud for shared, fault-tolerant file systems.

Looks like we have four volunteers:

* Ben Turner (primary GlusterFS perf tuning guy)
* Jeff Darcy (greybeard GlusterFS developer and scalability expert)
* Josh Boon (experienced GlusterFS guy - Ubuntu focused)
* Nico Schottelius (newer GlusterFS guy - familiar with Ubuntu/CentOS)

This sounds like a fairly good mix, so let's go with that. Ben and Jeff, does it make sense for you two to do the leading, with Josh and Nico involved and learning/assisting/idea-generation/stuff as needed?

In our initial discussion, Jesse mentioned the required output is Markdown + diagrams, and he'll get the Rackspace design people to make things *look awesome*. Guessing they'll redo the diagrams for prettiness then, so any we produce just need to be functional enough to convey the info?

Jesse, we're pretty maxed out on our VM's atm (over the sponsorship monthly budget). Are you able to set up something just for this effort, or is there a better way, or? :)

+ Justin
Re: [Gluster-devel] Two consistent regression failures in release-3.6 HEAD
On 18 Feb 2015, at 14:40, Jeff Darcy jda...@redhat.com wrote:
Jeff, ditto for ssl-authz.t?

I was able to reproduce the failure on slave30. This is the same race that was fixed by http://review.gluster.org/9483 so I'll submit the backport for that.

P.S. I'm done with slave30 unless/until something else comes up.

Awesome Jeff, thanks. :)

+ Justin
Re: [Gluster-devel] Looking for volunteer to write up official How to do GlusterFS in the Cloud: The Right Way for Rackspace...
Sorry, this is what I get for logging off for the night: short answer is YES. At the end of the day I want to showcase Gluster's awesomeness and also be able to show users how to do it right in the cloud for shared, fault-tolerant file systems.

Jesse

On Feb 17, 2015, at 5:53 PM, Justin Clift jus...@gluster.org wrote:
Yeah, that'd be pretty optimal. How full on do you want to go? Should we look into arranging some OnMetal stuff, in addition to the VM offerings?

Jesse, Ben is our *very best* GlusterFS and RHS performance tuning expert, so capturing his interest is a *very* good thing. ;)

+ Justin

On 17 Feb 2015, at 23:10, Benjamin Turner bennytu...@gmail.com wrote:
This is interesting to me; I'd like the chance to run my performance tests on a cloud provider's systems. We could put together some recommendations for configuration, tuning, and performance numbers? Also it would be cool to enhance my setup scripts to work with cloud instances. Sound like what you are looking for, ish? -b

On Tue, Feb 17, 2015 at 5:06 PM, Justin Clift jus...@gluster.org wrote:
On 17 Feb 2015, at 21:49, Josh Boon glus...@joshboon.com wrote:
Do we have use cases to focus on? Gluster is part of the answer to many different questions, so if it's things like simple replication and distribution and basic performance tuning I could help. I also have a heavy Ubuntu tilt, so if it's Red Hat oriented I'm not much help :)

Jesse, thoughts on this? I kinda think it would be useful to have instructions which give correct steps for Ubuntu + Red Hat (and anything else suitable). Josh, if Jesse agrees, then your Ubuntu knowledge will probably be useful for this.
;)

+ Justin

- Original Message -
From: Justin Clift jus...@gluster.org
To: Gluster Users gluster-us...@gluster.org, Gluster Devel gluster-devel@gluster.org
Cc: Jesse Noller jesse.nol...@rackspace.com
Sent: Tuesday, February 17, 2015 9:37:05 PM
Subject: [Gluster-devel] Looking for volunteer to write up official How to do GlusterFS in the Cloud: The Right Way for Rackspace...

Yeah, huge subject line. :) But it gets the message across...

Rackspace provide us a *bunch* of online VM's which we have our infrastructure in + run the majority of our regression tests with. They've asked us if we could write up a How to do GlusterFS in the Cloud: The Right Way (technical) doc, for them to add to their doc collection. They get asked for this a lot by customers. :D

Sooo... looking for volunteers to write this up. And yep, you're welcome to have your name all over it (e.g. this is good promo/CV material :)

VM's (in Rackspace obviously) will be provided of course. Anyone interested? (Note - not suitable for a GlusterFS newbie. ;))

Regards and best wishes,

Justin Clift
Re: [Gluster-devel] Two consistent regression failures in release-3.6 HEAD
Jeff, ditto for ssl-authz.t?

I was able to reproduce the failure on slave30. This is the same race that was fixed by http://review.gluster.org/9483 so I'll submit the backport for that.

P.S. I'm done with slave30 unless/until something else comes up.
Re: [Gluster-devel] [Gluster-users] Looking for volunteer to write up official How to do GlusterFS in the Cloud: The Right Way for Rackspace...
- Original Message -
From: Justin Clift jus...@gluster.org
To: Benjamin Turner bennytu...@gmail.com
Cc: Gluster Users gluster-us...@gluster.org, Gluster Devel gluster-devel@gluster.org, Jesse Noller jesse.nol...@rackspace.com
Sent: Tuesday, February 17, 2015 6:52:48 PM
Subject: Re: [Gluster-users] [Gluster-devel] Looking for volunteer to write up official How to do GlusterFS in the Cloud: The Right Way for Rackspace...

Yeah, that'd be pretty optimal. How full on do you want to go?

I have some benchmark kits that can easily be enhanced to work with Ubuntu; it should just be a matter of checking /etc/redhat-release (and the Ubuntu equivalent) and running apt instead of yum. I was thinking I could do the testing on the RPM-based systems and the other contributor could run on deb? We can share scripts / tuning / recommendations, and at the end of this could have nice setup / benchmark kits that work on a variety of OSes + the doc we are looking for. The tool uses IOZone, smallfile, and fio, runs the tool N # of times to get a sample set, and does some statistical analysis on the samples (avg, std dev, delta between samples) before moving on to the next test.

Should we look into arranging some OnMetal stuff, in addition to the VM offerings?

I have all the BM numbers we can handle, happy to provide them: 1G, 10G, RDMA, spinning and SSDs. I will be rerunning everything with the mt-epoll + RDMA changes also.

Jesse, Ben is our *very best* GlusterFS and RHS performance tuning expert, so capturing his interest is a *very* good thing. ;)

A little overstated, but the sentiment is appreciated :)

+ Justin

On 17 Feb 2015, at 23:10, Benjamin Turner bennytu...@gmail.com wrote:
This is interesting to me; I'd like the chance to run my performance tests on a cloud provider's systems. We could put together some recommendations for configuration, tuning, and performance numbers? Also it would be cool to enhance my setup scripts to work with cloud instances.
Sound like what you are looking for, ish? -b

On Tue, Feb 17, 2015 at 5:06 PM, Justin Clift jus...@gluster.org wrote:
On 17 Feb 2015, at 21:49, Josh Boon glus...@joshboon.com wrote:
Do we have use cases to focus on? Gluster is part of the answer to many different questions, so if it's things like simple replication and distribution and basic performance tuning I could help. I also have a heavy Ubuntu tilt, so if it's Red Hat oriented I'm not much help :)

Jesse, thoughts on this? I kinda think it would be useful to have instructions which give correct steps for Ubuntu + Red Hat (and anything else suitable). Josh, if Jesse agrees, then your Ubuntu knowledge will probably be useful for this. ;)

+ Justin

- Original Message -
From: Justin Clift jus...@gluster.org
To: Gluster Users gluster-us...@gluster.org, Gluster Devel gluster-devel@gluster.org
Cc: Jesse Noller jesse.nol...@rackspace.com
Sent: Tuesday, February 17, 2015 9:37:05 PM
Subject: [Gluster-devel] Looking for volunteer to write up official How to do GlusterFS in the Cloud: The Right Way for Rackspace...

Yeah, huge subject line. :) But it gets the message across...

Rackspace provide us a *bunch* of online VM's which we have our infrastructure in + run the majority of our regression tests with. They've asked us if we could write up a How to do GlusterFS in the Cloud: The Right Way (technical) doc, for them to add to their doc collection. They get asked for this a lot by customers. :D

Sooo... looking for volunteers to write this up. And yep, you're welcome to have your name all over it (e.g. this is good promo/CV material :)

VM's (in Rackspace obviously) will be provided of course. Anyone interested? (Note - not suitable for a GlusterFS newbie. ;))

Regards and best wishes,

Justin Clift
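The distro check Ben describes could look something like this — a hedged sketch using the standard release files, not code taken from his actual benchmark kit:

```shell
# Sketch: pick the install command based on which distro family we're on,
# by probing the standard release files (/etc/redhat-release for RPM-based
# systems, /etc/debian_version for Debian/Ubuntu).
# (Hypothetical helper, not from the benchmark kit discussed above.)
pick_pkg_mgr() {
    if [ -f /etc/redhat-release ]; then
        echo "yum install -y"
    elif [ -f /etc/debian_version ]; then
        echo "apt-get install -y"
    else
        echo "unknown"
    fi
}
pick_pkg_mgr
```

A setup script would then install its dependencies with `$(pick_pkg_mgr) iozone fio`, keeping one script working across both the RPM and deb test systems.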
Re: [Gluster-devel] Looking for volunteer to write up official How to do GlusterFS in the Cloud: The Right Way for Rackspace...
Looks like we have four volunteers:

* Ben Turner (primary GlusterFS perf tuning guy)
* Jeff Darcy (greybeard GlusterFS developer and scalability expert)
* Josh Boon (experienced GlusterFS guy - Ubuntu focused)
* Nico Schottelius (newer GlusterFS guy - familiar with Ubuntu/CentOS)

This sounds like a fairly good mix, so let's go with that. Ben and Jeff, does it make sense for you two to do the leading, with Josh and Nico involved and learning/assisting/idea-generation/stuff as needed?

Sounds good to me. By purest coincidence, I was planning to do some experiments on Rackspace today anyway. I'll try to take notes and share them when I'm done.
Re: [Gluster-devel] [Gluster-users] Gluster Design Summit
On to the logistics:

When: I'm looking at sometime during the second week of May (May 11-15). Alternately, the third week of April (April 13-19), though I'm concerned about being able to get it all in place before then. I'd like to have at least one day's worth of scheduled presentations, then another day for an in-person board meeting and open sprints, so this would make a total of two days of Summit.

Where: I know we have a lot of international Gluster contributors who are not in the United States, so I'm open to suggestion on this point. A quick internet search seems to imply that large international airports like Washington DC, New York, Chicago, and Los Angeles have cheaper international flights. I only suggest the US as a location country because it tends to be simpler for people to travel to the US from a visa and logistics perspective, but if anyone wants to make the case for a different location country, speak up (quickly).

Who gets to go: The event will be open to anyone who wishes to attend, but we want to encourage active community members to be present. The intention is to use the bulk of the funding allocated to this event to cover travel and lodging costs for members of the Gluster community who wish to present and/or sprint. (Yes, you read that right. We will cover the travel and hotel costs for community contributors to attend, as needed, and as funding permits.)

How can I help? Right now, we need people interested in the Gluster Design Summit to fill out a Google Form so we can track that interest and start to nail down the specifics about the event. If you'd like to attend the Gluster Design Summit, please go here: http://goo.gl/forms/NYRFz5aaop

Is this survey for non-Red Hat community members only? I'd like to know if I should fill in this form, as a community member and a Red Hat employee, to let the organisers know that I am keen on being there.