[Gluster-devel] Smoke test question
Hi guys, I am investigating running smoke tests inside Docker.io, but I cannot seem to pass the posix-compliance tests. I then tried to run the posix-compliance tests directly on XFS and ext4, but I could not make them pass there either. This is what I did:

$ truncate -s 5G mydisk
$ sudo mkfs.xfs mydisk
$ sudo mount -o loop mydisk /mnt
$ cd /mnt

* Change the value of 'fs' in ~/qa/tools/posix-compliance/conf according to the file system format.

$ prove -r ~/qa/tools/posix-compliance/tests

They always fail on the chown tests. Does anyone know why?

- Luis

___ Gluster-devel mailing list Gluster-devel@nongnu.org https://lists.nongnu.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] GlusterFS unit test framework
Hi all, The patch is now live in GlusterFS. It will allow you to write unit tests for your new patches, or, if you want, you can start writing unit tests for older code. Writing unit tests for older code is a great way to learn the code (which I will be doing :-)). I have also created a presentation to help explain what unit tests are: http://slides-lpabon.rhcloud.com/feb24_glusterfs_unittest.html#/

The framework will also help in situations where developers depend on APIs from other contributors. In that situation, a developer can write their patch and test their code by mocking any APIs their code depends on. This allows for parallel development without the need to wait for the APIs to be available.

- Luis

On 02/20/2014 02:00 PM, Luis Pabon wrote:
Hi all, I have uploaded my patch to add unit test support to GlusterFS. The unit test framework provides integration with Jenkins and coverage support. The patch is: http://review.gluster.org/#/c/7145/
* Documentation: https://github.com/lpabon/glusterfs/blob/xunit/doc/hacker-guide/en-US/markdown/unittest.md
* Integration with Jenkins: http://build.gluster.org/job/glusterfs-unittests/
* Tracker Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1067059
This is just the start. If this is accepted, we would need developers to start adding unit tests as part of their patches. Imagine, hundreds, no wait, thousands of unit tests running on every patch submission :-). Let me know what you think.
- Luis
[Gluster-devel] GlusterFS unit test framework
Hi all, I have uploaded my patch to add unit test support to GlusterFS. The unit test framework provides integration with Jenkins and coverage support. The patch is: http://review.gluster.org/#/c/7145/

* Documentation: https://github.com/lpabon/glusterfs/blob/xunit/doc/hacker-guide/en-US/markdown/unittest.md
* Integration with Jenkins: http://build.gluster.org/job/glusterfs-unittests/
* Tracker Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1067059

This is just the start. If this is accepted, we would need developers to start adding unit tests as part of their patches. Imagine, hundreds, no wait, thousands of unit tests running on every patch submission :-). Let me know what you think.

- Luis
Re: [Gluster-devel] Translator test harness
Excellent, thanks Raghavendra and Krishnan. I will keep you posted.

- Luis

On 11/21/2013 03:37 AM, Raghavendra Gowdappa wrote:
Count me in too. I am also interested in working on this.

- Original Message -
From: Luis Pabon lpa...@redhat.com
To: Anand Avati av...@gluster.org
Cc: Gluster Devel gluster-devel@nongnu.org
Sent: Wednesday, November 20, 2013 9:03:14 AM
Subject: Re: [Gluster-devel] Translator test harness

Excellent. Thanks Avati.

- Luis

On 11/19/2013 09:54 PM, Anand Avati wrote:
On Tue, Nov 19, 2013 at 6:48 PM, Luis Pabon lpa...@redhat.com wrote:
I'm definitely up for it, not just for the translators, but, as Avati pointed out, a test harness for the GlusterFS system. I think, if possible, the translator test harness is really a subclass of a GlusterFS unit/functional test environment. I am currently in the process of qualifying some C unit test frameworks (specifically those that provide mock frameworks -- Cmock, cmockery) to propose to the GlusterFS community as a foundation for the unit/functional test environment. What I would like to see, and still have a hard time finding, is a source coverage tool for C. Anyone know of one?

gcov (+lcov)

Avati
Re: [Gluster-devel] Translator test harness
I'm definitely up for it, not just for the translators, but, as Avati pointed out, a test harness for the GlusterFS system. I think, if possible, the translator test harness is really a subclass of a GlusterFS unit/functional test environment. I am currently in the process of qualifying some C unit test frameworks (specifically those that provide mock frameworks -- Cmock, cmockery) to propose to the GlusterFS community as a foundation for the unit/functional test environment. What I would like to see, and still have a hard time finding, is a source coverage tool for C. Anyone know of one?

- Luis

On 11/18/2013 09:34 AM, Jeff Darcy wrote:
Last week, Luis and I had a discussion about unit testing translator code. Unfortunately, the structure of a translator - a plugin with many entry points which interact in complex and instance-specific ways - is one that is notoriously challenging to test. Really, the only way to do it is to have some sort of task-specific harness, with at least the following parts:

* Code above, to inject requests.
* Code below, to provide mocked replies to the translator's own requests.
* Code on the side, to track things like resources or locks acquired and released.

This would be an ambitious undertaking, but not so ambitious that it's beyond reason. The benefits should be obvious. At this point, what I'm most interested in is volunteers to help define the requirements and scope, so that we can propose this as a feature or task for some future GlusterFS release. Who's up for it?
Re: [Gluster-devel] Translator test harness
Excellent. Thanks Avati.

- Luis

On 11/19/2013 09:54 PM, Anand Avati wrote:
On Tue, Nov 19, 2013 at 6:48 PM, Luis Pabon lpa...@redhat.com wrote:
I'm definitely up for it, not just for the translators, but, as Avati pointed out, a test harness for the GlusterFS system. I think, if possible, the translator test harness is really a subclass of a GlusterFS unit/functional test environment. I am currently in the process of qualifying some C unit test frameworks (specifically those that provide mock frameworks -- Cmock, cmockery) to propose to the GlusterFS community as a foundation for the unit/functional test environment. What I would like to see, and still have a hard time finding, is a source coverage tool for C. Anyone know of one?

gcov (+lcov)

Avati
Re: [Gluster-devel] Glupy bugs: found the solution?
Hi guys, On the python bindings side, we have hired a new developer to spend his time divided between gluster-swift and the Python bindings for gfapi (and probably Java bindings at some future date). His name is Thiago da Silva and he started a few weeks ago.

The gluster-swift project will, in the near future, need to move from FUSE to libgfapi, so we really need the Python bindings to become a real project: Python exceptions, unit tests, functional tests, documentation, releases, etc. We are also planning to move the bindings out of the glusterfs repo into a new repo. We have been really busy working on SWAuth for gluster-swift, but we plan on using the same development workflow as gluster-swift in the python-libgfapi repo. Here is the repo information:

Public Repo: https://github.com/gluster/libgfapi-python (for some reason it is not syncing with Gerrit. I'll ping Avati)
Gerrit: http://review.gluster.org/libgfapi-python

- Luis

On 10/28/2013 03:28 PM, Justin Clift wrote:
On 27/10/2013, at 8:37 AM, Niels de Vos wrote:
snip
Yeah, that's why I was thinking it was libgfapi stuff getting pulled in, not Swift. The import line in your pdf needs updating, btw, as the import line for current git head needs to be: from gluster import gfapi

Ah, right. I'm not sure I'll update that, because at the time of the presentation one needed to use the git repository. During the Gluster Community Day in Stockholm someone (sorry, can't remember the name) asked if gfapi.py could not be included in the packages. Good question, and it showed I wasn't the only one who would benefit from that ;-)

Kind of thinking this through further, gfapi.py is kind of serving two roles at the moment (import + demo). Should we improve that by moving the import code into a more proper gfapi.py under api/src/, keeping the Python demo code in the examples (gfapi-demo.py or similar)?

snip
the Python package 'gluster' will be the base for other modules.
Hence we have put the api bindings in gluster/gfapi.py. I think it would make sense to rename the Glupy module gluster.glupy. If it can be placed in /usr/lib/python2.6/site-packages/gluster/glupy.py, things would be even nicer :) That would make the import line: from gluster import glupy Wouldn't it?

No objections to that, it's fairly simple and pretty logical. :)

Yes, that's my suggestion. I am not maintaining or developing Glupy, so it is not my call, and I have no insight into whether this would make things more complex somewhere else. I'm still planning to have a look at Glupy, but I have not found the time to do so yet...

Sure, np. Jeff was wanting to hand the code off to someone else, either myself or Ram, as we both put time into it. Not sure if Ram officially picked it up or not. ;) I'd be happy to, but there is some technical challenge there... I don't understand Python ctypes at all, and it's not absorbing into my head (have tried). :(

+ Justin
--
Open Source and Standards @ Red Hat
twitter.com/realjustinclift
Re: [Gluster-devel] Snapshot design for glusterfs volumes
Hi Shishir, Thank you for sending out your paper. Here are some comments I have (written in markdown format):

# Review of Online Snapshot Support for GlusterFS

## Section: Introduction

* The primary use case should have a better explanation. It does not explain how users are currently compensating for not having the technology in their environment, nor the benefits of having the feature.
* The last sentence should explain why it is the same. Why would it be? Can no benefits be gained from having this feature in non-VM-image environments? If not, then the name should be changed to vmsnapshots, or something else that discourages usage in environments other than VM image storage.

## Section: Snapshot Architecture

* The architecture section does not talk about architecture, but instead focuses on certain modes of operation. Please explain how a user, from either a client or something like an OpenStack interface, would interact with the snapshots. Also describe in good detail all aspects of operation (delete, create, etc.). Describe the concept of barriers here instead of at the end of the document.
* I'm new to GlusterFS, but I am confused by what is meant by bullet #3: "The planned support is for GlusterFS Volume based snapshots". It seems like the sentence is not finished. Do you mean "The planned support is for snapshots of GlusterFS volumes..."? Also, how is brick coherency kept across multiple AFR nodes?
* The Snapshot Consistency section is confusing; please reword the description. Maybe change the format to paragraphs instead of bullets.
* Please explain why there is a snapshot limit of 256. Are we using only one byte for tracking a snapshot id?
* When the CLI executes multiple volume snapshots, is it possible to execute them in parallel? Why do they need to be serially processed?
* What happens when `restore` is executed? How does the volume state change? Does the .gluster directory change in any way?
* What happens when `delete` is executed? When we have the snapshots `A-B-C-D` and we delete `B`, what happens to the state of the volume? Do the changes from `B` get merged into `A`, so that the dependencies needed by `C` are still provided?
* Using the example above, can I branch or clone from `B` to `B'` and create a *new* volume? I am guessing that the LVM technology would probably not allow this, but maybe btrfs would.

## Section: Data Flow

* This section is confusing. Why are these bullets if they read as a sequence? This seems to me more like a project requirements list than a data flow description.
* What are the side effects of acquiring the cluster-wide lock? What benefits/concerns does it have for a system with N nodes?
* What is the average amount of time the CLI should expect to be blocked before it returns?
* I am not sure if we have something like this already, but we may want to discuss the concept of a JOB manager. For example, here the CLI will send a request which may take longer than 3 secs. In such a situation, the CLI would be returned a JOB ticket number. The user can then query the JOB manager with the ticket number for status, or provide a callback mechanism (which is a little harder, but possible to do). In any case, I think this JOB manager falls outside the scope of this paper, but it is something we should revisit if we do not already possess it.
* The bullet "Once barrier is on, initiate back-end snapshot." should explain in greater detail what is meant by "back-end snapshot".

## Section: CLI Interface

* Each one of these commands should be explained in fine detail in the architecture section, covering how it affects volume state and its side effects.

## Section: Snapshot Design

* Does the amount of content in a brick affect the create, delete, list, or restore snapshot time?
* The paper only describes `create` in the first part of the section. There should probably be a subsection for each of the supported commands, each describing in detail how it is planned to be implemented.
* Could there be a section showing how the JSON/XML interfaces would support this feature?

### Subsection: Stage-1 Prepare

* Are barriers on multiple bricks executed serially? What is the maximum number of bricks supported by the snapshot feature before it takes an unusual amount of time to execute? Should brick barriers be done in parallel?
* This again reads sometimes like a requirements list and sometimes like a sequence. Please reword the section.

## Section: Barrier

* The paragraph states "unless Asynchronous IO is used". How does that affect the barrier and snapshots? The paper does not describe this situation.
* A description of the planned barrier design would help in understanding what is meant by "queuing of fops".
* Will the barrier be implemented as a new xlator which will be inserted on the fly when a snapshot is requested, or will it require changes to existing xlators? If it is not planned to be a xlator, should it be implemented as such to
Re: [Gluster-devel] file version on glusterfs using libgit
This sounds really interesting, but I do have some questions about Git (or any SCM) as a solution for file version support.

1. How well does Git handle large binary files like VM images? Does it keep a copy of each one, or does it keep diffs?
2. Does Git, or another SCM, allow for the deletion of older versions?
3. Can this solution be used for VM linked clones? (I guess that would be like branching each one.)

This is really interesting, because Brian F. and I were just discussing the pluses and minuses of a file version solution, but instead using QEMU's block driver technology, specifically either QCOW2 or QED (leaning more toward QED). Maybe what we are describing here is two different implementations for two different use cases.

File Versioning?:
1. Google Drive/Dropbox style file versions for small documents and files (still a question on binary deltas), where older versions are never deleted. -- Solution: Git translator

File Snapshots?:
2. Snap support for small or large files which may require the deleting and/or merging of different versions. Specifically, satisfying APIs like the OpenStack Cinder Snapshot API (available in Grizzly [1]) and linked virtual machine clones. -- Possible Solution: QEMU block technology (still under investigation)

Also, I am not sure, but this type of translator might be better placed at the client (behind DHT) than at the server (behind POSIX). I am still new to GlusterFS, but I am guessing that the .git repo (which probably cannot be seen by the client) would be handled by only one of the GlusterFS hosts. This could create a bottleneck. If instead the xlator were at the client, then the files would be spread over the cluster (even the .git repo) by the DHT xlator. There may be a need to do some type of locking, but I am guessing GlusterFS already handles much of that. This issue parallels discussions Brian and I had around a QED-based translator and how it would handle the IO for 100 linked cloned virtual machines.
But like I said above... definitely very cool stuff.

- Luis

PS. Another possible solution: **IF** we had a deduplicating backend (xlator or file system), then we could just make a copy (although it could be slow) and be done with it :-).

[1] https://wiki.openstack.org/wiki/Cinder

On 03/08/2013 06:20 AM, Niels de Vos wrote:
On Fri, Mar 08, 2013 at 06:00:24AM -0500, Shishir Gowda wrote:
Hi Niels, Thinking out loud, I think the snaps (in the file version context) can be displayed as branches (list).

Well, I am not sure if branches are really needed. Isn't linear history sufficient? Every change should be committed to the master branch anyway. Branches may be useful for switching between versions, but nothing prevents you from checking out (or using git ls-files) with a commit or a date. I'm thinking of a virtual .snaps directory:

$ cd $VOLUME/.snaps
$ ls
2013-03-07/ 2013-03-06/ . current/ changelog
yesterday -> 2013-03-07/

This makes it possible to do something like:

$ cat changelog
- virtual file, showing the contents of 'git log'
- find a commit you're interested in
$ mkdir $GIT_COMMIT_ID
$ ls $GIT_COMMIT_ID/
- get the state just like 'git checkout $GIT_COMMIT_ID'

Maybe it would be helpful to be able to create tags inside this .snaps directory. But I would refrain from branches for now (unless there is a clear use-case).

Cheers, Niels

Once the user cd's into any one of them, we could do a git checkout of the branch. That should mimic the behaviour.

With regards, Shishir

- Original Message -
From: Shishir Gowda sgo...@redhat.com
To: Niels de Vos nde...@redhat.com
Cc: gluster-devel@nongnu.org
Sent: Friday, March 8, 2013 4:04:42 PM
Subject: Re: [Gluster-devel] file version on glusterfs using libgit

Hi Niels, My inclination too is to load git on top of the posix xlator. I was thinking of making previous versions (based on some policy) be treated as a new branch. We could see how to export these branches as user-visible dirs.
With regards, Shishir

- Original Message -
From: Niels de Vos nde...@redhat.com
To: Shishir Gowda sgo...@redhat.com
Cc: gluster-devel@nongnu.org
Sent: Friday, March 8, 2013 3:25:57 PM
Subject: Re: [Gluster-devel] file version on glusterfs using libgit

On Thu, Mar 07, 2013 at 12:54:41AM -0500, Shishir Gowda wrote:
Hi All, I was playing around with git on a glusterfs volume, to provide a way of supporting file versions, and the initial run is encouraging. A brief overview of what was tried:

Approach 1: Glusterfs volume as a git repo
1. Created a 2-brick distribute volume
2. Initialized a git repo on the fuse volume
3. Created files, committed them in git
4. Modified files, and committed them again
5. Did branch check-outs, to simulate versions @ point in time
6. Reset branch heads, and was able to access older versions of files (after a stash)
7. Was able to create files/dirs/symlinks/hardlinks
8. Both NFS/FUSE clients were used

Approach 2: Glusterfs bricks as git