[Gluster-users] Simple recipes for JBOD and AutoScaling.
Hi folks. Does anyone have any simple guides for autoscaling Gluster with resiliency and replication? I found the admin guide here: https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_managing_volumes.md#expanding-volumes which is a start, but I'd like some more concrete examples. Thanks! -- Jay Vyas http://jayunit100.blogspot.com ___ Gluster-users mailing list Gluster-users@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-users
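Not a full guide, but for anyone else searching: the expand flow in that doc boils down to roughly the following (server, volume, and brick names here are hypothetical, and for a replica-N volume you must add bricks in multiples of N):

```
# add the new server to the trusted pool
gluster peer probe server5

# add its brick(s) to the volume (replica-2 example: add a pair)
gluster volume add-brick myvol server5:/export/brick1 server6:/export/brick1

# redistribute existing data across the new bricks
gluster volume rebalance myvol start
gluster volume rebalance myvol status
```

These have to run against a live trusted pool, so treat this as a sketch of the recipe rather than a copy-paste script.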
[Gluster-users] forge wiki : Upgrades ?
Hi folks. Any chance of upgrading any features on the forge wiki over time? e.g. adding images or Trac-style plugins, etc.? -- Jay Vyas http://jayunit100.blogspot.com
[Gluster-users] New upstream glusterfs-hadoop release 2.1.10
Hi folks. We've now pushed a couple of new releases to glusterfs-hadoop.

1) We now support hadoop-2.3.0 using the standard instructions on the forge for Linux container setup, with the caveat that you use GlusterLinuxContainer. This is a workaround for YARN-1235.

2) We have also fixed the unit tests so that they test 2.0 specs, avoiding conflicts where 1.0 and 2.0 FS semantics differ (testListStatus).

Happy hadooping! The plugin jars can be found here: http://rhbd.s3.amazonaws.com/maven/indexV2.html

-- Jay Vyas http://jayunit100.blogspot.com
[Gluster-users] quick hadoop on gluster update
Hi gluster community! Brad Childs has recently updated our glusterfs-hadoop plugin to support multiple volumes, and it's now available on our upstream release site: http://rhbd.s3.amazonaws.com/maven/indexV2.html

If any of you are hacking around with hadoop on gluster, we'd love to hear what you think. The basic implementation changes can be seen in the core-site.xml file that the code ships with.

There are different performance and security use cases for this multivolume hadoop-accessible storage that we're excited about playing with. For example, if you have a "normal" volume with our suggested settings for mapreduce jobs, keep it, and add another volume for hadoop apps that do high-throughput writes or puts. That way, your typical mapreduce jobs don't suffer from eventual consistency, but you can still have HA where you need it in your DFS.

-- Jay Vyas http://jayunit100.blogspot.com
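For the curious, the Hadoop-side wiring in core-site.xml looks roughly like this. The property names below are reproduced from memory and may not match the shipped file exactly, especially the new multivolume keys, so check the core-site.xml that ships with the release for the authoritative list:

```
<!-- illustrative sketch only: verify key names against the shipped core-site.xml -->
<property>
  <name>fs.glusterfs.impl</name>
  <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>glusterfs:///</value>
</property>
<!-- the multivolume release adds per-volume properties here;
     see the shipped core-site.xml for the exact keys -->
```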
[Gluster-users] BigPetStore on GlusterFS
Finally cobbled together a somewhat polished demo of the BigPetStore hadoop app on glusterfs. Anyone interested in running hadoop applications on top of gluster should definitely check it out! https://www.youtube.com/watch?v=OVB3nEKN94k -- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] linux flash filesystems and GlusterFS
Are there any other requirements other than xattr support? It would be cool to find every kernel FS implementation and test them automatically on a server (maybe our Rackspace nodes) somewhere against a series of known gluster requirements.

> On Apr 2, 2014, at 6:49 AM, Apostolos Manolitzas wrote:
> Hello, as far as I can see from the UBIFS FAQ http://www.linux-mtd.infradead.org/doc/ubifs.html#L_xattr
> UBIFS supports extended attributes if the corresponding configuration option is enabled (no additional mount options are required). It supports the user, trusted, and security namespaces. However, access control lists (ACL) support is not implemented.
> Beyond that, no clue if it's working.
> -Apostolos
>
>> On 04/02/2014 02:41 PM, Carlos Capriotti wrote:
>> Hi. I am not very familiar with those filesystems you mentioned, but as a rule of thumb, an FS for gluster has to support extended attributes, so this is a good way to start: check if you can do that.
>> Also, when setting up gluster with the recommended configuration, you are supposed to use XFS, and the inodes have to be defined with -i size=512.
>> If experimenting with other FSes, make sure you can set the inode size as well, just to be on the safe side.
>> EXT4 is not a good option right now, just in case you are wondering. Lots of discussion and documentation on those issues in past threads of the list.
>> Cheers.
>>
>>> On Wed, Apr 2, 2014 at 1:34 PM, Apostolos Manolitzas wrote:
>>> Hello all, I just discovered GlusterFS while looking for a solution for high availability on our NAND flashes. We use ubifs and jffs2 for the filesystem and we would like to apply some high-availability strategy to a part of the flash. So has anyone tested GlusterFS with this setup? Is it a viable solution or should we move to an upper-layer solution?
>>> Thanks for any opinion,
>>> -Apostolos
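As a starting point for Carlos's suggestion, a quick xattr smoke test might look like this (assumes the attr package's setfattr/getfattr tools are installed, and a path on the filesystem under test; the path below is made up):

```
f=/mnt/candidate-fs/xattr-probe
touch "$f"
# user namespace: should succeed on any xattr-capable fs
setfattr -n user.probe -v ok "$f" && getfattr -n user.probe --only-values "$f"
# gluster bricks also rely on the trusted.* namespace, which needs root:
setfattr -n trusted.probe -v ok "$f"
```

If the user-namespace round trip prints "ok" and the trusted-namespace set succeeds as root, the fs clears the basic xattr bar; inode size and the EXT4 caveats Carlos mentions are separate checks.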
Re: [Gluster-users] PLEASE READ ! We need your opinion. GSOC-2014 and the Gluster community
I vote for 3, 2, 1.

Best: the accelerator looks cool and would be awesome for supporting high-throughput workloads with different consistency guarantees. In hadoop this could be valuable, I think, when we want fast ETL but can wait a few seconds for files to be available in the global namespace.

2nd best: the split-brain project. I'd spin it... I think it's best to focus on good tests for this type of thing, maybe even reproducible VM testing using vagrant or docker, to demonstrate distributed bugs like this if possible.

By the way, just to confirm that this is all you needed... I missed this thread earlier. We need more GSoC participation, right? And we have 3 projects we need to vote on? Thanks for bringing our attention to this!

> On Mar 13, 2014, at 7:29 AM, "Bernhard Glomm" wrote:
> Thanks for your ambition, Carlos. I'm not much of a developer, but for what it's worth, here are my thoughts:
> Your #1) sounds great, but does that mean object store, or will it still be whole files that are handled? Up to now I loved the feeling of having at least my files on the bricks if something went really wrong, and not ending up with a huge number of x-MB-sized snippets splattered around (well, I know the pros and cons, and it depends on scenario and size, but still...)
> Your #2) could be incorporated in #1) somehow? "Minimum 3 bricks, 1 per node" with some kind of quorum mechanism???
> As #3) I would vote for reliable encryption, to be able to use third-party storage. Just speed??? That is affected by too many other things: bandwidth, speed of the underlying storage, different speeds of different bricks... so speed I would vote down to rank #4) ;-)
> Bernhard
>
> Am 13.03.2014 12:10:17, schrieb Carlos Capriotti:
> Hello, all. I am a little bit impressed by the lack of action on this topic. I hate to be "that guy", especially being new here, but it has to be done.
> If I've got this right, we have here a chance of developing Gluster even further, sponsored by Google, with a dedicated programmer for the summer. In other words, if we play our cards right, we can get a free programmer and at least a good start/advance on this fantastic project.
> Well, I've checked the trello board, and there is a fair amount of things there. There are a couple of things that are not there as well. I think it would be nice to listen to the COMMUNITY (yes, that means YOU), for either suggestions, or at least a vote.
> My opinion, being also my vote, in order of PERSONAL preference:
> 1) There is a project going on (https://forge.gluster.org/disperse) that consists of re-writing the stripe module in gluster. This is especially important because it has a HUGE impact on Total Cost of Implementation (customer side) and Total Cost of Ownership, and also on matching what the competition has to offer. Among other things, it would allow gluster to implement a RAIDZ/RAID5 type of fault tolerance, much more efficient, and would, as far as I understand, allow you to use 3 nodes as a minimum for stripe+replication. This means 25% less money in computer hardware, with increased data safety/resilience.
> 2) We have a recurring issue with split-brain resolution. There is an entry on trello asking/suggesting a mechanism that arbitrates this resolution automatically. I pretty much think this could come together with another solution, which is a file replication consistency check.
> 3) Accelerator node project. Some storage solutions out there offer an "accelerator node", which is, in short, an extra node with a lot of RAM, eventually fast disks (SSD), that works like a proxy to the regular volumes. Active chunks of files are moved there, and logs (ZIL style) are recorded on fast media, among other things.
> There is NO active project for this, or trello entry, because it is something I started discussing with a few fellows just a couple of days ago. I thought of starting to play with RAM disks (tmpfs) as scratch disks, but, since we have an opportunity to do something more efficient, or at the very least start it, why not?
> Now, c'mon! Time is running out. We need hands on deck here, for a simple vote! Can you share 3 lines with your thoughts?
> Thanks
[Gluster-users] Has anyone dockerized gluster yet?
Hi folks. Well, after James Shubin's tour-de-awesome of Gluster on vagrant, I think we all learned two things:

- James is an awesome hacker
- vagrant isn't ready for primetime on KVM yet

So, an alternative for quick and easy spin-up of gluster systems for dev/test would be a docker recipe for gluster on fedora/centos/... Has anyone set up a docker container that installs and mounts a couple of gluster peers as linux containers yet? I'd love to see how that works, and then maybe layer hadoop in on top of it.

Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] gluster on VM and RDBMS
- I think you will run into the fact that tables are blocks on disk. I have no idea how gluster would handle a table bigger than a brick. Probably it means all your reads will be non-local unless you happen to be on the server that has the brick for the file.
- But then again, with a clever slave configuration, maybe there is a way to leverage gluster for durability as well as the local file system for other tasks?

On Thu, Feb 13, 2014 at 8:47 AM, Targino Silveira wrote:
> Hi Suvendu, I have used Gluster to store old data in a PostgreSQL database. Access to this data was very good, but it was old data; the recent data was stored on SAS disk because of performance.
> Regards, Targino Silveira +55-85-8626-7297 www.twitter.com/targinosilveira
>
> 2014-02-11 3:57 GMT-02:00 Suvendu Mitra:
>> Hi, we are planning to run glusterfs on an openstack-based VM cluster. I understand that gluster replication is file-based. What is the drawback of running an RDBMS on gluster?
>> -- Suvendu Mitra GSM - +358504821066

-- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] Atomic file updates
Hola Jeff: I'm not sure whether your volfile command is complementary, or an alternative, to my simple and easy "mount with entry-timeout=0" option.

Tom: I'm not sure, let's wait for Jeff; he's the hardcore gluster consistency expert. I'm just a user :)

On Wed, Feb 12, 2014 at 5:23 PM, Tom Munro Glass wrote:
> On 02/13/2014 10:24 AM, Jay Vyas wrote:
> > For vanilla apps that are doing stuff in gluster, you normally do it through a fuse mount:
> > mount -t glusterfs localhost:HadoopVol /mnt/glusterfs
> > But in your case, you might want to use some strict consistency settings to make it atomic:
> > mount -t glusterfs localhost:HadoopVol -o entry-timeout=0,attribute-timeout=0 /mnt/glusterfs
> > This will make sure that everything is refreshed when you look up files. This strategy has solved our eventual consistency requirements for the hadoop plugin.
>
> Are you saying that with these mount options I can just write files directly without using flock or renaming a temporary file, and that other processes trying to read the file will always see a complete and consistent view of the file?
> Tom

-- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] Atomic file updates
For vanilla apps that are doing stuff in gluster, you normally do it through a fuse mount:

mount -t glusterfs localhost:HadoopVol /mnt/glusterfs

But in your case, you might want to use some strict consistency settings to make it atomic:

mount -t glusterfs localhost:HadoopVol -o entry-timeout=0,attribute-timeout=0 /mnt/glusterfs

This will make sure that everything is refreshed when you look up files. This strategy has solved our eventual consistency requirements for the hadoop plugin.
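Regardless of mount options, the safest idiom for atomic updates is write-to-temp-then-rename, since rename() within one filesystem is atomic. A minimal sketch (the demo directory below is made up; point GLUSTER_MNT at your real gluster mount):

```shell
# Write the new contents to a temp file on the SAME volume, then
# rename it into place. rename() within one filesystem is atomic,
# so readers see either the old file or the new one, never a torn write.
DIR="${GLUSTER_MNT:-/tmp/atomic-demo}"
mkdir -p "$DIR"
echo "old contents" > "$DIR/data.txt"
echo "new contents" > "$DIR/data.txt.tmp"
mv "$DIR/data.txt.tmp" "$DIR/data.txt"
cat "$DIR/data.txt"   # prints: new contents
```

The temp file must live on the same volume as the target, otherwise mv degrades to a non-atomic copy+delete.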
[Gluster-users] getfattr in complex deployments
Hi gluster. We've noticed something funny in a 2x2 configuration which I haven't seen in a 1x2 config. Given a 2x1 vs. a 2x2 brick configuration, can we expect differences in the distribution of getfattr metadata? Or is getfattr metadata always guaranteed to be available to all servers, regardless of the replication domain? -- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] libgfapi consistency model
Thanks Avati. Just for context, in case others are moving to libgfapi: on our end, we want to improve small-file performance on WRITES, without sacrificing consistency on READS. I'm not sure if that's possible, but we can look deeper into the translators you mentioned as a possible way to get that kind of behaviour. Is there a simple set of parameters stored somewhere (i.e. in /etc/... or something like that) that can guide libgfapi behaviour, similar to the "-o" mount options we pass to the FUSE mount?

On Wed, Feb 5, 2014 at 5:26 PM, Anand Avati wrote:
> Jay, there are a few parts to consistency.
> - File data consistency: libgfapi by itself does not perform any file data caching; it is entirely dependent on the set of translators (write-behind, io-cache, read-ahead, quick-read) that are loaded, and the effect of those xlators is the same in both FUSE and libgfapi.
> - Inode attribute/xattr (metadata) consistency (the thing you tune with --attribute-timeout=N in FUSE): again, libgfapi does not perform any metadata caching and it depends on whether you have loaded the md-cache/stat-prefetch translator.
> - Entry consistency: this is remembering dentries (e.g. "the name 'file.txt' under directory having gfid 12345 maps to file with gfid 48586", or "the name 'cat.jpg' under directory having gfid 456346 does not exist or map to any inode", etc.) and is similar to the thing you tune with --entry-timeout=N in FUSE. libgfapi remembers such dentries in an optimistic way, such that the path resolver re-uses the knowledge for the next path resolution call. However, the last component of a path is always resolved "uncached" (even if the entry is available in cache), and upon any ESTALE error the entire path resolution + fop is re-attempted in a purely uncached mode. This approach is very similar to the retry-based optimistic path resolution in the more recent linux kernel vfs.
> HTH, Avati
>
> On Wed, Feb 5, 2014 at 8:31 AM, Jay Vyas wrote:
>> Hi gluster! How does libgfapi enforce filesystem consistency? Is it better at this than existing FUSE mounts, which require the *-timeout parameters to be set to 0? Thanks! This is important to us in hadoopland.
>> -- Jay Vyas http://jayunit100.blogspot.com

-- Jay Vyas http://jayunit100.blogspot.com
[Gluster-users] libgfapi consistency model
Hi gluster! How does libgfapi enforce filesystem consistency? Is it better at this than existing FUSE mounts, which require the *-timeout parameters to be set to 0? Thanks! This is important to us in hadoopland. -- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] gluster CPU cycles at 100%
The workload: copying 21,000 small 1 KB files into gluster. The CPU is spiked the whole time and it takes a long time to do the copying. There are some other moving parts, but I'm pretty sure gluster shouldn't spike so high for just 21 MB of data, right? There might be an error in my numbers above, so don't lose sleep; I'm still trying to reproduce this "bug"... if that's what it is.

On Tue, Jan 28, 2014 at 7:11 PM, Paul Robert Marino wrote:
> Well, what do you mean by 100% CPU: just 1 core, or multiple? Also, what's your IO wait percentage on your gluster servers?
> It's not uncommon for gluster to spike up to use more than a whole core.
> You may simply be exhausting the effective write cache on the gluster servers, in which case, unless you are striping your data across many raids, you may just be hitting the limits of your hardware.
> You may also have client(s) that are asymmetrically connecting to only one or a small subset of servers due to a connectivity issue, which would cause that one node or group of nodes to bear the brunt of replicating the entire workload via constant self-heal.
> To help you, I would rather know more about your configuration before giving advice.
> -- Sent from my HP Pre3
>
> On Jan 28, 2014 14:08, Harshavardhana wrote:
> You should take a 'gluster' statedump of the process - and open a bugzilla for analysis with logs.
>
> On Tue, Jan 28, 2014 at 10:30 AM, Jay Vyas wrote:
> > Hi folks: I'm running mahout on top of gluster using the GlusterFileSystem hadoop plugin. It works well, but I'm noticing that glusterfsd is at 100% CPU, and a copy action seems to be stalling after a long workload. Any way to rescue the system, or should I restart it?
> > -- Jay Vyas http://jayunit100.blogspot.com
>
> -- Religious confuse piety with mere ritual, the virtuous confuse regulation with outcomes

-- Jay Vyas http://jayunit100.blogspot.com
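For anyone trying to reproduce this, a tiny generator for the workload above (point OUT at a scratch dir first, then at a gluster mount; N=100 below just to keep the example quick, while the case above used 21,000):

```shell
# Create N files of 1 KB each -- the metadata-heavy pattern that tends
# to hurt distributed filesystems far more than the raw byte count
# (21,000 x 1 KB is only ~21 MB of data but 21,000 create/write/close cycles).
OUT="${OUT:-/tmp/smallfile-demo}"
N="${N:-100}"
mkdir -p "$OUT"
i=0
while [ "$i" -lt "$N" ]; do
  head -c 1024 /dev/zero > "$OUT/file_$i"
  i=$((i + 1))
done
ls "$OUT" | wc -l   # number of files created
```

Watching glusterfsd in top while this runs against the FUSE mount should show whether the CPU spike tracks file count rather than data volume.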
Re: [Gluster-users] gluster CPU cycles at 100%
Ah, never mind, googled it: shoulda done this earlier.

On Tue, Jan 28, 2014 at 4:58 PM, Jay Vyas wrote:
> bump ^^ what's a "gluster statedump" and how do I create one? This 100+% CPU issue is very intriguing, and I'd like to create a bug for it (if it indeed is a bug).
>
> On Tue, Jan 28, 2014 at 3:07 PM, Jay Vyas wrote:
>> okay... :) .. but what's a "gluster statedump"?
>>
>> On Tue, Jan 28, 2014 at 2:08 PM, Harshavardhana wrote:
>>> You should take a 'gluster' statedump of the process - and open a bugzilla for analysis with logs.
>>>
>>> On Tue, Jan 28, 2014 at 10:30 AM, Jay Vyas wrote:
>>> > Hi folks: I'm running mahout on top of gluster using the GlusterFileSystem hadoop plugin. It works well, but I'm noticing that glusterfsd is at 100% CPU, and a copy action seems to be stalling after a long workload. Any way to rescue the system, or should I restart it?

-- Jay Vyas http://jayunit100.blogspot.com
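For anyone else who lands on this thread from a search, the short version of what I found (volume name hypothetical, and the dump location can vary by version/configuration):

```
# server side, per volume:
gluster volume statedump myvol
# dump files land under /var/run/gluster/ by default,
# named roughly <brick-path>.<pid>.dump.<timestamp>
```

Attach those dump files, plus logs, to the bugzilla as Harshavardhana suggested.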
Re: [Gluster-users] gluster CPU cycles at 100%
bump ^^ what's a "gluster statedump" and how do I create one? This 100+% CPU issue is very intriguing, and I'd like to create a bug for it (if it indeed is a bug).

On Tue, Jan 28, 2014 at 3:07 PM, Jay Vyas wrote:
> okay... :) .. but what's a "gluster statedump"?
>
> On Tue, Jan 28, 2014 at 2:08 PM, Harshavardhana wrote:
>> You should take a 'gluster' statedump of the process - and open a bugzilla for analysis with logs.
>>
>> On Tue, Jan 28, 2014 at 10:30 AM, Jay Vyas wrote:
>> > Hi folks: I'm running mahout on top of gluster using the GlusterFileSystem hadoop plugin. It works well, but I'm noticing that glusterfsd is at 100% CPU, and a copy action seems to be stalling after a long workload. Any way to rescue the system, or should I restart it?

-- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] gluster CPU cycles at 100%
Okay... :) .. but what's a "gluster statedump"?

On Tue, Jan 28, 2014 at 2:08 PM, Harshavardhana wrote:
> You should take a 'gluster' statedump of the process - and open a bugzilla for analysis with logs.
>
> On Tue, Jan 28, 2014 at 10:30 AM, Jay Vyas wrote:
> > Hi folks: I'm running mahout on top of gluster using the GlusterFileSystem hadoop plugin. It works well, but I'm noticing that glusterfsd is at 100% CPU, and a copy action seems to be stalling after a long workload. Any way to rescue the system, or should I restart it?

-- Jay Vyas http://jayunit100.blogspot.com
[Gluster-users] gluster CPU cycles at 100%
Hi folks: I'm running mahout on top of gluster using the GlusterFileSystem hadoop plugin. It works well, but I'm noticing that glusterfsd is at 100% CPU, and a copy action seems to be stalling after a long workload. Any way to rescue the system, or should I restart it? -- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] Automatically Deploying GlusterFS with Puppet-Gluster+Vagrant!
Thanks, James. Also, looking at the code, there is a lot of use of advanced puppet features (hiera, for example)... during the screencast, some hints about how the puppet modules are coded would also be awesome. Thanks again for all this work.

On Mon, Jan 27, 2014 at 10:07 AM, James wrote:
> On Mon, Jan 27, 2014 at 9:41 AM, Kaushal M wrote:
> > This. I hadn't done a recursive clone. I cloned the repo correctly again and everything works now. The vms are being provisioned as I type this. Finally, time to test puppet-gluster.
>
> Awesome! I'm actually recording a screencast of the whole process. I figured I might as well, in the hopes that visualizing the process helps others! I'll post shortly; it might help with any other confusion.
> Cheers, James

-- Jay Vyas http://jayunit100.blogspot.com
[Gluster-users] Forge wiki pages are too narrow
Hi johnmark: I think the glusterforge wiki pages are WAY too narrow for sophisticated docs. Can you fix them so that wiki pages can be wider, and thus shorter and easier to read without as much scrolling? -- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] puppet-gluster from zero: hangout?
Hi James.

Re subject matter: yup - puppet-gluster from zero (no assumed knowledge of puppet). I think it will drive adoption of your repo, because at the moment a lot of gluster users (at least, some that I know) need a simpler deployment strategy but don't have the time to grasp the steep puppet learning curve.

Re hangout: you shouldn't need to install any special software; it's just a URL that you go to.

On Mon, Dec 16, 2013 at 7:44 AM, James wrote:
> On Mon, Dec 16, 2013 at 7:35 AM, John Mark Walker wrote:
> > I'm unclear on why you can't do Google hangout.
>
> I don't have google hangout software installed on my computer.

-- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] puppet-gluster from zero: hangout?
Hi James. I guess a static video is okay for me, but I would prefer an interactive session. If any other technology fits the bill better for an interactive session, let us know. Otherwise, I look forward to the video and we can go from there!

On Sun, Dec 15, 2013 at 1:36 PM, James wrote:
> On Sun, 2013-12-15 at 13:32 -0500, John Mark Walker wrote:
> > Let's send a reminder to jmwbot so it's on my calendar :)
> Ahh! Super glad you saw it ;)

-- Jay Vyas http://jayunit100.blogspot.com
[Gluster-users] Are directory moves cheap?
Hi gluster! Are move/rename ops over directories in fuse-mounted gluster generally fast when there are TBs of data in the underlying dirs?

...Context follows... One of the common operations in hive data manipulation is "load data", where you move a set of files from one place to another when ETLing a data set into a cluster. In our hadoop implementation (GlusterFileSystem), we use the fuse mount to do renames using local posix commands. I'm wondering this because in hadoop (hdfs), moving directories is (iirc) a relatively cheap metadata operation.
[Gluster-users] puppet-gluster from zero: hangout?
Hey James and JMW: can/should we schedule a google hangout where James spins up a puppet-gluster-based gluster deployment on fedora from scratch? Would love to see it in action (and possibly steal it for our own vagrant recipes).

To speed this along, assuming James is in England (correct me if I'm wrong), let me propose a date: Tuesday at 12 EST (that's 5 PM in London, which I think should work for James as well as us out here on the other side of the pond).

-- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] Scalability - File system or Object Store
In object stores you sacrifice the consistency guaranteed by filesystems for **higher** availability. Probably by "scale" you mean higher availability, so... the answer is probably object storage.

That said, gluster is an interesting file system in that it is "object-like" (it is really fast for lookups), and so if you aren't really sure you need objects, you might be able to do just fine with gluster out of the box.

One really cool idea that is permeating the gluster community nowadays is the "UFO" concept: you can easily start with regular gluster, and then layer an object store on top at a later date if you want to sacrifice posix operations for (even) higher availability.

"Unified File and Object Storage - Unified file and object storage allows admins to utilize the same data store for both POSIX-style mounts as well as S3 or Swift-compatible APIs." (from http://gluster.org/community/documentation/index.php/3.3beta)

On Mon, Dec 9, 2013 at 10:57 AM, Randy Breunling wrote:
> From any experience... which has shown to scale better: a file system or an object store?
> --Randy, San Jose CA

-- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] Gluster Community Weekly Meeting
Wait, so is this just IRC, or a google hangout?

On Wed, Dec 4, 2013 at 7:20 AM, James wrote:
> Is this happening today in about 3 hours?
> Cheers, James
>
> On Thu, Nov 28, 2013 at 3:02 AM, Vijay Bellur wrote:
> > The following is a new meeting request:
> > Subject: Gluster Community Weekly Meeting
> > Organizer: "Vijay Bellur"
> > Location: #gluster-meeting on irc.freenode.net
> > Time: 8:30:00 PM - 9:30:00 PM GMT +05:30 Chennai, Kolkata, Mumbai, New Delhi
> > Recurrence: Every Wednesday, no end date, effective Dec 4, 2013
> > Invitees: gluster-users@gluster.org; gluster-de...@nongnu.org
> >
> > Greetings, this is the weekly slot to discuss all aspects concerning the Gluster community.
> > Etherpad for the meeting - http://titanpad.com/gluster-community-meetings. Please feel free to add your agenda items in the Etherpad before the meeting.
> > Cheers, Vijay

-- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] A question about comparing gluster to other storage
- GlusterFS will be better if you want to do directory operations and want a "real" filesystem (i.e. where you can grep stuff, edit files, have consistency guarantees, etc.), and swift will be better for pure scale (less need to worry about metadata == easier to scale on commodity hardware).
- Certainly both glusterFS and swift would scale to petabytes (object stores are built for that sort of thing, as is gluster).

On Tue, Nov 26, 2013 at 5:25 PM, Randy Breunling wrote:
> I'm relatively new to the gluster community and wanted to know if there's
> anyone out there that can talk to me, or point me to comparative
> information on the following:
> - glusterFS
> - SWIFT
> - CEPH
>
> I am interested in a solution that is object-based and will scale into
> single-digit petabytes.
> I'd like to know of experiences with these solutions that have to do with
> large-scale deployments.
>
> If this is not the/a correct forum to discuss this...please let me know.
>
> Thanks...
>
> --Randy Breunling
> rbreunl...@gmail.com

-- Jay Vyas http://jayunit100.blogspot.com
[Gluster-users] in honor of Frederick Sanger
Hey John Mark. I saw that you recently mentioned some work using gluster for sequencing data (where there are a lot of intermediates, and sometimes huge raw input data sets that get denoised). http://184.106.200.248/2012/07/improving-high-throughput-next-gen-sequencing/

Well, today Frederick Sanger, the guy who pretty much created the technology necessary for generating "bioinformatics" data, has passed away. In honor of Frederick Sanger, it might be interesting to see a community post on all the genomics and bioinformatics organizations out there using gluster to store tera- and petabytes of genomic or bioinformatics data.

http://www.genengnews.com/gen-news-highlights/frederick-sanger-father-of-genomic-era-dies-at-95/81249136/
http://www.eaglegenomics.com/2013/03/glusterfs-vs-a-future-distributed-bioinformatics-file-system/

Anyway, just a thought.

-- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] Gluster KVM scripts
- screencast is perfect :) that should help enough for us to hack the rest of the bits.
- re: private keys, ah yes, no problem. That's the "insecure" private/public vagrant key used for simple setups; see https://github.com/mitchellh/vagrant/tree/master/keys.
- regarding the "big data" stuff: the spirit here is to unify and simplify gluster deployment, possibly using your puppet recipes, so that all of us can build new tech on top of it.
- just saw your updated docs email. They look beautiful :) thanks James. Now I guess we have even less of an excuse not to leverage the existing gluster puppet incantations you've been working on!

On Sun, Nov 17, 2013 at 6:28 PM, James wrote:
> On Sun, 2013-11-17 at 17:29 -0500, Jay Vyas wrote:
> > hi james, moving this to public.
> Moved to gluster-users then.
>
> > The subject was how to start using james's puppet modules with the
> > vagrant/gluster examples that we are working on.
> I can't help you with anything vagrant related until it properly
> supports libvirt/kvm.
>
> I think it's actually a F20 goal:
> https://fedoraproject.org/wiki/Changes/Vagrant
>
> > So here are some bullets to move things forward.
> >
> > - Here are the functions for creating the gluster setup :
> > https://forge.gluster.org/vagrant/fedora19-gluster/blobs/master/gluster-hbase-example/setup.sh
> You know that your _private_ key is visible in that file, right?
>
> > We basically create a fake disk using truncate, assign it as a brick for
> > the gluster volume, and then mount. From there, we point hbase to that
> > mount point and that's all there is.
> >
> > - lets disregard the hbase part for now,
> Agreed.
>
> > and maybe you could create a
> > "vagrant+puppet+gluster" starter project that uses some of the logic from
> See above about vagrant. Maybe after F20 is released.
>
> > this? From there maybe we could work together to hack in the
> > hbase/hadoop/whatever bits to make a puppetized version of these bash
> > files.
> >
> > The advantage in my eyes of moving to your puppet:
> >
> > 1) mister james maintains the gluster bits :) :) :)
> This I'm happy to do.
>
> > 2) Less implementation details, more logic on how we integrate gluster
> > with bigdata tools
> I'm happy to work on this type of thing, but this sounds more like a
> consulting or needs donations project. I only have 2 vm's to test
> puppet/gluster on. bigdata probably implies > 20GiB :P
>
> > 3) the gluster community gets a cool example for learning how to use
> > puppet and gluster together in a completely reproducible, zero startup
> > environment.
>
> Have you looked at gluster::simple ?
> https://github.com/purpleidea/puppet-gluster/blob/master/examples/gluster-simple-example.pp
>
> AFAICT, that's all you need. My understanding is that you're just trying
> to build a simple throw away cluster... Let me know if I misunderstood.
> If you want to customize your volume further, you can use it like this:
>
> class { '::gluster::simple':
>     #path => '',    # defaults to $vardir/data/
>     # NOTE: this can be a list...
>     volume => ['hbase', 'foobar'],
>     replica => 1,
> }
>
> HTH! For now, I'll think about adding a screencast and better docs.
>
> James
>
> > On Sun, Nov 17, 2013 at 5:03 PM, James wrote:
> > > On Sun, 2013-11-17 at 10:52 -0500, Jay Vyas wrote:
> > > > Hi there mister james... !
> > > Hey,
> > >
> > > > As im not much of a puppet expert, im still not quite sure how to
> > > > replace my bash scripts with your puppet gluster modules.
> > > That I can help with ;)
> > >
> > > Tell you what, if you send me your bash scripts, I'll even "port" them
> > > to puppet-gluster for you. (Or I'll try anyways.)
> > >
> > > > Can we create a "puppet on gluster from zero" community page or blog
> > > > post or readme update? I'd LOVE to use your puppet modules to drive
> > > > some stuff I'm doing for bigtop, and think it would be a huge win for
> > > > broader gluster adoption.
> > > Can you give me more information about what/how you're trying to drive?
> > > I don't know what a bigtop is (other than a circus tent).
[Gluster-users] Large files.. What's the latest news?
Hi folks. The previous question regarding xfs got me thinking... any news on these two questions, or existing modifications to gluster underway, to deal with larger-than-brick files?

1) Are bricks aware of the total underlying file system size?
2) And when we try to write a file that is hashed to a machine without enough disk space, is gluster smart enough to forward/rebalance the file to be on a different node?
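To make the second question concrete: gluster's distribute translator places a whole file on one brick chosen by hashing its name (the real implementation assigns hash ranges to bricks per directory; the sketch below is only a toy illustration of the idea, with made-up brick names):

```python
import hashlib

def pick_brick(filename, bricks):
    """Toy stand-in for hash-based file placement. Real gluster assigns
    hash ranges per directory and uses its own hash, not md5; this just
    shows why placement is driven by the name, not by free space."""
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    return bricks[int(digest, 16) % len(bricks)]

bricks = ["server1:/brick", "server2:/brick", "server3:/brick"]
print(pick_brick("bigfile.iso", bricks))  # same brick every time for this name
```

The point is that the name hash, not available capacity, picks the target, which is exactly why a file can land on a brick that lacks room for it unless the system explicitly redirects it elsewhere.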
Re: [Gluster-users] Gluster "Cheat Sheet"
+1 for cheat sheets. i always like when these things are embedded in the actual README, or else, somewhere else in the code, because they stay up to date better that way. ___ Gluster-users mailing list Gluster-users@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] Cassandra on Gluster
hi there Brian.

1) I don't think anyone in the gluster community has done much with Cassandra on gluster... but I could be wrong. See (3) for my (postulated) reason why.

2) Have you considered *HBase* (which we are actively testing and playing with on many hadoop distributions) or *Riak*? Those are both much more "gluster friendly", I think, in that they have a more modular architecture. Also, because Cassandra requires ridiculously large files, I always wonder whether any file system (other than Cassandra's own) is really *that* good of a fit.

3) Just my opinion, but whereas Riak has a modular storage backend, and HBase has an abstract "org.apache.hadoop.FileSystem" backend interface, Cassandra sort of tries to do everything: firstly, providing key/value storage infrastructure, and secondly, providing its own distributed file system implementation, making it a little bit more monolithic of a component in your stack.

4) FYI: We've recently set up an HBase on Gluster two node vagrant recipe. You should play with it if you get a chance:

git clone https://forge.gluster.org/vagrant/fedora19-gluster/trees/master
cd gluster-hbase-example
vagrant destroy --force && vagrant up

On Fri, Nov 1, 2013 at 8:36 AM, Brian wrote:
> Has anyone set up Cassandra on a Gluster file system (commit logs disk &
> data disks)?
>
> If so - any recommended gluster tweaks/settings?
>
> According to Datastax - these disks should be formatted to XFS. Any
> recommended settings for this?
>
> Lastly - In general has anyone used Gluster with a MySQL database and what
> have they seen as performance and bottlenecks?
>
> --
> Brian Silverwood
> Systems Administrator
> Viddler | Viddler.com <http://www.viddler.com> | 215-962-5829

-- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] metadata for stat : Should it be identical?
thanks Alex... fair enough; but how could metadata for a distributed file possibly vary across a cluster? Is that expected behaviour? I guess it tells you that the metadata for the file is not acquired from any single location, but rather is determined, stored, and maintained separately by different machines; so if one machine's clock is off, it won't share the same view of the file as the others.

So I guess I'm wondering if that's a bug? Or maybe a feature and I just don't realize it yet :) ?

On Thu, Oct 24, 2013 at 3:37 PM, Alex Chekholko wrote:
> You definitely want time synchronization between all of your machines.
>
> It's super easy to do these days; just installing the ntp package with
> whatever your distribution's default settings should do the trick.
>
> e.g. 'aptitude install ntp' on Debian/Ubuntu
>
> If you have to choose NTP servers, just use pool.ntp.org
> http://www.pool.ntp.org/en/use.html
>
> On 10/24/2013 12:14 PM, Jay Vyas wrote:
>> FYI, these two machines are not clock synchronized, but -- should that
>> be an issue?
>
> --
> Alex Chekholko ch...@stanford.edu 347-401-4860

-- Jay Vyas http://jayunit100.blogspot.com
[Gluster-users] metadata for stat : Should it be identical?
Hi folks. I'm wondering whether I should expect gluster to always return identical stat info for a file on the system. It seems like, by definition of a "distributed file system", the answer would be yes. But I'm not seeing that. Rather, I'm seeing different metadata from the stat command; in particular, the modification time is not the same.

FYI, these two machines are not clock synchronized, but -- should that be an issue? I would assume there would be one central source of truth for any stat info on a file in the global namespace, right?

# ssh mrg11 stat /mnt/glusterfs/user/yarn/.staging/job_1382637900757_0002/appTokens
  File: `/mnt/glusterfs/user/yarn/.staging/job_1382637900757_0002/appTokens'
  Size: 38      Blocks: 1       IO Block: 131072  regular file
Device: 13h/19d Inode: 11771249148879556417  Links: 1
Access: (0644/-rw-r--r--)  Uid: ( 493/yarn)   Gid: ( 490/yarn)
Access: 2013-10-24 14:22:28.858586945 -0400
Modify: 2013-10-24 14:22:28.861593090 -0400
Change: 2013-10-24 14:22:28.861593090 -0400

# ssh mrg12 stat /mnt/glusterfs/user/yarn/.staging/job_1382637900757_0002/appTokens
  File: `/mnt/glusterfs/user/yarn/.staging/job_1382637900757_0002/appTokens'
  Size: 38      Blocks: 1       IO Block: 131072  regular file
Device: 13h/19d Inode: 11771249148879556417  Links: 1
Access: (0644/-rw-r--r--)  Uid: ( 493/yarn)   Gid: ( 490/yarn)
Access: 2013-10-24 14:22:29.868940729 -0400
Modify: 2013-10-24 14:22:29.871694979 -0400
Change: 2013-10-24 14:22:29.871694979 -0400
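For what it's worth, the roughly one-second drift above is the kind of thing a small skew tolerance would absorb when comparing stat results gathered from different nodes. A hypothetical helper (not part of gluster, just a sketch) might look like:

```python
import os
import tempfile

def stats_agree(st_a, st_b, tol_seconds=2.0):
    """Compare two os.stat_result objects for the same file, allowing
    mtime to differ by up to tol_seconds to absorb clock skew between
    the nodes that produced each result."""
    return (st_a.st_size == st_b.st_size
            and st_a.st_ino == st_b.st_ino
            and abs(st_a.st_mtime - st_b.st_mtime) <= tol_seconds)

# local sanity check: a stat result trivially agrees with itself
with tempfile.NamedTemporaryFile() as f:
    st = os.stat(f.name)
    print(stats_agree(st, st))  # → True
```

In practice you would collect the two stat results over ssh from each node and feed them to a comparison like this; with NTP in place the tolerance could be tightened well below a second.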
Re: [Gluster-users] New to GlusterFS
So I'm sure it would work for you the same as any distributed fs would, and you could always optimize things later if it isn't fast enough. Gluster is very tunable. I'm curious -- do you know what your access patterns are going to be? Is this for testing or for a real production system?

1) If the kvm boxes are simply a way for you to have different services that might scale up/down, and you're unifying their storage, then it sounds plausible. Certainly it's easier than unifying a bunch of manually curated virtual disks.

2) And remember - gluster is very stackable - and hadoop friendly, so you can put it underneath mapreduce if you want to process your data in parallel; or else, with volumes, you can decrease/increase replication of certain areas as needed.

3) Compared to alternatives like nfs/hdfs, gluster will give you fast lookups, no need for a centralized server, and no SPOF... and it is super easy to install.

FYI we have vagrant setups for two node VMs, and would love to automate gluster-on-kvm configurations for people like yourself to try out.

> On Oct 22, 2013, at 5:57 AM, JC Putter wrote:
>
> Hi,
>
> I am new to GlusterFS, i am trying to accomplish something which i am
> not 100% sure is the correct use case but hear me out.
>
> I want to use GlusterFS to host KVM VM's, from what I've read this was
> not recommended due to poor write performance however since
> libgfapi/qemu 1.3 this is now viable ?
>
> Currently i am testing out GlusterFS with two nodes, both running as
> server and client
>
> i have the following Volume:
>
> Volume Name: DATA
> Type: Replicate
> Volume ID: eaa7746b-a1c1-4959-ad7d-743ac519f86a
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: glusterfs1.example.com:/data
> Brick2: glusterfs2.example.com:/data
>
> and mounting the brick locally on each server as /mnt/gluster;
> replication works and everything but as soon as i kill one node, the
> directory /mnt/gluster/ becomes unavailable for 30/40 seconds
>
> log shows
>
> [2013-10-22 11:55:48.055571] W [socket.c:514:__socket_rwv]
> 0-DATA-client-0: readv failed (No data available)
>
> Thanks in advance!
Re: [Gluster-users] A "Wizard" for Initial Gluster Configuration
yeah this is a nice tool. I think developers could use it too. You know what: I've seen a lot of incantations of this script: https://forge.gluster.org/gluster-deploy/gluster-deploy/blobs/master/scripts/findDevs.py

I think it would be ideal to take your utilities and put them in a separate project that's maintained by the community - gluster-dev-utils or something. I like the idea of lower level bash/python APIs for managing gluster setups, LVM, etc. that hide the implementation.

On Fri, Oct 11, 2013 at 2:14 PM, Nux! wrote:
> On 10.10.2013 01:08, Paul Cuzner wrote:
>> Hi,
>>
>> I'm writing a tool to simplify the initial configuration of a
>> cluster, and it's now in a state that I find useful.
>>
>> Obviously the code is on the forge and can be found at
>> https://forge.gluster.org/gluster-deploy
>>
>> If you're interested in what it does, but don't have the time to look
>> at the code, I've uploaded a video to youtube:
>> http://www.youtube.com/watch?v=UxyPLnlCdhA
>
> Impressive video, it looks very nice and I'm sure there are many "IT
> managers" out there who would love this.
> Good job!
>
>> Feedback / ideas / code contributions - all welcome ;o)
>
> Please add at least some basic volume management support (create/delete,
> acl). :)
>
> Regards,
> Lucian
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro

-- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] (no subject)
thanks james :) how about a INSTALL_FROM_ZERO instructions in the readme? :) On Fri, Oct 4, 2013 at 10:31 AM, James wrote: > On Fri, Oct 4, 2013 at 9:36 AM, Jay Vyas wrote: > > Ah yes thanks James..! Forgot about this - haven't adopted it yet > because I still don't > know puppet to well.. > I'm happy to help if you get stuck. > > > but the two node vagrant recipe on the forge is more for beginners who > don't know what config they want - but just want to play with an > operational multi node gluster stack. > Similar to people wanting to > download an ISO. > > Sure thing! > > > > > So, I guess someday lets join forces and put a vagrant recipe to use > puppet to create a Virtual fedora cluster on the fly. That will take the > best of both worlds: vagrant for setting up machines and puppet for > transparently configuring them. > > I'm happy to help hack on this front. I have a new puppet-gluster > class coming up which is a one line "include gluster::simple" It's > actually ready, but I am adding a few extra features first. > > Cheers > -- Jay Vyas http://jayunit100.blogspot.com ___ Gluster-users mailing list Gluster-users@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-users
Re: [Gluster-users] (no subject)
Ah yes, thanks James..! Forgot about this - haven't adopted it yet because I still don't know puppet too well. But the two node vagrant recipe on the forge is more for beginners who don't know what config they want - but just want to play with an operational multi node gluster stack. Similar to people wanting to download an ISO.

So, I guess someday let's join forces and put together a vagrant recipe that uses puppet to create a virtual fedora cluster on the fly. That will take the best of both worlds: vagrant for setting up machines, and puppet for transparently configuring them.

> On Oct 3, 2013, at 8:40 PM, James wrote:
>
>> On Thu, 2013-10-03 at 14:14 -0400, Jay Vyas wrote:
>> FYI If youre interested in "Trying" to play with a gluster distribtued
>> set up on VMs, you can try to spin up and have vagrant installed,
>> Checkout this post :
>> http://www.gluster.org/2013/10/instant-ephemeral-gluster-clusters-with-vagrant/.
>> Im using this for my VM's at the moment. Its the simplest way (in my
>> humble opinion) to get started, i.e. a bare bones distributed gluster
>> setup
>
> *cough* or use puppet-gluster instead :)
> https://github.com/purpleidea/puppet-gluster
>
>> in VMs. And its also a playground for learniing because you can hack
>> the shell scripts manually.
>>
>> im hoping over time maybe a few others will use / test it and help
>> provide feedback so we can have more easy setup recipes for POC gluster
>> clusters,
Re: [Gluster-users] (no subject)
FYI: if you're interested in trying to play with a gluster distributed setup on VMs, you can spin one up if you have vagrant installed. Check out this post: http://www.gluster.org/2013/10/instant-ephemeral-gluster-clusters-with-vagrant/. I'm using this for my VMs at the moment. It's the simplest way (in my humble opinion) to get started, i.e. a bare bones distributed gluster setup in VMs. And it's also a playground for learning, because you can hack the shell scripts manually.

I'm hoping over time maybe a few others will use / test it and help provide feedback, so we can have more easy setup recipes for POC gluster clusters.

On Thu, Oct 3, 2013 at 1:11 PM, John Mark Walker wrote:
> The Gluster Storage Platform was a very old attempt to provide a
> downloadable ISO. It no longer exists - and hasn't existed for 2.5 years.
>
> If you're looking for an easily usable product with GlusterFS, the only
> one currently available is from Red Hat under the "Red Hat Storage" brand.
>
> If you want a GUI for GlusterFS, try oVirt at http://www.ovirt.org/ - you
> can use ovirt as management for KVM + Gluster storage pools, or as
> Gluster-only.
>
> Thanks,
> JM
>
> --
>
> Hello Friends,
>
> Please give me the link, How can i download the gluster storage platform
> iso. And how can i get the graphical console of gluster.
>
> --
> *Thanks and Regards.*
> *Vishvendra Singh Chauhan*
> *+91-8750625343*
> http://linux-links.blogspot.com
> God First Work Hard Success is Sure...

-- Jay Vyas http://jayunit100.blogspot.com
[Gluster-users] peer probe stochastic behaviour in shell
Hi everyone: This is quite odd behaviour I saw today in my VMs.

sh-4.2# gluster peer status
peer status: No peers present
sh-4.2# gluster peer probe 10.10.10.12
sh-4.2# gluster peer status
peer status: No peers present
sh-4.2# gluster peer probe 10.10.10.12
peer probe: success

The second call returns NOTHING. The third call returns SUCCESS. Any thoughts on this? FYI, afterwards I can do "peer probe / peer detach" in tandem 30 times with no errors, so maybe there can be a cold start issue with probing? Also FYI, these are F19 VMs which have run "iptables -F".

-- Jay Vyas http://jayunit100.blogspot.com
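If it really is a cold-start race, a blunt client-side workaround is to just retry the probe until the CLI reports success. A generic, hypothetical retry helper (the gluster invocation in the trailing comment is only illustrative, not tested here):

```python
import time

def retry(action, attempts=5, delay=1.0):
    """Call action() until it returns a truthy value or attempts run out.
    Meant for wrapping a flaky CLI call; nothing gluster-specific here."""
    for _ in range(attempts):
        result = action()
        if result:
            return result
        time.sleep(delay)
    raise RuntimeError("gave up after %d attempts" % attempts)

# hypothetical usage against the CLI:
# import subprocess
# retry(lambda: subprocess.call(["gluster", "peer", "probe", "10.10.10.12"]) == 0)
```

Of course this only papers over the symptom; the silent first probe still deserves a bug report.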
Re: [Gluster-users] set up a Gluster 2 node cluster in two lines of code :)
It should work in kvm and VMware if they have the plugins installed; not sure how far along fedora is. The provisioner argument in vagrant now is meant to allow you to deploy to any env, but yeah, I've only tested so far on mac. Let me know if you get a chance to test it in other places. I really would like it if we could all share and centralize our virtualization utilities somewhere; hope this provides us with a start!

> On Sep 30, 2013, at 11:38 AM, Justin Clift wrote:
>
> On Sun, 29 Sep 2013 11:18:37 -0400 Jay Vyas wrote:
>
>> Just finished creating a vagrantized, fully automated, totally rebuildable
>> and teardownable two node fedora 19 gluster setup and shared it on the
>> forge!
>
> Sounds interesting. I've mostly seen people using Vagrant
> from OSX. Does it work from Fedora 19 or similar these days
> as well? :)
>
> Side note - Alex Drahon (CC'd) has been putting effort into
> a KVM extension/plugin/something for Vagrant, but I haven't
> actually tried it yet, so not sure if this would tie in to
> that:
>
> https://github.com/adrahon/vagrant-kvm
>
> :)
>
> + Justin
>
> --
> Justin Clift
[Gluster-users] set up a Gluster 2 node cluster in two lines of code :)
Hi gluster! For those of you who need to spin up virtual gluster clusters for development and testing: just finished creating a vagrantized, fully automated, totally rebuildable and teardownable two node fedora 19 gluster setup, and shared it on the forge! It uses fedora19, but since it's all vagrant powered, you don't need to grab or download a distro or iso or anything; just clone the git repo, run vagrant up, and let vagrant automagically pull down and manage your base box and set up the rest for you.

Clone it here: https://forge.gluster.org/vagrant

So what does this do? This basically means that you can spin up 2 VMs, from scratch, by installing vagrant and then, literally, typing:

git clone g...@forge.gluster.org:vagrant/fedora19-gluster.git
cd fedora19-gluster
ln -s Vagrantfile_cluster Vagrantfile
vagrant up --provision

Does it work? Yes! After watching it spin up, you can ssh into the cluster by typing:

vagrant ssh gluster1

And destroy the same two node cluster by running:

vagrant destroy

Example. Below: ssh into the first node, create a file, and ls it on the other node to confirm that the cluster is working.

[jays-macbook]$ vagrant ssh gluster1
[vagrant@gluster1 ~]$ sudo touch /mnt/glusterfs/a
[vagrant@gluster1 ~]$ ssh 10.10.10.12
[vagrant@gluster2 ~]$ ls /mnt/glusterfs/

Help wanted to maintain this and make development clusters for different gluster use cases:
- bug reporting
- testing functionality of different tuning/config options
- testing hadoop interop

Will blog post about it soon, but till then just ping me if you want to get involved.

-- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] Enabling Apache Hadoop on GlusterFS: glusterfs-hadoop 2.1 released
Thanks for the announcement, Steve! FYI, for those interested, there are a lot of different areas of expertise which can be useful in this project. I've added a list here: https://forge.gluster.org/hadoop/pages/HackIdeas . In my view, people from the Gluster, DevOps, Linux, and MapReduce communities could all contribute something useful to this project.
Re: [Gluster-users] Gluster to support scalable web site
so are you asking if gluster would do all the load balancing for you for static content? I bet it could, if you created a volume with extremely high replication. This is potentially similar to the idea of using a gluster distributed volume to replace a distributed file cache. Curious if anyone has tried this, and whether the results are as good as / comparable to just manually replicating the files on normal disks.

On Tue, Aug 6, 2013 at 10:15 AM, Michael.OBrien wrote:
> Hi Gluster Gurus
>
> I'm looking at implementing a web project that needs to be able to scale,
> but starting out very small, and I think gluster might be a solution for my
> needs, but I have a few questions that if answered would help clarify
> things in my head.
>
> 1. Is gluster a viable solution for hosting web site content (site files
> themselves as well as user submitted, but not db hosting)? Another forum
> pointed me to gluster as a way, but I just want to check with the experts,
> as I need to be able to scale out from 2 web servers to n servers while
> having data replicated to all the web servers.
> 2. Can I start with having my bricks located on the same server as my web
> server, but then if needed move seamlessly to add more brick servers
> without having to reconfigure my glusterfs mounts?
> 3. How would you recommend I handle anti-virus scanning gluster stored
> content?
> 4. Would you recommend 1 gluster volume mounted as /var/www/ and used to
> serve different web sites from the same web server, or a volume per
> website /var/www/site1/?
>
> I'm also interested to learn about the geo-replication feature as a way of
> keeping a backup of the data in a different location, if there are any
> do's and don'ts on the topic you might like to share.
>
> Thanks in advance for any help you might be able to provide
> Michael

-- Jay Vyas http://jayunit100.blogspot.com
[Gluster-users] higher "op" version
It appears that some peer probes only work in one direction. Why is that the case? I guess the mystery lies in the "op" version value. Not sure what that refers to; I'm using 3.4git, so I assume all versions should be the same, even though the build dates are slightly off, because I build from source on all servers.

Example:

[root@hbase-regionserver3 ~]# gluster peer probe hbase-head
peer probe: failed: Peer hbase-head is already at a higher op-version
[root@hbase-regionserver3 ~]# exit
[root@hbase-master ~]# gluster peer probe hbase-regionserver3
peer probe: success

-- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] Gluster version....
Ah okay. May I suggest bundling the commit id or some other unambiguous information into the source build ? On Jul 18, 2013, at 7:42 AM, Kaleb KEITHLEY wrote: > On 07/17/2013 09:56 PM, Jay Vyas wrote: >> hi gluster : >> >> Okay so ive been playing around in my vm, and wanted to check my >> version. Here is the output: >> >> [root@fedoravm glusterfs]# gluster --version >> glusterfs 3git built on Mar 27 2013 20:39:59 >> Repository revision: git://git.gluster.com/glusterfs.git >> <http://git.gluster.com/glusterfs.git> >> Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com> >> GlusterFS comes with ABSOLUTELY NO WARRANTY. >> You may redistribute copies of GlusterFS under the terms of the GNU >> General Public License. >> >> >> I just noticed that no actual gluster version is pringted ? Am I >> missing something ? Or is this just because I built it from source, so >> it simply doesnt tag the version when you build from source? > > You're running version 3git, i.e. the version that's displayed when you build > from source. > > When you install the Official Community RPMs from > http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/ you'll get > 3.4.0, and that's the version that will be displayed. There are packages > there for EPEL (RHEL, CentOS, etc), Fedora, and Ubuntu; Debian packages are > coming soon. > > For Fedora-18 and -19 the Software Install app and yum will give you 3.3.1 or > 3.4.0beta4 respectively at present. In another week or so those will be > updated to 3.3.2 and 3.4.0, and not long after that f18 will be updated again > to 3.4.0. > > -- > > Kaleb > > > ___ > Gluster-users mailing list > Gluster-users@gluster.org > http://supercolony.gluster.org/mailman/listinfo/gluster-users ___ Gluster-users mailing list Gluster-users@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-users
[Gluster-users] Gluster version....
hi gluster:

Okay, so I've been playing around in my vm and wanted to check my version. Here is the output:

[root@fedoravm glusterfs]# gluster --version
glusterfs 3git built on Mar 27 2013 20:39:59
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

I just noticed that no actual gluster version is printed? Am I missing something? Or is this just because I built it from source, so it simply doesn't tag the version when you build from source?

-- Jay Vyas http://jayunit100.blogspot.com
Re: [Gluster-users] One Volume Per User - Possible with Gluster?
Hmmm... but given that Gluster's FUSE client is POSIX compliant, couldn't you just create a single volume and use a customized umask setup on user-named subdirectories in that volume to mimic this behaviour?

On Jul 2, 2013, at 7:25 PM, Joshua Hawn wrote:
> I've been looking into using Gluster to replace a system that we currently use for storing data for several thousand users. With our current networked file system, each user can create volumes, and only that user has access to their volumes with authentication.
>
> I see that Gluster also offers a username/password auth system, which is great, but there are several issues about it that bother me:
>
> [1] Currently all the authentication-related information is passed unencrypted over the network from client to server.
> [2] Currently each volume is managed as a separate process on the server.
>
> [1] is a major security issue for me and [2] is a major scalability issue.
>
> Are either of these issues going to be fixed in the next release, or are there any alternatives that Gluster offers? Also, is the authentication layer only used by the Gluster FUSE client, or is it possible with NFS or CIFS?
>
> I've also wondered if Gluster can support authentication on a sub-directory level. If not, how complicated would it be to modify the source code to enable it? This would let us work around the one-process-per-volume issue.
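The subdirectory-per-user idea above can be sketched like this. The mount point is faked with a temp directory here so the snippet is safe to run anywhere; on a real deployment `MOUNT` would be the gluster FUSE mount, and the `chown` step (which needs root) is left commented. Assumes GNU `stat`:

```shell
# Sketch of per-user "volumes" as mode-700 subdirectories of one mounted
# volume. MOUNT is a stand-in for the real gluster mount point.
MOUNT=$(mktemp -d)
for user in alice bob; do
    mkdir -p "$MOUNT/$user"
    chmod 700 "$MOUNT/$user"          # only the owning user may enter
    # chown "$user" "$MOUNT/$user"    # requires root; shown for completeness
done
stat -c %a "$MOUNT/alice"
```

With ownership set, normal POSIX permission checks on the FUSE mount do the per-user isolation, so no extra per-volume process is needed.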
Re: [Gluster-users] Gluster KVM scripts
Hi James: I didn't know you were behind this :) I saw it the other day. I guess I'd better play around with https://forge.gluster.org/puppet-gluster to see what's available, and maybe I'll post directly here or leave feedback on the Gluster Forge.
[Gluster-users] Gluster KVM scripts
Hi gluster !

Should an automated KVM gluster deployment script be included in the gluster source code, so that people can easily test and reproduce gluster issues in a transparent and reproducible manner? We could centralize it in gluster so that anyone could spin up a gluster instance and easily run the tests against a known setup.

As a next step, automating the creation of distributed Gluster+KVM VMs might be good for easier testing of atomicity / replication / etc.

--
Jay Vyas
http://jayunit100.blogspot.com
[Gluster-users] --fopen-keep-cache glusterfs
Thanks, Brian, for the response regarding caching. What exactly does the --fopen-keep-cache glusterfs option do, and are there any docs on it?

--
Jay Vyas
http://jayunit100.blogspot.com
[Gluster-users] local caching of file across global cluster
Is there any value in / way of telling all the gluster nodes to make a file highly available, potentially at the cost of consistency (i.e. forget about locks for files with a given name and cache them on local disk)?

Scenario: Imagine I have a workflow processing 1 million files, and I want to compare each of those files to all the words in, say, a set of ten files, each of which is 10MB. It would be easy to cache the ten files (100MB of data) on every local gluster node. Or even in memory, for that matter.

Admittedly, I'm not an expert on disk caching, so maybe this is already done for us using heuristics... and it's just a matter of time before the FUSE / underlying filesystem / gluster mount figures out that a file is important and starts caching it in some magical sort of way.

--
Jay Vyas
http://jayunit100.blogspot.com
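In the absence of such a knob, the manual version of this is simply to pre-copy the small lookup files to node-local disk (or a tmpfs for in-memory) before the job runs. A rough sketch, with temp directories standing in for the gluster mount and the local cache (all names here are illustrative):

```shell
# Pre-stage small, hot files on local disk so repeated reads avoid the
# network. SRC stands in for the gluster mount, CACHE for local scratch.
SRC=$(mktemp -d); CACHE=$(mktemp -d)
printf 'alpha beta gamma\n' > "$SRC/dict1.txt"
printf 'delta epsilon\n'    > "$SRC/dict2.txt"
cp "$SRC"/dict*.txt "$CACHE/"    # one-time copy per node, before the job
ls "$CACHE" | wc -l
```

Each worker then reads from `$CACHE` instead of the mount; consistency is traded away exactly as described above, since updates to the originals are not seen.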
Re: [Gluster-users] peer probe fails (107)
Ahhh, iptables strikes back ! You were right. I've now got "peer probe: success". :)

*** Lesson learned killing iptables on Fedora 16 ***

I'm on Fedora 16, so this might not be relevant to everyone... but rather than "service iptables stop" (maybe this wasn't really killing all the IP rules), I just manually flushed them with the script below, stolen from http://www.cyberciti.biz/tips/linux-iptables-how-to-flush-all-rules.html

#!/bin/sh
echo "Stopping firewall and allowing everyone..."
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

On Mon, May 20, 2013 at 7:00 PM, Anand Avati wrote:
> Looks like there might be a firewall (iptables) in the way? Can you flush all iptables rules and retry - just to confirm?
>
> Avati
>
> On Mon, May 20, 2013 at 3:45 PM, Jay Vyas wrote:
>> Hi gluster:
>>
>> I'm getting the cryptic 107 error (I guess this means gluster can't see a peer)...
>>
>> gluster peer probe vm-2
>> peer probe: failed: Probe returned with unknown errno 107
>>
>> ...even though I can ssh and ping the server just fine.
>>
>> I've seen other threads regarding this, some of them dealing with the "net.ipv4.ip_nonlocal_bind" parameter, and also a bug https://bugzilla.redhat.com/show_bug.cgi?id=890587 ...
>>
>> But I'm still not sure what the nature of this error is - any thoughts?

--
Jay Vyas
http://jayunit100.blogspot.com
[Gluster-users] peer probe fails (107)
Hi gluster:

I'm getting the cryptic 107 error (I guess this means gluster can't see a peer)...

gluster peer probe vm-2
peer probe: failed: Probe returned with unknown errno 107

...even though I can ssh and ping the server just fine.

I've seen other threads regarding this, some of them dealing with the "net.ipv4.ip_nonlocal_bind" parameter, and also a bug https://bugzilla.redhat.com/show_bug.cgi?id=890587 ...

But I'm still not sure what the nature of this error is - any thoughts?

--
Jay Vyas
http://jayunit100.blogspot.com
Re: [Gluster-users] Glusterfs-Hadoop
Hi again !

You can download the latest copy of the shim from our "beta" glusterfs-hadoop versioning server: http://ec2-54-243-59-213.compute-1.amazonaws.com/archiva/browse/org.apache.hadoop.fs.glusterfs/glusterfs-hadoop/20130507-0.0.3 (click "jar" on the right). This is a temporary staging server for the glusterfs-hadoop plugin.

It has many new features:
- honoring of permissions
- configurable buffered file writing
- batteries of tests
- proper implementation of mkdirs
- significantly increased test coverage

Let us know what you think, and thanks !

On Mon, May 20, 2013 at 3:23 PM, Jay Vyas wrote:
> Hi, and thanks for playing with the GlusterFS hadoop plugin.
>
> It is under active development, and we have a more up-to-date plugin with several fixes; you can grab it from our staging glusterfs-hadoop release server (I'll forward the link to you shortly).
>
> Or else, easily build from source using the
>
> mvn package
>
> invocation.
>
> What are you trying to do?
>
> On May 20, 2013, at 10:23 AM, Xing Yang wrote:
>> Hi,
>>
>> Where can I find glusterfs-hadoop-0.20.2-0.1.x86_64.rpm?
>>
>> The following link is from the GlusterFS Admin Guide, but it doesn't exist:
>>
>> http://download.gluster.com/pub/gluster/glusterfs/qa-releases/3.3-beta-2/glusterfs-hadoop-0.20.2-0.1.x86_64.rpm
>>
>> Thanks!

--
Jay Vyas
http://jayunit100.blogspot.com
Re: [Gluster-users] Glusterfs-Hadoop
Hi, and thanks for playing with the GlusterFS hadoop plugin.

It is under active development, and we have a more up-to-date plugin with several fixes; you can grab it from our staging glusterfs-hadoop release server (I'll forward the link to you shortly).

Or else, easily build from source using the

mvn package

invocation.

What are you trying to do?

On May 20, 2013, at 10:23 AM, Xing Yang wrote:
> Hi,
>
> Where can I find glusterfs-hadoop-0.20.2-0.1.x86_64.rpm?
>
> The following link is from the GlusterFS Admin Guide, but it doesn't exist:
>
> http://download.gluster.com/pub/gluster/glusterfs/qa-releases/3.3-beta-2/glusterfs-hadoop-0.20.2-0.1.x86_64.rpm
>
> Thanks!
Re: [Gluster-users] NoSQL tools that run on Gluster
Ohhh, okay - so, for example, Riak -> Swift -> Gluster?

On Wed, May 1, 2013 at 4:40 PM, Peter Portante wrote:
> Greg Kleiman writes:
>
>> I agree with Jeff that there are overlaps between NoSQL and gluster, but there are customers using NoSQL as a metadata store to front-end gluster as an object store using the native client.
>
> We have seen setups like this with Gluster-Swift. The work being done on Gluster-Swift in the future will go a long way toward efficiently using Gluster as an object store.
>
> Regards,
>
> -peter
>
>> Jeff's suggestion of using libgfapi or special translators can add even more features and performance.
>>
>> Thanks, Greg
>>
>> ----- Original Message -----
>> From: "Jeff Darcy"
>> To: "Jay Vyas"
>> Cc: "Gluster-users@gluster.org"
>> Sent: Wednesday, May 1, 2013 11:26:47 AM
>> Subject: Re: [Gluster-users] NoSQL tools that run on Gluster
>>
>> On 05/01/2013 01:50 PM, Jay Vyas wrote:
>>> There has been chatter about "X on gluster", where x=mongo, riak,...
>>>
>>> I'm wondering - is there a "most popular" or most well-tested transactional datastore that runs on / leverages gluster?
>>>
>>> Or is the idea of running a transactional NoSQL tool on gluster still mostly a fun/cool/interesting thought experiment?
>>
>> I follow developments in the NoSQL world pretty closely, and count many people in that space as my friends. This idea comes up often, but nobody really pursues it much because what they do and what we do is already so similar. The consistent hashing we use in DHT is clearly of the same general sort as that used in Cassandra, Riak, or Voldemort. Some of the discussions we've had about various forms of replication and different consistency models clearly relate well to those same concepts in MongoDB or Couchbase. If we're using the same algorithms for things like distribution and replication already, why put one on top of the other? Putting Cassandra on top of GlusterFS would be too much like putting Cassandra on top of itself.
>>
>> That said, there are a couple of related ideas that are somewhat interesting. Most have to do with splicing pieces of these related technologies together instead of layering them. What if we could layer our front end (full POSIX via FUSE plus SMB/NFS support) on top of their back end? What if we could put their API on top of our back end with a specialized translator or libgfapi, much as we're doing for Swift and Hadoop? There are plenty of possibilities like that to explore.

--
Jay Vyas
http://jayunit100.blogspot.com
Re: [Gluster-users] NoSQL tools that run on Gluster
Yeah, I was thinking of gluster as a domain-specific-database PLATFORM, rather than just a store for some other db that already indexes, hashes, and shards - where you define the CAP tradeoffs in the translator stack using glupy or something like that.

On Wed, May 1, 2013 at 2:26 PM, Jeff Darcy wrote:
> On 05/01/2013 01:50 PM, Jay Vyas wrote:
>> There has been chatter about "X on gluster", where x=mongo, riak,...
>>
>> I'm wondering - is there a "most popular" or most well-tested transactional datastore that runs on / leverages gluster?
>>
>> Or is the idea of running a transactional NoSQL tool on gluster still mostly a fun/cool/interesting thought experiment?
>
> I follow developments in the NoSQL world pretty closely, and count many people in that space as my friends. This idea comes up often, but nobody really pursues it much because what they do and what we do is already so similar. The consistent hashing we use in DHT is clearly of the same general sort as that used in Cassandra, Riak, or Voldemort. Some of the discussions we've had about various forms of replication and different consistency models clearly relate well to those same concepts in MongoDB or Couchbase. If we're using the same algorithms for things like distribution and replication already, why put one on top of the other? Putting Cassandra on top of GlusterFS would be too much like putting Cassandra on top of itself.
>
> That said, there are a couple of related ideas that are somewhat interesting. Most have to do with splicing pieces of these related technologies together instead of layering them. What if we could layer our front end (full POSIX via FUSE plus SMB/NFS support) on top of their back end? What if we could put their API on top of our back end with a specialized translator or libgfapi, much as we're doing for Swift and Hadoop? There are plenty of possibilities like that to explore.

--
Jay Vyas
http://jayunit100.blogspot.com
[Gluster-users] NoSQL tools that run on Gluster
Hi guys:

There has been chatter about "X on gluster", where x=mongo, riak,...

I'm wondering - is there a "most popular" or most well-tested transactional datastore that runs on / leverages gluster?

Or is the idea of running a transactional NoSQL tool on gluster still mostly a fun/cool/interesting thought experiment?

--
Jay Vyas
http://jayunit100.blogspot.com
Re: [Gluster-users] Gluster and Cloudera's Hadoop
Hi James, and thanks for submitting this .staging permissions problem to us. It actually came full circle today when we saw it manifest itself in a different context, leading us to a pretty significant fix :)

We have a branch available now that fixes this... and also a temporary workaround (easy - just change the permissions yourself, or use the umask to change the default permissions).

** Some interesting details about this bug **

The problem is that we were not reading in the hadoop-API-assigned privileges on ** writes ** of directories and files in the gluster plugin. It turns out that newer releases of hadoop (branch-1) actually fix this for you (for other purposes - to avoid a race condition). By contrasting these two files, you can see that newer hadoop (branch-1) versions defensively set the permissions correctly:
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1/src/mapred/org/apache/hadoop/mapreduce/JobSubmissionFiles.java
whereas older hadoop versions do not:
http://javasourcecode.org/html/open-source/hadoop/hadoop-0.20.203.0/org/apache/hadoop/mapreduce/JobSubmissionFiles.java.html

** The official ticket **

The official ticket is here: https://bugzilla.redhat.com/show_bug.cgi?id=951305

Hope this helps.

On Mon, Apr 8, 2013 at 6:07 PM, Jay Vyas wrote:
> Hi James:
>
> Looks like standard Hadoop seems to want to keep the files as permission 700, just like you mention in your email:
>
> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1/src/mapred/org/apache/hadoop/mapreduce/JobSubmissionFiles.java
>
> Just a guess, but maybe it will work if you try submitting the job from the same machine that is running your jobtracker? I've seen this error before when submitting jobs from random places.
>
> Again, the above is more of a guess than anything else, until we look further into it.
>
> On Mon, Apr 8, 2013 at 4:01 PM, Jay Vyas wrote:
>> Hi James !
>>
>> 1) Yes, right now, we run as root. Thanks for noticing :) ... We are working on changing this in the very near future. The problem is that the plugin attempts to mount a filesystem, but we recently discussed that the auto-mount behaviour may be a superfluous feature, since mounting can easily be automated for nodes in a cluster.
>>
>> 2) You're right, the previous version of the gluster hadoop filesystem implementation did not deal correctly with privileges. This is now fixed, however. You can get a "bleeding edge" jar which fixes your permissions error from the glusterfs-hadoop github repository: https://github.com/gluster/hadoop-glusterfs, where these fixes have been merged into head.
>>
>> Also, we can get you this jar prebuilt if you want - just let me know!
>>
>> Thanks for trying out the GlusterFileSystem, and keep the feedback coming !
>>
>> ----- Original Message -----
>> From: "James Gurtowski"
>> To: jv...@redhat.com
>> Cc: gluster-users@gluster.org
>> Sent: Monday, April 8, 2013 2:17:44 PM
>> Subject: Gluster and Cloudera's Hadoop
>>
>> Hello,
>>
>> It seems the gluster hadoop plugin assumes all hadoop daemons/commands are run as root? I was having trouble getting the jobtracker to start because every time the fs is initialized a system call "mount -t glusterfs ..." is issued. Cloudera runs all daemons as the mapred user, who is not allowed to run mount, so this was failing. I modified GlusterFileSystem.java (see attached diff) and set fs.glusterfs.automount to false in core-site.xml so this wouldn't happen. That fixed the initial issue of getting daemons to start.
>>
>> My next issue is getting hadoop jobs to run. I get an error:
>>
>> File /mnt/glusterfs/user/james/.staging/job_201304081221_0013/job.xml does not exist.
>>
>> I believe this to be a permissions issue; I can access this file fine from my account, but the .staging directory is only accessible by the user who launches the job:
>>
>> drwx-- 8 james james 870 Apr 8 14:10 .staging
>>
>> If I change the permissions, they are changed back (by Cloudera's hadoop) when I launch a job:
>> Permissions on staging directory glusterfs://node001:9000/user/james/.staging are incorrect: rwxrwxrwx. Fixing permissions to correct value rwx--
>>
>> Any ideas for a workaround would be greatly appreciated.
>>
>> Thanks,
>> James

--
Jay Vyas
http://jayunit100.blogspot.com
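The "use the umask" workaround mentioned above boils down to making sure the job directories are created mode 700, which is what newer hadoop's JobSubmissionFiles enforces. Illustrated on a plain temp directory (GNU `stat` assumed; a real setup would apply the umask to the process creating .staging):

```shell
# With umask 077, a freshly created .staging directory gets mode 700,
# matching what newer hadoop's JobSubmissionFiles expects.
tmp=$(mktemp -d)
( umask 077; mkdir "$tmp/.staging" )   # subshell keeps the umask change local
stat -c %a "$tmp/.staging"
```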
Re: [Gluster-users] How to properly shutdown Gluster (glusterd *and* related processes)
This is a great question - something I've been wondering too. Reposting some details from Jeff Darcy's reply to a similar question I asked, which might shed some light on this:

1) The daemons that run in gluster are:
glusterd = management daemon
glusterfsd = per-brick daemon
glustershd = self-heal daemon
glusterfs = usually client-side, but also NFS on servers

2) The lifecycle of the daemons:
*** The others are all started from glusterd, in response to volume start and stop commands ***
*** They're actually all the same executable with different translators ***
*** glusterfs-server = the server-side gluster implementation, which needs to be installed for serving gluster data ***

3) When glusterd starts up: it spawns any daemons that "should" be running (according to which volumes are started, which have NFS or replication enabled, etc.) and seem to be missing.

So... if that's the case, then I would say that ***stopping glusterd*** should invert the "starting" of the above processes... right? But I would leave it to the gluster vets to answer this definitively.

On Wed, Apr 10, 2013 at 11:51 AM, Guido De Rosa wrote:
> Hello list,
>
> I've installed GlusterFS via Debian experimental packages, version 3.4.0~qa9realyalpha2-1.
>
> ( For the record, the reason I use an alpha release is that I want this feature: http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/ )
>
> I've also followed the Quick Start Guide, and now I have a cluster of 2 virtual machines, each contributing one brick to a Gluster volume.
>
> Now my issue:
>
> Let's assume no machine has actually mounted the Gluster volume.
>
> If I do:
>
> ps aux | grep gluster
>
> I get a couple of daemons: glusterd, glusterfsd, glusterfs.
>
> If I do:
>
> /etc/init.d/glusterfs-server stop
>
> I find (re-issuing ps) that glusterd has been terminated BUT the other processes (the glusterfs and glusterfsd instances) *are still running*.
>
> (The same happens if I manually kill the glusterd process.)
>
> Is this normal? Doesn't this leave the system in an inconsistent state? (For example on system shutdown.)
>
> Should the init script be fixed (maybe to include "gluster volume stop" or something)?
>
> What's the best practice to terminate *all* Gluster-related processes (especially on system shutdown/reboot)?
>
> Thanks,
> Guido

--
Jay Vyas
http://jayunit100.blogspot.com
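Going by that daemon list, a full shutdown would have to stop glusterd first and then kill the brick / self-heal / NFS processes it spawned. A dry-run sketch (the commands are only echoed as written; set DRY_RUN=0 to actually run them, and treat the exact service and process names as assumptions that the gluster vets should confirm):

```shell
# Dry-run sketch of stopping *all* gluster processes, not just glusterd.
DRY_RUN=1
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "$*"; else "$@"; fi; }

run service glusterd stop      # management daemon first, so nothing respawns
run pkill glusterfsd           # per-brick daemons
run pkill -f glustershd        # self-heal daemon, if still up
run pkill glusterfs            # remaining client/NFS-side processes
```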
Re: [Gluster-users] Gluster and Cloudera's Hadoop
Hi James:

Looks like standard Hadoop seems to want to keep the files as permission 700, just like you mention in your email:

https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1/src/mapred/org/apache/hadoop/mapreduce/JobSubmissionFiles.java

Just a guess, but maybe it will work if you try submitting the job from the same machine that is running your jobtracker? I've seen this error before when submitting jobs from random places.

Again, the above is more of a guess than anything else, until we look further into it.

On Mon, Apr 8, 2013 at 4:01 PM, Jay Vyas wrote:
> Hi James !
>
> 1) Yes, right now, we run as root. Thanks for noticing :) ... We are working on changing this in the very near future. The problem is that the plugin attempts to mount a filesystem, but we recently discussed that the auto-mount behaviour may be a superfluous feature, since mounting can easily be automated for nodes in a cluster.
>
> 2) You're right, the previous version of the gluster hadoop filesystem implementation did not deal correctly with privileges. This is now fixed, however. You can get a "bleeding edge" jar which fixes your permissions error from the glusterfs-hadoop github repository: https://github.com/gluster/hadoop-glusterfs, where these fixes have been merged into head.
>
> Also, we can get you this jar prebuilt if you want - just let me know!
>
> Thanks for trying out the GlusterFileSystem, and keep the feedback coming !
>
> ----- Original Message -----
> From: "James Gurtowski"
> To: jv...@redhat.com
> Cc: gluster-users@gluster.org
> Sent: Monday, April 8, 2013 2:17:44 PM
> Subject: Gluster and Cloudera's Hadoop
>
> Hello,
>
> It seems the gluster hadoop plugin assumes all hadoop daemons/commands are run as root? I was having trouble getting the jobtracker to start because every time the fs is initialized a system call "mount -t glusterfs ..." is issued. Cloudera runs all daemons as the mapred user, who is not allowed to run mount, so this was failing. I modified GlusterFileSystem.java (see attached diff) and set fs.glusterfs.automount to false in core-site.xml so this wouldn't happen. That fixed the initial issue of getting daemons to start.
>
> My next issue is getting hadoop jobs to run. I get an error:
>
> File /mnt/glusterfs/user/james/.staging/job_201304081221_0013/job.xml does not exist.
>
> I believe this to be a permissions issue; I can access this file fine from my account, but the .staging directory is only accessible by the user who launches the job:
>
> drwx-- 8 james james 870 Apr 8 14:10 .staging
>
> If I change the permissions, they are changed back (by Cloudera's hadoop) when I launch a job:
> Permissions on staging directory glusterfs://node001:9000/user/james/.staging are incorrect: rwxrwxrwx. Fixing permissions to correct value rwx--
>
> Any ideas for a workaround would be greatly appreciated.
>
> Thanks,
> James

--
Jay Vyas
http://jayunit100.blogspot.com
Re: [Gluster-users] Gluster and Cloudera's Hadoop
Hi James !

1) Yes, right now, we run as root. Thanks for noticing :) ... We are working on changing this in the very near future. The problem is that the plugin attempts to mount a filesystem, but we recently discussed that the auto-mount behaviour may be a superfluous feature, since mounting can easily be automated for nodes in a cluster.

2) You're right, the previous version of the gluster hadoop filesystem implementation did not deal correctly with privileges. This is now fixed, however. You can get a "bleeding edge" jar which fixes your permissions error from the glusterfs-hadoop github repository: https://github.com/gluster/hadoop-glusterfs, where these fixes have been merged into head.

Also, we can get you this jar prebuilt if you want - just let me know!

Thanks for trying out the GlusterFileSystem, and keep the feedback coming !

----- Original Message -----
From: "James Gurtowski"
To: jv...@redhat.com
Cc: gluster-users@gluster.org
Sent: Monday, April 8, 2013 2:17:44 PM
Subject: Gluster and Cloudera's Hadoop

Hello,

It seems the gluster hadoop plugin assumes all hadoop daemons/commands are run as root? I was having trouble getting the jobtracker to start because every time the fs is initialized a system call "mount -t glusterfs ..." is issued. Cloudera runs all daemons as the mapred user, who is not allowed to run mount, so this was failing. I modified GlusterFileSystem.java (see attached diff) and set fs.glusterfs.automount to false in core-site.xml so this wouldn't happen. That fixed the initial issue of getting daemons to start.

My next issue is getting hadoop jobs to run. I get an error:

File /mnt/glusterfs/user/james/.staging/job_201304081221_0013/job.xml does not exist.

I believe this to be a permissions issue; I can access this file fine from my account, but the .staging directory is only accessible by the user who launches the job:

drwx-- 8 james james 870 Apr 8 14:10 .staging

If I change the permissions, they are changed back (by Cloudera's hadoop) when I launch a job:
Permissions on staging directory glusterfs://node001:9000/user/james/.staging are incorrect: rwxrwxrwx. Fixing permissions to correct value rwx--

Any ideas for a workaround would be greatly appreciated.

Thanks,
James
[Gluster-users] Gluster and Cloudera's Hadoop
Hi James !

1) Yes, right now, we run as root. Thanks for noticing :) ... We are working on changing this in the very near future. The problem is that the plugin attempts to mount a filesystem, but we recently discussed that the auto-mount behaviour may be a superfluous feature, since mounting can easily be automated for nodes in a cluster.

2) You're right, the previous version of the gluster hadoop filesystem implementation did not deal correctly with privileges. This is now fixed, however. You can get a "bleeding edge" jar which fixes your permissions error from the glusterfs-hadoop github repository: https://github.com/gluster/hadoop-glusterfs, where these fixes have been merged into head.

Also, we can get you this jar prebuilt if you want - just let me know...

Thanks for trying out the GlusterFileSystem, and keep the feedback coming !

--
Jay Vyas
http://jayunit100.blogspot.com
Re: [Gluster-users] A simple way to federate gluster namespace ?
unionfs sounds like it may work. Not sure what you mean by "tree"?

On Fri, Apr 5, 2013 at 6:56 PM, Robert Hajime Lanning wrote:
> On 04/05/13 15:33, Jay Vyas wrote:
>> Hi guys:
>>
>> BTW thanks for the insights regarding locality. Now I have a new stupid question for you:
>>
>> Namespace federation !
>>
>> Say I have two gluster volumes, and I want to access both volumes from the same mount point.
>>
>> It would be cool if there were a "gluster volume federate volA volB supervol" command, which created a new volume that read/wrote to supervol/volA and supervol/volB transparently.
>>
>> But in the absence of such a command, could I just federate two gluster namespaces using the mount command? Would there be nasty hidden overhead and costs to this?
>>
>> i.e. something like:
>>
>> mount -o /tmp/supermount/subA /submount/a
>> mount -o /tmp/supermount/subB /submount/b
>>
>> Or maybe you could do the equivalent with symlinks?
>
> Are you wanting to mix namespaces or make a tree?
>
> A tree is easy:
> /mnt/vola
> /mnt/volb
>
> If you want to mix namespaces (i.e. have the roots mingle so an ls shows files from both), that is not possible.
>
> In Linux you might be able to hack something with unionfs, but I am not sure.
>
> You won't be able to have a server mount both, then use them as bricks in a "super volume", as the xattrs will clash.
>
> --
> Mr. Flibble
> King of the Potato People

--
Jay Vyas
http://jayunit100.blogspot.com
[Gluster-users] A simple way to federate gluster namespace ?
Hi guys:

BTW thanks for the insights regarding locality. Now I have a new stupid question for you:

Namespace federation !

Say I have two gluster volumes, and I want to access both volumes from the same mount point.

It would be cool if there were a "gluster volume federate volA volB supervol" command, which created a new volume that read/wrote to supervol/volA and supervol/volB transparently.

But in the absence of such a command, could I just federate two gluster namespaces using the mount command? Would there be nasty hidden overhead and costs to this?

i.e. something like:

mount -o /tmp/supermount/subA /submount/a
mount -o /tmp/supermount/subB /submount/b

Or maybe you could do the equivalent with symlinks?

--
Jay Vyas
http://jayunit100.blogspot.com
[Gluster-users] GlusterFS and optimization of locality.
Hi guys: suppose I was going to serve a petabyte of data sharded over 10 files (1, 2, 3, ..., 10) over glusterfs, on 3 servers (call them Server1, Server2, and Server3). The 3 servers would need access to the files such that:

Server 1 will usually only access file 1.
Server 2 will usually only access file 2.
Server 3 will access all ten files (the whole data set).

Is there a way to get gluster to rebalance bricks over time based on access patterns... or otherwise, what is the best way to increase the average locality of access to files in the cluster?

--
Jay Vyas
http://jayunit100.blogspot.com
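For reference, the rebalance machinery gluster does ship redistributes files according to the DHT layout after bricks are added or removed; as far as I know it does not weigh access patterns. A sketch of the commands (the volume name "myvol" is illustrative):

```shell
# Redistribute existing files across the current set of bricks.
gluster volume rebalance myvol start
# Poll progress until the rebalance completes.
gluster volume rebalance myvol status
```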
Re: [Gluster-users] MapReduce on glusterfs in Hadoop
Hi Nikhil:

The fuse mount is what allows the filesystem to access distributed files in gluster: that is, GlusterFS has its own FUSE mount... and GlusterFileSystem wraps that in hadoop FileSystem semantics. Meanwhile, the mapreduce jobs are invoked using custom core-site and mapred-site XML nodes which specify GlusterFileSystem as the dfs.

On Feb 22, 2013, at 3:17 AM, Nikhil Agarwal wrote:
> Hi All,
>
> Thanks a lot for taking out your time to answer my question.
>
> I am trying to implement a file system in hadoop under the org.apache.hadoop.fs
> package, something similar to KFS, glusterfs, etc. I wanted to know: in the
> README.txt of glusterfs it is mentioned:
>
>> # ./bin/start-mapred.sh
>> If the map/reduce job/task trackers are up, all I/O will be done to
>> GlusterFS.
>
> So, suppose my input files are scattered on different nodes (glusterfs
> servers), how do I (a hadoop client with glusterfs plugged in) issue a
> MapReduce command?
>
> Moreover, after issuing a MapReduce command, would my hadoop client fetch all
> the data from the different servers to my local machine and then do the
> MapReduce, or would it start the TaskTracker daemons on the machine(s) where
> the input file(s) are located and perform the MapReduce there?
>
> Please correct me if I am wrong, but I suppose that the location of input
> files to MapReduce is being returned by the function getFileBlockLocations
> (FileStatus file, long start, long len).
>
> Thank you very much for your time and helping me out.
>
> Regards,
> Nikhil
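A minimal sketch of the core-site wiring described above. The property names follow the glusterfs-hadoop plugin's usual pattern, but treat the exact names, the server/port, and the URI scheme as illustrative; the authoritative set is in the core-site.xml the plugin ships with:

```xml
<configuration>
  <!-- Route Hadoop's filesystem layer to the GlusterFS plugin class. -->
  <property>
    <name>fs.glusterfs.impl</name>
    <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
  </property>
  <!-- Make it the default FS so MapReduce I/O lands on gluster.
       Host and port here are placeholders. -->
  <property>
    <name>fs.default.name</name>
    <value>glusterfs://server.n1:9000</value>
  </property>
</configuration>
```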
[Gluster-users] (no subject)
Hi guys:

Oddly, when I try to start glusterd using "sudo service glusterd start", I get no logs in /var/log/gluster*. On Jeff's advice, I ran glusterd this way:

glusterd --debug -f /etc/glusterfs/...

And below is the output. It appears to be related to ports, but I'm not sure... any idea what's going on?

[jay@fedoravm ~]$ glusterd --debug -f /etc/glusterfs/glusterd.vol
[2013-01-07 20:34:59.163535] I [glusterfsd.c:1877:main] 0-glusterd: Started running glusterd version 3git (glusterd --debug -f /etc/glusterfs/glusterd.vol)
[2013-01-07 20:34:59.163784] D [glusterfsd.c:549:get_volfp] 0-glusterfsd: loading volume file /etc/glusterfs/glusterd.vol
[2013-01-07 20:34:59.165230] I [glusterd.c:929:init] 0-management: Using /var/lib/glusterd as working directory
[2013-01-07 20:34:59.165324] D [glusterd.c:330:glusterd_rpcsvc_options_build] 0-: listen-backlog value: 128
[2013-01-07 20:34:59.165485] D [rpcsvc.c:1900:rpcsvc_init] 0-rpc-service: RPC service inited.
[2013-01-07 20:34:59.165539] D [rpcsvc.c:1665:rpcsvc_program_register] 0-rpc-service: New program registered: GF-DUMP, Num: 123451501, Ver: 1, Port: 0
[2013-01-07 20:34:59.165588] D [rpc-transport.c:248:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib/glusterfs/3git/rpc-transport/socket.so
[2013-01-07 20:34:59.166966] I [socket.c:3390:socket_init] 0-socket.management: SSL support is NOT enabled
[2013-01-07 20:34:59.167078] I [socket.c:3405:socket_init] 0-socket.management: using system polling thread
[2013-01-07 20:34:59.167106] D [name.c:557:server_fill_address_family] 0-socket.management: option address-family not specified, defaulting to inet
[2013-01-07 20:34:59.167222] E [socket.c:665:__socket_server_bind] 0-socket.management: binding to failed: Address already in use
[2013-01-07 20:34:59.167254] E [socket.c:668:__socket_server_bind] 0-socket.management: Port is already in use
[2013-01-07 20:34:59.167280] W [rpcsvc.c:1395:rpcsvc_transport_create] 0-rpc-service: listening on transport failed
[2013-01-07 20:34:59.167304] E [glusterd.c:1023:init] 0-management: creation of listener failed
[2013-01-07 20:34:59.167351] E [xlator.c:408:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2013-01-07 20:34:59.167370] E [graph.c:292:glusterfs_graph_init] 0-management: initializing translator failed
[2013-01-07 20:34:59.167387] E [graph.c:479:glusterfs_graph_activate] 0-graph: init failed
[2013-01-07 20:34:59.167513] W [glusterfsd.c:969:cleanup_and_exit] (-->glusterd(main+0x39d) [0x40493d] (-->glusterd(glusterfs_volumes_init+0xb7) [0x407527] (-->glusterd(glusterfs_process_volfp+0x103) [0x407433]))) 0-: received signum (0), shutting down
[2013-01-07 20:34:59.167554] D [glusterfsd-mgmt.c:2214:glusterfs_mgmt_pmap_signout] 0-fsd-mgmt: portmapper signout arguments not given
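The "Address already in use" / "Port is already in use" lines mean some other process is already bound to glusterd's management port, most likely a glusterd instance that the earlier "service glusterd start" did in fact launch. A quick check, assuming the default management port 24007:

```shell
# Report whether anything is already listening on glusterd's default
# management port (24007). If it is, stop that instance first
# (e.g. "sudo service glusterd stop") before running glusterd --debug.
if ss -tln 2>/dev/null | grep -q ':24007 '; then
    echo "port 24007 is in use"
else
    echo "port 24007 looks free"
fi
```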
[Gluster-users] compile error (glusterfs)
Anyone familiar with this error?

[jay@fedoravm glusterfs]$ make
make --no-print-directory --quiet all-recursive
Making all in argp-standalone
Making all in .
  CC argp-ba.o  CC argp-eexst.o  CC argp-fmtstream.o  CC argp-help.o
  CC argp-parse.o  CC argp-pv.o  CC argp-pvh.o
  AR libargp.a
Making all in libglusterfs
Making all in src
  CC libglusterfs_la-dict.lo  CC libglusterfs_la-xlator.lo  CC libglusterfs_la-logging.lo
  CC libglusterfs_la-hashfn.lo  CC libglusterfs_la-defaults.lo  CC libglusterfs_la-common-utils.lo
  CC libglusterfs_la-timer.lo  CC libglusterfs_la-inode.lo  CC libglusterfs_la-call-stub.lo
  CC libglusterfs_la-compat.lo  CC libglusterfs_la-fd.lo  CC libglusterfs_la-compat-errno.lo
  CC libglusterfs_la-event.lo  CC libglusterfs_la-mem-pool.lo  CC libglusterfs_la-gf-dirent.lo
  CC libglusterfs_la-syscall.lo  CC libglusterfs_la-iobuf.lo  CC libglusterfs_la-globals.lo
  CC libglusterfs_la-statedump.lo  CC libglusterfs_la-stack.lo  CC libglusterfs_la-checksum.lo
  CC libglusterfs_la-daemon.lo  CC libglusterfs_la-rb.lo  CC libglusterfs_la-rbthash.lo
  CC libglusterfs_la-latency.lo  CC libglusterfs_la-graph.lo  CC libglusterfs_la-clear.lo
  CC libglusterfs_la-copy.lo  CC libglusterfs_la-gen_uuid.lo  CC libglusterfs_la-pack.lo
  CC libglusterfs_la-parse.lo  CC libglusterfs_la-unparse.lo  CC libglusterfs_la-uuid_time.lo
  CC libglusterfs_la-compare.lo  CC libglusterfs_la-isnull.lo  CC libglusterfs_la-unpack.lo
  CC libglusterfs_la-syncop.lo  CC libglusterfs_la-graph-print.lo  CC libglusterfs_la-trie.lo
  CC libglusterfs_la-run.lo  CC libglusterfs_la-options.lo  CC libglusterfs_la-fd-lk.lo
  CC libglusterfs_la-circ-buff.lo  CC libglusterfs_la-event-history.lo  CC libglusterfs_la-gidcache.lo
  CC libglusterfs_la-ctx.lo  CC libglusterfs_la-basename_r.lo  CC libglusterfs_la-dirname_r.lo
  CC libglusterfs_la-gf_mkostemp.lo  CC libglusterfs_la-event-poll.lo  CC libglusterfs_la-event-epoll.lo
  CC libglusterfs_la-y.tab.lo  CC libglusterfs_la-graph.lex.lo
  CCLD libglusterfs.la
Making all in rpc
Making all in rpc-lib
Making all in src
  CC auth-unix.lo  CC rpcsvc-auth.lo  CC rpcsvc.lo  CC auth-null.lo
  CC rpc-transport.lo  CC xdr-rpc.lo  CC xdr-rpcclnt.lo  CC rpc-clnt.lo
  CC auth-glusterfs.lo
  CCLD libgfrpc.la
Making all in rpc-transport
Making all in socket
Making all in src
  CC socket.lo  CC name.lo
  CCLD socket.la
Making all in xdr
Making all in src
  CC libgfxdr_la-xdr-generic.lo  CC libgfxdr_la-rpc-common-xdr.lo  CC libgfxdr_la-glusterfs3-xdr.lo
  CC libgfxdr_la-cli1-xdr.lo  CC libgfxdr_la-glusterd1-xdr.lo  CC libgfxdr_la-portmap-xdr.lo
  CC libgfxdr_la-nlm4-xdr.lo  CC libgfxdr_la-xdr-nfs3.lo  CC libgfxdr_la-msg-nfs3.lo
  CC libgfxdr_la-nsm-xdr.lo  CC libgfxdr_la-nlmcbk-xdr.lo  CC libgfxdr_la-acl3-xdr.lo
  CCLD libgfxdr.la
Making all in api
Making all in src
  CC libgfapi_la-glfs.lo  CC libgfapi_la-glfs-mgmt.lo  CC libgfapi_la-glfs-fops.lo
  CC libgfapi_la-glfs-resolve.lo
  CCLD libgfapi.la
  CC glfs-master.lo
  CCLD api.la
Making all in xlators
Making all in cluster
Making all in stripe
Making all in src
  CC stripe.lo  CC stripe-helpers.lo  CC libxlator.lo
  CCLD stripe.la
Making all in afr
Making all in src
  CC afr-dir-read.lo  CC afr-dir-write.lo  CC afr-inode-read.lo  CC afr-inode-write.lo
  CC afr-open.lo  CC afr-transaction.lo  CC afr-self-heal-data.lo  CC afr-self-heal-common.lo
  CC afr-self-heal-metadata.lo  CC afr-self-heal-entry.lo  CC afr-self-heal-algorithm.lo
  CC afr-lk-common.lo  CC afr-self-heald.lo  CC libxlator.lo  CC afr.lo
make[5]: *** [afr.lo] Error 1
make[4]: *** [all-recursive] Error 1
make[3]: *** [all-recursive] Error 1
make[2]: *** [all-recursive] Error 1
make[1]: *** [all-recursive] Error 1

--
Jay Vyas
http://jayunit100.blogspot.com
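Note the quiet build swallows the actual compiler diagnostic; only "Error 1" survives in the paste. One way to surface the real error, assuming the tree uses automake silent rules (which the --quiet output suggests), is to re-run just the failing target verbosely:

```shell
# Rebuild only the failing object with full command echo; the actual
# compiler error for afr.lo should then appear above the "Error 1" line.
cd xlators/cluster/afr/src
make V=1 afr.lo 2>&1 | tee afr-build.log
```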
[Gluster-users] libgfapi docs
Hi guys: What's the status of documentation for libgfapi?

--
Jay Vyas
http://jayunit100.blogspot.com
[Gluster-users] "Failed to perform brick order check,..."
Hi guys: I have just installed gluster on a single instance, and the command:

gluster volume create gv0 replica 2 server.n1:/export/brick1 server.n1:/export/brick2

returns with: "Failed to perform brick order check... do you want to continue? (y/N)"

What is the meaning of this message, and why does brick order matter?

--
Jay Vyas
http://jayunit100.blogspot.com
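For context on why order matters: consecutive bricks in a "replica 2" list form a mirrored pair, so the brick order decides which bricks replicate each other, and glusterd checks that each pair spans different hosts (my understanding is that "failed to perform" the check usually means the check itself could not run, e.g. the brick hostnames could not be resolved, rather than that the check failed you). On a multi-node setup the pairs would be spread like this (hostnames illustrative):

```shell
# Bricks 1+2 form one replica pair, bricks 3+4 the next; each pair
# spans server.n1 and server.n2, so one host failure loses no data.
gluster volume create gv0 replica 2 \
    server.n1:/export/brick1 server.n2:/export/brick1 \
    server.n1:/export/brick2 server.n2:/export/brick2
```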
Re: [Gluster-users] mount glusterfs: ports
Regarding the error message: why would gluster "fail to get the port number for a remote subvolume"? Where is the port number being "gotten" from? Isn't it simply hardcoded into the mount command?

On Sun, Dec 30, 2012 at 5:12 PM, wrote:
> The following command
>
> mount -t glusterfs domU-12-31-39-00-A5-7B:111:PetShop /mnt/petshop
>
> fails. Luckily, I found that under /var/log/glusterfs there is a specific
> log file for this mount operation. It appears that it's related to the
> "port number":
>
> [root@domU-12-31-39-00-A5-7B ~]# tail /var/log/glusterfs/mnt-petshop.log
> [2012-12-30 20:44:17.509907] E [client-handshake.c:1717:client_query_portmap_cbk] 0-PetShop-client-1: failed to get the port number for remote subvolume
>
> I have several questions on this subject:
>
> 1) How can I know *what port* my gluster server is using?
> 2) Can I mount a gluster server as localhost (i.e. *localhost:PetShop* or 127.0.0.1:PetShop)?
> 3) Is it possible, using a command similar to "*gluster volume info*", to show the *ports* that bricks are serving on?
> <<<< LOG DUMP FROM ABOVE CONTINUED >>>>
>
> [2012-12-30 20:44:17.510088] I [client.c:2090:client_rpc_notify] 0-PetShop-client-1: disconnected
> [2012-12-30 20:44:17.527225] I [fuse-bridge.c:4193:fuse_graph_setup] 0-fuse: switched to graph 0
> [2012-12-30 20:44:17.527727] I [fuse-bridge.c:3376:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.13
> [2012-12-30 20:44:17.527882] E [dht-common.c:1372:dht_lookup] 0-PetShop-dht: Failed to get hashed subvol for /
> [2012-12-30 20:44:17.528120] E [dht-common.c:1372:dht_lookup] 0-PetShop-dht: Failed to get hashed subvol for /
> [2012-12-30 20:44:17.528168] W [fuse-bridge.c:513:fuse_attr_cbk] 0-glusterfs-fuse: 2: LOOKUP() / => -1 (Invalid argument)
> [2012-12-30 20:44:17.535553] I [fuse-bridge.c:4093:fuse_thread_proc] 0-fuse: unmounting /mnt/petshop
> [2012-12-30 20:44:17.536129] W [glusterfsd.c:831:cleanup_and_exit] (-->/lib64/libc.so.6(clone+0x6d) [0x3e9a4e5ccd] (-->/lib64/libpthread.so.0() [0x3e9ac077f1] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xdd) [0x405cfd]))) 0-: received signum (15), shutting down
> [2012-12-30 20:44:17.536160] I [fuse-bridge.c:4643:fini] 0-fuse: Unmounting '/mnt/petshop'.
>
> Jay Vyas
> http://jayunit100.blogspot.com

--
Jay Vyas
http://jayunit100.blogspot.com
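On the port question: the port is not hardcoded into the mount command. The client first contacts glusterd on its fixed management port (24007) and asks its portmapper which port each brick process is serving on; brick ports are assigned when the volume starts, which is why the handshake can "fail to get the port number" if the volume's bricks are not up. In releases that support it (3.3 and later, if memory serves), the per-brick ports can be listed directly:

```shell
# Ask glusterd which port each PetShop brick is serving on
# (answers question 3 above; requires a running glusterd).
gluster volume status PetShop
# The management daemon itself listens on 24007; verify with:
ss -tln | grep 24007
```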