Re: [Gluster-users] Gluster.org site relaunch

2014-07-17 Thread Eco Willson
All,

I have created an etherpad for tracking issues on the site:

https://titanpad.com/gluster-site-relaunch-todo

We are fine-tuning a new staging server for checking changes, to make the 
process of building the site go more smoothly. Until it is put into DNS it can be 
accessed directly at http://23.253.126.183/  (you can see Garrett's new graphic 
incorporated there as well).  I will be adding the Downloads page and fixing 
the HowTo page tomorrow, and will send a note when that is finished.  Please add any 
additional items you do not see listed in the etherpad.

Thanks,

Eco

- Original Message -
From: Haruka Iwao har...@redhat.com
To: Justin Clift jus...@gluster.org
Cc: gluster-users@gluster.org
Sent: Sunday, July 13, 2014 4:26:14 PM
Subject: Re: [Gluster-users] Gluster.org site relaunch

(7/11/14, 10:46 PM), Justin Clift wrote:
 Do you reckon making a pretty looking page with icons and
 stuff like this would be useful?
https://www.transmissionbt.com/download/

No, I don't think we need icons.
Redirecting to an auto-generated directory listing without any precautions 
makes me feel awkward.
I would just like a simple page that includes at least the following:

- Which version is the latest, and a link to the directory
- List of package names and short descriptions
- Link to quick start (how to install) page

like this: http://ceph.com/resources/downloads/

Regards,
Haruka
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster.org site relaunch

2014-07-10 Thread Eco Willson
All,

Thanks for the rapid feedback. 

Haruka: I made a few changes that address most of the issues here.  In addition 
to the link in the Documentation section, I added an About section to the top 
navigation of the front page.  The image on the About page should be fixed.  
The social media buttons have been removed for now; they should actually be on 
the front page, but that is not a priority item at the moment.

Neils:  I am working on full documentation of how to update the site so that 
community members can work on issues as they find them, as well as update any 
documentation that is out of date.  Once this is in place, that will be the 
preferred way to add documentation going forward; the wiki should be considered 
legacy.

Anders:  The administrator's guide that was linked before was very out of date; 
the link now points to the most current documentation.  We can look into posting 
the old admin guide elsewhere if it is helpful.  Vijay (and/or anyone from the 
docs team): are there plans to continue making a PDF version of the docs 
available?  We can reference that as well.

Please keep the feedback coming!

Regards,

Eco

- Original Message -
From: Justin Clift jus...@gluster.org
To: Haruka Iwao har...@redhat.com
Cc: gluster-users@gluster.org
Sent: Thursday, July 10, 2014 2:54:55 AM
Subject: Re: [Gluster-users] Gluster.org site relaunch

On 10/07/2014, at 8:34 AM, Haruka Iwao wrote:
snip
 I've found that some images are missing from the About Gluster page.
 http://www.gluster.org/documentation/About_Gluster/
 Here it shows four broken image links.


Thanks Haruka.  Eco will probably look into this later on
today. :)

+ Justin

--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster.org site relaunch

2014-07-10 Thread Eco Willson
Joe,

I agree with your strong disagreement ;)  There is no intent to remove 
community involvement in the documentation; that is obviously critical.  The 
change is simply in the mechanism (git).  There is no link on the new site to 
the wiki since the intent is to make it legacy.  As for the HowTo section, it 
apparently was missed; I can add a placeholder link to the old 
wiki in the meantime until we convert those docs.

- Eco

- Original Message -
From: Joe Julian j...@julianfamily.org
To: Eco Willson ewill...@redhat.com, gluster-users@gluster.org
Sent: Thursday, July 10, 2014 2:04:21 PM
Subject: Re: [Gluster-users] Gluster.org site relaunch

Where's the link to the Wiki?

If the plan is to do away with community edited documentation, I can't 
disagree strongly enough.

On 07/09/2014 07:56 PM, Eco Willson wrote:
 Greetings all,

 Just a quick note to let everyone know that we switched over to the new 
 Gluster.org site earlier this evening, please feel free to take a look for 
 yourselves at www.gluster.org.  In addition to the updated graphics and 
 layout, we are now using the Middleman site generator to allow the site to be 
 statically generated.  Some of the new changes you will notice are a 
 spotlight section on the front page.  New navigation is in place as well.  
 The look and layout have also changed.  We are still maintaining a legacy 
 version of the site so any old bookmarks you have should work just fine.

 Your feedback is welcome and appreciated, let us know of any issues you find. 
  I will write a more detailed blog post that includes some of the changes and 
 how end users can update the documentation portion of the site.

 Thanks,

 Eco
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster.org site relaunch

2014-07-10 Thread Eco Willson
Rodrigo,

I am writing more detailed documentation (and my apologies for not having it 
ready at launch) for how to get involved, but the high-level overview is:

For documentation:
1) Grab the docs project
2) Make your changes
3) Commit in git

For making changes directly to the site (e.g., to add navigation for newly added 
pages, change existing navigation, add media, etc. -- a rough sketch of both 
workflows follows the list):
1) Request access to modify the gluster-site repo
2) Grab the site repo
3) Make changes
4) Commit in git
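
A rough sketch of those steps (the repository URL, file name, and push step 
here are placeholders/assumptions; the actual forge.gluster.org URLs were 
given in the earlier staging.gluster.org announcement):

  git clone <gluster-docs-project repository URL>
  cd gluster-docs-project
  # edit or add the page you want to change
  git add <changed file>
  git commit -m "Update quick start page"
  git push   # or whatever submission step the repo's contribution process uses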

For clarity, it is still entirely possible to edit the wiki now; the drawback 
is that at some point it will need to be ported from the wiki into the new 
site (this is fairly painless via pandoc and some quick editing, which I will 
include in the documentation).
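
For the conversion itself, something along these lines should work (the file 
names are only an example; pandoc ships a MediaWiki reader and a Markdown writer):

  pandoc -f mediawiki -t markdown HowTo.wiki -o HowTo.md

followed by the quick hand editing mentioned above.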


- Original Message -
From: Rodrigo Gonzalez rjgonzale.li...@gmail.com
To: Eco Willson ewill...@redhat.com, Joe Julian j...@julianfamily.org
Cc: gluster-users@gluster.org
Sent: Thursday, July 10, 2014 2:42:29 PM
Subject: Re: [Gluster-users] Gluster.org site relaunch

On 10/07/14 18:31, Eco Willson wrote:
 Joe,
 
 I agree with your strong disagreement ;)  There is no intent to remove 
 community involvement in the documentation, this is obviously critical.  The 
 change is simply in the mechanism (git).  There is no link on the new site to 
 the wiki since the intent is to make that legacy.  As for the howto section, 
 that section apparently was missed, I can add a placeholder link to the old 
 wiki in the meantime until we convert those docs.

And if the wiki is considered legacy... what is the new way for the community to
get involved? I am not talking about changes in code...

Thanks!


 
 - Eco
 
 - Original Message -
 From: Joe Julian j...@julianfamily.org
 To: Eco Willson ewill...@redhat.com, gluster-users@gluster.org
 Sent: Thursday, July 10, 2014 2:04:21 PM
 Subject: Re: [Gluster-users] Gluster.org site relaunch
 
 Where's the link to the Wiki?
 
 If the plan is to do away with community edited documentation, I can't 
 disagree strongly enough.
 
 On 07/09/2014 07:56 PM, Eco Willson wrote:
 Greetings all,

 Just a quick note to let everyone know that we switched over to the new 
 Gluster.org site earlier this evening, please feel free to take a look for 
 yourselves at www.gluster.org.  In addition to the updated graphics and 
 layout, we are now using the Middleman site generator to allow the site to 
 be statically generated.  Some of the new changes you will notice are a 
 spotlight section on the front page.  New navigation is in place as well.  
 The look and layout have also changed.  We are still maintaining a legacy 
 version of the site so any old bookmarks you have should work just fine.

 Your feedback is welcome and appreciated, let us know of any issues you 
 find.  I will write a more detailed blog post that includes some of the 
 changes and how end users can update the documentation portion of the site.

 Thanks,

 Eco
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users
 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster.org site relaunch

2014-07-10 Thread Eco Willson
HowTo placeholder is up at http://www.gluster.org/documentation/howto/HowTo/

- Original Message -
From: Rodrigo Gonzalez rjgonzale.li...@gmail.com
To: Eco Willson ewill...@redhat.com, Joe Julian j...@julianfamily.org
Cc: gluster-users@gluster.org
Sent: Thursday, July 10, 2014 2:42:29 PM
Subject: Re: [Gluster-users] Gluster.org site relaunch

On 10/07/14 18:31, Eco Willson wrote:
 Joe,
 
 I agree with your strong disagreement ;)  There is no intent to remove 
 community involvement in the documentation, this is obviously critical.  The 
 change is simply in the mechanism (git).  There is no link on the new site to 
 the wiki since the intent is to make that legacy.  As for the howto section, 
 that section apparently was missed, I can add a placeholder link to the old 
 wiki in the meantime until we convert those docs.

And if the wiki is considered legacy... what is the new way for the community to
get involved? I am not talking about changes in code...

Thanks!


 
 - Eco
 
 - Original Message -
 From: Joe Julian j...@julianfamily.org
 To: Eco Willson ewill...@redhat.com, gluster-users@gluster.org
 Sent: Thursday, July 10, 2014 2:04:21 PM
 Subject: Re: [Gluster-users] Gluster.org site relaunch
 
 Where's the link to the Wiki?
 
 If the plan is to do away with community edited documentation, I can't 
 disagree strongly enough.
 
 On 07/09/2014 07:56 PM, Eco Willson wrote:
 Greetings all,

 Just a quick note to let everyone know that we switched over to the new 
 Gluster.org site earlier this evening, please feel free to take a look for 
 yourselves at www.gluster.org.  In addition to the updated graphics and 
 layout, we are now using the Middleman site generator to allow the site to 
 be statically generated.  Some of the new changes you will notice are a 
 spotlight section on the front page.  New navigation is in place as well.  
 The look and layout have also changed.  We are still maintaining a legacy 
 version of the site so any old bookmarks you have should work just fine.

 Your feedback is welcome and appreciated, let us know of any issues you 
 find.  I will write a more detailed blog post that includes some of the 
 changes and how end users can update the documentation portion of the site.

 Thanks,

 Eco
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users
 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster.org site relaunch

2014-07-09 Thread Eco Willson
Greetings all,

Just a quick note to let everyone know that we switched over to the new 
Gluster.org site earlier this evening; please feel free to take a look for 
yourselves at www.gluster.org.  In addition to the updated graphics and layout, 
we are now using the Middleman site generator to allow the site to be 
statically generated.  Among the new changes you will notice is a spotlight 
section on the front page.  New navigation is in place as well.  The look and 
layout have also changed.  We are still maintaining a legacy version of the 
site, so any old bookmarks you have should work just fine.

Your feedback is welcome and appreciated; let us know of any issues you find.  
I will write a more detailed blog post that covers some of the changes and 
how end users can update the documentation portion of the site.

Thanks,

Eco
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Request for feedback on staging.gluster.org

2014-05-30 Thread Eco Willson
Great to hear that you guys are liking it.  To be clear, the work on the site 
design was actually done by two members of the Red Hat OSAS (Open Source and 
Standards) team, Tuomas Kuosmanen and Garrett LeSage, who do a lot of work for 
other projects (their work shows up in notable places like the RDO website, 
oVirt, CentOS and others).  My work has mostly been the manual squirrel herding 
to convert the existing site into the new framework.

As far as forum-type functions go, no, there is no plan for that at this 
time.  We had tried a similar Q&A forum before that did not work out, mainly 
due to management headaches IIRC.  Keep in mind that the framework for the site 
is Middleman, which essentially generates static content, so it is not natively 
built for the dynamic pages that a forum would require.  Plugins 
for functionality like that may exist for Middleman, but we haven't looked into 
it.

-- Eco

- Original Message -
From: Lalatendu Mohanty lmoha...@redhat.com
To: gluster-users@gluster.org
Sent: Friday, May 30, 2014 4:24:24 AM
Subject: Re: [Gluster-users] Request for feedback on staging.gluster.org

On 05/30/2014 04:40 AM, Eco Willson wrote:
 Dear Community members,

 We have been working on a new site design and we would love to get your 
 feedback.  You can check things out at staging.gluster.org.  Things are still 
 very much in beta (a few pages not displaying properly or at all, etc), but 
 we decided to roll things out so that we can get a list of improvements (and 
 hopefully help from) the community.  If you would like to test the most 
 recent changes on your own machine, you can do the following:

   git clone g...@forge.gluster.org:gluster-site/gluster-site.git

   git clone 
 g...@forge.gluster.org:~eco/gluster-docs-project/ecos-gluster-docs-project.git
  (this is the temporary testing grounds, we will switch back to the official 
 docs-project once initial evaluation is complete)

   cd gluster-site

   gem install bundle

   gem install middleman

   bundle exec middleman

 You will need to manually copy or symlink the contents of the 
 gluster-docs-project/htmltext/documentation folder into the 
 gluster-site/source/documentation directory for testing currently. Once you 
 `bundle exec middleman`, you can point a browser to localhost:4567

 Looking forward to feedback and submissions!

 Regards,

 Eco
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-users

Awesome stuff!!


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Request for feedback on staging.gluster.org

2014-05-29 Thread Eco Willson
Dear Community members,

We have been working on a new site design and we would love to get your 
feedback.  You can check things out at staging.gluster.org.  Things are still 
very much in beta (a few pages not displaying properly or at all, etc.), but we 
decided to roll things out so that we can get a list of improvements from (and 
hopefully help from) the community.  If you would like to test the most recent 
changes on your own machine, you can do the following:

 git clone g...@forge.gluster.org:gluster-site/gluster-site.git

 git clone 
g...@forge.gluster.org:~eco/gluster-docs-project/ecos-gluster-docs-project.git 
(this is the temporary testing grounds, we will switch back to the official 
docs-project once initial evaluation is complete)

 cd gluster-site

 gem install bundle

 gem install middleman

 bundle exec middleman

You will need to manually copy or symlink the contents of the 
gluster-docs-project/htmltext/documentation folder into the 
gluster-site/source/documentation directory for now. Once you run 
`bundle exec middleman`, you can point a browser to localhost:4567.

Looking forward to feedback and submissions!

Regards,

Eco
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] rogue file

2013-07-09 Thread Eco Willson
1. What is gluster supposed to do when a non-gluster file (no gluster xattrs) 
appears on a gluster brick?
If the file is not accessed via the client mount point, nothing will happen to 
the file from a Gluster standpoint.

2. What does the current implementation actually do?
Once the file is accessed from the client mount point, the Gluster xattrs are 
added to the file.

3. Does it matter whether it is or is not a replicated brick?
4. Does it matter whether it is a striped brick?
Neither of these should matter.

Thinking about these scenarios after the file has been written:

A. No one ever looks at the file
Nothing will happen in this case

B. A self-heal runs
If the file has not had the xattrs added to it, self-heal will skip the file.

C. A directory read gets done on that sub-directory
An `ls` by itself will not add the xattrs, but a `stat` of the file will.

D. Someone actually tries to read the file (like if it was called readme.txt 
or some other common name)
A `stat` or `vi` in this case will add the xattrs.

In 3.4beta4 it appears even an ls on the file will work.
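
To check whether a particular file on a brick already has the Gluster xattrs, 
something like the following can be run directly against the brick path (the 
path here is only an example):

  getfattr -d -m . -e hex /export/brick1/readme.txt

A file Gluster has picked up will show attributes such as trusted.gfid; a rogue 
file written directly to the brick will not.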


Thanks,

Eco

- Original Message -
From: Ted Miller tmil...@hcjb.org
To: Gluster-users@gluster.org
Sent: Monday, July 8, 2013 3:20:17 PM
Subject: [Gluster-users] rogue file

A multi-part question relating to a scenario where a rogue program/person 
writes a file into a brick through the underlying file system (e.g. XFS) 
without going through glusterfs?

1. What is gluster supposed to do when a non-gluster file (no gluster xattrs) 
appears on a gluster brick?

2. What does the current implementation actually do?

3. Does it matter whether it is or is not a replicated brick?

4. Does it matter whether it is a striped brick?


Thinking about these scenarios after the file has been written:

A. No one ever looks at the file
B. A self-heal runs
C. A directory read gets done on that sub-directory
D. Someone actually tries to read the file (like if it was called readme.txt 
or some other common name)

Don't have an example at hand, just wondering what would happen as I work on 
putting together a test bed that will be either replica 2 or replica 4, 
unstriped.

Ted Miller
Elkhart, IN, USA


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Is it possible to manually specify brick replication location?

2013-07-08 Thread Eco Willson
Michael,

For your situation, you would want to use something like gluster volume create 
<volume name> replica 2 node1:/export/bricks/a node2:/export/bricks/a 
node1:/export/bricks/b node2:/export/bricks/b etc...

You are correct that the order in which you specify the bricks is how the 
pairing will occur.  Specifying replica 2 means there will always be at least 
two copies of the data, and if one of the nodes goes down, the other will pick 
up automatically.
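
For the four machines you listed, the full create command would look something 
like this (the volume name is just an example); bricks pair up in the order 
given, so each replica pair spans two different nodes:

  gluster volume create gv0 replica 2 \
      node1:/export/bricks/a node2:/export/bricks/a \
      node3:/export/bricks/a node4:/export/bricks/a \
      node1:/export/bricks/b node2:/export/bricks/b \
      node3:/export/bricks/b node4:/export/bricks/b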

Thanks,

Eco

- Original Message -
From: Michael Peek p...@nimbios.org
To: Gluster-users@gluster.org
Sent: Monday, July 8, 2013 8:46:47 AM
Subject: [Gluster-users] Is it possible to manually specify brick   
replication location?

Hi Gluster gurus, 

I'm new to Gluster, so if there is a solution already talked about somewhere 
then gladly point me to it and I'll get out of the way. That said, here's my 
problem: 

I have four machines. Each machine is running Ubuntu 12.04 with Gluster 3.2.5. 
Each machine has two drives: 

node1:/export/bricks/a 
node1:/export/bricks/b 
node2:/export/bricks/a 
node2:/export/bricks/b 
node3:/export/bricks/a 
node3:/export/bricks/b 
node4:/export/bricks/a 
node4:/export/bricks/b 

I created a volume with a single replication, added the bricks, mounted it to 
/mnt, and then created a file with `touch /mnt/this`. The file "this" appeared 
on the two bricks located on node1: 

node1:/export/bricks/a/this 
and 
node1:/export/bricks/b/this 

So if node1 goes down, all access to the file "this" is lost. It seemed to me 
that the order in which bricks were added dictated the replication location -- 
i.e. the second brick added is used as the replication destination for the 
first brick, and so on with the 3rd and 4th pair of bricks, 5th and 6th, etc. 

I've searched the archives, and this seems to be confirmed in a past post 
located here: 
http://supercolony.gluster.org/pipermail/gluster-users/2013-June/036272.html 




Replica sets are done in order that the bricks are added to the volume. 
... 



So, you have an issue here, that both bricks of a replica set are on the 
same host. 

Unfortunately, this was the end of the thread and no more information was 
forthcoming. 

Now, I'm just starting out, and my volume is not yet used in production, so I 
have the luxury of removing all the bricks and then adding them back in an 
order that allows for replication to be done across nodes the way that I want. 
But I see this as a serious problem. What happens down the road when I need to 
expand? 

How would I add another machine as a node, and then add its bricks, and still 
have replication done outside of that one machine? Is there a way to manually 
specify master/replication location? Is there a way to reshuffle replica 
bricks on a running system? 

A couple of solutions have presented themselves to me: 
1) Only add new nodes in pairs, and make sure to add bricks in the correct 
order. 
2) Only add new nodes in pairs, but setup two Gluster volumes and use 
geo-replication (even though the geographical distance between the two clusters 
may be as little as only 1 inch). 
3) Only add new nodes in pairs, and use RAID or LVM to glue the drives 
together, so that as far as Gluster is concerned, each node only has one brick. 

But each of these solutions involves adding new nodes in pairs, which increases 
the incremental cost of expansion more than it feels like it should. It just 
seems to me that there should be a smarter way to handle things than what I'm 
seeing before me, so I'm hoping that I've just missed something obvious. 

So what is the common wisdom among seasoned Gluster admins? 

Thanks for your help, 

Michael 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Best configuration for odd number of different sized bricks

2013-07-08 Thread Eco Willson
Maik,

You can use the add-brick command to add additional bricks to a single 
volume.  Setting up the volume as replica 2 will give you fault tolerance.  You 
will not need to add additional servers per se, but it will be required that 
you add at least two bricks at a time for this setup.
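
For example, when you later grow the volume, you would add bricks in pairs so 
each new replica set spans two servers (the volume name and brick paths below 
are placeholders):

  gluster volume add-brick <volname> server4:/export/brick1 server5:/export/brick1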

Thanks,

Eco

- Original Message -
From: Maik Kulbe i...@linux-web-development.de
To: gluster-users@gluster.org
Sent: Monday, July 8, 2013 5:15:56 AM
Subject: [Gluster-users] Best configuration for odd number of different sized 
bricks

Hi,

I just got into Gluster and so far what I have seen is impressive. Now I have 
some space I really would like to incorporate into a Gluster volume.

There are 5 servers containing storage in the range from several 100 GB to a 
few TB each. From what I've read there are several modes to create redundant 
storage.

Is there a possibility to aggregate the storage into a single volume with 
fault tolerance (e.g. n replicas) without needing to always add n new 
servers each time I want to add more storage to the volume?

$ Maik
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] files do not show up on gluster volume

2013-07-02 Thread Eco Willson
Matt,

Do you still have the problem after an `ls -lR` or `stat *` on the mount point?

Thanks,

Eco

- Original Message -
From: Matthew Sacks msacksda...@gmail.com
To: gluster-users@gluster.org
Sent: Tuesday, July 2, 2013 2:55:52 PM
Subject: Re: [Gluster-users] files do not show up on gluster volume

creating a new mount point does not help 


On Tue, Jul 2, 2013 at 2:53 PM, Matthew Sacks  msacksda...@gmail.com  wrote: 



I am trying to touch files on a mounted gluster mount point. 

gluster1:/gv0 24G 786M 22G 4% /mnt 
[root@centos63 ~]# cd /mnt 
[root@centos63 mnt]# ll 
total 0 
[root@centos63 mnt]# touch hi 
[root@centos63 mnt]# ll 
total 0 


The files don't show up after I ls them, but if I try to do a mv operation 
something very strange happens: 
[root@centos63 mnt]# mv /tmp/hi . 
mv: overwrite `./hi'? 

Strange, it seems to be there. 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] FW: cant mount gluster volume

2012-11-21 Thread Eco Willson

Steve,

The simplest way to troubleshoot (assuming that the nodes are not in 
production) would be:


1) Unmount from the clients
2) Stop gluster
3) `killall gluster{,d,fs,fsd}`
4) Start gluster again

Try to telnet to the ports again afterwards; at that point they would be expected to accept connections.
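
On an RPM-based install, that sequence would look roughly like the following 
(the service name assumes the stock init scripts, and the hostname is taken 
from this thread; adjust for your environment):

  umount /mnt/gluster          # on each client (example mount point)
  service glusterd stop        # on each server
  killall gluster{,d,fs,fsd}
  service glusterd start
  telnet mseas-data 24009      # retest the brick ports from each node and client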

Thanks,

Eco



On 11/21/2012 07:19 AM, Steve Postma wrote:

You're right, Eco.
I am only able to telnet on port 24007; ports 24009, 24010 and 24011 are all 
connection refused. Iptables is not running on any of the machines.


mseas-data: 24007 and 24009 are open, 24010 and 24011 closed
nas-0-0:    24007 open, 24009, 24010 and 24011 closed
nas-0-1:    24007 open, 24009, 24010 and 24011 closed



Steve

From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 6:32 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,
On 11/20/2012 02:43 PM, Steve Postma wrote:

Hi Eco,
I believe you are asking that I run

find /mount/glusterfs >/dev/null

only? That should take care of the issue?

Meaning, run a recursive find against the client mount point
(/mount/glusterfs is used as an example in the docs). This should solve
the specific issue of the files not being visible.
However, the issue of the disk space discrepancy is different. From the
df output, the only filesystem with 18GB is / on the mseas-data node; I
assume this is where you are mounting from?
If so, then the issue goes back to one of connectivity; the gluster
bricks most likely are still not being connected to, which may actually
be the root cause of both problems.

Can you confirm that iptables is off on all hosts (and from any client
you would connect from)? I had seen your previous tests with telnet,
was this done from and to all hosts from the client machine?
Make sure that at a minimum you can hit 24007, 24009, 24010 and 24011.
This will test the management port and the expected initial port for
each of the bricks in the volume.


Thanks,

Eco


Thanks for your time,
Steve


From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on behalf of 
Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 5:39 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,

On 11/20/2012 01:32 PM, Steve Postma wrote:

[root@mseas-data gdata]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 18G 6.6G 9.7G 41% /
/dev/sda6 77G 49G 25G 67% /scratch
/dev/sda3 18G 3.8G 13G 24% /var
/dev/sda2 18G 173M 16G 2% /tmp
tmpfs 3.9G 0 3.9G 0% /dev/shm
/dev/mapper/the_raid-lv_home
3.0T 2.2T 628G 79% /home
glusterfs#mseas-data:/gdata
15T 14T 606G 96% /gdata


[root@nas-0-0 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 137G 33G 97G 26% /
/dev/sda1 190M 24M 157M 14% /boot
tmpfs 2.0G 0 2.0G 0% /dev/shm
/dev/sdb1 21T 19T 1.5T 93% /mseas-data-0-0

[root@nas-0-1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 137G 34G 97G 26% /
/dev/sda1 190M 24M 157M 14% /boot
tmpfs 2.0G 0 2.0G 0% /dev/shm
/dev/sdb1 21T 19T 1.3T 94% /mseas-data-0-1


Thanks for confirming.

cat of /etc/glusterfs/glusterd.vol from backup

[root@mseas-data glusterd]# cat /root/mseas_backup/etc/glusterfs/glusterd.vol
volume management
type mgmt/glusterd
option working-directory /etc/glusterd
option transport-type socket,rdma
option transport.socket.keepalive-time 10
option transport.socket.keepalive-interval 2
end-volume


The vol file for 2.x would be in /etc/glusterfs/volume name.vol I believe. It 
should contain an entry similar to this output for each of the servers toward the top 
of the file.

Article you referenced is looking for the words glusterfs-volgen in a vol 
file. I have used locate and grep, but can find no such entry in any .vol files.


This would not appear if the glusterfs-volgen command wasn't used during 
creation. The main consideration is to ensure that you have the command in step 
5:

find /mount/glusterfs >/dev/null

- Eco

Thanks





From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on behalf of 
Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 4:03 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,



On 11/20/2012 12:03 PM, Steve Postma wrote:


They do show expected size. I have

Re: [Gluster-users] FW: cant mount gluster volume

2012-11-21 Thread Eco Willson

Steve,

On 11/21/2012 12:13 PM, Steve Postma wrote:

Eco, after stopping Gluster and restarting, same results as before: telnet is able to 
connect to 24007 but none of the other ports.  I noticed one machine has a process running 
that the other two do not. PID 22603 refers to --volfile-id 
gdata.gluster-data.data
and is only running on the one machine. Is this correct?
If you have a client mount on this machine, then this is expected. If 
24009 is available then that is fine; one port is consumed per brick, but 
in instances where gluster has restarted for some reason, the port can 
increment.  Is the df -h output on the client mount correct now, or still 
showing as 18GB?


- Eco




[root@mseas-data ~]# ps -ef | grep gluster
root 22582 1  0 15:00 ?00:00:00 /usr/sbin/glusterd -p 
/var/run/glusterd.pid
root 22603 1  0 15:00 ?00:00:00 /usr/sbin/glusterfsd -s 
localhost --volfile-id gdata.gluster-data.data -p 
/var/lib/glusterd/vols/gdata/run/gluster-data-data.pid -S 
/tmp/e3eac7ce95e786a3d909b8fc65ed2059.socket --brick-name /data -l 
/var/log/glusterfs/bricks/data.log --xlator-option 
*-posix.glusterd-uuid=22f1102a-08e6-482d-ad23-d8e063cf32ed --brick-port 24009 
--xlator-option gdata-server.listen-port=24009
root 22609 1  0 15:00 ?00:00:00 /usr/sbin/glusterfs -s 
localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l 
/var/log/glusterfs/nfs.log -S /tmp/d5c892de43c28a1ee7481b780245b789.socket
root 22690 22511  0 15:01 pts/000:00:00 grep gluster



[root@nas-0-0 ~]# ps -ef | grep gluster
root  7943 1  3 14:43 ?00:00:00 /usr/sbin/glusterd -p 
/var/run/glusterd.pid
root  7965 1  0 14:43 ?00:00:00 /usr/sbin/glusterfs -s 
localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l 
/var/log/glusterfs/nfs.log -S /tmp/8f87e178e9707e4694ee7a2543c66db9.socket
root  7976  7898  0 14:43 pts/100:00:00 grep gluster
[root@nas-0-0 ~]#
[root@nas-0-1 ~]# ps -ef | grep gluster
root  7567 1  4 14:47 ?00:00:00 /usr/sbin/glusterd -p 
/var/run/glusterd.pid
root  7589 1  0 14:47 ?00:00:00 /usr/sbin/glusterfs -s 
localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l 
/var/log/glusterfs/nfs.log -S /tmp/6054da6605d9f9d1c1e99252f1d235a6.socket
root  7600  7521  0 14:47 pts/200:00:00 grep gluster

From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Eco Willson [ewill...@redhat.com]
Sent: Wednesday, November 21, 2012 2:52 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,

The simplest way to troubleshoot (assuming that the nodes are not in
production) would be:

1) unmounting from the clients
2) stopping gluster
3) `killall gluster{,d,fs,fsd}`
4) Start gluster again

Try to telnet to the ports again afterwards which would be expected to work.

Thanks,

Eco



On 11/21/2012 07:19 AM, Steve Postma wrote:

Your right Eco
I am only able to telnet on port 24007, ports 24009, 24010 and 24011 are all 
connection refused . Iptables is not running on any of the machines


mseas-data 24007, 24009 are open, 24010 and 24011 closed
nas-0-0 24007 open, 24009,24010 and 24011 closed
nas-0-1 24007 open, 24009,24010 and 24011 closed



Steve

From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on behalf of 
Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 6:32 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,
On 11/20/2012 02:43 PM, Steve Postma wrote:

Hi Eco,
I believe you are asking that I run

find /mount/glusterfs >/dev/null

only? That should take care of the issue?

Meaning, run a recursive find against the client mount point
(/mount/glusterfs is used as an example in the docs). This should solve
the specific issue of the files not being visible.
However, the issue of the disk space discrepancy is different. From the
df output, the only filesystem with 18GB is / on the mseas-data node, I
assume this is where you are mounting from?
If so, then the issue goes back to one of connectivity, the gluster
bricks most likely are still not being connected to, which may actually
be the root cause of both problems.

Can you confirm that iptables is off on all hosts (and from any client
you would connect from)? I had seen your previous tests with telnet,
was this done from and to all hosts from the client machine?
Make sure that at a minimum you can hit 24007, 24009, 24010 and 24011.
This will test the management port and the expected initial port for
each of the bricks in the volume.


Thanks,

Eco


Thanks for your time,
Steve


From: 
gluster-users-boun

Re: [Gluster-users] cant mount gluster volume

2012-11-20 Thread Eco Willson

On 11/19/2012 09:37 PM, John Mark Walker wrote:

Steve - have you been to #gluster on IRC? I recommend you drop by tomorrow 
morning.

-JM


- Original Message -

Thanks, you're right. I can telnet to both ports, 24009 and 24007.

From: John Mark Walker [johnm...@redhat.com]
Sent: Monday, November 19, 2012 3:44 PM
To: Steve Postma
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] cant mount gluster volume



- Original Message -

I connect on 24009 glusterfs and fail on 27040 glusterd
Steve

27040 is the PID. Were you connecting to the right port? :)

-JM

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users



Steve,

Some simple things to check in this case would be:

Is iptables running anywhere (clients/servers)?
Are you using RDMA? (test port 24008 if so)
Patrick mentioned that there was a previous cluster; did any hostnames 
or IPs change for these nodes?

Is glusterd running on all servers?
Did you do anything to modify the vol files? (Hand editing is no longer 
required, and moreover, not supported)


In most cases checking those will fix this type of issue.  I did notice 
from the snippet that you had "Transport endpoint is not connected" 
errors to all three nodes (10.1.1.2, 10.1.1.10, 10.1.1.11); you should 
run these checks to and from all the nodes to be sure.
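
A quick way to run those checks on each node (and from the client) would be 
something like the following; the service name assumes the stock init scripts, 
and the IP is taken from the log snippet:

  service iptables status        # confirm the firewall is off
  gluster peer status            # confirm glusterd sees its peers
  telnet 10.1.1.2 24007          # repeat for 24009/24010/24011 and for each node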


Thanks,

Eco


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] FW: cant mount gluster volume

2012-11-20 Thread Eco Willson

Steve,

The volume is a pure distribute:


Type: Distribute

In order to have files replicate, you need
1) to have a number of bricks that is a multiple of the replica count, 
e.g., for your three node configuration, you would need two bricks per 
node to set up replica two.  You could set up replica 3, but you will 
take a performance hit in doing so.

2) to add a replica count during the volume creation, e.g.
`gluster volume create <vol name> replica 2 server1:/export server2:/export`

From the volume info you provided, the export directories are different 
for all three nodes:


Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data
 

Which node are you trying to mount to /data?  If it is not the 
gluster-data node, then it will fail if there is not a /data directory.  
In this case, it is a good thing, since mounting to /data on gluster-0-0 
or gluster-0-1 would not accomplish what you need.
To clarify, there is a distinction to be made between the export volume 
mount and the gluster mount point.  In this case, you are mounting the 
brick.
In order to see all the files, you would need to mount the volume with 
the native client, or NFS.

For the native client:
mount -t glusterfs gluster-data:/gdata /mnt/<gluster mount dir>
For NFS:
mount -t nfs -o vers=3 gluster-data:/gdata /mnt/<gluster mount dir>


Thanks,

Eco
On 11/20/2012 09:42 AM, Steve Postma wrote:

  I have a 3 node gluster cluster that had 3.1.4 uninstalled and 3.3.1 
installed.

I had some mounting issues yesterday, from a rocks 6.2 install to the cluster. 
I was able to overcome those issues and mount the export on my node. Thanks to 
all for your help.

However, I can only view the portion of files that is directly stored on the 
one brick in the cluster. The other bricks do not seem to be replicating, tho 
gluster reports the volume as up.

[root@mseas-data ~]# gluster volume info
Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data



The brick we are attaching to has this in the fstab file.
/dev/mapper/the_raid-lv_data /data  xfs  quota,noauto  1 0


but mount -a does not appear to do anything.
I have to run mount -t xfs  /dev/mapper/the_raid-lv_data /data
manually to mount it.



Any help with troubleshooting why we are only seeing data from 1 brick of 3 
would be appreciated,
Thanks,
Steve Postma








From: Steve Postma
Sent: Monday, November 19, 2012 3:29 PM
To: gluster-users@gluster.org
Subject: cant mount gluster volume

  I am still unable to mount a new 3.3.1 glusterfs install. I have tried from 
one of the actual machines in the cluster to itself, as well as from various 
other clients. They all seem to be failing in the same part of the process.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users




Re: [Gluster-users] FW: cant mount gluster volume

2012-11-20 Thread Eco Willson

Steve,




Does df -h show the expected directories on each server, and do they 
show the expected size?


If the file


On 11/20/2012 11:09 AM, Steve Postma wrote:

Hi Eco, thanks for your help.

If I run on brick 1:
mount -t glusterfs gluster-data:/gdata /gdata

it mounts but appears as a 18 GB partition with nothing in it
To confirm, are the export directories mounted properly on all three 
servers?
Does df -h show the expected directories on each server, and do they 
show the expected size?

Does gluster volume info show the same output on all three servers?


I can mount it from the client, but again, there is nothing in it.



Before upgrade this was a 50 TB gluster volume. Was that volume information 
lost with upgrade?
Do you have the old vol files from before the upgrade?  It would be good 
to see them to make sure the volume got recreated properly.

The file structure appears intact on each brick.
As long as the file structure is intact, you will be able to recreate 
the volume although it may require a potentially painful rsync in the 
worst case.


- Eco





Steve



From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 1:29 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,

The volume is a pure distribute:


Type: Distribute

In order to have files replicate, you need
1) to have a number of bricks that is a multiple of the replica count,
e.g., for your three node configuration, you would need two bricks per
node to set up replica two. You could set up replica 3, but you will
take a performance hit in doing so.
2) to add a replica count during the volume creation, e.g.
`gluster volume create vol name replica 2 server1:/export server2:/export

 From the volume info you provided, the export directories are different
for all three nodes:

Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data


Which node are you trying to mount to /data? If it is not the
gluster-data node, then it will fail if there is not a /data directory.
In this case, it is a good thing, since mounting to /data on gluster-0-0
or gluster-0-1 would not accomplish what you need.
To clarify, there is a distinction to be made between the export volume
mount and the gluster mount point. In this case, you are mounting the
brick.
In order to see all the files, you would need to mount the volume with
the native client, or NFS.
For the native client:
mount -t glusterfs gluster-data:/gdata /mnt/gluster mount dir
For NFS:
mount -t nfs -o vers=3 gluster-data:/gdata /mnt/gluster mount dir


Thanks,

Eco
On 11/20/2012 09:42 AM, Steve Postma wrote:

I have a 3 node gluster cluster that had 3.1.4 uninstalled and 3.3.1 installed.

I had some mounting issues yesterday, from a rocks 6.2 install to the cluster. 
I was able to overcome those issues and mount the export on my node. Thanks to 
all for your help.

However, I can only view the portion of files that is directly stored on the 
one brick in the cluster. The other bricks do not seem to be replicating, tho 
gluster reports the volume as up.

[root@mseas-data ~]# gluster volume info
Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data



The brick we are attaching to has this in the fstab file.
/dev/mapper/the_raid-lv_data /data xfs quota,noauto 1 0


but mount -a does not appear to do anything.
I have to run mount -t xfs /dev/mapper/the_raid-lv_data /data
manually to mount it.



Any help with troubleshooting why we are only seeing data from 1 brick of 3 
would be appreciated,
Thanks,
Steve Postma








From: Steve Postma
Sent: Monday, November 19, 2012 3:29 PM
To: gluster-users@gluster.org
Subject: cant mount gluster volume

I am still unable to mount a new 3.3.1 glusterfs install. I have tried from one 
of the actual machines in the cluster to itself, as well as from various other 
clients. They all seem to be failing in the same part of the process.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] FW: cant mount gluster volume

2012-11-20 Thread Eco Willson

Steve,



On 11/20/2012 12:03 PM, Steve Postma wrote:

They do show expected size. I have a backup of /etc/glusterd and /etc/glusterfs 
from before the upgrade.
Can we see the vol file from the 2.x install and the output of df -h for 
each of the bricks?


It's interesting that gluster volume info shows the correct path for each 
machine.

These are the correct mountpoints on each machine, and from each machine I can 
see the files and structure.
If the volume was created in a different order than before, then it is 
expected you would be able to see the files only from the backend 
directories and not from the client mount.
If this is the case, recreating the volume in the correct order should 
show the files from the mount.
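
If it does come to recreating the volume, a sketch based on the volume info in 
this thread would be something like the following (on 3.3 you may also need to 
clear stale volume xattrs on each brick directory before the create is accepted):

  gluster volume create gdata transport tcp \
      gluster-0-0:/mseas-data-0-0 gluster-0-1:/mseas-data-0-1 gluster-data:/data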
If the volume was recreated properly, make sure you have followed the 
upgrade steps to go from versions prior to 3.1:

http://www.gluster.org/community/documentation/index.php/Gluster_3.0_to_3.2_Upgrade_Guide

This would explain why the files can't be viewed from the client, but 
the size discrepancy isn't expected if we see the expected output from 
df for the bricks.





[root@mseas-data data]# gluster volume info

Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data




From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 3:02 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,




Does df -h show the expected directories on each server, and do they
show the expected size?

If the file


On 11/20/2012 11:09 AM, Steve Postma wrote:

Hi Eco, thanks for your help.

If I run on brick 1:
mount -t glusterfs gluster-data:/gdata /gdata

it mounts but appears as a 18 GB partition with nothing in it

To confirm, are the export directories mounted properly on all three
servers?
Does df -h show the expected directories on each server, and do they
show the expected size?
Does gluster volume info show the same output on all three servers?

I can mount it from the client, but again, there is nothing in it.



Before upgrade this was a 50 TB gluster volume. Was that volume information 
lost with upgrade?

Do you have the old vol files from before the upgrade? It would be good
to see them to make sure the volume got recreated properly.

The file structure appears intact on each brick.

As long as the file structure is intact, you will be able to recreate
the volume although it may require a potentially painful rsync in the
worst case.

- Eco




Steve



From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on behalf of 
Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 1:29 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,

The volume is a pure distribute:


Type: Distribute

In order to have files replicate, you need
1) to have a number of bricks that is a multiple of the replica count,
e.g., for your three node configuration, you would need two bricks per
node to set up replica two. You could set up replica 3, but you will
take a performance hit in doing so.
2) to add a replica count during the volume creation, e.g.
`gluster volume create vol name replica 2 server1:/export server2:/export

 From the volume info you provided, the export directories are different
for all three nodes:

Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data


Which node are you trying to mount to /data? If it is not the
gluster-data node, then it will fail if there is not a /data directory.
In this case, it is a good thing, since mounting to /data on gluster-0-0
or gluster-0-1 would not accomplish what you need.
To clarify, there is a distinction to be made between the export volume
mount and the gluster mount point. In this case, you are mounting the
brick.
In order to see all the files, you would need to mount the volume with
the native client, or NFS.
For the native client:
mount -t glusterfs gluster-data:/gdata /mnt/gluster mount dir
For NFS:
mount -t nfs -o vers=3 gluster-data:/gdata /mnt/gluster mount dir


Thanks,

Eco
On 11/20/2012 09:42 AM, Steve Postma wrote:

I have a 3 node gluster cluster that had 3.1.4 uninstalled and 3.3.1 installed.

I had some mounting issues yesterday, from a rocks 6.2 install to the cluster. 
I was able to overcome those issues and mount the export on my node. Thanks to 
all for your help.

However, I can only view the portion of files that is directly stored on the 
one brick in the cluster. The other bricks do

Re: [Gluster-users] FW: cant mount gluster volume

2012-11-20 Thread Eco Willson

Steve,

On 11/20/2012 01:32 PM, Steve Postma wrote:

  [root@mseas-data gdata]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda1  18G  6.6G  9.7G  41% /
/dev/sda6  77G   49G   25G  67% /scratch
/dev/sda3  18G  3.8G   13G  24% /var
/dev/sda2  18G  173M   16G   2% /tmp
tmpfs 3.9G 0  3.9G   0% /dev/shm
/dev/mapper/the_raid-lv_home
   3.0T  2.2T  628G  79% /home
glusterfs#mseas-data:/gdata
15T   14T  606G  96% /gdata


[root@nas-0-0 ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda3 137G   33G   97G  26% /
/dev/sda1 190M   24M  157M  14% /boot
tmpfs 2.0G 0  2.0G   0% /dev/shm
/dev/sdb1  21T   19T  1.5T  93% /mseas-data-0-0

[root@nas-0-1 ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda3 137G   34G   97G  26% /
/dev/sda1 190M   24M  157M  14% /boot
tmpfs 2.0G 0  2.0G   0% /dev/shm
/dev/sdb1  21T   19T  1.3T  94% /mseas-data-0-1

Thanks for confirming.



  cat of /etc/glusterfs/glusterd.vol from backup

[root@mseas-data glusterd]# cat /root/mseas_backup/etc/glusterfs/glusterd.vol
volume management
 type mgmt/glusterd
 option working-directory /etc/glusterd
 option transport-type socket,rdma
 option transport.socket.keepalive-time 10
 option transport.socket.keepalive-interval 2
end-volume
The vol file for 2.x would be in /etc/glusterfs/<volume name>.vol I 
believe. It should contain an entry similar to this output for each of 
the servers toward the top of the file.




Article you referenced is looking for the words glusterfs-volgen in a vol 
file. I have used locate and grep, but can find no such entry in any .vol files.
This would not appear if the glusterfs-volgen command wasn't used during 
creation.  The main consideration is to ensure that you have run the command 
in step 5:


find /mount/glusterfs >/dev/null

- Eco


Thanks





From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 4:03 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,



On 11/20/2012 12:03 PM, Steve Postma wrote:

They do show expected size. I have a backup of /etc/glusterd and /etc/glusterfs 
from before the upgrade.

Can we see the vol file from the 2.x install and the output of df -h for
each of the bricks?

It's interesting that gluster volume info shows the correct path for each 
machine.

These are the correct mountpoints on each machine, and from each machine I can 
see the files and structure.

If the volume was created in a different order than before, then it is
expected you would be able to see the files only from the backend
directories and not from the client mount.
If this is the case, recreating the volume in the correct order should
show the files from the mount.
If the volume was recreated properly, make sure you have followed the
upgrade steps to go from versions prior to 3.1:
http://www.gluster.org/community/documentation/index.php/Gluster_3.0_to_3.2_Upgrade_Guide

This would explain why the files can't be viewed from the client, but
the size discrepancy isn't expected if we see the expected output from
df for the bricks.



[root@mseas-datamailto:root@mseas-data data]# gluster volume info

Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data




From: gluster-users-boun...@gluster.orgmailto:gluster-users-boun...@gluster.org 
[gluster-users-boun...@gluster.orgmailto:gluster-users-boun...@gluster.org] on behalf of 
Eco Willson [ewill...@redhat.commailto:ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 3:02 PM
To: gluster-users@gluster.orgmailto:gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,




Does df -h show the expected directories on each server, and do they
show the expected size?

If the file


On 11/20/2012 11:09 AM, Steve Postma wrote:

Hi Eco, thanks for your help.

If I run on brick 1:
mount -t glusterfs gluster-data:/gdata /gdata

it mounts but appears as a 18 GB partition with nothing in it

To confirm, are the export directories mounted properly on all three
servers?
Does df -h show the expected directories on each server, and do they
show the expected size?
Does gluster volume info show the same output on all three servers?

I can mount it from the client, but again, there is nothing in it.



Before upgrade this was a 50 TB gluster volume. Was that volume information 
lost with upgrade?

Do you have the old vol files from before the upgrade

Re: [Gluster-users] FW: cant mount gluster volume

2012-11-20 Thread Eco Willson

Steve,
On 11/20/2012 02:43 PM, Steve Postma wrote:

Hi Eco,
I believe you are asking that I run

find /mount/glusterfs >/dev/null

only? That should take care of the issue?
Meaning, run a recursive find against the client mount point 
(/mount/glusterfs is used as an example in the docs).  This should solve 
the specific issue of the files not being visible.
However, the issue of the disk space discrepancy is different.  From the 
df output, the only filesystem with 18GB is / on the mseas-data node; I 
assume this is where you are mounting from?
If so, then the issue goes back to one of connectivity; the gluster 
bricks most likely are still not being connected to, which may actually 
be the root cause of both problems.


Can you confirm that iptables is off on all hosts (and on any client 
you would connect from)?  I had seen your previous tests with telnet; 
was this done from and to all hosts, and from the client machine?
Make sure that at a minimum you can hit 24007, 24009, 24010 and 24011.  
This will test the management port and the expected initial port for 
each of the bricks in the volume.
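
(A quick way to run that test from the client toward each server; telnet works just as 
well, and the hostnames here are simply the ones from this thread:)

for host in gluster-0-0 gluster-0-1 gluster-data; do
    for port in 24007 24009 24010 24011; do
        nc -z -w 3 $host $port && echo "$host:$port open" || echo "$host:$port closed"
    done
done
# iptables should be off (or allow these ports) on every host
service iptables status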



Thanks,

Eco


Thanks for your time,
Steve


From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 5:39 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,

On 11/20/2012 01:32 PM, Steve Postma wrote:

  [root@mseas-data gdata]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda1  18G  6.6G  9.7G  41% /
/dev/sda6  77G   49G   25G  67% /scratch
/dev/sda3  18G  3.8G   13G  24% /var
/dev/sda2  18G  173M   16G   2% /tmp
tmpfs 3.9G 0  3.9G   0% /dev/shm
/dev/mapper/the_raid-lv_home
   3.0T  2.2T  628G  79% /home
glusterfs#mseas-data:/gdata
15T   14T  606G  96% /gdata


[root@nas-0-0 ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda3 137G   33G   97G  26% /
/dev/sda1 190M   24M  157M  14% /boot
tmpfs 2.0G 0  2.0G   0% /dev/shm
/dev/sdb1  21T   19T  1.5T  93% /mseas-data-0-0

[root@nas-0-1 ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda3 137G   34G   97G  26% /
/dev/sda1 190M   24M  157M  14% /boot
tmpfs 2.0G 0  2.0G   0% /dev/shm
/dev/sdb1  21T   19T  1.3T  94% /mseas-data-0-1


Thanks for confirming.

  cat of /etc/glusterfs/glusterd.vol from backup

[root@mseas-data glusterd]# cat /root/mseas_backup/etc/glusterfs/glusterd.vol
volume management
 type mgmt/glusterd
 option working-directory /etc/glusterd
 option transport-type socket,rdma
 option transport.socket.keepalive-time 10
 option transport.socket.keepalive-interval 2
end-volume


The vol file for 2.x would be in /etc/glusterfs/<volume name>.vol I believe. It 
should contain an entry similar to this output for each of the servers toward the top 
of the file.
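
(For comparison, a from-memory sketch of what a glusterfs-volgen generated client 
entry looked like in 2.x; the names here are placeholders, not taken from Steve's 
system:)

volume gluster-0-0-1
    type protocol/client
    option transport-type tcp
    option remote-host gluster-0-0
    option remote-subvolume brick
end-volume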

The article you referenced is looking for the words glusterfs-volgen in a vol 
file. I have used locate and grep, but can find no such entry in any .vol files.


This would not appear if the glusterfs-volgen command wasn't used during 
creation.  The main consideration is to ensure that you have the command in 
step 5:

find /mount/glusterfs > /dev/null

- Eco

Thanks





From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on behalf of 
Eco Willson [ewill...@redhat.com]
Sent: Tuesday, November 20, 2012 4:03 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] FW: cant mount gluster volume

Steve,



On 11/20/2012 12:03 PM, Steve Postma wrote:


They do show the expected size. I have a backup of /etc/glusterd and /etc/glusterfs 
from before the upgrade.


Can we see the vol file from the 2.x install and the output of df -h for
each of the bricks?


It's interesting that gluster volume info shows the correct path for each 
machine.

These are the correct mountpoints on each machine, and from each machine I can 
see the files and structure.


If the volume was created in a different order than before, then it is
expected you would be able to see the files only from the backend
directories and not from the client mount.
If this is the case, recreating the volume in the correct order should
show the files from the mount.
If the volume was recreated properly, make sure you have followed the
upgrade steps to go from versions prior to 3.1:
http://www.gluster.org/community/documentation/index.php/Gluster_3.0_to_3.2_Upgrade_Guide

This would explain why the files can't be viewed from the client, but
the size discrepancy isn't expected if we see the expected output from 
df for the bricks.

[Gluster-users] Gluster Community Office Hours, Oct 26 2012 edition

2012-10-25 Thread Eco Willson

Greetings all,

We will be presenting another session of the Gluster Community office 
hours tomorrow at 2PM Eastern/11 AM Pacific.  Tomorrow we will focus on 
basic troubleshooting of some of the most common issues with setting up and 
using Gluster.  Join us in the #gluster-meeting room on 
irc.freenode.net; we will be posting the live video feed again tomorrow.


Hope to see you then,

Eco
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] New Gluster community documentation and infrastructure IRC meeting

2012-09-20 Thread Eco Willson
Greetings,

We are working on providing some additions to our documentation with a
focus on enabling new users, and compiling a lot of the tribal knowledge
into a single place for people to read, review and revise.  Some needs
this effort is intended to address:

  * Showcase common best practices that aren't always commonly known
  * Aggregate the most common troubleshooting and configuration
questions into one location
  * Give new users an easy way to get up to speed with Gluster
  * Provide a community driven set of standard configurations for
implementing Gluster in various ways, such as for use with web servers
  * How To's, Tips and Tricks intended to help expedite setting up
various components of Gluster

A lot of what you will see here initially may be things you already have
seen on the mailing list or in the IRC channel.  So why do it?  We see a
clear need for a place where new users can get educated on Gluster
concepts and implementation.  Getting new users up to speed faster means
getting more meaningful contributions faster as well.  And let's not
forget all the hard work put in by glusterbot!  Secondly, as you may
know, we are currently looking into changing some of the functionality
of the community site (keep reading if this is news to you, or if you
want to know how to contribute).  This will be a great place to
aggregate information during any transitional periods that may be required.

For those of you whose ears and/or eyes perked up at the mention of
contributing to the community site, we invite you to join us tonight at
10PM Eastern in the #gluster-meeting channel.  Tonight we continue
discussion on how to drive more community centered documentation going
forward, as well as discuss what technologies we can use in place of
some of our existing infrastructure. 

Thanks, and looking forward to seeing you tonight,

Eco

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] New user docs

2012-09-20 Thread Eco Willson
We are excited to announce  some new user guides on the Gluster.org 
wiki.  There are three different flavors available.


Quick Start guide - single page, intended to be completed in a lunch 
break or less

http://www.gluster.org/community/documentation/index.php/QuickStart

Getting Started guide - more comprehensive and includes instructions on 
setting up in multiple environments and distributions

http://www.gluster.org/community/documentation/index.php/Getting_started_overview

Really Really Quick Start Guide - single page subset of the getting 
started guide, a handy reference sheet with the few steps needed to get 
going with Gluster

http://www.gluster.org/community/documentation/index.php/Getting_started_rrqsg

Thanks and enjoy,

Eco
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users