Re: [Gluster-users] Bring up a brick after disk failure

2014-03-19 Thread Jon Tegner

I managed to add the brick by using the "force"-flag, i.e.,


"gluster volume add-brick gluster s1:/mnt/raid6 force"

Hopefully there are no drawbacks involved with this...
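For anyone hitting the same "Stage failed" error on add-brick: before reaching
for the force flag, a common thing to check (a sketch, not a verified diagnosis
of this particular failure) is whether the brick directory still carries
gluster metadata from an earlier life as a brick, and what glusterd logged on
the staging peer:

getfattr -d -m . -e hex /mnt/raid6        # look for trusted.glusterfs.volume-id / trusted.gfid
setfattr -x trusted.glusterfs.volume-id /mnt/raid6   # only if add-brick says the path is already part of a volume
setfattr -x trusted.gfid /mnt/raid6
rm -rf /mnt/raid6/.glusterfs
less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # the staging failure is often explained here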

/jon

On 19/03/14 12:17, teg...@renget.se wrote:

Hi,

One of my bricks suffered from complete raid failure, (3 disks on 
raid6). I created a new raid, and wanted to bring the brick back up in 
the volume. Did the following

1. Removed it with
gluster volume remove-brick gluster s1:/mnt/raid6 start
gluster volume remove-brick gluster s1:/mnt/raid6 commit

2. Detached with
gluster peer detach s1

3. Probed anew:
gluster peer probe s1

4. Tried to add with
gluster volume add-brick gluster s1:/mnt/raid6

But last one fails with
Stage failed on operation 'Volume Add brick'

I'm using 3.4.2.

Thanks,

/jon


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users



Re: [Gluster-users] Problem with duplicate files

2014-02-14 Thread Jon Tegner



On 13/02/14 12:39, Vijay Bellur wrote:

On 02/11/2014 08:48 PM, teg...@renget.se wrote:

Hi,

have a system consisting of 4 bricks, distributed, 3.4.1, and I have
noticed that some of the files are stored on three of the bricks.
Typically a listing can look something like this:

brick1: -T 2 root root 0 Feb 11 15:47 /mnt/raid6/file
brick2: -rw--- 2 2686 2022 10545 Mar  6  2012 /mnt/raid6/file
brick3: -rw--- 2 2686 2022 10545 Mar  6  2012 /mnt/raid6/file
brick4: no such file or directory
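The zero-length entry with the T (sticky) bit on brick1 is what a DHT link file
looks like; a link file carries a trusted.glusterfs.dht.linkto xattr pointing
at the subvolume that holds the real data. Whether that is what it is can be
checked directly on the brick (a sketch, using the path from the listing above):

getfattr -n trusted.glusterfs.dht.linkto -e text /mnt/raid6/file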


Can you please post your volume configuration?

gluster volume info gives:

**
Volume Name: glusterKumiko
Type: Distribute
Volume ID: f54efdc7-0664-409d-b3bc-c39f31040f5e
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: kumiko01IB:/mnt/raid6
Brick2: kumiko02IB:/mnt/raid6
Brick3: kumiko03IB:/mnt/raid6
Brick4: kumiko04IB:/mnt/raid6
**

Also, the file /etc/glusterfs/glusterd.vol have the following content:

**
volume management
type mgmt/glusterd
option working-directory /var/lib/glusterd
option transport-type socket,rdma
option transport.socket.keepalive-time 10
option transport.socket.keepalive-interval 2
option transport.socket.read-fail-log off
#   option base-port 49152
end-volume
***

Should I be worried that rdma is listed as transport-type (and that tcp 
is not listed)?







There are quite a few of these files, and it would be tedious to clean
them up manually. Is there a way to have gluster fix these?



This is not a normal occurrence. Do you happen to know the sequence of 
steps which resulted in this?

I probably messed up somewhere along the way:

1. When the volume was created (with RDMA) two of the bricks were 
populated with files I wanted to "merge into" gluster. It seemed as if 
they became "incorporated" after doing a "rsync -vaun" from our primary 
file system (the system in question is used as a mirror).


2. At a later time I wanted to move away from RDMA (using IPoIB instead), 
so I removed the old volume, following the hints at


http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/

and created the new volume (with tcp instead of rdma).

3. Just recently I upgraded to 3.4.2-1, and started a

gluster volume rebalance glusterKumiko fix-layout start

(since I have a lot of "disk layout missing" and "mismatching layouts" 
in the logs).

Regards, and thanks!

/jon
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Copy files from brick folders.

2013-12-08 Thread Jon Tegner

On 07/12/13 18:00, Vijay Bellur wrote:
If there is no write activity (either application driven or gluster's 
maintenance operations like self-heal, rebalance), then reading from 
the bricks might be fine. Since these operations are asynchronous in 
nature and since most applications are not written to be aware of the 
implications of loose consistency, the recommended method is to 
perform all reads through a gluster client stack. 
This seems very reasonable. But what about the case where a new 
installation of gluster is supposed to "incorporate" files already 
present at the brick mount points: how does one "bring them in" under gluster?
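For reference, the approach usually suggested for this is to mount the volume
with a gluster client and copy the pre-existing data in through the mount,
rather than leaving it on the bricks. A rough sketch, with placeholder server,
volume name and paths (not taken from this thread), assuming the old data has
been moved aside to /data/old first:

mount -t glusterfs server1:/glusterVol /mnt/glusterfs
rsync -a /data/old/ /mnt/glusterfs/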


Regards,

/jon
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 3.4.0 and 3.3.2 released!

2013-07-16 Thread Jon Tegner
Great!




Much appreciated!




Two (stupid) questions:




1. Which of these two releases would work "best" using RDMA?

2. The release notes for 3.4.0 state that the ext4 issue has been
addressed - does this mean that it is safe to use ext4 with later
kernels now?




Regards, and thanks!




/jon



On Jul 15, 2013 18:38 "Vijay Bellur"  wrote:

> Hi All,
> 
> 3.4.0 and 3.3.2 releases of GlusterFS are now available. GlusterFS
> 3.4.0
> can be downloaded from [1]
> and release notes are available at [2]. Upgrade instructions can be
> found at [3].
> 
> If you would like to propose bug fix candidates or minor features for
> inclusion in 3.4.1, please add them at [4].
> 
> 3.3.2 packages can be downloaded from [5].
> 
> A big note of thanks to everyone who helped in getting these releases
> out!
> 
> Cheers,
> Vijay
> 
> [1] 
> 
> [2]
>  es/3.4.0.md>
> 
> [3]
> 
> 
> [4]
>  hlist#Requested_Backports_for_3.4.1>
> 
> [5] 
> ___
> Gluster-users mailing list
> 
> 
> ___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] cluster.min-free-disk working?

2013-06-19 Thread Jon Tegner

On 20/06/13 06:15, Vijay Bellur wrote:
With 3.3.1, the percentage was being interpreted as a number. A 
workaround was to configure using bytes and the references allude to 
this.


Can you explain a bit more about the issue being seen with 3.3.2qa3? 
Are new files being created in bricks that are more than 80% full 
after setting min.free-disk to 20%?


Regards,
Vijay


Hi!

Yes, new files are being created even on the servers that are already more than 80% full.

Apart from cluster.min-free-disk I haven't made any other configuration changes. I 
introduced this limit when two of the bricks were full to about 80%, and now 
they are filled up to about 90%.


My guess is that it could have something to do with the fact that the 
two disks in question were already populated when I installed gluster 
on them, and that those files are not accounted for?
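For reference, the byte-based workaround Vijay refers to would look roughly
like the following; the 1TB figure is only an example value, not a
recommendation:

gluster volume set glusterKumiko cluster.min-free-disk 1TB
gluster volume info glusterKumiko     # the value shows up under "Options Reconfigured"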


Regards, and thanks!

/jon
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] cluster.min-free-disk working?

2013-06-14 Thread Jon Tegner
Maybe my question was a bit "involved"; I'll try again:



While searching the web I have found various issues connected to
"cluster.min-free-disk" (e.g., that one should specify a size in bytes
rather than a percentage). Would it be possible to get an update on the
current status?




Thanks,




/jon











On Jun 11, 2013 16:47 "Jon Tegner"  wrote:

> Hi,
> 
> 
> 
> have a system consisting of four bricks, using 3.3.2qa3. I used the
> command
> 
> 
> 
> 
> gluster volume set glusterKumiko cluster.min-free-disk 20%
> 
> 
> 
> Two of the bricks were empty, and two were full to just under 80% when
> building the volume.
> 
> 
> 
> Now, when syncing data (from a primary system), and using
> min-free-disk
> 20% I thought new data would go to the two empty bricks, but gluster
> does not seem to honor the 20% limit.
> 
> 
> 
> Have I missed something here?
> 
> 
> 
> Thanks!
> 
> 
> 
> /jon
> 
> 
> 
> 
> 
> 
> 
> ***gluster volume info
> 
> 
> 
> 
> Volume Name: glusterKumiko
> 
> Type: Distribute
> 
> Volume ID: 8f639d0f-9099-46b4-b597-244d89def5bd
> 
> Status: Started
> 
> Number of Bricks: 4
> 
> Transport-type: tcp,rdma
> 
> Bricks:
> 
> Brick1: kumiko01:/mnt/raid6
> 
> Brick2: kumiko02:/mnt/raid6
> 
> Brick3: kumiko03:/mnt/raid6
> 
> Brick4: kumiko04:/mnt/raid6
> 
> Options Reconfigured:
> 
> cluster.min-free-disk: 20%
> 
> 
> 
> 
> 
> ___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] cluster.min-free-disk working?

2013-06-11 Thread Jon Tegner
Hi,



have a system consisting of four bricks, using 3.3.2qa3. I used the
command




gluster volume set glusterKumiko cluster.min-free-disk 20%



Two of the bricks were empty, and two were full to just under 80% when
building the volume.



Now, when syncing data (from a primary system), and using min-free-disk
20% I thought new data would go to the two empty bricks, but gluster
does not seem to honor the 20% limit.



Have I missed something here?



Thanks!



/jon







***gluster volume info




Volume Name: glusterKumiko

Type: Distribute

Volume ID: 8f639d0f-9099-46b4-b597-244d89def5bd

Status: Started

Number of Bricks: 4

Transport-type: tcp,rdma

Bricks:

Brick1: kumiko01:/mnt/raid6

Brick2: kumiko02:/mnt/raid6

Brick3: kumiko03:/mnt/raid6

Brick4: kumiko04:/mnt/raid6

Options Reconfigured:

cluster.min-free-disk: 20%





___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Extending raid

2013-05-29 Thread Jon Tegner
Hi,

I have a storage system consisting of 4 bricks. All of them run
CentOS-6.3, with
gluster 3.3.1. Two of the servers are equipped with twelve 2-TB disks
and two with twelve 1-TB disks (slightly older). I use this as a
secondary, mirrored storage system, and now I would like to do two
things:

1. Swap the 1-TB disks in those two nodes for 2-TB disks.
2. Put in infiniband on all the nodes.
3. Even though it is possible to resync all the data from the primary
system I would like to save time by keeping the data on the two servers
with 2-TB disks.




My plan is:




1. Delete the volume.

2. Update the OS (to 6.4) on all four servers.

3. Build a new volume (this time with rdma), using two bricks with data
and two without.

4. "Rsync" the missing data from the primary storage system.

5. Rebalance.




Could there be issues with this approach, e.g., when building the new
volume, will the data on the two underlying raids be visible?
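For reference, steps 1, 3 and 5 of the plan map roughly onto the following
commands; the volume and brick names are taken from other posts in this
archive and may not match the real setup, so treat this as a sketch rather
than a recipe:

gluster volume stop glusterKumiko
gluster volume delete glusterKumiko
gluster volume create glusterKumiko transport rdma \
    kumiko01:/mnt/raid6 kumiko02:/mnt/raid6 kumiko03:/mnt/raid6 kumiko04:/mnt/raid6
gluster volume start glusterKumiko
gluster volume rebalance glusterKumiko start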




Thanks,




/jon

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Re-balancing after expansion

2013-04-29 Thread Jon Tegner

On 04/29/2013 01:08 PM, Hans Lambermont wrote:

Jon Tegner wrote on 20130429:


On 04/26/2013 12:03 PM, Hans Lambermont wrote:

Jon Tegner wrote on 20130426:

Hi, I have a system using 3.2.6-1, running ext4. I recently expanded
the system from 4 to 5 servers. I have NOT yet done re-balancing of
the system - so at the moment most of the writing of new files goes to
the new server.
My impression is that it would be possible to rebalance while the
system is in use, is this correct?

Yes.

went ahead and did
gluster volume rebalance glusterNoriko migrate-data start
and it seemed to start OK. However, when running
gluster volume rebalance glusterNoriko status
the system reports
rebalance not started. Should I be worried here?

Yes, search the gluster logs for the abort reason and work from there.
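For reference, searching the logs on the servers typically means something like
the commands below; the log file name is the default one, adjust if glusterd
was started with a different vol file:

grep -i rebalance /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
ls /var/log/glusterfs/      # look for a per-volume rebalance log as well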



gluster volume info gives:

Name: glusterNoriko
Type: Distribute
Status: Started
Number of Bricks: 5
Transport-type: tcp,rdma
Bricks:
Brick1: n01-noriko:/mnt/raid10
Brick2: n02-noriko:/mnt/raid10
Brick3: n03-noriko:/mnt/raid10
Brick4: n04-noriko:/mnt/raid10
Brick5: n05-noriko:/mnt/raid10
Options Reconfigured:
nfs.disable: off

However, when running "gluster volume info"  (or other gluster commands) 
I get stuff like


0-rpc-transport: missing 'option transport-type', defaulting to "socket"
and
geo-replication not installed

Seems to be infiniband related?

Thanks!

/jon





___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Re-balancing after expansion

2013-04-29 Thread Jon Tegner

On 04/26/2013 12:03 PM, Hans Lambermont wrote:

Hi Jon,

Jon Tegner wrote on 20130426:

Hi, I have a system using 3.2.6-1, running ext4. I recently expanded
the system from 4 to 5 servers. I have NOT yet done re-balancing of
the system - so at the moment most of the writing of new files goes to
the new server.

My impression is that it would be possible to rebalance while the
system is in use, is this correct?

Yes.



Hi,

went ahead and did

gluster volume rebalance glusterNoriko migrate-data start

and it seemed to start OK. However, when running

gluster volume rebalance glusterNoriko status

the system reports

rebalance not started. Should I be worried here?

Thanks!

/jon
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Re-balancing after expansion

2013-04-26 Thread Jon Tegner
Hi, I have a system using 3.2.6-1, running ext4. I recently expanded the
system from 4 to 5 servers. I have NOT yet done re-balancing of the
system - so at the moment most of the writing of new files goes to the
new server.



My impression is that it would be possible to rebalance while the
system is in use, is this correct? Or are there any gotchas involved in
this? The servers hold about 11 TB each; approximately how long
would a rebalance take (we are using RDMA here, and assuming four
are full and one is empty)?




Thanks,




/jon

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Interesting post about glusterfs setup

2013-04-13 Thread Jon Tegner

On 04/06/2013 11:50 AM, Daniel Mons wrote:

On 5 April 2013 14:23,   wrote:

Nice read indeed! Question regarding raid. In an HPC environment we use
gluster on raid10 as a primary file system, and on raid6 on the backup one.
Didn't test, but went for raid10 because I thought it would be advantageous
from a performance perspective - can one say something general regarding
this?
Regards,
/jon

I answer the same question from a forum user in the original link:
http://forums.overclockers.com.au/showpost.php?p=15234327&postcount=14

GlusterFS's main bottleneck is rarely local storage.  I'm running 16
spindles per node in a RAID6+1S setup, with intelligent SSD
caching/reordering for small random IOPS (SSDs are bypassed for large
sequential reads and writes).  The performance of the local disk is
not the problem.  Gluster's single biggest bottleneck (and this is
common to many clustered file systems) is file lookup over the network
for uncached content, and especially negative lookup.  These are
several orders of magnitude slower than the storage, and increasing
the storage IOPS won't help things much at all.

I'm currently losing 3 disks per 16 (~19%) to per-node redundancy.
Moving to RAID10 would bump that to 50% space loss for no real world
net gain, meaning I'd get less storage for the same dollar outlay, and
no measurable speed increase.

What I'd be interested in seeing is someone testing Infiniband/RDMA
compared to Fibre 10GbE on the same storage bricks to see if the
latency reduction has a positive effect on the file lookup speed of
GlusterFS.  If anyone has a link to that sort of test, please post it
in-thread.

-Dan


Thanks!

Our primary system is using infiniband, and gluster 3.2.6-1, with ext4. 
Our backup file system uses gluster 3.3.1-1, with gigabit and xfs. I'm 
planning to extend this (the backup one) to use infiniband. Then we 
could test it out (bearing in mind that one system is using ext4 and the 
other is using xfs). However, I have seen some information that rdma is not 
supported on 3.3.x. Would it be a waste of effort to upgrade to infiniband 
in this scenario?


Regards,

/jon
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Expanding with different OS/file system

2012-10-27 Thread Jon Tegner

Hi,

we have a system consisting of 4 nodes, each running CentOS-5.5 with 
ext4 on raid10, and using gluster 3.2.6.


Now we want to expand this system with a 5th node, and my plan was to 
use the same setup (5.5, raid10, ext4, 3.2.6). However, due to hardware 
issues I can't install CentOS-5.5 on the new machine, and I'm wondering 
about my options here.


Should I:

1. Go for the lowest version of CentOS the new hardware can take, and 
hope it isn't affected by the ext4 issue.


or

2. Should I opt for a newer OS, like CentOS-6.3, and use xfs on the new 
machine? That is, would it be an issue to have bricks using different 
underlying file systems?


In both these cases I intend to use 3.2.6.
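For what it's worth, if the xfs route is chosen, the commonly quoted
recommendation for gluster bricks is to create the file system with a larger
inode size so that the extended attributes gluster uses fit in the inode; the
device and mount point below are placeholders:

mkfs.xfs -i size=512 /dev/mdX
mount /dev/mdX /mnt/brick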

Regards,

/jon
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Downgrading OS while keeping files

2012-09-26 Thread Jon Tegner
What if I create a gluster system consisting of a number of bricks (say 
4) like


Brick1: k01:/mnt/raid6
Brick2: k02:/mnt/raid6
Brick3: k03:/mnt/raid6
Brick4: k04:/mnt/raid6

and the mount point on each brick contains files I want to 
"incorporate" into the gluster file system. How can this be achieved?


Thanks,

/jon

On 09/23/2012 12:06 PM, Jon Tegner wrote:

Hi (again),

I have four gluster (3.2.6) servers on which I want to downgrade the OS 
(from CentOS-6 to CentOS-5). I want to keep the file system (with 
raid10/ext4 on the servers), and I contemplate the following method:


Keep all configuration files under /etc/glusterd and /etc/glusterfs 
from the current installation, and after downgrading the OS (where I 
don't touch the raids) and installing gluster, ensure that these 
configuration files are identical to how they were before.


Will this bring up the file system as it was before the downgrade? Or am I 
missing something here?


Thanks,

/jon

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




[Gluster-users] Downgrading OS while keeping files

2012-09-23 Thread Jon Tegner

Hi (again),

I have four gluster (3.2.6) servers on which I want to downgrade the OS 
(from CentOS-6 to CentOS-5). I want to keep the file system (with 
raid10/ext4 on the servers), and I contemplate the following method:


Keep all configuration files under /etc/glusterd and /etc/glusterfs from 
the current installation, and after downgrading the OS (where I don't 
touch the raids) and installing gluster, ensure that these configuration 
files are identical to how they were before.


Will this bring up the file system as it was before the downgrade? Or am I 
missing something here?
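For reference, a simple way to snapshot those directories before the downgrade,
and to put them back afterwards, could look like this (a sketch; paths as named
above, archive location arbitrary):

tar czf glusterd-config-backup.tar.gz /etc/glusterd /etc/glusterfs   # copy this archive off the machine before reinstalling
# ... reinstall OS and gluster, then, with glusterd stopped:
tar xzf glusterd-config-backup.tar.gz -C /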


Thanks,

/jon

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Removing transport type from a volume

2012-09-17 Thread Jon Tegner
After restarting the services, the error messages disappeared, problem
solved ;-)



/jon


On Sep 17, 2012 10:22 "Jon Tegner"  wrote:

> Thanks!
> 
> 
> 
> 
> We see a lot of errors of the type
> 
> "E [rdma.c:4417:tcp_connect_finish] 0-glusterStore2-client-2: tcp
> connect to failed (Connection refused)"
> 
> These errors do appear on the two servers with infiniband, not on the
> ones without.
> 
> Also, despite this flood of error messages in nfs.log, things appear
> to
> work. Is it safe to ignore this?
> 
> 
> 
> 
> Thanks again,
> 
> 
> 
> 
> /jon
> 
> 
> 
> 
> 
> 
> 
> 
> 
> On Sep 17, 2012 08:05 "Vijay Bellur"  wrote:
> 
> > On 09/17/2012 12:39 AM, Jon Tegner wrote:
> > > Have a volume consisting of 4 bricks. It was set up using
> > > infiniband
> > > with
> > >
> > > "Transport-type: tcp,rdma"
> > >
> > > Since then the infiniband adapters have been removed from two of
> > > the
> > > servers, and I would like to remove the "rdma-transport-type" from
> > > the
> > > setup.
> > >
> > > Please excuse me if this is a stupid question, but I haven't been
> > > able
> > > to figure out how to achieve this (is it just by removing "rdma"
> > > from
> > > the "volume.brick.mount.vol-files" on the 4 bricks)?
> > >
> >
> > Right now, there is no way to alter the transport type after a
> > volume
> > has been created.
> >
> > If you are interested in preventing mounts to happen over
> > Infiniband,
> > blocking port 24008 on all servers might be an option.
> >
> > -Vijay
> >
> >___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] page allocation error, CentOS-6.

2012-09-17 Thread Jon Tegner
We have been happily running gluster for a couple of years now; lately,
however, we have encountered issues.



The issues are rather vague, but include a lot of messages about page
allocation failures, and spontaneous reboots of ONE of the servers (we
have four).




We are using 3.2.6, on CentOS-6, and in




https://bugzilla.redhat.com/show_bug.cgi?id=770545



it is recommended to modify "/proc/sys/vm/zone_reclaim_mode"; we did
that, which seemed to stop the "page allocation errors" from appearing.
However, we still get the impression that the file system is "less
responsive" (sorry for not being more specific), and in e.g.



https://bugzilla.redhat.com/show_bug.cgi?id=713546



it seems the solution is to wait for CentOS-6.4 (maybe).
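For reference, the zone_reclaim_mode change referred to above is typically
applied like this (0 disables zone reclaim; the sysctl.conf line makes the
setting survive reboots):

sysctl -w vm.zone_reclaim_mode=0
echo "vm.zone_reclaim_mode = 0" >> /etc/sysctl.conf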



Of course, our problem can be related to hardware as well, but I'm
contemplating going back to CentOS-5 instead, and wonder if there are
others that can shed some light on this issue?



Thanks,



/jon

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Removing transport type from a volume

2012-09-17 Thread Jon Tegner
Thanks!




We see a lot of errors of the type

"E [rdma.c:4417:tcp_connect_finish] 0-glusterStore2-client-2: tcp
connect to failed (Connection refused)"

These errors do appear on the two servers with infiniband, not on the
ones without.

Also, despite this flood of error messages in nfs.log, things appear to
work. Is it safe to ignore this?




Thanks again,




/jon









On Sep 17, 2012 08:05 "Vijay Bellur"  wrote:

> On 09/17/2012 12:39 AM, Jon Tegner wrote:
> > Have a volume consisting of 4 bricks. It was set up using infiniband
> > with
> > 
> > "Transport-type: tcp,rdma"
> > 
> > Since then the infiniband adapters have been removed from two of the
> > servers, and I would like to remove the "rdma-transport-type" from
> > the
> > setup.
> > 
> > Please excuse me if this is a stupid question, but I haven't been
> > able
> > to figure out how to achieve this (is it just by removing "rdma"
> > from
> > the "volume.brick.mount.vol-files" on the 4 bricks)?
> > 
> 
> Right now, there is no way to alter the transport type after a volume
> has been created.
> 
> If you are interested in preventing mounts to happen over Infiniband,
> blocking port 24008 on all servers might be an option.
> 
> -Vijay
> 
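For reference, blocking the port mentioned above could be done with an iptables
rule along these lines; treat it as a sketch, since rule ordering and
persistence depend on the distribution:

iptables -I INPUT -p tcp --dport 24008 -j REJECT
service iptables save     # CentOS-style persistence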
> ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Removing transport type from a volume

2012-09-16 Thread Jon Tegner
Or, if someone could indicate where I could learn about the layout of the 
vol-files. I have managed to find


http://www.gluster.org/community/documentation/index.php/Understanding_Vol_Files

but this seems related to an earlier version of gluster (I'm using 3.2.6).

/jon

On 09/16/2012 09:09 PM, Jon Tegner wrote:

Have a volume consisting of 4 bricks. It was set up using infiniband with

"Transport-type: tcp,rdma"

Since then the infiniband adapters have been removed from two of the 
servers, and I would like to remove the "rdma-transport-type" from the 
setup.


Please excuse me if this is a stupid question, but I haven't been able 
to figure out how to achieve this (is it just by removing "rdma" from 
the "volume.brick.mount.vol-files" on the 4 bricks)?


Thanks,

/jon
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




[Gluster-users] Removing transport type from a volume

2012-09-16 Thread Jon Tegner

Have a volume consisting of 4 bricks. It was set up using infiniband with

"Transport-type: tcp,rdma"

Since then the infiniband adapters have been removed from two of the 
servers, and I would like to remove the "rdma-transport-type" from the 
setup.


Please excuse me if this is a stupid question, but I haven't been able 
to figure out how to achieve this (is it just by removing "rdma" from 
the "volume.brick.mount.vol-files" on the 4 bricks)?


Thanks,

/jon
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Adding bricks without infiniband to an "infiniband" volume

2012-09-06 Thread Jon Tegner
Hi,



have a volume, consisting of two bricks:




Volume Name: glusterStore2

Type: Distribute

Status: Started

Number of Bricks: 2

Transport-type: tcp,rdma

Bricks:

Brick1: toki:/mnt/raid10

Brick2: yoshie:/mnt/raid10




Now, I would like to extend this volume, and I have two suitable
machines, but these are not equipped with infiniband, and I wonder if it
is OK to just add these two to the volume? Using:




"gluster volume add-brick glusterStore2 kumiko01:/mnt/raid6"



and



"gluster volume add-brick glusterStore2 kumiko02:/mnt/raid6"



Regards,



/jon

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] ext4 issues

2012-08-24 Thread Jon Tegner
Hi, I have seen that there are issues with gluster on ext4. Just to be
clear, is this something which is only related to clients using nfs,
i.e., can I happily use gluster (without downgrading kernel) if all
clients are using gluster native client?



Thanks,




/jon

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Stale NFS file handle

2012-08-23 Thread Jon Tegner
Hi, I'm a bit curious about error messages of the type "remote operation
failed: Stale NFS file handle". All clients using the file system use
Gluster Native Client, so why should stale nfs file handle be reported?



Regards,




/jon





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Error messages in etc-glusterfs-glusterd.vol.log

2012-08-21 Thread Jon Tegner
Hi,



have a gluster file system running on four bricks, it seems to be
running OK (can mount it, and files are visible and can be accessed).
However, when starting glusterd on the bricks I get errors of the type:




E [glusterd-store.c:1820:glusterd_store_retrieve_volume] 0-: Unknown
key: brick-0

E [glusterd-store.c:1820:glusterd_store_retrieve_volume] 0-: Unknown
key: brick-1

E [glusterd-store.c:1820:glusterd_store_retrieve_volume] 0-: Unknown
key: brick-2

E [glusterd-store.c:1820:glusterd_store_retrieve_volume] 0-: Unknown
key: brick-3




and




E [socket.c:1685:socket_connect_finish] 0-management: connection to
failed (Connection refused)



and



E [rdma.c:4468:rdma_event_handler] 0-rpc-transport/rdma:
rdma.management: pollin received on tcp socket (peer:
192.168.0.205:1022) after handshake is complete



Also, when mounting the file system on the clients:



W [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
reading from socket failed. Error (Transport endpoint is not connected),
peer (192.168.0.207:1018)



As I said, the file system seems to be working in spite of these
messages, but they do make me worried. Any hints of where to look for
errors in configuration would be appreciated...



Regards,



/jon

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] according to mtab, GlusterFS is already mounted

2012-07-05 Thread Jon Tegner
Yes, sorry, the '__' are supposed to be spaces...




mount shows




glusterfs#server1:/glusterStore1 on /home1 type fuse
(rw,allow_other,default_permissions,max_read=131072)




glusterfs#server2:glusterStore2 on /home2 type fuse
(rw,allow_other,default_permissions,max_read=131072)




And I don't think there are any suspect symlinks.




Thanks!




/jon



On Jul 5, 2012 15:57 "Harry Mangalam"  wrote:

> Do you have some dangling symlinks?
> /home -> /home2 (or vice versa)
> 
> ie
> 
> ls -ld /home*
> 
> what does 'mount' or /etc/mtab say?
> 
> (assuming that the '_' are supposed to be spaces; if not, all bets are
> off)
> 
> hjm
> 
> On Thu, Jul 5, 2012 at 1:41 AM, Jon Tegner  wrote:
> > Hi,
> > 
> > I want to mount from two different "gluster-filesystems", according
> > to the
> > following lines in fstab:
> > 
> > server1:glusterStore1___/home1__glusterfs___defaults,_netdev,transport=rdma___0_0
> > server2:/glusterStore2___/home2__glusterfs___defaults,_netdev,transport=rdma___0_0
> > 
> > However, when the second one is mounted I get the following message:
> > 
> > "/sbin/mount.glusterfs: according to mtab, GlusterFS is already
> > mounted on
> > /home"
> > 
> > However, both systems seem to be mounted OK, am I doing something
> > terribly wrong here, or can I just disregard this message?
> > 
> > On server1 I'm using 3.2.3-1.
> > 
> > On the client and server2 its 3.2.6-1.
> > 
> > Regards,
> > 
> > /jon
> > 
> > ___
> > Gluster-users mailing list
> > 
> > <http://gluster.org/cgi-bin/mailman/listinfo/gluster-users>
> > 
> 
> 
> ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] according to mtab, GlusterFS is already mounted

2012-07-05 Thread Jon Tegner
Hi,



I want to mount from two different "gluster-filesystems", according to
the following lines in fstab:




server1:glusterStore1___/home1__glusterfs___defaults,_netdev,transport=rdma___0_0

server2:/glusterStore2___/home2__glusterfs___defaults,_netdev,transport=rdma___0_0
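Written out with ordinary whitespace (the underscores above stand in for
spaces, as confirmed elsewhere in the thread), the intended entries look
roughly like this; note that the underscore in _netdev is part of the option
name:

server1:glusterStore1  /home1  glusterfs  defaults,_netdev,transport=rdma  0 0
server2:/glusterStore2  /home2  glusterfs  defaults,_netdev,transport=rdma  0 0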




However, when the second one is mounted I get the following message:




"/sbin/mount.glusterfs: according to mtab, GlusterFS is already mounted
on /home"



However, both systems seem to be mounted OK, am I doing something
terribly wrong here, or can I just disregard this message?



On server1 I'm using 3.2.3-1.



On the client and server2 its 3.2.6-1.



Regards,



/jon

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Configuration Advice

2012-06-02 Thread Jon Tegner

On 06/01/2012 09:53 AM, Brian Candler wrote:
* What form of RAID are you planning to use for the data? Hardware 
raid controller or software md raid? RAID10, RAID6, ..? For a mixed 
I/O pattern which contains more than a tiny amount of writes, don't 
even consider RAID5 or RAID6.
* Why three bricks per node? Are you planning to split the 12 x 2 TB 
drives into three 4-drive arrays?
* What sort of network are you linking this to the compute nodes with? 
GigE, 10gigE, Infiniband, something else?
Is it common knowledge that one shouldn't use RAID5/6 under gluster? I was 
actually considering ;-) doing just that (with software raid) on a 
backup file system in an HPC environment.


Regards,

/jon
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] ZFS + Linux + Glusterfs for a production ready 100+ TB NAS on cloud

2011-09-25 Thread Jon Tegner
Sorry for a stupid question, but would there be issues using glusterfs 
based on several 11 TB ext4-bricks?


/jon


On 09/24/2011 09:26 PM, Anand Babu Periasamy wrote:
On Sat, Sep 24, 2011 at 12:14 PM, Liam Slusser wrote:


I have a very large, >500tb, Gluster cluster on Centos Linux but I
use the XFS filesystem in a production role.  Each xfs filesystem
(brick) is around 32tb in size.  No problems all runs very well.

ls

Yes, XFS is the way to go for large partitions > 16TB (or even 12TB). 
XFS has been brought back to life by Red Hat. Most of the XFS developers are 
now RH employees. We can confidently recommend XFS now.


--
Anand Babu Periasamy
Blog [ http://www.unlocksmith.org ]
Twitter [ http://twitter.com/abperiasamy ]

Imagination is more important than knowledge --Albert Einstein


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




Re: [Gluster-users] Substitute for SMP?

2011-05-27 Thread Jon Tegner

Nope, measured nothing! Had hoped someone had already done it!

Plan would be to play around with an infiniband switch and a bunch of 
nodes, testing both hard drives, ssd and ram disks. Whenever I get the 
time...


Regards,

/jon

On 05/27/2011 08:55 PM, Berend de Boer wrote:

"Jon" == Jon Tegner  writes:

 Jon>  A general question, suppose I have a parallel application,
 Jon>  using mpi, where really fast access to the file system is
 Jon>  critical.

 Jon>  Would it be stupid to consider a ram disk based setup?

You really measured that infiniband and 15,000rpm disks, and a lot of
striped gluster nodes, can't give you that, right?



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Substitute for SMP?

2011-05-27 Thread Jon Tegner

On 05/27/2011 04:31 PM, Joe Landman wrote:

On 05/27/2011 07:12 AM, Jon Tegner wrote:

A general question, suppose I have a parallel application, using mpi,
where really fast access to the file system is critical.

Would it be stupid to consider a ram disk based setup? Say a 36 port QDR


Ram disks won't work directly, due to lack of locking in tmpfs.  You 
could create a tmpfs, then create a file that fills this up, then a 
loopback device pointing to that file, then build a file system atop 
that, and mount it.  And then mount gluster atop that.


Needless to say, all these layers significantly decrease performance 
and introduce inefficiencies.
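For reference, the layering Joe describes might be set up roughly as follows;
sizes, device names and paths are placeholders, and this is only a sketch of
the idea, not a tested recipe:

mount -t tmpfs -o size=8g tmpfs /mnt/ramdisk
dd if=/dev/zero of=/mnt/ramdisk/brick.img bs=1M count=7800   # a file that (nearly) fills the tmpfs
losetup /dev/loop0 /mnt/ramdisk/brick.img
mkfs.ext4 /dev/loop0                                         # a real file system on the loop device
mkdir -p /data/rambrick
mount /dev/loop0 /data/rambrick                              # then use /data/rambrick as the gluster brick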



infiniband with half of the ports connected to computational nodes and
the other half to gluster nodes?


There may be other options, but the options are not going to be 
cheap/inexpensive.  How fast, and by fast do you mean bandwidth and/or 
latency (e.g. streaming bandwidth or random IOPs)?  What does your IO 
profile look like?


You can get nodes that stream 4.6+ GB/s read, and 3.6+ GB/s writes for 
single readers/writers to single files.  For MPI jobs with single 
readers/writers, this is good.  For very large IO jobs where you need 
10's of GB/s, you probably need a more specific design to your problem.


Regards,

Joe



Thanks!

Would you say that the inefficiencies related to the ram disk would 
remove all advantages of using ram instead of hard drives (or could it 
still be worth a try)?


As for speed, I would think that latency would be the most critical - 
but I don't really know; it is the code of a colleague of mine (I was 
trying to come up with a replacement for his SMP machine, which is getting 
old).


Regards,

/jon
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Substitute for SMP?

2011-05-27 Thread Jon Tegner
A general question, suppose I have a parallel application, using mpi, 
where really fast access to the file system is critical.


Would it be stupid to consider a ram disk based setup? Say a 36 port QDR 
infiniband with half of the ports connected to computational nodes and 
the other half to gluster nodes?


Thanks,

/jon

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] patched fuse

2010-07-17 Thread Jon Tegner
Hi



Is the patched fuse needed for centos/redhat 5.4 or 5.5?

Thanks,

/jon

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] root squashing

2010-07-14 Thread Jon Tegner
I have a really simple glusterfs setup.



Used




glusterfs-volgen --name glusterStore --transport tcp host1:/mnt/raid10
host2:/mnt/raid10




to create the necessary files. And mounted the system with




glusterfs --volfile=/etc/glusterfs/glusterfs.vol /mnt/glusterfs/




on the clients.




However, with this setup there is no "root squashing", which is
something I need. How can this be achieved?




Regards,




/jon

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] No linear scaling

2010-06-18 Thread Jon Tegner
I made an error in creating the raids. I get the expected behavior now.




/j






On Jun 17, 2010 13:00 "Jon Tegner"  wrote:

> Hi,
> 
> 
> 
> I have only recently started playing with glusterfs. My setup consists
> of two servers (noriko and kumiko), each with twelve 1-TB disks, raided
> together in raid10.
> 
> 
> 
> 
> The systems have CentOS-5.5 installed, and I have
> installed glusterfs-3.0.4-1 (client, common and server).
> 
> 
> 
> 
> I have generated the server and client files with:
> 
> 
> 
> 
> glusterfs-volgen --name store1 --transport tcp noriko:/mnt/raid10/
> kumiko:/mnt/raid10
> 
> 
> 
> 
> I then mounted the filesystem with:
> 
> 
> 
> 
> glusterfs --volfile=/etc/glusterfs/store1-tcp.vol /mnt/glusterfs
> 
> 
> 
> 
> In a naive test of the performance I tried:
> 
> 
> 
> 
> dd if=largeFile of=copyOfLargeFile bs=1024k oflag=sync
> 
> 
> 
> 
> (largeFile is about 450 M)
> 
> 
> 
> 
> I did this first from one client, and then from two clients
> simultaneously. I expected the time to be about the same for the two
> cases, but in contrast it took about twice the time when doing the
> operation simultaneously. I'm probably missing something obvious here,
> and would appreciate any help.
> 
> 
> 
> 
> Regards,
> 
> 
> 
> 
> /jon

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] No linear scaling

2010-06-17 Thread Jon Tegner
Hi,



I have only recently started playing with glusterfs. My setup consists of
two servers (noriko and kumiko), each with twelve 1-TB disks, raided
together in raid10.




The systems have CentOS-5.5 installed, and I have
installed glusterfs-3.0.4-1 (client, common and server).




I have generated the server and client files with:




glusterfs-volgen --name store1 --transport tcp noriko:/mnt/raid10/
kumiko:/mnt/raid10




I then mounted the filesystem with:




glusterfs --volfile=/etc/glusterfs/store1-tcp.vol /mnt/glusterfs




In a naive test of the performance I tried:




dd if=largeFile of=copyOfLargeFile bs=1024k oflag=sync




(largeFile is about 450 M)




I did this first from one client, and then from two clients
simultaneously. I expected the time to be about the same for the two
cases, but in contrast it took about twice the time when doing the
operation simultaneously. I'm probably missing something obvious here,
and would appreciate any help.




Regards,




/jon

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Newbie questions

2010-05-03 Thread Jon Tegner

Hi, I'm also a newbie, and I'm looking forward to answers to your questions.

Just one question, why would distributed be preferable over striped (I'm 
probably the bigger newbie here)?


For purpose 1, clearly I'm looking at a replicated volume.  For 
purpose 2, I'm assuming that distributed is the way to go (rather than 
striped), although for 


Regards,

/jon
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users