Re: [Gluster-users] Problems to work with mounted directory in Gluster 3.2.7 -> switch to 3.4.2 ;-)

2014-02-19 Thread BGM
... keep it simple, make it robust ...
use raid1 (or raidz if you can) for the bricks
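for instance, with ZoL the brick could sit on a mirrored (or raidz) pool;
pool name and device paths here are only placeholders:

zpool create brickpool mirror /dev/sdb /dev/sdc
# or, with three or more disks:
# zpool create brickpool raidz /dev/sdb /dev/sdc /dev/sdd
zfs create brickpool/gf_brick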
hth

On 19.02.2014, at 20:32, Targino Silveira wrote:

> Sure,
> 
> I will use XFS, as I said before it's for old data, so we don't need
> great performance, we only need to store data.
> 
> [rest of the quoted thread snipped; the full messages follow below]

Re: [Gluster-users] Problems to work with mounted directory in Gluster 3.2.7 -> switch to 3.4.2 ;-)

2014-02-19 Thread Targino Silveira
Sure,

I will use XFS, as I said before it's for old data, so we don't need
great performance, we only need to store data.
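The bricks could then be formatted with something like the following (the
-i size=512 inode size is the value usually recommended for gluster bricks;
the device path is just a placeholder, the mount point matches the brick
path used later in this thread):

mkfs.xfs -i size=512 /dev/vdb
mkdir -p /mnt/data1
mount /dev/vdb /mnt/data1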

regards,

Targino Silveira
+55-85-8626-7297
www.twitter.com/targinosilveira


2014-02-19 16:11 GMT-03:00 BGM:

> well, note:
> - you don't need zfs on the hardware machines, xfs or ext3 or ext4 would
> do it too
> [rest of the quoted thread snipped; the full message appears below]
Re: [Gluster-users] Problems to work with mounted directory in Gluster 3.2.7 -> switch to 3.4.2 ;-)

2014-02-19 Thread BGM
well, note:
- you don't need zfs on the hardware machines, xfs or ext3 or ext4 would do
it too
- for production you wouldn't use a glusterfs on top of a glusterfs, but
rather give the VM access to a real block device, like a whole hard disk or
at least a partition of it, although migration of the VM wouldn't be
possible then... (see the sketch after this list)
therefore: a VM as a gluster server might not be the best idea.
- remember to peer probe the glusterserver partner from both sides, as
mentioned below
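Passing a whole disk through to a VM could look like this with libvirt
(the domain name and device path here are just placeholders):

virsh attach-disk vmachine04 /dev/sdb vdb --persistent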

for a first setup you should be fine with that.

regards

On 19.02.2014, at 19:32, Targino Silveira wrote:

> Thanks Bernhard, I will do this.
> 
> [rest of the quoted thread snipped; the full messages follow below]

Re: [Gluster-users] Problems to work with mounted directory in Gluster 3.2.7 -> switch to 3.4.2 ;-)

2014-02-19 Thread Targino Silveira
Thanks Bernhard, I will do this.

Regards,


Targino Silveira
+55-85-8626-7297
www.twitter.com/targinosilveira


2014-02-19 14:43 GMT-03:00 Bernhard Glomm:

> [quoted howto snipped; the full original message follows below]

Re: [Gluster-users] Problems to work with mounted directory in Gluster 3.2.7 -> switch to 3.4.2 ;-)

2014-02-19 Thread Bernhard Glomm
I would strongly recommend restarting fresh with gluster 3.4.2 from
http://download.gluster.org/pub/gluster/glusterfs/3.4/
It works totally fine for me.
(Reinstall the VMs as slim as possible if you can.)

As a quick howto consider this:



- We have 2 hardware machines (just desktop machines for a dev env)
- both running ZoL (ZFS on Linux)
- create a zpool and a zfs filesystem
- create a gluster replica 2 volume between hostA and hostB (see the
sketch after this list)
- install 3 VMs: vmachine0{4,5,6}
- vmachine0{4,5} each have a 100GB disk image file as /dev/vdb, which also
resides on the gluster volume
- create an ext3 filesystem on vmachine0{4,5}:/dev/vdb1
- create a gluster replica 2 volume between vmachine04 and vmachine05 as
shown below

(!!!obviously nobody would do that in any serious environment,
just to show that even a setup like that _would_ be possible!!!)


- run some benchmarks on that volume and compare the results to other setups
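The host-side part (the ZoL pool, the dataset, and the replica 2 volume
between hostA and hostB) isn't shown in the transcript below; roughly, and
with made-up pool/volume names and device paths, it would be something like:

hostA:~ # zpool create tank mirror /dev/sda /dev/sdb
hostA:~ # zfs create -o mountpoint=/srv/gf_brick tank/gf_brick
# (repeat both steps on hostB, then peer probe from both sides)
hostA:~ # gluster peer probe hostB
hostB:~ # gluster peer probe hostA
hostA:~ # gluster volume create host_volume replica 2 \
              hostA:/srv/gf_brick hostB:/srv/gf_brick
hostA:~ # gluster volume start host_volume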

So:

root@vmachine04[/0]:~ # mkdir -p /srv/vdb1/gf_brick
root@vmachine04[/0]:~ # mount /dev/vdb1 /srv/vdb1/
root@vmachine04[/0]:~ # gluster peer probe vmachine05
peer probe: success

# now switch over to vmachine05 and do

root@vmachine05[/1]:~ # mkdir -p /srv/vdb1/gf_brick
root@vmachine05[/1]:~ # mount /dev/vdb1 /srv/vdb1/
root@vmachine05[/1]:~ # gluster peer probe vmachine04
peer probe: success
root@vmachine05[/1]:~ # gluster peer probe vmachine04
peer probe: success: host vmachine04 port 24007 already in peer list

# the peer probe from BOTH sides is often forgotten 
# switch back to vmachine04 and continue with

root@vmachine04[/0]:~ # gluster peer status
Number of Peers: 1

Hostname: vmachine05
Port: 24007
Uuid: 085a1489-dabf-40bb-90c1-fbfe66539953
State: Peer in Cluster (Connected)
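# (not shown in this transcript: the volume was created and started at this
# point; with the brick paths from the info below that would have looked
# roughly like)
root@vmachine04[/0]:~ # gluster volume create layer_cake_volume replica 2 \
    vmachine04:/srv/vdb1/gf_brick vmachine05:/srv/vdb1/gf_brick
root@vmachine04[/0]:~ # gluster volume start layer_cake_volume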
root@vmachine04[/0]:~ # gluster volume info layer_cake_volume

Volume Name: layer_cake_volume
Type: Replicate
Volume ID: ef5299db-2896-4631-a2a8-d0082c1b25be
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: vmachine04:/srv/vdb1/gf_brick
Brick2: vmachine05:/srv/vdb1/gf_brick
root@vmachine04[/0]:~ # gluster volume status layer_cake_volume
Status of volume: layer_cake_volume
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick vmachine04:/srv/vdb1/gf_brick             49152   Y       12778
Brick vmachine05:/srv/vdb1/gf_brick             49152   Y       16307
NFS Server on localhost                         2049    Y       12790
Self-heal Daemon on localhost                   N/A     Y       12791
NFS Server on vmachine05                        2049    Y       16320
Self-heal Daemon on vmachine05                  N/A     Y       16319

There are no active volume tasks

# set any option you might like

root@vmachine04[/1]:~ # gluster volume set layer_cake_volume network.remote-dio enable
volume set: success

# go to vmachine06 and mount the volume
root@vmachine06[/1]:~ # mkdir /srv/layer_cake
root@vmachine06[/1]:~ # mount -t glusterfs -o backupvolfile-server=vmachine05 vmachine04:/layer_cake_volume /srv/layer_cake
root@vmachine06[/1]:~ # mount
vmachine04:/layer_cake_volume on /srv/layer_cake type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
root@vmachine06[/1]:~ # df -h
Filesystem                 Size  Used Avail Use% Mounted on
...
vmachine04:/layer_cake_volume   97G  188M   92G   1% /srv/layer_cake
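
# for a persistent mount, an /etc/fstab entry along these lines should work
# (same server, volume and mount point as above):
#   vmachine04:/layer_cake_volume  /srv/layer_cake  glusterfs  defaults,_netdev,backupvolfile-server=vmachine05  0  0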

All fine and stable



# now let's see how it tastes
# note: this is postmark on / (the root filesystem), NOT on the gluster-mounted layer_cake_volume!
# the postmark results for the gluster mount might be available tomorrow ;-)))
root@vmachine06[/1]:~ # postmark
PostMark v1.51 : 8/14/01
pm>set transactions 500000
pm>set number 200000
pm>set subdirectories 1
pm>run
Creating subdirectories...Done
Creating files...Done
Performing transactions..Done
Deleting files...Done
Deleting subdirectories...Done
Time:
        2314 seconds total
        2214 seconds of transactions (225 per second)

Files:
        450096 created (194 per second)
                Creation alone: 200000 files (4166 per second)
                Mixed with transactions: 250096 files (112 per second)
        249584 read (112 per second)
        250081 appended (112 per second)
        450096 deleted (194 per second)
                Deletion alone: 200192 files (3849 per second)
                Mixed with transactions: 249904 files (112 per second)

Data:
        1456.29 megabytes read (644.44 kilobytes per second)
        2715.89 megabytes written (1.17 megabytes per second)
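
# to point the same run at the gluster mount later, the settings could go
# into a config file (the file name here is arbitrary) that postmark reads
# as its argument:
root@vmachine06[/1]:~ # cat > pm_layer_cake.cfg <<'EOF'
set location /srv/layer_cake
set number 200000
set transactions 500000
set subdirectories 1
run
quit
EOF
root@vmachine06[/1]:~ # postmark pm_layer_cake.cfg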

# reference
# running postmark on the hardware machine directly on zfs
#
#           /test # postmark
#           PostMark v1.51 : 8/14/01
#           pm>set transactions 500000
#           pm>set number 200000
#           pm>set subdirectories 1
#           pm>run
#           Creating subdirectories...Done
#           Creating files...Done
#           Performing transactions..Done
#           Deleting files...Done

Re: [Gluster-users] Problems to work with mounted directory in Gluster 3.2.7

2014-02-18 Thread Targino Silveira
Hi Bernhard,

2014-02-18 19:45 GMT-03:00 bernhard glomm :

> hi,
> not very clear to me,
> you run VMs as gluster servers?
>

Right, I am using VMs to run the Gluster servers.



> so 2 of your VMs using each a 1tb brick for what a ...
>

I am building a server to store some old data via FTP, so I have two
bricks with 1 TB each, in a replica 2 volume.



> could you give a
> node1:~ # gluster volume info 
> ...
> node1:~ # gluster volume status 
> ...
>

#First Node
root@vm-gluster-cloudbackup1:~# gluster volume info

Volume Name: vol1
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: vm-gluster-cloudbackup1:/mnt/data1/vol1
Brick2: vm-gluster-cloudbackup2:/mnt/data1/vol1

#Second node
root@vm-gluster-cloudbackup2:~# gluster volume info

Volume Name: vol1
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: vm-gluster-cloudbackup1:/mnt/data1/vol1
Brick2: vm-gluster-cloudbackup2:/mnt/data1/vol1

The volumes are started normally with no problem. I am trying to find
something in the log files. I know that speed may be slow, but it's not a
problem for me at this moment.

Regards,

Re: [Gluster-users] Problems to work with mounted directory in Gluster 3.2.7

2014-02-18 Thread Franco Broi

You need to check the log files for obvious problems. On the servers
these should be in /var/log/glusterfs/ and you can turn on logging for
the fuse client like this:

# mount -olog-level=debug,log-file=/var/log/glusterfs/glusterfs.log  -t
glusterfs 
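
For example, with the server, volume and mount point from this thread that
would be something like:

# mount -o log-level=debug,log-file=/var/log/glusterfs/glusterfs.log -t glusterfs vm-gluster-cloudbackup1:/vol1 /mnt/cloudbackup_data/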

On Tue, 2014-02-18 at 17:30 -0300, Targino Silveira wrote: 
> Hi all, 
> 
> 
> I am getting a problem and can't understand why. I have created a
> cluster with gluster following the simplest way, as explained in the
> QuickStart on gluster.org.
> 
> 
> I have created 3 VMs in KVM.
> 
> 
> 2 to host the Gluster servers, each with one 1 TB disk image.
> 
> 
> 1 to host the Gluster client, for mounting my volume.
> 
> 
> I'm using Debian 7 and used apt-get to install Gluster 3.2.7, Server
> and Client. 
> 
> 
> After it all finished I could mount it as glusterfs with no problem
> ("mount -t glusterfs /mnt/cloudbackup_data/
> vm-gluster-cloudbackup1:/vol1"), but I can't work in the mounted
> directory: if I perform an "ls -lh" it runs forever, I can't do any
> other operation, and the VM is blocked.
> 
> 
> If I try to mount it as NFS ("mount -t nfs
> vm-gluster-cloudbackup2:/vol1 /mnt/cloudbackup_data/") I get a "Time
> out". I'm not much of an expert in gluster, but I can't see any reason
> for this problem. Does someone know about an issue like that?
> 
> 
> Regards, 
> 
> 
> 
> Targino Silveira
> +55-85-8626-7297
> www.twitter.com/targinosilveira
> 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users