[Gluster-users] Gluster 3.3: Unable to delete xattrs

2012-01-31 Thread Joey McDonald
Hi,

I'm running the latest qa build of 3.3 and having a bit of trouble with
extended attrs.

[root@compute-0-0 ~]# rpm -qa | grep gluster
glusterfs-geo-replication-3.3.0qa20-1
glusterfs-core-3.3.0qa20-1
glusterfs-rdma-3.3.0qa20-1
glusterfs-fuse-3.3.0qa20-1

Firstly, is it mandatory to mount ext3 file systems with 'user_xattr'? The
issue I'm having is that I would like to delete a gluster volume and create
a new one. Gluster isn't allowing me to do so:

[root@compute-0-0 ~]# gluster volume create cloudfs replica 2
compute-0-0:/gluster1 compute-0-1:/gluster1 compute-0-2:/gluster1
compute-0-3:/gluster1
'compute-0-0:/gluster1' has been part of a deleted volume with id
d83317f2-0c13-4481-bd0a-838fc013ceff. Please re-create the brick directory.

So, I'm trying to delete the brick directory but the xattrs remain:

[root@compute-0-0 ~]# attr -l /gluster1
Attribute "gfid" has a 16 byte value for /gluster1
Attribute "glusterfs.dht" has a 16 byte value for /gluster1
Attribute "glusterfs.volume-id" has a 16 byte value for /gluster1
Attribute "afr.cloudfs-client-0" has a 12 byte value for /gluster1
Attribute "afr.cloudfs-client-1" has a 12 byte value for /gluster1

[root@compute-0-0 ~]# attr -r afr.cloudfs-client-1 /gluster1
attr_remove: No data available
Could not remove "afr.cloudfs-client-1" for /gluster1

[root@compute-0-0 ~]# mount | grep gluster1
/dev/sdb1 on /gluster1 type ext3 (rw,user_xattr)

[root@compute-0-0 ~]# umount /gluster1

[root@compute-0-0 ~]# rm -rf /gluster1

[root@compute-0-0 ~]# mkdir /gluster1

[root@compute-0-0 ~]# mount /gluster1

[root@compute-0-0 ~]# mount | grep gluster1
/dev/sdb1 on /gluster1 type ext3 (rw,user_xattr)

[root@compute-0-0 ~]# attr -l /gluster1
Attribute "gfid" has a 16 byte value for /gluster1
Attribute "glusterfs.dht" has a 16 byte value for /gluster1
Attribute "glusterfs.volume-id" has a 16 byte value for /gluster1
Attribute "afr.cloudfs-client-0" has a 12 byte value for /gluster1
Attribute "afr.cloudfs-client-1" has a 12 byte value for /gluster1

Thanks for any help.


   --joey


Re: [Gluster-users] Can't Get 3.3beta2-1 Going :(

2012-01-27 Thread Joey McDonald
Hi Venky,

I installed the geo-replication RPM along with its dependencies, but
gluster was still unhappy:

[root@compute-0-0 ~]# rpm -qa | grep -i gluster
glusterfs-rdma-3.3beta2-1
glusterfs-fuse-3.3beta2-1
glusterfs-geo-replication-3.3beta2-1
glusterfs-core-3.3beta2-1

However, when I added this line to /etc/glusterfs/glusterd.vol, everything
started working happily:

option transport.address-family inet
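
For reference, the management block of /etc/glusterfs/glusterd.vol now looks
roughly like this (trimmed; the address-family line is the addition):

volume management
  type mgmt/glusterd
  option working-directory /etc/glusterd
  option transport-type socket,rdma
  option transport.address-family inet
end-volume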

I should mention that the RPMs are installing everything under:

   /opt/glusterfs/3.3beta2/

which isn't ideal; hopefully that's just for the beta RPMs. :) Thanks for
your help!


   --joey






On Thu, Jan 26, 2012 at 11:29 PM, Venky Shankar  wrote:

> Hey Joey,
>
> Can you try installing the glusterfs-geo-replication rpm ? You are using
> 3.3-beta2; so the rpm would be here:
> http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.3-beta-2/RHEL/
>
> Let me know if installing this works for you. This should have been in the
> dependency list of the RPMs. I'll see if there is some issue with that.
>
> Thanks,
> -Venky
>
> On 01/27/2012 11:25 AM, Joey McDonald wrote:
>
> Hi all,
>
>  I have downloaded and installed the following:
>
>  [root@compute-0-0 ~]# rpm -qa | grep gluster
> glusterfs-rdma-3.3beta2-1
> glusterfs-fuse-3.3beta2-1
> glusterfs-core-3.3beta2-1
>
>  I have also compiled all those from source but they didn't work either.
> The problem I'm having is that when I start glusterd I get the following
> and I'm not sure how to fix it:
>
>  [2012-01-26 23:51:01.800533] I [glusterd.c:574:init] 0-management: Using
> /etc/glusterd as working directory
> [2012-01-26 23:51:01.817010] E [socket.c:2190:socket_listen]
> 0-socket.management: socket creation failed (Address family not supported
> by protocol)
> [2012-01-26 23:51:01.817427] C [rdma.c:3935:rdma_init]
> 0-rpc-transport/rdma: Failed to get IB devices
> [2012-01-26 23:51:01.817484] E [rdma.c:4806:init] 0-rdma.management:
> Failed to initialize IB Device
> [2012-01-26 23:51:01.817499] E [rpc-transport.c:325:rpc_transport_load]
> 0-rpc-transport: 'rdma' initialization failed
> [2012-01-26 23:51:01.817513] W [rpcsvc.c:1320:rpcsvc_transport_create]
> 0-rpc-service: cannot create listener, initing the transport failed
> [2012-01-26 23:51:01.817608] I [glusterd.c:89:glusterd_uuid_init]
> 0-glusterd: retrieved UUID: 89b15475-cb6f-4d3c-8d7c-e18eb8358f4f
> [2012-01-26 23:51:01.818323] I
> [glusterd.c:294:glusterd_check_gsync_present] 0-: geo-replication module
> not installed in the system
> Given volfile:
>
> +--+
>   1: volume management
>   2: type mgmt/glusterd
>   3: option working-directory /etc/glusterd
>   4: option transport-type socket,rdma
>   5: option transport.socket.keepalive-time 10
>   6: option transport.socket.keepalive-interval 2
>   7: option transport.socket.read-fail-log off
>   8: end-volume
>
>
> +--+
>
>  Anyone know what I'm doing wrong? Thanks!
>
> --joey
>
>
>
>
>


[Gluster-users] Can't Get 3.3beta2-1 Going :(

2012-01-26 Thread Joey McDonald
Hi all,

I have downloaded and installed the following:

[root@compute-0-0 ~]# rpm -qa | grep gluster
glusterfs-rdma-3.3beta2-1
glusterfs-fuse-3.3beta2-1
glusterfs-core-3.3beta2-1

I have also compiled all those from source but they didn't work either. The
problem I'm having is that when I start glusterd I get the following and
I'm not sure how to fix it:

[2012-01-26 23:51:01.800533] I [glusterd.c:574:init] 0-management: Using
/etc/glusterd as working directory
[2012-01-26 23:51:01.817010] E [socket.c:2190:socket_listen]
0-socket.management: socket creation failed (Address family not supported
by protocol)
[2012-01-26 23:51:01.817427] C [rdma.c:3935:rdma_init]
0-rpc-transport/rdma: Failed to get IB devices
[2012-01-26 23:51:01.817484] E [rdma.c:4806:init] 0-rdma.management: Failed
to initialize IB Device
[2012-01-26 23:51:01.817499] E [rpc-transport.c:325:rpc_transport_load]
0-rpc-transport: 'rdma' initialization failed
[2012-01-26 23:51:01.817513] W [rpcsvc.c:1320:rpcsvc_transport_create]
0-rpc-service: cannot create listener, initing the transport failed
[2012-01-26 23:51:01.817608] I [glusterd.c:89:glusterd_uuid_init]
0-glusterd: retrieved UUID: 89b15475-cb6f-4d3c-8d7c-e18eb8358f4f
[2012-01-26 23:51:01.818323] I
[glusterd.c:294:glusterd_check_gsync_present] 0-: geo-replication module
not installed in the system
Given volfile:
+--+
  1: volume management
  2: type mgmt/glusterd
  3: option working-directory /etc/glusterd
  4: option transport-type socket,rdma
  5: option transport.socket.keepalive-time 10
  6: option transport.socket.keepalive-interval 2
  7: option transport.socket.read-fail-log off
  8: end-volume

+--+

Anyone know what I'm doing wrong? Thanks!

   --joey


Re: [Gluster-users] Problem with Gluster's LATEST RPM Builds

2011-10-04 Thread Joey McDonald
Ok, thank you!


   --joey




On Mon, Oct 3, 2011 at 1:11 AM, Lakshmipathi Ganapathi <
lakshmipa...@gluster.com> wrote:

>  Hi -
> Interesting, I have downloaded python 64-bit RPMs from other sources (like
> CentOS, rpmfind) and they show the arch as 32-bit!
>
> $ file python-2.6.5-3.el6.x86_64.rpm
> python-2.6.5-3.el6.x86_64.rpm: RPM v3 bin i386 python-2.6.5-3.el6
>
> $ file python-2.7.1-7.fc15.x86_64.rpm
> python-2.7.1-7.fc15.x86_64.rpm: RPM v3 bin i386 python-2.7.1-7.fc15
>
> Reading more about the file format here:
> http://www.rpm.org/max-rpm/s1-rpm-file-format-rpm-file-format.html
>
> My guess is that the 'file' command still reads the old magic signature (the
> lead, where the 'arch' value can be found within the first few bytes), while
> the newer signature (the header) stores the details somewhere else. Maybe
> recent rpmbuild updates the arch value in the 'header' but not in the old
> 'lead' structure.
>
> $ file --version
> file-5.03
>
> Cheers,
> Lakshmipathi.G
> FOSS Programmer.
>
>  --
> From: gluster-users-boun...@gluster.org on behalf of Joey McDonald [j...@scare.org]
> Sent: Sunday, October 02, 2011 5:26 AM
> To: gluster-users@gluster.org
> Subject: [Gluster-users] Problem with Gluster's LATEST RPM Builds
>
>  Hi,
>
>  You have some Gluster packages available for download here:
>
>  http://download.gluster.com/pub/gluster/glusterfs/LATEST/CentOS/
>
>  They are labeled as x86_64; however, they're actually i386, as you can see
> here:
>
>  [joey@nahalam-0-0 ~]$ wget http://download.gluster.com/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-core-3.2.4-1.x86_64.rpm
> --2011-10-01 23:55:48--  http://download.gluster.com/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-core-3.2.4-1.x86_64.rpm
> Resolving download.gluster.com... 70.38.57.57
> Connecting to download.gluster.com|70.38.57.57|:80... connected.
> HTTP request sent, awaiting response... 200 OK
> Length: 2317773 (2.2M) [application/x-redhat-package-manager]
> Saving to: `glusterfs-core-3.2.4-1.x86_64.rpm'
>
> 100%[===>]  2,317,773   1.98M/s   in 1.1s
>
>  2011-10-01 23:55:49 (1.98 MB/s) - `glusterfs-core-3.2.4-1.x86_64.rpm'
> saved [2317773/2317773]
>
>  [joey@nahalam-0-0 ~]$ file glusterfs-core-3.2.4-1.x86_64.rpm
> glusterfs-core-3.2.4-1.x86_64.rpm: RPM v3 bin i386 glusterfs-core-3.2.4-1
>
>
> --joey
>
>
>
>
>
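(A way to cross-check the lead-vs-header theory: rpm itself reads the arch
from the package header, which is what actually matters at install time.
Something like this should report x86_64 if the header is right -- a
suggestion, not verified against these particular packages:)

$ rpm -qp --queryformat '%{ARCH}\n' glusterfs-core-3.2.4-1.x86_64.rpm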


[Gluster-users] Problem with Gluster's LATEST RPM Builds

2011-10-01 Thread Joey McDonald
Hi,

You have some Gluster packages available for download here:

http://download.gluster.com/pub/gluster/glusterfs/LATEST/CentOS/

They are labeled as x86_64; however, they're actually i386, as you can see
here:

[joey@nahalam-0-0 ~]$ wget http://download.gluster.com/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-core-3.2.4-1.x86_64.rpm
--2011-10-01 23:55:48--  http://download.gluster.com/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-core-3.2.4-1.x86_64.rpm
Resolving download.gluster.com... 70.38.57.57
Connecting to download.gluster.com|70.38.57.57|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2317773 (2.2M) [application/x-redhat-package-manager]
Saving to: `glusterfs-core-3.2.4-1.x86_64.rpm'

100%[===>]  2,317,773   1.98M/s   in 1.1s

2011-10-01 23:55:49 (1.98 MB/s) - `glusterfs-core-3.2.4-1.x86_64.rpm' saved
[2317773/2317773]

[joey@nahalam-0-0 ~]$ file glusterfs-core-3.2.4-1.x86_64.rpm
glusterfs-core-3.2.4-1.x86_64.rpm: RPM v3 bin i386 glusterfs-core-3.2.4-1


   --joey


Re: [Gluster-users] 3.2.2 Performance Issue

2011-08-16 Thread Joey McDonald
Good news!

That seems to have improved performance quite a bit, so I'd like to share
what I've done. Originally, with only distribute configured on a volume, I
was seeing 100 MB/s writes. When moving to distribute/replicate, I was
getting 10 MB/s or less. Avati suggested that I was running out of in-inode
extended attribute space.

I have reformatted /dev/sdb, which I'm currently using as my gluster
export, and created a single primary partition (/dev/sdb1). My version
(CentOS 5) of mke2fs (mkfs.ext3) has an undocumented option for increasing
the inode size:

/sbin/mkfs.ext3 -I 512 /dev/sdb1
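
Assuming the reformat went cleanly, the tune2fs check from earlier in this
thread should now confirm the larger inodes (a sketch; output trimmed):

[root@vm-container-0-0 ~]# tune2fs -l /dev/sdb1 | grep 'Inode size'
Inode size:               512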

Recreating my volume with dist/replicate:

[root@vm-container-0-3 ~]# gluster volume info pifs

Volume Name: pifs
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: vm-container-0-0:/gluster
Brick2: vm-container-0-1:/gluster
Brick3: vm-container-0-2:/gluster
Brick4: vm-container-0-3:/gluster

and I'm consistently seeing 30+ MB/s writes with no changes to the network
setup.

Thanks Avati!!

   --joey



On Tue, Aug 16, 2011 at 9:31 AM, Joey McDonald  wrote:

> Hi Avati,
>
>
>> Write performance in replicate is not only a throughput factor of disk and
>> network, but also involves xattr performance. xattr performance is a
>> function of the inode size in most of the disk filesystems. Can you give
>> some more details about the backend filesystem, specifically the inode size
>> with which it was formatted? If it was ext3 with the default 128byte inode,
>> it is very likely you might be running out of in-inode xattr space (due to
>> enabling marker-related features like geo-sync or quota?) and hitting data
>> blocks. If so, please reformat with 512byte or 1KB inode size.
>>
>> Also, what about read performance in replicate?
>>
>
> Thanks for your insight on this issue, we are using ext3 for the gluster
> partition with CentOS 5 default inode size:
>
> [root@vm-container-0-0 ~]# tune2fs -l /dev/sdb1 | grep Inode
> Inode count:  244219904
> Inodes per group: 32768
> Inode blocks per group:   1024
> Inode size:   128
>
> I'll reformat sdb1 with 512 bytes and recreate my gluster volumes with
> distribute/replicate and run my benchmark tests again.
>
>
>--joey
>
>
>
>


Re: [Gluster-users] 3.2.2 Performance Issue

2011-08-16 Thread Joey McDonald
Hi Avati,


> Write performance in replicate is not only a throughput factor of disk and
> network, but also involves xattr performance. xattr performance is a
> function of the inode size in most of the disk filesystems. Can you give
> some more details about the backend filesystem, specifically the inode size
> with which it was formatted? If it was ext3 with the default 128byte inode,
> it is very likely you might be running out of in-inode xattr space (due to
> enabling marker-related features like geo-sync or quota?) and hitting data
> blocks. If so, please reformat with 512byte or 1KB inode size.
>
> Also, what about read performance in replicate?
>

Thanks for your insight on this issue, we are using ext3 for the gluster
partition with CentOS 5 default inode size:

[root@vm-container-0-0 ~]# tune2fs -l /dev/sdb1 | grep Inode
Inode count:  244219904
Inodes per group: 32768
Inode blocks per group:   1024
Inode size:   128

I'll reformat sdb1 with 512 bytes and recreate my gluster volumes with
distribute/replicate and run my benchmark tests again.


   --joey


Re: [Gluster-users] 3.2.2 Performance Issue

2011-08-10 Thread Joey McDonald
Hi Joe, thanks for your response!


>  An order of magnitude slower with replication. What's going on I wonder?
>> Thanks for any suggestions.
>>
>
> You are dealing with contention for Gigabit bandwidth.  Replication will do
> that, and will be pronounced over 1GbE.  Much less of an issue over 10GbE or
> Infiniband.
>

I do understand that replication will increase network usage, but a 10x
slowdown doesn't seem to add up, or does it? A 2x hit would make more sense
to me, since I'm only doing a 2x replicated volume.
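
Back of the envelope, using the iperf number from my original post
(941 Mbit/s, roughly 117 MB/s):

  replica 2 => the client writes each byte to 2 bricks => ~117/2 = ~58 MB/s ceiling
  observed  => ~10 MB/s, about 6x below even that ceiling

so pure network contention doesn't seem to explain the gap.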


Re: [Gluster-users] 3.2.2 Performance Issue

2011-08-10 Thread Joey McDonald
Hello again,

For testing purposes, I have deleted my original distributed/replicated
volume, which was seeing writes at around 10 MB/s, and created one on the
same cluster that is only distributed.

Now I'm seeing the kind of performance I was hoping to get from
distributed/replicated:

[root@vm-container-0-0 ~]# gluster volume info pifs

Volume Name: pifs
Type: Distribute
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: vm-container-0-0:/gluster
Brick2: vm-container-0-1:/gluster
Brick3: vm-container-0-2:/gluster
Brick4: vm-container-0-3:/gluster

[root@vm-container-0-0 ~]# mount | grep pifs
glusterfs#127.0.0.1:pifs on /pifs type fuse
(rw,allow_other,default_permissions,max_read=131072)

[root@vm-container-0-0 ~]# dd if=/dev/zero of=/pifs/dd_test.img bs=1M
count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 22.2793 seconds, 94.1 MB/s


So the volume is an order of magnitude slower with replication. What's going
on, I wonder? Thanks for any suggestions.

  --joey



On Wed, Aug 10, 2011 at 10:05 AM, Joey McDonald  wrote:

>
>
> On Tue, Aug 9, 2011 at 11:19 PM, Mohit Anchlia wrote:
>
>> And can you also give the mount options of gluster fs?
>>
>>
> Sure:
>
> [root@vm-container-0-0 ~]# tail -2 /etc/fstab
> /dev/sdb1        /gluster   ext3        defaults           0 1
> 127.0.0.1:pifs   /pifs      glusterfs   defaults,_netdev   0 0
>
> I'm not running VM's off the gluster share but that's the general idea of
> where I'd like to get with this platform. That's obviously going to require
> some different mount options but I'd like to get this performance issue
> resolved before I start in on that.
>
>
>--joey
>
>
>
>
>> On Tue, Aug 9, 2011 at 4:41 PM, Joey McDonald  wrote:
>> >
>> >
>> > On Tue, Aug 9, 2011 at 5:40 PM, Joey McDonald  wrote:
>> >>
>> >> Hi Pavan,
>> >> Thanks for your quick reply, comments inline:
>> >>
>> >>>
>> >>> 1. Are these baremetal systems or are they Virtual machines ?
>> >>
>> >> Bare metal systems.
>> >>
>> >>>
>> >>> 2. What is the amount of RAM of each of these systems ?
>> >>
>> >> They all have 4194304 kB of memory.
>> >>
>> >>>
>> >>> 3. How many CPUs do they have ?
>> >>
>> >> They each have 8 procs.
>> >>
>> >>>
>> >>> 4. Can you also perform the dd on /gluster as opposed to /root to
>> check
>> >>> the backend performance ?
>> >>
>> >> Sure, here is that output:
>> >>
>> >> [root@vm-container-0-0 ~]# dd if=/dev/zero of=/gluster/dd_test.img
>> bs=1M
>> >> count=2000
>> >> 2000+0 records in
>> >> 2000+0 records out
>> >> 2097152000 bytes (2.1 GB) copied, 6.65193 seconds, 315 MB/s
>> >>
>> >>>
>> >>> 5. What is your disk backend ? Is it direct attached or is it an array
>> ?
>> >>
>> >> Direct attached, /gluster is /dev/sdb1, 1TB SATA drive (as is
>> /dev/sda):
>> >> [root@vm-container-0-0 ~]# hdparm -i /dev/sdb
>> >> /dev/sdb:
>> >>  Model=WDC WD1002FBYS-02A6B0   , FwRev=03.00C06,
>> SerialNo=
>> >> WD-WMATV5311442
>> >>  Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs
>> FmtGapReq
>> >> }
>> >>  RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=50
>> >>  BuffType=unknown, BuffSize=32767kB, MaxMultSect=16, MultSect=?0?
>> >>  CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=268435455
>> >>  IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
>> >>  PIO modes:  pio0 pio3 pio4
>> >>  DMA modes:  mdma0 mdma1 mdma2
>> >>  UDMA modes: udma0 udma1 udma2
>> >>  AdvancedPM=no WriteCache=enabled
>> >>  Drive conforms to: Unspecified:  ATA/ATAPI-1 ATA/ATAPI-2 ATA/ATAPI-3
>> >> ATA/ATAPI-4 ATA/ATAPI-5 ATA/ATAPI-6 ATA/ATAPI-7
>> >>  * signifies the current active mode
>> >>
>> >>>
>> >>> 6. What is the backend filesystem ?
>> >>
>> >> ext3
>> >>
>> >>>
>> >>> 7. Can you run a simple scp of about 10M between any two of these
>> systems
>> >>> and report the speed ?
>> >>
>> >>  Sure, output:
>> >> [root@vm-container-0-1 ~]# scp vm-container-0-0:/gluster/dd_test.img .
>> >> Warning: Permanently added 'vm-container-0-0' (RSA) to the list of
>> known
>> >> hosts.
>> >> root@vm-container-0-0's password:
>> >> dd_test.img                100% 2000MB  39.2MB/s   00:51
>> >>
>> >>--joey
>> >>
>> >>
>> >
>> >
>> >
>> >
>>
>
>


Re: [Gluster-users] 3.2.2 Performance Issue

2011-08-10 Thread Joey McDonald
On Tue, Aug 9, 2011 at 11:19 PM, Mohit Anchlia wrote:

> And can you also give the mount options of gluster fs?
>
>
Sure:

[root@vm-container-0-0 ~]# tail -2 /etc/fstab
/dev/sdb1        /gluster   ext3        defaults           0 1
127.0.0.1:pifs   /pifs      glusterfs   defaults,_netdev   0 0

I'm not running VMs off the gluster share yet, but that's the general idea
of where I'd like to get with this platform. That will obviously require
some different mount options, but I'd like to get this performance issue
resolved before I start in on that.


   --joey




> On Tue, Aug 9, 2011 at 4:41 PM, Joey McDonald  wrote:
> >
> >
> > On Tue, Aug 9, 2011 at 5:40 PM, Joey McDonald  wrote:
> >>
> >> Hi Pavan,
> >> Thanks for your quick reply, comments inline:
> >>
> >>>
> >>> 1. Are these baremetal systems or are they Virtual machines ?
> >>
> >> Bare metal systems.
> >>
> >>>
> >>> 2. What is the amount of RAM of each of these systems ?
> >>
> >> They all have 4194304 kB of memory.
> >>
> >>>
> >>> 3. How many CPUs do they have ?
> >>
> >> They each have 8 procs.
> >>
> >>>
> >>> 4. Can you also perform the dd on /gluster as opposed to /root to check
> >>> the backend performance ?
> >>
> >> Sure, here is that output:
> >>
> >> [root@vm-container-0-0 ~]# dd if=/dev/zero of=/gluster/dd_test.img
> bs=1M
> >> count=2000
> >> 2000+0 records in
> >> 2000+0 records out
> >> 2097152000 bytes (2.1 GB) copied, 6.65193 seconds, 315 MB/s
> >>
> >>>
> >>> 5. What is your disk backend ? Is it direct attached or is it an array
> ?
> >>
> >> Direct attached, /gluster is /dev/sdb1, 1TB SATA drive (as is /dev/sda):
> >> [root@vm-container-0-0 ~]# hdparm -i /dev/sdb
> >> /dev/sdb:
> >>  Model=WDC WD1002FBYS-02A6B0   , FwRev=03.00C06,
> SerialNo=
> >> WD-WMATV5311442
> >>  Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs
> FmtGapReq
> >> }
> >>  RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=50
> >>  BuffType=unknown, BuffSize=32767kB, MaxMultSect=16, MultSect=?0?
> >>  CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=268435455
> >>  IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
> >>  PIO modes:  pio0 pio3 pio4
> >>  DMA modes:  mdma0 mdma1 mdma2
> >>  UDMA modes: udma0 udma1 udma2
> >>  AdvancedPM=no WriteCache=enabled
> >>  Drive conforms to: Unspecified:  ATA/ATAPI-1 ATA/ATAPI-2 ATA/ATAPI-3
> >> ATA/ATAPI-4 ATA/ATAPI-5 ATA/ATAPI-6 ATA/ATAPI-7
> >>  * signifies the current active mode
> >>
> >>>
> >>> 6. What is the backend filesystem ?
> >>
> >> ext3
> >>
> >>>
> >>> 7. Can you run a simple scp of about 10M between any two of these
> systems
> >>> and report the speed ?
> >>
> >>  Sure, output:
> >> [root@vm-container-0-1 ~]# scp vm-container-0-0:/gluster/dd_test.img .
> >> Warning: Permanently added 'vm-container-0-0' (RSA) to the list of known
> >> hosts.
> >> root@vm-container-0-0's password:
> >> dd_test.img                100% 2000MB  39.2MB/s   00:51
> >>
> >>--joey
> >>
> >>
> >
> >
> >
> >
>


Re: [Gluster-users] 3.2.2 Performance Issue

2011-08-09 Thread Joey McDonald
On Tue, Aug 9, 2011 at 5:40 PM, Joey McDonald  wrote:

> Hi Pavan,
>
> Thanks for your quick reply, comments inline:
>
>
>> 1. Are these baremetal systems or are they Virtual machines ?
>>
>
> Bare metal systems.
>
>
>
>> 2. What is the amount of RAM of each of these systems ?
>>
>
> They all have 4194304 kB of memory.
>
>
>>
>> 3. How many CPUs do they have ?
>>
>
> They each have 8 procs.
>
>
>> 4. Can you also perform the dd on /gluster as opposed to /root to check
>> the backend performance ?
>>
>
> Sure, here is that output:
>
> [root@vm-container-0-0 ~]# dd if=/dev/zero of=/gluster/dd_test.img bs=1M
> count=2000
> 2000+0 records in
> 2000+0 records out
> 2097152000 bytes (2.1 GB) copied, 6.65193 seconds, 315 MB/s
>
>
>
>> 5. What is your disk backend ? Is it direct attached or is it an array ?
>>
>
> Direct attached, /gluster is /dev/sdb1, 1TB SATA drive (as is /dev/sda):
>
> [root@vm-container-0-0 ~]# hdparm -i /dev/sdb
>
> /dev/sdb:
>
>  Model=WDC WD1002FBYS-02A6B0   , FwRev=03.00C06, SerialNo=
> WD-WMATV5311442
>  Config={ HardSect NotMFM HdSw>15uSec SpinMotCtl Fixed DTR>5Mbs FmtGapReq }
>  RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=50
>  BuffType=unknown, BuffSize=32767kB, MaxMultSect=16, MultSect=?0?
>  CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=268435455
>  IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
>  PIO modes:  pio0 pio3 pio4
>  DMA modes:  mdma0 mdma1 mdma2
>  UDMA modes: udma0 udma1 udma2
>  AdvancedPM=no WriteCache=enabled
>  Drive conforms to: Unspecified:  ATA/ATAPI-1 ATA/ATAPI-2 ATA/ATAPI-3
> ATA/ATAPI-4 ATA/ATAPI-5 ATA/ATAPI-6 ATA/ATAPI-7
>
>  * signifies the current active mode
>
>
>
>> 6. What is the backend filesystem ?
>>
>
> ext3
>
>
>> 7. Can you run a simple scp of about 10M between any two of these systems
>> and report the speed ?
>>
>
>  Sure, output:
>
> [root@vm-container-0-1 ~]# scp vm-container-0-0:/gluster/dd_test.img .
> Warning: Permanently added 'vm-container-0-0' (RSA) to the list of known
> hosts.
> root@vm-container-0-0's password:
> dd_test.img                100% 2000MB  39.2MB/s   00:51
>
>
>--joey
>
>
>
>


[Gluster-users] 3.2.2 Performance Issue

2011-08-09 Thread Joey McDonald
Hello all,

I've configured 4 bricks over a GigE network; however, I'm getting very slow
write performance on my gluster share.

Just set this up this week, and here's what I'm seeing:

[root@vm-container-0-0 ~]# gluster --version | head -1
glusterfs 3.2.2 built on Jul 14 2011 13:34:25

[root@vm-container-0-0 pifs]# gluster volume info

Volume Name: pifs
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: vm-container-0-0:/gluster
Brick2: vm-container-0-1:/gluster
Brick3: vm-container-0-2:/gluster
Brick4: vm-container-0-3:/gluster
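
(For context, a 2x2 volume like this is created along these lines --
reconstructed, not a paste of the exact command I ran:

gluster volume create pifs replica 2 vm-container-0-0:/gluster vm-container-0-1:/gluster vm-container-0-2:/gluster vm-container-0-3:/gluster

with adjacent bricks forming the replica pairs.)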

The 4 systems are each storage bricks and storage clients, mounting gluster
like so:

[root@vm-container-0-1 ~]# df -h  /pifs/
FilesystemSize  Used Avail Use% Mounted on
glusterfs#127.0.0.1:pifs
  1.8T  848M  1.7T   1% /pifs

iperf shows network throughput looking good:

[root@vm-container-0-0 pifs]# iperf -c vm-container-0-1

Client connecting to vm-container-0-1, TCP port 5001
TCP window size: 16.0 KByte (default)

[  3] local 10.19.127.254 port 53441 connected with 10.19.127.253 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes  941 Mbits/sec


Then, writing to the local disk is pretty fast:

[root@vm-container-0-0 pifs]# dd if=/dev/zero of=/root/dd_test.img bs=1M
count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 4.8066 seconds, 436 MB/s
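
(One caveat on that local number: plain dd reports its rate before the page
cache has been flushed, so some of that 436 MB/s is cache. Adding conv=fsync
makes dd flush to disk before printing, which gives a fairer baseline -- for
example:

[root@vm-container-0-0 pifs]# dd if=/dev/zero of=/root/dd_test.img bs=1M count=2000 conv=fsync

The gluster numbers below are so much lower that the conclusion holds either
way.)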

However, writes to the gluster share are abysmally slow:

[root@vm-container-0-0 pifs]# dd if=/dev/zero of=/pifs/dd_test.img bs=1M
count=2000
2000+0 records in
2000+0 records out
2097152000 bytes (2.1 GB) copied, 241.866 seconds, 8.7 MB/s

Other than the fact that it's quite slow, it seems to be very stable.

iozone testing shows about the same results.

Any help troubleshooting would be much appreciated. Thanks!

   --joey


[Gluster-users] VM Instances Hang When Mounting Root Filesystem

2011-04-25 Thread Joey McDonald
Greetings Gluster folk,

I'm running a set of Xen instances; the Xen server configuration and disk
images live on local disk. I'm attempting to migrate the images/instances
off local disk and onto a gluster share.

I have a 4 node physical system cluster. Each system in the cluster has 2
disks. The 2nd drive is dedicated to gluster. It's mounted up as /gluster/
and the volume looks like this:

[root@vm-container-0-0 ~]# gluster volume info

Volume Name: pifs
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: vm-container-0-0:/gluster
Brick2: vm-container-0-1:/gluster
Brick3: vm-container-0-2:/gluster
Brick4: vm-container-0-3:/gluster

[root@vm-container-0-1 ~]# df -h /pifs/
Filesystem            Size  Used Avail Use% Mounted on
glusterfs#127.0.0.1:pifs
                      1.8T  421G  1.3T  25% /pifs


I'm mounting the gluster partition from /etc/fstab, like so:

127.0.0.1:pifs   /pifs   glusterfs   direct-io-mode=disable,_netdev   0 0

on each of the 4 nodes. Thus, each system is a gluster server and a gluster
client. Everything seems to work fairly happily except when Xen tries to
actually run an instance. Every instance begins to boot and then hits the
same problem: it simply hangs after checking the local disks. So, to be
clear, this is the output of 'xm console' for a VM instance running off the
glusterfs-based /pifs partition:

kjournald starting.  Commit interval 5 seconds
EXT3-fs: mounted filesystem with ordered data mode.
Setting up other filesystems.
Setting up new root fs
no fstab.sys, mounting internal defaults
Switching to new root and running init.
unmounting old /dev
unmounting old /proc
unmounting old /sys
SELinux:  Disabled at runtime.
type=1404 audit(1303751618.915:2): selinux=0 auid=4294967295 ses=4294967295
INIT: version 2.86 booting
Welcome to  CentOS release 5.5 (Final)
Press 'I' to enter interactive startup.
Cannot access the Hardware Clock via any known method.
Use the --debug option to see the details of our search for an access method.
Setting clock : Mon Apr 25 13:13:39 EDT 2011 [  OK  ]
Starting udev: [  OK  ]
Setting hostname localhost:  [  OK  ]
No devices found
Setting up Logical Volume Management: [  OK  ]
Checking filesystems
Checking all file systems.
[/sbin/fsck.ext3 (1) -- /] fsck.ext3 -a /dev/sda1
/dev/sda1: clean, 55806/262144 files, 383646/524288 blocks
[/sbin/fsck.ext3 (1) -- /opt] fsck.ext3 -a /dev/sda2
/dev/sda2: clean, 11/2359296 files, 118099/4718592 blocks

This is obviously the output from a CentOS 5.5 VM. I've also tried running a
very simple ttylinux 6.0 VM just to test with, and it hangs in the same
place, when trying to remount the root file system:

NET: Registered protocol family 2
IP route cache hash table entries: 32768 (order: 5, 131072 bytes)
TCP established hash table entries: 131072 (order: 8, 1048576 bytes)
TCP bind hash table entries: 65536 (order: 7, 524288 bytes)
TCP: Hash tables configured (established 131072 bind 65536)
TCP reno registered
TCP bic registered
NET: Registered protocol family 1
NET: Registered protocol family 17
Bridge firewalling registered
Using IPI Shortcut mode
md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.
VFS: Mounted root (ext2 filesystem) readonly.
Freeing unused kernel memory: 380k freed

-=# ttylinux 6.0 #=-

Mounting proc:  done
Mounting sysfs: done
Setting console loglevel:   done
Setting system clock: hwclock: cannot access RTC: No such file or directory
failed
Starting fsck for root filesystem.
e2fsck 1.39 (29-May-2006)
/dev/sda1: clean, 427/1280 files, 4215/5120 blocks
Checking root filesystem:   done
Remounting root rw:


The instances simply hang here forever. When the instance files are located
on the local disk, they boot up with no problem.

I have mounted /pifs/ with direct-io-mode=disable but am still seeing this
strange behavior.
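
One thing I still plan to try, in case anyone else hits this: the Xen
'file:' disk backend goes through the kernel loop driver, which reportedly
does not get along with FUSE mounts, and the commonly suggested workaround
is the blktap backend instead. Untested here, and the image path below is
just a placeholder:

disk = [ 'tap:aio:/pifs/images/centos55.img,xvda,w' ]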

Does anyone know what the issue with this could be? Thanks!

   --joey


Re: [Gluster-users] Bug? Mount and fstab

2010-10-21 Thread Joey McDonald
Hi Amar, thanks for checking. I'm running version 3.0.5.

   --joey

On Thu, Oct 21, 2010 at 12:02 AM, Amar Tumballi  wrote:

> Joey,
>
> Which version of GlusterFS are you using? If you are using 3.1, then I am
> not able to reproduce the problem locally here on our system.
>
> Also, if you are using 3.1, we recommend using the '<server>:/<volname>'
> format in the 1st column, instead of a direct path to the client volume
> file, to avail yourself of the DVM functionality.
>
> Regards,
> Amar
>
>
> On Thu, Oct 21, 2010 at 5:35 AM, Joey McDonald  wrote:
>
>> I think this is likely a bug with either mount or glusterfs:
>>
>>
>> [r...@vm-container-0-0 ~]# cat /etc/fstab
>> LABEL=/                 /                   ext3       defaults        1 1
>> LABEL=/state/partition  /state/partition1   ext3       defaults        1 2
>> LABEL=/var              /var                ext3       defaults        1 2
>> tmpfs                   /dev/shm            tmpfs      defaults        0 0
>> devpts                  /dev/pts            devpts     gid=5,mode=620  0 0
>> sysfs                   /sys                sysfs      defaults        0 0
>> proc                    /proc               proc       defaults        0 0
>> LABEL=SWAP-sda3         swap                swap       defaults        0 0
>> /dev/sdb1               /gluster            ext3       defaults        0 1
>> /etc/glusterfs/glusterfs.vol  /pifs/        glusterfs  defaults        0 0
>>
>>
>> [r...@vm-container-0-0 ~]# df -h
>> FilesystemSize  Used Avail Use% Mounted on
>> /dev/sda1  16G  2.6G   12G  18% /
>> /dev/sda5 883G   35G  803G   5% /state/partition1
>> /dev/sda2 3.8G  121M  3.5G   4% /var
>> tmpfs 7.7G 0  7.7G   0% /dev/shm
>> /dev/sdb1 917G  200M  871G   1% /gluster
>> none  7.7G  104K  7.7G   1% /var/lib/xenstored
>> glusterfs#/etc/glusterfs/glusterfs.vol
>>  2.7T  600M  2.6T   1% /pifs
>> [r...@vm-container-0-0 ~]# mount -a
>> [r...@vm-container-0-0 ~]# mount -a
>> [r...@vm-container-0-0 ~]# mount -a
>> [r...@vm-container-0-0 ~]# df -h
>> FilesystemSize  Used Avail Use% Mounted on
>> /dev/sda1  16G  2.6G   12G  18% /
>> /dev/sda5 883G   35G  803G   5% /state/partition1
>> /dev/sda2 3.8G  121M  3.5G   4% /var
>> tmpfs 7.7G 0  7.7G   0% /dev/shm
>> /dev/sdb1 917G  200M  871G   1% /gluster
>> none  7.7G  104K  7.7G   1% /var/lib/xenstored
>> glusterfs#/etc/glusterfs/glusterfs.vol
>>  2.7T  600M  2.6T   1% /pifs
>> glusterfs#/etc/glusterfs/glusterfs.vol
>>  2.7T  600M  2.6T   1% /pifs
>> glusterfs#/etc/glusterfs/glusterfs.vol
>>  2.7T  600M  2.6T   1% /pifs
>> glusterfs#/etc/glusterfs/glusterfs.vol
>>  2.7T  600M  2.6T   1% /pifs
>>
>>
>>
>


[Gluster-users] Bug? Mount and fstab

2010-10-20 Thread Joey McDonald
I think this is likely a bug with either mount or glusterfs:


[r...@vm-container-0-0 ~]# cat /etc/fstab
LABEL=/                 /                   ext3       defaults        1 1
LABEL=/state/partition  /state/partition1   ext3       defaults        1 2
LABEL=/var              /var                ext3       defaults        1 2
tmpfs                   /dev/shm            tmpfs      defaults        0 0
devpts                  /dev/pts            devpts     gid=5,mode=620  0 0
sysfs                   /sys                sysfs      defaults        0 0
proc                    /proc               proc       defaults        0 0
LABEL=SWAP-sda3         swap                swap       defaults        0 0
/dev/sdb1               /gluster            ext3       defaults        0 1
/etc/glusterfs/glusterfs.vol  /pifs/        glusterfs  defaults        0 0


[r...@vm-container-0-0 ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda1  16G  2.6G   12G  18% /
/dev/sda5 883G   35G  803G   5% /state/partition1
/dev/sda2 3.8G  121M  3.5G   4% /var
tmpfs 7.7G 0  7.7G   0% /dev/shm
/dev/sdb1 917G  200M  871G   1% /gluster
none  7.7G  104K  7.7G   1% /var/lib/xenstored
glusterfs#/etc/glusterfs/glusterfs.vol
  2.7T  600M  2.6T   1% /pifs
[r...@vm-container-0-0 ~]# mount -a
[r...@vm-container-0-0 ~]# mount -a
[r...@vm-container-0-0 ~]# mount -a
[r...@vm-container-0-0 ~]# df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/sda1  16G  2.6G   12G  18% /
/dev/sda5 883G   35G  803G   5% /state/partition1
/dev/sda2 3.8G  121M  3.5G   4% /var
tmpfs 7.7G 0  7.7G   0% /dev/shm
/dev/sdb1 917G  200M  871G   1% /gluster
none  7.7G  104K  7.7G   1% /var/lib/xenstored
glusterfs#/etc/glusterfs/glusterfs.vol
  2.7T  600M  2.6T   1% /pifs
glusterfs#/etc/glusterfs/glusterfs.vol
  2.7T  600M  2.6T   1% /pifs
glusterfs#/etc/glusterfs/glusterfs.vol
  2.7T  600M  2.6T   1% /pifs
glusterfs#/etc/glusterfs/glusterfs.vol
  2.7T  600M  2.6T   1% /pifs
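
A workaround until the duplicate-mount behavior is understood (my own
suggestion, untested on this setup): guard the glusterfs mount so that
repeated 'mount -a'-style runs are harmless, e.g. from a script:

mountpoint -q /pifs || mount /pifs

assuming mountpoint(1) is available; otherwise a grep of /proc/mounts works
just as well.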


Re: [Gluster-users] gluster and rocks?

2010-10-20 Thread Joey McDonald
On Sat, Oct 9, 2010 at 12:45 AM, Lana Deere  wrote:

> Is anyone using Gluster with the Rocks cluster system?  If so, do you
> know if there is a Roll for Gluster?  Or, do you have any advice about
> things to do and things to avoid doing?
>
> We're considering using Gluster with Infiniband on our cluster and
> trying to learn whether other people have done this so we can perhaps
> learn from their experience.
>
>

Comments welcome:

http://sysextra.blogspot.com/2010/10/how-to-create-gluster-roll-for-rocks.html

This is for 3.0.5 - when I get around to deploying 3.1, I'll do another
post.


   --joey









> Thanks.
>
> .. Lana (lana.de...@gmail.com)
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>