Re: [Gluster-users] ZFS + Linux + Glusterfs for a production ready 100+ TB NAS on cloud

2011-09-24 Thread Liam Slusser
I've also heard it can be slower; however, I've never done any performance
tests on the same hardware with ext3/4 vs. XFS, since my partitions are so big
that ext3/4 is just not an option.  That said, I've been pleased with the
performance I get and am a happy XFS user.

ls
On Sep 24, 2011 12:31 PM, "Craig Carl"  wrote:
> XFS is a valid alternative to ZFS on Linux. If I remember correctly, any
> operation that requires modifying a lot of xattrs can be slower than ext*;
> have you noticed anything like that? You might see slower rebalances or
> self-heals.
>
> Craig
>
> Sent from a mobile device, please excuse my tpyos.
>
> On Sep 24, 2011, at 22:14, Liam Slusser  wrote:
>
>> I have a very large (>500 TB) Gluster cluster on CentOS Linux, but I use
>> the XFS filesystem in a production role. Each XFS filesystem (brick) is
>> around 32 TB in size. No problems; it all runs very well.
>>
>> ls
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] ZFS + Linux + Glusterfs for a production ready 100+ TB NAS on cloud

2011-09-24 Thread Craig Carl
XFS is a valid alternative to ZFS on Linux. If I remember correctly, any 
operation that requires modifying a lot of xattrs can be slower than ext*; 
have you noticed anything like that? You might see slower rebalances or 
self-heals.
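Craig's concern about xattr-heavy operations (Gluster stores its metadata in extended attributes on each brick) can be measured directly. A minimal sketch, not a Gluster tool: it times repeated `setxattr` calls on a file in whatever filesystem you point it at; the attribute name and value size are made-up illustration values.

```python
import os
import tempfile
import time

def time_xattr_updates(path, n=1000, name="user.test.attr", value=b"x" * 64):
    """Set the same extended attribute n times on one file and return the
    elapsed seconds, or None if the filesystem rejects user xattrs."""
    try:
        start = time.perf_counter()
        for _ in range(n):
            os.setxattr(path, name, value)
    except OSError:
        return None  # user xattrs unsupported (or disabled) on this filesystem
    return time.perf_counter() - start

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile() as f:
        elapsed = time_xattr_updates(f.name)
        print("unsupported" if elapsed is None
              else f"{elapsed:.4f}s for 1000 setxattr calls")
```

Running the same script against files on an XFS brick and an ext4 brick on the same hardware would give a like-for-like comparison of the xattr path that rebalance and self-heal exercise.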

Craig

Sent from a mobile device, please excuse my tpyos.

On Sep 24, 2011, at 22:14, Liam Slusser  wrote:

> I have a very large (>500 TB) Gluster cluster on CentOS Linux, but I use the 
> XFS filesystem in a production role.  Each XFS filesystem (brick) is around 
> 32 TB in size.  No problems; it all runs very well.
> 
> ls


Re: [Gluster-users] ZFS + Linux + Glusterfs for a production ready 100+ TB NAS on cloud

2011-09-24 Thread Anand Babu Periasamy
On Sat, Sep 24, 2011 at 12:14 PM, Liam Slusser  wrote:

> I have a very large (>500 TB) Gluster cluster on CentOS Linux, but I use the
> XFS filesystem in a production role.  Each XFS filesystem (brick) is around
> 32 TB in size.  No problems; it all runs very well.
>
> ls
>
Yes, XFS is the way to go for large partitions (>16 TB, or even >12 TB). XFS
has been brought back to life by Red Hat; most of the XFS developers are now
Red Hat employees. We can confidently recommend XFS now.

-- 
Anand Babu Periasamy
Blog [ http://www.unlocksmith.org ]
Twitter [ http://twitter.com/abperiasamy ]

Imagination is more important than knowledge --Albert Einstein


Re: [Gluster-users] can't use sqlite3 on gluster mounted as NFS

2011-09-24 Thread Craig Carl
Brandon -
SQLite uses POSIX locking to implement some of its ACID-compliant behavior 
and requires the filesystem to fully implement POSIX advisory locks. Most 
network filesystems (including Gluster native and NFS) don't support everything 
that SQLite needs, so using SQLite on a networked filesystem isn't recommended 
by the SQLite team; see this excerpt from the link I sent earlier -

SQLite uses POSIX advisory locks to implement locking on Unix. On Windows it 
uses the LockFile(), LockFileEx(), and UnlockFile() system calls. SQLite 
assumes that these system calls all work as advertised. If that is not the 
case, then database corruption can result. One should note that POSIX advisory 
locking is known to be buggy or even unimplemented on many NFS implementations 
(including recent versions of Mac OS X) and that there are reports of locking 
problems for network filesystems under Windows. Your best defense is to not use 
SQLite for files on a network filesystem.

Craig

Sent from a mobile device, please excuse my tpyos.

On Sep 24, 2011, at 0:19, Brandon Simmons  wrote:

> On Fri, Sep 23, 2011 at 4:11 PM, Anand Babu Periasamy  
> wrote:
>> This is a known issue. Gluster NFS doesn't support NLM (locking) yet; 3.4
>> may implement it. Did you try a GlusterFS native mount?
> 
> Thanks for that information.
> 
> I did test with the native fuse mount, but the results were difficult
> to interpret. We have a rails application that writes to multiple
> sqlite databases, and a test script that simulates a bunch of random
> writes to a specified DB, retrying if it fails.
> 
> On NFS this test runs reasonably well: both clients take turns, a
> couple retries, all writes complete without failures.
> 
> But mounted over gluster (same machines, underlying disk as above) one
> client always runs while the other gets locked out (different client
> machines depending on which was started first). At some point during
> this test the client that was locked out from writing to the DB
> actually gets disconnected from gluster and I have to remount:
> 
>$ ls /mnt/gluster
>ls: cannot access /websites/: Transport endpoint is not connected
> 
> One client is consistently locked out even if they are writing to
> DIFFERENT DBs altogether.
> 
> The breakage of the mountpoint happened every time the test was run
> concurrently against the SAME DB, but did not seem to occur when
> clients were running against different DBs.
> 
> But like I said, this was a very high level test with many moving
> parts so I'm not sure how useful the above details are for you to
> know.
> 
> Happy to hear any ideas for testing,
> Brandon
> 
> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log:
> [2011-09-16 19:32:38.122196] W
> [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
> reading from socket failed. Error (Transport endpoint is not
> connected), peer (127.0.0.1:1017)
> 
>> 
>> --AB
>> 
>> On Sep 23, 2011 10:00 AM, "Brandon Simmons" 
>> wrote:
>>> I am able to successfully mount a gluster volume using the NFS client
>>> on my test servers. Simple reading and writing seems to work, but
>>> trying to work with sqlite databases seems to cause the sqlite client
>>> and libraries to freeze. I have to send KILL to stop the process.
>>> 
>>> Here is an example, server 1 and 2 are clients mounting gluster volume
>>> over NFS:
>>> 
>>> server1# echo "working" > /mnt/gluster/test_simple
>>> server2# echo "working" >> /mnt/gluster/test_simple
>>> server1# cat /mnt/gluster/test_simple
>>> working
>>> working
>>> server1# sqlite3 /websites/new.sqlite3
>>> SQLite version 3.6.10
>>> Enter ".help" for instructions
>>> Enter SQL statements terminated with a ";"
>>> sqlite> create table memos(text, priority INTEGER);
>>> (...hangs forever, have to detach screen and do kill -9)
>>> 
>>> the gluster volume was created and NFS-mounted as per the instructions
>>> here:
>>> 
>>> 
>>> http://www.gluster.com/community/documentation/index.php/Gluster_3.2_Filesystem_Administration_Guide
>>> 
>>> If I mount the volume using the nolock option, then things work:
>>> 
>>> mount -t nfs -o nolock server:/test-vol /mnt/gluster
>>> 
>>> So I assume this has something to do with the locking RPC service
>>> stuff, which I don't know much about. Here's the output from rpcinfo:
>>> 
>>> server# rpcinfo -p
>>> program vers proto port
>>> 100000 2 tcp 111 portmapper
>>> 100000 2 udp 111 portmapper
>>> 100024 1 udp 56286 status
>>> 100024 1 tcp 40356 status
>>> 100005 3 tcp 38465 mountd
>>> 100005 1 tcp 38466 mountd
>>> 100003 3 tcp 38467 nfs
>>> 
>>> 
>>> client1# rpcinfo -p server
>>> program vers proto port
>>> 100000 2 tcp 111 portmapper
>>> 100000 2 udp 111 portmapper
>>> 100024 1 udp 56286 status
>>> 100024 1 tcp 40356 status
>>> 100005 3 tcp 38465 mountd
>>> 100005 1 tcp 38466 mountd
>>> 100003 3 tcp 38467 nfs
>>> 
>>> client1# rpcinfo -p
>>> program vers proto port
>>> 100000 2 tcp 111 portmapper
>>> 100000 2 udp 111 portmapper
>>> 100024 1 udp 
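Brandon's concurrent-write test can be approximated with a small script. This is a hypothetical stand-in for the Rails test he describes (the `memos` table is borrowed from his `sqlite3` session, everything else is made up): run it from two client machines at once against the same database file on the mount and compare behavior on NFS vs. the native client.

```python
import sqlite3
import time

def write_with_retry(db_path, rows=100, retries=5, delay=0.1):
    """Insert rows one at a time, retrying on 'database is locked'
    errors; returns the number of retries that were needed."""
    conn = sqlite3.connect(db_path, timeout=1.0)
    conn.execute("CREATE TABLE IF NOT EXISTS memos (text TEXT, priority INTEGER)")
    conn.commit()
    failures = 0
    for i in range(rows):
        for _ in range(retries):
            try:
                conn.execute("INSERT INTO memos VALUES (?, ?)", (f"memo {i}", i))
                conn.commit()
                break  # this row landed; move on to the next one
            except sqlite3.OperationalError:
                failures += 1
                time.sleep(delay)  # back off and retry the same row
    conn.close()
    return failures
```

On a healthy lock implementation, two copies of this should take turns with a handful of retries; a client that hangs inside `connect()` or `commit()`, or a mount that dies mid-run, reproduces the failure described above.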

Re: [Gluster-users] ZFS + Linux + Glusterfs for a production ready 100+ TB NAS on cloud

2011-09-24 Thread Liam Slusser
I have a very large (>500 TB) Gluster cluster on CentOS Linux, but I use the
XFS filesystem in a production role.  Each XFS filesystem (brick) is around
32 TB in size.  No problems; it all runs very well.

ls


Re: [Gluster-users] ZFS + Linux + Glusterfs for a production ready 100+ TB NAS on cloud

2011-09-24 Thread Craig Carl
The only thing I might worry about is using ZFS on Linux; I still think it 
might be a little early to trust it with truly critical data, and there 
doesn't seem to be a big ZFS+Linux+Gluster install base to help you if problems 
come up.

I would use mdadm + LVM2 to create the RAID arrays, carving out multiple 
~2 TB LUNs per server with ext3 or ext4, then layer Gluster on top of 
that.

Craig

Sent from a mobile device, please excuse my tpyos.

On Sep 24, 2011, at 14:10, RDP  wrote:

> Hello, 
> Maybe this question has been addressed elsewhere, but I would like the 
> opinion and experience of other users.
> 
> There could be some misconceptions that I might be carrying, so please be 
> kind enough to point them out. Any help, advice, and suggestions will be 
> highly appreciated.
> 
> My goal is to get a greater-than-100 TB Gluster NAS up on the cloud. Each 
> server will hold around 2x8 TB disks. The export volume size (client disk 
> mount size) would be greater than 20 TB.
> 
> This is how I am planning to set it all up: 16 servers, each with 2x8 = 16 TB 
> of space. The GlusterFS volume will be replicated and distributed (RAID-10 
> style). I would like to go with ZFS on Linux for the disks. 
> The client machines will use the GlusterFS client for mounting the volumes.
> 
> ext4 is limited to 16 TB due to its userspace tools (e2fsprogs).
> 
> Would this be considered a production-ready setup? The data housed on this 
> cluster is critical, and hence I need to be very sure before I go ahead with 
> this kind of a setup.
> 
> Or would using ZFS with Gluster make more sense on FreeBSD or illumos (ZFS 
> is native there)?
> 
> Thanks a lot
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] ZFS + Linux + Glusterfs for a production ready 100+ TB NAS on cloud

2011-09-24 Thread RDP
Hello,
  Maybe this question has been addressed elsewhere, but I would like
the opinion and experience of other users.

There could be some misconceptions that I might be carrying, so please be
kind enough to point them out. Any help, advice, and suggestions will be
highly appreciated.

My goal is to get a greater-than-100 TB Gluster NAS up on the cloud. Each
server will hold around 2x8 TB disks. The export volume size (client disk
mount size) would be greater than 20 TB.

This is how I am planning to set it all up: 16 servers, each with 2x8 = 16 TB
of space. The GlusterFS volume will be replicated and distributed (RAID-10
style). I would like to go with ZFS on Linux for the disks.
The client machines will use the GlusterFS client for mounting the volumes.

ext4 is limited to 16 TB due to its userspace tools (e2fsprogs).

Would this be considered a production-ready setup? The data housed on
this cluster is critical, and hence I need to be very sure before I go
ahead with this kind of a setup.

Or would using ZFS with Gluster make more sense on FreeBSD or illumos
(ZFS is native there)?

Thanks a lot


[Gluster-users] GLUSTERFS + ZFS ON LINUX

2011-09-24 Thread RDP
Hello,
  Maybe this question has been addressed elsewhere, but I would like
the opinion and experience of other users.

There could be some misconceptions that I might be carrying, so please be
kind enough to point them out. Any help, advice, and suggestions will be
highly appreciated.

My goal is to get a greater-than-100 TB Gluster NAS up on the cloud. Each
server will hold around 2x8 TB disks. The export volume size (client disk
mount size) would be greater than 20 TB.

This is how I am planning to set it all up: 16 servers, each with 2x8 = 16 TB
of space. The GlusterFS volume will be replicated and distributed (RAID-10
style). I would like to go with ZFS on Linux for the disks.
The client machines will use the GlusterFS client for mounting the volumes.

ext4 is limited to 16 TB due to its userspace tools (e2fsprogs).

Would this be considered a production-ready setup? The data housed on
this cluster is critical, and hence I need to be very sure before I go
ahead with this kind of a setup.

Or would using ZFS with Gluster make more sense on FreeBSD or illumos
(ZFS is native there)?

Thanks a lot