[Gluster-users] NFS and cachefilesd

2012-02-10 Thread rickytato rickytato
Is it possible, and does it make sense, to use an NFS mount together with
cachefilesd for GlusterFS? At the moment I use the native FUSE client to mount
my simple cluster, but I'd like to know whether this setup can work well.
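
For what it's worth, a minimal sketch of what such a setup could look like,
assuming a volume named "testvol" served by the built-in Gluster NFS server on
host "server1" (names and paths are placeholders, not something I have tested):

# /etc/cachefilesd.conf -- where the local cache lives (default location shown)
dir /var/cache/fscache
tag gluster-cache

# start the cache daemon, then mount over NFSv3 with FS-Cache enabled ("fsc")
service cachefilesd start
mount -t nfs -o vers=3,mountproto=tcp,fsc server1:/testvol /mnt/testvol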


rr
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] user_xattr

2011-05-27 Thread rickytato rickytato
Must the user_xattr flag be enabled? I use my two different clusters without
this flag and they seem to work well, but if I do:
setfattr -n user.foo -v bar test.txt
setfattr: test.txt: Operation not supported

Is that normal? Does GlusterFS use "internal" xattrs? I use Ubuntu 10.04 server
64-bit, with an ext4 filesystem.
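
For context: GlusterFS keeps its own metadata in the trusted.* xattr namespace
on the bricks, which works regardless of the user_xattr mount option; only
user.* attributes need it. A rough sketch of how to check, assuming a brick at
/export/brick on /dev/sdb1 (paths are placeholders):

# GlusterFS internal metadata lives in trusted.* xattrs on the brick (run as root)
getfattr -d -m . -e hex /export/brick/test.txt

# enable user.* xattrs on the brick filesystem and retry
mount -o remount,user_xattr /dev/sdb1 /export/brick
setfattr -n user.foo -v bar /export/brick/test.txt
getfattr -n user.foo /export/brick/test.txt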


rr
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] version 3.2

2011-04-26 Thread rickytato rickytato
2011/4/26 Giovanni Toraldo 

> On Tue, Apr 26, 2011 at 11:11 AM, rickytato rickytato
>  wrote:
> > Is version 3.2 ready for production?
>
> Maybe you can ask a more specific question?
>
> Or a:
>
> Yes.
>
> sound like an answer to you?



Well, but are there any release notes?


rr
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] version 3.2

2011-04-26 Thread rickytato rickytato
Is version 3.2 ready for production?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] v3.2

2011-04-23 Thread rickytato rickytato
Is version 3.2 ready for production?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] georeplication

2011-03-18 Thread rickytato rickytato
I've upgraded my two-node cluster to 3.1.3 and saw geo-replication.
What is geo-replication?
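
(For reference, geo-replication, introduced around the 3.2 series,
asynchronously replicates a volume to a remote slave over ssh/rsync. A rough
sketch of the CLI shape, with the volume name, slave host and path as
placeholders and the exact syntax depending on the release:)

# replicate volume "myvol" to a remote directory over ssh, then check the session
gluster volume geo-replication myvol ssh://root@backuphost:/data/backup start
gluster volume geo-replication myvol ssh://root@backuphost:/data/backup status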
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] 4 node replica 2 crash

2011-01-12 Thread rickytato rickytato
This is the stack trace I found in syslog:

Jan 10 18:08:24 www3 kernel: [2773721.043130] INFO: task nginx:22664 blocked for more than 120 seconds.
Jan 10 18:08:24 www3 kernel: [2773721.043152] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 10 18:08:24 www3 kernel: [2773721.043176] nginx D 0001108733fc 0 22664 3107 0x0004
Jan 10 18:08:24 www3 kernel: [2773721.043179]  880058d7db68 0082 8800 00015980
Jan 10 18:08:24 www3 kernel: [2773721.043181]  880058d7dfd8 00015980 880058d7dfd8 880018492dc0
Jan 10 18:08:24 www3 kernel: [2773721.043184]  00015980 00015980 880058d7dfd8 00015980
Jan 10 18:08:24 www3 kernel: [2773721.043186] Call Trace:
Jan 10 18:08:24 www3 kernel: [2773721.043192]  [] request_wait_answer+0x85/0x240
Jan 10 18:08:24 www3 kernel: [2773721.043196]  [] ? autoremove_wake_function+0x0/0x40
Jan 10 18:08:24 www3 kernel: [2773721.043199]  [] fuse_request_send+0x7c/0x90
Jan 10 18:08:24 www3 kernel: [2773721.043202]  [] fuse_dentry_revalidate+0x179/0x2b0
Jan 10 18:08:24 www3 kernel: [2773721.043204]  [] do_lookup+0x84/0x280
Jan 10 18:08:24 www3 kernel: [2773721.043206]  [] link_path_walk+0x12e/0xab0
Jan 10 18:08:24 www3 kernel: [2773721.043208]  [] do_filp_open+0x143/0x660
Jan 10 18:08:24 www3 kernel: [2773721.043212]  [] ? default_spin_lock_flags+0x9/0x10
Jan 10 18:08:24 www3 kernel: [2773721.043216]  [] ? sys_recvfrom+0xe1/0x170
Jan 10 18:08:24 www3 kernel: [2773721.043220]  [] ? _raw_spin_lock+0xe/0x20
Jan 10 18:08:24 www3 kernel: [2773721.043222]  [] ? alloc_fd+0x10a/0x150
Jan 10 18:08:24 www3 kernel: [2773721.043226]  [] do_sys_open+0x69/0x170
Jan 10 18:08:24 www3 kernel: [2773721.043229]  [] sys_open+0x20/0x30
Jan 10 18:08:24 www3 kernel: [2773721.043232]  [] system_call_fastpath+0x16/0x1b


2011/1/12 rickytato rickytato 

> Some other info:
> OS: Ubuntu 10.10 64-bit
> GlusterFS compiled from source
>
> Client and server are the same machines; they are simple web servers running
> Nginx + PHP-FPM, and only one directory of static content is exported by
> GlusterFS; the PHP code stays local.
>
> The servers have two 1 Gbit NICs in bonding.
>
> Anything else you need to know?
>
> The very strange thing is that Nginx stopped responding only about 4 hours
> after the new nodes were added.
>
> Any suggestions?
>
>
> rr
>
> 2011/1/11 rickytato rickytato 
>
> Hi,
>> For about 4 weeks I've been running a simple 2-node replica 2 cluster; I'm
>> using glusterfs 3.1.1 built on Dec  9 2010 15:41:32, repository revision
>> v3.1.1.
>> I use it to serve images through Nginx.
>> Everything works well.
>>
>> Today I added 2 new bricks and rebalanced the volume. It worked for about
>> 4 hours, then Nginx hung; I rebooted all the servers but nothing helped.
>>
>> When I removed the two bricks everything went back to normal (I manually
>> copied the files from the "old" bricks back to the originals).
>>
>>
>> What's wrong?
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] 4 node replica 2 crash

2011-01-12 Thread rickytato rickytato
Some other info:
OS: Ubuntu 10.10 64-bit
GlusterFS compiled from source

Client and server are the same machines; they are simple web servers running
Nginx + PHP-FPM, and only one directory of static content is exported by
GlusterFS; the PHP code stays local.

The servers have two 1 Gbit NICs in bonding.

Anything else you need to know?

The very strange thing is that Nginx stopped responding only about 4 hours
after the new nodes were added.

Any suggestions?


rr

2011/1/11 rickytato rickytato 

> Hi,
> For about 4 weeks I've been running a simple 2-node replica 2 cluster; I'm
> using glusterfs 3.1.1 built on Dec  9 2010 15:41:32, repository revision
> v3.1.1.
> I use it to serve images through Nginx.
> Everything works well.
>
> Today I added 2 new bricks and rebalanced the volume. It worked for about 4
> hours, then Nginx hung; I rebooted all the servers but nothing helped.
>
> When I removed the two bricks everything went back to normal (I manually
> copied the files from the "old" bricks back to the originals).
>
>
> What's wrong?
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] 4 node replica 2 crash

2011-01-10 Thread rickytato rickytato
Hi,
For about 4 weeks I've been running a simple 2-node replica 2 cluster; I'm
using glusterfs 3.1.1 built on Dec  9 2010 15:41:32, repository revision
v3.1.1.
I use it to serve images through Nginx.
Everything works well.

Today I added 2 new bricks and rebalanced the volume. It worked for about 4
hours, then Nginx hung; I rebooted all the servers but nothing helped.

When I removed the two bricks everything went back to normal (I manually
copied the files from the "old" bricks back to the originals).


What's wrong?
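
(For reference, the expansion described above would typically have been done
with commands along these lines; the volume name "images" and the brick paths
are placeholders, not the actual values used here:)

# add the two new bricks to the replicated volume
gluster volume add-brick images server3:/export/brick server4:/export/brick

# spread existing data onto the new bricks and watch progress
gluster volume rebalance images start
gluster volume rebalance images status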
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 3.1.1 ETA?

2010-11-30 Thread rickytato rickytato
Fantastic!!

2010/11/30 Mike Hanby 

> ha, just like magic, 3.1.1 is now available under LATEST on the download
> server.
>
> -Original Message-
> From: gluster-users-boun...@gluster.org [mailto:
> gluster-users-boun...@gluster.org] On Behalf Of Mike Hanby
> Sent: Monday, November 29, 2010 1:35 PM
> To: gluster-users@gluster.org
> Subject: [Gluster-users] GlusterFS 3.1.1 ETA?
>
> Howdy,
>
> Is there any ETA on the 3.1.1 patch that will have the secondary group
> membership fix?
>
> Thanks,
>
> Mike
>
> =
> Mike Hanby
> mha...@uab.edu
> UAB School of Engineering
> Information Systems Specialist II
> IT HPCS / Research Computing
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Problems with Gluster 3.1 and replicate/mirror

2010-11-22 Thread rickytato rickytato
I had the same problem, but with
glusterfs-3.1.1qa8.tar.gz
it seems to work well.
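
(In case it helps: on the 3.1.x series a full self-heal can be triggered by
walking the FUSE mount so that every file is looked up once; a rough sketch
with the mount point as a placeholder:)

# stat every file through the client mount to trigger self-heal
find /mnt/glusterfs -noleaf -print0 | xargs --null stat > /dev/null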


rr

2010/11/22 Craig Carl 

> Hugo -
>How did you disable the quick-read translator?
>
> Thanks,
>
> Craig
>
> -->
> Craig Carl
> Senior Systems Engineer
> Gluster
>
>
> On 11/22/2010 06:48 AM, Hugo Cisneiros (Eitch) wrote:
>
>> Hi :)
>>
>> In another thread, I had problems with the quick-read translator that
>> was fixed on 3.1.1. Since I'm using 3.1.0 I disabled the translator
>> and the updates on small files began to work fine.
>>
>> Now, I'm having another problem. I'm using 2 servers in
>> replicate/mirror mode. I can't always reproduce the problem, but it's
>> happening some times at random. For example, there's a file named
>> tags.txt on the gluster filesystem, accessed on both clients using
>> fuse. Both servers can read and update it.
>>
>> Sometimes, when I update the file on one of the clients, it breaks its
>> access in the other client:
>>
>> server1$ md5sum tags.txt
>> 5c6a268f03c8d6b94dc1c3d0bbd3396a
>> server1$ cat tags.txt
>> [... full contents of the file ...]
>>
>> server2$ md5sum tags.txt
>> 5c6a268f03c8d6b94dc1c3d0bbd3396a
>> server2$ cat tags.txt
>> cat: tags.txt: No such file or directory
>>
>> Using vim to edit the file also gives me a "Permission denied". I can
>> read the directory contents, and even get a md5 checksum of the file,
>> but when trying to access the file, it fails :( The problem is fixed
>> when I remount the gluster mount point at the client 2.
>>
>> I think there's some split brain occurring. Log messages include some of
>> these:
>>
>> W [fuse-bridge.c:2075:fuse_readdir_cbk] glusterfs-fuse: 1074214:
>> READDIR =>  -1 (File descriptor in bad state)
>> I [afr-dir-read.c:171:afr_examine_dir_readdir_cbk] blogs-mirror:
>> entry self-heal triggered. path: /upload/19/files, reason: check
>> sums of directory differ, forced merge option set
>> E [afr-common.c:110:afr_set_split_brain] blogs-mirror: invalid argument:
>> inode
>>
>> W [fuse-bridge.c:570:fuse_fd_cbk] glusterfs-fuse: 1065386: OPEN()
>> /tags.txt =>  -1 (No such file or directory)
>>
>> There are lots of these, saying that an entry self-heal was triggered.
>>
>> Maybe a good option to solve this is re-syncing (with rsync, for
>> example) the server 2 with the server 1. But is this a known bug or
>> something? I remember this happening when I was messing with read-only
>> options and translators.
>>
>> Thanks,
>>
>>  ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Problems with Gluster 3.1 and quick-read translator

2010-11-19 Thread rickytato rickytato
In the source tree:
doc/translator-options.txt

Or in the source code:
xlators/mgmt/glusterd/src/glusterd-volgen.c, starting at line 86
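
(A quick way to pull the option keys out of that file without reading it line
by line; a rough sketch, run from the top of the source tree:)

# list dotted option keys (e.g. performance.quick-read) mentioned in glusterd-volgen.c
grep -o '"[a-z-]*\.[a-z-]*"' xlators/mgmt/glusterd/src/glusterd-volgen.c | sort -u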


rr

2010/11/18 Hugo Cisneiros (Eitch) 

> On Thu, Nov 18, 2010 at 3:38 PM, Jacob Shucart  wrote:
> > This is a bug we have fixed in the 3.1.1 release which should be out
> > pretty soon.  For now you can disable the quick read translator by
> > running:
> >
> > gluster volume set VOLNAME performance.quick-read off
>
> Thanks for your answer. Good to know that there's an option for that; I
> didn't know about it, so I had edited the client file instead.
>
> Looking at
> http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Setting_Volume_Options
> I see some options, but I don't think all of them are listed there, right? :)
> Is there a more complete list? If not, and if you need, I'll be glad
> to help create one... :)
>
> --
> []'s
> Hugo
> www.devin.com.br
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users