[Gluster-users] rotating log files, again

2010-12-10 Thread Devin Reade
On 3 Nov 2010, Phil Packer had asked about rotating log files.
While brick log files get rotated via the "gluster volume log rotate"
command, the question was left unanswered for logs like:

 /var/log/glusterfs/nfs.log
 /var/log/glusterfs/VOLUME.log

Is there any update on how we're supposed to rotate these files
in a way that's friendlier than shutting down glusterd, rotating them,
and starting glusterd again?

GlusterFS is chatty in its logs, and I'd prefer not to let
those files grow very large.
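
For now I'm considering a copytruncate-based logrotate stanza as a
stopgap (just a sketch, not anything blessed by the docs; copytruncate
can drop the few lines written during the copy window):

    /var/log/glusterfs/*.log {
        weekly
        rotate 8
        compress
        missingok
        notifempty
        copytruncate
    }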

Finally, as far as the brick logs go, is the intent that after they
are rotated we can just run a find(1) for old ones and delete them
as required?
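
In other words, would something like this be safe (assuming rotated
brick logs keep their old name plus a suffix; the path, pattern, and
age here are only examples):

    find /var/log/glusterfs -name '*.log.*' -mtime +30 -print -delete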

Devin



Re: [Gluster-users] NFS with UCARP vs. GlusterFS mount question

2010-12-10 Thread Craig Carl

Anything smaller than 128KB is 'small'.

Craig


On 12/09/2010 11:23 PM, Christian Fischer wrote:

On Friday 10 December 2010 07:12:47 Craig Carl wrote:

Christian -
  For large files the Gluster native client will perform better than
NFS, but they are both good options.

Thanks Carl.
I thought nobody would answer ;-)

What are large files from your point of view?



Thanks,

Craig

-->
Craig Carl
Senior Systems Engineer
Gluster

On 12/07/2010 11:39 PM, Christian Fischer wrote:

Morning Folks,

should I prefer NFS with UCARP or native GlusterFS mounts for serving the
system images to XCP?

Which one performs better over 1G network links?

NFS is probably easier to set up thanks to existing tools like rpcinfo and
showmount, both of which are used inside the storage container code; there is
already some code for NFS, but none for GlusterFS unless I write it.

UCARP has the disadvantage that the cluster IP is moved away from dead
systems, but not from dead gluster server daemons, IMHO.
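
I could probably paper over that with a small watchdog next to ucarp
that demotes the node when the daemon stops answering - a rough sketch
(the script name is made up, and I'm assuming 24007 is the glusterd
management port):

    #!/bin/sh
    # ucarp-glusterd-watchdog.sh (hypothetical): kill ucarp on this node
    # when the local gluster management port stops answering, so the
    # peer can take over the cluster IP.
    while sleep 10; do
        if ! nc -z -w 2 127.0.0.1 24007; then
            pkill ucarp
        fi
    done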

What do you think about that?

Best Regards
Christian


Re: [Gluster-users] 3.1.1 GFS + UCARP for simple CIFS HA?

2010-12-10 Thread Jacob Shucart
Kon,

Since CIFS is not completely stateless, something needs to maintain state.
Ucarp doesn't do this, which is why something like CTDB is more appropriate
for CIFS failover.
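
For reference, a minimal CTDB setup needs little more than a node list
and the floating addresses it should manage (a sketch with made-up
addresses; file locations follow common CTDB packaging):

    # /etc/ctdb/nodes - one private address per Samba node
    10.0.0.1
    10.0.0.2

    # /etc/ctdb/public_addresses - floating IPs CTDB moves on failover
    192.168.1.200/24 eth0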

-Jacob

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Kon Wilms
Sent: Thursday, December 09, 2010 10:38 AM
To: gluster-users@gluster.org
Subject: [Gluster-users] 3.1.1 GFS + UCARP for simple CIFS HA?

A little information on my configuration for this task:

- I am deploying a gfs 3.1.1 cluster using 4 nodes in mirror mode.
- Each pair is running ucarp to provide failover support.
- The two ucarp ips are then made available to clients via dnsrr.
- My access mode is read-only for CIFS with no credentials for file
access.
- I am not looking to do an active/active configuration;
active/standby suits me fine.
- I am also not looking to retain state on failover. If a file access
fails, it will be retried by the client-side application.
- Clients are running Win2k8r2 and mount the cifs share to a local
volume for local server access.

Given this scenario, is ucarp suitable for deployment? Or are there
other potential unforeseen issues, e.g. client-side drive disconnects
on failover?

My other option is to go to NFS on the Win2k8r2 servers, which raises
the question of CIFS vs. NFS performance.

Cheers
Kon


Re: [Gluster-users] Error running 3.1.1

2010-12-10 Thread John Preston
It was a surprise to me also. This is the first time I'm having the
problem. I have installed gluster 3.1.1 on two machines so far, and I
had to fix the 64-bit library issue on both. I didn't have this
problem when I was trying gluster 3.0.5 six months ago.

John

On Fri, Dec 10, 2010 at 10:55 AM, Jacob Shucart  wrote:
> John,
>
> I have not observed that behavior before on 64-bit CentOS. It usually has
> all of the linking set up correctly during the initial installation.
>
> -Jacob
>
> -Original Message-
> From: gluster-users-boun...@gluster.org
> [mailto:gluster-users-boun...@gluster.org] On Behalf Of John Preston
> Sent: Friday, December 10, 2010 4:42 AM
> To: Shain Miley; Gluster General Discussion List
> Subject: Re: [Gluster-users] Error running 3.1.1
>
> Solved it. It seems somehow CentOS wasn't searching the 64-bit
> lib directory. I added
>
> /usr/lib64
>
> to the /etc/ld.so.conf file and then ran ldconfig, and it found the
> libraries. Does anyone know why I would need to add the entry? I
> thought it was done automatically for the 64-bit OS.
>
> John
>
> On Fri, Dec 10, 2010 at 7:18 AM, John Preston  wrote:
>> No, I'm using CentOS 5.5 64-bit
>>
>> John
>>
>> On Thu, Dec 9, 2010 at 3:55 PM, Shain Miley  wrote:
>>> I believe that gluster 3.1.x now requires a 64-bit OS... sounds like you
>>> might be using a 32-bit one?
>>>
>>> Shain
>>>
>>> On 12/09/2010 03:45 PM, John Preston wrote:

 Hi, I've just installed gluster 3.1.1 and I get the following error
 when I try to run any of the gluster commands

 /usr/sbin/glusterd: symbol lookup error: /usr/lib64/libgfrpc.so.0:
 undefined symbol: gf_log_xl_log_set

 I'm on CentOS 5.5.

 Is there something I need to know?

 John

>>>
>>>
>>


Re: [Gluster-users] Problem mounting Gluster 3.1 with NFS

2010-12-10 Thread Joe Landman

On 12/10/2010 10:56 AM, Thomas Riske wrote:


When I look at my exports, this is what I see:

showmount -e 192.168.1.88
Export list for 192.168.1.88:
/test-nfs *

so I would think mounting like this:

mount -t nfs 192.168.1.88:/test-nfs/subdir /mnt/testmount

should work...but I get the same error: No such file or directory


Interesting.  I wonder if the export is locked to the specific volume,
so you can't mount a subdirectory and have to mount the whole volume instead.


Might be worth an inquiry.

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: land...@scalableinformatics.com
web  : http://scalableinformatics.com
   http://scalableinformatics.com/jackrabbit
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615


Re: [Gluster-users] Problem mounting Gluster 3.1 with NFS

2010-12-10 Thread Jacob Shucart
Hello,

Gluster 3.1.1 does not support mounting a subdirectory of the volume.
This is going to be changed in the next release.  For now, you could mount
192.168.1.88:/raid, but not /raid/nfstest.
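
If the client really needs just the subdirectory, one workaround is to
mount the whole volume and bind-mount the subdirectory into place (a
sketch for a Linux client, reusing the names from this thread):

    mount -t nfs 192.168.1.88:/test-nfs /mnt/testmount
    mount --bind /mnt/testmount/subdir /mnt/subdir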

-Jacob

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Joe Landman
Sent: Friday, December 10, 2010 7:45 AM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Problem mounting Gluster 3.1 with NFS

On 12/10/2010 10:42 AM, Thomas Riske wrote:
> Hello,
>
> I tried to NFS-mount a gluster-volume using the "normal NFS-way" with
> the directory-path:
>
> mount -t nfs 192.168.1.88:/raid/nfstest /mnt/testmount
>
> This gives me only the following error message:
>
> mount.nfs: mounting 192.168.1.88:/raid/nfstest failed, reason given by
> server: No such file or directory

[...]

> Is this a bug in gluster, or am I missing something here?
>
> Mounting the Gluster-volume with the volume-name over NFS works...
> (mount -t nfs 192.168.1.88:/test-nfs /mnt/testmount)

If you created the volume with a name of test-nfs, then that's what
should show up in your exports:

showmount -e 192.168.1.88


-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: land...@scalableinformatics.com
web  : http://scalableinformatics.com
http://scalableinformatics.com/jackrabbit
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615


Re: [Gluster-users] Error running 3.1.1

2010-12-10 Thread Jacob Shucart
John,

I have not observed that behavior before on 64-bit CentOS. It usually has
all of the linking set up correctly during the initial installation.

-Jacob

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of John Preston
Sent: Friday, December 10, 2010 4:42 AM
To: Shain Miley; Gluster General Discussion List
Subject: Re: [Gluster-users] Error running 3.1.1

Solved it. It seems somehow CentOS wasn't searching the 64-bit
lib directory. I added

/usr/lib64

to the /etc/ld.so.conf file and then ran ldconfig, and it found the
libraries. Does anyone know why I would need to add the entry? I
thought it was done automatically for the 64-bit OS.

John

On Fri, Dec 10, 2010 at 7:18 AM, John Preston  wrote:
> No, I'm using CentOS 5.5 64-bit
>
> John
>
> On Thu, Dec 9, 2010 at 3:55 PM, Shain Miley  wrote:
>> I believe that gluster 3.1.x now requires a 64-bit OS... sounds like you
>> might be using a 32-bit one?
>>
>> Shain
>>
>> On 12/09/2010 03:45 PM, John Preston wrote:
>>>
>>> Hi, I've just installed gluster 3.1.1 and I get the following error
>>> when I try to run any of the gluster commands
>>>
>>> /usr/sbin/glusterd: symbol lookup error: /usr/lib64/libgfrpc.so.0:
>>> undefined symbol: gf_log_xl_log_set
>>>
>>> I'm on CentOS 5.5.
>>>
>>> Is there something I need to know?
>>>
>>> John
>>>
>>
>>
>


Re: [Gluster-users] GlusterFS 3.1.1 - peer attach bug

2010-12-10 Thread Jacob Shucart
Kon,

I will try to reproduce this locally and file a bug.

-Jacob

-Original Message-
From: Kon Wilms [mailto:kon...@gmail.com]
Sent: Thursday, December 09, 2010 2:47 PM
To: Jacob Shucart
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] GlusterFS 3.1.1 - peer attach bug

On Thu, Dec 9, 2010 at 2:08 PM, Jacob Shucart  wrote:
> Can you tell me a little more about the node?  Does it have several IP
> addresses?  Did you get the same results with all of the IP addresses?  Or
> just one of them?  When I tried to probe the IP address of the local node,
> I received a message:
>
> Probe on localhost not needed
>
> This is the expected behavior.

One on the backend with a pair of bonded interfaces, one on the
frontend for gfs access, and another on the frontend (virtual) for
ucarp:

bond0 Link encap:Ethernet  HWaddr a4:ba:db:0c:59:4f
  inet addr:172.16.16.50  Bcast:172.16.16.0  Mask:255.255.255.0
eth0  Link encap:Ethernet  HWaddr a4:ba:db:0c:59:4b
  inet addr:10.2.16.50  Bcast:10.2.31.255  Mask:255.255.240.0
eth2  Link encap:Ethernet  HWaddr a4:ba:db:0c:59:4f
  UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1
eth3  Link encap:Ethernet  HWaddr a4:ba:db:0c:59:4f
  UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
eth0:ucarp Link encap:Ethernet  HWaddr a4:ba:db:0c:59:4b
  inet addr:10.2.16.57  Bcast:10.2.31.255  Mask:255.255.240.0

I get the probe-not-needed response as you do, but a peer status
displays the bogus local node with a nulled UUID (which cannot be
deleted except manually).
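
"Manually" here means removing the stale peer file from glusterd's state
directory and restarting the daemon (a sketch; the directory is my
assumption for 3.1.x, so verify it on your install first):

    /etc/init.d/glusterd stop
    rm /etc/glusterd/peers/00000000-0000-0000-0000-000000000000  # the nulled-UUID entry
    /etc/init.d/glusterd start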

Cheers
Kon


Re: [Gluster-users] Problem mounting Gluster 3.1 with NFS

2010-12-10 Thread Joe Landman

On 12/10/2010 10:42 AM, Thomas Riske wrote:

Hello,

I tried to NFS-mount a gluster-volume using the "normal NFS-way" with
the directory-path:

mount -t nfs 192.168.1.88:/raid/nfstest /mnt/testmount

This gives me only the following error message:

mount.nfs: mounting 192.168.1.88:/raid/nfstest failed, reason given by
server: No such file or directory


[...]


Is this a bug in gluster, or am I missing something here?

Mounting the Gluster-volume with the volume-name over NFS works...
(mount -t nfs 192.168.1.88:/test-nfs /mnt/testmount)


If you created the volume with a name of test-nfs, then that's what
should show up in your exports:


showmount -e 192.168.1.88


--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: land...@scalableinformatics.com
web  : http://scalableinformatics.com
   http://scalableinformatics.com/jackrabbit
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615


[Gluster-users] Problem mounting Gluster 3.1 with NFS

2010-12-10 Thread Thomas Riske

Hello,

I tried to NFS-mount a gluster-volume using the "normal NFS-way" with 
the directory-path:


mount -t nfs 192.168.1.88:/raid/nfstest /mnt/testmount

This gives me only the following error message:

mount.nfs: mounting 192.168.1.88:/raid/nfstest failed, reason given by 
server: No such file or directory


But if I understand the information given here correctly
(http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_NFS_Frequently_Asked_Questions#How_to_export_directories_as_separate_NFS_exports.3F),
it should be possible to mount the NFS exports as mentioned above...?

The nfs.log suggests that the correct options are set in the configuration:

 1: volume test-nfs-client-0
  2: type protocol/client
  3: option remote-host 192.168.1.88
  4: option remote-subvolume /raid/nfstest
  5: option transport-type tcp
  6: end-volume
  7:
  8: volume test-nfs-write-behind
  9: type performance/write-behind
 10: subvolumes test-nfs-client-0
 11: end-volume
 12:
 13: volume test-nfs-read-ahead
 14: type performance/read-ahead
 15: subvolumes test-nfs-write-behind
 16: end-volume
 17:
 18: volume test-nfs-io-cache
 19: type performance/io-cache
 20: subvolumes test-nfs-read-ahead
 21: end-volume
 22:
 23: volume test-nfs-quick-read
 24: type performance/quick-read
 25: subvolumes test-nfs-io-cache
 26: end-volume
 27:
 28: volume test-nfs
 29: type debug/io-stats
 30: subvolumes test-nfs-quick-read
 31: end-volume
 32:
 33: volume nfs-server
 34: type nfs/server
 35: option nfs.dynamic-volumes on
 36: option rpc-auth.addr.test-nfs.allow *
 37: option nfs3.test-nfs.volume-id 7658e857-8fc0-4cac-a1ca-1882329b6fc2
 38: subvolumes test-nfs
 39: end-volume


Is this a bug in gluster, or am I missing something here?

Mounting the Gluster-volume with the volume-name over NFS works...
(mount -t nfs 192.168.1.88:/test-nfs /mnt/testmount)


Kind regards,
Thomas


Re: [Gluster-users] RDMA Problems with GlusterFS 3.1.1

2010-12-10 Thread Artem Trunov
Hi all

To add some info:

1) I can query adapter settings with "ibv_devinfo -v" and get these values

2) I can vary max_cq via ib_mthca param num_cq, but that doesn't affect max_cqe.
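
For the record, the relevant limits show up directly in the verbose
output (field names as printed by ibv_devinfo), e.g.:

    ibv_devinfo -v | grep -E 'max_cqe|max_cq|max_mr'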

cheers
Artem.

On Fri, Dec 10, 2010 at 1:41 PM, Artem Trunov  wrote:
> Hi, Raghavendra, Jeremy
>
> Thanks, I have tried with the patch and also with ofed 1.5.2 and got
> pretty much what Jeremy had:
>
> [2010-12-10 13:32:59.69007] E [rdma.c:2047:rdma_create_cq]
> rpc-transport/rdma: max_mr_size = 18446744073709551615, max_cq =
> 65408, max_cqe = 131071, max_mr = 131056
>
> Aren't these parameters configurable at some driver level? I am a bit
> new to the IB business, so I don't know...
>
> How do you suggest I proceed? Try the unaccepted patch?
>
> cheers
> Artem.
>
>> On Fri, Dec 10, 2010 at 6:22 AM, Raghavendra G  wrote:
>> Hi Artem,
>>
>> you can check the maximum limits using the patch I had sent earlier in the
>> same thread. Also, the patch
>> http://patches.gluster.com/patch/5844/ (which is not accepted yet) will
>> check whether the number of cqe being passed to ibv_create_cq is
>> greater than the value allowed by the device and, if so, will try to
>> create the CQ with the maximum limit allowed by the device.
>>
>> regards,
>> - Original Message -
>> From: "Artem Trunov" 
>> To: "Raghavendra G" 
>> Cc: "Jeremy Stout" , gluster-users@gluster.org
>> Sent: Thursday, December 9, 2010 7:13:40 PM
>> Subject: Re: [Gluster-users] RDMA Problems with GlusterFS 3.1.1
>>
>> Hi, Raghavendra, Jeremy
>>
>> This was very interesting debugging thread to me, since I have the
>> same symptoms, but unsure of the origin. Please see log for my mount
>> command at the end of the message.
>>
>> I have installed 3.1.1. My OFED is 1.5.1 - does that make a serious
>> difference compared to the already mentioned 1.5.2?
>>
>> On hardware limitations - I have Mellanox InfiniHost III Lx 20Gb/s and
>> it says in specs:
>>
>> "Supports 16 million QPs, EEs & CQs "
>>
>> Is this enough? How can I query the actual settings for max_cq and max_cqe?
>>
>> In general, how should I proceed? What are my other debugging options?
>> Should I go down Jeremy's path of hacking the gluster code?
>>
>> cheers
>> Artem.
>>
>> Log:
>>
>> -
>> [2010-12-09 15:15:53.847595] W [io-stats.c:1644:init] test-volume:
>> dangling volume. check volfile
>> [2010-12-09 15:15:53.847643] W [dict.c:1204:data_to_str] dict: @data=(nil)
>> [2010-12-09 15:15:53.847657] W [dict.c:1204:data_to_str] dict: @data=(nil)
>> [2010-12-09 15:15:53.858574] E [rdma.c:2066:rdma_create_cq]
>> rpc-transport/rdma: test-volume-client-1: creation of send_cq failed
>> [2010-12-09 15:15:53.858805] E [rdma.c:3771:rdma_get_device]
>> rpc-transport/rdma: test-volume-client-1: could not create CQ
>> [2010-12-09 15:15:53.858821] E [rdma.c:3957:rdma_init]
>> rpc-transport/rdma: could not create rdma device for mthca0
>> [2010-12-09 15:15:53.858893] E [rdma.c:4789:init]
>> test-volume-client-1: Failed to initialize IB Device
>> [2010-12-09 15:15:53.858909] E
>> [rpc-transport.c:971:rpc_transport_load] rpc-transport: 'rdma'
>> initialization failed
>> pending frames:
>>
>> patchset: v3.1.1
>> signal received: 11
>> time of crash: 2010-12-09 15:15:53
>> configuration details:
>> argp 1
>> backtrace 1
>> dlfcn 1
>> fdatasync 1
>> libpthread 1
>> llistxattr 1
>> setfsid 1
>> spinlock 1
>> epoll.h 1
>> xattr.h 1
>> st_atim.tv_nsec 1
>> package-string: glusterfs 3.1.1
>> /lib64/libc.so.6[0x32aca302d0]
>> /lib64/libc.so.6(strcmp+0x0)[0x32aca79140]
>> /usr/lib64/glusterfs/3.1.1/rpc-transport/rdma.so[0x2c4fef6c]
>> /usr/lib64/glusterfs/3.1.1/rpc-transport/rdma.so(init+0x2f)[0x2c50013f]
>> /usr/lib64/libgfrpc.so.0(rpc_transport_load+0x389)[0x3fcca0cac9]
>> /usr/lib64/libgfrpc.so.0(rpc_clnt_new+0xfe)[0x3fcca1053e]
>> /usr/lib64/glusterfs/3.1.1/xlator/protocol/client.so(client_init_rpc+0xa1)[0x2b194f01]
>> /usr/lib64/glusterfs/3.1.1/xlator/protocol/client.so(init+0x129)[0x2b1950d9]
>> /usr/lib64/libglusterfs.so.0(xlator_init+0x58)[0x3fcc617398]
>> /usr/lib64/libglusterfs.so.0(glusterfs_graph_init+0x31)[0x3fcc640291]
>> /usr/lib64/libglusterfs.so.0(glusterfs_graph_activate+0x38)[0x3fcc6403c8]
>> /usr/sbin/glusterfs(glusterfs_process_volfp+0xfa)[0x40373a]
>> /usr/sbin/glusterfs(mgmt_getspec_cbk+0xc5)[0x406125]
>> /usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2)[0x3fcca0f542]
>> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x8d)[0x3fcca0f73d]
>> /usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x2c)[0x3fcca0a95c]
>> /usr/lib64/glusterfs/3.1.1/rpc-transport/socket.so(socket_event_poll_in+0x3f)[0x2ad6ef9f]
>> /usr/lib64/glusterfs/3.1.1/rpc-transport/socket.so(socket_event_handler+0x170)[0x2ad6f130]
>> /usr/lib64/libglusterfs.so.0[0x3fcc637917]
>> /usr/sbin/glusterfs(main+0x39b)[0x40470b]
>> /lib64/libc.so.6(__libc_start_main+0xf4)[0x32aca1d994]
>> /usr/sbin/glusterfs[0x402e29]
>>
>>
>>
>>
>> On Fri, Dec 3, 2010 at 1:53 PM, Raghavendra G  wrote:
>>> From the logs it's evident that the reason for completion queue creation
>>> failure is that the number of completion queue elements (in a completion
>>> queue) we had requested in ibv_create_cq, (1024 * send_count), exceeds
>>> the maximum supported by the IB hardware (max_cqe = 131071).

Re: [Gluster-users] Error running 3.1.1

2010-12-10 Thread John Preston
Solved it. It seems somehow CentOS wasn't searching the 64-bit
lib directory. I added

/usr/lib64

to the /etc/ld.so.conf file and then ran ldconfig, and it found the
libraries. Does anyone know why I would need to add the entry? I
thought it was done automatically for the 64-bit OS.
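
For anyone else hitting the same symbol lookup error, the check and the
fix boil down to this (run as root):

    # Is the 64-bit gluster RPC library in the linker cache?
    ldconfig -p | grep libgfrpc
    # If not, add the 64-bit library directory and rebuild the cache:
    echo /usr/lib64 >> /etc/ld.so.conf
    ldconfig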

John

On Fri, Dec 10, 2010 at 7:18 AM, John Preston  wrote:
> No, I'm using CentOS 5.5 64-bit
>
> John
>
> On Thu, Dec 9, 2010 at 3:55 PM, Shain Miley  wrote:
>> I believe that gluster 3.1.x now requires a 64-bit OS... sounds like you
>> might be using a 32-bit one?
>>
>> Shain
>>
>> On 12/09/2010 03:45 PM, John Preston wrote:
>>>
>>> Hi, I've just installed gluster 3.1.1 and I get the following error
>>> when I try to run any of the gluster commands
>>>
>>> /usr/sbin/glusterd: symbol lookup error: /usr/lib64/libgfrpc.so.0:
>>> undefined symbol: gf_log_xl_log_set
>>>
>>> I'm on CentOS 5.5.
>>>
>>> Is there something I need to know?
>>>
>>> John
>>>
>>
>>
>


Re: [Gluster-users] RDMA Problems with GlusterFS 3.1.1

2010-12-10 Thread Artem Trunov
Hi, Raghavendra, Jeremy

Thanks, I have tried with the patch and also with ofed 1.5.2 and got
pretty much what Jeremy had:

[2010-12-10 13:32:59.69007] E [rdma.c:2047:rdma_create_cq]
rpc-transport/rdma: max_mr_size = 18446744073709551615, max_cq =
65408, max_cqe = 131071, max_mr = 131056

Aren't these parameters configurable at some driver level? I am a bit
new to the IB business, so I don't know...

How do you suggest I proceed? Try the unaccepted patch?

cheers
Artem.

On Fri, Dec 10, 2010 at 6:22 AM, Raghavendra G  wrote:
> Hi Artem,
>
> you can check the maximum limits using the patch I had sent earlier in the
> same thread. Also, the patch
> http://patches.gluster.com/patch/5844/ (which is not accepted yet) will
> check whether the number of cqe being passed to ibv_create_cq is
> greater than the value allowed by the device and, if so, will try to create
> the CQ with the maximum limit allowed by the device.
>
> regards,
> - Original Message -
> From: "Artem Trunov" 
> To: "Raghavendra G" 
> Cc: "Jeremy Stout" , gluster-users@gluster.org
> Sent: Thursday, December 9, 2010 7:13:40 PM
> Subject: Re: [Gluster-users] RDMA Problems with GlusterFS 3.1.1
>
Hi, Raghavendra, Jeremy
>
> This was very interesting debugging thread to me, since I have the
> same symptoms, but unsure of the origin. Please see log for my mount
> command at the end of the message.
>
> I have installed 3.1.1. My OFED is 1.5.1 - does that make a serious
> difference compared to the already mentioned 1.5.2?
>
> On hardware limitations - I have Mellanox InfiniHost III Lx 20Gb/s and
> it says in specs:
>
> "Supports 16 million QPs, EEs & CQs "
>
> Is this enough? How can I query the actual settings for max_cq and max_cqe?
>
> In general, how should I proceed? What are my other debugging options?
> Should I go down Jeremy's path of hacking the gluster code?
>
> cheers
> Artem.
>
> Log:
>
> -
> [2010-12-09 15:15:53.847595] W [io-stats.c:1644:init] test-volume:
> dangling volume. check volfile
> [2010-12-09 15:15:53.847643] W [dict.c:1204:data_to_str] dict: @data=(nil)
> [2010-12-09 15:15:53.847657] W [dict.c:1204:data_to_str] dict: @data=(nil)
> [2010-12-09 15:15:53.858574] E [rdma.c:2066:rdma_create_cq]
> rpc-transport/rdma: test-volume-client-1: creation of send_cq failed
> [2010-12-09 15:15:53.858805] E [rdma.c:3771:rdma_get_device]
> rpc-transport/rdma: test-volume-client-1: could not create CQ
> [2010-12-09 15:15:53.858821] E [rdma.c:3957:rdma_init]
> rpc-transport/rdma: could not create rdma device for mthca0
> [2010-12-09 15:15:53.858893] E [rdma.c:4789:init]
> test-volume-client-1: Failed to initialize IB Device
> [2010-12-09 15:15:53.858909] E
> [rpc-transport.c:971:rpc_transport_load] rpc-transport: 'rdma'
> initialization failed
> pending frames:
>
> patchset: v3.1.1
> signal received: 11
> time of crash: 2010-12-09 15:15:53
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> fdatasync 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 3.1.1
> /lib64/libc.so.6[0x32aca302d0]
> /lib64/libc.so.6(strcmp+0x0)[0x32aca79140]
> /usr/lib64/glusterfs/3.1.1/rpc-transport/rdma.so[0x2c4fef6c]
> /usr/lib64/glusterfs/3.1.1/rpc-transport/rdma.so(init+0x2f)[0x2c50013f]
> /usr/lib64/libgfrpc.so.0(rpc_transport_load+0x389)[0x3fcca0cac9]
> /usr/lib64/libgfrpc.so.0(rpc_clnt_new+0xfe)[0x3fcca1053e]
> /usr/lib64/glusterfs/3.1.1/xlator/protocol/client.so(client_init_rpc+0xa1)[0x2b194f01]
> /usr/lib64/glusterfs/3.1.1/xlator/protocol/client.so(init+0x129)[0x2b1950d9]
> /usr/lib64/libglusterfs.so.0(xlator_init+0x58)[0x3fcc617398]
> /usr/lib64/libglusterfs.so.0(glusterfs_graph_init+0x31)[0x3fcc640291]
> /usr/lib64/libglusterfs.so.0(glusterfs_graph_activate+0x38)[0x3fcc6403c8]
> /usr/sbin/glusterfs(glusterfs_process_volfp+0xfa)[0x40373a]
> /usr/sbin/glusterfs(mgmt_getspec_cbk+0xc5)[0x406125]
> /usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2)[0x3fcca0f542]
> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x8d)[0x3fcca0f73d]
> /usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x2c)[0x3fcca0a95c]
> /usr/lib64/glusterfs/3.1.1/rpc-transport/socket.so(socket_event_poll_in+0x3f)[0x2ad6ef9f]
> /usr/lib64/glusterfs/3.1.1/rpc-transport/socket.so(socket_event_handler+0x170)[0x2ad6f130]
> /usr/lib64/libglusterfs.so.0[0x3fcc637917]
> /usr/sbin/glusterfs(main+0x39b)[0x40470b]
> /lib64/libc.so.6(__libc_start_main+0xf4)[0x32aca1d994]
> /usr/sbin/glusterfs[0x402e29]
>
>
>
>
> On Fri, Dec 3, 2010 at 1:53 PM, Raghavendra G  wrote:
>> From the logs it's evident that the reason for completion queue creation
>> failure is that the number of completion queue elements (in a completion
>> queue) we had requested in ibv_create_cq, (1024 * send_count), exceeds
>> the maximum supported by the IB hardware (max_cqe = 131071).
>>
>> - Original Message -
>> From: "Jeremy Stout" 
>> To: "Raghavendra G" 
>> Cc: gluster-users@gluster.org
>> Sent: Friday, December 3, 20

Re: [Gluster-users] Error running 3.1.1

2010-12-10 Thread John Preston
No, I'm using CentOS 5.5 64-bit

John

On Thu, Dec 9, 2010 at 3:55 PM, Shain Miley  wrote:
> I believe that gluster 3.1.x now requires a 64-bit OS... sounds like you
> might be using a 32-bit one?
>
> Shain
>
> On 12/09/2010 03:45 PM, John Preston wrote:
>>
>> Hi, I've just installed gluster 3.1.1 and I get the following error
>> when I try to run any of the gluster commands
>>
>> /usr/sbin/glusterd: symbol lookup error: /usr/lib64/libgfrpc.so.0:
>> undefined symbol: gf_log_xl_log_set
>>
>> I'm on CentOS 5.5.
>>
>> Is there something I need to know?
>>
>> John
>>
>
>


[Gluster-users] posix: mknod on .. failed: File exists

2010-12-10 Thread Michael Reck

Hi List,

Sorry for the crude English!

We have a 3-node gluster installation.
Since the start of this gluster setup we have had problems like this:

[2010-12-10 07:14:57.210627] E [posix.c:437:posix_lookup] posix: lstat 
on /backup/mail8-node2/yesteryesterday_root.tgz failed: No data available
[2010-12-10 08:14:57.778585] E [posix.c:1084:posix_mknod] posix: mknod 
on /backup/mysql2/yesterday_root.tgz failed: File exists
[2010-12-10 08:14:57.813334] E [posix.c:437:posix_lookup] posix: lstat 
on /backup/mysql2/yesterday_root.tgz failed: No data available


I read tons of old mailing list entries, and some suggest getting the
latest version of glusterfs.

We have had 3.1.0 in use since 18.11.2010, and the errors continue.

Sometimes the problem goes away for a longer time if we unexport and
restart all gluster processes.


I just have no idea anymore where to search for a solution. But I guess
this should be a trivial problem, since others obviously have
gluster running in production.


The hardware was selected to be reliable (Areca RAID6 drives, dual Xeon
CPUs, server mainboards, server NICs) and shows consistent data rates when
tested with bonnie.




The Config:


volume posix
  type storage/posix
  option directory /raid/cluster
end-volume

volume locks
type features/locks
subvolumes posix
end-volume

volume brick
type performance/io-threads
option thread-count 8
subvolumes locks
end-volume

volume server
type protocol/server
option transport-type tcp
option auth.addr.brick.allow IPRANGE.*
subvolumes brick
end-volume

---

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host IP1
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host IP2
  option remote-subvolume brick
end-volume

volume remote3
  type protocol/client
  option transport-type tcp
  option remote-host IP3
  option remote-subvolume brick
end-volume

volume distribute
  type cluster/distribute
  option lookup-unhashed yes
  subvolumes remote1 remote2 remote3
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes distribute
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume


[Gluster-users] Problems with Glusterfs and OpenVz

2010-12-10 Thread Alessandro Iurlano
Hello everybody

I am evaluating Gluster as a storage backend for our virtualization
infrastructure, but I am having problems using GlusterFS with OpenVZ.
What I want to do is store OpenVZ containers on a GlusterFS
filesystem, not mount GlusterFS volumes from inside OpenVZ
containers, which appears to be what everyone else does, according to
Google results.

Basically I have setup an OpenVZ server (with Proxmox) and a
replicated Glusterfs volume from that server and another debian
machine.
I have been able to create an OpenVZ container on the GlusterFS
filesystem without problems with the latest GlusterFS version 3.1.2qa1
(due to the mknod bug,
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2145). But
whenever I start this new container it lasts only a couple of minutes
(just the time to boot and run a couple of commands) before GlusterFS
hangs and I get the error "Transport endpoint is not connected".

The Gluster volume is no longer accessible from the OpenVZ server
(even though this server is a gluster server itself), the container
completely stops working, and killing the glusterfs processes does not
help recover the situation. The only way out is to reboot the machine.

This is an extract from an error log http://pastebin.com/66ipxbnA

Any suggestions?
Does anybody have any successful experience with OpenVZ and Gluster or
know of any howto / documentation?

Thanks in advance,
Alessandro


Re: [Gluster-users] Start a new volume with pre-existing directories

2010-12-10 Thread Dan Bretherton

On 03/12/2010 10:15, gluster-users-requ...@gluster.org wrote:

Message: 5
Date: Fri, 03 Dec 2010 01:10:31 -0800
From: Craig Carl
Subject: Re: [Gluster-users] Start a new volume with pre-existing
 directories
To:gluster-users@gluster.org
Message-ID:<4cf8b407.2070...@gluster.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Daniel -
  If you want to export existing data you will need to run the self-heal
process so the extended attributes can get written. While this should
work without any issues, it isn't an officially supported process, so please
make sure you have complete and up-to-date backups.

After you have set up and started the Gluster volume, mount it locally on
one of the servers using `mount -t glusterfs localhost:/<volname>
/<mountpoint>`. cd into the root of the mount point and run
`find . | xargs stat >> /dev/null 2>&1` to start a self-heal.

Also, the command you used to create the volume should not have worked;
it is missing a volume name - `gluster volume create <volname> transport
tcp fs7:/storage/7 fs8:/storage/8` - typo maybe?

Please let us know how it goes, and please let me know if you have any
other questions.

Thanks,

Craig

-->
Craig Carl
Senior Systems Engineer; Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Office - (408) 770-1884
Gtalk -craig.c...@gmail.com
Twitter - @gluster
http://rackerhacker.com/2010/08/11/one-month-with-glusterfs-in-production/

   
Craig, is this the recommended self-heal method in all cases now,
or is "ls -aR" still better in some circumstances?
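
For reference, the two traversals being compared are (the mount point is
an example; either one just forces a lookup on every file so replicate
can self-heal):

    cd /mnt/glustervol
    find . | xargs stat > /dev/null 2>&1
    # or the older suggestion:
    ls -aR > /dev/null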

-Dan.