On 06/14/13 13:35, Ziemowit Pierzycki wrote:
Hi,
I created a distributed-replicated volume across two servers and two
bricks each. Here is the configuration:
Volume Name: slice1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: elkpinfk
On Fri, Jun 14, 2013 at 4:35 PM, Ziemowit Pierzycki wrote:
> Hi,
>
> I created a distributed-replicated volume across two servers and two bricks
> each. Here is the configuration:
>
> Volume Name: slice1
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-typ
GigE is slower. Here is ping from same boxes but using the 1GigE cards:
[root@node0.cloud ~]# ping -c 10 10.100.0.11
PING 10.100.0.11 (10.100.0.11) 56(84) bytes of data.
64 bytes from 10.100.0.11: icmp_seq=1 ttl=64 time=0.628 ms
64 bytes from 10.100.0.11: icmp_seq=2 ttl=64 time=0.283 ms
64 bytes f
Hi,
I created a distributed-replicated volume across two servers and two bricks
each. Here is the configuration:
Volume Name: slice1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: elkpinfkvm05-bus:/srv/brick1
Brick2: elkpinfkvm05-bus:/
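A rough sketch of how a volume with the 2 x 2 layout above is typically
created; the hostnames and brick paths below are placeholders, not taken
from the post. With replica 2, each consecutive pair of bricks forms one
replica set, so the bricks are ordered so that the two copies land on
different servers:

    # Hypothetical 2 servers x 2 bricks each, replica 2 ("2 x 2 = 4")
    gluster volume create slice1 replica 2 transport tcp \
        serverA:/srv/brick1 serverB:/srv/brick1 \
        serverA:/srv/brick2 serverB:/srv/brick2
    gluster volume start slice1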
On 06/14/2013 01:04 PM, John Brunelle wrote:
> Thanks, Jeff! I ran readdir.c on all 23 bricks on the gluster nfs
> server to which my test clients are connected (one client that's
> working, and one that's not; and I ran on those, too). The results
> are attached.
>
> The values it prints are al
On Fri, 14 Jun 2013 12:13:53 -0700
Bryan Whitehead wrote:
> I'm using 40G Infiniband with IPoIB for gluster. Here are some ping
> times (from host 172.16.1.10):
>
> [root@node0.cloud ~]# ping -c 10 172.16.1.11
> PING 172.16.1.11 (172.16.1.11) 56(84) bytes of data.
> 64 bytes from 172.16.1.11: ic
I'm using 40G Infiniband with IPoIB for gluster. Here are some ping
times (from host 172.16.1.10):
[root@node0.cloud ~]# ping -c 10 172.16.1.11
PING 172.16.1.11 (172.16.1.11) 56(84) bytes of data.
64 bytes from 172.16.1.11: icmp_seq=1 ttl=64 time=0.093 ms
64 bytes from 172.16.1.11: icmp_seq=2 ttl=
Ah, I did not know that about 0x7. Is it noteworthy that the
clients do *not* get this?
This is on an NFS mount, and the volume has nfs.enable-ino32 set to On. (I
should've pointed that out again when Jeff mentioned FUSE.)
Side note -- we do have a couple FUSE mounts, too, and I had not seen
this
Are the ls commands (which list partially, or loop and die of ENOMEM
eventually) executed on an NFS mount or FUSE mount? Or does it happen on
both?
Avati
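For clarity, the two client mount types in question look roughly like
this (server and volume names are placeholders):

    # FUSE (native glusterfs) client mount
    mount -t glusterfs gluster-server:/myvol /mnt/gluster

    # Mount via Gluster's built-in NFS server (NFSv3)
    mount -t nfs -o vers=3,nolock gluster-server:/myvol /mnt/nfs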
On Fri, Jun 14, 2013 at 11:14 AM, Anand Avati wrote:
>
>
>
> On Fri, Jun 14, 2013 at 10:04 AM, John Brunelle wrote:
>
>> Thanks, Jeff! I
On Fri, Jun 14, 2013 at 10:04 AM, John Brunelle wrote:
> Thanks, Jeff! I ran readdir.c on all 23 bricks on the gluster nfs
> server to which my test clients are connected (one client that's
> working, and one that's not; and I ran on those, too). The results
> are attached.
>
> The values it pri
Be careful, I had made a big mistake here. 'mktemp -d' creates a
directory in /tmp, which is most likely a different file system. So 'mv'
will just do a 'cp'. So ${tempdirname} should be in the same file system as
Regards,
Pablo.
On 12/06/2013 08:16 p.m., Liam Slusser wrote:
So combining the two appr
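The workaround Pablo is describing amounts to creating the temporary
directory on the destination file system, so that 'mv' stays a cheap
rename. A small sketch, with placeholder paths:

    # Create the temp dir on the same file system as the final
    # destination, so 'mv' is an atomic rename instead of a copy.
    tempdirname=$(mktemp -d -p /srv/destination tmp.XXXXXXXX)
    # ... write into "$tempdirname" ...
    mv "$tempdirname" /srv/destination/finalname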
On 06/13/2013 03:38 PM, John Brunelle wrote:
> We have a directory containing 3,343 subdirectories. On some
> clients, ls lists only a subset of the directories (a different
> amount on different clients). On others, ls gets stuck in a getdents
> loop and consumes more and more memory until it hi
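One way to observe the behaviour described here (a diagnostic suggestion,
not something from the thread) is to watch the getdents calls that ls
issues on the affected mount; the path is a placeholder:

    # A healthy run ends with a final getdents that returns 0 bytes;
    # the broken case keeps returning entries or repeating offsets.
    strace -e trace=getdents,getdents64 ls /mnt/gluster/problem-dir > /dev/null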
Thanks for the reply, Vijay. I set that parameter "On", but it hasn't
helped, and in fact it seems a bit worse. After making the change on
the volume and dropping caches on some test clients, some are now
seeing no subdirectories at all. In my tests before, after dropping
caches clients go bac
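The two steps described above correspond to commands roughly like the
following; the volume name is a placeholder:

    # On a gluster server: serve 32-bit inode numbers over NFS
    gluster volume set myvol nfs.enable-ino32 on

    # On a test client (as root): drop page, dentry and inode caches
    sync; echo 3 > /proc/sys/vm/drop_caches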
On 06/13/2013 03:38 PM, John Brunelle wrote:
Hello,
We're having an issue with our distributed gluster filesystem:
* gluster 3.3.1 servers and clients
* distributed volume -- 69 bricks (4.6T each) split evenly across 3 nodes
* xfs backend
* nfs clients
* nfs.enable-ino32: On
* servers: CentOS
I have been playing around with Gluster on and off for the last 6 years or
so. Most of the things that have been keeping me from using it have been
related to latency.
In the past I have been using 10 gig InfiniBand or 10 gig Ethernet;
recently the price of 40 gig Ethernet has fallen quite a bit w
So, in other words, stay away from fallocate for now :). Thanks for the
info, Brian!
On Thu, Jun 13, 2013 at 4:05 PM, Brian Foster wrote:
> On 06/13/2013 01:38 PM, Jacob Godin wrote:
> > Hey all,
> >
> > Trying to use fallocate with qcow2 images to increase performance. When
> > doing so (with
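For context, the kind of preallocation being discussed is along these
lines; the path and size are invented, and per Brian's advice this was
best avoided on Gluster-backed storage at the time:

    # fallocate(1) asks the filesystem to reserve blocks up front so
    # later writes do not have to allocate on the fly (illustrative only).
    fallocate -l 20G /mnt/gluster/images/vm01.img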
Hi, everyone:
We deployed GlusterFS 3.2.7 on 24 servers, and each server has 12 disks
used for bricks.
The volume type is DHT + AFR (replica=3) and the transport type is socket.
We use the native client, and each server has 8 mount points.
The gluster client logs some errors once in a while, f
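A hedged sketch of how a volume of that shape might be created; the
server names and brick paths are invented. With replica 3, every three
consecutive bricks form one AFR set, so the brick list is interleaved
across servers:

    # Hypothetical: 24 servers (srv01..srv24) x 12 disks each, replica 3.
    # Iterating disks in the outer loop keeps each group of 3
    # consecutive bricks on 3 different machines.
    bricks=""
    for disk in $(seq 1 12); do
        for srv in $(seq -w 1 24); do
            bricks="$bricks srv${srv}:/export/disk${disk}/brick"
        done
    done
    gluster volume create bigvol replica 3 transport tcp $bricks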
Hi All,
I have installed Gluster ver 3.4 alpha on my Linux systems.
The Gluster CLI commands for delete and stop operations prompt for user
confirmation.
I came across the command parameter --mode=script to suppress the prompt
or confirmation messages.
I tried gluster> volume stop _home --mode=script
but its
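As far as I know, --mode=script is a global option of the gluster binary
rather than something the interactive gluster> prompt accepts, so it is
normally passed from the shell; the volume name below is the one from
the post:

    # Non-interactive: no confirmation prompt
    gluster --mode=script volume stop _home

    # Alternative: answer the prompt from stdin
    yes | gluster volume stop _home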
Maybe my question was a bit "involved", so I'll try again:
while searching the web I have found various issues connected to
"cluster.min-free-disk" (e.g., that one shouldn't use % but rather a size
value). Would it be possible to get an update on the status?
Thanks,
/jon
On Jun 11, 201
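For reference, the option Jon is asking about is applied like this; the
volume name and value are placeholders, and the size-versus-percentage
question is exactly the point raised above:

    # Reserve free space per brick using a size value, per the advice
    # Jon found, instead of the default percentage form.
    gluster volume set myvol cluster.min-free-disk 50GB
    # The percentage form that reportedly caused trouble:
    # gluster volume set myvol cluster.min-free-disk 10%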