Thanks. I will try.
On Tue, Feb 18, 2014 at 3:04 PM, Vijay Bellur wrote:
> On 02/18/2014 11:40 AM, Mingfan Lu wrote:
>
>> If my server is upgraded to 3.4 while many clients still use 3.3,
>> is there any problem, or should I update all clients?
>>
>>
> 3.4 and 3.3 are protocol compatible. We have not observed anything that
> would cause problems running 3.3 clients with 3.4 servers o
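A quick sanity check when mixing versions is to confirm what each side is actually running (a minimal sketch; run on the relevant hosts):

    # on each server
    glusterfsd --version
    # on each client
    glusterfs --version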
On 02/17/2014 11:19 PM, Marco Zanger wrote:
Read/write operations hang for long periods of time (too long). I've seen it in
that state (waiting) for something like 5 minutes, which makes every
application trying to read or write fail. These are the errors I found in the
logs on server A whi
The file is:
/mnt/search-prod/index_pipeline_searchengine/leaf/data/2014-02-18/shard0/index_data.index
One client's log
[2014-02-16 04:02:02.089758] I [glusterfsd-mgmt.c:1565:mgmt_getspec_cbk]
0-glusterfs: No change in volfile, continui
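For a file that looks stale from one client, a common debugging step is to compare its extended attributes directly on each brick (an illustrative sketch; the brick path below is hypothetical):

    # run on each server, against the brick's local path to the file
    getfattr -d -m . -e hex /export/brick1/index_pipeline_searchengine/leaf/data/2014-02-18/shard0/index_data.index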
If my server is upgraded to 3.4 while many clients still use 3.3,
is there any problem, or should I update all clients?
Hi,
We saw such an issue.
One client (FUSE mount) updated a file; then another client (also a FUSE
mount) copied the same file, and the reader found that the copied
file was out of date.
If the reader ran the ls command to list the entries of the directory
containing the target file, then it co
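If the stale view comes from client-side caching, one workaround (a sketch, not a confirmed fix for this report; the volume and mount names are hypothetical, and it trades performance for freshness) is to disable the kernel attribute/entry caches on the FUSE mount and stat-prefetch on the volume:

    mount -t glusterfs -o attribute-timeout=0,entry-timeout=0 server1:/myvol /mnt/myvol
    gluster volume set myvol performance.stat-prefetch off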
Hi Lala,
Photoshop will throw errors in its GUI along the lines of "this file
is already open". Microsoft Excel does similar things. I'm sorry for
the terse details at this point.
At one point I investigated this a lot more thoroughly with
tcpdump/wireshark in the middle and more verbose/debug
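For the record, a typical capture of the SMB traffic for that kind of investigation looks like this (a sketch; the interface and filter are assumptions):

    tcpdump -i eth0 -s 0 -w smb.pcap port 445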
From: Vijay Bellur [vbel...@redhat.com]
Sent: Monday, February 17, 2014 10:13 AM
To: Franco Broi; gluster-users@gluster.org
Subject: Re: [Gluster-users] Very slow ls
On 02/17/2014 07:00 AM, Franco Broi wrote:
>
> I mounted the filesystem with trace logging
On 18 Feb 2014 00:13, Vijay Bellur wrote:
>
> On 02/17/2014 07:00 AM, Franco Broi wrote:
> >
> > I mounted the filesystem with trace logging turned on and can see that
> > after the last successful READDIRP there are a lot of other connections
> > being made by the clients repeatedly, which takes minu
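For reference, trace logging on a FUSE client can be turned on at mount time (a sketch; server, volume, and mount names are hypothetical, and TRACE is extremely verbose):

    mount -t glusterfs -o log-level=TRACE,log-file=/var/log/glusterfs/trace.log server1:/myvol /mnt/myvol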
Read/write operations hang for long periods of time (too long). I've seen it in
that state (waiting) for something like 5 minutes, which makes every
application trying to read or write fail. These are the errors I found in the
logs on server A, which is still accessible (B was down)
etc-glust
On 02/13/2014 08:06 PM, Marco Zanger wrote:
Hi all,
I’m experiencing a strange issue related to both distribute and
replicate volumes. The problem is this:
I have two servers, A and B. Both share some replicate volumes and
distribute volumes, like this:
Volume Name: upload_path
Type: Replicat
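One thing commonly checked when clients hang after a replica server goes down is the ping timeout, which controls how long clients wait before declaring a server dead (a sketch using the volume name from the thread; the 10-second value is only an example, not a recommendation from this discussion):

    gluster volume set upload_path network.ping-timeout 10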
On 02/13/2014 08:52 PM, Nicholas Majeran wrote:
Hi there,
We have a distributed-replicated volume hosting KVM guests running
Gluster 3.4.1.
We've grown from 1 x 2 -> 2 x 2 -> 3 x 2, but each time we've added nodes
or run a fix layout,
some of our guests go offline (or worse with error=continue t
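For context, the add/fix-layout sequence in question looks like this (a sketch; the volume and brick names are hypothetical):

    gluster volume add-brick gvol0 server5:/brick server6:/brick
    gluster volume rebalance gvol0 fix-layout start
    gluster volume rebalance gvol0 status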
On 02/17/2014 07:00 AM, Franco Broi wrote:
I mounted the filesystem with trace logging turned on and can see that
after the last successful READDIRP there are a lot of other connections
being made by the clients repeatedly, which takes minutes to complete.
I did not observe anything specific which
On 02/17/2014 04:42 PM, Gionatan Danti wrote:
On 02/17/2014 11:18 AM, Vijay Bellur wrote:
write-behind can help with write operations, but the lookup preceding the
write is sent out to all bricks today, and hence that affects overall
performance.
Ok, this is in line with my tests and the auto-g
On 02/17/2014 05:32 AM, Dan Mons wrote:
More testing of vfs_glusterfs reveals problems with Microsoft and
Adobe software, and their silly locking. (For example, Microsoft
Excel opening a .xlsx, or Adobe Photoshop opening a .jpg or .psd).
Dozens of other production applications we use are all fin
On 02/17/2014 11:18 AM, Vijay Bellur wrote:
write-behind can help with write operations, but the lookup preceding the
write is sent out to all bricks today, and hence that affects overall
performance.
Ok, this is in line with my tests and the auto-generated configuration files
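For completeness, write-behind and its window size are ordinary volume options (a sketch; the volume name and window size are illustrative):

    gluster volume set gvol0 performance.write-behind on
    gluster volume set gvol0 performance.write-behind-window-size 4MB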
Not as of today.
On 02/13/2014 11:29 PM, Gionatan Danti wrote:
Hi All,
I have a few questions about GlusterFS used in active/active WAN-based
replication scenarios.
Let me start with a little ASCII chart:
HQ Linux w/SMB share -> low speed link -> Remote Linux w/SMB share ->
WIN7 clients
In short I had to re
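The one-way alternative that does exist today is geo-replication, driven from the CLI (a sketch with hypothetical names; note it is master-slave, not active/active):

    gluster volume geo-replication hqvol remote.example.com:/data/remote_dir start
    gluster volume geo-replication hqvol remote.example.com:/data/remote_dir status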
For your shares, set: posix locking = No
EDV Daniel Müller
Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: muel...@tropenklinik.de
Internet: www.tropenklinik.de
"Der Mensch ist die Medizin des Menschen"
As a follow-up, lots of the following in the log:
[dht_lookup_everywhere_cbk] 0-glusterKumiko-dht: deleting stale linkfile 0.33702/T on glusterKumiko-client-2
[client-rpc-fops.c:5724:client3_3_setattr] (-->/usr/lib64/glusterfs/3.4.2/xlator/cluster/distribute.so(dht_lookup_linkfile_create_cbk+0x262) [0x