Hi Ravi,
I updated to gluster 3.6 via PPA and that seems to work well. I did a number of
alternating reboots with ActiveMQ under load and did not see any problem. From
my rough measurements it seems we got a bit of a performance improvement as
well, although I might be making that one up.
On 08/05/2015 01:04 PM, Peter Becker wrote:
> Hi Ravi,
> I updated to gluster 3.6 via PPA and that seems to work well. I did a
> number of alternating reboots with ActiveMQ under load and did not see
> any problem. From my rough measurements it seems we got a bit of a
> performance improvement as
Hello,
In addition, given that I re-enabled logging (brick-log-level = INFO instead of
CRITICAL) only for the duration of the file creation (i.e. a few minutes), have
you noticed the log sizes and the number of lines inside:
# ls -lh storage*
-rw-------  1 letessier  staff   18M  5 aoû 00:54
Hi,
As reported in https://bugzilla.redhat.com/show_bug.cgi?id=1218732, in
the event where there is no opErrstr, some gluster commands' (like
snapshot status, volume status etc.) xml output shows
<opErrstr>(null)</opErrstr>, while other commands show just
<opErrstr/>. This non-uniform output is
On 08/05/2015 02:58 PM, Avra Sengupta wrote:
> Hi,
> As reported in https://bugzilla.redhat.com/show_bug.cgi?id=1218732, in
> the event where there is no opErrstr, some gluster commands' (like
> snapshot status, volume status etc.) xml output shows
> <opErrstr>(null)</opErrstr>, while other commands
On 08/05/2015 03:06 PM, Atin Mukherjee wrote:
> On 08/05/2015 02:58 PM, Avra Sengupta wrote:
>> Hi,
>> As reported in https://bugzilla.redhat.com/show_bug.cgi?id=1218732, in
>> the event where there is no opErrstr, some gluster commands' (like
>> snapshot status, volume status etc.) xml output shows
Having (null) is not common in xml convention. Usually, it's either
<opErrstr/>
or
<opErrstr></opErrstr>

Regards,
-Prashanth Pai

----- Original Message -----
From: Avra Sengupta <aseng...@redhat.com>
To: Atin Mukherjee <amukh...@redhat.com>, Gluster Devel
<gluster-de...@gluster.org>,
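For scripts consuming the --xml output, the difference between the two forms is real: an empty element parses to no text at all, while the literal string "(null)" would have to be special-cased by every consumer. A minimal sketch with Python's xml.etree (the `<cliOutput>` wrapper reflects how gluster roots its xml output; the helper function itself is illustrative, not part of any gluster tooling):

```python
import xml.etree.ElementTree as ET

# The two variants described in bug 1218732, wrapped as a consumer would see them.
empty_form = "<cliOutput><opErrstr/></cliOutput>"                 # uniform, preferred
null_form = "<cliOutput><opErrstr>(null)</opErrstr></cliOutput>"  # needs special-casing

def error_string(xml_text):
    """Return the operation error string, or None when there was no error."""
    err = ET.fromstring(xml_text).find("opErrstr")
    # An empty element parses with text == None; the "(null)" literal does not,
    # so consumers must explicitly treat it as "no error" too.
    if err is None or err.text in (None, "", "(null)"):
        return None
    return err.text

print(error_string(empty_form))   # None
print(error_string(null_form))    # None
```

With a uniform `<opErrstr/>` the special case for the "(null)" string could be dropped entirely.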
In that case we will stick with <opErrstr/> for all the null elements.
On 08/05/2015 04:10 PM, Prashanth Pai wrote:
> Having (null) is not common in xml convention. Usually, it's either
> <opErrstr/>
> or
> <opErrstr></opErrstr>
> Regards,
> -Prashanth Pai
> ----- Original Message -----
> From: Avra Sengupta
Hi All,
In about 60 minutes from now we will have the regular weekly Gluster
Community meeting.
Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 12:00 UTC, 14:00 CEST, 17:30 IST
(in your terminal, run: date -d "12:00 UTC")
- agenda:
On Wed, Aug 05, 2015 at 04:20:28PM +0530, Avra Sengupta wrote:
> In that case will stick with <opErrstr/> for all the null elements.
That would be my preference too. A (null) error string is not useful,
and xml allows empty elements easily.
Thanks,
Niels
On 08/05/2015 04:10 PM, Prashanth Pai
The minutes of the weekly community meeting held today can be found at:
Minutes:
http://meetbot.fedoraproject.org/gluster-meeting/2015-08-05/gluster-meeting.2015-08-05-12.00.html
Minutes (text):
http://meetbot.fedoraproject.org/gluster-meeting/2015-08-05/gluster-meeting.2015-08-05-12.00.txt
Looking around, I get the impression that file locking (NLM) may simply not
be supported in glusterfs's built-in NFS server.
I get the impression that Ganesha is aimed at supporting NFS better, and
presumably supports locking well, so I should give it a try (if I
understand correctly, the performance is
I am seeing a pretty big perf regression with ls -l on the 3.7 branch:
https://bugzilla.redhat.com/show_bug.cgi?id=1250241
Even when running on cached results I am not seeing what I saw on 3.6:
total threads = 32
total files = 316100
98.78% of requested files processed, minimum is 70.00
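On a network filesystem, the expensive part of `ls -l` is usually the per-entry stat round trips rather than the readdir itself, which is one way an `ls -l` regression can hide behind a normal-looking plain `ls`. A rough sketch for separating the two costs when measuring (the directory path is whatever mount you are testing; nothing here is taken from the bug report):

```python
import os
import time

def time_listing(path):
    """Time readdir alone vs the per-entry lstat pass that `ls -l` adds."""
    t0 = time.perf_counter()
    names = os.listdir(path)                 # readdir only, like plain `ls`
    t1 = time.perf_counter()
    for name in names:
        os.lstat(os.path.join(path, name))   # per-entry metadata, like `ls -l`
    t2 = time.perf_counter()
    return t1 - t0, t2 - t1

readdir_s, lstat_s = time_listing(".")  # point this at the gluster mount instead
print(f"readdir: {readdir_s:.4f}s  lstat pass: {lstat_s:.4f}s")
```

If the lstat pass dominates and grows between releases while readdir stays flat, the regression is in metadata lookups, not directory enumeration.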
On Wed, Aug 05, 2015 at 04:11:47PM +0100, Thibault Godouet wrote:
> Looking around I get the impression that file locking (NLM) may simply not
> be supported in glusterfs's built-in NFS server.

This is actually supported. But note that you cannot run a userspace
NLM implementation provided by a
Hi,
Gave arbiter volume a shot today but ran into some problems with it:
1. My arbiter brick is running on a much smaller disk, which presents a
problem during a df: it shows the smaller disk size. If I understand the
arbiter correctly, then there is no need to match the size of the real
On 08/06/2015 02:41 AM, Fredrik Brandt wrote:
> Arbiter volume
> Hi,
> Gave arbiter volume a shot today but ran into some problems with it:
> 1. My arbiter brick is running on a much smaller disk, which during a
> df presents a problem: it shows the smaller disk size. If I
> understand the
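On point 1: if df on the mount really reports the smallest brick of the replica set, the arithmetic of the symptom looks like this (the brick sizes are made-up numbers, and this only illustrates the reported behaviour, not how any particular gluster release actually computes statfs):

```python
# Hypothetical brick sizes in GiB: two full-size data bricks plus a tiny
# arbiter brick that stores only metadata.
data_bricks = [1000, 1000]
arbiter_brick = 10

# If the replica set's capacity is taken as the minimum across all bricks,
# the tiny arbiter brick dominates what df shows on the mount.
reported = min(data_bricks + [arbiter_brick])
print(reported)   # 10

# What the user expects: only the data bricks should count, since the
# arbiter holds no file data.
expected = min(data_bricks)
print(expected)   # 1000
```

So the observation is consistent with the arbiter brick being included in the capacity calculation even though it never holds file data.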
On Tue, Aug 04, 2015 at 04:06:50PM +0300, Roman wrote:
> Hi all,
> I'm back and tested those things.
> Michael was right. I've enabled the read-ahead option and nothing changed.
> So the thing that causes the problem with libgfapi and d8 virtio drivers
> is performance.write-behind. If it is off, everything
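For reference, the toggle Roman describes is a standard volume option; a sketch of the commands (VOLNAME is a placeholder, and note that disabling write-behind trades write performance for the stability he observed):

```shell
# Disable the write-behind performance translator (VOLNAME is a placeholder).
gluster volume set VOLNAME performance.write-behind off

# The change appears under "Options Reconfigured:" in the volume info output.
gluster volume info VOLNAME
```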