Emmanuel Dreyfus wrote:
> [2017-11-02 12:32:57.429885] E [MSGID: 115092]
> [server-handshake.c:586:server_setvolume] 0-gfs-server: No xlator
> /export/wd0e is found in child status list
> [2017-11-02 12:32:57.430162] I [MSGID: 115091]
> [server-handshake.c:761:server_setvolume] 0-gfs-server: Fail
On Fri, Nov 3, 2017 at 9:25 AM, Atin Mukherjee wrote:
>
> On Fri, 3 Nov 2017 at 18:31, Kaleb S. KEITHLEY
> wrote:
>
>> On 11/02/2017 10:19 AM, Atin Mukherjee wrote:
>> > While I appreciate the folks contributing a lot of Coverity fixes over
>> > the last few days, I have an observation for some of th
All,
While we discussed many other things, we also discussed reducing the time
taken by the regression jobs. As it stands now, a single run takes around
5hr 40mins to complete.
There were many suggestions:
- Run them in parallel (each .t test is independent of the others)
- Revisi
Just so I am clear, the upgrade process will be as follows:
upgrade all clients to 4.0
rolling upgrade all servers to 4.0 (with GD1)
kill all GD1 daemons on all servers and run upgrade script (new clients
unable to connect at this point)
start GD2 (necessary, or does the upgrade script do this?)
On Fri, 3 Nov 2017 at 18:31, Kaleb S. KEITHLEY wrote:
> On 11/02/2017 10:19 AM, Atin Mukherjee wrote:
> > While I appreciate the folks contributing a lot of Coverity fixes over
> > the last few days, I have an observation for some of the patches the
> > Coverity issue id(s) are *not* mentioned which g
On 11/02/2017 10:19 AM, Atin Mukherjee wrote:
> While I appreciate the folks contributing a lot of Coverity fixes over
> the last few days, I have an observation: for some of the patches the
> Coverity issue id(s) are *not* mentioned, which puts maintainers in a
> difficult situation to understand the exa
GlusterFS Coverity covscan results are available from
http://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/2017-11-03-2ef2b600
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://lists.gluster.org/mailman
Below you can find the three fio commands used for the benchmark tests:
sequential write, random 4K read, and random 4K write.
# fio --name=writefile --size=10G --filesize=10G --filename=fio_file \
#     --bs=1M --nrfiles=1 --direct=1 --sync=0 --randrepeat=0 \
#     --rw=write --refill_buffers --end_fsync=
Hi,
Please find the latest report on new defect(s) introduced to gluster/glusterfs
found with Coverity Scan.
146 new defect(s) introduced to gluster/glusterfs found with Coverity Scan.
180 defect(s), reported by Coverity Scan earlier, were marked fixed in the
recent build analyzed by Coverity
As per your review comments, introduced GF_ABORT as part of patch
https://review.gluster.org/#/c/18309/5
I wouldn't get into changing anything with ASSERT at the moment, as there
are around ~2800 instances :-o
Wherever it is critical, let's call 'GF_ABORT()' in the future, and also we
should have a 'c
Hi all,
I've seen that the GF_ASSERT() macro is defined in different ways depending
on whether we are building in debug mode or not.
In debug mode it's an alias of assert(), but in non-debug mode it simply
logs an error message and continues.
I think that an assert should be a critical check that should
Could you please share fio command line used for this test?
Additionally, can you tell me the time needed to extract the kernel source?
On 2 Nov 2017, 11:24 PM, "Ramon Selga" wrote:
> Hi,
>
> Just for your reference we got some similar values in a customer setup
> with three nodes single Xeo
On Thu, Nov 2, 2017 at 7:53 PM, Darrell Budic wrote:
> Will the various client packages (centos in my case) be able to
> automatically handle the upgrade vs new install decision, or will we be
> required to do something manually to determine that?
We should be able to do this with CentOS (and oth