Adding gluster ml
On Mon, Mar 4, 2019 at 7:17 AM Guillaume Pavese
wrote:
>
> I got that too, so I upgraded to gluster6-rc0, but still, this morning one engine
> brick is down:
>
> [2019-03-04 01:33:22.492206] E [MSGID: 101191]
> [event-epoll.c:765:event_dispatch_epoll_worker] 0-epoll: Failed to di
Hi Hari,
Thanks for the hint. Do you know when this will be fixed? Is a downgrade
from 5.4 to 5.3 a possible way to fix this?
Hubert
On Tue, Mar 5, 2019 at 08:32, Hari Gowtham wrote:
>
> Hi,
>
> This is a known issue we are working on.
> As the checksum differs between the updated and non-updated
Hi,
This is a known issue we are working on.
As the checksum differs between the updated and non-updated nodes, the
peers are getting rejected.
The bricks aren't coming up because of the same issue.
More about the issue: https://bugzilla.redhat.com/show_bug.cgi?id=1685120
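As a sketch (assuming the default glusterd working directory and the volume name used elsewhere in this thread), the rejected-peer state and the checksum mismatch can be checked with:
$ gluster peer status                         # rejected nodes typically report "State: Peer Rejected (Connected)"
$ cat /var/lib/glusterd/vols/workdata/cksum   # compare this value across all peers
If the values differ between the updated and non-updated nodes, you are hitting the issue described above.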
On Tue, Mar 5, 2019 at 12:5
Interestingly, 'gluster volume status' does not show gluster1, while the heal
statistics do show gluster1:
gluster volume status workdata
Status of volume: workdata
Gluster process TCP Port RDMA Port Online Pid
Hi Miling,
well, there are such entries, but they haven't been a problem during the
install or the last kernel update+reboot. The entries look like:
PUBLIC_IP gluster2.alpserver.de gluster2
192.168.0.50 gluster1
192.168.0.51 gluster2
192.168.0.52 gluster3
'ping gluster2' resolves to LAN IP; I re
Thank you for the clarification.
On Mon, Mar 4, 2019 at 20:19, FNU Raghavendra Manjunath <rab...@redhat.com> wrote:
> Hi David,
>
> Doing a full heal after deleting the gfid entries (and the bad copy) is
> fine. It is not dangerous.
>
> Regards,
> Raghavendra
>
> On Mon, Mar 4, 2019 at 9:44
There are probably DNS entries or /etc/hosts entries with the public IP
addresses to which the host names (gluster1, gluster2, gluster3) are getting
resolved.
/etc/resolv.conf would show the default domain searched for the node names
and the DNS servers that respond to the queries.
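A quick, hedged way to verify what the hostnames actually resolve to on each node (hostnames taken from this thread):
$ getent hosts gluster1 gluster2 gluster3   # prints the address each name resolves to
$ cat /etc/resolv.conf                      # search domain and nameservers used for the lookups
If getent returns the public addresses instead of the 192.168.0.x LAN addresses, the /etc/hosts or DNS entries mentioned above are the cause.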
On Tu
Good morning,
I have a replica 3 setup with 2 volumes, running on version 5.3 on
Debian Stretch. This morning I upgraded one server to version 5.4 and
rebooted the machine; after the restart I noticed that:
- no brick process is running
- gluster volume status only shows the server itself:
glus
Hi David,
Doing a full heal after deleting the gfid entries (and the bad copy) is fine.
It is not dangerous.
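For reference, a minimal sketch of triggering and then monitoring the full heal (volume name is hypothetical):
$ gluster volume heal <volname> full     # start a full crawl so the deleted copy is re-created
$ gluster volume heal <volname> info     # watch the pending entries drain to zero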
Regards,
Raghavendra
On Mon, Mar 4, 2019 at 9:44 AM David Spisla wrote:
> Hello Gluster Community,
>
> I have questions and notes concerning the steps mentioned in
> https://github.com/gluster/glusterfs/issues/491
+Gluster Devel, +Gluster-users
I would like to point out another issue. Even if what I suggested prevents
disconnects, that part of the solution would only be symptomatic treatment and
wouldn't address the root cause of the problem. In most of the
ping-timer-expiry issues, the root cause is the increa
On 3/4/19 10:08 AM, Atin Mukherjee wrote:
>
>
> On Mon, 4 Mar 2019 at 20:33, Amar Tumballi Suryanarayan
> <atumb...@redhat.com> wrote:
>
> Thanks to those who participated.
>
> Update at present:
>
> We found 3 blocker bugs in upgrade scenarios, and hence have marked
> release as pending upon them. We will keep these lists updated about progress.
Do you mean "gluster volume heal $volname statistics heal-count"? If
yes: 0 for both volumes.
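For readers following along, the command and the sort of output being referred to look roughly like this (brick paths are hypothetical and the exact wording may vary by version):
$ gluster volume heal workdata statistics heal-count
Gathering count of entries to be healed on volume workdata has been successful
Brick gluster1:/gluster/brick1/workdata
Number of entries: 0
Brick gluster2:/gluster/brick1/workdata
Number of entries: 0
...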
On Mon, Mar 4, 2019 at 16:08, Amar Tumballi Suryanarayan wrote:
>
> What do the self-heal pending numbers show?
>
> On Mon, Mar 4, 2019 at 7:52 PM Hu Bert wrote:
>>
>> Hi Alberto,
>>
>> wow, good hint!
What do the self-heal pending numbers show?
On Mon, Mar 4, 2019 at 7:52 PM Hu Bert wrote:
> Hi Alberto,
>
> wow, good hint! We switched from old servers with version 4.1.6 to new
> servers (fresh install) with version 5.3 on February 5th. I saw that
> there was more network traffic on the server side,
On Mon, 4 Mar 2019 at 20:33, Amar Tumballi Suryanarayan
wrote:
> Thanks to those who participated.
>
> Update at present:
>
> We found 3 blocker bugs in upgrade scenarios, and hence have marked release
> as pending upon them. We will keep these lists updated about progress.
I’d like to clarify
Hello folks,
Can someone please provide packages for Gluster v5.4 for SLES15? There are
already packages for CentOS and Ubuntu.
Regards
David Spisla
Thanks to those who participated.
Update at present:
We found 3 blocker bugs in upgrade scenarios, and hence have marked release
as pending upon them. We will keep these lists updated about progress.
-Amar
On Mon, Feb 25, 2019 at 11:41 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:
Hello Gluster Community,
I have questions and notes concerning the steps mentioned in
https://github.com/gluster/glusterfs/issues/491
" *2. Delete the corrupted files* ":
In my experience there are two GFID files if a copy gets corrupted. Example:
*$ find /gluster/brick1/glusterbrick/.glusterf
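Purely as an illustration (brick path and file name below are hypothetical), the GFID hardlink belonging to a file on a brick can be located like this:
$ getfattr -n trusted.gfid -e hex /gluster/brick1/glusterbrick/data/file.txt   # read the file's GFID
$ find /gluster/brick1/glusterbrick/.glusterfs -samefile /gluster/brick1/glusterbrick/data/file.txt
The second command lists both the regular file and its hardlink under .glusterfs/<xx>/<yy>/<gfid>.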
On Mon, Mar 4, 2019 at 7:47 PM Raghavendra Gowdappa
wrote:
>
>
> On Mon, Mar 4, 2019 at 4:26 PM Hu Bert wrote:
>
>> Hi Raghavendra,
>>
>> at the moment iowait and cpu consumption are quite low, the main
>> problems appear during the weekend (high traffic, especially on
>> Sunday), so either we ha
Hi Alberto,
wow, good hint! We switched from old servers with version 4.1.6 to new
servers (fresh install) with version 5.3 on February 5th. I saw that
there was more network traffic on the server side, but didn't watch it on
the client side - the traffic went up significantly on both sides, from
about 20
On Mon, Mar 4, 2019 at 4:26 PM Hu Bert wrote:
> Hi Raghavendra,
>
> at the moment iowait and cpu consumption are quite low, the main
> problems appear during the weekend (high traffic, especially on
> Sunday), so either we have to wait until next Sunday or use a time
> machine ;-)
>
> I made a scr
Hello Hubert,
On Mon, 4 Mar 2019 at 10:56, Hu Bert wrote:
> Hi Raghavendra,
>
> at the moment iowait and cpu consumption are quite low, the main
> problems appear during the weekend (high traffic, especially on
> Sunday), so either we have to wait until next Sunday or use a time
> machine ;-)
>
>
Hi Raghavendra,
at the moment iowait and cpu consumption are quite low, the main
problems appear during the weekend (high traffic, especially on
Sunday), so either we have to wait until next Sunday or use a time
machine ;-)
I made a screenshot of top (https://abload.de/img/top-hvvjt2.jpg) and
a te
What is the per-thread CPU usage like on these clients? With highly
concurrent workloads we've seen the single thread that reads requests from
/dev/fuse (the fuse reader thread) become a bottleneck. I would like to know
what the CPU usage of this thread looks like (you can use top -H).
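A hedged example of getting that per-thread view on a client (assuming the FUSE mount process is named glusterfs, which is the usual case; adjust the pgrep pattern if several gluster processes are present):
$ top -H -p "$(pgrep -x glusterfs | head -n 1)"   # per-thread CPU usage of one fuse client process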
On Mon, Mar 4, 201
Good morning,
we use gluster v5.3 (replicate with 3 servers, 2 volumes, RAID10 as
brick) with at the moment 10 clients; 3 of them do heavy I/O
operations (Apache Tomcats, read+write of small images). These 3
clients have quite high I/O wait (stats from yesterday), as can be
seen here:
client:
Could you also provide the statedump of the gluster process consuming 44G of
RAM [1]? Please make sure the statedump is taken when the memory
consumption is very high, in the tens of GBs, otherwise we may not be able to
identify the issue. Also, I see that the cache size is 10G; is that something
you arriv
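For reference, a statedump can usually be produced in one of two ways; a sketch with hypothetical PID and volume name:
$ kill -USR1 <pid-of-glusterfs-client>   # client side: writes a statedump under /var/run/gluster/
$ gluster volume statedump <volname>     # server side: dumps the brick processes of the volume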
Hello Kotresh,
Yes, the fd was still open for larger files. I could verify this with a
500MiB file and some smaller files. After a certain time only the fd for
the 500MiB file was still open and the file still had no signature; for the
smaller files there were no fds and they already had a signature. I don't
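A hedged way to check from the brick whether a file has already received its bit-rot signature is to look at its xattrs directly (brick path and file name are hypothetical):
$ getfattr -d -m . -e hex /gluster/brick1/glusterbrick/data/bigfile.bin | grep bit-rot
The trusted.bit-rot.signature xattr shows up only after the signer has processed the file, i.e. after the last fd on it has been closed and the signing delay has elapsed.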