Hi,
I don't have any more hosts available.
I am a bit lost here: why replica 3 and arbiter 1, i.e. not replica 2
arbiter 1? Also, no distributed part? Is the distributed flag
automatically assumed? With replica 3 there is a quorum (2 of 3), so no
arbiter is needed? I have this running a
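A hedged sketch of the syntax being discussed (host and brick names are hypothetical): with "replica 3 arbiter 1" the third brick of each set stores only metadata, so quorum is kept at roughly the disk cost of replica 2. There is no separate "distributed" flag; listing more bricks than the replica count (a multiple of it) makes the volume distributed automatically.

```shell
# Plain replica 3 arbiter 1: bricks 1-2 hold data, brick 3 is the
# metadata-only arbiter that breaks ties for quorum.
gluster volume create testvol replica 3 arbiter 1 \
    host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/arb1

# Six bricks with the same counts yields distributed-replicate:
# two replica sets (each with its own arbiter), files distributed
# across the sets with no extra flag.
gluster volume create testvol2 replica 3 arbiter 1 \
    host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/arb1 \
    host4:/bricks/b2 host5:/bricks/b2 host6:/bricks/arb2
```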
Hi Milind
I will send you links for logs.
I collected these core dumps on the client, and there is no glusterd
process running on the client.
Kashif
On Tue, Jun 12, 2018 at 4:14 PM, Milind Changire
wrote:
> Kashif,
> Could you also send over the client/mount log file as Vijay suggested ?
> Or maybe
Hi Vijay
I have enabled TRACE for the client and there are lots of TRACE messages
in the log, but no 'crash'.
The only error I can see is about the inode context being NULL:
[io-cache.c:564:ioc_open_cbk] 0-atlasglust-io-cache: inode context is NULL
(748157d2-274f-4595-9bb6-afb1fb5a0642) [Invalid argument]
Kashif
Kashif,
Could you also send over the client/mount log file as Vijay suggested ?
Or maybe just the lines around the crash backtrace.
Also, you've mentioned that you straced glusterd, but when you ran gdb,
you ran it over /usr/sbin/glusterfs.
On Tue, Jun 12, 2018 at 8:19 PM, Vijay Bellur wrote:
>
>
On Tue, Jun 12, 2018 at 7:40 AM, mohammad kashif
wrote:
> Hi Milind
>
> The operating system is Scientific Linux 6 which is based on RHEL6. The
> cpu arch is Intel x86_64.
>
> I will send you a separate email with link to core dump.
>
You could also grep for crash in the client log file and the
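For reference, a glusterfs client crash normally leaves a backtrace in the mount log, starting with a "pending frames:" section. A hedged sketch of searching for it, using a fabricated log excerpt (the real log lives under /var/log/glusterfs/ and is named after the mount point):

```shell
# Fabricated excerpt of a glusterfs client log, for illustration only;
# a real one would be e.g. /var/log/glusterfs/mnt-atlas.log.
cat > /tmp/client.log <<'EOF'
[2018-06-12 09:00:00.000000] I [MSGID: 100030] [glusterfsd.c] 0-glusterfs: Started
pending frames:
frame : type(1) op(OPEN)
patchset: git://git.gluster.org/glusterfs.git
signal received: 11
EOF

# Crash reports begin with "pending frames:" and include a
# "signal received:" line, so grep for either with some context.
grep -n -A 3 "pending frames" /tmp/client.log
```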
Hi Milind
The operating system is Scientific Linux 6, which is based on RHEL 6. The
CPU arch is Intel x86_64.
I will send you a separate email with link to core dump.
Thanks for your help.
Kashif
On Tue, Jun 12, 2018 at 3:16 PM, Milind Changire
wrote:
> Kashif,
> Could you share the core dump
Kashif,
Could you share the core dump via Google Drive or something similar?
Also, let me know the CPU arch and OS distribution on which you are
running gluster.
If you've installed the glusterfs-debuginfo package, you'll also get the
source lines in the backtrace via gdb.
On Tue, Jun 12, 2018 a
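For reference, extracting a backtrace from a core typically looks like the following (the core file path is a placeholder; gdb and a glusterfs-debuginfo package matching the installed glusterfs version are assumed):

```shell
# Non-interactive backtrace of all threads; with glusterfs-debuginfo
# installed, the frames include source file names and line numbers.
gdb /usr/sbin/glusterfs /path/to/core \
    -batch -ex "bt full" -ex "thread apply all bt" > backtrace.txt
```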
Hi Milind, Vijay
Thanks, I have some more information now, as I straced glusterd on the
client:
138544 0.000131 mprotect(0x7f2f70785000, 4096, PROT_READ|PROT_WRITE) =
0 <0.26>
138544 0.000128 mprotect(0x7f2f70786000, 4096, PROT_READ|PROT_WRITE) =
0 <0.27>
138544 0.000126 mprotect
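For context, output like the above can be captured with something along these lines (a sketch; the process name is an assumption, since per the thread there is no glusterd on the client, only the glusterfs mount process):

```shell
# -f follows threads (the 138544 column is the thread ID), -r prints
# relative timestamps, -T prints syscall durations (the <...> fields).
strace -f -r -T -p "$(pidof glusterfs)" -o /tmp/glusterfs.strace
```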
Kashif,
You can change the log level with:
$ gluster volume set <VOLNAME> diagnostics.brick-log-level TRACE
$ gluster volume set <VOLNAME> diagnostics.client-log-level TRACE
and see how things fare.
If you want fewer logs you can change the log level to DEBUG instead of
TRACE.
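Once enough data has been collected, the options can be restored to their defaults; a sketch assuming the volume name atlasglust seen elsewhere in the thread:

```shell
# TRACE logging grows the log files very quickly, so reset both
# options to their default (INFO) level when debugging is done.
gluster volume reset atlasglust diagnostics.brick-log-level
gluster volume reset atlasglust diagnostics.client-log-level
```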
On Tue, Jun 12, 2018 at 3:37 PM, moha
On Tue, Jun 12, 2018 at 03:04:14PM +1200, Thing wrote:
> What I would like to do I think is a,
>
> *Distributed-Replicated volume*
>
> a) have 1 and 2 as raid1
> b) have 4 and 5 as raid1
> c) have 3 and 6 as a raid1
> d) join this as concatenation 2+2+2tb
You probably don't actually want to do t
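For context, the layout described above would map to a distribute-replicate volume along these lines (server and brick names are hypothetical; note that replica 2 without an arbiter is prone to split-brain, which is one reason to be cautious here):

```shell
# Three replica-2 pairs (1&2, 4&5, 3&6); listing six bricks with
# "replica 2" distributes files across the three pairs automatically,
# giving roughly 2+2+2 TB of usable space.
gluster volume create datavol replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server4:/bricks/b1 server5:/bricks/b1 \
    server3:/bricks/b1 server6:/bricks/b1
```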
Hi Vijay
Now it is unmounting every 30 mins!
The server log at /var/log/glusterfs/bricks/glusteratlas-brics001-gv0.log
has only this line:
[2018-06-12 09:53:19.303102] I [MSGID: 115013]
[server-helpers.c:289:do_fd_cleanup] 0-atlasglust-server: fd cleanup on
/atlas/atlasdata/zgubic/hmumu/histogra
On Sat, Jun 9, 2018 at 9:38 AM, Dan Lavu wrote:
> Krutika,
>
> Is it also normal for the following messages as well?
>
Yes, this should be fine. It only represents a transient state when
multiple threads/clients are trying to create the same shard at the same
time. These can be ignored.
-Krutika
typos:
3.12.7 -> 3.12.10
08-03-2018 -> 13-06-2018
On Tuesday 12 June 2018 12:15 PM, Jiffin Tony Thottan wrote:
> Hi,
> It's time to prepare the 3.12.7 release, which falls on the 10th of
> each month, and hence would be 08-03-2018 this time around.
> This mail is to call out the following,
> 1) Are there any pending *