Hi List,
In my 2-server gluster setup, one server is consistently restarting the
glusterd process. On the first second of every other minute, I get a
shutdown in my glusterd log:
W [glusterfsd.c:1596:cleanup_and_exit]
(-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x7fa3) [0x7f7410fa5fa3]
atan Danti wrote:
Il 2020-09-04 01:00 Computerisms Corporation ha scritto:
For the sake of completeness I am reporting back that your suspicions
seem to have been validated. I talked to the data center, they made
some changes. we talked again some days later, and they made some
more changes, and f
Hi Xavi, Amar,
For security reasons, the value passed cannot represent a full path, so
this was changed to only tell the name of a file. The file itself is
stored inside /var/run/gluster.
If you look there, there should be a file like '-tmp-stats.txt'
(each '/' in the path replaced by '-') which contains
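The filename mangling described there (the dump target path with each '/' turned into a separator, matching the '-tmp-stats.txt' example) can be sketched in bash; the path below is a placeholder, not a real dump target:

```shell
# A dump target of /tmp/stats.txt becomes a file named "-tmp-stats.txt"
# under /var/run/gluster (example path only).
dump_target="/tmp/stats.txt"
dump_file="${dump_target//\//-}"   # bash: replace every '/' with '-'
printf '%s\n' "$dump_file"         # -> -tmp-stats.txt
```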
node's load
before starting it again?
Best Regards,
Strahil Nikolov
On 21 August 2020 at 7:44:35 GMT+03:00, Computerisms Corporation
wrote:
Hi Strahil,
You can use 'virt-what' binary to find if and what type of Virtualization is
used.
cool, did not know about that. trouble server:
root@moogle:/# virt-what
hyperv
kvm
good server:
root@mooglian:/# virt-what
kvm
I have a suspicion you are on top of OpenStack (which uses
Hi List,
I am still struggling with my setup. One server is working reasonably
well for serving websites, but serving sites from the 2nd server is
still using excessive amounts of cpu; a bit of which is gluster, but
most of which is apache.
The Gluster docs mention client-side profiling:
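The client-side profiling mentioned in the docs boils down to two volume options plus a dump trigger from a client mount; 'webvol' and the mount path are placeholders (a sketch, not a tuned recipe):

```shell
# Enable latency measurement and FOP-hit counting on the client graph
gluster volume set webvol diagnostics.latency-measurement on
gluster volume set webvol diagnostics.count-fop-hits on

# Trigger a stats dump from a client mountpoint; the xattr value names
# the output file (it no longer takes a full path, per the thread)
setfattr -n trusted.io-stats-dump -v stats.txt /mnt/webvol
```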
Hi Strahil,
so over the last two weeks, the system has been relatively stable. I
have powered off both servers at least once, for about 5 minutes each
time. The server came up, auto-healed what it needed to, so all of that
part is working as expected.
will answer things inline and follow with
al Robot LLC
beerpla.net <http://beerpla.net/> | @ArtemR <http://twitter.com/ArtemR>
On Wed, Aug 5, 2020 at 9:44 AM Computerisms Corporation
mailto:b...@computerisms.ca>> wrote:
Hi List,
> So, we just moved into a quieter time of the day, but maybe I just
Hi List,
So, we just moved into a quieter time of the day, but maybe I just
stumbled onto something. I was trying to figure out if/how I could
throw more RAM at the problem. The gluster docs say write-behind is not
a cache unless flush-behind is on. So it seems that is a way to throw
RAM at it?
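If the goal is to spend RAM on write-behind as described, the relevant knobs look like this; 'webvol' and the window size are placeholder assumptions, not tested tuning:

```shell
# Let flush()/close() return without waiting for pending writes,
# which is what turns write-behind into an effective cache
gluster volume set webvol performance.flush-behind on
# Per-file write-behind buffer (default is 1MB)
gluster volume set webvol performance.write-behind-window-size 16MB
```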
you, please, send me more details about this configuration?
I would appreciate that!
Thank you
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Tue, 4 Aug 2020 at 23:47, Computerisms Corporation
mailto:b...@computerisms.ca
Hi Gilberto,
My understanding is there can only be one arbiter per replicated set. I
don't have a lot of practice with gluster, so this could be bad advice,
but the way I dealt with it on my two servers was to use 6 bricks as
distributed-replicated (this is also relatively easy to migrate to
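A 6-brick, 2-server distributed-replicated layout like the one described would be created along these lines; server names and brick paths are placeholders, and note that plain replica 2 is split-brain-prone, which is why arbiters come up in this thread:

```shell
# 3 replica-2 pairs, distributed; each pair spans both servers
gluster volume create webvol replica 2 \
  server1:/data/brick1 server2:/data/brick1 \
  server1:/data/brick2 server2:/data/brick2 \
  server1:/data/brick3 server2:/data/brick3
gluster volume start webvol
```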
Hi Strahil,
thanks again for sticking with me on this.
Hm... OK. I guess you can try 7.7 whenever it's possible.
Acknowledged.
Perhaps I am not understanding it correctly. I tried these suggestions
before and it got worse, not better. So I have been operating under the
assumption that
php / cache.
I'd love to figure this out as well and tune gluster for heavy reads and
moderate writes, but I haven't cracked that recipe yet.
On Mon, Aug 3, 2020, 8:08 PM Computerisms Corporation
mailto:b...@computerisms.ca>> wrote:
Hi Gurus,
I have been trying to wrap my head
Hi Strahil, thanks for your response.
I have compiled gluster 7.6 from sources on both servers.
There is a 7.7 version which fixes some stuff. Why do you have to
compile it from source?
Because I have often found with other stuff in the past compiling from
source makes a bunch of
Hi Gurus,
I have been trying to wrap my head around performance improvements on my
gluster setup, and I don't seem to be making any progress. I mean
forward progress; making it worse takes practically no effort at all.
My gluster is distributed-replicated across 6 bricks and 2 servers,
share your results.
For me building the rpms from the gluster source was easy /on CentOS8/, but on
CentOS7 I got errors.
Best Regards,
Strahil Nikolov
On 25 June 2020 at 4:22:29 GMT+03:00, Computerisms Corporation
wrote:
First, not a question, but worth mentioning; I configured
First, not a question, but worth mentioning; I configured with
--with-ipv6-default, but under the configure summary at the end it says:
IPV6 default : no
Figured this one out; tirpc is necessary for the IPv6 support.
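On Debian-family systems the missing dependency can be satisfied before re-running configure; the package name is the Debian one and is an assumption here (use libtirpc-devel on Fedora/CentOS):

```shell
# libtirpc provides the transport-independent RPC that the IPv6
# support needs at configure time
apt-get install -y libtirpc-dev
./configure --with-ipv6-default
# the configure summary should now report: IPV6 default : yes
```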
Hi List,
today I am playing with building gluster from sources.
First, not a question, but worth mentioning; I configured with
--with-ipv6-default, but under the configure summary at the end it says:
IPV6 default : no
Next, I was wondering about memory pooler, I would expect it would
that this one
is way more straightforward - rename volume dir , rename volume files and swap
the old name of the volume in the files to reflect the new one.
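Those steps can be simulated on a throwaway copy before touching /var/lib/glusterd; everything below is a sandbox sketch (names and layout are placeholders, and a real vols/ directory also contains files named after the volume, which would need renaming too):

```shell
# Simulate the rename steps in a temp dir -- never on a live cluster
# without stopping glusterd first.
tmp=$(mktemp -d)
mkdir -p "$tmp/vols/oldvol"
echo "volname=oldvol" > "$tmp/vols/oldvol/info"

# 1. rename the volume directory
mv "$tmp/vols/oldvol" "$tmp/vols/newvol"

# 2. swap the old volume name inside the files
grep -rl 'oldvol' "$tmp/vols/newvol" | xargs sed -i 's/oldvol/newvol/g'

cat "$tmp/vols/newvol/info"   # -> volname=newvol
```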
Best Regards,
Strahil Nikolov
On 18 June 2020 at 19:22:46 GMT+03:00, Computerisms Corporation
wrote:
Hi Gluster Gurus,
Due to some hasty
Hi Gluster Gurus,
Due to some hasty decisions and inadequate planning/testing, I find
myself with a single-brick Distributed gluster volume. I had initially
intended to extend it to a replicated setup with an arbiter based on a
post I found that said that was possible, but I clearly messed
, and my gluster is back online.
On 2018-10-31 10:32 a.m., Computerisms Corporation wrote:
forgot to add output of glusterd console when starting the volume:
[2018-10-31 17:31:33.887923] D [MSGID: 0]
[glusterd-volume-ops.c:572:__glusterd_handle_cli_start_volume]
0-management: Received start vol
:671:event_dispatch_epoll_worker] 0-epoll: Failed to
dispatch handler
On 2018-10-31 10:19 a.m., Computerisms Corporation wrote:
Hi,
it occurs to me that maybe the previous email was too many words and not
enough data, so I will try to display the issue differently.
gluster created (single brick volume
tmap, ProgVers: 1, Proc: 5) to rpc-transport (glusterfs)
still seeing the empty pid file and the failed connection attempt
(Invalid argument) as the most likely culprits, but I have read
everything of relevance I have found on Google and not discovered a
solution yet...
Hi,
Fortunately I am playing in a sandbox right now, but I am good and stuck
and hoping someone can point me in the right direction.
I have been playing for about 3 months with a gluster that currently has
one brick. The idea is that I have a server with data, I need to
migrate that server
Hi Gluster Gurus,
I was casting about today for a way to make a symlink that resolves to
different files based on the username of the process owner. The
idea/goal is to serve a webapp to multiple apache virtualhosts using
mpm-itk from a single aliased read-only directory, but have the config
Hi,
I have a problem, but am not really sure the question I need to ask. So
going to lay it all down and maybe someone can point me in the right
direction...
I have a replicated gluster volume across two servers. Each server has
its OS installed on an SSD, and a RAID array is mounted on
27 matches