On Thu, Nov 22, 2018 at 5:51 PM Marco Lorenzo Crociani
wrote:
>
> Hi,
> I opened a bug on gluster because I have reading errors on files on a
> gluster volume:
> https://bugzilla.redhat.com/show_bug.cgi?id=1652548
>
> The files are many of the VMs images of the oVirt DATA storage domain.
> oVirt
Hi Nico,
This option is a ganesha-server-wide option (not per export). You need to set
it in ganesha.conf in the following block:
NFSv4 {
minor_versions = 1;
}
Not inside the export block of the volume configuration file.
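For illustration, a minimal ganesha.conf sketch showing the global NFSv4 block alongside a per-volume EXPORT block (the export id, path, and volume name here are hypothetical, not from the thread):

```
NFSv4 {
    # Server-wide setting: advertise only NFSv4.1
    minor_versions = 1;
}

EXPORT {
    # Per-export settings go here -- not minor_versions
    Export_Id = 1;              # hypothetical export id
    Path = "/myvolume";         # hypothetical path
    Pseudo = "/myvolume";
    FSAL {
        Name = GLUSTER;
        Hostname = "localhost";
        Volume = "myvolume";    # hypothetical volume name
    }
}
```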
- Original Message -
From: "Nico van Royen"
Hi Atin,
Thanks for the details. I think the issue is with a few of the nodes that
aren't serving any bricks being in rejected state. When I remove them from the
pool and stop glusterfs on those nodes, everything seems normal.
We keep those nodes as spares, but have glusterd running, because in our
On Thu, Nov 22, 2018 at 3:30 AM mabi wrote:
> Hello,
>
> I would like to know if by increasing the op-version of all my GlusterFS
> volumes from its current version 31202 to 40100 using the following
> command:
>
> gluster volume set all op-version 40100
>
> Will my clients using GlusterFS
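As a sanity check before bumping the op-version, the cluster's current and maximum supported values can be queried first. A sketch of the usual sequence (these commands require a running Gluster cluster; the documented option name is cluster.op-version):

```
# Current cluster op-version
gluster volume get all cluster.op-version

# Highest op-version this cluster's installed version can support
gluster volume get all cluster.max-op-version

# Raise the op-version, as in the command above
gluster volume set all cluster.op-version 40100
```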
On Mon, Nov 26, 2018 at 8:21 AM Atin Mukherjee wrote:
>
>
> On Sun, Nov 25, 2018 at 8:40 PM Jeevan Patnaik
> wrote:
>
>> Hi,
>>
>> I am getting the output "Another transaction is in progress" with a few
>> gluster volume commands, including the stop command. And with the gluster
>> volume status command, it's
On Sun, Nov 25, 2018 at 8:40 PM Jeevan Patnaik wrote:
> Hi,
>
> I am getting the output "Another transaction is in progress" with a few
> gluster volume commands, including the stop command. And the gluster volume
> status command just hangs and fails with a timeout error.
>
This is primarily because of
2500-3000 disks per cluster is the maximum usable limit; beyond that almost
nothing works.
We are using a 2700-disk cluster for cold storage with ec (erasure coding).
Be careful with heal operations; I see 1 week / 8T heal throughput...
On Sun, Nov 25, 2018 at 6:16 PM Andreas Davour wrote:
>
> On Sun, 25 Nov 2018, Jeevan
Hi,
I am getting the output "Another transaction is in progress" with a few
gluster volume commands, including the stop command. And the gluster volume
status command just hangs and fails with a timeout error.
So, I want to find out which transaction is hung. How can I find this out? I
ran volume statedump
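For reference, Gluster processes dump their internal state (including held locks) on SIGUSR1, which can help identify where a stuck transaction is holding the cluster-wide lock. A hedged sketch, assuming default log and run-dir locations and a hypothetical volume name:

```
# Ask glusterd itself for a statedump (written under /var/run/gluster/)
kill -SIGUSR1 $(pidof glusterd)

# Dump brick-process state for a volume (volume name is hypothetical)
gluster volume statedump myvol

# Check glusterd's log for which peer is holding the lock
grep -i "lock" /var/log/glusterfs/glusterd.log | tail
```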
Hi Andreas,
Before rebooting, I tried some performance tuning in order to prevent
timeout errors. As we have sufficient RAM and CPU power, I increased
transport.listen-backlog, as well as the SYN backlog and max connections in
the kernel. So, I expected that it wouldn't cause a problem.
Also
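The kernel-side settings mentioned above would typically correspond to sysctls like the following. This is a sketch with illustrative values, not the poster's actual configuration:

```
# /etc/sysctl.d/90-gluster.conf (illustrative values, not from the thread)
net.ipv4.tcp_max_syn_backlog = 4096   # pending-SYN queue depth
net.core.somaxconn = 4096             # max queued connections per listen socket
```

The GlusterFS side would be a volume option, e.g. `gluster volume set myvol transport.listen-backlog 1024` (volume name hypothetical).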
Ah.. I am now able to tell apart the hosts that are commonly rejected:
it's the hosts that aren't serving any bricks. Is it a bad idea to keep a
host that's not serving any bricks in the pool? Aren't they kept in sync with
the other hosts? Regarding my previous assumption that all nodes should be
restarted
Hi,
I understand something now:
I think glusterd should not be restarted on all nodes at once. And if this is
true, can anyone provide a technical explanation of how it affects the
checksum?
And it seems that to fix the rejected hosts, I need to clear
/var/lib/glusterd except glusterd.info and start
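That matches the commonly documented recovery procedure for rejected peers. A hedged sketch, to be run on the rejected node (the peer hostname is hypothetical):

```
systemctl stop glusterd

# Preserve the node's identity (UUID); remove the rest of the config state
cd /var/lib/glusterd
find . -mindepth 1 ! -name 'glusterd.info' -delete

systemctl start glusterd

# Re-probe a healthy peer so the configuration re-syncs, then restart once more
gluster peer probe good-node.example.com
systemctl restart glusterd
```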
Hi,
I have different issues:
I restarted the glusterd service on my 72 nodes almost in parallel with
Ansible while the gluster NFS clients were in a mounted state.
After that, many of the gluster peers went to rejected state. In the logs, I
see msg id 106010 stating that the checksum doesn't match.
I'm
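Given the earlier observation that glusterd should not be restarted on all nodes at once, the restart can be serialized so peers re-sync between batches. A hypothetical Ansible sketch (the playbook, group, and service names are assumptions):

```yaml
# restart-glusterd.yml -- restart one node at a time instead of all 72 at once
- hosts: gluster_nodes
  serial: 1          # one host per batch, so peers can re-sync between restarts
  become: true
  tasks:
    - name: Restart glusterd
      ansible.builtin.systemd:
        name: glusterd
        state: restarted
```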