Re: [Gluster-users] set: failed: Quorum not met. Volume operation not allowed. SUCCESS

2020-08-27 Thread Karthik Subrahmanya
Hi,

You had server-quorum enabled, which could be the cause of the errors
you were getting in the first place. In the latest releases only
client-quorum is enabled and server-quorum is disabled by default.
Yes, the order matters in such cases.
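
Putting the whole thread together, the recovery sequence on the surviving
node would look roughly like this (VOLNAME, <hostname> and the brick paths
are placeholders for your own volume):

   # disable server-side quorum first, then client-side quorum
   gluster volume set VOLNAME cluster.server-quorum-type none
   gluster volume set VOLNAME cluster.quorum-type none

   # drop the replica count to 1 by removing the two dead bricks
   gluster volume remove-brick VOLNAME replica 1 \
       <hostname>:/brick-path <hostname>:/brick-path force

   # start the volume again and mount it to copy the data off
   gluster volume start VOLNAME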

Regards,
Karthik

On Fri, Aug 28, 2020 at 2:37 AM WK wrote:
>
> So success!
>
> I don't know why, but when I set "server-quorum-type" to none FIRST it
> seemed to work without complaining about quorum.
>
> Then quorum-type could be set to none as well:
>
>gluster volume set VOL cluster.server-quorum-type none
>gluster volume set VOL cluster.quorum-type none
>
> Finally I used Karthik's remove-brick command and it worked this time
> and I am now copying off the needed image.
>
> So I guess order counts.
>
> Thanks.
>
> -wk
>
>
>
> On 8/27/2020 12:47 PM, WK wrote:
> > No Luck.  Same problem.
> >
> > I stopped the volume.
> >
> > I ran the remove-brick command. It warned about not being able to
> > migrate files from removed bricks and asked if I wanted to continue.
> >
> > When I say 'yes', Gluster responds with 'failed: Quorum not met Volume
> > operation not allowed'.
> >
> >
> > -wk
> >
> > On 8/26/2020 9:28 PM, Karthik Subrahmanya wrote:
> >> Hi,
> >>
> >> Since your two nodes are scrapped and there is no chance that they
> >> will come back at a later time, you can try reducing the replica count
> >> to 1 by removing the down bricks from the volume and then mounting the
> >> volume back to access the data that is available on the only up
> >> brick.
> >> The remove-brick command looks like this:
> >>
> >> gluster volume remove-brick VOLNAME replica 1
> >>   <hostname>:/brick-path
> >>   <hostname>:/brick-path force
> >>
> >> Regards,
> >> Karthik
> >>
> >>
> >> On Thu, Aug 27, 2020 at 4:24 AM WK wrote:
> >>> So we migrated a number of VMs from a small Gluster 2+1A volume to a
> >>> newer cluster.
> >>>
> >>> Then a few days later the client said he wanted an old forgotten
> >>> file that had been left behind on the deprecated system.
> >>>
> >>> However, the arbiter and one of the brick nodes had been scrapped,
> >>> leaving only a single gluster node.
> >>>
> >>> The volume I need uses shards, so I am not excited about having to
> >>> piece it back together.
> >>>
> >>> I powered up the single node and tried to mount the volume; of course
> >>> it refused to mount due to quorum, and gluster volume status showed
> >>> the volume offline.
> >>>
> >>> In the past I had worked around this issue by disabling quorum, but
> >>> that was years ago, so I googled it and found list messages
> >>> suggesting the following:
> >>>
> >>>   gluster volume set VOL cluster.quorum-type none
> >>>   gluster volume set VOL cluster.server-quorum-type none
> >>>
> >>> However, the gluster 6.9 system refuses to accept those set commands
> >>> due to quorum and spits out the 'set: failed' error.
> >>>
> >>> So in modern Gluster, what is the preferred method for starting and
> >>> mounting a single node/volume that was once part of an actual 3-node
> >>> cluster?
> >>>
> >>> Thanks.
> >>>
> >>> -wk




Re: [Gluster-users] gluster volume rebalance making things more unbalanced

2020-08-27 Thread Strahil Nikolov
Sadly I have no idea why rebalance did that, so you should check the logs on
all nodes for clues.
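
On a standard install, the rebalance log on each node is the first place to
look (the actual volume name goes in the file name):

   less /var/log/glusterfs/<volname>-rebalance.log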

Is there any reason why you used "force" in that command?


Best Regards,
Strahil Nikolov

On Thursday, August 27, 2020, at 17:32:24 GMT+3, Pat Haley wrote:
Hi,

We have a distributed gluster volume spread across 4 bricks.  Yesterday I
noticed that the remaining space was uneven (about 2.7TB, 1.7TB, 1TB, 1TB),
so I issued the following rebalance command:

 
 * gluster volume rebalance <volname> start force


Today I see that instead, things have gotten even more unbalanced (64G 853G 
6.2T 20K).  I'm killing the rebalance now.  What should I do to make sure that 
I get a successful rebalance?

Thanks

Pat
-- 

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Pat Haley  Email:  pha...@mit.edu
Center for Ocean Engineering   Phone:  (617) 253-6824
Dept. of Mechanical Engineering    Fax:    (617) 253-8125
MIT, Room 5-213                    http://web.mit.edu/phaley/www/
77 Massachusetts Avenue
Cambridge, MA  02139-4301








[Gluster-users] Eager lock

2020-08-27 Thread Gilberto Nunes
Hi there

I wonder if eager lock for a 2-node gluster brings some improvement,
especially in this new gluster 8.1... Are there any pros?
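
For reference, the option can be inspected and toggled per volume; a minimal
sketch, assuming a volume named VOL:

   gluster volume get VOL cluster.eager-lock
   gluster volume set VOL cluster.eager-lock on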
Thanks

---
Gilberto Nunes Ferreira






[Gluster-users] [Gluster-devel] Announcing Gluster release 8.1

2020-08-27 Thread Rinku Kothiya
Hi,

The Gluster community is pleased to announce the release of Gluster 8.1
(packages available at [1]).
Release notes for the release can be found at [2].
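
After upgrading, a quick sanity check of what a node is actually running (a
minimal sketch; assumes the gluster CLI is on the PATH):

   gluster --version
   gluster volume get all cluster.op-version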

Major changes, features, improvements and limitations addressed in this
release:

 - Performance improvement in the creation of large files (VM disks in
oVirt) by cutting down trivial lookups of non-existent shards. Issue (#1425)
 - Fsync in the replication module uses the eager-lock functionality, which
improves the performance of VM workloads by more than 50% for small-block
(approximately 4 KB) write-heavy workloads. Issue (#1253)


Thanks,
Gluster community

References:

[1] Packages for 8.1:
https://download.gluster.org/pub/gluster/glusterfs/8/8.1/

[2] Release notes for 8.1:
https://docs.gluster.org/en/latest/release-notes/8.1/






Re: [Gluster-users] gluster volume rebalance making things more unbalanced

2020-08-27 Thread Joe Julian
When a file should be moved based on its dht hash mapping, but the target it
should be moved to has less free space than the origin, the rebalance command
does not move the file and leaves the dht pointer in place. When you use
"force", you override that behavior and always move each file regardless of
free space.

In theory, by the time the rebalance is finished you should end up with
utilization mostly balanced, but while the rebalance is processing you may
end up in the state you show.
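
To make the distinction concrete, the two forms plus the status check (the
volume name is a placeholder):

   gluster volume rebalance <volname> start         # skips moves whose target brick has less free space
   gluster volume rebalance <volname> start force   # moves each file regardless of target free space
   gluster volume rebalance <volname> status        # shows per-node progress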

On August 27, 2020 7:32:16 AM PDT, Pat Haley wrote:
>What should I do to make sure that I get a successful rebalance?

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.





Re: [Gluster-users] gluster volume rebalance making things more unbalanced

2020-08-27 Thread Pat Haley


Hi Strahil

The documentation I looked at:

 * https://docs.google.com/document/d/18iGX6I7I0yHUZ1zAfLIEnXDRPkIoMh6CHmwyyb6J4GM/edit#heading=h.oogvisuwd2qd

suggested that not using force might leave some links behind that could
affect performance.
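
For reference, a running rebalance can be monitored, and halted, with the
following (the volume name is a placeholder):

   gluster volume rebalance <volname> status
   gluster volume rebalance <volname> stop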


Thanks

Pat


On 8/27/20 10:43 AM, Strahil Nikolov wrote:

Is there any reason why you used "force" in that command?




