Re: [Gluster-users] Glusterfs community status

2024-07-11 Thread Stefan Kania
Unfortunately, there are hardly any answers from developers or Red Hat 
people. Either there is nobody working on the project anymore, or all 
the people who know what the future of GlusterFS will be have been 
muzzled by Red Hat/IBM. The whole thing shows once again how a company 
appropriates open source projects and then runs them into the ground in 
favour of its own solution (in this case GPFS).


On 09.07.24 at 07:53, Ilias Chasapakis forumZFD wrote:
Hi, we at forumZFD are currently experiencing problems similar to those 
mentioned here on the mailing list, especially in the latest messages.


Our gluster just doesn't heal all entries, and "manual" healing is long 
and tedious. Entries accumulate over time and we have to do regular 
cleanups that take a long time and are risky. Despite changing the available 
options with different combinations of values, the problem persists. So 
we thought, "let's go to the community meeting" if not much is happening 
here on the list. We are at the end of our knowledge and can therefore 
no longer contribute much to the list. Unfortunately, nobody was at the 
community meeting. Somehow we have the feeling that there is no one left 
in the community or in the project who is interested in fixing the 
basics of Gluster (namely the healing). Is that the case, and is Gluster 
really end of life?


We greatly appreciate the contributions of the last few years and all the 
work done, as well as the honest efforts to lend a hand. But it would be 
good to get some orientation on the status of the project itself.


Many thanks in advance for any replies.

Ilias










Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs community status

2024-07-10 Thread Strahil Nikolov
Hi Ilias,

Usually, when healing problems occur, the first step I take is to check whether 
all clients are connected to all bricks, using:
gluster volume status all client-list
gluster volume status all clients

Can you check whether you have clients connected to only some of the bricks 
instead of all of them?
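
A quick way to spot such a mismatch, as a minimal sketch (assuming a volume
named gv0 - substitute your own volume name), is to compare the client count
reported for each brick:

gluster volume status gv0 clients | grep -E 'Brick|Clients connected'

Every brick should report the same number of connected clients; a brick
showing fewer connections is usually the one on which heal entries keep
piling up.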

Best Regards,
Strahil Nikolov
On Tuesday, 9 July 2024 at 08:59:38 GMT+3, Ilias Chasapakis forumZFD wrote:
 
Hi, we at forumZFD are currently experiencing problems similar to those 
mentioned here on the mailing list, especially in the latest messages.

Our gluster just doesn't heal all entries, and "manual" healing is long 
and tedious. Entries accumulate over time and we have to do regular 
cleanups that take a long time and are risky. Despite changing the available 
options with different combinations of values, the problem persists. So 
we thought, "let's go to the community meeting" if not much is happening 
here on the list. We are at the end of our knowledge and can therefore 
no longer contribute much to the list. Unfortunately, nobody was at the 
community meeting. Somehow we have the feeling that there is no one left 
in the community or in the project who is interested in fixing the 
basics of Gluster (namely the healing). Is that the case, and is Gluster 
really end of life?

We greatly appreciate the contributions of the last few years and all the 
work done, as well as the honest efforts to lend a hand. But it would be 
good to get some orientation on the status of the project itself.

Many thanks in advance for any replies.

Ilias

-- 
forumZFD
Entschieden für Frieden | Committed to Peace

Ilias Chasapakis
Referent IT | IT Referent

Forum Ziviler Friedensdienst e.V. | Forum Civil Peace Service
Am Kölner Brett 8 | 50825 Köln | Germany

Tel 0221 91273243 | Fax 0221 91273299 | http://www.forumZFD.de

Vorstand nach § 26 BGB, einzelvertretungsberechtigt|Executive Board:
Alexander Mauz, Sonja Wiekenberg-Mlalandle, Jens von Bargen
VR 17651 Amtsgericht Köln

Spenden|Donations: IBAN DE90 4306 0967 4103 7264 00  BIC GENODEM1GLS








Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs community status

2024-07-10 Thread Kyle Maas
To be honest, despite the slowed pace of development, I've had very good 
results with GlusterFS and, while it's not perfect, I consider it stable 
enough for heavy production use. The occasional glitches here 
and there are generally fixable without any downtime, at least in my 
environment. I'd love to see it progress further - there are features 
that would be very helpful - but even if nothing but security updates 
happened for a long time, I'd still rather use GlusterFS than Ceph. 
It's feature-complete enough for me, and its failover behaviour is, 
IMHO, much better than Ceph's.


Warm Regards,
Kyle Maas



On 7/10/24 09:42, Ronny Adsetts wrote:

Hi,

We'll continue to use Gluster whilst we can and remain hopeful that a community 
will slowly form around the project. These things can take time I guess/hope.

I appreciate the simplicity of Gluster and it works well for our use case - we 
have a small distribute/replicate storage cluster backing our KVM and iSCSI 
data.

I've looked at Ceph a number of times but the complexity of it doesn't fill me 
with joy. I think something that complex is going to fall to pieces unless you 
really understand what you're doing and that's a big undertaking.

Ronny


Ilias Chasapakis forumZFD wrote on 09/07/2024 06:53:

Hi, we at forumZFD are currently experiencing problems similar to
those mentioned here on the mailing list, especially in the latest
messages.

Our gluster just doesn't heal all entries, and "manual" healing is
long and tedious. Entries accumulate over time and we have to do
regular cleanups that take a long time and are risky. Despite changing
the available options with different combinations of values, the
problem persists. So we thought, "let's go to the community meeting"
if not much is happening here on the list. We are at the end of our
knowledge and can therefore no longer contribute much to the list.
Unfortunately, nobody was at the community meeting. Somehow we have
the feeling that there is no one left in the community or in the
project who is interested in fixing the basics of Gluster (namely the
healing). Is that the case, and is Gluster really end of life?

We greatly appreciate the contributions of the last few years and all
the work done, as well as the honest efforts to lend a hand. But it
would be good to get some orientation on the status of the project
itself.

Many thanks in advance for any replies.

Ilias







Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs community status

2024-07-10 Thread Ronny Adsetts
Hi,

We'll continue to use Gluster whilst we can and remain hopeful that a community 
will slowly form around the project. These things can take time I guess/hope.

I appreciate the simplicity of Gluster and it works well for our use case - we 
have a small distribute/replicate storage cluster backing our KVM and iSCSI 
data.

I've looked at Ceph a number of times but the complexity of it doesn't fill me 
with joy. I think something that complex is going to fall to pieces unless you 
really understand what you're doing and that's a big undertaking.

Ronny


Ilias Chasapakis forumZFD wrote on 09/07/2024 06:53:
> Hi, we at forumZFD are currently experiencing problems similar to
> those mentioned here on the mailing list especially on the latest
> messages.
> 
> Our gluster just doesn't heal all entries and "manual" healing is
> long and tedious. Entries accumulate in time and we have to do
> regular cleanups that take long and are risky.  Despite changing
> available options with different combinations of values, the problem
> persists. So we thought, "let's go to the community meeting" if not
> much is happening here on the list. We are at the end of our
> knowledge and can therefore no longer contribute much to the list.
> Unfortunately, nobody was at the community meeting. Somehow we have
> the feeling that there is no one left in the community or in the
> project who is interested in fixing the basics of Gluster (namely the
> healing). Is that the case and is gluster really end of life?
> 
> We appreciate a lot the contributions in the last few years and all
> the work done. As well as for the honest efforts to give a hand. But
> would be good to have an orientation on the status of the project
> itself.
> 
> Many thanks in advance for any replies.
> 
> Ilias

-- 
Ronny Adsetts
Technical Director
Amazing Internet Ltd, London
t: +44 20 8977 8943
w: www.amazinginternet.com

Registered office: 85 Waldegrave Park, Twickenham, TW1 4TJ
Registered in England. Company No. 4042957





Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs community status

2024-07-09 Thread Marcus Pedersén
Hi,
Here is my take on gluster today.
We experience the same type of problem as well, with
failing heals and manual, time-consuming work.
I asked the same type of question on the list
a number of months ago about the state of the gluster project,
and my conclusion from the answers is that the gluster project
has just slowed down more and more and people are leaving for
other file systems.
For a long time nothing has been released from the project and the
mailing list has just gone more and more quiet.
Gluster has served us well for many years and I think that
gluster has been a really great filesystem; it makes me
sad to see that gluster is coming to an end. I really like it!!
Internally in our organization we have had discussions and run
tests with cephfs, and our decision is that we will leave
gluster and use cephfs instead.
As we do not see that gluster will improve, we have no other option
than to use other filesystems, and in our case it will be ceph.

I hope this helps!!

Best regards
Marcus



On Tue, Jul 09, 2024 at 07:53:36AM +0200, Ilias Chasapakis forumZFD wrote:
> CAUTION: This email originated from outside of the organization. Do not click 
> links or open attachments unless you recognize the sender and know the 
> content is safe.
>
>
> Hi, we at forumZFD are currently experiencing problems similar to those
> mentioned here on the mailing list especially on the latest messages.
>
> Our gluster just doesn't heal all entries and "manual" healing is long
> and tedious. Entries accumulate in time and we have to do regular
> cleanups that take long and are risky.  Despite changing available
> options with different combinations of values, the problem persists. So
> we thought, "let's go to the community meeting" if not much is happening
> here on the list. We are at the end of our knowledge and can therefore
> no longer contribute much to the list. Unfortunately, nobody was at the
> community meeting. Somehow we have the feeling that there is no one left
> in the community or in the project who is interested in fixing the
> basics of Gluster (namely the healing). Is that the case and is gluster
> really end of life?
>
> We appreciate a lot the contributions in the last few years and all the
> work done. As well as for the honest efforts to give a hand. But would
> be good to have an orientation on the status of the project itself.
>
> Many thanks in advance for any replies.
>
> Ilias
>
> --
> forumZFD
> Entschieden für Frieden | Committed to Peace
>
> Ilias Chasapakis
> Referent IT | IT Referent
>
> Forum Ziviler Friedensdienst e.V. | Forum Civil Peace Service
> Am Kölner Brett 8 | 50825 Köln | Germany
>
> Tel 0221 91273243 | Fax 0221 91273299 | http://www.forumZFD.de
>
> Vorstand nach § 26 BGB, einzelvertretungsberechtigt|Executive Board:
> Alexander Mauz, Sonja Wiekenberg-Mlalandle, Jens von Bargen
> VR 17651 Amtsgericht Köln
>
> Spenden|Donations: IBAN DE90 4306 0967 4103 7264 00   BIC GENODEM1GLS
>



> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users


--
**
* Marcus Pedersén*
* System administrator   *
**
* Interbull Centre   *
*    *
* Department of Animal Breeding & Genetics — SLU *
* Box 7023, SE-750 07*
* Uppsala, Sweden*
**
* Visiting address:  *
* Room 55614, Ulls väg 26, Ultuna*
* Uppsala*
* Sweden *
**
* Tel: +46-(0)18-67 1962 *
**
**
* ISO 9001 Bureau Veritas No SE004561-1  *
**
---
När du skickar e-post till SLU så innebär detta att SLU behandlar dina 
personuppgifter. För att läsa mer om hur detta går till, klicka här 

E-mailing SLU will result in SLU processing your personal data. For more 
information on how this is done, click here 





Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs 10.5-1 healing issues

2024-04-09 Thread Darrell Budic
The big one I see for you is to investigate and enable sharding. It can improve 
performance and makes it much easier to heal VM-style workloads. Be aware that 
once you turn it on, you can’t go back easily, and you need to copy the VM disk 
images around to get them sharded before it will show any real effect. A 
couple of other recommendations from my main volume (three dedicated host servers 
with HDDs and SSD/NVMe caching and log volumes on ZFS). The cluster.shd-* 
entries are especially recommended. This is on gluster 9.4 at the moment, so 
some of these won’t map exactly.

Volume Name: gv1
Type: Replicate
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Options Reconfigured:
cluster.read-hash-mode: 3
performance.client-io-threads: on
performance.write-behind-window-size: 64MB
performance.cache-size: 1G
nfs.disable: on
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: on
performance.io-cache: off
performance.stat-prefetch: on
cluster.eager-lock: enable
network.remote-dio: enable
server.event-threads: 4
client.event-threads: 8
performance.io-thread-count: 64
performance.low-prio-threads: 32
features.shard: on
features.shard-block-size: 64MB
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10240
cluster.choose-local: false
cluster.granular-entry-heal: enable
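
If you want to try these on an existing volume, the options above can be
applied one at a time with gluster volume set; a minimal sketch, assuming the
volume name gv1 from the listing above (pick only the options that make sense
for your workload):

gluster volume set gv1 features.shard on
gluster volume set gv1 features.shard-block-size 64MB
gluster volume set gv1 cluster.shd-max-threads 8
gluster volume set gv1 cluster.shd-wait-qlength 10240
gluster volume set gv1 cluster.granular-entry-heal enable
gluster volume get gv1 all | grep -E 'shard|shd'

Remember the caveat above: once features.shard is enabled, it cannot easily be
turned off again.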

Otherwise, more details about your servers, CPU, RAM, and disks would be useful 
for suggestions, and details of your network as well. And if you haven’t done 
kernel-level tuning on the servers, you should address that too. These all 
vary a lot by workload and hardware setup, so there aren’t many generic 
recommendations I can give other than to make sure you have tuned your TCP stack and 
enabled the "none" disk elevator (I/O scheduler) on SSDs or on disks used by ZFS. 

There are a lot of tuning suggestions in the archives if you go searching as well.
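
As a minimal sketch of those last two points (the device name sda and the
buffer sizes below are only placeholders - adjust for your hardware, and
persist the values via udev rules and /etc/sysctl.d/ rather than running them
by hand after every boot):

cat /sys/block/sda/queue/scheduler         # show the current I/O scheduler
echo none > /sys/block/sda/queue/scheduler
sysctl -w net.core.rmem_max=16777216       # larger TCP buffers
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem='4096 87380 16777216'
sysctl -w net.ipv4.tcp_wmem='4096 65536 16777216'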

  -Darrell


> On Apr 9, 2024, at 3:05 AM, Ilias Chasapakis forumZFD 
>  wrote:
> 
> Dear all,
> 
> we would like to describe the situation that we have and that does not solve 
> since a long time, that means after many minor
> and major upgrades of GlusterFS
> 
> We use a KVM environment for VMs for glusterfs and host servers are updated 
> regularly. Hosts are disomogeneous hardware,
> but configured with same characteristics.
> 
> The VMs have been also harmonized to use the virtio drivers where available 
> for devices and resources reserved are the same
> on each host.
> 
> Physical switch for hosts has been substituted with a reliable one.
> 
> Probing peers has been and is quite quick in the heartbeat network and 
> communication between the servers for apparently has no issues on disruptions.
> 
> And I say apparently because what we have is:
> 
> - always pending failed heals that used to resolve by a rotated reboot of the 
> gluster vms (replica 3). Restarting only
> glusterfs related services (daemon, events etc.) has no effect, only reboot 
> brings results
> - very often failed heals are directories
> 
> We lately removed a brick that was on a vm on a host that has been entirely 
> substituted. Re-added the brick, sync went on and
> all data was eventually synced and started with 0 pending failed heals. Now 
> it develops failed heals too like its fellow
> bricks. Please take into account we healed all the failed entries (manually 
> with various methods) before adding the third brick.
> 
> After some days of operating, the count of failed heals rises again, not 
> really fast but with new entries for sure (which might solve
> with rotated reboots, or not).
> 
> We have gluster clients also on ctdbs that connect to the gluster and mount 
> via glusterfs client. Windows roaming profiles shared via smb become 
> frequently corrupted,(they are composed of a great number small files and are 
> though of big total dimension). Gluster nodes are formatted with xfs.
> 
> Also what we observer is that mounting with the vfs option in smb on the 
> ctdbs has some kind of delay. This means that you can see the shared folder 
> on for example
> a Windows client machine on a ctdb, but not on another ctdb in the cluster 
> and then after a while it appears there too. And this frequently st
> 
> 
> This is an excerpt of entries on our shd logs:
> 
>> 2024-04-08 10:13:26.213596 +] I [MSGID: 108026] 
>> [afr-self-heal-entry.c:1080:afr_selfheal_entry_do] 0-gv-ho-replicate-0: 
>> performing full entry selfheal on 2c621415-6223-4b66-a4ca-3f6f267a448d
>> [2024-04-08 10:14:08.135911 +] W [MSGID: 114031] 
>> [client-rpc-fops_v2.c:2457:client4_0_link_cbk] 0-gv-ho-client-5: remote 
>> operation failed. [{source=}, 
>> {target=(null)}, {errno=116}, {error=Veraltete Dateizugriffsnummer (file 
>> handle)}]
>> [2024-04-08 10:15:59.135908 +] W [MSGID: 114061] 
>> [client-common.c:2992:client_pre_readdir_v2] 0-gv-ho-client-5: remote_fd is 
>> -1. EBADFD [{gfid=6b5e599e-c836-4ebe-b16a-8224425b88c7}, {errno=77}, 
>>

Re: [Gluster-users] .glusterfs is over 900GB

2023-08-14 Thread Alexander Schreiber
On Mon, Aug 14, 2023 at 03:49:05PM +, Tim King-Kehl wrote:
> Hey y'all, got a weird one for you.
> 
> I am unfortunately supporting this really weird Gluster volume, where it's a 
> single-brick/single-node volume that has many files written every minute to 
> it. Its very suboptimal, but I'm kind of stuck with it until the application 
> team migrates off Linux 5 (yes... I know). gluster-server is running version 
> 6.10, gluster-client is mounting via NFS as 3.6 is not compatible with 
> gluster-server 6.10.
> 
> Anyways, the actual content of the volume is maybe a total of 15GB, but the 
> .glusterfs directory is over 900GB, full of backups of the files? I feel that 
> it's probably related to being unable to self-heal because it's only one 
> volume, but then these files still just live out there.

Curious question: If this
 - is a single brick, single node gluster system
 - whose storage is mounted over NFS anyway
Why don't you just convert the whole thing to a standard local filesystem
served via NFS and rip out the gluster bits, since you are not gaining
any of the advantages of it and just get more headaches?

And 15G is a tiny amount of storage by todays measures, so this should
be pretty simple and fast to migrate. Keep the same NFS export path so
the client won't even notice anything ;-)
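
If you want to confirm how much of that 900GB is genuinely orphaned data rather
than hard links to the 15GB of visible files, here is a minimal, read-only
sketch (the brick path is a placeholder):

find /path/to/brick/.glusterfs -type f -links 1 -print0 | du -ch --files0-from=- | tail -n1

Regular files under .glusterfs are normally hard links to the files visible in
the volume, so anything with a link count of 1 is space that only the
.glusterfs tree is still holding on to.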

Kind regards,
   Alex.
-- 
"Opportunity is missed by most people because it is dressed in overalls and
 looks like work."  -- Thomas A. Edison




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 11 is out!

2023-02-08 Thread Gilberto Ferreira
Well
I am sorry!
Apparently it is just the folder structure that has been created.
There's nothing inside.
---
Gilberto Nunes Ferreira






Em ter., 7 de fev. de 2023 às 17:11, Gilberto Ferreira <
gilberto.nune...@gmail.com> escreveu:

> Here we go
>
> https://download.gluster.org/pub/gluster/glusterfs/11/
> ---
> Gilberto Nunes Ferreira
>
>
>
>
>
>
> Em ter., 7 de fev. de 2023 às 17:05, sacawulu 
> escreveu:
>
>> But *is* it out?
>>
>> I don't see it anywhere...
>>
>> MJ
>>
>>
>> Op 07-02-2023 om 18:07 schreef Gilberto Ferreira:
>> > Hello guys!
>> > So what is the good news about this new release?
>> > Is Anybody are using it?
>> >
>> > Thanks for any feedback!
>> >
>> >
>> > ---
>> > Gilberto Nunes Ferreira
>> >
>> >
>> >
>> >
>> >
>> > 
>> >
>> >
>> >
>> > Community Meeting Calendar:
>> >
>> > Schedule -
>> > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> > Bridge: https://meet.google.com/cpu-eiue-hvk
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org
>> > https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 11 is out!

2023-02-07 Thread Gilberto Ferreira
Here we go

https://download.gluster.org/pub/gluster/glusterfs/11/
---
Gilberto Nunes Ferreira






Em ter., 7 de fev. de 2023 às 17:05, sacawulu 
escreveu:

> But *is* it out?
>
> I don't see it anywhere...
>
> MJ
>
>
> Op 07-02-2023 om 18:07 schreef Gilberto Ferreira:
> > Hello guys!
> > So what is the good news about this new release?
> > Is Anybody are using it?
> >
> > Thanks for any feedback!
> >
> >
> > ---
> > Gilberto Nunes Ferreira
> >
> >
> >
> >
> >
> > 
> >
> >
> >
> > Community Meeting Calendar:
> >
> > Schedule -
> > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> > Bridge: https://meet.google.com/cpu-eiue-hvk
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS and sysctl tweaks.

2022-12-18 Thread Gilberto Ferreira
Thanks.

---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






Em dom., 18 de dez. de 2022 às 20:03, Strahil Nikolov 
escreveu:

> oVirt (upstream of RHV which is also KVM-based) uses sharding, which
> reduces sync times as only the changed shards are synced.
>
> Check the virt group's gluster tunables at /var/lib/glusterd/groups/virt .
> Also in the source:
> https://github.com/gluster/glusterfs/blob/devel/extras/group-virt.example
>
> WARNING: ONCE THE SHARDING IS ENABLED, NEVER EVER DISABLE IT !
>
> Best Regards,
> Strahil Nikolov
>
> On Sun, Dec 18, 2022 at 6:51, Gilberto Ferreira
>  wrote:
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS and sysctl tweaks.

2022-12-18 Thread Strahil Nikolov
oVirt (upstream of RHV which is also KVM-based) uses sharding, which reduces 
sync times as only the changed shards are synced.
Check the virt group's gluster tunables at /var/lib/glusterd/groups/virt. Also in
the source:
https://github.com/gluster/glusterfs/blob/devel/extras/group-virt.example
WARNING: ONCE THE SHARDING IS ENABLED, NEVER EVER DISABLE IT !
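
A minimal sketch of applying that whole profile in one go (assuming a volume
named vmdata - substitute your own volume name):

gluster volume set vmdata group virt
gluster volume get vmdata features.shard    # confirm sharding is now enabled

Setting the "virt" group applies every option listed in
/var/lib/glusterd/groups/virt at once, which is essentially what oVirt does
when a volume is optimised for virt storage.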
Best Regards,
Strahil Nikolov

On Sun, Dec 18, 2022 at 6:51, Gilberto Ferreira wrote:



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
  




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS and sysctl tweaks.

2022-12-17 Thread Gilberto Ferreira
On Sat, 17 Dec 2022 at 13:20, Strahil Nikolov wrote:

> Gluster's tuned profile 'rhgs-random-io' has the following :
>
> [main]
> include=throughput-performance
>
> [sysctl]
> vm.dirty_ratio = 5
> vm.dirty_background_ratio = 2
>

Nice


> What kind of workload do you have (sequential IO or not)?
>

It's for kvm images. Which means big files.
My main concern is about healing time between fails.


> Best Regards,
> Strahil Nikolov
>
> On Fri, Dec 16, 2022 at 21:31, Gilberto Ferreira
>  wrote:
> Hello!
>
> Is there any sysctl tuning to improve glusterfs regarding network
> configuration?
>
> Thanks
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
>
>
>
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS and sysctl tweaks.

2022-12-17 Thread Strahil Nikolov
Gluster's tuned profile 'rhgs-random-io' has the following:

[main]
include=throughput-performance

[sysctl]
vm.dirty_ratio = 5
vm.dirty_background_ratio = 2
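
For reference, a minimal sketch of trying the same values at runtime before
persisting them in a file under /etc/sysctl.d/:

sysctl -w vm.dirty_ratio=5
sysctl -w vm.dirty_background_ratio=2
sysctl vm.dirty_ratio vm.dirty_background_ratio    # verify the running values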
What kind of workload do you have (sequential IO or not)?

Best Regards,
Strahil Nikolov

On Fri, Dec 16, 2022 at 21:31, Gilberto Ferreira wrote:
Hello!

Is there any sysctl tuning to improve glusterfs regarding network configuration?

Thanks
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram













Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
  




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS mount crash

2022-12-02 Thread Angel Docampo
Yes, I'm on 10.3 on a brand-new installation (i.e. no upgrade or anything of
the sort).

OK, I've finally read how to get core dumps on Debian. Soft limits are 0 by
default, so no core dumps can be generated. I've set the soft ulimit to
unlimited and killed a test process with a SIGSEGV signal, and then I was able
to see the core dump. Great!
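
For anyone following along, a minimal sketch of that check (plain ulimit only
affects the current shell and its children; the PID below is whatever
throwaway test process you start):

ulimit -c unlimited                    # lift the soft core-size limit for this shell
cat /proc/sys/kernel/core_pattern      # where the kernel will write core dumps
kill -SEGV <pid-of-test-process>       # force a test dump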

I've moved all the qcow2 files to another location but not destroyed the
volume or deleted any files, I will create some test VMs there and leave
this a few days to see if the dump is generated properly next time,
hopefully soon enough.

Thank you Xavi.

*Angel Docampo*

<+34-93-1592929>


El jue, 1 dic 2022 a las 18:02, Xavi Hernandez ()
escribió:

> I'm not so sure the problem is with sharding. Basically it's saying that
> seek is not supported, which means that something between shard and the
> bricks doesn't support it. DHT didn't support seek before 10.3, but if I'm
> not wrong you are already using 10.3, so the message is weird. But in any
> case this shouldn't cause a crash. The stack trace seems to indicate that
> the crash happens inside disperse, but without a core dump there's little
> more that I can do.
>
>
>
> On Thu, Dec 1, 2022 at 5:27 PM Angel Docampo 
> wrote:
>
>> Well, that last more time, but it crashed once again, same node, same
>> mountpoint... fortunately, I've moved preventively all the VMs to the
>> underlying ZFS filesystem those past days, so none of them have been
>> affected this time...
>>
>> dmesg show this
>> [2022-12-01 15:49:54]  INFO: task iou-wrk-637144:946532 blocked for more
>> than 120 seconds.
>> [2022-12-01 15:49:54]Tainted: P  IO  5.15.74-1-pve #1
>> [2022-12-01 15:49:54]  "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>> disables this message.
>> [2022-12-01 15:49:54]  task:iou-wrk-637144  state:D stack:0
>> pid:946532 ppid: 1 flags:0x4000
>> [2022-12-01 15:49:54]  Call Trace:
>> [2022-12-01 15:49:54]   
>> [2022-12-01 15:49:54]   __schedule+0x34e/0x1740
>> [2022-12-01 15:49:54]   ? kmem_cache_free+0x271/0x290
>> [2022-12-01 15:49:54]   ? mempool_free_slab+0x17/0x20
>> [2022-12-01 15:49:54]   schedule+0x69/0x110
>> [2022-12-01 15:49:54]   rwsem_down_write_slowpath+0x231/0x4f0
>> [2022-12-01 15:49:54]   ? ttwu_queue_wakelist+0x40/0x1c0
>> [2022-12-01 15:49:54]   down_write+0x47/0x60
>> [2022-12-01 15:49:54]   fuse_file_write_iter+0x1a3/0x430
>> [2022-12-01 15:49:54]   ? apparmor_file_permission+0x70/0x170
>> [2022-12-01 15:49:54]   io_write+0xf6/0x330
>> [2022-12-01 15:49:54]   ? update_cfs_group+0x9c/0xc0
>> [2022-12-01 15:49:54]   ? dequeue_entity+0xd8/0x490
>> [2022-12-01 15:49:54]   io_issue_sqe+0x401/0x1fc0
>> [2022-12-01 15:49:54]   ? lock_timer_base+0x3b/0xd0
>> [2022-12-01 15:49:54]   io_wq_submit_work+0x76/0xd0
>> [2022-12-01 15:49:54]   io_worker_handle_work+0x1a7/0x5f0
>> [2022-12-01 15:49:54]   io_wqe_worker+0x2c0/0x360
>> [2022-12-01 15:49:54]   ? finish_task_switch.isra.0+0x7e/0x2b0
>> [2022-12-01 15:49:54]   ? io_worker_handle_work+0x5f0/0x5f0
>> [2022-12-01 15:49:54]   ? io_worker_handle_work+0x5f0/0x5f0
>> [2022-12-01 15:49:54]   ret_from_fork+0x1f/0x30
>> [2022-12-01 15:49:54]  RIP: 0033:0x0
>> [2022-12-01 15:49:54]  RSP: 002b: EFLAGS: 0207
>> [2022-12-01 15:49:54]  RAX:  RBX: 0011 RCX:
>> 
>> [2022-12-01 15:49:54]  RDX: 0001 RSI: 0001 RDI:
>> 0120
>> [2022-12-01 15:49:54]  RBP: 0120 R08: 0001 R09:
>> 00f0
>> [2022-12-01 15:49:54]  R10: 00f8 R11: 0001239a4128 R12:
>> db90
>> [2022-12-01 15:49:54]  R13: 0001 R14: 0001 R15:
>> 0100
>> [2022-12-01 15:49:54]   
>>
>> My gluster volume log shows plenty of error like this
>> The message "I [MSGID: 133017] [shard.c:7275:shard_seek] 0-vmdata-shard:
>> seek called on 73f0ad95-f7e3-4a68-8d08-9f7e03182baa. [Operation not
>> supported]" repeated 1564 times between [2022-12-01 00:20:09.578233 +]
>> and [2022-12-01 00:22:09.436927 +]
>> [2022-12-01 00:22:09.516269 +] I [MSGID: 133017]
>> [shard.c:7275:shard_seek] 0-vmdata-shard: seek called on
>> 73f0ad95-f7e3-4a68-8d08-9f7e03182baa. [Operation not supported]
>>
>> and of this
>> [2022-12-01 09:05:08.525867 +] I [MSGID: 133017]
>> [shard.c:7275:shard_seek] 0-vmdata-shard: seek called on
>> 3ed993c4-bbb5-4938-86e9-6d22b8541e8e. [Operation not supported]
>>
>> Then simply the same
>> pending frames:
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(1) op(FSYNC)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op

Re: [Gluster-users] GlusterFS mount crash

2022-12-01 Thread Xavi Hernandez
I'm not so sure the problem is with sharding. Basically it's saying that
seek is not supported, which means that something between shard and the
bricks doesn't support it. DHT didn't support seek before 10.3, but if I'm
not wrong you are already using 10.3, so the message is weird. But in any
case this shouldn't cause a crash. The stack trace seems to indicate that
the crash happens inside disperse, but without a core dump there's little
more that I can do.



On Thu, Dec 1, 2022 at 5:27 PM Angel Docampo 
wrote:

> Well, that last more time, but it crashed once again, same node, same
> mountpoint... fortunately, I've moved preventively all the VMs to the
> underlying ZFS filesystem those past days, so none of them have been
> affected this time...
>
> dmesg show this
> [2022-12-01 15:49:54]  INFO: task iou-wrk-637144:946532 blocked for more
> than 120 seconds.
> [2022-12-01 15:49:54]Tainted: P  IO  5.15.74-1-pve #1
> [2022-12-01 15:49:54]  "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [2022-12-01 15:49:54]  task:iou-wrk-637144  state:D stack:0 pid:946532
> ppid: 1 flags:0x4000
> [2022-12-01 15:49:54]  Call Trace:
> [2022-12-01 15:49:54]   
> [2022-12-01 15:49:54]   __schedule+0x34e/0x1740
> [2022-12-01 15:49:54]   ? kmem_cache_free+0x271/0x290
> [2022-12-01 15:49:54]   ? mempool_free_slab+0x17/0x20
> [2022-12-01 15:49:54]   schedule+0x69/0x110
> [2022-12-01 15:49:54]   rwsem_down_write_slowpath+0x231/0x4f0
> [2022-12-01 15:49:54]   ? ttwu_queue_wakelist+0x40/0x1c0
> [2022-12-01 15:49:54]   down_write+0x47/0x60
> [2022-12-01 15:49:54]   fuse_file_write_iter+0x1a3/0x430
> [2022-12-01 15:49:54]   ? apparmor_file_permission+0x70/0x170
> [2022-12-01 15:49:54]   io_write+0xf6/0x330
> [2022-12-01 15:49:54]   ? update_cfs_group+0x9c/0xc0
> [2022-12-01 15:49:54]   ? dequeue_entity+0xd8/0x490
> [2022-12-01 15:49:54]   io_issue_sqe+0x401/0x1fc0
> [2022-12-01 15:49:54]   ? lock_timer_base+0x3b/0xd0
> [2022-12-01 15:49:54]   io_wq_submit_work+0x76/0xd0
> [2022-12-01 15:49:54]   io_worker_handle_work+0x1a7/0x5f0
> [2022-12-01 15:49:54]   io_wqe_worker+0x2c0/0x360
> [2022-12-01 15:49:54]   ? finish_task_switch.isra.0+0x7e/0x2b0
> [2022-12-01 15:49:54]   ? io_worker_handle_work+0x5f0/0x5f0
> [2022-12-01 15:49:54]   ? io_worker_handle_work+0x5f0/0x5f0
> [2022-12-01 15:49:54]   ret_from_fork+0x1f/0x30
> [2022-12-01 15:49:54]  RIP: 0033:0x0
> [2022-12-01 15:49:54]  RSP: 002b: EFLAGS: 0207
> [2022-12-01 15:49:54]  RAX:  RBX: 0011 RCX:
> 
> [2022-12-01 15:49:54]  RDX: 0001 RSI: 0001 RDI:
> 0120
> [2022-12-01 15:49:54]  RBP: 0120 R08: 0001 R09:
> 00f0
> [2022-12-01 15:49:54]  R10: 00f8 R11: 0001239a4128 R12:
> db90
> [2022-12-01 15:49:54]  R13: 0001 R14: 0001 R15:
> 0100
> [2022-12-01 15:49:54]   
>
> My gluster volume log shows plenty of error like this
> The message "I [MSGID: 133017] [shard.c:7275:shard_seek] 0-vmdata-shard:
> seek called on 73f0ad95-f7e3-4a68-8d08-9f7e03182baa. [Operation not
> supported]" repeated 1564 times between [2022-12-01 00:20:09.578233 +]
> and [2022-12-01 00:22:09.436927 +]
> [2022-12-01 00:22:09.516269 +] I [MSGID: 133017]
> [shard.c:7275:shard_seek] 0-vmdata-shard: seek called on
> 73f0ad95-f7e3-4a68-8d08-9f7e03182baa. [Operation not supported]
>
> and of this
> [2022-12-01 09:05:08.525867 +] I [MSGID: 133017]
> [shard.c:7275:shard_seek] 0-vmdata-shard: seek called on
> 3ed993c4-bbb5-4938-86e9-6d22b8541e8e. [Operation not supported]
>
> Then simply the same
> pending frames:
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(1) op(FSYNC)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> patchset: git://git.gluster.org/glusterfs.git
> signal received: 11
> time of crash:
> 2022-12-01 14:45:14 +
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> libpthread 1
> llistxattr 1
> setfsid 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 10.3
> /lib/x86_64-linux-gnu/libglusterfs.so.0(+0x28a54)[0x7f1e23db3a54]
> /lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x700)[0x7f1e23dbbfc0]
>
> /lib/x86_64-linux-gnu/libc.so.6(+0x38d60)[0x7f1e23b76d60]
> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x37a14)[0x7f1e200e9a14]
>
> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f1e200cb414]
>
> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0xd072)[0x7f1e200bf072]
>
> /usr/lib/x86_

Re: [Gluster-users] GlusterFS mount crash

2022-12-01 Thread Angel Docampo
Well, that lasted more time, but it crashed once again, same node, same
mountpoint... fortunately, I preventively moved all the VMs to the
underlying ZFS filesystem these past days, so none of them have been
affected this time...

dmesg show this
[2022-12-01 15:49:54]  INFO: task iou-wrk-637144:946532 blocked for more
than 120 seconds.
[2022-12-01 15:49:54]Tainted: P  IO  5.15.74-1-pve #1
[2022-12-01 15:49:54]  "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[2022-12-01 15:49:54]  task:iou-wrk-637144  state:D stack:0 pid:946532
ppid: 1 flags:0x4000
[2022-12-01 15:49:54]  Call Trace:
[2022-12-01 15:49:54]   
[2022-12-01 15:49:54]   __schedule+0x34e/0x1740
[2022-12-01 15:49:54]   ? kmem_cache_free+0x271/0x290
[2022-12-01 15:49:54]   ? mempool_free_slab+0x17/0x20
[2022-12-01 15:49:54]   schedule+0x69/0x110
[2022-12-01 15:49:54]   rwsem_down_write_slowpath+0x231/0x4f0
[2022-12-01 15:49:54]   ? ttwu_queue_wakelist+0x40/0x1c0
[2022-12-01 15:49:54]   down_write+0x47/0x60
[2022-12-01 15:49:54]   fuse_file_write_iter+0x1a3/0x430
[2022-12-01 15:49:54]   ? apparmor_file_permission+0x70/0x170
[2022-12-01 15:49:54]   io_write+0xf6/0x330
[2022-12-01 15:49:54]   ? update_cfs_group+0x9c/0xc0
[2022-12-01 15:49:54]   ? dequeue_entity+0xd8/0x490
[2022-12-01 15:49:54]   io_issue_sqe+0x401/0x1fc0
[2022-12-01 15:49:54]   ? lock_timer_base+0x3b/0xd0
[2022-12-01 15:49:54]   io_wq_submit_work+0x76/0xd0
[2022-12-01 15:49:54]   io_worker_handle_work+0x1a7/0x5f0
[2022-12-01 15:49:54]   io_wqe_worker+0x2c0/0x360
[2022-12-01 15:49:54]   ? finish_task_switch.isra.0+0x7e/0x2b0
[2022-12-01 15:49:54]   ? io_worker_handle_work+0x5f0/0x5f0
[2022-12-01 15:49:54]   ? io_worker_handle_work+0x5f0/0x5f0
[2022-12-01 15:49:54]   ret_from_fork+0x1f/0x30
[2022-12-01 15:49:54]  RIP: 0033:0x0
[2022-12-01 15:49:54]  RSP: 002b: EFLAGS: 0207
[2022-12-01 15:49:54]  RAX:  RBX: 0011 RCX:

[2022-12-01 15:49:54]  RDX: 0001 RSI: 0001 RDI:
0120
[2022-12-01 15:49:54]  RBP: 0120 R08: 0001 R09:
00f0
[2022-12-01 15:49:54]  R10: 00f8 R11: 0001239a4128 R12:
db90
[2022-12-01 15:49:54]  R13: 0001 R14: 0001 R15:
0100
[2022-12-01 15:49:54]   

My gluster volume log shows plenty of error like this
The message "I [MSGID: 133017] [shard.c:7275:shard_seek] 0-vmdata-shard:
seek called on 73f0ad95-f7e3-4a68-8d08-9f7e03182baa. [Operation not
supported]" repeated 1564 times between [2022-12-01 00:20:09.578233 +]
and [2022-12-01 00:22:09.436927 +]
[2022-12-01 00:22:09.516269 +] I [MSGID: 133017]
[shard.c:7275:shard_seek] 0-vmdata-shard: seek called on
73f0ad95-f7e3-4a68-8d08-9f7e03182baa. [Operation not supported]

and of this
[2022-12-01 09:05:08.525867 +] I [MSGID: 133017]
[shard.c:7275:shard_seek] 0-vmdata-shard: seek called on
3ed993c4-bbb5-4938-86e9-6d22b8541e8e. [Operation not supported]

Then simply the same
pending frames:
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(1) op(FSYNC)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
patchset: git://git.gluster.org/glusterfs.git
signal received: 11
time of crash:
2022-12-01 14:45:14 +
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 10.3
/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x28a54)[0x7f1e23db3a54]
/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x700)[0x7f1e23dbbfc0]

/lib/x86_64-linux-gnu/libc.so.6(+0x38d60)[0x7f1e23b76d60]
/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x37a14)[0x7f1e200e9a14]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f1e200cb414]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0xd072)[0x7f1e200bf072]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/performance/readdir-ahead.so(+0x316d)[0x7f1e200a316d]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/distribute.so(+0x5bdd4)[0x7f1e197aadd4]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/features/shard.so(+0x1e69c)[0x7f1e2008b69c]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/features/shard.so(+0x16551)[0x7f1e20083551]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/features/shard.so(+0x25abf)[0x7f1e20092abf]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/features/shard.so(+0x25d21)[0x7f1e20092d21]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/features/shard.so(+0x167be)[0x7f1e200837be]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/features/shard.so(+0x1c178)[0x7f1

Re: [Gluster-users] GlusterFS mount crash

2022-11-25 Thread Angel Docampo
I also noticed that loop0... AFAIK, I wasn't using any loop
device, at least not consciously.
After looking for the same messages on the other gluster/proxmox nodes, I
saw no trace of it.
Then I saw that on that node there is a single LXC container, whose disk is
living on the glusterfs and, effectively, is using ext4.
After the crash today, I was unable to boot it up again, and the logs
became silent. I just tried to boot it up, and this immediately appeared
in dmesg
[2022-11-25 18:04:18]  loop0: detected capacity change from 0 to 16777216
[2022-11-25 18:04:18]  EXT4-fs (loop0): error loading journal
[2022-11-25 18:05:26]  loop0: detected capacity change from 0 to 16777216
[2022-11-25 18:05:26]  EXT4-fs (loop0): INFO: recovery required on readonly
filesystem
[2022-11-25 18:05:26]  EXT4-fs (loop0): write access unavailable, cannot
proceed (try mounting with noload)

And the LXC container didn't boot up. I've manually moved the LXC container
to the underlying ZFS where gluster lives, and the LXC booted up and the
dmesg log shows
[2022-11-25 18:24:06]  loop0: detected capacity change from 0 to 16777216
[2022-11-25 18:24:06]  EXT4-fs warning (device loop0):
ext4_multi_mount_protect:326: MMP interval 42 higher than expected, please
wait.
[2022-11-25 18:24:50]  EXT4-fs (loop0): mounted filesystem with ordered
data mode. Opts: (null). Quota mode: none.

So, to recapitulate:
- the loop device on the host belongs to the LXC container; not surprising, but I
didn't know it.
- the LXC container had a lot of I/O issues just before the two crashes,
the crash from today and the crash 4 days ago, this Monday
- as a side note, this gluster has been in production since last Thursday, so the
first crash came exactly 4 days after this LXC was started with its storage
on the gluster, and exactly 4 days after that, it crashed again.
- these crashes began to happen after the upgrade to gluster 10.3; it
was working just fine with former versions of gluster (from 3.X to 9.X),
and from proxmox 5.X to proxmox 7.1, when all the issues began; now I'm on
proxmox 7.2.
- the underlying ZFS where gluster sits has no ZIL or SLOG devices (it had them
before the upgrade to gluster 10.3, but as I had to re-create the gluster, I
decided not to add them because all my disks are SSD, so there is no need for
any of those); I added them back to test whether the LXC container caused the
same issues, it did, so they don't seem to make any difference.
- there are more loop0 I/O errors in dmesg besides the days of the
crashes, but there is just "one" error per day, and not every day; on the
days the gluster mountpoint became inaccessible, there are tens of errors per
millisecond just before the crash

I'm going to get rid of that LXC; as I'm now migrating from VMs to K8s
(living in a VM cluster inside proxmox), I was ready to convert this one as
well, and now it's a must.

I don't know if anyone at gluster can replicate this scenario (proxmox +
gluster distributed disperse + LXC on a gluster directory) to see if it
is reproducible. I know this must be a corner case; I'm just wondering why it
stopped working, and whether it is a bug in GlusterFS 10.3, a bug in LXC, or in
Proxmox 7.1 upwards (where I'm going to post this now, although Proxmox probably
won't be interested, as they explicitly suggest mounting glusterfs with the
gluster client and not mapping a directory where gluster is mounted via
fstab).

Thank you a lot, Xavi. I will monitor dmesg to make sure all those loop
errors disappear, and hopefully I won't have a crash next Tuesday. :)

*Angel Docampo*

<+34-93-1592929>


El vie, 25 nov 2022 a las 13:25, Xavi Hernandez ()
escribió:

> What is "loop0" it seems it's having some issue. Does it point to a
> Gluster file ?
>
> I also see that there's an io_uring thread in D state. If that one belongs
> to Gluster, it may explain why systemd was unable to generate a core dump
> (all threads need to be stopped to generate a core dump, but a thread
> blocked inside the kernel cannot be stopped).
>
> If you are using io_uring in Gluster, maybe you can disable it to see if
> it's related.
>
> Xavi
>
> On Fri, Nov 25, 2022 at 11:39 AM Angel Docampo <
> angel.doca...@eoniantec.com> wrote:
>
>> Well, just happened again, the same server, the same mountpoint.
>>
>> I'm unable to get the core dumps, coredumpctl says there are no core
>> dumps, it would be funny if I wasn't the one suffering it, but
>> systemd-coredump service crashed as well
>> ● systemd-coredump@0-3199871-0.service - Process Core Dump (PID
>> 3199871/UID 0)
>> Loaded: loaded (/lib/systemd/system/systemd-coredump@.service;
>> static)
>> Active: failed (Result: timeout) since Fri 2022-11-25 10:54:59 CET;
>> 39min ago
>> TriggeredBy: ● systemd-coredump.socket
>>   Docs: man:systemd-coredump(8)
>>Process: 3199873 ExecS

Re: [Gluster-users] GlusterFS mount crash

2022-11-25 Thread Xavi Hernandez
What is "loop0" it seems it's having some issue. Does it point to a Gluster
file ?

I also see that there's an io_uring thread in D state. If that one belongs
to Gluster, it may explain why systemd was unable to generate a core dump
(all threads need to be stopped to generate a core dump, but a thread
blocked inside the kernel cannot be stopped).

If you are using io_uring in Gluster, maybe you can disable it to see if
it's related.
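
A sketch of what that check could look like, assuming the volume is named
vmdata as in the logs, and assuming io_uring is exposed through the
storage.linux-io_uring volume option introduced around Gluster 9 (verify the
exact option name on your version first):

gluster volume get vmdata all | grep -i io_uring
gluster volume set vmdata storage.linux-io_uring off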

Xavi

On Fri, Nov 25, 2022 at 11:39 AM Angel Docampo 
wrote:

> Well, just happened again, the same server, the same mountpoint.
>
> I'm unable to get the core dumps, coredumpctl says there are no core
> dumps, it would be funny if I wasn't the one suffering it, but
> systemd-coredump service crashed as well
> ● systemd-coredump@0-3199871-0.service - Process Core Dump (PID
> 3199871/UID 0)
> Loaded: loaded (/lib/systemd/system/systemd-coredump@.service;
> static)
> Active: failed (Result: timeout) since Fri 2022-11-25 10:54:59 CET;
> 39min ago
> TriggeredBy: ● systemd-coredump.socket
>   Docs: man:systemd-coredump(8)
>Process: 3199873 ExecStart=/lib/systemd/systemd-coredump (code=killed,
> signal=TERM)
>   Main PID: 3199873 (code=killed, signal=TERM)
>CPU: 15ms
>
> Nov 25 10:49:59 pve02 systemd[1]: Started Process Core Dump (PID
> 3199871/UID 0).
> Nov 25 10:54:59 pve02 systemd[1]: systemd-coredump@0-3199871-0.service:
> Service reached runtime time limit. Stopping.
> Nov 25 10:54:59 pve02 systemd[1]: systemd-coredump@0-3199871-0.service:
> Failed with result 'timeout'.
>
>
> I just saw the exception on dmesg,
> [2022-11-25 10:50:08]  INFO: task kmmpd-loop0:681644 blocked for more than
> 120 seconds.
> [2022-11-25 10:50:08]Tainted: P  IO  5.15.60-2-pve #1
> [2022-11-25 10:50:08]  "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [2022-11-25 10:50:08]  task:kmmpd-loop0 state:D stack:0 pid:681644
> ppid: 2 flags:0x4000
> [2022-11-25 10:50:08]  Call Trace:
> [2022-11-25 10:50:08]   
> [2022-11-25 10:50:08]   __schedule+0x33d/0x1750
> [2022-11-25 10:50:08]   ? bit_wait+0x70/0x70
> [2022-11-25 10:50:08]   schedule+0x4e/0xc0
> [2022-11-25 10:50:08]   io_schedule+0x46/0x80
> [2022-11-25 10:50:08]   bit_wait_io+0x11/0x70
> [2022-11-25 10:50:08]   __wait_on_bit+0x31/0xa0
> [2022-11-25 10:50:08]   out_of_line_wait_on_bit+0x8d/0xb0
> [2022-11-25 10:50:08]   ? var_wake_function+0x30/0x30
> [2022-11-25 10:50:08]   __wait_on_buffer+0x34/0x40
> [2022-11-25 10:50:08]   write_mmp_block+0x127/0x180
> [2022-11-25 10:50:08]   kmmpd+0x1b9/0x430
> [2022-11-25 10:50:08]   ? write_mmp_block+0x180/0x180
> [2022-11-25 10:50:08]   kthread+0x127/0x150
> [2022-11-25 10:50:08]   ? set_kthread_struct+0x50/0x50
> [2022-11-25 10:50:08]   ret_from_fork+0x1f/0x30
> [2022-11-25 10:50:08]   
> [2022-11-25 10:50:08]  INFO: task iou-wrk-1511979:3200401 blocked for
> more than 120 seconds.
> [2022-11-25 10:50:08]Tainted: P  IO  5.15.60-2-pve #1
> [2022-11-25 10:50:08]  "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [2022-11-25 10:50:08]  task:iou-wrk-1511979 state:D stack:0
> pid:3200401 ppid: 1 flags:0x4000
> [2022-11-25 10:50:08]  Call Trace:
> [2022-11-25 10:50:08]   
> [2022-11-25 10:50:08]   __schedule+0x33d/0x1750
> [2022-11-25 10:50:08]   schedule+0x4e/0xc0
> [2022-11-25 10:50:08]   rwsem_down_write_slowpath+0x231/0x4f0
> [2022-11-25 10:50:08]   down_write+0x47/0x60
> [2022-11-25 10:50:08]   fuse_file_write_iter+0x1a3/0x430
> [2022-11-25 10:50:08]   ? apparmor_file_permission+0x70/0x170
> [2022-11-25 10:50:08]   io_write+0xfb/0x320
> [2022-11-25 10:50:08]   ? put_dec+0x1c/0xa0
> [2022-11-25 10:50:08]   io_issue_sqe+0x401/0x1fc0
> [2022-11-25 10:50:08]   io_wq_submit_work+0x76/0xd0
> [2022-11-25 10:50:08]   io_worker_handle_work+0x1a7/0x5f0
> [2022-11-25 10:50:08]   io_wqe_worker+0x2c0/0x360
> [2022-11-25 10:50:08]   ? finish_task_switch.isra.0+0x7e/0x2b0
> [2022-11-25 10:50:08]   ? io_worker_handle_work+0x5f0/0x5f0
> [2022-11-25 10:50:08]   ? io_worker_handle_work+0x5f0/0x5f0
> [2022-11-25 10:50:08]   ret_from_fork+0x1f/0x30
> [2022-11-25 10:50:08]  RIP: 0033:0x0
> [2022-11-25 10:50:08]  RSP: 002b: EFLAGS: 0216
> ORIG_RAX: 01aa
> [2022-11-25 10:50:08]  RAX:  RBX: 7fdb1efef640 RCX:
> 7fdd59f872e9
> [2022-11-25 10:50:08]  RDX:  RSI: 0001 RDI:
> 0011
> [2022-11-25 10:50:08]  RBP:  R08:  R09:
> 0008
> [2022-11-25 10:50:08]  R10:  R11: 0216 R12:
> 55662e5bd268
> [2022-11-25 10:50:08]  R13: 55662e5bd320 R14: 55662e5bd260 R15:
> 
> [2022-11-25 10:50:08]   
> [2022-11-25 10:52:08]  INFO: task kmmpd-loop0:681644 blocked for more than
> 241 seconds.
> [2022-11-25 10:52:08]Tainted: P  IO  5.15.60-2-pve #1
> [2022-11-25 10:52:08]  "echo 0 > /proc/sys/kernel/hun

Re: [Gluster-users] GlusterFS mount crash

2022-11-25 Thread Angel Docampo
Well, it just happened again, the same server, the same mountpoint.

I'm unable to get the core dumps; coredumpctl says there are no core dumps.
It would be funny if I weren't the one suffering it, but the systemd-coredump
service crashed as well:
● systemd-coredump@0-3199871-0.service - Process Core Dump (PID 3199871/UID
0)
Loaded: loaded (/lib/systemd/system/systemd-coredump@.service; static)
Active: failed (Result: timeout) since Fri 2022-11-25 10:54:59 CET;
39min ago
TriggeredBy: ● systemd-coredump.socket
  Docs: man:systemd-coredump(8)
   Process: 3199873 ExecStart=/lib/systemd/systemd-coredump (code=killed,
signal=TERM)
  Main PID: 3199873 (code=killed, signal=TERM)
   CPU: 15ms

Nov 25 10:49:59 pve02 systemd[1]: Started Process Core Dump (PID
3199871/UID 0).
Nov 25 10:54:59 pve02 systemd[1]: systemd-coredump@0-3199871-0.service:
Service reached runtime time limit. Stopping.
Nov 25 10:54:59 pve02 systemd[1]: systemd-coredump@0-3199871-0.service:
Failed with result 'timeout'.
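
In case it helps, the five-minute window in that log matches systemd-coredump's
default RuntimeMaxSec, so the dump was likely cut off for taking too long to
write; a minimal sketch of relaxing that limit with a drop-in (an assumption
worth checking against your unit file before relying on it):

mkdir -p /etc/systemd/system/systemd-coredump@.service.d
cat > /etc/systemd/system/systemd-coredump@.service.d/override.conf <<'EOF'
[Service]
RuntimeMaxSec=infinity
EOF
systemctl daemon-reload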


I just saw the exception on dmesg,
[2022-11-25 10:50:08]  INFO: task kmmpd-loop0:681644 blocked for more than
120 seconds.
[2022-11-25 10:50:08]Tainted: P  IO  5.15.60-2-pve #1
[2022-11-25 10:50:08]  "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[2022-11-25 10:50:08]  task:kmmpd-loop0 state:D stack:0 pid:681644
ppid: 2 flags:0x4000
[2022-11-25 10:50:08]  Call Trace:
[2022-11-25 10:50:08]   
[2022-11-25 10:50:08]   __schedule+0x33d/0x1750
[2022-11-25 10:50:08]   ? bit_wait+0x70/0x70
[2022-11-25 10:50:08]   schedule+0x4e/0xc0
[2022-11-25 10:50:08]   io_schedule+0x46/0x80
[2022-11-25 10:50:08]   bit_wait_io+0x11/0x70
[2022-11-25 10:50:08]   __wait_on_bit+0x31/0xa0
[2022-11-25 10:50:08]   out_of_line_wait_on_bit+0x8d/0xb0
[2022-11-25 10:50:08]   ? var_wake_function+0x30/0x30
[2022-11-25 10:50:08]   __wait_on_buffer+0x34/0x40
[2022-11-25 10:50:08]   write_mmp_block+0x127/0x180
[2022-11-25 10:50:08]   kmmpd+0x1b9/0x430
[2022-11-25 10:50:08]   ? write_mmp_block+0x180/0x180
[2022-11-25 10:50:08]   kthread+0x127/0x150
[2022-11-25 10:50:08]   ? set_kthread_struct+0x50/0x50
[2022-11-25 10:50:08]   ret_from_fork+0x1f/0x30
[2022-11-25 10:50:08]   
[2022-11-25 10:50:08]  INFO: task iou-wrk-1511979:3200401 blocked for more
than 120 seconds.
[2022-11-25 10:50:08]Tainted: P  IO  5.15.60-2-pve #1
[2022-11-25 10:50:08]  "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[2022-11-25 10:50:08]  task:iou-wrk-1511979 state:D stack:0 pid:3200401
ppid: 1 flags:0x4000
[2022-11-25 10:50:08]  Call Trace:
[2022-11-25 10:50:08]   
[2022-11-25 10:50:08]   __schedule+0x33d/0x1750
[2022-11-25 10:50:08]   schedule+0x4e/0xc0
[2022-11-25 10:50:08]   rwsem_down_write_slowpath+0x231/0x4f0
[2022-11-25 10:50:08]   down_write+0x47/0x60
[2022-11-25 10:50:08]   fuse_file_write_iter+0x1a3/0x430
[2022-11-25 10:50:08]   ? apparmor_file_permission+0x70/0x170
[2022-11-25 10:50:08]   io_write+0xfb/0x320
[2022-11-25 10:50:08]   ? put_dec+0x1c/0xa0
[2022-11-25 10:50:08]   io_issue_sqe+0x401/0x1fc0
[2022-11-25 10:50:08]   io_wq_submit_work+0x76/0xd0
[2022-11-25 10:50:08]   io_worker_handle_work+0x1a7/0x5f0
[2022-11-25 10:50:08]   io_wqe_worker+0x2c0/0x360
[2022-11-25 10:50:08]   ? finish_task_switch.isra.0+0x7e/0x2b0
[2022-11-25 10:50:08]   ? io_worker_handle_work+0x5f0/0x5f0
[2022-11-25 10:50:08]   ? io_worker_handle_work+0x5f0/0x5f0
[2022-11-25 10:50:08]   ret_from_fork+0x1f/0x30
[2022-11-25 10:50:08]  RIP: 0033:0x0
[2022-11-25 10:50:08]  RSP: 002b: EFLAGS: 0216
ORIG_RAX: 01aa
[2022-11-25 10:50:08]  RAX:  RBX: 7fdb1efef640 RCX:
7fdd59f872e9
[2022-11-25 10:50:08]  RDX:  RSI: 0001 RDI:
0011
[2022-11-25 10:50:08]  RBP:  R08:  R09:
0008
[2022-11-25 10:50:08]  R10:  R11: 0216 R12:
55662e5bd268
[2022-11-25 10:50:08]  R13: 55662e5bd320 R14: 55662e5bd260 R15:

[2022-11-25 10:50:08]   
[2022-11-25 10:52:08]  INFO: task kmmpd-loop0:681644 blocked for more than
241 seconds.
[2022-11-25 10:52:08]Tainted: P  IO  5.15.60-2-pve #1
[2022-11-25 10:52:08]  "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[2022-11-25 10:52:08]  task:kmmpd-loop0 state:D stack:0 pid:681644
ppid: 2 flags:0x4000
[2022-11-25 10:52:08]  Call Trace:
[2022-11-25 10:52:08]   
[2022-11-25 10:52:08]   __schedule+0x33d/0x1750
[2022-11-25 10:52:08]   ? bit_wait+0x70/0x70
[2022-11-25 10:52:08]   schedule+0x4e/0xc0
[2022-11-25 10:52:08]   io_schedule+0x46/0x80
[2022-11-25 10:52:08]   bit_wait_io+0x11/0x70
[2022-11-25 10:52:08]   __wait_on_bit+0x31/0xa0
[2022-11-25 10:52:08]   out_of_line_wait_on_bit+0x8d/0xb0
[2022-11-25 10:52:08]   ? var_wake_function+0x30/0x30
[2022-11-25 10:52:08]   __wait_on_buffer+0x34/0x40
[2022-11-25 10:52:08]   write_mmp_block

Re: [Gluster-users] GlusterFS mount crash

2022-11-22 Thread Angel Docampo
I've taken a look into all possible places they should be, and I couldn't
find them anywhere. Some people say the dump file is generated where the
application is running... well, I don't know where to look then, and I hope
they haven't been generated on the failed mountpoint.

As Debian 11 has systemd, I've installed systemd-coredump, so in case a new
crash happens, at least I will have the exact location and tool
(coredumpctl) to find the dump, and will then install the debug symbols,
which is particularly tricky on Debian. But I need to wait for it to happen
again; right now the tool says there isn't any core dump on the system.
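
For reference, the rough workflow once systemd-coredump has caught a crash
should be something like this (standard coredumpctl usage, matching on the
glusterfs binary; the debug symbols still have to be installed separately):

coredumpctl list /usr/sbin/glusterfs
coredumpctl gdb /usr/sbin/glusterfs
(gdb) bt full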

Thank you, Xavi, if this happens again (let's hope it won't), I will report
back.

Best regards!

*Angel Docampo*

<+34-93-1592929>


On Tue, 22 Nov 2022 at 10:45, Xavi Hernandez () wrote:

> The crash seems related to some problem in ec xlator, but I don't have
> enough information to determine what it is. The crash should have generated
> a core dump somewhere in the system (I don't know where Debian keeps the
> core dumps). If you find it, you should be able to open it using this
> command (make sure debug symbols package is also installed before running
> it):
>
> # gdb /usr/sbin/glusterfs 
>
> And then run this command:
>
> # bt -full
>
> Regards,
>
> Xavi
>
> On Tue, Nov 22, 2022 at 9:41 AM Angel Docampo 
> wrote:
>
>> Hi Xavi,
>>
>> The OS is Debian 11 with the proxmox kernel. Gluster packages are the
>> official from gluster.org (
>> https://download.gluster.org/pub/gluster/glusterfs/10/10.3/Debian/bullseye/
>> )
>>
>> The system logs showed no other issues by the time of the crash, no OOM
>> kill or whatsoever, and no other process was interacting with the gluster
>> mountpoint besides proxmox.
>>
>> I wasn't running gdb when it crashed, so I don't really know if I can
>> obtain a more detailed trace from logs or if there is a simple way to let
>> it running in the background to see if it happens again (or there is a flag
>> to start the systemd daemon in debug mode).
>>
>> Best,
>>
>> *Angel Docampo*
>>
>> 
>> <+34-93-1592929>
>>
>>
>> On Mon, 21 Nov 2022 at 15:16, Xavi Hernandez () wrote:
>>
>>> Hi Angel,
>>>
>>> On Mon, Nov 21, 2022 at 2:33 PM Angel Docampo <
>>> angel.doca...@eoniantec.com> wrote:
>>>
 Sorry for necrobumping this, but this morning I've suffered this on my
 Proxmox  + GlusterFS cluster. In the log I can see this

 [2022-11-21 07:38:00.213620 +] I [MSGID: 133017]
 [shard.c:7275:shard_seek] 11-vmdata-shard: seek called on
 fbc063cb-874e-475d-b585-f89
 f7518acdd. [Operation not supported]
 pending frames:
 frame : type(1) op(WRITE)
 frame : type(0) op(0)
 frame : type(0) op(0)
 frame : type(0) op(0)
 frame : type(0) op(0)
 frame : type(0) op(0)
 frame : type(0) op(0)
 frame : type(0) op(0)
 frame : type(0) op(0)
 frame : type(0) op(0)
 frame : type(0) op(0)
 frame : type(0) op(0)
 frame : type(0) op(0)
 frame : type(0) op(0)
 frame : type(0) op(0)
 frame : type(0) op(0)
 frame : type(0) op(0)
 ...
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 frame : type(1) op(FSYNC)
 patchset: git://git.gluster.org/glusterfs.git
 signal received: 11
 time of crash:
 2022-11-21 07:38:00 +
 configuration details:
 argp 1
 backtrace 1
 dlfcn 1
 libpthread 1
 llistxattr 1
 setfsid 1
 epoll.h 1
 xattr.h 1
 st_atim.tv_nsec 1
 package-string: glusterfs 10.3
 /lib/x86_64-linux-gnu/libglusterfs.so.0(+0x28a54)[0x7f74f286ba54]
 /lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x700)[0x7f74f2873fc0]

 /lib/x86_64-linux-gnu/libc.so.6(+0x38d60)[0x7f74f262ed60]
 /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x37a14)[0x7f74ecfcea14]

 /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f74ecfb0414]

 /usr/lib/x86_64-linux-gnu/glusterfs/10.3

Re: [Gluster-users] GlusterFS mount crash

2022-11-22 Thread Xavi Hernandez
The crash seems related to some problem in ec xlator, but I don't have
enough information to determine what it is. The crash should have generated
a core dump somewhere in the system (I don't know where Debian keeps the
core dumps). If you find it, you should be able to open it using this
command (make sure debug symbols package is also installed before running
it):

# gdb /usr/sbin/glusterfs 

And then run this command:

# bt -full

Regards,

Xavi

On Tue, Nov 22, 2022 at 9:41 AM Angel Docampo 
wrote:

> Hi Xavi,
>
> The OS is Debian 11 with the proxmox kernel. Gluster packages are the
> official from gluster.org (
> https://download.gluster.org/pub/gluster/glusterfs/10/10.3/Debian/bullseye/
> )
>
> The system logs showed no other issues by the time of the crash, no OOM
> kill or whatsoever, and no other process was interacting with the gluster
> mountpoint besides proxmox.
>
> I wasn't running gdb when it crashed, so I don't really know if I can
> obtain a more detailed trace from logs or if there is a simple way to let
> it running in the background to see if it happens again (or there is a flag
> to start the systemd daemon in debug mode).
>
> Best,
>
> *Angel Docampo*
>
> 
> <+34-93-1592929>
>
>
> On Mon, 21 Nov 2022 at 15:16, Xavi Hernandez () wrote:
>
>> Hi Angel,
>>
>> On Mon, Nov 21, 2022 at 2:33 PM Angel Docampo <
>> angel.doca...@eoniantec.com> wrote:
>>
>>> Sorry for necrobumping this, but this morning I've suffered this on my
>>> Proxmox  + GlusterFS cluster. In the log I can see this
>>>
>>> [2022-11-21 07:38:00.213620 +] I [MSGID: 133017]
>>> [shard.c:7275:shard_seek] 11-vmdata-shard: seek called on
>>> fbc063cb-874e-475d-b585-f89
>>> f7518acdd. [Operation not supported]
>>> pending frames:
>>> frame : type(1) op(WRITE)
>>> frame : type(0) op(0)
>>> frame : type(0) op(0)
>>> frame : type(0) op(0)
>>> frame : type(0) op(0)
>>> frame : type(0) op(0)
>>> frame : type(0) op(0)
>>> frame : type(0) op(0)
>>> frame : type(0) op(0)
>>> frame : type(0) op(0)
>>> frame : type(0) op(0)
>>> frame : type(0) op(0)
>>> frame : type(0) op(0)
>>> frame : type(0) op(0)
>>> frame : type(0) op(0)
>>> frame : type(0) op(0)
>>> frame : type(0) op(0)
>>> ...
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> frame : type(1) op(FSYNC)
>>> patchset: git://git.gluster.org/glusterfs.git
>>> signal received: 11
>>> time of crash:
>>> 2022-11-21 07:38:00 +
>>> configuration details:
>>> argp 1
>>> backtrace 1
>>> dlfcn 1
>>> libpthread 1
>>> llistxattr 1
>>> setfsid 1
>>> epoll.h 1
>>> xattr.h 1
>>> st_atim.tv_nsec 1
>>> package-string: glusterfs 10.3
>>> /lib/x86_64-linux-gnu/libglusterfs.so.0(+0x28a54)[0x7f74f286ba54]
>>> /lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x700)[0x7f74f2873fc0]
>>>
>>> /lib/x86_64-linux-gnu/libc.so.6(+0x38d60)[0x7f74f262ed60]
>>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x37a14)[0x7f74ecfcea14]
>>>
>>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f74ecfb0414]
>>>
>>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x16373)[0x7f74ecfad373]
>>>
>>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x21d59)[0x7f74ecfb8d59]
>>>
>>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x22815)[0x7f74ecfb9815]
>>>
>>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x377d9)[0x7f74ecfce7d9]
>>>
>>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f74ecfb0414]
>>>
>>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x16373)[0x7f74ecfad373]
>>>
>>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x170f9)[0x7f74ecfae0f9]
>>>
>>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x313bb)[0x7f74ecfc83bb]
>>>
>>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/protocol/client.so(+0x48e3a)[0x7f74ed06ce3a]
>>>
>>> /lib/x86_64-linux-gnu/libgfrpc.so.0(+0xfccb)[0x7f74f2816ccb]
>>> /lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x26)[0x7f74f2812646]
>>>
>>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/rpc-transport/socket.so(+0x64c8)[0x7f74ee15f4c8]
>>>
>>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/rpc-transp

Re: [Gluster-users] GlusterFS mount crash

2022-11-22 Thread Angel Docampo
Hi Xavi,

The OS is Debian 11 with the proxmox kernel. Gluster packages are the
official from gluster.org (
https://download.gluster.org/pub/gluster/glusterfs/10/10.3/Debian/bullseye/)

The system logs showed no other issues by the time of the crash, no OOM
kill or whatsoever, and no other process was interacting with the gluster
mountpoint besides proxmox.

I wasn't running gdb when it crashed, so I don't really know if I can
obtain a more detailed trace from logs or if there is a simple way to let
it running in the background to see if it happens again (or there is a flag
to start the systemd daemon in debug mode).

Best,

*Angel Docampo*

<+34-93-1592929>


On Mon, 21 Nov 2022 at 15:16, Xavi Hernandez () wrote:

> Hi Angel,
>
> On Mon, Nov 21, 2022 at 2:33 PM Angel Docampo 
> wrote:
>
>> Sorry for necrobumping this, but this morning I've suffered this on my
>> Proxmox  + GlusterFS cluster. In the log I can see this
>>
>> [2022-11-21 07:38:00.213620 +] I [MSGID: 133017]
>> [shard.c:7275:shard_seek] 11-vmdata-shard: seek called on
>> fbc063cb-874e-475d-b585-f89
>> f7518acdd. [Operation not supported]
>> pending frames:
>> frame : type(1) op(WRITE)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> frame : type(0) op(0)
>> ...
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> frame : type(1) op(FSYNC)
>> patchset: git://git.gluster.org/glusterfs.git
>> signal received: 11
>> time of crash:
>> 2022-11-21 07:38:00 +
>> configuration details:
>> argp 1
>> backtrace 1
>> dlfcn 1
>> libpthread 1
>> llistxattr 1
>> setfsid 1
>> epoll.h 1
>> xattr.h 1
>> st_atim.tv_nsec 1
>> package-string: glusterfs 10.3
>> /lib/x86_64-linux-gnu/libglusterfs.so.0(+0x28a54)[0x7f74f286ba54]
>> /lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x700)[0x7f74f2873fc0]
>>
>> /lib/x86_64-linux-gnu/libc.so.6(+0x38d60)[0x7f74f262ed60]
>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x37a14)[0x7f74ecfcea14]
>>
>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f74ecfb0414]
>>
>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x16373)[0x7f74ecfad373]
>>
>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x21d59)[0x7f74ecfb8d59]
>>
>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x22815)[0x7f74ecfb9815]
>>
>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x377d9)[0x7f74ecfce7d9]
>>
>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f74ecfb0414]
>>
>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x16373)[0x7f74ecfad373]
>>
>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x170f9)[0x7f74ecfae0f9]
>>
>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x313bb)[0x7f74ecfc83bb]
>>
>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/protocol/client.so(+0x48e3a)[0x7f74ed06ce3a]
>>
>> /lib/x86_64-linux-gnu/libgfrpc.so.0(+0xfccb)[0x7f74f2816ccb]
>> /lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x26)[0x7f74f2812646]
>>
>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/rpc-transport/socket.so(+0x64c8)[0x7f74ee15f4c8]
>>
>> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/rpc-transport/socket.so(+0xd38c)[0x7f74ee16638c]
>>
>> /lib/x86_64-linux-gnu/libglusterfs.so.0(+0x7971d)[0x7f74f28bc71d]
>> /lib/x86_64-linux-gnu/libpthread.so.0(+0x7ea7)[0x7f74f27d2ea7]
>> /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f74f26f2aef]
>> -
>> The mount point wasn't accessible with the "Transport endpoint is not
>> connected" message and it was shown like this.
>> d?   ? ???? vmdata
>>
>> I had to stop all the VMs on that proxmox node, then stop the gluster
>> daemon to unmount the directory, and after starting the daemon and
>> re-mounting, all was working again.
>>
>> My gluster volume info returns this
>>
>> Volume Name: vmdata
>> Typ

Re: [Gluster-users] GlusterFS mount crash

2022-11-21 Thread Xavi Hernandez
Hi Angel,

On Mon, Nov 21, 2022 at 2:33 PM Angel Docampo 
wrote:

> Sorry for necrobumping this, but this morning I've suffered this on my
> Proxmox  + GlusterFS cluster. In the log I can see this
>
> [2022-11-21 07:38:00.213620 +] I [MSGID: 133017]
> [shard.c:7275:shard_seek] 11-vmdata-shard: seek called on
> fbc063cb-874e-475d-b585-f89
> f7518acdd. [Operation not supported]
> pending frames:
> frame : type(1) op(WRITE)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> ...
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> frame : type(1) op(FSYNC)
> patchset: git://git.gluster.org/glusterfs.git
> signal received: 11
> time of crash:
> 2022-11-21 07:38:00 +
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> libpthread 1
> llistxattr 1
> setfsid 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 10.3
> /lib/x86_64-linux-gnu/libglusterfs.so.0(+0x28a54)[0x7f74f286ba54]
> /lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x700)[0x7f74f2873fc0]
>
> /lib/x86_64-linux-gnu/libc.so.6(+0x38d60)[0x7f74f262ed60]
> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x37a14)[0x7f74ecfcea14]
>
> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f74ecfb0414]
>
> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x16373)[0x7f74ecfad373]
>
> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x21d59)[0x7f74ecfb8d59]
>
> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x22815)[0x7f74ecfb9815]
>
> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x377d9)[0x7f74ecfce7d9]
>
> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f74ecfb0414]
>
> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x16373)[0x7f74ecfad373]
>
> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x170f9)[0x7f74ecfae0f9]
>
> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x313bb)[0x7f74ecfc83bb]
>
> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/protocol/client.so(+0x48e3a)[0x7f74ed06ce3a]
>
> /lib/x86_64-linux-gnu/libgfrpc.so.0(+0xfccb)[0x7f74f2816ccb]
> /lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x26)[0x7f74f2812646]
>
> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/rpc-transport/socket.so(+0x64c8)[0x7f74ee15f4c8]
>
> /usr/lib/x86_64-linux-gnu/glusterfs/10.3/rpc-transport/socket.so(+0xd38c)[0x7f74ee16638c]
>
> /lib/x86_64-linux-gnu/libglusterfs.so.0(+0x7971d)[0x7f74f28bc71d]
> /lib/x86_64-linux-gnu/libpthread.so.0(+0x7ea7)[0x7f74f27d2ea7]
> /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f74f26f2aef]
> -
> The mount point wasn't accessible with the "Transport endpoint is not
> connected" message and it was shown like this.
> d?   ? ???? vmdata
>
> I had to stop all the VMs on that proxmox node, then stop the gluster
> daemon to unmount the directory, and after starting the daemon and
> re-mounting, all was working again.
>
> My gluster volume info returns this
>
> Volume Name: vmdata
> Type: Distributed-Disperse
> Volume ID: cace5aa4-b13a-4750-8736-aa179c2485e1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x (2 + 1) = 6
> Transport-type: tcp
> Bricks:
> Brick1: g01:/data/brick1/brick
> Brick2: g02:/data/brick2/brick
> Brick3: g03:/data/brick1/brick
> Brick4: g01:/data/brick2/brick
> Brick5: g02:/data/brick1/brick
> Brick6: g03:/data/brick2/brick
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> storage.fips-mode-rchecksum: on
> features.shard: enable
> features.shard-block-size: 256MB
> performance.read-ahead: off
> performance.quick-read: off
> performance.io-cache: off
> server.event-threads: 2
> client.event-threads: 3
> performance.client-io-threads: on
> performance.stat-prefetch: off
> dht.force-readdirp: off
> performance.force-readdirp: off
> network.remote-dio: on
> features.cache-invalidation: on
> performance.parallel-readdir: on
> performance.readdir-ahead: on
>
> Xavi, do you think the open-behind off setting can help somehow? I did try
> to understand what it does (with no luck), and if 

Re: [Gluster-users] GlusterFS mount crash

2022-11-21 Thread Angel Docampo
Sorry for necrobumping this, but this morning I've suffered this on my
Proxmox + GlusterFS cluster. In the log I can see this:

[2022-11-21 07:38:00.213620 +] I [MSGID: 133017]
[shard.c:7275:shard_seek] 11-vmdata-shard: seek called on
fbc063cb-874e-475d-b585-f89
f7518acdd. [Operation not supported]
pending frames:
frame : type(1) op(WRITE)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
...
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
frame : type(1) op(FSYNC)
patchset: git://git.gluster.org/glusterfs.git
signal received: 11
time of crash:
2022-11-21 07:38:00 +
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 10.3
/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x28a54)[0x7f74f286ba54]
/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x700)[0x7f74f2873fc0]

/lib/x86_64-linux-gnu/libc.so.6(+0x38d60)[0x7f74f262ed60]
/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x37a14)[0x7f74ecfcea14]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f74ecfb0414]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x16373)[0x7f74ecfad373]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x21d59)[0x7f74ecfb8d59]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x22815)[0x7f74ecfb9815]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x377d9)[0x7f74ecfce7d9]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x19414)[0x7f74ecfb0414]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x16373)[0x7f74ecfad373]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x170f9)[0x7f74ecfae0f9]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/cluster/disperse.so(+0x313bb)[0x7f74ecfc83bb]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/xlator/protocol/client.so(+0x48e3a)[0x7f74ed06ce3a]

/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xfccb)[0x7f74f2816ccb]
/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x26)[0x7f74f2812646]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/rpc-transport/socket.so(+0x64c8)[0x7f74ee15f4c8]

/usr/lib/x86_64-linux-gnu/glusterfs/10.3/rpc-transport/socket.so(+0xd38c)[0x7f74ee16638c]

/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x7971d)[0x7f74f28bc71d]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7ea7)[0x7f74f27d2ea7]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f74f26f2aef]
-
The mount point wasn't accessible with the "Transport endpoint is not
connected" message and it was shown like this.
d?   ? ???? vmdata

I had to stop all the VMs on that proxmox node, then stop the gluster
daemon to unmount the directory, and after starting the daemon and
re-mounting, all was working again.

My gluster volume info returns this

Volume Name: vmdata
Type: Distributed-Disperse
Volume ID: cace5aa4-b13a-4750-8736-aa179c2485e1
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: g01:/data/brick1/brick
Brick2: g02:/data/brick2/brick
Brick3: g03:/data/brick1/brick
Brick4: g01:/data/brick2/brick
Brick5: g02:/data/brick1/brick
Brick6: g03:/data/brick2/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
features.shard: enable
features.shard-block-size: 256MB
performance.read-ahead: off
performance.quick-read: off
performance.io-cache: off
server.event-threads: 2
client.event-threads: 3
performance.client-io-threads: on
performance.stat-prefetch: off
dht.force-readdirp: off
performance.force-readdirp: off
network.remote-dio: on
features.cache-invalidation: on
performance.parallel-readdir: on
performance.readdir-ahead: on

Xavi, do you think the open-behind off setting can help somehow? I did try
to understand what it does (with no luck), and whether it could impact the
performance of my VMs (I have the setup you know so well ;)).
I would like to avoid more crashes like this; Gluster 10.3 had been working
quite well for two weeks, until this morning.

*Angel Docampo*


Re: [Gluster-users] glusterfs on ovirt 4.3 problem after multiple power outages.

2022-10-23 Thread Strahil Nikolov
I wouldn't do that without having a clearer picture of the problem.

Usually those '.prob-uuid' files are nothing more than probe files : 
https://github.com/oVirt/ioprocess/blob/ae379c8de83b28d73b6bd42d84e4e942821a7753/src/exported-functions.c#L867-L873
 and deleting the old entries should not affect oVirt at all (should != will 
not).
If your oVirt is used for production, I would delete them during low-traffic
hours or a planned maintenance window.
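
As a cautious first step, you could just list (not yet delete) the stale
probe files on each data brick before removing anything - a sketch assuming
the brick root is /data/brick and "stale" means older than a day (adjust both):

find /data/brick -name '.prob-*' -mtime +1 -ls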

Can you provide the output of 'gluster volume heal volumeX info' in separate 
files ?

Best Regards,
Strahil Nikolov

On Wednesday, 19 October 2022 at 10:00:33 GMT+3, Γιώργος Βασιλόπουλος wrote:
 
  
  I have already done this; it didn't seem to help.
 
could resetting the arbiter brick be a solution?
 
 On 10/19/22 01:40, Strahil Nikolov wrote:
  
 
 Usually, I would run a full heal and check if it improves the situation: 
   gluster volume heal <volname> full 
  Best Regards, Strahil Nikolov  
 
 
  On Tue, Oct 18, 2022 at 14:01, Γιώργος Βασιλόπουλος  
wrote:   Hello, I am seeking consultation regarding a problem with files not 
  healing after multiple (3) power outages on the servers.
  
  
  The configuration is like this :
  
  There are 3 servers (g1,g2,g3) with 3 volumes (volume1, volume2, 
  volume3) with replica 2 + arbiter.
  
  Glusterfs is   6.10.
  
  Volume1 and volume2 are OK; on volume3 there are about 12403 heal 
  entries which are not healing, and some virtual drives of oVirt VMs are not 
  starting and I cannot copy them either.
  
  For volume 3 data bricks are on g3 and g1 and arbiter brick is on g2
  
  There are .prob-uuid-something files which are identical on the 2 
  servers (g1,g3) with the data bricks of volume3 . On g2 (arbiter brick 
  there are no such files.)
  
  I have stopped the volume, unmounted and ran xfs_repair on all bricks, 
  remounted the bricks, and started the volume. It did not fix the problem.
  
  Is there anything I can do to fix the problem ?
  
  
  
  
  
  
  
  
  Community Meeting Calendar:
  
  Schedule -
  Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
  Bridge: https://meet.google.com/cpu-eiue-hvk
  Gluster-users mailing list
  Gluster-users@gluster.org
  https://lists.gluster.org/mailman/listinfo/gluster-users
   
  



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
  



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs on ovirt 4.3 problem after multiple power outages.

2022-10-19 Thread Γιώργος Βασιλόπουλος

I have already done this; it didn't seem to help.

could resetting the arbiter brick be a solution?

On 10/19/22 01:40, Strahil Nikolov wrote:

Usually, I would run a full heal and check if it improves the situation:

gluster volume heal <volname> full

Best Regards,
Strahil Nikolov


On Tue, Oct 18, 2022 at 14:01, Γιώργος Βασιλόπουλος
 wrote:
Hello, I am seeking consultation regarding a problem with files not
healing after multiple (3) power outages on the servers.


The configuration is like this :

There are 3 servers (g1,g2,g3) with 3 volumes (volume1, volume2,
volume3) with replica 2 + arbiter.

Glusterfs is   6.10.

Volume1 and volume2 are OK; on volume3 there are about 12403 heal
entries which are not healing, and some virtual drives of oVirt VMs
are not starting and I cannot copy them either.

For volume 3 data bricks are on g3 and g1 and arbiter brick is on g2

There are .prob-uuid-something files which are identical on the 2
servers (g1,g3) with the data bricks of volume3 . On g2 (arbiter
brick
there are no such files.)

I have stopped the volume, unmounted and ran xfs_repair on all
bricks, remounted the bricks, and started the volume. It did not fix
the problem.

Is there anything I can do to fix the problem ?








Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs on ovirt 4.3 problem after multiple power outages.

2022-10-18 Thread Strahil Nikolov
Usually, I would run a full heal and check if it improves the situation:
gluster volume heal <volname> full
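To track whether the full heal is actually reducing the backlog, the
per-brick counters can be polled as well (the volume name is a placeholder):
gluster volume heal <volname> info summary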
Best Regards,
Strahil Nikolov
 
 
  On Tue, Oct 18, 2022 at 14:01, Γιώργος Βασιλόπουλος 
wrote:   Hello, I am seeking consultation regarding a problem with files not 
healing after multiple (3) power outages on the servers.


The configuration is like this :

There are 3 servers (g1,g2,g3) with 3 volumes (volume1, volume2, 
volume3) with replica 2 + arbiter.

Glusterfs is   6.10.

Volume1 and volume2 are OK; on volume3 there are about 12403 heal 
entries which are not healing, and some virtual drives of oVirt VMs are not 
starting and I cannot copy them either.

For volume 3 data bricks are on g3 and g1 and arbiter brick is on g2

There are .prob-uuid-something files which are identical on the 2 
servers (g1,g3) with the data bricks of volume3 . On g2 (arbiter brick 
there are no such files.)

I have stopped the volume, unmounted and ran xfs_repair on all bricks, 
remounted the bricks, and started the volume. It did not fix the problem.

Is there anything I can do to fix the problem ?








Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
  




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS storage driver deprecation in Kubernetes.

2022-08-11 Thread Amar Tumballi
Thanks for the heads up, Humble. This would help many of the gluster
community users who may not be following k8s threads actively to start
planning their migration.

For all the users who are currently running heketi + glusterfs, starting
from k8s v1.26, you CANNOT use heketi + glusterfs based storage in k8s.

Below are my personal suggestions for the users. Please treat these options
as my personal opinion, and not an official stand of the gluster community.

0. Use an older (< 1.25) version of k8s, and keep using the setup :-)

1. Keep the current storage nodes as storage, but manage them separately,
export the volumes over NFS, and use an NFS CSI driver to get the data into
the pods. (Note the changeover to a new PV through a CSI-based PVC, which
means applications need a migration.) - I haven't tested this setup, hence
can't vouch for it.

2. Use kadalu [6] operator to manage currently deployed glusterfs nodes as
'External' storage class, and use kadalu CSI (which uses native glusterfs
mount as part of CSI node plugin) to get PV for your application pods.
NOTE: here too, there is an application migration needed to use kadalu CSI
based PVC. Suggested for those users with bigger PVs used in k8s setup
already. There is a team to help with this migration if you wish to.

3. Use kadalu (or any other CSI provider), provision new storage, and copy
the data set over to it: this would be an option if the storage is smaller in
size. In this case, allow some extra time to copy the data by starting a pod
with both the existing PV and the new PV added as mounts in the same pod, so
you can copy off the data quickly, as sketched below.
In any case, considering you do not have a lot of time before kubernetes
v1.26 comes out, please do start your migration plans soon.

For the developers of the glusterfs community, what are the thoughts you
have on this? I know there is some effort started on keeping
glusterfs-containers repo relevant, and I see PRs coming out. Happy to open
up a discussion on the same.

Regards,
Amar (@amarts)

[6] - https://github.com/kadalu/kadalu


On Thu, Aug 11, 2022 at 5:47 PM Humble Chirammal 
wrote:

> Hey Gluster Community,
>
> As you might be aware, there is an effort in the kubernetes community to
> remove in-tree storage plugins to reduce external dependencies and security
> concerns in the core Kubernetes. Thus, we are in a process to gradually
> deprecate all the in-tree external storage plugins and eventually remove
> them from the core Kubernetes codebase.  GlusterFS is one of the very first
> dynamic provisioners which was made into kubernetes v1.4 ( 2016 ) release
> via https://github.com/kubernetes/kubernetes/pull/30888 . From then on
> many deployments were/are making use of this driver to consume GlusterFS
> volumes in Kubernetes/Openshift clusters.
>
> As part of this effort, we are planning to deprecate the GlusterFS in-tree
> plugin in the 1.25 release and to take the Heketi code out of the
> Kubernetes code base in a subsequent release. This code removal will not be
> following kubernetes' normal deprecation policy [1] and will be treated as
> an exception [2]. The main reason for this exception is that, Heketi is in
> "Deep Maintenance" [3], also please see [4] for the latest push back from
> the Heketi team on changes we would need to keep vendoring heketi into
> kubernetes/kubernetes. We cannot keep heketi in the kubernetes code base as
> heketi itself is literally going away. The current plan is to start
> declaring the deprecation in kubernetes v1.25 and code removal in v1.26.
>
> If you are using the GlusterFS driver in your cluster setup, please reply
> with the below info before 16-Aug-2022 to the d...@kubernetes.io ML on the thread ( 
> Deprecation
> of intree GlusterFS driver in 1.25) or to this thread which can help us
> to make a decision on when to completely remove this code base from the
> repo.
>
> - what version of kubernetes are you running in your setup ?
>
> - how often do you upgrade your cluster?
>
> - what vendor or distro you are using ? Is it any (downstream) product
> offering or upstream GlusterFS driver directly used in your setup?
>
> Awaiting your feedback.
>
> Thanks,
>
> Humble
>
> [1] https://kubernetes.io/docs/reference/using-api/deprecation-policy/
>
> [2]
> https://kubernetes.io/docs/reference/using-api/deprecation-policy/#exceptions
>
> [3] https://github.com/heketi/heketi#maintenance-status
>
> [4] https://github.com/heketi/heketi/pull/1904#issuecomment-1197100513
> [5] https://github.com/kubernetes/kubernetes/issues/100897
>
> --
>
>
>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9.5 fuse mount excessive memory usage

2022-04-03 Thread Gionatan Danti

Il 2022-04-03 16:31 Strahil Nikolov ha scritto:

It's not like that, but most of the active developers are from RH and
since RHGS is being EOL-ed - they have other priorities.


Is RHGS going to be totally replaced by Red Hat Ceph Storage (RHCS)?
Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9.5 fuse mount excessive memory usage

2022-04-02 Thread Zakhar Kirpichenko
I see. Looks like there's no interest from the development team to address
the issue.

Zakhar

On Sat, Apr 2, 2022 at 2:32 PM Strahil Nikolov 
wrote:

> Sadly, I can't help but you can join the regular gluster meeting and ask
> for feedback on the topic.
>
> Best Regards,
> Strahil Nikolov
>
> On Thu, Mar 31, 2022 at 9:57, Zakhar Kirpichenko
>  wrote:
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9.5 fuse mount excessive memory usage

2022-03-30 Thread Zakhar Kirpichenko
Hi,

Any news about this? I provided very detailed test results and proof of the
issue https://github.com/gluster/glusterfs/issues/3206 on 6 February 2022
but haven't heard back after that.

Best regards,
Zakhar

On Tue, Feb 8, 2022 at 7:14 AM Zakhar Kirpichenko  wrote:

> Hi,
>
> I've updated the github issue with more details:
> https://github.com/gluster/glusterfs/issues/3206#issuecomment-1030770617
>
> Looks like there's a memory leak.
>
> /Z
>
> On Sat, Feb 5, 2022 at 8:45 PM Zakhar Kirpichenko 
> wrote:
>
>> Hi Strahil,
>>
>> Many thanks for your reply! I've updated the Github issue with statedump
>> files taken before and after the tar operation:
>> https://github.com/gluster/glusterfs/files/8008635/glusterdump.19102.dump.zip
>>
>> Please disregard that path= entries are empty, in the original dumps
>> there are real paths but I deleted them as they might contain sensitive
>> information.
>>
>> The odd thing is that the dump file is full of:
>>
>> 1) xlator.performance.write-behind.wb_inode entries, but the tar
>> operation does not write to these files. The whole backup process is
>> read-only.
>>
>> 2) xlator.performance.quick-read.inodectx entries, which never go away.
>>
>> None of this happens on other clients, which read and write from/to the
>> same volume in a much more intense manner.
>>
>> Best regards,
>> Z
>>
>> On Sat, Feb 5, 2022 at 11:23 AM Strahil Nikolov 
>> wrote:
>>
>>> Can you generate a statedump before and after the tar ?
>>> For statedump generation , you can follow
>>> https://github.com/gluster/glusterfs/issues/1440#issuecomment-674051243
>>> .
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>>
>>> On Saturday, 5 February 2022 at 07:54:22 GMT+2, Zakhar Kirpichenko <
>>> zak...@gmail.com> wrote:
>>>
>>>
>>> Hi!
>>>
>>> I opened a Github issue https://github.com/gluster/glusterfs/issues/3206
>>> but not sure how much attention they get there, so re-posting here just in
>>> case someone has any ideas.
>>>
>>> Description of problem:
>>>
>>> GlusterFS 9.5, 3-node cluster (2 bricks + arbiter), an attempt to tar
>>> the whole filesystem (35-40 GB, 1.6 million files) on a client succeeds but
>>> causes the glusterfs fuse mount process to consume 0.5+ GB of RAM. The
>>> usage never goes down after tar exits.
>>>
>>> The exact command to reproduce the issue:
>>>
>>> /usr/bin/tar --use-compress-program="/bin/pigz" -cf
>>> /path/to/archive.tar.gz --warning=no-file-changed /glusterfsmount
>>>
>>> The output of the gluster volume info command:
>>>
>>> Volume Name: gvol1
>>> Type: Replicate
>>> Volume ID: 0292ac43-89bd-45a4-b91d-799b49613e60
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x (2 + 1) = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 192.168.0.31:/gluster/brick1/gvol1
>>> Brick2: 192.168.0.32:/gluster/brick1/gvol1
>>> Brick3: 192.168.0.5:/gluster/brick1/gvol1 (arbiter)
>>> Options Reconfigured:
>>> performance.open-behind: off
>>> cluster.readdir-optimize: off
>>> cluster.consistent-metadata: on
>>> features.cache-invalidation: on
>>> diagnostics.count-fop-hits: on
>>> diagnostics.latency-measurement: on
>>> storage.fips-mode-rchecksum: on
>>> performance.cache-size: 256MB
>>> client.event-threads: 8
>>> server.event-threads: 4
>>> storage.reserve: 1
>>> performance.cache-invalidation: on
>>> cluster.lookup-optimize: on
>>> transport.address-family: inet
>>> nfs.disable: on
>>> performance.client-io-threads: on
>>> features.cache-invalidation-timeout: 600
>>> performance.md-cache-timeout: 600
>>> network.inode-lru-limit: 5
>>> cluster.shd-max-threads: 4
>>> cluster.self-heal-window-size: 8
>>> performance.enable-least-priority: off
>>> performance.cache-max-file-size: 2MB
>>>
>>> The output of the gluster volume status command:
>>>
>>> Status of volume: gvol1
>>> Gluster process TCP Port  RDMA Port  Online
>>>  Pid
>>>
>>> --
>>> Brick 192.168.0.31:/gluster/brick1/gvol149152 0  Y
>>>   1767
>>> Brick 192.168.0.32:/gluster/brick1/gvol149152 0  Y
>>>   1696
>>> Brick 192.168.0.5:/gluster/brick1/gvol1 49152 0  Y
>>>   1318
>>> Self-heal Daemon on localhost   N/A   N/AY
>>> 1329
>>> Self-heal Daemon on 192.168.0.31N/A   N/AY
>>> 1778
>>> Self-heal Daemon on 192.168.0.32N/A   N/AY
>>> 1707
>>>
>>> Task Status of Volume gvol1
>>>
>>> --
>>> There are no active volume tasks
>>>
>>> The output of the gluster volume heal command:
>>>
>>> Brick 192.168.0.31:/gluster/brick1/gvol1
>>> Status: Connected
>>> Number of entries: 0
>>>
>>> Brick 192.168.0.32:/gluster/brick1/gvol1
>>> Status: Connected
>>> Number of entries: 0
>>>
>>> Brick 192.168.0.5:/gluster/brick1/gvol1
>>> Status: Connected
>>> Number of entries: 0
>>>
>>> The operating system / glusterfs version:
>>>

Re: [Gluster-users] GlusterFS 9.5 fuse mount excessive memory usage

2022-02-07 Thread Zakhar Kirpichenko
Hi,

I've updated the github issue with more details:
https://github.com/gluster/glusterfs/issues/3206#issuecomment-1030770617

Looks like there's a memory leak.

/Z

On Sat, Feb 5, 2022 at 8:45 PM Zakhar Kirpichenko  wrote:

> Hi Strahil,
>
> Many thanks for your reply! I've updated the Github issue with statedump
> files taken before and after the tar operation:
> https://github.com/gluster/glusterfs/files/8008635/glusterdump.19102.dump.zip
>
> Please disregard that path= entries are empty, in the original dumps there
> are real paths but I deleted them as they might contain sensitive
> information.
>
> The odd thing is that the dump file is full of:
>
> 1) xlator.performance.write-behind.wb_inode entries, but the tar operation
> does not write to these files. The whole backup process is read-only.
>
> 2) xlator.performance.quick-read.inodectx entries, which never go away.
>
> None of this happens on other clients, which read and write from/to the
> same volume in a much more intense manner.
>
> Best regards,
> Z
>
> On Sat, Feb 5, 2022 at 11:23 AM Strahil Nikolov 
> wrote:
>
>> Can you generate a statedump before and after the tar ?
>> For statedump generation , you can follow
>> https://github.com/gluster/glusterfs/issues/1440#issuecomment-674051243 .
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>> On Saturday, 5 February 2022 at 07:54:22 GMT+2, Zakhar Kirpichenko <
>> zak...@gmail.com> wrote:
>>
>>
>> Hi!
>>
>> I opened a Github issue https://github.com/gluster/glusterfs/issues/3206
>> but not sure how much attention they get there, so re-posting here just in
>> case someone has any ideas.
>>
>> Description of problem:
>>
>> GlusterFS 9.5, 3-node cluster (2 bricks + arbiter), an attempt to tar the
>> whole filesystem (35-40 GB, 1.6 million files) on a client succeeds but
>> causes the glusterfs fuse mount process to consume 0.5+ GB of RAM. The
>> usage never goes down after tar exits.
>>
>> The exact command to reproduce the issue:
>>
>> /usr/bin/tar --use-compress-program="/bin/pigz" -cf
>> /path/to/archive.tar.gz --warning=no-file-changed /glusterfsmount
>>
>> The output of the gluster volume info command:
>>
>> Volume Name: gvol1
>> Type: Replicate
>> Volume ID: 0292ac43-89bd-45a4-b91d-799b49613e60
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.0.31:/gluster/brick1/gvol1
>> Brick2: 192.168.0.32:/gluster/brick1/gvol1
>> Brick3: 192.168.0.5:/gluster/brick1/gvol1 (arbiter)
>> Options Reconfigured:
>> performance.open-behind: off
>> cluster.readdir-optimize: off
>> cluster.consistent-metadata: on
>> features.cache-invalidation: on
>> diagnostics.count-fop-hits: on
>> diagnostics.latency-measurement: on
>> storage.fips-mode-rchecksum: on
>> performance.cache-size: 256MB
>> client.event-threads: 8
>> server.event-threads: 4
>> storage.reserve: 1
>> performance.cache-invalidation: on
>> cluster.lookup-optimize: on
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: on
>> features.cache-invalidation-timeout: 600
>> performance.md-cache-timeout: 600
>> network.inode-lru-limit: 5
>> cluster.shd-max-threads: 4
>> cluster.self-heal-window-size: 8
>> performance.enable-least-priority: off
>> performance.cache-max-file-size: 2MB
>>
>> The output of the gluster volume status command:
>>
>> Status of volume: gvol1
>> Gluster process TCP Port  RDMA Port  Online
>>  Pid
>>
>> --
>> Brick 192.168.0.31:/gluster/brick1/gvol149152 0  Y
>> 1767
>> Brick 192.168.0.32:/gluster/brick1/gvol149152 0  Y
>> 1696
>> Brick 192.168.0.5:/gluster/brick1/gvol1 49152 0  Y
>> 1318
>> Self-heal Daemon on localhost   N/A   N/AY
>> 1329
>> Self-heal Daemon on 192.168.0.31N/A   N/AY
>> 1778
>> Self-heal Daemon on 192.168.0.32N/A   N/AY
>> 1707
>>
>> Task Status of Volume gvol1
>>
>> --
>> There are no active volume tasks
>>
>> The output of the gluster volume heal command:
>>
>> Brick 192.168.0.31:/gluster/brick1/gvol1
>> Status: Connected
>> Number of entries: 0
>>
>> Brick 192.168.0.32:/gluster/brick1/gvol1
>> Status: Connected
>> Number of entries: 0
>>
>> Brick 192.168.0.5:/gluster/brick1/gvol1
>> Status: Connected
>> Number of entries: 0
>>
>> The operating system / glusterfs version:
>>
>> CentOS Linux release 7.9.2009 (Core), fully up to date
>> glusterfs 9.5
>> kernel 3.10.0-1160.53.1.el7.x86_64
>>
>> The logs are basically empty since the last mount except for the
>> mount-related messages.
>>
>> Additional info: a statedump from the client is attached to the Github
>> issue,
>> https://github.com/gluster/glusterfs/files/8004792/glusterdump.18906.dump.1643991007.gz,
>> in case someone wants to have a look.
>>
>> T

Re: [Gluster-users] GlusterFS 9.5 fuse mount excessive memory usage

2022-02-05 Thread Zakhar Kirpichenko
Hi Strahil,

Many thanks for your reply! I've updated the Github issue with statedump
files taken before and after the tar operation:
https://github.com/gluster/glusterfs/files/8008635/glusterdump.19102.dump.zip
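
For anyone reproducing this: one way to capture such client statedumps (not
necessarily how these were taken) is to send SIGUSR1 to the fuse mount
process; the dump is written under /var/run/gluster by default. The PID here
is just the one from the dump filename:

kill -USR1 19102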

Please disregard that path= entries are empty, in the original dumps there
are real paths but I deleted them as they might contain sensitive
information.

The odd thing is that the dump file is full of:

1) xlator.performance.write-behind.wb_inode entries, but the tar operation
does not write to these files. The whole backup process is read-only.

2) xlator.performance.quick-read.inodectx entries, which never go away.

None of this happens on other clients, which read and write from/to the
same volume in a much more intense manner.

Best regards,
Z

On Sat, Feb 5, 2022 at 11:23 AM Strahil Nikolov 
wrote:

> Can you generate a statedump before and after the tar ?
> For statedump generation , you can follow
> https://github.com/gluster/glusterfs/issues/1440#issuecomment-674051243 .
>
> Best Regards,
> Strahil Nikolov
>
>
> On Saturday, 5 February 2022 at 07:54:22 GMT+2, Zakhar Kirpichenko <
> zak...@gmail.com> wrote:
>
>
> Hi!
>
> I opened a Github issue https://github.com/gluster/glusterfs/issues/3206
> but not sure how much attention they get there, so re-posting here just in
> case someone has any ideas.
>
> Description of problem:
>
> GlusterFS 9.5, 3-node cluster (2 bricks + arbiter), an attempt to tar the
> whole filesystem (35-40 GB, 1.6 million files) on a client succeeds but
> causes the glusterfs fuse mount process to consume 0.5+ GB of RAM. The
> usage never goes down after tar exits.
>
> The exact command to reproduce the issue:
>
> /usr/bin/tar --use-compress-program="/bin/pigz" -cf
> /path/to/archive.tar.gz --warning=no-file-changed /glusterfsmount
>
> The output of the gluster volume info command:
>
> Volume Name: gvol1
> Type: Replicate
> Volume ID: 0292ac43-89bd-45a4-b91d-799b49613e60
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.0.31:/gluster/brick1/gvol1
> Brick2: 192.168.0.32:/gluster/brick1/gvol1
> Brick3: 192.168.0.5:/gluster/brick1/gvol1 (arbiter)
> Options Reconfigured:
> performance.open-behind: off
> cluster.readdir-optimize: off
> cluster.consistent-metadata: on
> features.cache-invalidation: on
> diagnostics.count-fop-hits: on
> diagnostics.latency-measurement: on
> storage.fips-mode-rchecksum: on
> performance.cache-size: 256MB
> client.event-threads: 8
> server.event-threads: 4
> storage.reserve: 1
> performance.cache-invalidation: on
> cluster.lookup-optimize: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: on
> features.cache-invalidation-timeout: 600
> performance.md-cache-timeout: 600
> network.inode-lru-limit: 5
> cluster.shd-max-threads: 4
> cluster.self-heal-window-size: 8
> performance.enable-least-priority: off
> performance.cache-max-file-size: 2MB
>
> The output of the gluster volume status command:
>
> Status of volume: gvol1
> Gluster process TCP Port  RDMA Port  Online
>  Pid
>
> --
> Brick 192.168.0.31:/gluster/brick1/gvol149152 0  Y
> 1767
> Brick 192.168.0.32:/gluster/brick1/gvol149152 0  Y
> 1696
> Brick 192.168.0.5:/gluster/brick1/gvol1 49152 0  Y
> 1318
> Self-heal Daemon on localhost   N/A   N/AY
> 1329
> Self-heal Daemon on 192.168.0.31N/A   N/AY
> 1778
> Self-heal Daemon on 192.168.0.32N/A   N/AY
> 1707
>
> Task Status of Volume gvol1
>
> --
> There are no active volume tasks
>
> The output of the gluster volume heal command:
>
> Brick 192.168.0.31:/gluster/brick1/gvol1
> Status: Connected
> Number of entries: 0
>
> Brick 192.168.0.32:/gluster/brick1/gvol1
> Status: Connected
> Number of entries: 0
>
> Brick 192.168.0.5:/gluster/brick1/gvol1
> Status: Connected
> Number of entries: 0
>
> The operating system / glusterfs version:
>
> CentOS Linux release 7.9.2009 (Core), fully up to date
> glusterfs 9.5
> kernel 3.10.0-1160.53.1.el7.x86_64
>
> The logs are basically empty since the last mount except for the
> mount-related messages.
>
> Additional info: a statedump from the client is attached to the Github
> issue,
> https://github.com/gluster/glusterfs/files/8004792/glusterdump.18906.dump.1643991007.gz,
> in case someone wants to have a look.
>
> There was also an issue with other clients, running PHP applications with
> lots of small files, where glusterfs fuse mount process would very quickly
> balloon to ~2 GB over the course of 24 hours and its performance would slow
> to a crawl. This happened very consistently with glusterfs 8.x and 9.5, I
> managed to resolve it at least partially with disabling
> performance.open-behind: th
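>
> For completeness, that toggle is applied from any of the gluster nodes like
> this, using the gvol1 volume from the info output above:
>
> gluster volume set gvol1 performance.open-behind off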

Re: [Gluster-users] glusterfs on zfs on rockylinux

2021-12-15 Thread Arman Khalatyan
Thank you Darrell, now I have clear steps on what to do. The data is very
valuable, so a 2x mirror + arbiter, or 3 replica nodes, would be the setup.
Just for clarification: we currently have Lustre; it is nice but has no
redundancy. I am not using it for VMs. The workloads are as follows: Gluster
should be mounted on multiple nodes, and the connection is InfiniBand or
10Gbit. The clients are pulling the data and doing some data analysis; the IO
pattern varies a lot - 26MB blocks or random 1k IO, different codes,
different projects. I am thinking of putting all <128K files on the special
device (yes, I am on the ZFS 2.0.6 branch). On the Gluster bricks I have seen
that the .glusterfs folder has a lot of small directories and files; would it
improve performance if I move them to NVMe as well, or is it better to
increase the RAM (which I can't do now, but for the future)?
Unfortunately I cannot add more RAM, but your tuning considerations are an
important note.
   a.
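
A sketch of the ZFS knob for the <128K idea above - the dataset name is a
placeholder and the special vdev must already be part of the pool; blocks up
to the given size are then stored on the special device:

zfs set special_small_blocks=128K tank/gluster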


On Tue, Dec 14, 2021 at 12:25 AM Darrell Budic 
wrote:

> A few thoughts from another ZFS backend user:
>
> ZFS:
> use arcstats to look at your cache use over time and consider:
> Don’t mirror your cache drives, use them as 2x cache volumes to increase
> available cache.
> Add more RAM. Lots more RAM (if I’m reading that right and you have 32Gb
> ram per zfs server).
> Adjust ZFS’s max arc caching upwards if you have lots of RAM.
> Try more metadata caching & less content caching if you’re find heavy.
> compression on these volumes could help improve IO on the raidZ2s, but
> you’ll have to copy the data on with compression enabled if you didn’t
> already have it enabled. Different zStd levels are worth evaluating here.
> Read up on recordsize and consider if you would get any performance
> benefits from 64K or maybe something larger for your large data, depends on
> where the reads are being done.
> Use relatime or no atime tracking.
> Upgrade to ZFS 2.0.6 if you aren’t already at 2 or 2.1
>
> For gluster, sounds like gluster 10 would be good for your use case.
> Without knowing what your workload is (VMs, gluster mounts, nfs mounts?), I
> don’t have much else on that level, but you can probably play with the
> cluster.read-hash-mode (try 3) to spread the read load out amongst your
> servers. Search the list archives for general performance hints too, server
> & client .event-threads are probably good targets, and the various
> performance.*threads may/may not help depending on how the volumes are
> being used.
>
> More details (zfs version, gluster version, volume options currently
> applied, more details on the workload) may help if others use similar
> setups. You may be getting into the area where you just need to get your
> environment setup to try some A/B testing with different options though.
>
> Good luck!
>
>   -Darrell
>
>
> On Dec 11, 2021, at 5:27 PM, Arman Khalatyan  wrote:
>
> Hello everybody,
> I was looking for some performance consideration on glusterfs with zfs.
> The data diversity is following: 90% <50kb and 10%>10GB-100GB . totally
> over 100mln, about 100TB.
> 3replicated Jbods each one with:
> 2x8disks-RaidZ2 +special device mirror  2x1TBnvme+cache mirror 2xssd+32GB
> ram.
>
> most operations are  reading and "find file".
> i put some parameters on zfs like: xattr=sa, primarycache=all, secondary
> cache=all
> what else could be tuned?
> thank you in advance.
> greetings from Potsdam,
> Arman.
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs on zfs on rockylinux

2021-12-13 Thread Darrell Budic
A few thoughts from another ZFS backend user:

ZFS:
use arcstats to look at your cache use over time and consider:
Don’t mirror your cache drives, use them as 2x cache volumes to 
increase available cache.
Add more RAM. Lots more RAM (if I’m reading that right and you have 
32Gb ram per zfs server).
Adjust ZFS’s max arc caching upwards if you have lots of RAM.
Try more metadata caching & less content caching if you’re find heavy.
compression on these volumes could help improve IO on the raidZ2s, but you’ll 
have to copy the data on with compression enabled if you didn’t already have it 
enabled. Different zStd levels are worth evaluating here.
Read up on recordsize and consider if you would get any performance benefits 
from 64K or maybe something larger for your large data, depends on where the 
reads are being done. 
Use relatime or no atime tracking.
Upgrade to ZFS 2.0.6 if you aren’t already at 2 or 2.1

For gluster, sounds like gluster 10 would be good for your use case. Without 
knowing what your workload is (VMs, gluster mounts, nfs mounts?), I don’t have 
much else on that level, but you can probably play with the 
cluster.read-hash-mode (try 3) to spread the read load out amongst your 
servers. Search the list archives for general performance hints too, server & 
client .event-threads are probably good targets, and the various 
performance.*threads may/may not help depending on how the volumes are being 
used.
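
Purely as a sketch of where those knobs live (placeholder pool/dataset/volume
names 'tank/gluster' and 'vol0'; the values themselves would need the A/B
testing mentioned below):

cat /proc/spl/kstat/zfs/arcstats | grep -E '^(hits|misses|size|c_max) '
echo 25769803776 > /sys/module/zfs/parameters/zfs_arc_max   # e.g. 24 GiB ARC cap (persist via /etc/modprobe.d)
zfs set atime=off tank/gluster
zfs set recordsize=64K tank/gluster
gluster volume set vol0 cluster.read-hash-mode 3
gluster volume set vol0 client.event-threads 4
gluster volume set vol0 server.event-threads 4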

More details (zfs version, gluster version, volume options currently applied, 
more details on the workload) may help if others use similar setups. You may be 
getting into the area where you just need to get your environment setup to try 
some A/B testing with different options though.

Good luck!

  -Darrell


> On Dec 11, 2021, at 5:27 PM, Arman Khalatyan  wrote:
> 
> Hello everybody,
> I was looking for some performance consideration on glusterfs with zfs.
> The data diversity is following: 90% <50kb and 10%>10GB-100GB . totally over 
> 100mln, about 100TB.
> 3replicated Jbods each one with:
> 2x8disks-RaidZ2 +special device mirror  2x1TBnvme+cache mirror 2xssd+32GB ram.
> 
> most operations are  reading and "find file".
> i put some parameters on zfs like: xattr=sa, primarycache=all, secondary 
> cache=all
> what else could be tuned?
> thank you in advanced.
> greetings from Potsdam,
> Arman.
> 
> 
> 
> 
> 
> Community Meeting Calendar:
> 
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users





Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9.3 - Replicate Volume (2 Bricks / 1 Arbiter) - Self-healing does not always work

2021-11-30 Thread Thorsten Walk
Hello all,

I have now rebuilt my cluster and am currently still in the process of
putting it back into operation. Should the error occur again, I will get
back to you.

I would like to switch directly to GlusterFS 10. My two Intel NUCs are
running Proxmox 7.1, so GlusterFS 10 is not an issue - there is a Debian
repo for it.

My arbiter (a Raspberry Pi) is also running Debian Bullseye, but I couldn't
find a GlusterFS 10 repo for arm. Can I run the arbiter on v9 together with
v10, or is it better to stay on v9?
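
(Side note for anyone checking a mixed-version setup like this: the cluster-wide
op-version can be inspected before and after upgrading, roughly as follows; the
commands are illustrative, not taken from this thread.)

gluster volume get all cluster.op-version
gluster volume get all cluster.max-op-version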

Thanks & Regards,
Thorsten

Am Fr., 5. Nov. 2021 um 20:46 Uhr schrieb Strahil Nikolov <
hunter86...@yahoo.com>:

> You can mount the volume via # mount -t glusterfs -o aux-gfid-mount
> vm1:test /mnt/testvol
>
> And then obtain the path:
>
> getfattr -n trusted.glusterfs.pathinfo -e text /mnt/testvol/.gfid/
>
>
> Source: https://docs.gluster.org/en/latest/Troubleshooting/gfid-to-path/
>
> Best Regards,
> Strahil Nikolov
>
>
> On Fri, Nov 5, 2021 at 19:29, Thorsten Walk
>  wrote:
> Hi Guys,
>
> I pushed some VMs to the GlusterFS storage this week and ran them there.
> For a maintenance task, I moved these VMs to Proxmox-Node-2 and took Node-1
> offline for a short time.
> After moving them back to Node-1 there were some file corpses left (see
> attachment). In the logs I can't find anything about the gfids :)
>
>
> ┬[15:36:51] [ssh:root@pve02(192.168.1.51): /home/darkiop (755)]
> ╰─># gvi
>
> Cluster:
>  Status: Healthy GlusterFS: 9.3
>  Nodes: 3/3  Volumes: 1/1
>
> Volumes:
>
> glusterfs-1-volume
> Replicate  Started (UP) - 3/3 Bricks Up  -
> (Arbiter Volume)
>Capacity: (17.89% used) 83.00
> GiB/466.00 GiB (used/total)
>Self-Heal:
>   192.168.1.51:/data/glusterfs (4
> File(s) to heal).
>Bricks:
>   Distribute Group 1:
>  192.168.1.50:/data/glusterfs
> (Online)
>  192.168.1.51:/data/glusterfs
> (Online)
>  192.168.1.40:/data/glusterfs
> (Online)
>
>
> Brick 192.168.1.50:/data/glusterfs
> Status: Connected
> Number of entries: 0
>
> Brick 192.168.1.51:/data/glusterfs
> 
> 
> 
> 
> Status: Connected
> Number of entries: 4
>
> Brick 192.168.1.40:/data/glusterfs
> Status: Connected
> Number of entries: 0
>
>
> ┬[15:37:03] [ssh:root@pve02(192.168.1.51): /home/darkiop (755)]
> ╰─># cat
> /data/glusterfs/.glusterfs/ad/e6/ade6f31c-b80b-457e-a054-6ca1548d9cd3
> 22962
>
>
> ┬[15:37:13] [ssh:root@pve02(192.168.1.51): /home/darkiop (755)]
> ╰─># grep -ir 'ade6f31c-b80b-457e-a054-6ca1548d9cd3'
> /var/log/glusterfs/*.log
>
> Am Mo., 1. Nov. 2021 um 07:51 Uhr schrieb Thorsten Walk  >:
>
> After deleting the file, output of heal info is clear.
>
> >Not sure why you ended up in this situation (maybe unlink partially
> failed on this brick?)
>
> Neither did I, this was a completely fresh setup with 1-2 VMs and 1-2
> Proxmox LXC templates. I let it run for a few days and at some point it had
> the mentioned state. I continue to monitor and start with fill the bricks
> with data.
> Thanks for your help!
>
> Am Mo., 1. Nov. 2021 um 02:54 Uhr schrieb Ravishankar N <
> ravishanka...@pavilion.io>:
>
>
>
> On Mon, Nov 1, 2021 at 12:02 AM Thorsten Walk  wrote:
>
> Hi Ravi, the file only exists at pve01 and since only once:
>
> ┬[19:22:10] [ssh:root@pve01(192.168.1.50): ~ (700)]
> ╰─># stat
> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>   File:
> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>   Size: 6   Blocks: 8  IO Block: 4096   regular file
> Device: fd12h/64786dInode: 528 Links: 1
> Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (0/root)
> Access: 2021-10-30 14:34:50.385893588 +0200
> Modify: 2021-10-27 00:26:43.988756557 +0200
> Change: 2021-10-27 00:26:43.988756557 +0200
>  Birth: -
>
> ┬[19:24:41] [ssh:root@pve01(192.168.1.50): ~ (700)]
> ╰─># ls -l
> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
> .rw-r--r-- root root 6B 4 days ago 
> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>
> ┬[19:24:54] [ssh:root@pve01(192.168.1.50): ~ (700)]
> ╰─># cat
> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
> 28084
>
> Hi Thorsten, you can delete the file. From the file size and contents, it
> looks like it belongs to ovirt sanlock. Not sure why you ended up in this
> situation (maybe unlink partially failed on this brick?). You can check the
> mount, brick and self-heal daemon logs for this gfid to  see if you find
> related error/warning messages.
>
> -Ravi
>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com

Re: [Gluster-users] GlusterFS 9.3 - Replicate Volume (2 Bricks / 1 Arbiter) - Self-healing does not always work

2021-11-05 Thread Thorsten Walk
Hi Guys,

I pushed some VMs to the GlusterFS storage this week and ran them there.
For a maintenance task, I moved these VMs to Proxmox-Node-2 and took Node-1
offline for a short time.
After moving them back to Node-1, some orphaned entries ("file corpses") were
left behind (see attachment). In the logs I can't find anything about the
gfids :)


┬[15:36:51] [ssh:root@pve02(192.168.1.51): /home/darkiop (755)]
╰─># gvi

Cluster:
 Status: Healthy GlusterFS: 9.3
 Nodes: 3/3  Volumes: 1/1

Volumes:

glusterfs-1-volume
Replicate  Started (UP) - 3/3 Bricks Up  - (Arbiter
Volume)
   Capacity: (17.89% used) 83.00 GiB/466.00
GiB (used/total)
   Self-Heal:
  192.168.1.51:/data/glusterfs (4
File(s) to heal).
   Bricks:
  Distribute Group 1:
 192.168.1.50:/data/glusterfs
(Online)
 192.168.1.51:/data/glusterfs
(Online)
 192.168.1.40:/data/glusterfs
(Online)


Brick 192.168.1.50:/data/glusterfs
Status: Connected
Number of entries: 0

Brick 192.168.1.51:/data/glusterfs




Status: Connected
Number of entries: 4

Brick 192.168.1.40:/data/glusterfs
Status: Connected
Number of entries: 0


┬[15:37:03] [ssh:root@pve02(192.168.1.51): /home/darkiop (755)]
╰─># cat
/data/glusterfs/.glusterfs/ad/e6/ade6f31c-b80b-457e-a054-6ca1548d9cd3
22962


┬[15:37:13] [ssh:root@pve02(192.168.1.51): /home/darkiop (755)]
╰─># grep -ir 'ade6f31c-b80b-457e-a054-6ca1548d9cd3'
/var/log/glusterfs/*.log

Am Mo., 1. Nov. 2021 um 07:51 Uhr schrieb Thorsten Walk :

> After deleting the file, output of heal info is clear.
>
> >Not sure why you ended up in this situation (maybe unlink partially
> failed on this brick?)
>
> Neither did I, this was a completely fresh setup with 1-2 VMs and 1-2
> Proxmox LXC templates. I let it run for a few days and at some point it had
> the mentioned state. I continue to monitor and start with fill the bricks
> with data.
> Thanks for your help!
>
> Am Mo., 1. Nov. 2021 um 02:54 Uhr schrieb Ravishankar N <
> ravishanka...@pavilion.io>:
>
>>
>>
>> On Mon, Nov 1, 2021 at 12:02 AM Thorsten Walk  wrote:
>>
>>> Hi Ravi, the file only exists at pve01 and since only once:
>>>
>>> ┬[19:22:10] [ssh:root@pve01(192.168.1.50): ~ (700)]
>>> ╰─># stat
>>> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>>   File:
>>> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>>   Size: 6   Blocks: 8  IO Block: 4096   regular file
>>> Device: fd12h/64786dInode: 528 Links: 1
>>> Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (0/root)
>>> Access: 2021-10-30 14:34:50.385893588 +0200
>>> Modify: 2021-10-27 00:26:43.988756557 +0200
>>> Change: 2021-10-27 00:26:43.988756557 +0200
>>>  Birth: -
>>>
>>> ┬[19:24:41] [ssh:root@pve01(192.168.1.50): ~ (700)]
>>> ╰─># ls -l
>>> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>> .rw-r--r-- root root 6B 4 days ago 
>>> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>>
>>> ┬[19:24:54] [ssh:root@pve01(192.168.1.50): ~ (700)]
>>> ╰─># cat
>>> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>> 28084
>>>
>>> Hi Thorsten, you can delete the file. From the file size and contents,
>> it looks like it belongs to ovirt sanlock. Not sure why you ended up in
>> this situation (maybe unlink partially failed on this brick?). You can
>> check the mount, brick and self-heal daemon logs for this gfid to  see if
>> you find related error/warning messages.
>>
>> -Ravi
>>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9.3 - Replicate Volume (2 Bricks / 1 Arbiter) - Self-healing does not always work

2021-10-31 Thread Thorsten Walk
After deleting the file, output of heal info is clear.

>Not sure why you ended up in this situation (maybe unlink partially failed
on this brick?)

Neither am I; this was a completely fresh setup with 1-2 VMs and 1-2
Proxmox LXC templates. I let it run for a few days and at some point it
ended up in the mentioned state. I will continue to monitor it and start
filling the bricks with data.
Thanks for your help!

Am Mo., 1. Nov. 2021 um 02:54 Uhr schrieb Ravishankar N <
ravishanka...@pavilion.io>:

>
>
> On Mon, Nov 1, 2021 at 12:02 AM Thorsten Walk  wrote:
>
>> Hi Ravi, the file only exists at pve01 and since only once:
>>
>> ┬[19:22:10] [ssh:root@pve01(192.168.1.50): ~ (700)]
>> ╰─># stat
>> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>   File:
>> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>   Size: 6   Blocks: 8  IO Block: 4096   regular file
>> Device: fd12h/64786dInode: 528 Links: 1
>> Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (0/root)
>> Access: 2021-10-30 14:34:50.385893588 +0200
>> Modify: 2021-10-27 00:26:43.988756557 +0200
>> Change: 2021-10-27 00:26:43.988756557 +0200
>>  Birth: -
>>
>> ┬[19:24:41] [ssh:root@pve01(192.168.1.50): ~ (700)]
>> ╰─># ls -l
>> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>> .rw-r--r-- root root 6B 4 days ago 
>> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>>
>> ┬[19:24:54] [ssh:root@pve01(192.168.1.50): ~ (700)]
>> ╰─># cat
>> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>> 28084
>>
>> Hi Thorsten, you can delete the file. From the file size and contents, it
> looks like it belongs to ovirt sanlock. Not sure why you ended up in this
> situation (maybe unlink partially failed on this brick?). You can check the
> mount, brick and self-heal daemon logs for this gfid to  see if you find
> related error/warning messages.
>
> -Ravi
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9.3 - Replicate Volume (2 Bricks / 1 Arbiter) - Self-healing does not always work

2021-10-31 Thread Thorsten Walk
Hi Ravi, the file only exists on pve01, and only once there:

┬[19:22:10] [ssh:root@pve01(192.168.1.50): ~ (700)]
╰─># stat
/data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
  File:
/data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
  Size: 6   Blocks: 8  IO Block: 4096   regular file
Device: fd12h/64786dInode: 528 Links: 1
Access: (0644/-rw-r--r--)  Uid: (0/root)   Gid: (0/root)
Access: 2021-10-30 14:34:50.385893588 +0200
Modify: 2021-10-27 00:26:43.988756557 +0200
Change: 2021-10-27 00:26:43.988756557 +0200
 Birth: -

┬[19:24:41] [ssh:root@pve01(192.168.1.50): ~ (700)]
╰─># ls -l
/data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
.rw-r--r-- root root 6B 4 days ago 
/data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768

┬[19:24:54] [ssh:root@pve01(192.168.1.50): ~ (700)]
╰─># cat
/data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
28084

Am So., 31. Okt. 2021 um 10:37 Uhr schrieb Ravishankar N <
ravishanka...@pavilion.io>:

>
>
> On Sun, Oct 31, 2021 at 1:37 PM Thorsten Walk  wrote:
>
>>
>> I think, here i need your help :) How i can find the file? I only have
>> the gfid from the out of 'gluster volume heal glusterfs-1-volume info'
>> =  on Brick 192.168.1.50:
>> /data/glusterfs.
>>
>
> First, you can do a `stat
> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768` and
> check the file size and no. of  hardlinks. If Links>1, then
> `find  /data/glusterfs  -samefile
> /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768`
>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9.3 - Replicate Volume (2 Bricks / 1 Arbiter) - Self-healing does not always work

2021-10-31 Thread Thorsten Walk
Hello,

>It should be enabled alright, but we have noticed some issues of stale
locks (https://github.com/gluster/glusterfs/issues/ {2198, 2211, 2027})
which could prevent self-heal (or any other I/O that takes a blocking lock)
from happening.

I have re-enabled cluster.eager-lock.

>But the problem here is different as you noticed. Thorsten needs to find
the actual file (`find -samefile`) corresponding to this gfid and see what
is the file size, hard-link count etc.) If it is a zero -byte file, then it
should be safe to just delete the file and its hardlink from the brick.

I think here I need your help :) How can I find the file? I only have the
gfid from the output of 'gluster volume heal glusterfs-1-volume info'
=  on Brick 192.168.1.50:
/data/glusterfs.

Thanks and regards,
Thorsten

Am So., 31. Okt. 2021 um 07:35 Uhr schrieb Ravishankar N <
ravishanka...@pavilion.io>:

>
>
> On Sat, Oct 30, 2021 at 10:47 PM Strahil Nikolov 
> wrote:
>
>> Hi,
>>
>> based on the output it seems that for some reason the file was deployed
>> locally but not on the 2-nd brick and the arbiter , which for a 'replica 3
>> arbiter 1' (a.k.a replica 2 arbiter 1) is strange.
>>
>> It seems that cluster.eager-lock is enabled as per the virt group:
>> https://github.com/gluster/glusterfs/blob/devel/extras/group-virt.example
>>
>> @Ravi,
>>
>> do you think that it should not be enabled by default in the virt group ?
>>
>
> It should be enabled alright, but we have noticed some issues of stale
> locks (https://github.com/gluster/glusterfs/issues/ {2198, 2211, 2027})
> which could prevent self-heal (or any other I/O that takes a blocking lock)
> from happening. But the problem here is different as you noticed. Thorsten
> needs to find the actual file (`find -samefile`) corresponding to this gfid
> and see what is the file size, hard-link count etc.) If it is a zero -byte
> file, then it should be safe to just delete the file and its hardlink from
> the brick.
>
> Regards,
> Ravi
>
>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>> On Sat, Oct 30, 2021 at 16:14, Thorsten Walk
>>  wrote:
>> Hi Ravi & Strahil, thanks a lot for your answer!
>>
>> The file in the path .glusterfs/26/c5/.. only exists at node1 (=pve01).
>> On node2 (pve02) and the arbiter (freya), the file does not exist:
>>
>>
>>
>> ┬[14:35:48] [ssh:root@pve01(192.168.1.50): ~ (700)]
>> ╰─># getfattr -d -m. -e hex
>>  /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>> getfattr: Removing leading '/' from absolute path names
>> # file:
>> data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
>> trusted.afr.dirty=0x
>> trusted.afr.glusterfs-1-volume-client-1=0x00010001
>> trusted.afr.glusterfs-1-volume-client-2=0x00010001
>> trusted.gfid=0x26c5396c86ff408d9cda106acd2b0768
>>
>> trusted.glusterfs.mdata=0x01617880a33b2f0117617880a33b2f0117617880a33983a635
>>
>> ┬[14:36:49] [ssh:root@pve02(192.168.1.51):
>> /data/glusterfs/.glusterfs/26/c5 (700)]
>> ╰─># ll
>> drwx-- root root   6B 3 days ago   ./
>> drwx-- root root 8.0K 6 hours ago  ../
>>
>> ┬[14:36:58] [ssh:root@freya(192.168.1.40):
>> /data/glusterfs/.glusterfs/26/c5 (700)]
>> ╰─># ll
>> drwx-- root root   6B 3 days ago   ./
>> drwx-- root root 8.0K 3 hours ago  ../
>>
>>
>>
>> After this, i have disabled the the option you mentioned:
>>
>> gluster volume set glusterfs-1-volume cluster.eager-lock off
>>
>> After that I started another healing process manually. Unfortunately
>> without success.
>>
>> @Strahil: For your idea with
>> https://docs.gluster.org/en/latest/Troubleshooting/gfid-to-path/ i need
>> more time, maybe i can try it tomorrow. I'll be in touch.
>>
>> Thanks again and best regards,
>> Thorsten
>>
>>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9.3 - Replicate Volume (2 Bricks / 1 Arbiter) - Self-healing does not always work

2021-10-30 Thread Thorsten Walk
Hi Ravi & Strahil, thanks a lot for your answer!

The file in the path .glusterfs/26/c5/.. only exists at node1 (=pve01). On
node2 (pve02) and the arbiter (freya), the file does not exist:



┬[14:35:48] [ssh:root@pve01(192.168.1.50): ~ (700)]
╰─># getfattr -d -m. -e hex
 /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
getfattr: Removing leading '/' from absolute path names
# file: data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
trusted.afr.dirty=0x
trusted.afr.glusterfs-1-volume-client-1=0x00010001
trusted.afr.glusterfs-1-volume-client-2=0x00010001
trusted.gfid=0x26c5396c86ff408d9cda106acd2b0768
trusted.glusterfs.mdata=0x01617880a33b2f0117617880a33b2f0117617880a33983a635

┬[14:36:49] [ssh:root@pve02(192.168.1.51): /data/glusterfs/.glusterfs/26/c5
(700)]
╰─># ll
drwx-- root root   6B 3 days ago   ./
drwx-- root root 8.0K 6 hours ago  ../

┬[14:36:58] [ssh:root@freya(192.168.1.40): /data/glusterfs/.glusterfs/26/c5
(700)]
╰─># ll
drwx-- root root   6B 3 days ago   ./
drwx-- root root 8.0K 3 hours ago  ../



After this, I disabled the option you mentioned:

gluster volume set glusterfs-1-volume cluster.eager-lock off

After that I started another healing process manually. Unfortunately
without success.

@Strahil: For your idea with
https://docs.gluster.org/en/latest/Troubleshooting/gfid-to-path/ I need
more time; maybe I can try it tomorrow. I'll be in touch.

Thanks again and best regards,
Thorsten




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9 and Debian 11

2021-09-29 Thread Michael Böhm
Am Mi., 29. Sept. 2021 um 14:17 Uhr schrieb Taste-Of-IT <
kont...@taste-of-it.de>:

> Hi,
> i looked again on the source share and the correct and working sources are:
>
> echo deb [arch=amd64]
> https://download.gluster.org/pub/gluster/glusterfs/9/LATEST/Debian/bullseye/amd64/apt
> bullseye main > /etc/apt/sources.list.d/gluster.list
>
> Now its working - sorry for my mistake, but thanks for your fast helping ..
> [closed]
>

np - and yes that line is also correct if you want to stay on the latest 9
release for bullseye.




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9 and Debian 11

2021-09-29 Thread Taste-Of-IT
Hi,
I looked again at the source share, and the correct, working sources are:

echo deb [arch=amd64] 
https://download.gluster.org/pub/gluster/glusterfs/9/LATEST/Debian/bullseye/amd64/apt
 bullseye main > /etc/apt/sources.list.d/gluster.list 

Now it's working - sorry for my mistake, and thanks for your quick help.
[closed]

Taste
Am 29.09.2021 13:24:07, schrieb Michael Böhm:
> Am Mi., 29. Sept. 2021 um 13:10 Uhr schrieb Taste-Of-IT <
> kont...@taste-of-it.de>:
> 
> > Hi,
> > server:
> >
> > # apt policy glusterfs-server
> > glusterfs-server:
> >   Installiert:   (keine)
> >   Installationskandidat: 9.3-1
> >   Versionstabelle:
> >  9.3-1 500
> > 500
> > https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/buster/amd64/apt
> > buster/main amd64 Packages
> >  9.2-1 500
> > 500 http://deb.debian.org/debian bullseye/main amd64 Packages
> >
> > Client:
> > # apt policy glusterfs-client
> > glusterfs-client:
> >   Installiert:   (keine)
> >   Installationskandidat: 9.3-1
> >   Versionstabelle:
> >  9.3-1 500
> > 500
> > https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/buster/amd64/apt
> > buster/main amd64 Packages
> >  9.2-1 500
> > 500 http://deb.debian.org/debian bullseye/main amd64 Packages
> >
> 
> Looks like the sources for buster - sources list should be:
> 
> # GlusterFS Apt Repository
> deb
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/11/amd64/apt
> bullseye main
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9 and Debian 11

2021-09-29 Thread Michael Böhm
Am Mi., 29. Sept. 2021 um 13:10 Uhr schrieb Taste-Of-IT <
kont...@taste-of-it.de>:

> Hi,
> server:
>
> # apt policy glusterfs-server
> glusterfs-server:
>   Installiert:   (keine)
>   Installationskandidat: 9.3-1
>   Versionstabelle:
>  9.3-1 500
> 500
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/buster/amd64/apt
> buster/main amd64 Packages
>  9.2-1 500
> 500 http://deb.debian.org/debian bullseye/main amd64 Packages
>
> Client:
> # apt policy glusterfs-client
> glusterfs-client:
>   Installiert:   (keine)
>   Installationskandidat: 9.3-1
>   Versionstabelle:
>  9.3-1 500
> 500
> https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/buster/amd64/apt
> buster/main amd64 Packages
>  9.2-1 500
> 500 http://deb.debian.org/debian bullseye/main amd64 Packages
>

Looks like the sources for buster - sources list should be:

# GlusterFS Apt Repository
deb
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/11/amd64/apt
bullseye main




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9 and Debian 11

2021-09-29 Thread Taste-Of-IT
Hi,
server:

# apt policy glusterfs-server
glusterfs-server:
  Installiert:   (keine)
  Installationskandidat: 9.3-1
  Versionstabelle:
 9.3-1 500
500 
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/buster/amd64/apt
 buster/main amd64 Packages
 9.2-1 500
500 http://deb.debian.org/debian bullseye/main amd64 Packages

Client:
# apt policy glusterfs-client
glusterfs-client:
  Installiert:   (keine)
  Installationskandidat: 9.3-1
  Versionstabelle:
 9.3-1 500
500 
https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/buster/amd64/apt
 buster/main amd64 Packages
 9.2-1 500
500 http://deb.debian.org/debian bullseye/main amd64 Packages

Taste

Am 29.09.2021 12:27:12, schrieb Michael Böhm:
> Am Mi., 29. Sept. 2021 um 11:43 Uhr schrieb Taste-Of-IT <
> kont...@taste-of-it.de>:
> 
> > Hi,
> >
> > i tested it again: i installed debian 11 from debian dvd1, i added sources
> > for glusterfs 9.3-1. After updating the system and sources i got the same
> > error message.
> >
> 
> what's the output of "apt policy glusterfs-server" / "apt policy
> glusterfs-client"
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9 and Debian 11

2021-09-29 Thread Michael Böhm
Am Mi., 29. Sept. 2021 um 11:43 Uhr schrieb Taste-Of-IT <
kont...@taste-of-it.de>:

> Hi,
>
> i tested it again: i installed debian 11 from debian dvd1, i added sources
> for glusterfs 9.3-1. After updating the system and sources i got the same
> error message.
>

what's the output of "apt policy glusterfs-server" / "apt policy
glusterfs-client"




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9 and Debian 11

2021-09-29 Thread Taste-Of-IT
Hi,

I tested it again: I installed Debian 11 from Debian DVD 1 and added the
sources for glusterfs 9.3-1. After updating the system and sources I got the
same error message.

Taste

Am 27.09.2021 15:09:27, schrieb Eli V:
> It's built for Debian, can't speak to the docs but an apt repo is available:
> https://download.nfs-ganesha.org/3/3.5/Debian/
> 
> On Mon, Sep 27, 2021 at 3:53 AM Eliyahu Rosenberg
>  wrote:
> >
> > Since it seems there are after all some Debian (/debian based) users on 
> > this list, can I hijack this thread just a bit and ask about ganesha and 
> > glusterfs?
> > Is that not built for Debian or is it included in the main package?
> >
> > I ask because as far as I can tell docs on doing gluster+ganesha refer to 
> > rpms that don't seem to have deb equivalents and commands referred int the 
> > docs also don't seem to exist for me.
> >
> > Thanks!
> > Eli
> >
> > On Wed, Sep 22, 2021 at 3:40 PM Kaleb Keithley  wrote:
> >>
> >>
> >> On Wed, Sep 22, 2021 at 7:51 AM Taste-Of-IT  wrote:
> >>>
> >>> Hi,
> >>>
> >>> i installed fresh Debian 11 stable and use GlusterFS latest sources. At 
> >>> installing glusterfs-server i got error missing libreadline7 Paket, which 
> >>> is not in Debian 11.
> >>>
> >>> Is GF 9 not Debian 11 ready?
> >>
> >>
> >> Our Debian 11 box has readline-common 8.1-1 and libreadline8 8.1-1 and 
> >> glusterfs 9 builds fine for us.
> >>
> >> What "latest sources" are you using?
> >>
> >> --
> >>
> >> Kaleb
> >> 
> >>
> >>
> >>
> >> Community Meeting Calendar:
> >>
> >> Schedule -
> >> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> >> Bridge: https://meet.google.com/cpu-eiue-hvk
> >> Gluster-users mailing list
> >> Gluster-users@gluster.org
> >> https://lists.gluster.org/mailman/listinfo/gluster-users
> >
> > 
> >
> >
> >
> > Community Meeting Calendar:
> >
> > Schedule -
> > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> > Bridge: https://meet.google.com/cpu-eiue-hvk
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
> 
> 
> 
> 
> Community Meeting Calendar:
> 
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9 and Debian 11

2021-09-27 Thread Eli V
It's built for Debian, can't speak to the docs but an apt repo is available:
https://download.nfs-ganesha.org/3/3.5/Debian/

On Mon, Sep 27, 2021 at 3:53 AM Eliyahu Rosenberg
 wrote:
>
> Since it seems there are after all some Debian (/debian based) users on this 
> list, can I hijack this thread just a bit and ask about ganesha and glusterfs?
> Is that not built for Debian or is it included in the main package?
>
> I ask because as far as I can tell docs on doing gluster+ganesha refer to 
> rpms that don't seem to have deb equivalents and commands referred int the 
> docs also don't seem to exist for me.
>
> Thanks!
> Eli
>
> On Wed, Sep 22, 2021 at 3:40 PM Kaleb Keithley  wrote:
>>
>>
>> On Wed, Sep 22, 2021 at 7:51 AM Taste-Of-IT  wrote:
>>>
>>> Hi,
>>>
>>> i installed fresh Debian 11 stable and use GlusterFS latest sources. At 
>>> installing glusterfs-server i got error missing libreadline7 Paket, which 
>>> is not in Debian 11.
>>>
>>> Is GF 9 not Debian 11 ready?
>>
>>
>> Our Debian 11 box has readline-common 8.1-1 and libreadline8 8.1-1 and 
>> glusterfs 9 builds fine for us.
>>
>> What "latest sources" are you using?
>>
>> --
>>
>> Kaleb
>> 
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://meet.google.com/cpu-eiue-hvk
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9 and Debian 11

2021-09-27 Thread Eliyahu Rosenberg
Since it seems there are after all some Debian (/debian based) users on
this list, can I hijack this thread just a bit and ask about ganesha and
glusterfs?
Is that not built for Debian or is it included in the main package?

I ask because, as far as I can tell, the docs on setting up gluster+ganesha
refer to rpms that don't seem to have deb equivalents, and commands referenced
in the docs also don't seem to exist for me.

Thanks!
Eli

On Wed, Sep 22, 2021 at 3:40 PM Kaleb Keithley  wrote:

>
> On Wed, Sep 22, 2021 at 7:51 AM Taste-Of-IT 
> wrote:
>
>> Hi,
>>
>> i installed fresh Debian 11 stable and use GlusterFS latest sources. At
>> installing glusterfs-server i got error missing libreadline7 Paket, which
>> is not in Debian 11.
>>
>> Is GF 9 not Debian 11 ready?
>>
>
> Our Debian 11 box has readline-common 8.1-1 and libreadline8 8.1-1 and
> glusterfs 9 builds fine for us.
>
> What "latest sources" are you using?
>
> --
>
> Kaleb
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9 and Debian 11

2021-09-22 Thread Kaleb Keithley
On Wed, Sep 22, 2021 at 7:51 AM Taste-Of-IT  wrote:

> Hi,
>
> i installed fresh Debian 11 stable and use GlusterFS latest sources. At
> installing glusterfs-server i got error missing libreadline7 Paket, which
> is not in Debian 11.
>
> Is GF 9 not Debian 11 ready?
>

Our Debian 11 box has readline-common 8.1-1 and libreadline8 8.1-1 and
glusterfs 9 builds fine for us.

What "latest sources" are you using?

-- 

Kaleb




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9 and Debian 11

2021-09-22 Thread Michael Böhm
Am Mi., 22. Sept. 2021 um 13:51 Uhr schrieb Taste-Of-IT <
kont...@taste-of-it.de>:

> Hi,
>
> i installed fresh Debian 11 stable and use GlusterFS latest sources. At
> installing glusterfs-server i got error missing libreadline7 Paket, which
> is not in Debian 11.
>
Hey,

what is the exact repo you used? Looking at the Packages-file [1] - the
glusterfs-server version from the bullseye repo of glusterfs should depend
on "libreadline8 (>= 6.0)"

[1]
https://download.gluster.org/pub/gluster/glusterfs/9/LATEST/Debian/bullseye/amd64/apt/dists/bullseye/main/binary-amd64/Packages




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs nfs mounts not showing directories

2021-09-07 Thread John Cholewa
Update to this: the problem was resolved when I explicitly mounted with
nfsvers=3. I may come back to this to see if there's a reason why it's
behaving like that with NFSv4, but I'll need to deal with the fallout for a
while first.
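
For reference, the workaround boils down to forcing NFSv3 on the client, e.g.
(the server name, export path and mount point below are placeholders, not the
exact names from this setup):

mount -t nfs -o nfsvers=3 yuzz:/gv0 /mnt

or, as an fstab entry:

yuzz:/gv0  /mnt  nfs  nfsvers=3,_netdev,nofail  0  0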

On Mon, Sep 6, 2021 at 2:37 PM John Cholewa  wrote:
>
> My distributed volume had an issue on Friday which required a reboot
> of the primary node. After this, I'm having a strange issue: When I
> have the volume mounted via ganesha-nfs, using either the primary node
> itself or a random workstation on the network, I'm seeing files from
> both volumes, but I'm not seeing any directories at all. It's just a
> listing of the files. But I *can* list the contents of a directory if
> I know it exists. Similarly, that will show the files (in both nodes)
> of that directory, but it will show no subdirectories. Example:
>
> $ ls -F /mnt
> flintstone/
>
> $ ls -F /mnt/flintstone
> test test1 test2 test3
>
> $ ls -F /mnt/flintstone/wilma
> file1 file2 file3
>
> I've tried restarting glusterd on both nodes and rebooting the other
> node as well. Mount options in fstab are defaults,_netdev,nofail. I
> tried temporarily disabling the firewall in case that was a
> contributing factor.
>
> This has been working pretty well for over two years, and it's
> survived system updates and reboots on the nodes, and there hasn't
> been a recent software update that would have triggered this. The data
> itself appears to be fine. 'gluster peer status' on each node shows
> that the other is connected.
>
> What's a good way to further troubleshoot this or to tell gluster to
> figure itself out?  Would "gluster volume reset"  bring the
> configuration to its original state without damaging the data in the
> bricks?  Is there something I should look out for in the logs that
> might give a clue?
>
> Outputs:
>
> # lsb_release -a
> No LSB modules are available.
> Distributor ID: Ubuntu
> Description:Ubuntu 18.04.4 LTS
> Release:18.04
> Codename:   bionic
>
>
> # gluster --version
> glusterfs 7.5
> Repository revision: git://git.gluster.org/glusterfs.git
> Copyright (c) 2006-2016 Red Hat, Inc. 
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> It is licensed to you under your choice of the GNU Lesser
> General Public License, version 3 or any later version (LGPLv3
> or later), or the GNU General Public License, version 2 (GPLv2),
> in all cases as published by the Free Software Foundation.
>
>
> # gluster volume status
> Status of volume: gv0
> Gluster process TCP Port  RDMA Port  Online  Pid
> --
> Brick yuzz:/gfs/brick1/gv0  N/A   N/AY   2909
> Brick wum:/gfs/brick1/gv0   49152 0  Y   2885
>
> Task Status of Volume gv0
> --
> There are no active volume tasks
>
>
> # gluster volume info
> Volume Name: gv0
> Type: Distribute
> Volume ID: dcfdeed9-8fe9-4047-b18a-1a908f003d7f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: yuzz:/gfs/brick1/gv0
> Brick2: wum:/gfs/brick1/gv0
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> features.cache-invalidation: on
> cluster.readdir-optimize: off
> performance.parallel-readdir: off
> performance.cache-size: 8GB
> network.inode-lru-limit: 100
> performance.nfs.stat-prefetch: off
>
>
> # gluster pool list
> UUIDHostnameState
> 4b84240e-e73a-46da-9271-72f6001a8e18wum Connected
> 7de76707-cd99-4916-9c6b-ac6f26bda373localhost   Connected
>
>
> Output of gluster get-state:
> >
> [Global]
> MYUUID: 7de76707-cd99-4916-9c6b-ac6f26bda373
> op-version: 31302
>
> [Global options]
>
> [Peers]
> Peer1.primary_hostname: wum
> Peer1.uuid: 4b84240e-e73a-46da-9271-72f6001a8e18
> Peer1.state: Peer in Cluster
> Peer1.connected: Connected
> Peer1.othernames:
>
> [Volumes]
> Volume1.name: gv0
> Volume1.id: dcfdeed9-8fe9-4047-b18a-1a908f003d7f
> Volume1.type: Distribute
> Volume1.transport_type: tcp
> Volume1.status: Started
> Volume1.brickcount: 2
> Volume1.Brick1.path: yuzz:/gfs/brick1/gv0
> Volume1.Brick1.hostname: yuzz
> Volume1.Brick1.port: 0
> Volume1.Brick1.rdma_port: 0
> Volume1.Brick1.status: Started
> Volume1.Brick1.spacefree: 72715274395648Bytes
> Volume1.Brick1.spacetotal: 196003244277760Bytes
> Volume1.Brick2.path: wum:/gfs/brick1/gv0
> Volume1.Brick2.hostname: wum
> Volume1.snap_count: 0
> Volume1.stripe_count: 1
> Volume1.replica_count: 1
> Volume1.subvol_count: 2
> Volume1.arbiter_count: 0
> Volume1.disperse_count: 0
> Volume1.redundancy_count: 0
> Volume1.quorum_status: not_applicable
> Volume1.snapd_svc.online_status: Offline
> Volume1.snapd_svc.inited: True
> Volume1.rebalance.id: ----
> Volume1.rebalance.status: not_st

Re: [Gluster-users] glusterfs volume not mounted at boot time (RedHat 8.4)

2021-08-28 Thread Dario Lesca
Il giorno ven, 27/08/2021 alle 23.34 +0300, Alex K ha scritto:
> On Fri, Aug 27, 2021, 18:25 Dario Lesca  wrote:
> > Thanks Andreas,
> > I will follow your suggestion to check the order of services via
> > systemd
> > For now, I can't substitute RedHat (I use Rocky Linux, not RedHat)
> >  with Debian or Ubuntu.
> I would go with the systemd route for better control of the
> dependencies. You can create a custom systemd service which will
> mount the glysterfs volume only if glusterd and libvirt is started. I
> have had also very good results with pacemaker/crosync in case of
> cluster setups.

Thanks, I will try this way and let you know.

Many thanks
Dario


> > 
> > Many thanks
> > Dario
> > 
> > Il giorno ven, 27/08/2021 alle 15.19 +, a.schwi...@gmx.net ha
> > scritto:
> > > Hey Dario,
> > > 
> > > 
> > > I also have libvirtd running. No problems on Ubuntu dists,
> > > everything is started/mounted in correct order, but can't
> > > recommend on RedHat.
> > > I'd look at chkconfig man-page, you could edit boot order of your
> > > services if necessary.
> > > 
> > > 
> > > Andreas
> > > 
> > > 
> > > "Dario Lesca" d.le...@solinos.it – 27. August 2021 17:12
> > > > Thanks Andreas.
> > > > 
> > > > I have try to remove only the "noauto" in my fstab line but
> > > > none is change.
> > > > 
> > > > Then I have follow you suggest and I try to leave only
> > > > "defaults,_netdev"
> > > > In this case the volume after reboot is mounted.
> > > > Good!
> > > > 
> > > > But my problem is that this volume must mount before libvirtd
> > > > start, or maybe it is better to say, libvirtd must start after
> > > > "glusterd is start and volume is mount".
> > > > This is why I added those x-systemd directives
> > > > 
> > > > There is some solution to this issue?
> > > > 
> > > > Many thanks
> > > > Dario
> > > > 
> > > > 
> > > > 
> > > > Il giorno ven, 27/08/2021 alle 14.53 +,
> > > > a.schwi...@gmx.net ha scritto:
> > > > > Dario,
> > > > > 
> > > > > 
> > > > > Your fstab line includes mount option "noauto" so it won't
> > > > > automatically mount on boot!?
> > > > > 
> > > > > Try to remove noauto, reboot.
> > > > > 
> > > > > I usually only mount local gluster mount point with
> > > > > defaults,_netdev
> > > > > 
> > > > > 
> > > > > 
> > > > > Cheers
> > > > > 
> > > > > 
> > > > > Andreas
> > > > > 
> > > > > 
> > > > > 
> > > > > "Dario Lesca" d.le...@solinos.it– 27. August 2021 16:38
> > > > > 
> > > > > > Hello everybody.
> > > > > > 
> > > > > > 
> > > > > > I have setup a glusterfs volume without problem, all work
> > > > > > fine.
> > > > > > 
> > > > > > But if I reboot a node, when the node start the volume is
> > > > > > not mounted.
> > > > > > 
> > > > > > If I access to the node via SSH and run "mount /virt-gfs/"
> > > > > > the volume
> > > > > > 
> > > > > > is mounted correctly.
> > > > > > 
> > > > > > 
> > > > > > This is the /etc/fstab entry:
> > > > > > 
> > > > > > virt2.local:/gfsvol1 /virt-gfs glusterfs
> > > > > > defaults,_netdev,noauto,x-systemd.automount,x-
> > > > > > systemd.device-timeout=120,x-
> > > > > > systemd.requires=glusterd.service,x-
> > > > > > systemd.before=libvirtd.service 0 0
> > > > > > 
> > > > > > 
> > > > > > For testing, I have also set SElinux to "permissive" but
> > > > > > none is
> > > > > > 
> > > > > > change.
> > > > > > 
> > > > > > 
> > > > > > Someone can help me?
> > > > > > 
> > > > > > 
> > > > > > If you need some other info, let me know
> > > > > > 
> > > > > > 
> > > > > > Many thanks
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > 
> > 
> > 
> > 
> > 
> > Community Meeting Calendar:
> > 
> > Schedule -
> > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> > Bridge: https://meet.google.com/cpu-eiue-hvk
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs volume not mounted at boot time (RedHat 8.4)

2021-08-27 Thread Alex K
On Fri, Aug 27, 2021, 18:25 Dario Lesca  wrote:

> Thanks Andreas,
> I will follow your suggestion to check the order of services via systemd
> For now, I can't substitute RedHat (I use Rocky Linux, not RedHat)  with
> Debian or Ubuntu.
>
I would go with the systemd route for better control of the dependencies.
You can create a custom systemd unit which mounts the glusterfs volume only
after glusterd has started (and before libvirtd). I have also had very good
results with pacemaker/corosync for cluster setups.
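
One way to do that with a native mount unit instead of a wrapper service is
roughly the following (untested sketch; the What/Where values are taken from
your fstab line, and the fstab entry itself should then be removed or
commented out so the two definitions don't conflict - note that the unit file
name must match the mount point, so /virt-gfs becomes virt-gfs.mount):

# /etc/systemd/system/virt-gfs.mount
[Unit]
Requires=glusterd.service
After=glusterd.service network-online.target
Before=libvirtd.service

[Mount]
What=virt2.local:/gfsvol1
Where=/virt-gfs
Type=glusterfs
Options=defaults,_netdev

[Install]
WantedBy=multi-user.target

# then:
systemctl daemon-reload
systemctl enable --now virt-gfs.mount
# and optionally make libvirtd wait for the mount via a drop-in:
systemctl edit libvirtd.service
# add in the drop-in:
# [Unit]
# RequiresMountsFor=/virt-gfs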

>
> Many thanks
> Dario
>
> Il giorno ven, 27/08/2021 alle 15.19 +, a.schwi...@gmx.net ha scritto:
>
> Hey Dario,
>
>
> I also have libvirtd running. No problems on Ubuntu dists, everything is
> started/mounted in correct order, but can't recommend on RedHat.
> I'd look at chkconfig man-page, you could edit boot order of your services
> if necessary.
>
>
> Andreas
>
>
> "Dario Lesca" d.le...@solinos.it – 27. August 2021 17:12
>
> Thanks Andreas.
>
> I have try to remove only the "noauto" in my fstab line but none is change.
>
> Then I have follow you suggest and I try to leave only "defaults,_netdev"
> In this case the volume after reboot is mounted.
> Good!
>
> But my problem is that this volume must mount before libvirtd start, or
> maybe it is better to say, libvirtd must start after "glusterd is start and
> volume is mount".
> This is why I added those x-systemd directives
>
> There is some solution to this issue?
>
> Many thanks
> Dario
>
>
>
> Il giorno ven, 27/08/2021 alle 14.53 +, a.schwi...@gmx.net ha scritto:
>
> Dario,
>
>
> Your fstab line includes mount option "noauto" so it won't automatically
> mount on boot!?
>
> Try to remove noauto, reboot.
>
> I usually only mount local gluster mount point with defaults,_netdev
>
>
>
> Cheers
>
>
> Andreas
>
>
>
> "Dario Lesca" d.le...@solinos.it– 27. August 2021 16:38
>
> Hello everybody.
>
>
> I have setup a glusterfs volume without problem, all work fine.
>
> But if I reboot a node, when the node start the volume is not mounted.
>
> If I access to the node via SSH and run "mount /virt-gfs/" the volume
>
> is mounted correctly.
>
>
> This is the /etc/fstab entry:
>
> virt2.local:/gfsvol1 /virt-gfs glusterfs
> defaults,_netdev,noauto,x-systemd.automount,x-systemd.device-timeout=120,x-systemd.requires=glusterd.service,x-systemd.before=libvirtd.service
> 0 0
>
>
> For testing, I have also set SElinux to "permissive" but none is
>
> change.
>
>
> Someone can help me?
>
>
> If you need some other info, let me know
>
>
> Many thanks
>
>
>
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs volume not mounted at boot time (RedHat 8.4)

2021-08-27 Thread Dario Lesca
Thanks Andreas,
I will follow your suggestion to check the order of services via
systemd
For now, I can't substitute RedHat (I use Rocky Linux, not RedHat)
 with Debian or Ubuntu.

Many thanks
Dario

Il giorno ven, 27/08/2021 alle 15.19 +, a.schwi...@gmx.net ha
scritto:
> Hey Dario,
> 
> 
> I also have libvirtd running. No problems on Ubuntu dists, everything
> is started/mounted in correct order, but can't recommend on RedHat.
> I'd look at chkconfig man-page, you could edit boot order of your
> services if necessary.
> 
> 
> Andreas
> 
> 
> "Dario Lesca" d.le...@solinos.it – 27. August 2021 17:12
> > Thanks Andreas.
> > 
> > I have try to remove only the "noauto" in my fstab line but none is
> > change.
> > 
> > Then I have follow you suggest and I try to leave only
> > "defaults,_netdev"
> > In this case the volume after reboot is mounted.
> > Good!
> > 
> > But my problem is that this volume must mount before libvirtd
> > start, or maybe it is better to say, libvirtd must start after
> > "glusterd is start and volume is mount".
> > This is why I added those x-systemd directives
> > 
> > There is some solution to this issue?
> > 
> > Many thanks
> > Dario
> > 
> > 
> > 
> > Il giorno ven, 27/08/2021 alle 14.53 +, a.schwi...@gmx.net ha
> > scritto:
> > > Dario,
> > > 
> > > 
> > > Your fstab line includes mount option "noauto" so it won't
> > > automatically mount on boot!?
> > > 
> > > Try to remove noauto, reboot.
> > > 
> > > I usually only mount local gluster mount point with
> > > defaults,_netdev
> > > 
> > > 
> > > 
> > > Cheers
> > > 
> > > 
> > > Andreas
> > > 
> > > 
> > > 
> > > "Dario Lesca" d.le...@solinos.it– 27. August 2021 16:38
> > > 
> > > > Hello everybody.
> > > > 
> > > > 
> > > > I have setup a glusterfs volume without problem, all work fine.
> > > > 
> > > > But if I reboot a node, when the node start the volume is not
> > > > mounted.
> > > > 
> > > > If I access to the node via SSH and run "mount /virt-gfs/" the
> > > > volume
> > > > 
> > > > is mounted correctly.
> > > > 
> > > > 
> > > > This is the /etc/fstab entry:
> > > > 
> > > > virt2.local:/gfsvol1 /virt-gfs glusterfs
> > > > defaults,_netdev,noauto,x-systemd.automount,x-systemd.device-
> > > > timeout=120,x-systemd.requires=glusterd.service,x-
> > > > systemd.before=libvirtd.service 0 0
> > > > 
> > > > 
> > > > For testing, I have also set SElinux to "permissive" but none
> > > > is
> > > > 
> > > > change.
> > > > 
> > > > 
> > > > Someone can help me?
> > > > 
> > > > 
> > > > If you need some other info, let me know
> > > > 
> > > > 
> > > > Many thanks
> > > > 
> > > > 
> > > > 
> > > > 




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs volume not mounted at boot time (RedHat 8.4)

2021-08-27 Thread a . schwibbe
Hey Dario,


I also have libvirtd running. No problems on Ubuntu dists, everything is
started/mounted in the correct order, but I can't speak for RedHat.
I'd look at the chkconfig man page; you could edit the boot order of your
services if necessary.


Andreas


"Dario Lesca" d.le...@solinos.it – 27. August 2021 17:12
> Thanks Andreas.
>
> I have try to remove only the "noauto" in my fstab line but none is change.
>
> Then I have follow you suggest and I try to leave only "defaults,_netdev"
> In this case the volume after reboot is mounted.
> Good!
>
> But my problem is that this volume must mount before libvirtd start, or maybe 
> it is better to say, libvirtd must start after "glusterd is start and volume 
> is mount".
> This is why I added those x-systemd directives
>
> There is some solution to this issue?
>
> Many thanks
> Dario
>
>
>
> Il giorno ven, 27/08/2021 alle 14.53 +, a.schwi...@gmx.net ha scritto:
> > Dario,
> >
> >
> > Your fstab line includes mount option "noauto" so it won't automatically 
> > mount on boot!?
> >
> > Try to remove noauto, reboot.
> >
> > I usually only mount local gluster mount point with defaults,_netdev
> >
> >
> >
> > Cheers
> >
> >
> > Andreas
> >
> >
> >
> > "Dario Lesca" d.le...@solinos.it– 27. August 2021 16:38
> >
> > > Hello everybody.
> > >
> > >
> > > I have setup a glusterfs volume without problem, all work fine.
> > >
> > > But if I reboot a node, when the node start the volume is not mounted.
> > >
> > > If I access to the node via SSH and run "mount /virt-gfs/" the volume
> > >
> > > is mounted correctly.
> > >
> > >
> > > This is the /etc/fstab entry:
> > >
> > > virt2.local:/gfsvol1 /virt-gfs glusterfs 
> > > defaults,_netdev,noauto,x-systemd.automount,x-systemd.device-timeout=120,x-systemd.requires=glusterd.service,x-systemd.before=libvirtd.service
> > >  0 0
> > >
> > >
> > > For testing, I have also set SElinux to "permissive" but none is
> > >
> > > change.
> > >
> > >
> > > Someone can help me?
> > >
> > >
> > > If you need some other info, let me know
> > >
> > >
> > > Many thanks
> > >
> > >
> > >
> > >




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs volume not mounted at boot time (RedHat 8.4)

2021-08-27 Thread Dario Lesca
Thanks Andreas.

I tried removing only the "noauto" from my fstab line, but nothing
changed.

Then I followed your suggestion and tried leaving only
"defaults,_netdev".
In this case the volume is mounted after a reboot.
Good!

But my problem is that this volume must be mounted before libvirtd starts,
or, better said, libvirtd must start only after glusterd is running and the
volume is mounted.
This is why I added those x-systemd directives.

Is there a solution to this issue?

Many thanks
Dario


Il giorno ven, 27/08/2021 alle 14.53 +, a.schwi...@gmx.net ha
scritto:
> Dario,
> 
> Your fstab line includes mount option "noauto" so it won't
> automatically mount on boot!?
> Try to remove noauto, reboot.
> I usually only mount local gluster mount point with defaults,_netdev
> 
> 
> Cheers
> 
> Andreas
> 
> 
> "Dario Lesca" d.le...@solinos.it – 27. August 2021 16:38
> > Hello everybody.
> > 
> > I have setup a glusterfs volume without problem, all work fine.
> > But if I reboot a node, when the node start the volume is not
> > mounted.
> > If I access to the node via SSH and run "mount /virt-gfs/" the
> > volume
> > is mounted correctly.
> > 
> > This is the /etc/fstab entry:
> > virt2.local:/gfsvol1 /virt-gfs glusterfs defaults,_netdev,noauto,x-
> > systemd.automount,x-systemd.device-timeout=120,x-
> > systemd.requires=glusterd.service,x-systemd.before=libvirtd.service
> > 0 0
> > 
> > For testing, I have also set SElinux to "permissive" but none is
> > change.
> > 
> > Someone can help me?
> > 
> > If you need some other info, let me know
> > 
> > Many thanks
> > 
> > 




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs volume not mounted at boot time (RedHat 8.4)

2021-08-27 Thread a . schwibbe
Dario,

Your fstab line includes mount option "noauto" so it won't automatically mount 
on boot!?
Try to remove noauto, reboot.
I usually only mount local gluster mount point with defaults,_netdev


Cheers

Andreas


"Dario Lesca" d.le...@solinos.it – 27. August 2021 16:38
> Hello everybody.
>
> I have setup a glusterfs volume without problem, all work fine.
> But if I reboot a node, when the node start the volume is not mounted.
> If I access to the node via SSH and run "mount /virt-gfs/" the volume
> is mounted correctly.
>
> This is the /etc/fstab entry:
> virt2.local:/gfsvol1 /virt-gfs glusterfs 
> defaults,_netdev,noauto,x-systemd.automount,x-systemd.device-timeout=120,x-systemd.requires=glusterd.service,x-systemd.before=libvirtd.service
>  0 0
>
> For testing, I have also set SElinux to "permissive" but none is
> change.
>
> Someone can help me?
>
> If you need some other info, let me know
>
> Many thanks
>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs health-check failed, (brick) going down

2021-07-08 Thread Olaf Buitelaar
Hi Jiri,

Unfortunately I don't know a solution to fix it, other than what I already
mentioned, which doesn't seem to be applicable to your specific setup.
I don't think it's oVirt related (I'm running oVirt myself as well, but am
stuck at 4.3 atm, since CentOS 7 is not supported for 4.4).
If memory serves me well, I believe I started seeing this issue after
upgrading from glusterfs 3.12 to 4.x (I believe this went together with the
upgrade from oVirt 4.1 to 4.2), and I have observed it in every version
since; I'm currently running 7.9.
It would be nice to get to the bottom of this. I'm still not 100% sure
whether it's even a glusterfs issue, or whether something is wrong with XFS
or somewhere else in the IO stack. But I don't know what the next debugging
steps could be.
Just as a side note, I've also observed this issue on systems without LVM
cache.
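
(For reference, the brick-multiplexing workaround mentioned earlier in the
thread amounts to something like the following; it is a cluster-wide option,
so check the current value first, and restarting the affected brick with
"force" is done per volume:)

gluster volume get all cluster.brick-multiplex
gluster volume set all cluster.brick-multiplex off
gluster volume start <volname> force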

Cheers Olaf


Op do 8 jul. 2021 om 16:53 schreef Jiří Sléžka :

> Hi Olaf,
>
> thanks for reply.
>
> On 7/8/21 3:29 PM, Olaf Buitelaar wrote:
> > Hi Jiri,
> >
> > your probleem looks pretty similar to mine, see;
> >
> https://lists.gluster.org/pipermail/gluster-users/2021-February/039134.html
> > <
> https://lists.gluster.org/pipermail/gluster-users/2021-February/039134.html
> >
> > Any chance you also see the xfs errors in de brick logs?
>
> yes, I can see this log lines related to "health-check failed" items
>
> [root@ovirt-hci02 ~]# grep "aio_read" /var/log/glusterfs/bricks/*
> /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
> 07:13:37.408010] W [MSGID: 113075]
> [posix-helpers.c:2135:posix_fs_health_check] 0-vms-posix:
> aio_read_cmp_buf() on /gluster_bricks/vms2/vms2/.glusterfs/health_check
> returned ret is -1 error is Structure needs cleaning
> /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
> 16:11:14.518844] W [MSGID: 113075]
> [posix-helpers.c:2135:posix_fs_health_check] 0-vms-posix:
> aio_read_cmp_buf() on /gluster_bricks/vms2/vms2/.glusterfs/health_check
> returned ret is -1 error is Structure needs cleaning
>
> [root@ovirt-hci01 ~]# grep "aio_read" /var/log/glusterfs/bricks/*
> /var/log/glusterfs/bricks/gluster_bricks-engine-engine.log:[2021-07-05
> 13:15:51.982938] W [MSGID: 113075]
> [posix-helpers.c:2135:posix_fs_health_check] 0-engine-posix:
> aio_read_cmp_buf() on
> /gluster_bricks/engine/engine/.glusterfs/health_check returned ret is -1
> error is Structure needs cleaning
> /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-05
> 01:53:35.768534] W [MSGID: 113075]
> [posix-helpers.c:2135:posix_fs_health_check] 0-vms-posix:
> aio_read_cmp_buf() on /gluster_bricks/vms2/vms2/.glusterfs/health_check
> returned ret is -1 error is Structure needs cleaning
>
> it looks very similar to your issue but in my case I don't use LVM cache
> and brick disks are JBOD (but connected through Broadcom / LSI MegaRAID
> SAS-3 3008 [Fury] (rev 02)).
>
> > For me the situation improved once i disabled brick multiplexing, but i
> > don't see that in your volume configuration.
>
> probably important is your note...
>
> > When I kill the brick process and start with "gluster v start x force" the
> > issue seems much more unlikely to occur, but when started from a fresh
> > reboot, or when killing the process and let it being started by glusterd
> > (e.g. service glusterd start) the error seems to arise after a couple of
> > minutes.
>
> ...because in the ovirt list Jayme replied this
>
>
> https://lists.ovirt.org/archives/list/us...@ovirt.org/message/BZRONK53OGWSOPUSGQ76GIXUM7J6HHMJ/
>
> and it looks to me like something you also observes.
>
> Cheers, Jiri
>
> >
> > Cheers Olaf
> >
> > Op do 8 jul. 2021 om 12:28 schreef Jiří Sléžka  > >:
> >
> > Hello gluster community,
> >
> > I am new to this list but using glusterfs for log time as our SDS
> > solution for storing 80+TiB of data. I'm also using glusterfs for
> small
> > 3 node HCI cluster with oVirt 4.4.6 and CentOS 8 (not stream yet).
> > Glusterfs version here is 8.5-2.el8.x86_64.
> >
> > For time to time (I belive) random brick on random host goes down
> > because health-check. It looks like
> >
> > [root@ovirt-hci02 ~]# grep "posix_health_check"
> > /var/log/glusterfs/bricks/*
> > /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
> > 07:13:37.408184] M [MSGID: 113075]
> > [posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix:
> > health-check failed, going down
> > /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
> > 07:13:37.408407] M [MSGID: 113075]
> > [posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix:
> > still
> > alive! -> SIGTERM
> > /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
> > 16:11:14.518971] M [MSGID: 113075]
> > [posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix:
> > health-check failed, going down
> > /var/log/glusterfs/bricks/gluster_bricks-

Re: [Gluster-users] glusterfs health-check failed, (brick) going down

2021-07-08 Thread Jiří Sléžka

Hi Olaf,

thanks for the reply.

On 7/8/21 3:29 PM, Olaf Buitelaar wrote:

Hi Jiri,

your problem looks pretty similar to mine, see; 
https://lists.gluster.org/pipermail/gluster-users/2021-February/039134.html 


Any chance you also see the xfs errors in the brick logs?


yes, I can see these log lines related to the "health-check failed" items

[root@ovirt-hci02 ~]# grep "aio_read" /var/log/glusterfs/bricks/*
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07 
07:13:37.408010] W [MSGID: 113075] 
[posix-helpers.c:2135:posix_fs_health_check] 0-vms-posix: 
aio_read_cmp_buf() on /gluster_bricks/vms2/vms2/.glusterfs/health_check 
returned ret is -1 error is Structure needs cleaning
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07 
16:11:14.518844] W [MSGID: 113075] 
[posix-helpers.c:2135:posix_fs_health_check] 0-vms-posix: 
aio_read_cmp_buf() on /gluster_bricks/vms2/vms2/.glusterfs/health_check 
returned ret is -1 error is Structure needs cleaning


[root@ovirt-hci01 ~]# grep "aio_read" /var/log/glusterfs/bricks/*
/var/log/glusterfs/bricks/gluster_bricks-engine-engine.log:[2021-07-05 
13:15:51.982938] W [MSGID: 113075] 
[posix-helpers.c:2135:posix_fs_health_check] 0-engine-posix: 
aio_read_cmp_buf() on 
/gluster_bricks/engine/engine/.glusterfs/health_check returned ret is -1 
error is Structure needs cleaning
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-05 
01:53:35.768534] W [MSGID: 113075] 
[posix-helpers.c:2135:posix_fs_health_check] 0-vms-posix: 
aio_read_cmp_buf() on /gluster_bricks/vms2/vms2/.glusterfs/health_check 
returned ret is -1 error is Structure needs cleaning


It looks very similar to your issue, but in my case I don't use LVM cache 
and the brick disks are JBOD (though connected through a Broadcom / LSI MegaRAID 
SAS-3 3008 [Fury] (rev 02)).


For me the situation improved once I disabled brick multiplexing, but I 
don't see that in your volume configuration.


probably important is your note...


When I kill the brick process and start with "gluster v start x force" the
issue seems much more unlikely to occur, but when started from a fresh
reboot, or when killing the process and let it being started by glusterd
(e.g. service glusterd start) the error seems to arise after a couple of
minutes.


...because in the ovirt list Jayme replied this

https://lists.ovirt.org/archives/list/us...@ovirt.org/message/BZRONK53OGWSOPUSGQ76GIXUM7J6HHMJ/

and it looks to me like something you also observe.

Cheers, Jiri



Cheers Olaf

Op do 8 jul. 2021 om 12:28 schreef Jiří Sléžka >:


Hello gluster community,

I am new to this list but using glusterfs for log time as our SDS
solution for storing 80+TiB of data. I'm also using glusterfs for small
3 node HCI cluster with oVirt 4.4.6 and CentOS 8 (not stream yet).
Glusterfs version here is 8.5-2.el8.x86_64.

For time to time (I belive) random brick on random host goes down
because health-check. It looks like

[root@ovirt-hci02 ~]# grep "posix_health_check"
/var/log/glusterfs/bricks/*
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
07:13:37.408184] M [MSGID: 113075]
[posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix:
health-check failed, going down
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
07:13:37.408407] M [MSGID: 113075]
[posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix:
still
alive! -> SIGTERM
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
16:11:14.518971] M [MSGID: 113075]
[posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix:
health-check failed, going down
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
16:11:14.519200] M [MSGID: 113075]
[posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix:
still
alive! -> SIGTERM

on other host

[root@ovirt-hci01 ~]# grep "posix_health_check"
/var/log/glusterfs/bricks/*
/var/log/glusterfs/bricks/gluster_bricks-engine-engine.log:[2021-07-05
13:15:51.983327] M [MSGID: 113075]
[posix-helpers.c:2214:posix_health_check_thread_proc] 0-engine-posix:
health-check failed, going down
/var/log/glusterfs/bricks/gluster_bricks-engine-engine.log:[2021-07-05
13:15:51.983728] M [MSGID: 113075]
[posix-helpers.c:2232:posix_health_check_thread_proc] 0-engine-posix:
still alive! -> SIGTERM
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-05
01:53:35.769129] M [MSGID: 113075]
[posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix:
health-check failed, going down
/var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-05
01:53:35.769819] M [MSGID: 113075]
[posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix:
still
alive! -> SIGTERM

I cannot link these errors to any storage/fs issue (in dmesg or
/var/log/messages), brick devices look healthy (smartd).

Re: [Gluster-users] glusterfs health-check failed, (brick) going down

2021-07-08 Thread Olaf Buitelaar
Hi Jiri,

your problem looks pretty similar to mine, see;
https://lists.gluster.org/pipermail/gluster-users/2021-February/039134.html
Any chance you also see the xfs errors in the brick logs?
For me the situation improved once I disabled brick multiplexing, but I
don't see that in your volume configuration.

Cheers Olaf

Op do 8 jul. 2021 om 12:28 schreef Jiří Sléžka :

> Hello gluster community,
>
> I am new to this list but using glusterfs for log time as our SDS
> solution for storing 80+TiB of data. I'm also using glusterfs for small
> 3 node HCI cluster with oVirt 4.4.6 and CentOS 8 (not stream yet).
> Glusterfs version here is 8.5-2.el8.x86_64.
>
> For time to time (I belive) random brick on random host goes down
> because health-check. It looks like
>
> [root@ovirt-hci02 ~]# grep "posix_health_check"
> /var/log/glusterfs/bricks/*
> /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
> 07:13:37.408184] M [MSGID: 113075]
> [posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix:
> health-check failed, going down
> /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
> 07:13:37.408407] M [MSGID: 113075]
> [posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix: still
> alive! -> SIGTERM
> /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
> 16:11:14.518971] M [MSGID: 113075]
> [posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix:
> health-check failed, going down
> /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-07
> 16:11:14.519200] M [MSGID: 113075]
> [posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix: still
> alive! -> SIGTERM
>
> on other host
>
> [root@ovirt-hci01 ~]# grep "posix_health_check"
> /var/log/glusterfs/bricks/*
> /var/log/glusterfs/bricks/gluster_bricks-engine-engine.log:[2021-07-05
> 13:15:51.983327] M [MSGID: 113075]
> [posix-helpers.c:2214:posix_health_check_thread_proc] 0-engine-posix:
> health-check failed, going down
> /var/log/glusterfs/bricks/gluster_bricks-engine-engine.log:[2021-07-05
> 13:15:51.983728] M [MSGID: 113075]
> [posix-helpers.c:2232:posix_health_check_thread_proc] 0-engine-posix:
> still alive! -> SIGTERM
> /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-05
> 01:53:35.769129] M [MSGID: 113075]
> [posix-helpers.c:2214:posix_health_check_thread_proc] 0-vms-posix:
> health-check failed, going down
> /var/log/glusterfs/bricks/gluster_bricks-vms2-vms2.log:[2021-07-05
> 01:53:35.769819] M [MSGID: 113075]
> [posix-helpers.c:2232:posix_health_check_thread_proc] 0-vms-posix: still
> alive! -> SIGTERM
>
> I cannot link these errors to any storage/fs issue (in dmesg or
> /var/log/messages), brick devices looks healthy (smartd).
>
> I can force start brick with
>
> gluster volume start vms|engine force
>
> and after some healing all works fine for few days
>
> Did anybody observe this behavior?
>
> vms volume has this structure (two bricks per host, each is separate
> JBOD ssd disk), engine volume has one brick on each host...
>
> gluster volume info vms
>
> Volume Name: vms
> Type: Distributed-Replicate
> Volume ID: 52032ec6-99d4-4210-8fb8-ffbd7a1e0bf7
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x 3 = 6
> Transport-type: tcp
> Bricks:
> Brick1: 10.0.4.11:/gluster_bricks/vms/vms
> Brick2: 10.0.4.13:/gluster_bricks/vms/vms
> Brick3: 10.0.4.12:/gluster_bricks/vms/vms
> Brick4: 10.0.4.11:/gluster_bricks/vms2/vms2
> Brick5: 10.0.4.13:/gluster_bricks/vms2/vms2
> Brick6: 10.0.4.12:/gluster_bricks/vms2/vms2
> Options Reconfigured:
> cluster.granular-entry-heal: enable
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> user.cifs: off
> network.ping-timeout: 30
> network.remote-dio: off
> performance.strict-o-direct: on
> performance.low-prio-threads: 32
> features.shard: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> transport.address-family: inet
> storage.fips-mode-rchecksum: on
> nfs.disable: on
> performance.client-io-threads: off
>
>
> Cheers,
>
> Jiri
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS for virtualization + RAID10 (HDD)

2021-04-17 Thread Strahil Nikolov
I would pick the approach of 1 RAID1 disk = 1 brick, combined with 
LACP (hashing over IP+port).
The fuse client's connection to each brick will then use a different 
NIC port (because LACP hashing based on IP+port will pick a different TCP port), 
and this way you will load balance and consume more bandwidth.

Also consider using cluster.choose-local, which will prefer local bricks for 
reading.
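
For reference, that option is set per volume (the volume name below is a placeholder), and on Linux the LACP hashing over IP+port corresponds to the bonding option xmit_hash_policy=layer3+4 in 802.3ad mode:

gluster volume set <volname> cluster.choose-local on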



Best Regards,
Strahil Nikolov






В петък, 16 април 2021 г., 21:28:35 ч. Гринуич+3, Gilberto Ferreira 
 написа: 





Hi there

I am about to deploy a gluster server with Proxmox VE.
Both servers have 4 SAS disks configured as RAID10.
Is RAID10 ok? The network is 4 x 1Gb NICs. I am thinking of using some kind
of bond with 3 of the 1Gb NICs. What about it?
Thanks for any advice.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS mount crash

2021-03-18 Thread David Cunningham
Hi Xavi,

Thank you for that information. We'll look at upgrading it.


On Fri, 12 Mar 2021 at 05:20, Xavi Hernandez  wrote:

> Hi David,
>
> with so little information it's hard to tell, but given that there are
> several OPEN and UNLINK operations, it could be related to an already fixed
> bug (in recent versions) in open-behind.
>
> You can try disabling open-behind with this command:
>
> # gluster volume set <volname> open-behind off
>
> But given the version you are using is very old and unmaintained, I would
> recommend you to upgrade to 8.x at least.
>
> Regards,
>
> Xavi
>
>
> On Wed, Mar 10, 2021 at 5:10 AM David Cunningham <
> dcunning...@voisonics.com> wrote:
>
>> Hello,
>>
>> We have a GlusterFS 5.13 server which also mounts itself with the native
>> FUSE client. Recently the FUSE mount crashed and we found the following in
>> the syslog. There isn't anything logged in mnt-glusterfs.log for that time.
>> After killing all processes with a file handle open on the filesystem we
>> were able to unmount and then remount the filesystem successfully.
>>
>> Would anyone have advice on how to debug this crash? Thank you in advance!
>>
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: pending frames:
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(0) op(0)
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(0) op(0)
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(UNLINK)
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(UNLINK)
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(OPEN)
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: message repeated 3355 times: [
>> frame : type(1) op(OPEN)]
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(OPEN)
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: message repeated 6965 times: [
>> frame : type(1) op(OPEN)]
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(OPEN)
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: message repeated 4095 times: [
>> frame : type(1) op(OPEN)]
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(0) op(0)
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: patchset: git://
>> git.gluster.org/glusterfs.git
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: signal received: 11
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: time of crash:
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: 2021-03-09 03:12:31
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: configuration details:
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: argp 1
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: backtrace 1
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: dlfcn 1
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: libpthread 1
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: llistxattr 1
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: setfsid 1
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: spinlock 1
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: epoll.h 1
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: xattr.h 1
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: st_atim.tv_nsec 1
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: package-string: glusterfs 5.13
>> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: -
>> ...
>> Mar 9 05:13:50 voip1 systemd[1]: glusterfssharedstorage.service: Main
>> process exited, code=killed, status=11/SEGV
>> Mar 9 05:13:50 voip1 systemd[1]: glusterfssharedstorage.service: Failed
>> with result 'signal'.
>> ...
>> Mar 9 05:13:54 voip1 systemd[1]: glusterfssharedstorage.service: Service
>> hold-off time over, scheduling restart.
>> Mar 9 05:13:54 voip1 systemd[1]: glusterfssharedstorage.service:
>> Scheduled restart job, restart counter is at 2.
>> Mar 9 05:13:54 voip1 systemd[1]: Stopped Mount glusterfs sharedstorage.
>> Mar 9 05:13:54 voip1 systemd[1]: Starting Mount glusterfs sharedstorage...
>> Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: ERROR: Mount point
>> does not exist
>> Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: Please specify a
>> mount point
>> Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: Usage:
>> Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: man 8
>> /sbin/mount.glusterfs
>>
>> --
>> David Cunningham, Voisonics Limited
>> http://voisonics.com/
>> USA: +1 213 221 1092
>> New Zealand: +64 (0)28 2558 3782
>> 
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://meet.google.com/cpu-eiue-hvk
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>

-- 
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS mount crash

2021-03-11 Thread Xavi Hernandez
Hi David,

with so little information it's hard to tell, but given that there are
several OPEN and UNLINK operations, it could be related to an already fixed
bug (in recent versions) in open-behind.

You can try disabling open-behind with this command:

# gluster volume set <volname> open-behind off

But given the version you are using is very old and unmaintained, I would
recommend you to upgrade to 8.x at least.
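
If it helps, the current value can be checked before and after changing it, e.g. (the volume name is a placeholder):

gluster volume get <volname> performance.open-behind
gluster volume set <volname> performance.open-behind off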

Regards,

Xavi


On Wed, Mar 10, 2021 at 5:10 AM David Cunningham 
wrote:

> Hello,
>
> We have a GlusterFS 5.13 server which also mounts itself with the native
> FUSE client. Recently the FUSE mount crashed and we found the following in
> the syslog. There isn't anything logged in mnt-glusterfs.log for that time.
> After killing all processes with a file handle open on the filesystem we
> were able to unmount and then remount the filesystem successfully.
>
> Would anyone have advice on how to debug this crash? Thank you in advance!
>
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: pending frames:
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(0) op(0)
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(0) op(0)
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(UNLINK)
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(UNLINK)
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(OPEN)
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: message repeated 3355 times: [
> frame : type(1) op(OPEN)]
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(OPEN)
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: message repeated 6965 times: [
> frame : type(1) op(OPEN)]
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(1) op(OPEN)
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: message repeated 4095 times: [
> frame : type(1) op(OPEN)]
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: frame : type(0) op(0)
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: patchset: git://
> git.gluster.org/glusterfs.git
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: signal received: 11
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: time of crash:
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: 2021-03-09 03:12:31
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: configuration details:
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: argp 1
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: backtrace 1
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: dlfcn 1
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: libpthread 1
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: llistxattr 1
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: setfsid 1
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: spinlock 1
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: epoll.h 1
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: xattr.h 1
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: st_atim.tv_nsec 1
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: package-string: glusterfs 5.13
> Mar 9 05:12:31 voip1 mnt-glusterfs[2932]: -
> ...
> Mar 9 05:13:50 voip1 systemd[1]: glusterfssharedstorage.service: Main
> process exited, code=killed, status=11/SEGV
> Mar 9 05:13:50 voip1 systemd[1]: glusterfssharedstorage.service: Failed
> with result 'signal'.
> ...
> Mar 9 05:13:54 voip1 systemd[1]: glusterfssharedstorage.service: Service
> hold-off time over, scheduling restart.
> Mar 9 05:13:54 voip1 systemd[1]: glusterfssharedstorage.service: Scheduled
> restart job, restart counter is at 2.
> Mar 9 05:13:54 voip1 systemd[1]: Stopped Mount glusterfs sharedstorage.
> Mar 9 05:13:54 voip1 systemd[1]: Starting Mount glusterfs sharedstorage...
> Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: ERROR: Mount point
> does not exist
> Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: Please specify a
> mount point
> Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: Usage:
> Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: man 8
> /sbin/mount.glusterfs
>
> --
> David Cunningham, Voisonics Limited
> http://voisonics.com/
> USA: +1 213 221 1092
> New Zealand: +64 (0)28 2558 3782
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs as databse store

2020-10-26 Thread Alex K
HI Strahil,

Thanx for your feedback.
I had already received your feedback, which seems to be very useful.
You had pointed at the /var/lib/glusterd/groups/db-workload profile, which
includes recommended gluster volume settings for such workloads (including
direct IO).
I will be testing this setup, though I expect no issues apart from slower
performance than a native setup.

On Sun, Oct 25, 2020 at 9:45 PM Strahil Nikolov 
wrote:

> Hey Alex,
>
> sorry for the late reply - seems you went to the SPAM dir.
>
> I think that a DB with direct I/O won't have any issues with Gluster.As a
> second thought , DBs know their data file names , so even 1 file per table
> will work quite OK.
>

> But you will need a lot of testing before putting something into
> production.
>
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> В понеделник, 12 октомври 2020 г., 21:10:03 Гринуич+3, Alex K <
> rightkickt...@gmail.com> написа:
>
>
>
>
>
>
>
> On Mon, Oct 12, 2020, 19:24 Strahil Nikolov  wrote:
> > Hi Alex,
> >
> > I can share that oVirt is using Gluster as a HCI solution and many
> people are hosting DBs in their Virtual Machines.Yet, oVirt bypasses any
> file system caches and uses Direct I/O in order to ensure consistency.
> >
> > As you will be using pacemaker, drbd is a viable solution that can be
> controlled easily.
> Thank you Strahil. I am using ovirt with glusterfs successfully for the
> last 5 years and I'm very happy about it. Though the vms gluster volume has
> sharding enabled by default and I suspect this is different if you run DB
> directly on top glusterfs. I assume there are optimizations one could apply
> at gluster volumes (use direct io?, small file workload optimizations, etc)
> and was hoping that there were success stories of DBs on top glusterfs.  I
> might go with drbd as the latest version is much more scalable and
> simplified.
> >
> > Best Regards,
> > Strahil Nikolov
> >
> >
> >
> >
> >
> >
> >
> > В понеделник, 12 октомври 2020 г., 12:12:18 Гринуич+3, Alex K <
> rightkickt...@gmail.com> написа:
> >
> >
> >
> >
> >
> >
> >
> > On Mon, Oct 12, 2020 at 9:47 AM Diego Zuccato 
> wrote:
> >> Il 10/10/20 16:53, Alex K ha scritto:
> >>
> >>> Reading from the docs i see that this is not recommended?
> >> IIUC the risk of having partially-unsynced data is is too high.
> >> DB replication is not easy to configure because it's hard to do well,
> >> even active/passive.
> >> But I can tell you that a 3-node mariadb (galera) cluster is not hard to
> >> setup. Just follow one of the tutorials. It's nearly as easy as setting
> >> up a replica3 gluster volume :)
> >> And "guarantees" consinstency in the DB data.
> > I see. Since I will not have only mariadb, then I have to setup the same
> replication for postgresql and later influxdb, which adds into the
> complexity.
> > For cluster management I will be using pacemaker/corosync.
> >
> > Thanx for your feedback
> >
> >>
> >> --
> >> Diego Zuccato
> >> DIFA - Dip. di Fisica e Astronomia
> >> Servizi Informatici
> >> Alma Mater Studiorum - Università di Bologna
> >> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> >> tel.: +39 051 20 95786
> >>
> > 
> >
> >
> >
> > Community Meeting Calendar:
> >
> > Schedule -
> > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> > Bridge: https://bluejeans.com/441850968
> >
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
> >
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs as databse store

2020-10-25 Thread Strahil Nikolov
Hey Alex,

sorry for the late reply - seems you went to the SPAM dir.

I think that a DB with direct I/O won't have any issues with Gluster. As a 
second thought, DBs know their data file names, so even 1 file per table will 
work quite OK.

But you will need a lot of testing before putting something into production.


Best Regards,
Strahil Nikolov






В понеделник, 12 октомври 2020 г., 21:10:03 Гринуич+3, Alex K 
 написа: 







On Mon, Oct 12, 2020, 19:24 Strahil Nikolov  wrote:
> Hi Alex,
> 
> I can share that oVirt is using Gluster as a HCI solution and many people are 
> hosting DBs in their Virtual Machines.Yet, oVirt bypasses any file system 
> caches and uses Direct I/O in order to ensure consistency.
> 
> As you will be using pacemaker, drbd is a viable solution that can be 
> controlled easily.
Thank you Strahil. I am using ovirt with glusterfs successfully for the last 5 
years and I'm very happy about it. Though the vms gluster volume has sharding 
enabled by default and I suspect this is different if you run DB directly on 
top glusterfs. I assume there are optimizations one could apply at gluster 
volumes (use direct io?, small file workload optimizations, etc) and was hoping 
that there were success stories of DBs on top glusterfs.  I might go with drbd 
as the latest version is much more scalable and simplified.
>  
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> 
> В понеделник, 12 октомври 2020 г., 12:12:18 Гринуич+3, Alex K 
>  написа: 
> 
> 
> 
> 
> 
> 
> 
> On Mon, Oct 12, 2020 at 9:47 AM Diego Zuccato  wrote:
>> Il 10/10/20 16:53, Alex K ha scritto:
>> 
>>> Reading from the docs i see that this is not recommended?
>> IIUC the risk of having partially-unsynced data is is too high.
>> DB replication is not easy to configure because it's hard to do well,
>> even active/passive.
>> But I can tell you that a 3-node mariadb (galera) cluster is not hard to
>> setup. Just follow one of the tutorials. It's nearly as easy as setting
>> up a replica3 gluster volume :)
>> And "guarantees" consinstency in the DB data.
> I see. Since I will not have only mariadb, then I have to setup the same 
> replication for postgresql and later influxdb, which adds into the 
> complexity. 
> For cluster management I will be using pacemaker/corosync. 
> 
> Thanx for your feedback
> 
>>  
>> -- 
>> Diego Zuccato
>> DIFA - Dip. di Fisica e Astronomia
>> Servizi Informatici
>> Alma Mater Studiorum - Università di Bologna
>> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
>> tel.: +39 051 20 95786
>> 
> 
> 
> 
> 
> Community Meeting Calendar:
> 
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
> 
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
> 




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs as databse store

2020-10-14 Thread Alex K
On Tue, Oct 13, 2020, 23:39 Gionatan Danti  wrote:

> Il 2020-10-13 21:16 Strahil Nikolov ha scritto:
> > At least it is a good start point.
>
> This can also be an interesting read:
>
> https://docs.openshift.com/container-platform/3.11/scaling_performance/optimizing_on_glusterfs_storage.html

Yes, I stumbled upon this one and it seems it will help.

>
>
> --
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.da...@assyoma.it - i...@assyoma.it
> GPG public key ID: FF5F32A8
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs as databse store

2020-10-14 Thread Alex K
On Tue, Oct 13, 2020, 22:16 Strahil Nikolov  wrote:

> It's really interesting that there is also a "profile" for databases
> available:
>
> [root@glustera groups]# cat /var/lib/glusterd/groups/db-workload
> performance.open-behind=on
> performance.write-behind=off
> performance.stat-prefetch=off
> performance.quick-read=off
> performance.strict-o-direct=on
> performance.read-ahead=off
> performance.io-cache=off
> performance.readdir-ahead=off
> performance.client-io-threads=on
> server.event-threads=4
> client.event-threads=4
> performance.read-after-open=yes
>
> At least it is a good start point.
>
Interesting indeed! Thanx

>
> Best Regards,
> Strahil Nikolov
>
>
> В вторник, 13 октомври 2020 г., 21:42:28 Гринуич+3, Alex K <
> rightkickt...@gmail.com> написа:
>
>
>
>
>
>
>
> On Mon, Oct 12, 2020, 21:50 Olaf Buitelaar 
> wrote:
> > Hi Alex,
> >
> > I've been running databases both directly and indirectly through qemu
> images vms (managed by oVirt), and since the recent gluster versions (6+,
> haven't tested 7-8) I'm generally happy with the stability. I'm running
> mostly write intensive workloads.
> > For mariadb, any gluster volume seems to workfine, i've both running
> shared and none-sharded volumes (using none-sharded for backup slave's to
> keep the file's as a whole).
> > For postgresql it's required to enable the volume
> option; performance.strict-o-direct: on.  but both shared and none-sharded
> work in that case too.
> > none the less i would advise to run any database with strict-o-direct
> on.
> Thanx Olaf for your feedback. Appreciated
> >
> > Best Olaf
> >
> >
> > Op ma 12 okt. 2020 om 20:10 schreef Alex K :
> >>
> >>
> >> On Mon, Oct 12, 2020, 19:24 Strahil Nikolov 
> wrote:
> >>> Hi Alex,
> >>>
> >>> I can share that oVirt is using Gluster as a HCI solution and many
> people are hosting DBs in their Virtual Machines.Yet, oVirt bypasses any
> file system caches and uses Direct I/O in order to ensure consistency.
> >>>
> >>> As you will be using pacemaker, drbd is a viable solution that can be
> controlled easily.
> >> Thank you Strahil. I am using ovirt with glusterfs successfully for the
> last 5 years and I'm very happy about it. Though the vms gluster volume has
> sharding enabled by default and I suspect this is different if you run DB
> directly on top glusterfs. I assume there are optimizations one could apply
> at gluster volumes (use direct io?, small file workload optimizations, etc)
> and was hoping that there were success stories of DBs on top glusterfs.  I
> might go with drbd as the latest version is much more scalable and
> simplified.
> >>>
> >>> Best Regards,
> >>> Strahil Nikolov
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> В понеделник, 12 октомври 2020 г., 12:12:18 Гринуич+3, Alex K <
> rightkickt...@gmail.com> написа:
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> On Mon, Oct 12, 2020 at 9:47 AM Diego Zuccato 
> wrote:
>  Il 10/10/20 16:53, Alex K ha scritto:
> 
> > Reading from the docs i see that this is not recommended?
>  IIUC the risk of having partially-unsynced data is is too high.
>  DB replication is not easy to configure because it's hard to do well,
>  even active/passive.
>  But I can tell you that a 3-node mariadb (galera) cluster is not hard
> to
>  setup. Just follow one of the tutorials. It's nearly as easy as
> setting
>  up a replica3 gluster volume :)
>  And "guarantees" consinstency in the DB data.
> >>> I see. Since I will not have only mariadb, then I have to setup the
> same replication for postgresql and later influxdb, which adds into the
> complexity.
> >>> For cluster management I will be using pacemaker/corosync.
> >>>
> >>> Thanx for your feedback
> >>>
> 
>  --
>  Diego Zuccato
>  DIFA - Dip. di Fisica e Astronomia
>  Servizi Informatici
>  Alma Mater Studiorum - Università di Bologna
>  V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
>  tel.: +39 051 20 95786
> 
> >>> 
> >>>
> >>>
> >>>
> >>> Community Meeting Calendar:
> >>>
> >>> Schedule -
> >>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> >>> Bridge: https://bluejeans.com/441850968
> >>>
> >>> Gluster-users mailing list
> >>> Gluster-users@gluster.org
> >>> https://lists.gluster.org/mailman/listinfo/gluster-users
> >>>
> >> 
> >>
> >>
> >>
> >> Community Meeting Calendar:
> >>
> >> Schedule -
> >> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> >> Bridge: https://bluejeans.com/441850968
> >>
> >> Gluster-users mailing list
> >> Gluster-users@gluster.org
> >> https://lists.gluster.org/mailman/listinfo/gluster-users
> >>
> >
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs as databse store

2020-10-13 Thread Strahil Nikolov
It's really interesting that there is also a "profile" for databases available:

[root@glustera groups]# cat /var/lib/glusterd/groups/db-workload  
performance.open-behind=on
performance.write-behind=off
performance.stat-prefetch=off
performance.quick-read=off
performance.strict-o-direct=on
performance.read-ahead=off
performance.io-cache=off
performance.readdir-ahead=off
performance.client-io-threads=on
server.event-threads=4
client.event-threads=4
performance.read-after-open=yes

At least it is a good start point.
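
For what it's worth, such a group file can be applied to a volume in one step (the volume name is a placeholder; the group file must exist under /var/lib/glusterd/groups/ on the nodes):

gluster volume set <volname> group db-workload
gluster volume info <volname>    # verify that the options from the group were applied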

Best Regards,
Strahil Nikolov


В вторник, 13 октомври 2020 г., 21:42:28 Гринуич+3, Alex K 
 написа: 







On Mon, Oct 12, 2020, 21:50 Olaf Buitelaar  wrote:
> Hi Alex,
> 
> I've been running databases both directly and indirectly through qemu images 
> vms (managed by oVirt), and since the recent gluster versions (6+, haven't 
> tested 7-8) I'm generally happy with the stability. I'm running mostly write 
> intensive workloads. 
> For mariadb, any gluster volume seems to workfine, i've both running shared 
> and none-sharded volumes (using none-sharded for backup slave's to keep the 
> file's as a whole). 
> For postgresql it's required to enable the volume option; 
> performance.strict-o-direct: on.  but both shared and none-sharded work in 
> that case too.
> none the less i would advise to run any database with strict-o-direct on. 
Thanx Olaf for your feedback. Appreciated 
> 
> Best Olaf
> 
> 
> Op ma 12 okt. 2020 om 20:10 schreef Alex K :
>> 
>> 
>> On Mon, Oct 12, 2020, 19:24 Strahil Nikolov  wrote:
>>> Hi Alex,
>>> 
>>> I can share that oVirt is using Gluster as a HCI solution and many people 
>>> are hosting DBs in their Virtual Machines.Yet, oVirt bypasses any file 
>>> system caches and uses Direct I/O in order to ensure consistency.
>>> 
>>> As you will be using pacemaker, drbd is a viable solution that can be 
>>> controlled easily.
>> Thank you Strahil. I am using ovirt with glusterfs successfully for the last 
>> 5 years and I'm very happy about it. Though the vms gluster volume has 
>> sharding enabled by default and I suspect this is different if you run DB 
>> directly on top glusterfs. I assume there are optimizations one could apply 
>> at gluster volumes (use direct io?, small file workload optimizations, etc) 
>> and was hoping that there were success stories of DBs on top glusterfs.  I 
>> might go with drbd as the latest version is much more scalable and 
>> simplified.
>>>  
>>> Best Regards,
>>> Strahil Nikolov
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> В понеделник, 12 октомври 2020 г., 12:12:18 Гринуич+3, Alex K 
>>>  написа: 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Mon, Oct 12, 2020 at 9:47 AM Diego Zuccato  
>>> wrote:
 Il 10/10/20 16:53, Alex K ha scritto:
 
> Reading from the docs i see that this is not recommended?
 IIUC the risk of having partially-unsynced data is is too high.
 DB replication is not easy to configure because it's hard to do well,
 even active/passive.
 But I can tell you that a 3-node mariadb (galera) cluster is not hard to
 setup. Just follow one of the tutorials. It's nearly as easy as setting
 up a replica3 gluster volume :)
 And "guarantees" consinstency in the DB data.
>>> I see. Since I will not have only mariadb, then I have to setup the same 
>>> replication for postgresql and later influxdb, which adds into the 
>>> complexity. 
>>> For cluster management I will be using pacemaker/corosync. 
>>> 
>>> Thanx for your feedback
>>> 
  
 -- 
 Diego Zuccato
 DIFA - Dip. di Fisica e Astronomia
 Servizi Informatici
 Alma Mater Studiorum - Università di Bologna
 V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
 tel.: +39 051 20 95786
 
>>> 
>>> 
>>> 
>>> 
>>> Community Meeting Calendar:
>>> 
>>> Schedule -
>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> Bridge: https://bluejeans.com/441850968
>>> 
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>> 
>> 
>> 
>> 
>> 
>> Community Meeting Calendar:
>> 
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://bluejeans.com/441850968
>> 
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>> 
> 




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs as databse store

2020-10-13 Thread Gionatan Danti

Il 2020-10-13 21:16 Strahil Nikolov ha scritto:

At least it is a good start point.


This can also be an interesting read: 
https://docs.openshift.com/container-platform/3.11/scaling_performance/optimizing_on_glusterfs_storage.html


--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs as databse store

2020-10-13 Thread Alex K
On Mon, Oct 12, 2020, 21:50 Olaf Buitelaar  wrote:

> Hi Alex,
>
> I've been running databases both directly and indirectly through qemu
> images vms (managed by oVirt), and since the recent gluster versions (6+,
> haven't tested 7-8) I'm generally happy with the stability. I'm running
> mostly write intensive workloads.
> For mariadb, any gluster volume seems to workfine, i've both running
> shared and none-sharded volumes (using none-sharded for backup slave's to
> keep the file's as a whole).
> For postgresql it's required to enable the volume
> option; performance.strict-o-direct: on.  but both shared and none-sharded
> work in that case too.
> none the less i would advise to run any database with strict-o-direct on.
>
Thanx Olaf for your feedback. Appreciated

>
> Best Olaf
>
>
> Op ma 12 okt. 2020 om 20:10 schreef Alex K :
>
>>
>>
>> On Mon, Oct 12, 2020, 19:24 Strahil Nikolov 
>> wrote:
>>
>>> Hi Alex,
>>>
>>> I can share that oVirt is using Gluster as a HCI solution and many
>>> people are hosting DBs in their Virtual Machines.Yet, oVirt bypasses any
>>> file system caches and uses Direct I/O in order to ensure consistency.
>>>
>>> As you will be using pacemaker, drbd is a viable solution that can be
>>> controlled easily.
>>>
>> Thank you Strahil. I am using ovirt with glusterfs successfully for the
>> last 5 years and I'm very happy about it. Though the vms gluster volume has
>> sharding enabled by default and I suspect this is different if you run DB
>> directly on top glusterfs. I assume there are optimizations one could apply
>> at gluster volumes (use direct io?, small file workload optimizations, etc)
>> and was hoping that there were success stories of DBs on top glusterfs.  I
>> might go with drbd as the latest version is much more scalable and
>> simplified.
>>
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> В понеделник, 12 октомври 2020 г., 12:12:18 Гринуич+3, Alex K <
>>> rightkickt...@gmail.com> написа:
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Oct 12, 2020 at 9:47 AM Diego Zuccato 
>>> wrote:
>>> > Il 10/10/20 16:53, Alex K ha scritto:
>>> >
>>> >> Reading from the docs i see that this is not recommended?
>>> > IIUC the risk of having partially-unsynced data is is too high.
>>> > DB replication is not easy to configure because it's hard to do well,
>>> > even active/passive.
>>> > But I can tell you that a 3-node mariadb (galera) cluster is not hard
>>> to
>>> > setup. Just follow one of the tutorials. It's nearly as easy as setting
>>> > up a replica3 gluster volume :)
>>> > And "guarantees" consinstency in the DB data.
>>> I see. Since I will not have only mariadb, then I have to setup the same
>>> replication for postgresql and later influxdb, which adds into the
>>> complexity.
>>> For cluster management I will be using pacemaker/corosync.
>>>
>>> Thanx for your feedback
>>>
>>> >
>>> > --
>>> > Diego Zuccato
>>> > DIFA - Dip. di Fisica e Astronomia
>>> > Servizi Informatici
>>> > Alma Mater Studiorum - Università di Bologna
>>> > V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
>>> > tel.: +39 051 20 95786
>>> >
>>> 
>>>
>>>
>>>
>>> Community Meeting Calendar:
>>>
>>> Schedule -
>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> Bridge: https://bluejeans.com/441850968
>>>
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>> 
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://bluejeans.com/441850968
>>
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs as databse store

2020-10-12 Thread Strahil Nikolov
Hi Alex,

I can share that oVirt is using Gluster as an HCI solution and many people are 
hosting DBs in their Virtual Machines. Yet, oVirt bypasses any file system 
caches and uses Direct I/O in order to ensure consistency.
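
As a quick illustrative check that direct I/O writes are accepted on a gluster fuse mount (the mount path is a placeholder, not from this thread):

dd if=/dev/zero of=/mnt/glustervol/odirect.test bs=1M count=100 oflag=direct
rm /mnt/glustervol/odirect.test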

As you will be using pacemaker, drbd is a viable solution that can be 
controlled easily.

Best Regards,
Strahil Nikolov







В понеделник, 12 октомври 2020 г., 12:12:18 Гринуич+3, Alex K 
 написа: 







On Mon, Oct 12, 2020 at 9:47 AM Diego Zuccato  wrote:
> Il 10/10/20 16:53, Alex K ha scritto:
> 
>> Reading from the docs i see that this is not recommended?
> IIUC the risk of having partially-unsynced data is is too high.
> DB replication is not easy to configure because it's hard to do well,
> even active/passive.
> But I can tell you that a 3-node mariadb (galera) cluster is not hard to
> setup. Just follow one of the tutorials. It's nearly as easy as setting
> up a replica3 gluster volume :)
> And "guarantees" consinstency in the DB data.
I see. Since I will not have only mariadb, then I have to setup the same 
replication for postgresql and later influxdb, which adds into the complexity. 
For cluster management I will be using pacemaker/corosync. 

Thanx for your feedback

>  
> -- 
> Diego Zuccato
> DIFA - Dip. di Fisica e Astronomia
> Servizi Informatici
> Alma Mater Studiorum - Università di Bologna
> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> tel.: +39 051 20 95786
> 




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs as databse store

2020-10-12 Thread Olaf Buitelaar
Hi Alex,

I've been running databases both directly and indirectly through qemu
image VMs (managed by oVirt), and since the recent gluster versions (6+,
haven't tested 7-8) I'm generally happy with the stability. I'm running
mostly write intensive workloads.
For mariadb, any gluster volume seems to work fine; I've run both sharded
and non-sharded volumes (using non-sharded ones for backup slaves to keep the
files whole).
For postgresql it's required to enable the volume
option performance.strict-o-direct: on, but both sharded and non-sharded volumes
work in that case too.
Nonetheless I would advise running any database with strict-o-direct on.
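
For reference, that option would be enabled roughly like this (the volume name is a placeholder); the db-workload profile quoted elsewhere in this thread sets it the same way, together with write-behind off:

gluster volume set <volname> performance.strict-o-direct on
gluster volume set <volname> performance.write-behind off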

Best Olaf


Op ma 12 okt. 2020 om 20:10 schreef Alex K :

>
>
> On Mon, Oct 12, 2020, 19:24 Strahil Nikolov  wrote:
>
>> Hi Alex,
>>
>> I can share that oVirt is using Gluster as a HCI solution and many people
>> are hosting DBs in their Virtual Machines.Yet, oVirt bypasses any file
>> system caches and uses Direct I/O in order to ensure consistency.
>>
>> As you will be using pacemaker, drbd is a viable solution that can be
>> controlled easily.
>>
> Thank you Strahil. I am using ovirt with glusterfs successfully for the
> last 5 years and I'm very happy about it. Though the vms gluster volume has
> sharding enabled by default and I suspect this is different if you run DB
> directly on top glusterfs. I assume there are optimizations one could apply
> at gluster volumes (use direct io?, small file workload optimizations, etc)
> and was hoping that there were success stories of DBs on top glusterfs.  I
> might go with drbd as the latest version is much more scalable and
> simplified.
>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>>
>> В понеделник, 12 октомври 2020 г., 12:12:18 Гринуич+3, Alex K <
>> rightkickt...@gmail.com> написа:
>>
>>
>>
>>
>>
>>
>>
>> On Mon, Oct 12, 2020 at 9:47 AM Diego Zuccato 
>> wrote:
>> > Il 10/10/20 16:53, Alex K ha scritto:
>> >
>> >> Reading from the docs i see that this is not recommended?
>> > IIUC the risk of having partially-unsynced data is is too high.
>> > DB replication is not easy to configure because it's hard to do well,
>> > even active/passive.
>> > But I can tell you that a 3-node mariadb (galera) cluster is not hard to
>> > setup. Just follow one of the tutorials. It's nearly as easy as setting
>> > up a replica3 gluster volume :)
>> > And "guarantees" consinstency in the DB data.
>> I see. Since I will not have only mariadb, then I have to setup the same
>> replication for postgresql and later influxdb, which adds into the
>> complexity.
>> For cluster management I will be using pacemaker/corosync.
>>
>> Thanx for your feedback
>>
>> >
>> > --
>> > Diego Zuccato
>> > DIFA - Dip. di Fisica e Astronomia
>> > Servizi Informatici
>> > Alma Mater Studiorum - Università di Bologna
>> > V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
>> > tel.: +39 051 20 95786
>> >
>> 
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://bluejeans.com/441850968
>>
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs as databse store

2020-10-12 Thread Alex K
On Mon, Oct 12, 2020, 19:24 Strahil Nikolov  wrote:

> Hi Alex,
>
> I can share that oVirt is using Gluster as a HCI solution and many people
> are hosting DBs in their Virtual Machines.Yet, oVirt bypasses any file
> system caches and uses Direct I/O in order to ensure consistency.
>
> As you will be using pacemaker, drbd is a viable solution that can be
> controlled easily.
>
Thank you Strahil. I have been using ovirt with glusterfs successfully for the
last 5 years and I'm very happy about it. However, the vms gluster volume has
sharding enabled by default, and I suspect this is different if you run a DB
directly on top of glusterfs. I assume there are optimizations one could apply
to gluster volumes (use direct io?, small-file workload optimizations, etc.)
and was hoping that there were success stories of DBs on top of glusterfs. I
might go with drbd, as its latest version is much more scalable and
simplified.

>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
>
> В понеделник, 12 октомври 2020 г., 12:12:18 Гринуич+3, Alex K <
> rightkickt...@gmail.com> написа:
>
>
>
>
>
>
>
> On Mon, Oct 12, 2020 at 9:47 AM Diego Zuccato 
> wrote:
> > Il 10/10/20 16:53, Alex K ha scritto:
> >
> >> Reading from the docs i see that this is not recommended?
> > IIUC the risk of having partially-unsynced data is is too high.
> > DB replication is not easy to configure because it's hard to do well,
> > even active/passive.
> > But I can tell you that a 3-node mariadb (galera) cluster is not hard to
> > setup. Just follow one of the tutorials. It's nearly as easy as setting
> > up a replica3 gluster volume :)
> > And "guarantees" consinstency in the DB data.
> I see. Since I will not have only mariadb, then I have to setup the same
> replication for postgresql and later influxdb, which adds into the
> complexity.
> For cluster management I will be using pacemaker/corosync.
>
> Thanx for your feedback
>
> >
> > --
> > Diego Zuccato
> > DIFA - Dip. di Fisica e Astronomia
> > Servizi Informatici
> > Alma Mater Studiorum - Università di Bologna
> > V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> > tel.: +39 051 20 95786
> >
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Glusterfs as databse store

2020-10-12 Thread Alex K
On Mon, Oct 12, 2020 at 9:47 AM Diego Zuccato 
wrote:

> Il 10/10/20 16:53, Alex K ha scritto:
>
> > Reading from the docs i see that this is not recommended?
> IIUC the risk of having partially-unsynced data is is too high.
> DB replication is not easy to configure because it's hard to do well,
> even active/passive.
> But I can tell you that a 3-node mariadb (galera) cluster is not hard to
> setup. Just follow one of the tutorials. It's nearly as easy as setting
> up a replica3 gluster volume :)
> And "guarantees" consinstency in the DB data.
>
I see. Since I will not have only mariadb, I would have to set up the same
replication for postgresql and later influxdb, which adds to the
complexity.
For cluster management I will be using pacemaker/corosync.

Thanx for your feedback


> --
> Diego Zuccato
> DIFA - Dip. di Fisica e Astronomia
> Servizi Informatici
> Alma Mater Studiorum - Università di Bologna
> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> tel.: +39 051 20 95786
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS performance for big files...

2020-08-18 Thread Strahil Nikolov
There is a 'virt' group optimized for virtual workloads.

Usually I recommend starting from the ground up in order to optimize on all
levels:

- I/O scheduler of the bricks (either (mq-)deadline or noop/none)
- CPU cstates
- Tuned profile (swappiness, dirty settings)
- MTU of the gluster network, the bigger the better
- Gluster tunables  (virt group is a good  start)
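
As a rough sketch of the first and last items (the device name, tuned profile and volume name below are placeholders, not recommendations from this mail):

cat /sys/block/sdb/queue/scheduler                 # check the current I/O scheduler
echo mq-deadline > /sys/block/sdb/queue/scheduler  # set it (multi-queue kernels)
tuned-adm profile throughput-performance           # example tuned profile
gluster volume set <volname> group virt            # apply the 'virt' option group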


If your gluster nodes are actually in the cloud, it is recommended (at least 
for AWS) to use a stripe over 8 virtual disks for each brick.

Keep in mind that the shard size on RH Gluster Storage is 512MB, while the 
default on the community edition is 64MB.
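
For completeness, the shard size is a per-volume option (shown here with the community default). As far as I know it only affects files created after the change, so it is best decided before data is written:

gluster volume set <volname> features.shard-block-size 64MB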

Best Regards,
Strahil Nikolov

На 18 август 2020 г. 16:47:01 GMT+03:00, Gilberto Nunes 
 написа:
>>> What's your workload?
>I have 6 KVM VMs which have Windows and Linux installed on it.
>
>>> Read?
>>> Write?
>iostat (I am using sdc as the main storage)
>avg-cpu:  %user   %nice  %system  %iowait  %steal   %idle
>           9.15    0.00     1.25     1.38    0.00   88.22
>
>Device   r/s    w/s    rkB/s   wkB/s   rrqm/s   wrqm/s   %rrqm   %wrqm
>         r_await  w_await  aqu-sz  rareq-sz  wareq-sz  svctm   %util
>sdc      0.00   1.00   0.00    1.50    0.00     0.00     0.00    0.00
>         0.00     0.00     0.00    0.00      1.50
>
>
>>> sequential? random?
>sequential
>>> many files?
>6 files of 500G, 200G, 200G, 250G, 200G and 100G each.
>>> With more bricks and nodes, you should probably use sharding.
>For now I have only two bricks/nodes. Plans for more are out of the
>question!
>
>What are your expectations, btw?
>
>I ran many environments with Proxmox Virtual Environment, which use
>QEMU
>(not virt) and LXC...But I use majority KVM (QEMU) virtual machines.
>My goal is to use glusterfs since I think it's more resource demanding
>such
>as memory and cpu and nic, when compared to ZFS or CEPH.
>
>
>---
>Gilberto Nunes Ferreira
>
>(47) 3025-5907
>(47) 99676-7530 - Whatsapp / Telegram
>
>Skype: gilberto.nunes36
>
>
>
>
>
>Em ter., 18 de ago. de 2020 às 10:29, sankarshan <
>sankarshan.mukhopadh...@gmail.com> escreveu:
>
>> On Tue, 18 Aug 2020 at 18:50, Yaniv Kaul  wrote:
>> >
>> >
>> >
>> > On Tue, Aug 18, 2020 at 3:57 PM Gilberto Nunes <
>> gilberto.nune...@gmail.com> wrote:
>> >>
>> >> Hi friends...
>> >>
>> >> I have a 2-nodes GlusterFS, with has the follow configuration:
>> >> gluster vol info
>> >>
>>
>> I'd be interested in the chosen configuration for this deployment -
>> the 2 node set up. Was there a specific requirement which led to
>this?
>>
>> >> Volume Name: VMS
>> >> Type: Replicate
>> >> Volume ID: a4ec9cfb-1bba-405c-b249-8bd5467e0b91
>> >> Status: Started
>> >> Snapshot Count: 0
>> >> Number of Bricks: 1 x 2 = 2
>> >> Transport-type: tcp
>> >> Bricks:
>> >> Brick1: server02:/DATA/vms
>> >> Brick2: server01:/DATA/vms
>> >> Options Reconfigured:
>> >> performance.read-ahead: off
>> >> performance.io-cache: on
>> >> performance.cache-refresh-timeout: 1
>> >> performance.cache-size: 1073741824
>> >> performance.io-thread-count: 64
>> >> performance.write-behind-window-size: 64MB
>> >> cluster.granular-entry-heal: enable
>> >> cluster.self-heal-daemon: enable
>> >> performance.client-io-threads: on
>> >> cluster.data-self-heal-algorithm: full
>> >> cluster.favorite-child-policy: mtime
>> >> network.ping-timeout: 2
>> >> cluster.quorum-count: 1
>> >> cluster.quorum-reads: false
>> >> cluster.heal-timeout: 20
>> >> storage.fips-mode-rchecksum: on
>> >> transport.address-family: inet
>> >> nfs.disable: on
>> >>
>> >> HDDs are SSD and SAS
>> >> Network connections between the servers are dedicated 1GB (no
>switch!).
>> >
>> >
>> > You can't get good performance on 1Gb.
>> >>
>> >> Files are 500G 200G 200G 250G 200G 100G size each.
>> >>
>> >> Performance so far so good is ok...
>> >
>> >
>> > What's your workload? Read? Write? sequential? random? many files?
>> > With more bricks and nodes, you should probably use sharding.
>> >
>> > What are your expectations, btw?
>> > Y.
>> >
>> >>
>> >> Any other advice which could point me, let me know!
>> >>
>> >> Thanks
>> >>
>> >>
>> >>
>> >> ---
>> >> Gilberto Nunes Ferreira
>> >>
>> >> 
>> >>
>> >>
>> >>
>> >> Community Meeting Calendar:
>> >>
>> >> Schedule -
>> >> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> >> Bridge: https://bluejeans.com/441850968
>> >>
>> >> Gluster-users mailing list
>> >> Gluster-users@gluster.org
>> >> https://lists.gluster.org/mailman/listinfo/gluster-users
>> >
>> > 
>> >
>> >
>> >
>> > Community Meeting Calendar:
>> >
>> > Schedule -
>> > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> > Bridge: https://bluejeans.com/441850968
>> >
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org
>> > https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>> --
>> sankarshan mukhopadhyay
>> 
>>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Re: [Gluster-users] GlusterFS performance for big files...

2020-08-18 Thread Gilberto Nunes
>> What's your workload?
I have 6 KVM VMs which have Windows and Linux installed on them.

>> Read?
>> Write?
iostat (I am using sdc as the main storage)
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           9.15    0.00    1.25    1.38    0.00   88.22

Device    r/s    w/s  rkB/s  wkB/s  rrqm/s  wrqm/s  %rrqm  %wrqm  r_await  w_await  aqu-sz  rareq-sz  wareq-sz  svctm  %util
sdc      0.00   1.00   0.00   1.50    0.00    0.00   0.00   0.00     0.00     0.00    0.00      0.00      1.50


>> sequential? random?
sequential
>> many files?
6 files, of 500G, 200G, 200G, 250G, 200G and 100G in size.
With more bricks and nodes, you should probably use sharding.
For now I have only two bricks/nodes. A plan for more is out of the
question right now!

What are your expectations, btw?

I ran many environments with Proxmox Virtual Environment, which uses QEMU
(not virt) and LXC... but I mostly use KVM (QEMU) virtual machines.
My goal is to use glusterfs, since I think it is more demanding of resources
such as memory, cpu and nic when compared to ZFS or CEPH.


---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em ter., 18 de ago. de 2020 às 10:29, sankarshan <
sankarshan.mukhopadh...@gmail.com> escreveu:

> On Tue, 18 Aug 2020 at 18:50, Yaniv Kaul  wrote:
> >
> >
> >
> > On Tue, Aug 18, 2020 at 3:57 PM Gilberto Nunes <
> gilberto.nune...@gmail.com> wrote:
> >>
> >> Hi friends...
> >>
> >> I have a 2-nodes GlusterFS, with has the follow configuration:
> >> gluster vol info
> >>
>
> I'd be interested in the chosen configuration for this deployment -
> the 2 node set up. Was there a specific requirement which led to this?
>
> >> Volume Name: VMS
> >> Type: Replicate
> >> Volume ID: a4ec9cfb-1bba-405c-b249-8bd5467e0b91
> >> Status: Started
> >> Snapshot Count: 0
> >> Number of Bricks: 1 x 2 = 2
> >> Transport-type: tcp
> >> Bricks:
> >> Brick1: server02:/DATA/vms
> >> Brick2: server01:/DATA/vms
> >> Options Reconfigured:
> >> performance.read-ahead: off
> >> performance.io-cache: on
> >> performance.cache-refresh-timeout: 1
> >> performance.cache-size: 1073741824
> >> performance.io-thread-count: 64
> >> performance.write-behind-window-size: 64MB
> >> cluster.granular-entry-heal: enable
> >> cluster.self-heal-daemon: enable
> >> performance.client-io-threads: on
> >> cluster.data-self-heal-algorithm: full
> >> cluster.favorite-child-policy: mtime
> >> network.ping-timeout: 2
> >> cluster.quorum-count: 1
> >> cluster.quorum-reads: false
> >> cluster.heal-timeout: 20
> >> storage.fips-mode-rchecksum: on
> >> transport.address-family: inet
> >> nfs.disable: on
> >>
> >> HDDs are SSD and SAS
> >> Network connections between the servers are dedicated 1GB (no switch!).
> >
> >
> > You can't get good performance on 1Gb.
> >>
> >> Files are 500G 200G 200G 250G 200G 100G size each.
> >>
> >> Performance so far so good is ok...
> >
> >
> > What's your workload? Read? Write? sequential? random? many files?
> > With more bricks and nodes, you should probably use sharding.
> >
> > What are your expectations, btw?
> > Y.
> >
> >>
> >> Any other advice which could point me, let me know!
> >>
> >> Thanks
> >>
> >>
> >>
> >> ---
> >> Gilberto Nunes Ferreira
> >>
> >> 
> >>
> >>
> >>
> >> Community Meeting Calendar:
> >>
> >> Schedule -
> >> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> >> Bridge: https://bluejeans.com/441850968
> >>
> >> Gluster-users mailing list
> >> Gluster-users@gluster.org
> >> https://lists.gluster.org/mailman/listinfo/gluster-users
> >
> > 
> >
> >
> >
> > Community Meeting Calendar:
> >
> > Schedule -
> > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> > Bridge: https://bluejeans.com/441850968
> >
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> sankarshan mukhopadhyay
> 
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS performance for big files...

2020-08-18 Thread sankarshan
On Tue, 18 Aug 2020 at 18:50, Yaniv Kaul  wrote:
>
>
>
> On Tue, Aug 18, 2020 at 3:57 PM Gilberto Nunes  
> wrote:
>>
>> Hi friends...
>>
>> I have a 2-nodes GlusterFS, with has the follow configuration:
>> gluster vol info
>>

I'd be interested in the chosen configuration for this deployment -
the 2 node set up. Was there a specific requirement which led to this?

>> Volume Name: VMS
>> Type: Replicate
>> Volume ID: a4ec9cfb-1bba-405c-b249-8bd5467e0b91
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: server02:/DATA/vms
>> Brick2: server01:/DATA/vms
>> Options Reconfigured:
>> performance.read-ahead: off
>> performance.io-cache: on
>> performance.cache-refresh-timeout: 1
>> performance.cache-size: 1073741824
>> performance.io-thread-count: 64
>> performance.write-behind-window-size: 64MB
>> cluster.granular-entry-heal: enable
>> cluster.self-heal-daemon: enable
>> performance.client-io-threads: on
>> cluster.data-self-heal-algorithm: full
>> cluster.favorite-child-policy: mtime
>> network.ping-timeout: 2
>> cluster.quorum-count: 1
>> cluster.quorum-reads: false
>> cluster.heal-timeout: 20
>> storage.fips-mode-rchecksum: on
>> transport.address-family: inet
>> nfs.disable: on
>>
>> HDDs are SSD and SAS
>> Network connections between the servers are dedicated 1GB (no switch!).
>
>
> You can't get good performance on 1Gb.
>>
>> Files are 500G 200G 200G 250G 200G 100G size each.
>>
>> Performance so far so good is ok...
>
>
> What's your workload? Read? Write? sequential? random? many files?
> With more bricks and nodes, you should probably use sharding.
>
> What are your expectations, btw?
> Y.
>
>>
>> Any other advice which could point me, let me know!
>>
>> Thanks
>>
>>
>>
>> ---
>> Gilberto Nunes Ferreira
>>
>> 
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://bluejeans.com/441850968
>>
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users



-- 
sankarshan mukhopadhyay





Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS performance for big files...

2020-08-18 Thread Yaniv Kaul
On Tue, Aug 18, 2020 at 3:57 PM Gilberto Nunes 
wrote:

> Hi friends...
>
> I have a 2-nodes GlusterFS, with has the follow configuration:
> gluster vol info
>
> Volume Name: VMS
> Type: Replicate
> Volume ID: a4ec9cfb-1bba-405c-b249-8bd5467e0b91
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: server02:/DATA/vms
> Brick2: server01:/DATA/vms
> Options Reconfigured:
> performance.read-ahead: off
> performance.io-cache: on
> performance.cache-refresh-timeout: 1
> performance.cache-size: 1073741824
> performance.io-thread-count: 64
> performance.write-behind-window-size: 64MB
> cluster.granular-entry-heal: enable
> cluster.self-heal-daemon: enable
> performance.client-io-threads: on
> cluster.data-self-heal-algorithm: full
> cluster.favorite-child-policy: mtime
> network.ping-timeout: 2
> cluster.quorum-count: 1
> cluster.quorum-reads: false
> cluster.heal-timeout: 20
> storage.fips-mode-rchecksum: on
> transport.address-family: inet
> nfs.disable: on
>
> HDDs are SSD and SAS
> Network connections between the servers are dedicated 1GB (no switch!).
>

You can't get good performance on 1Gb.

> Files are 500G 200G 200G 250G 200G 100G size each.
>
> Performance so far so good is ok...
>

What's your workload? Read? Write? sequential? random? many files?
With more bricks and nodes, you should probably use sharding.

What are your expectations, btw?
Y.
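
(Purely as a sketch, growing this setup later could look roughly like the
following; server03 is a hypothetical third node, and the volume name comes
from the output above:

  gluster peer probe server03
  gluster volume add-brick VMS replica 3 server03:/DATA/vms   # grow from replica 2 to replica 3
  gluster volume set VMS features.shard on                    # shard newly written large files
)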


> Any other advice which could point me, let me know!
>
> Thanks
>
>
>
> ---
> Gilberto Nunes Ferreira
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-31 Thread Gionatan Danti

Il 2020-07-30 15:08 Gilberto Nunes ha scritto:

I meant, if you power off the server, pull off 1 disk, and then power
on we get system errors


Hi, you are probably hitting some variant of this bug, rather than seeing
LVM itself crash: https://bugzilla.redhat.com/show_bug.cgi?id=1701504


If not, please write about your issue on the linux lvm mailing list.
Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Strahil Nikolov
Anyway, in raid1+spare on a replica 2 volume you will be using 6 disks in
total.
It would be more optimal to use all those disks in 'replica 3' or 'replica 3
arbiter 1' (for the arbiter it would be optimal to have a small ssd, keeping
the data disks for the actual data).
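
A minimal sketch of such a layout, reusing the brick paths from this thread
(server03 and its arbiter path are hypothetical; the arbiter brick holds only
metadata, so a small SSD is enough):

  gluster volume create VMS replica 3 arbiter 1 \
    server01:/DATA/vms server02:/DATA/vms server03:/ARBITER/vms
  gluster volume start VMS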

Best Regards,
Strahil Nikolov

На 30 юли 2020 г. 19:32:32 GMT+03:00, Strahil Nikolov  
написа:
>When using legacy,  you need to  prepare the MBR on each disk, so the
>BIOS will be able to boot from it and load grub.
>
>On UEFI, you will need 2 entries each pointing to the other disk. There
>is a thread for each approach in the CentOS7 forums and the peocedure
>is almost the same on all linux .
>
>Best Regards,
>Strahil Nikolov
>
>На 30 юли 2020 г. 18:17:49 GMT+03:00, Gilberto Nunes
> написа:
>>I am using Legacy... But I am using Virtualbox in my labs... Perhaps
>>this
>>is the problem...
>>I don't do that in real hardware. But with spare disk (2 + 1) in mdadm
>>it's
>>fine.
>>
>>---
>>Gilberto Nunes Ferreira
>>
>>(47) 3025-5907
>>(47) 99676-7530 - Whatsapp / Telegram
>>
>>Skype: gilberto.nunes36
>>
>>
>>
>>
>>
>>Em qui., 30 de jul. de 2020 às 12:16, Strahil Nikolov
>>
>>escreveu:
>>
>>> The crash  with failed  mdadm disk is not normal. We need to check
>it
>>out.
>>>
>>> Are  you using  Legacy or  UEFI  ?
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> На 30 юли 2020 г. 16:25:47 GMT+03:00, Gilberto Nunes <
>>> gilberto.nune...@gmail.com> написа:
>>> >But I will give it a try in the lvm mirroring process Thanks
>>> >---
>>> >Gilberto Nunes Ferreira
>>> >
>>> >(47) 3025-5907
>>> >(47) 99676-7530 - Whatsapp / Telegram
>>> >
>>> >Skype: gilberto.nunes36
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >Em qui., 30 de jul. de 2020 às 10:23, Gilberto Nunes <
>>> >gilberto.nune...@gmail.com> escreveu:
>>> >
>>> >> Well, actually I am not concerned ONLY about mirroring the data,
>>but
>>> >with
>>> >> reliability on it.
>>> >> Here during my lab tests I notice if I pull off a disk and then
>>power
>>> >on
>>> >> the server, the LVM crashes...
>>> >> On the other hand, with ZFS even in degraded state the system
>>booted
>>> >> normally.
>>> >> The same happens with mdadm. That's the point.
>>> >>
>>> >> ---
>>> >> Gilberto Nunes Ferreira
>>> >>
>>> >> (47) 3025-5907
>>> >> (47) 99676-7530 - Whatsapp / Telegram
>>> >>
>>> >> Skype: gilberto.nunes36
>>> >>
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> Em qui., 30 de jul. de 2020 às 10:18, Alvin Starr
>>
>>> >> escreveu:
>>> >>
>>> >>> LVM supports mirroring or raid1.
>>> >>> So to create a raid1 you would do something like "lvcreate -n
>>> >raidvol  -L
>>> >>> 1T --mirrors 1  /dev/bigvg"
>>> >>> You can add a mirror to an existing logical volume using
>>lvconvert
>>> >with
>>> >>> something like "lvconvert --mirrors +1 bigvg/unmirroredlv"
>>> >>>
>>> >>> But other than simple mirroring you will need mdadm.
>>> >>>
>>> >>>
>>> >>>
>>> >>> On 7/30/20 8:39 AM, Gilberto Nunes wrote:
>>> >>>
>>> >>> Doing some research and the chvg command, which is responsable
>>for
>>> >create
>>> >>> hotspare disks in LVM is available only in AIX!
>>> >>> I have a Debian Buster box and there is no such command chvg.
>>> >Correct me
>>> >>> if I am wrong
>>> >>> ---
>>> >>> Gilberto Nunes Ferreira
>>> >>>
>>> >>> (47) 3025-5907
>>> >>> (47) 99676-7530 - Whatsapp / Telegram
>>> >>>
>>> >>> Skype: gilberto.nunes36
>>> >>>
>>> >>>
>>> >>>
>>> >>>
>>> >>>
>>> >>> Em qui., 30 de jul. de 2020 às 02:05, Strahil Nikolov <
>>> >>> hunter86...@yahoo.com> escreveu:
>>> >>>
>>>  LVM allows creating/converting striped/mirrored LVs without any
>>> >dowtime
>>>  and it's using the md module.
>>> 
>>>  Best Regards,
>>>  Strahil Nikolov
>>> 
>>>  На 28 юли 2020 г. 22:43:39 GMT+03:00, Gilberto Nunes <
>>>  gilberto.nune...@gmail.com> написа:
>>>  >Hi there
>>>  >
>>>  >'till now, I am using glusterfs over XFS and so far so good.
>>>  >Using LVM too
>>>  >Unfortunately, there is no way with XFS to merge two or more
>>HDD,
>>> >in
>>>  >order
>>>  >to use more than one HDD, like RAID1 or RAID5.
>>>  >My primary goal is to use two server with GlusterFS on top of
>>> >multiples
>>>  >HDDs for qemu images.
>>>  >I have think about BTRFS or mdadm.
>>>  >Anybody has some experience on this?
>>>  >
>>>  >Thanks a lot
>>>  >
>>>  >---
>>>  >Gilberto Nunes Ferreira
>>> 
>>> >>>
>>> >>> 
>>> >>>
>>> >>>
>>> >>>
>>> >>> Community Meeting Calendar:
>>> >>>
>>> >>> Schedule -
>>> >>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> >>> Bridge: https://bluejeans.com/441850968
>>> >>>
>>> >>> Gluster-users mailing
>>> >listGluster-users@gluster.orghttps://
>>> lists.gluster.org/mailman/listinfo/gluster-users
>>> >>>
>>> >>>
>>> >>> --
>>> >>> Alvin Starr   ||   land:  (647)478-6285
>>> >>> Netvel Inc.   ||   Cell:
>>> >(416)806-0133al...@netvel.net  ||
>>> >>>
>>> >>>
>>> >>> 
>>> >>>
>>> >>

Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Strahil Nikolov
When using legacy, you need to prepare the MBR on each disk, so the BIOS will
be able to boot from it and load grub.

On UEFI, you will need 2 entries, each pointing to the other disk. There is a
thread for each approach in the CentOS7 forums and the procedure is almost the
same on all Linux distributions.
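
A rough sketch for a Debian-family system (device names and the boot entry
label are examples only):

  # Legacy/BIOS: install the boot loader on both members of the mirror
  grub-install /dev/sda
  grub-install /dev/sdb

  # UEFI: add a second boot entry pointing at the other disk's ESP
  efibootmgr -c -d /dev/sdb -p 1 -L "debian (disk 2)" -l '\EFI\debian\grubx64.efi'
  efibootmgr -v    # verify that both entries are present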

Best Regards,
Strahil Nikolov

На 30 юли 2020 г. 18:17:49 GMT+03:00, Gilberto Nunes 
 написа:
>I am using Legacy... But I am using Virtualbox in my labs... Perhaps
>this
>is the problem...
>I don't do that in real hardware. But with spare disk (2 + 1) in mdadm
>it's
>fine.
>
>---
>Gilberto Nunes Ferreira
>
>(47) 3025-5907
>(47) 99676-7530 - Whatsapp / Telegram
>
>Skype: gilberto.nunes36
>
>
>
>
>
>Em qui., 30 de jul. de 2020 às 12:16, Strahil Nikolov
>
>escreveu:
>
>> The crash  with failed  mdadm disk is not normal. We need to check it
>out.
>>
>> Are  you using  Legacy or  UEFI  ?
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> На 30 юли 2020 г. 16:25:47 GMT+03:00, Gilberto Nunes <
>> gilberto.nune...@gmail.com> написа:
>> >But I will give it a try in the lvm mirroring process Thanks
>> >---
>> >Gilberto Nunes Ferreira
>> >
>> >(47) 3025-5907
>> >(47) 99676-7530 - Whatsapp / Telegram
>> >
>> >Skype: gilberto.nunes36
>> >
>> >
>> >
>> >
>> >
>> >Em qui., 30 de jul. de 2020 às 10:23, Gilberto Nunes <
>> >gilberto.nune...@gmail.com> escreveu:
>> >
>> >> Well, actually I am not concerned ONLY about mirroring the data,
>but
>> >with
>> >> reliability on it.
>> >> Here during my lab tests I notice if I pull off a disk and then
>power
>> >on
>> >> the server, the LVM crashes...
>> >> On the other hand, with ZFS even in degraded state the system
>booted
>> >> normally.
>> >> The same happens with mdadm. That's the point.
>> >>
>> >> ---
>> >> Gilberto Nunes Ferreira
>> >>
>> >> (47) 3025-5907
>> >> (47) 99676-7530 - Whatsapp / Telegram
>> >>
>> >> Skype: gilberto.nunes36
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> Em qui., 30 de jul. de 2020 às 10:18, Alvin Starr
>
>> >> escreveu:
>> >>
>> >>> LVM supports mirroring or raid1.
>> >>> So to create a raid1 you would do something like "lvcreate -n
>> >raidvol  -L
>> >>> 1T --mirrors 1  /dev/bigvg"
>> >>> You can add a mirror to an existing logical volume using
>lvconvert
>> >with
>> >>> something like "lvconvert --mirrors +1 bigvg/unmirroredlv"
>> >>>
>> >>> But other than simple mirroring you will need mdadm.
>> >>>
>> >>>
>> >>>
>> >>> On 7/30/20 8:39 AM, Gilberto Nunes wrote:
>> >>>
>> >>> Doing some research and the chvg command, which is responsable
>for
>> >create
>> >>> hotspare disks in LVM is available only in AIX!
>> >>> I have a Debian Buster box and there is no such command chvg.
>> >Correct me
>> >>> if I am wrong
>> >>> ---
>> >>> Gilberto Nunes Ferreira
>> >>>
>> >>> (47) 3025-5907
>> >>> (47) 99676-7530 - Whatsapp / Telegram
>> >>>
>> >>> Skype: gilberto.nunes36
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> Em qui., 30 de jul. de 2020 às 02:05, Strahil Nikolov <
>> >>> hunter86...@yahoo.com> escreveu:
>> >>>
>>  LVM allows creating/converting striped/mirrored LVs without any
>> >dowtime
>>  and it's using the md module.
>> 
>>  Best Regards,
>>  Strahil Nikolov
>> 
>>  На 28 юли 2020 г. 22:43:39 GMT+03:00, Gilberto Nunes <
>>  gilberto.nune...@gmail.com> написа:
>>  >Hi there
>>  >
>>  >'till now, I am using glusterfs over XFS and so far so good.
>>  >Using LVM too
>>  >Unfortunately, there is no way with XFS to merge two or more
>HDD,
>> >in
>>  >order
>>  >to use more than one HDD, like RAID1 or RAID5.
>>  >My primary goal is to use two server with GlusterFS on top of
>> >multiples
>>  >HDDs for qemu images.
>>  >I have think about BTRFS or mdadm.
>>  >Anybody has some experience on this?
>>  >
>>  >Thanks a lot
>>  >
>>  >---
>>  >Gilberto Nunes Ferreira
>> 
>> >>>
>> >>> 
>> >>>
>> >>>
>> >>>
>> >>> Community Meeting Calendar:
>> >>>
>> >>> Schedule -
>> >>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> >>> Bridge: https://bluejeans.com/441850968
>> >>>
>> >>> Gluster-users mailing
>> >listGluster-users@gluster.orghttps://
>> lists.gluster.org/mailman/listinfo/gluster-users
>> >>>
>> >>>
>> >>> --
>> >>> Alvin Starr   ||   land:  (647)478-6285
>> >>> Netvel Inc.   ||   Cell:
>> >(416)806-0133al...@netvel.net  ||
>> >>>
>> >>>
>> >>> 
>> >>>
>> >>>
>> >>>
>> >>> Community Meeting Calendar:
>> >>>
>> >>> Schedule -
>> >>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> >>> Bridge: https://bluejeans.com/441850968
>> >>>
>> >>> Gluster-users mailing list
>> >>> Gluster-users@gluster.org
>> >>> https://lists.gluster.org/mailman/listinfo/gluster-users
>> >>>
>> >>
>>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org

Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Strahil Nikolov
The crash with a failed mdadm disk is not normal. We need to check it out.

Are you using Legacy or UEFI?

Best Regards,
Strahil Nikolov

На 30 юли 2020 г. 16:25:47 GMT+03:00, Gilberto Nunes 
 написа:
>But I will give it a try in the lvm mirroring process Thanks
>---
>Gilberto Nunes Ferreira
>
>(47) 3025-5907
>(47) 99676-7530 - Whatsapp / Telegram
>
>Skype: gilberto.nunes36
>
>
>
>
>
>Em qui., 30 de jul. de 2020 às 10:23, Gilberto Nunes <
>gilberto.nune...@gmail.com> escreveu:
>
>> Well, actually I am not concerned ONLY about mirroring the data, but
>with
>> reliability on it.
>> Here during my lab tests I notice if I pull off a disk and then power
>on
>> the server, the LVM crashes...
>> On the other hand, with ZFS even in degraded state the system booted
>> normally.
>> The same happens with mdadm. That's the point.
>>
>> ---
>> Gilberto Nunes Ferreira
>>
>> (47) 3025-5907
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>> Skype: gilberto.nunes36
>>
>>
>>
>>
>>
>> Em qui., 30 de jul. de 2020 às 10:18, Alvin Starr 
>> escreveu:
>>
>>> LVM supports mirroring or raid1.
>>> So to create a raid1 you would do something like "lvcreate -n
>raidvol  -L
>>> 1T --mirrors 1  /dev/bigvg"
>>> You can add a mirror to an existing logical volume using lvconvert 
>with
>>> something like "lvconvert --mirrors +1 bigvg/unmirroredlv"
>>>
>>> But other than simple mirroring you will need mdadm.
>>>
>>>
>>>
>>> On 7/30/20 8:39 AM, Gilberto Nunes wrote:
>>>
>>> Doing some research and the chvg command, which is responsable for
>create
>>> hotspare disks in LVM is available only in AIX!
>>> I have a Debian Buster box and there is no such command chvg.
>Correct me
>>> if I am wrong
>>> ---
>>> Gilberto Nunes Ferreira
>>>
>>> (47) 3025-5907
>>> (47) 99676-7530 - Whatsapp / Telegram
>>>
>>> Skype: gilberto.nunes36
>>>
>>>
>>>
>>>
>>>
>>> Em qui., 30 de jul. de 2020 às 02:05, Strahil Nikolov <
>>> hunter86...@yahoo.com> escreveu:
>>>
 LVM allows creating/converting striped/mirrored LVs without any
>dowtime
 and it's using the md module.

 Best Regards,
 Strahil Nikolov

 На 28 юли 2020 г. 22:43:39 GMT+03:00, Gilberto Nunes <
 gilberto.nune...@gmail.com> написа:
 >Hi there
 >
 >'till now, I am using glusterfs over XFS and so far so good.
 >Using LVM too
 >Unfortunately, there is no way with XFS to merge two or more HDD,
>in
 >order
 >to use more than one HDD, like RAID1 or RAID5.
 >My primary goal is to use two server with GlusterFS on top of
>multiples
 >HDDs for qemu images.
 >I have think about BTRFS or mdadm.
 >Anybody has some experience on this?
 >
 >Thanks a lot
 >
 >---
 >Gilberto Nunes Ferreira

>>>
>>> 
>>>
>>>
>>>
>>> Community Meeting Calendar:
>>>
>>> Schedule -
>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> Bridge: https://bluejeans.com/441850968
>>>
>>> Gluster-users mailing
>listGluster-users@gluster.orghttps://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>>
>>> --
>>> Alvin Starr   ||   land:  (647)478-6285
>>> Netvel Inc.   ||   Cell: 
>(416)806-0133al...@netvel.net  ||
>>>
>>>
>>> 
>>>
>>>
>>>
>>> Community Meeting Calendar:
>>>
>>> Schedule -
>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> Bridge: https://bluejeans.com/441850968
>>>
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Strahil Nikolov
If it crashes - that's a bug and worth checking. What OS do you use?

Best Regards,
Strahil Nikolov

На 30 юли 2020 г. 15:59:57 GMT+03:00, Gilberto Nunes 
 написа:
>Yes! But still with mdadm if you lose 1 disk and reboot the server, the
>system crashes.
>But with ZFS there's no crash.
>
>---
>Gilberto Nunes Ferreira
>
>(47) 3025-5907
>(47) 99676-7530 - Whatsapp / Telegram
>
>Skype: gilberto.nunes36
>
>
>
>
>
>Em qui., 30 de jul. de 2020 às 09:55, Strahil Nikolov
>
>escreveu:
>
>> I guess  there  is no  automatic  hot-spare  replacement in LVM, but
>> mdadm has  that  functionality.
>>
>> Best  Regards,
>> Strahil Nikolov
>>
>>
>> На 30 юли 2020 г. 15:39:18 GMT+03:00, Gilberto Nunes <
>> gilberto.nune...@gmail.com> написа:
>> >Doing some research and the chvg command, which is responsable for
>> >create
>> >hotspare disks in LVM is available only in AIX!
>> >I have a Debian Buster box and there is no such command chvg.
>Correct
>> >me if
>> >I am wrong
>> >---
>> >Gilberto Nunes Ferreira
>> >
>> >(47) 3025-5907
>> >(47) 99676-7530 - Whatsapp / Telegram
>> >
>> >Skype: gilberto.nunes36
>> >
>> >
>> >
>> >
>> >
>> >Em qui., 30 de jul. de 2020 às 02:05, Strahil Nikolov
>> >
>> >escreveu:
>> >
>> >> LVM allows creating/converting striped/mirrored LVs without any
>> >dowtime
>> >> and it's using the md module.
>> >>
>> >> Best Regards,
>> >> Strahil Nikolov
>> >>
>> >> На 28 юли 2020 г. 22:43:39 GMT+03:00, Gilberto Nunes <
>> >> gilberto.nune...@gmail.com> написа:
>> >> >Hi there
>> >> >
>> >> >'till now, I am using glusterfs over XFS and so far so good.
>> >> >Using LVM too
>> >> >Unfortunately, there is no way with XFS to merge two or more HDD,
>in
>> >> >order
>> >> >to use more than one HDD, like RAID1 or RAID5.
>> >> >My primary goal is to use two server with GlusterFS on top of
>> >multiples
>> >> >HDDs for qemu images.
>> >> >I have think about BTRFS or mdadm.
>> >> >Anybody has some experience on this?
>> >> >
>> >> >Thanks a lot
>> >> >
>> >> >---
>> >> >Gilberto Nunes Ferreira
>> >>
>>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Alvin Starr

LVM supports mirroring or raid1.
So to create a raid1 you would do something like "lvcreate -n raidvol  
-L 1T --mirrors 1  /dev/bigvg"
You can add a mirror to an existing logical volume using lvconvert with 
something like "lvconvert --mirrors +1 bigvg/unmirroredlv"


But other than simple mirroring you will need mdadm.
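
To complement the commands above, a hedged sketch of checking and repairing
such a mirrored LV after a disk failure (volume group and LV names reuse the
examples above):

  lvs -a -o +devices,raid_sync_action bigvg   # show which PVs back each leg and the sync state
  lvconvert --repair bigvg/raidvol            # rebuild the missing leg onto a replacement PV
  vgreduce --removemissing --force bigvg      # only if the failed PV is gone for good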



On 7/30/20 8:39 AM, Gilberto Nunes wrote:
Doing some research and the chvg command, which is responsable for 
create hotspare disks in LVM is available only in AIX!
I have a Debian Buster box and there is no such command chvg. Correct 
me if I am wrong

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 30 de jul. de 2020 às 02:05, Strahil Nikolov 
mailto:hunter86...@yahoo.com>> escreveu:


LVM allows creating/converting striped/mirrored LVs without any
dowtime and it's using the md module.

Best Regards,
Strahil Nikolov

На 28 юли 2020 г. 22:43:39 GMT+03:00, Gilberto Nunes
mailto:gilberto.nune...@gmail.com>>
написа:
>Hi there
>
>'till now, I am using glusterfs over XFS and so far so good.
>Using LVM too
>Unfortunately, there is no way with XFS to merge two or more HDD, in
>order
>to use more than one HDD, like RAID1 or RAID5.
>My primary goal is to use two server with GlusterFS on top of
multiples
>HDDs for qemu images.
>I have think about BTRFS or mdadm.
>Anybody has some experience on this?
>
>Thanks a lot
>
>---
>Gilberto Nunes Ferreira






Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


--
Alvin Starr   ||   land:  (647)478-6285
Netvel Inc.   ||   Cell:  (416)806-0133
al...@netvel.net  ||





Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Strahil Nikolov
I guess there is no automatic hot-spare replacement in LVM, but mdadm has
that functionality.
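
For comparison, a minimal mdadm sketch with a hot spare (device names are
examples only):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 \
        /dev/sdb /dev/sdc /dev/sdd      # sdd is the hot spare
  mdadm --detail /dev/md0               # the spare is pulled in automatically when a member fails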

Best  Regards,
Strahil Nikolov


На 30 юли 2020 г. 15:39:18 GMT+03:00, Gilberto Nunes 
 написа:
>Doing some research and the chvg command, which is responsable for
>create
>hotspare disks in LVM is available only in AIX!
>I have a Debian Buster box and there is no such command chvg. Correct
>me if
>I am wrong
>---
>Gilberto Nunes Ferreira
>
>(47) 3025-5907
>(47) 99676-7530 - Whatsapp / Telegram
>
>Skype: gilberto.nunes36
>
>
>
>
>
>Em qui., 30 de jul. de 2020 às 02:05, Strahil Nikolov
>
>escreveu:
>
>> LVM allows creating/converting striped/mirrored LVs without any
>dowtime
>> and it's using the md module.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> На 28 юли 2020 г. 22:43:39 GMT+03:00, Gilberto Nunes <
>> gilberto.nune...@gmail.com> написа:
>> >Hi there
>> >
>> >'till now, I am using glusterfs over XFS and so far so good.
>> >Using LVM too
>> >Unfortunately, there is no way with XFS to merge two or more HDD, in
>> >order
>> >to use more than one HDD, like RAID1 or RAID5.
>> >My primary goal is to use two server with GlusterFS on top of
>multiples
>> >HDDs for qemu images.
>> >I have think about BTRFS or mdadm.
>> >Anybody has some experience on this?
>> >
>> >Thanks a lot
>> >
>> >---
>> >Gilberto Nunes Ferreira
>>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Gilberto Nunes
I am dealing here with just 2 nodes and a bunch of disks on them... Just a
scenario for study, but realistic, since in many cases we face low budgets...
Anyway, thanks for the tips!



---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 30 de jul. de 2020 às 13:34, Strahil Nikolov 
escreveu:

> Anyway in raid1+spare  on a replica 2  volume you will be using 6 disks
> in total.
> It  will  be more optimal to get all those  disks  in 'replica 3' or
> 'replica 3  arbiter 1' (for the arbiter it would be optimal  to have a
> small ssd and the data disks for tge actual data).
>
> Best Regards,
> Strahil Nikolov
>
> На 30 юли 2020 г. 19:32:32 GMT+03:00, Strahil Nikolov <
> hunter86...@yahoo.com> написа:
> >When using legacy,  you need to  prepare the MBR on each disk, so the
> >BIOS will be able to boot from it and load grub.
> >
> >On UEFI, you will need 2 entries each pointing to the other disk. There
> >is a thread for each approach in the CentOS7 forums and the peocedure
> >is almost the same on all linux .
> >
> >Best Regards,
> >Strahil Nikolov
> >
> >На 30 юли 2020 г. 18:17:49 GMT+03:00, Gilberto Nunes
> > написа:
> >>I am using Legacy... But I am using Virtualbox in my labs... Perhaps
> >>this
> >>is the problem...
> >>I don't do that in real hardware. But with spare disk (2 + 1) in mdadm
> >>it's
> >>fine.
> >>
> >>---
> >>Gilberto Nunes Ferreira
> >>
> >>(47) 3025-5907
> >>(47) 99676-7530 - Whatsapp / Telegram
> >>
> >>Skype: gilberto.nunes36
> >>
> >>
> >>
> >>
> >>
> >>Em qui., 30 de jul. de 2020 às 12:16, Strahil Nikolov
> >>
> >>escreveu:
> >>
> >>> The crash  with failed  mdadm disk is not normal. We need to check
> >it
> >>out.
> >>>
> >>> Are  you using  Legacy or  UEFI  ?
> >>>
> >>> Best Regards,
> >>> Strahil Nikolov
> >>>
> >>> На 30 юли 2020 г. 16:25:47 GMT+03:00, Gilberto Nunes <
> >>> gilberto.nune...@gmail.com> написа:
> >>> >But I will give it a try in the lvm mirroring process Thanks
> >>> >---
> >>> >Gilberto Nunes Ferreira
> >>> >
> >>> >(47) 3025-5907
> >>> >(47) 99676-7530 - Whatsapp / Telegram
> >>> >
> >>> >Skype: gilberto.nunes36
> >>> >
> >>> >
> >>> >
> >>> >
> >>> >
> >>> >Em qui., 30 de jul. de 2020 às 10:23, Gilberto Nunes <
> >>> >gilberto.nune...@gmail.com> escreveu:
> >>> >
> >>> >> Well, actually I am not concerned ONLY about mirroring the data,
> >>but
> >>> >with
> >>> >> reliability on it.
> >>> >> Here during my lab tests I notice if I pull off a disk and then
> >>power
> >>> >on
> >>> >> the server, the LVM crashes...
> >>> >> On the other hand, with ZFS even in degraded state the system
> >>booted
> >>> >> normally.
> >>> >> The same happens with mdadm. That's the point.
> >>> >>
> >>> >> ---
> >>> >> Gilberto Nunes Ferreira
> >>> >>
> >>> >> (47) 3025-5907
> >>> >> (47) 99676-7530 - Whatsapp / Telegram
> >>> >>
> >>> >> Skype: gilberto.nunes36
> >>> >>
> >>> >>
> >>> >>
> >>> >>
> >>> >>
> >>> >> Em qui., 30 de jul. de 2020 às 10:18, Alvin Starr
> >>
> >>> >> escreveu:
> >>> >>
> >>> >>> LVM supports mirroring or raid1.
> >>> >>> So to create a raid1 you would do something like "lvcreate -n
> >>> >raidvol  -L
> >>> >>> 1T --mirrors 1  /dev/bigvg"
> >>> >>> You can add a mirror to an existing logical volume using
> >>lvconvert
> >>> >with
> >>> >>> something like "lvconvert --mirrors +1 bigvg/unmirroredlv"
> >>> >>>
> >>> >>> But other than simple mirroring you will need mdadm.
> >>> >>>
> >>> >>>
> >>> >>>
> >>> >>> On 7/30/20 8:39 AM, Gilberto Nunes wrote:
> >>> >>>
> >>> >>> Doing some research and the chvg command, which is responsable
> >>for
> >>> >create
> >>> >>> hotspare disks in LVM is available only in AIX!
> >>> >>> I have a Debian Buster box and there is no such command chvg.
> >>> >Correct me
> >>> >>> if I am wrong
> >>> >>> ---
> >>> >>> Gilberto Nunes Ferreira
> >>> >>>
> >>> >>> (47) 3025-5907
> >>> >>> (47) 99676-7530 - Whatsapp / Telegram
> >>> >>>
> >>> >>> Skype: gilberto.nunes36
> >>> >>>
> >>> >>>
> >>> >>>
> >>> >>>
> >>> >>>
> >>> >>> Em qui., 30 de jul. de 2020 às 02:05, Strahil Nikolov <
> >>> >>> hunter86...@yahoo.com> escreveu:
> >>> >>>
> >>>  LVM allows creating/converting striped/mirrored LVs without any
> >>> >dowtime
> >>>  and it's using the md module.
> >>> 
> >>>  Best Regards,
> >>>  Strahil Nikolov
> >>> 
> >>>  На 28 юли 2020 г. 22:43:39 GMT+03:00, Gilberto Nunes <
> >>>  gilberto.nune...@gmail.com> написа:
> >>>  >Hi there
> >>>  >
> >>>  >'till now, I am using glusterfs over XFS and so far so good.
> >>>  >Using LVM too
> >>>  >Unfortunately, there is no way with XFS to merge two or more
> >>HDD,
> >>> >in
> >>>  >order
> >>>  >to use more than one HDD, like RAID1 or RAID5.
> >>>  >My primary goal is to use two server with GlusterFS on top of
> >>> >multiples
> >>>  >HDDs for qemu images.
> >>>  >I have think about BTRFS or mdadm.
> >>>  >Anybody has some experience on this?
> >>>

Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Gilberto Nunes
I am using Legacy... But I am using VirtualBox in my labs... Perhaps this
is the problem...
I haven't done that on real hardware. But with a spare disk (2 + 1) in mdadm
it's fine.

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 30 de jul. de 2020 às 12:16, Strahil Nikolov 
escreveu:

> The crash  with failed  mdadm disk is not normal. We need to check it out.
>
> Are  you using  Legacy or  UEFI  ?
>
> Best Regards,
> Strahil Nikolov
>
> На 30 юли 2020 г. 16:25:47 GMT+03:00, Gilberto Nunes <
> gilberto.nune...@gmail.com> написа:
> >But I will give it a try in the lvm mirroring process Thanks
> >---
> >Gilberto Nunes Ferreira
> >
> >(47) 3025-5907
> >(47) 99676-7530 - Whatsapp / Telegram
> >
> >Skype: gilberto.nunes36
> >
> >
> >
> >
> >
> >Em qui., 30 de jul. de 2020 às 10:23, Gilberto Nunes <
> >gilberto.nune...@gmail.com> escreveu:
> >
> >> Well, actually I am not concerned ONLY about mirroring the data, but
> >with
> >> reliability on it.
> >> Here during my lab tests I notice if I pull off a disk and then power
> >on
> >> the server, the LVM crashes...
> >> On the other hand, with ZFS even in degraded state the system booted
> >> normally.
> >> The same happens with mdadm. That's the point.
> >>
> >> ---
> >> Gilberto Nunes Ferreira
> >>
> >> (47) 3025-5907
> >> (47) 99676-7530 - Whatsapp / Telegram
> >>
> >> Skype: gilberto.nunes36
> >>
> >>
> >>
> >>
> >>
> >> Em qui., 30 de jul. de 2020 às 10:18, Alvin Starr 
> >> escreveu:
> >>
> >>> LVM supports mirroring or raid1.
> >>> So to create a raid1 you would do something like "lvcreate -n
> >raidvol  -L
> >>> 1T --mirrors 1  /dev/bigvg"
> >>> You can add a mirror to an existing logical volume using lvconvert
> >with
> >>> something like "lvconvert --mirrors +1 bigvg/unmirroredlv"
> >>>
> >>> But other than simple mirroring you will need mdadm.
> >>>
> >>>
> >>>
> >>> On 7/30/20 8:39 AM, Gilberto Nunes wrote:
> >>>
> >>> Doing some research and the chvg command, which is responsable for
> >create
> >>> hotspare disks in LVM is available only in AIX!
> >>> I have a Debian Buster box and there is no such command chvg.
> >Correct me
> >>> if I am wrong
> >>> ---
> >>> Gilberto Nunes Ferreira
> >>>
> >>> (47) 3025-5907
> >>> (47) 99676-7530 - Whatsapp / Telegram
> >>>
> >>> Skype: gilberto.nunes36
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> Em qui., 30 de jul. de 2020 às 02:05, Strahil Nikolov <
> >>> hunter86...@yahoo.com> escreveu:
> >>>
>  LVM allows creating/converting striped/mirrored LVs without any
> >dowtime
>  and it's using the md module.
> 
>  Best Regards,
>  Strahil Nikolov
> 
>  На 28 юли 2020 г. 22:43:39 GMT+03:00, Gilberto Nunes <
>  gilberto.nune...@gmail.com> написа:
>  >Hi there
>  >
>  >'till now, I am using glusterfs over XFS and so far so good.
>  >Using LVM too
>  >Unfortunately, there is no way with XFS to merge two or more HDD,
> >in
>  >order
>  >to use more than one HDD, like RAID1 or RAID5.
>  >My primary goal is to use two server with GlusterFS on top of
> >multiples
>  >HDDs for qemu images.
>  >I have think about BTRFS or mdadm.
>  >Anybody has some experience on this?
>  >
>  >Thanks a lot
>  >
>  >---
>  >Gilberto Nunes Ferreira
> 
> >>>
> >>> 
> >>>
> >>>
> >>>
> >>> Community Meeting Calendar:
> >>>
> >>> Schedule -
> >>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> >>> Bridge: https://bluejeans.com/441850968
> >>>
> >>> Gluster-users mailing
> >listGluster-users@gluster.orghttps://
> lists.gluster.org/mailman/listinfo/gluster-users
> >>>
> >>>
> >>> --
> >>> Alvin Starr   ||   land:  (647)478-6285
> >>> Netvel Inc.   ||   Cell:
> >(416)806-0133al...@netvel.net  ||
> >>>
> >>>
> >>> 
> >>>
> >>>
> >>>
> >>> Community Meeting Calendar:
> >>>
> >>> Schedule -
> >>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> >>> Bridge: https://bluejeans.com/441850968
> >>>
> >>> Gluster-users mailing list
> >>> Gluster-users@gluster.org
> >>> https://lists.gluster.org/mailman/listinfo/gluster-users
> >>>
> >>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Gilberto Nunes
Debian Buster (Proxmox VE, which is Debian Buster with an Ubuntu kernel)
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 30 de jul. de 2020 às 12:12, Strahil Nikolov 
escreveu:

> If it crashes  - that's  a  bug  and worth checking. What OS do you use ?
>
> Best Regards,
> Strahil Nikolov
>
> На 30 юли 2020 г. 15:59:57 GMT+03:00, Gilberto Nunes <
> gilberto.nune...@gmail.com> написа:
> >Yes! But still with mdadm if you lose 1 disk and reboot the server, the
> >system crashes.
> >But with ZFS there's no crash.
> >
> >---
> >Gilberto Nunes Ferreira
> >
> >(47) 3025-5907
> >(47) 99676-7530 - Whatsapp / Telegram
> >
> >Skype: gilberto.nunes36
> >
> >
> >
> >
> >
> >Em qui., 30 de jul. de 2020 às 09:55, Strahil Nikolov
> >
> >escreveu:
> >
> >> I guess  there  is no  automatic  hot-spare  replacement in LVM, but
> >> mdadm has  that  functionality.
> >>
> >> Best  Regards,
> >> Strahil Nikolov
> >>
> >>
> >> На 30 юли 2020 г. 15:39:18 GMT+03:00, Gilberto Nunes <
> >> gilberto.nune...@gmail.com> написа:
> >> >Doing some research and the chvg command, which is responsable for
> >> >create
> >> >hotspare disks in LVM is available only in AIX!
> >> >I have a Debian Buster box and there is no such command chvg.
> >Correct
> >> >me if
> >> >I am wrong
> >> >---
> >> >Gilberto Nunes Ferreira
> >> >
> >> >(47) 3025-5907
> >> >(47) 99676-7530 - Whatsapp / Telegram
> >> >
> >> >Skype: gilberto.nunes36
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >Em qui., 30 de jul. de 2020 às 02:05, Strahil Nikolov
> >> >
> >> >escreveu:
> >> >
> >> >> LVM allows creating/converting striped/mirrored LVs without any
> >> >dowtime
> >> >> and it's using the md module.
> >> >>
> >> >> Best Regards,
> >> >> Strahil Nikolov
> >> >>
> >> >> На 28 юли 2020 г. 22:43:39 GMT+03:00, Gilberto Nunes <
> >> >> gilberto.nune...@gmail.com> написа:
> >> >> >Hi there
> >> >> >
> >> >> >'till now, I am using glusterfs over XFS and so far so good.
> >> >> >Using LVM too
> >> >> >Unfortunately, there is no way with XFS to merge two or more HDD,
> >in
> >> >> >order
> >> >> >to use more than one HDD, like RAID1 or RAID5.
> >> >> >My primary goal is to use two server with GlusterFS on top of
> >> >multiples
> >> >> >HDDs for qemu images.
> >> >> >I have think about BTRFS or mdadm.
> >> >> >Anybody has some experience on this?
> >> >> >
> >> >> >Thanks a lot
> >> >> >
> >> >> >---
> >> >> >Gilberto Nunes Ferreira
> >> >>
> >>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Gilberto Nunes
Well... LVM doesn't do the job that I need...
But I found this article https://www.gonscak.sk/?p=201 which shows the
way.

Thanks for all the help and comments about this. Cheers.

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 30 de jul. de 2020 às 10:25, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> But I will give it a try in the lvm mirroring process Thanks
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> Em qui., 30 de jul. de 2020 às 10:23, Gilberto Nunes <
> gilberto.nune...@gmail.com> escreveu:
>
>> Well, actually I am not concerned ONLY about mirroring the data, but with
>> reliability on it.
>> Here during my lab tests I notice if I pull off a disk and then power on
>> the server, the LVM crashes...
>> On the other hand, with ZFS even in degraded state the system booted
>> normally.
>> The same happens with mdadm. That's the point.
>>
>> ---
>> Gilberto Nunes Ferreira
>>
>> (47) 3025-5907
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>> Skype: gilberto.nunes36
>>
>>
>>
>>
>>
>> Em qui., 30 de jul. de 2020 às 10:18, Alvin Starr 
>> escreveu:
>>
>>> LVM supports mirroring or raid1.
>>> So to create a raid1 you would do something like "lvcreate -n raidvol
>>> -L 1T --mirrors 1  /dev/bigvg"
>>> You can add a mirror to an existing logical volume using lvconvert  with
>>> something like "lvconvert --mirrors +1 bigvg/unmirroredlv"
>>>
>>> But other than simple mirroring you will need mdadm.
>>>
>>>
>>>
>>> On 7/30/20 8:39 AM, Gilberto Nunes wrote:
>>>
>>> Doing some research and the chvg command, which is responsable for
>>> create hotspare disks in LVM is available only in AIX!
>>> I have a Debian Buster box and there is no such command chvg. Correct me
>>> if I am wrong
>>> ---
>>> Gilberto Nunes Ferreira
>>>
>>> (47) 3025-5907
>>> (47) 99676-7530 - Whatsapp / Telegram
>>>
>>> Skype: gilberto.nunes36
>>>
>>>
>>>
>>>
>>>
>>> Em qui., 30 de jul. de 2020 às 02:05, Strahil Nikolov <
>>> hunter86...@yahoo.com> escreveu:
>>>
 LVM allows creating/converting striped/mirrored LVs without any dowtime
 and it's using the md module.

 Best Regards,
 Strahil Nikolov

 На 28 юли 2020 г. 22:43:39 GMT+03:00, Gilberto Nunes <
 gilberto.nune...@gmail.com> написа:
 >Hi there
 >
 >'till now, I am using glusterfs over XFS and so far so good.
 >Using LVM too
 >Unfortunately, there is no way with XFS to merge two or more HDD, in
 >order
 >to use more than one HDD, like RAID1 or RAID5.
 >My primary goal is to use two server with GlusterFS on top of multiples
 >HDDs for qemu images.
 >I have think about BTRFS or mdadm.
 >Anybody has some experience on this?
 >
 >Thanks a lot
 >
 >---
 >Gilberto Nunes Ferreira

>>>
>>> 
>>>
>>>
>>>
>>> Community Meeting Calendar:
>>>
>>> Schedule -
>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> Bridge: https://bluejeans.com/441850968
>>>
>>> Gluster-users mailing 
>>> listGluster-users@gluster.orghttps://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>>
>>> --
>>> Alvin Starr   ||   land:  (647)478-6285
>>> Netvel Inc.   ||   Cell:  (416)806-0133al...@netvel.net 
>>>  ||
>>>
>>>
>>> 
>>>
>>>
>>>
>>> Community Meeting Calendar:
>>>
>>> Schedule -
>>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>>> Bridge: https://bluejeans.com/441850968
>>>
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Gilberto Nunes
But I will give it a try in the lvm mirroring process Thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 30 de jul. de 2020 às 10:23, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> Well, actually I am not concerned ONLY about mirroring the data, but with
> reliability on it.
> Here during my lab tests I notice if I pull off a disk and then power on
> the server, the LVM crashes...
> On the other hand, with ZFS even in degraded state the system booted
> normally.
> The same happens with mdadm. That's the point.
>
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> Em qui., 30 de jul. de 2020 às 10:18, Alvin Starr 
> escreveu:
>
>> LVM supports mirroring or raid1.
>> So to create a raid1 you would do something like "lvcreate -n raidvol  -L
>> 1T --mirrors 1  /dev/bigvg"
>> You can add a mirror to an existing logical volume using lvconvert  with
>> something like "lvconvert --mirrors +1 bigvg/unmirroredlv"
>>
>> But other than simple mirroring you will need mdadm.
>>
>>
>>
>> On 7/30/20 8:39 AM, Gilberto Nunes wrote:
>>
>> Doing some research and the chvg command, which is responsable for create
>> hotspare disks in LVM is available only in AIX!
>> I have a Debian Buster box and there is no such command chvg. Correct me
>> if I am wrong
>> ---
>> Gilberto Nunes Ferreira
>>
>> (47) 3025-5907
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>> Skype: gilberto.nunes36
>>
>>
>>
>>
>>
>> Em qui., 30 de jul. de 2020 às 02:05, Strahil Nikolov <
>> hunter86...@yahoo.com> escreveu:
>>
>>> LVM allows creating/converting striped/mirrored LVs without any dowtime
>>> and it's using the md module.
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> На 28 юли 2020 г. 22:43:39 GMT+03:00, Gilberto Nunes <
>>> gilberto.nune...@gmail.com> написа:
>>> >Hi there
>>> >
>>> >'till now, I am using glusterfs over XFS and so far so good.
>>> >Using LVM too
>>> >Unfortunately, there is no way with XFS to merge two or more HDD, in
>>> >order
>>> >to use more than one HDD, like RAID1 or RAID5.
>>> >My primary goal is to use two server with GlusterFS on top of multiples
>>> >HDDs for qemu images.
>>> >I have think about BTRFS or mdadm.
>>> >Anybody has some experience on this?
>>> >
>>> >Thanks a lot
>>> >
>>> >---
>>> >Gilberto Nunes Ferreira
>>>
>>
>> 
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://bluejeans.com/441850968
>>
>> Gluster-users mailing 
>> listGluster-users@gluster.orghttps://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>> --
>> Alvin Starr   ||   land:  (647)478-6285
>> Netvel Inc.   ||   Cell:  (416)806-0133al...@netvel.net  
>> ||
>>
>>
>> 
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://bluejeans.com/441850968
>>
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Gilberto Nunes
Well, actually I am not concerned ONLY about mirroring the data, but also
with its reliability.
Here during my lab tests I noticed that if I pull off a disk and then power on
the server, the LVM setup crashes...
On the other hand, with ZFS even in a degraded state the system booted
normally.
The same happens with mdadm. That's the point.

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 30 de jul. de 2020 às 10:18, Alvin Starr 
escreveu:

> LVM supports mirroring or raid1.
> So to create a raid1 you would do something like "lvcreate -n raidvol  -L
> 1T --mirrors 1  /dev/bigvg"
> You can add a mirror to an existing logical volume using lvconvert  with
> something like "lvconvert --mirrors +1 bigvg/unmirroredlv"
>
> But other than simple mirroring you will need mdadm.
>
>
>
> On 7/30/20 8:39 AM, Gilberto Nunes wrote:
>
> Doing some research and the chvg command, which is responsable for create
> hotspare disks in LVM is available only in AIX!
> I have a Debian Buster box and there is no such command chvg. Correct me
> if I am wrong
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> Em qui., 30 de jul. de 2020 às 02:05, Strahil Nikolov <
> hunter86...@yahoo.com> escreveu:
>
>> LVM allows creating/converting striped/mirrored LVs without any dowtime
>> and it's using the md module.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> На 28 юли 2020 г. 22:43:39 GMT+03:00, Gilberto Nunes <
>> gilberto.nune...@gmail.com> написа:
>> >Hi there
>> >
>> >'till now, I am using glusterfs over XFS and so far so good.
>> >Using LVM too
>> >Unfortunately, there is no way with XFS to merge two or more HDD, in
>> >order
>> >to use more than one HDD, like RAID1 or RAID5.
>> >My primary goal is to use two server with GlusterFS on top of multiples
>> >HDDs for qemu images.
>> >I have think about BTRFS or mdadm.
>> >Anybody has some experience on this?
>> >
>> >Thanks a lot
>> >
>> >---
>> >Gilberto Nunes Ferreira
>>
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing 
> listGluster-users@gluster.orghttps://lists.gluster.org/mailman/listinfo/gluster-users
>
>
> --
> Alvin Starr   ||   land:  (647)478-6285
> Netvel Inc.   ||   Cell:  (416)806-0133al...@netvel.net   
>||
>
>
> 
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://bluejeans.com/441850968
>
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Gilberto Nunes
I meant, if you power off the server, pull off 1 disk, and then power on, we
get system errors.

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





Em qui., 30 de jul. de 2020 às 09:59, Gilberto Nunes <
gilberto.nune...@gmail.com> escreveu:

> Yes! But still with mdadm if you lose 1 disk and reboot the server, the
> system crashes.
> But with ZFS there's no crash.
>
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> Em qui., 30 de jul. de 2020 às 09:55, Strahil Nikolov <
> hunter86...@yahoo.com> escreveu:
>
>> I guess  there  is no  automatic  hot-spare  replacement in LVM, but
>> mdadm has  that  functionality.
>>
>> Best  Regards,
>> Strahil Nikolov
>>
>>
>> На 30 юли 2020 г. 15:39:18 GMT+03:00, Gilberto Nunes <
>> gilberto.nune...@gmail.com> написа:
>> >Doing some research and the chvg command, which is responsable for
>> >create
>> >hotspare disks in LVM is available only in AIX!
>> >I have a Debian Buster box and there is no such command chvg. Correct
>> >me if
>> >I am wrong
>> >---
>> >Gilberto Nunes Ferreira
>> >
>> >(47) 3025-5907
>> >(47) 99676-7530 - Whatsapp / Telegram
>> >
>> >Skype: gilberto.nunes36
>> >
>> >
>> >
>> >
>> >
>> >Em qui., 30 de jul. de 2020 às 02:05, Strahil Nikolov
>> >
>> >escreveu:
>> >
>> >> LVM allows creating/converting striped/mirrored LVs without any
>> >dowtime
>> >> and it's using the md module.
>> >>
>> >> Best Regards,
>> >> Strahil Nikolov
>> >>
>> >> На 28 юли 2020 г. 22:43:39 GMT+03:00, Gilberto Nunes <
>> >> gilberto.nune...@gmail.com> написа:
>> >> >Hi there
>> >> >
>> >> >'till now, I am using glusterfs over XFS and so far so good.
>> >> >Using LVM too
>> >> >Unfortunately, there is no way with XFS to merge two or more HDD, in
>> >> >order
>> >> >to use more than one HDD, like RAID1 or RAID5.
>> >> >My primary goal is to use two server with GlusterFS on top of
>> >multiples
>> >> >HDDs for qemu images.
>> >> >I have think about BTRFS or mdadm.
>> >> >Anybody has some experience on this?
>> >> >
>> >> >Thanks a lot
>> >> >
>> >> >---
>> >> >Gilberto Nunes Ferreira
>> >>
>>
>






Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Gilberto Nunes
Yes! But still, with mdadm, if you lose one disk and reboot the server, the
system crashes.
But with ZFS there's no crash.

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





On Thu, 30 Jul 2020 at 09:55, Strahil Nikolov
wrote:

> I guess  there  is no  automatic  hot-spare  replacement in LVM, but
> mdadm has  that  functionality.
>
> Best  Regards,
> Strahil Nikolov
>
>
> На 30 юли 2020 г. 15:39:18 GMT+03:00, Gilberto Nunes <
> gilberto.nune...@gmail.com> написа:
> >Doing some research and the chvg command, which is responsable for
> >create
> >hotspare disks in LVM is available only in AIX!
> >I have a Debian Buster box and there is no such command chvg. Correct
> >me if
> >I am wrong
> >---
> >Gilberto Nunes Ferreira
> >
> >(47) 3025-5907
> >(47) 99676-7530 - Whatsapp / Telegram
> >
> >Skype: gilberto.nunes36
> >
> >
> >
> >
> >
> >Em qui., 30 de jul. de 2020 às 02:05, Strahil Nikolov
> >
> >escreveu:
> >
> >> LVM allows creating/converting striped/mirrored LVs without any
> >dowtime
> >> and it's using the md module.
> >>
> >> Best Regards,
> >> Strahil Nikolov
> >>
> >> На 28 юли 2020 г. 22:43:39 GMT+03:00, Gilberto Nunes <
> >> gilberto.nune...@gmail.com> написа:
> >> >Hi there
> >> >
> >> >'till now, I am using glusterfs over XFS and so far so good.
> >> >Using LVM too
> >> >Unfortunately, there is no way with XFS to merge two or more HDD, in
> >> >order
> >> >to use more than one HDD, like RAID1 or RAID5.
> >> >My primary goal is to use two server with GlusterFS on top of
> >multiples
> >> >HDDs for qemu images.
> >> >I have think about BTRFS or mdadm.
> >> >Anybody has some experience on this?
> >> >
> >> >Thanks a lot
> >> >
> >> >---
> >> >Gilberto Nunes Ferreira
> >>
>






Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Gilberto Nunes
Doing some research, I found that the chvg command, which is responsible for
creating hot-spare disks in LVM, is available only on AIX!
I have a Debian Buster box and there is no such chvg command. Correct me if
I am wrong.
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36
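
A minimal sketch of what the Linux LVM equivalent looks like: there is no
chvg, but free space kept in the VG can serve the same purpose. The VG name
vg0, the LV name data and the spare PV /dev/sdd are only placeholders:

  vgextend vg0 /dev/sdd            # keep a spare disk's worth of free space in the VG
  # in /etc/lvm/lvm.conf (activation section), raid_fault_policy = "allocate"
  # should let dmeventd rebuild a raid LV onto that free space automatically
  lvs -a -o +devices vg0           # inspect the raid LV and the devices behind each leg
  lvconvert --repair vg0/data      # manual repair after a leg has failed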





On Thu, 30 Jul 2020 at 02:05, Strahil Nikolov
wrote:

> LVM allows creating/converting striped/mirrored LVs without any dowtime
> and it's using the md module.
>
> Best Regards,
> Strahil Nikolov
>
> На 28 юли 2020 г. 22:43:39 GMT+03:00, Gilberto Nunes <
> gilberto.nune...@gmail.com> написа:
> >Hi there
> >
> >'till now, I am using glusterfs over XFS and so far so good.
> >Using LVM too
> >Unfortunately, there is no way with XFS to merge two or more HDD, in
> >order
> >to use more than one HDD, like RAID1 or RAID5.
> >My primary goal is to use two server with GlusterFS on top of multiples
> >HDDs for qemu images.
> >I have think about BTRFS or mdadm.
> >Anybody has some experience on this?
> >
> >Thanks a lot
> >
> >---
> >Gilberto Nunes Ferreira
>






Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Strahil Nikolov
LVM allows creating/converting striped/mirrored LVs without any downtime, and
it uses the md module.

Best Regards,
Strahil Nikolov
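
A minimal sketch of that kind of online conversion; vg0/data and /dev/sdc are
only placeholders, and the LV can stay mounted throughout:

  vgextend vg0 /dev/sdc                 # bring the second disk into the volume group
  lvconvert --type raid1 -m 1 vg0/data  # convert the linear LV into a two-way mirror (md raid1 personality)
  lvs -a vg0                            # the Cpy%Sync column shows the initial synchronisation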

On 28 July 2020 22:43:39 GMT+03:00, Gilberto Nunes
wrote:
>Hi there
>
>'till now, I am using glusterfs over XFS and so far so good.
>Using LVM too
>Unfortunately, there is no way with XFS to merge two or more HDD, in
>order
>to use more than one HDD, like RAID1 or RAID5.
>My primary goal is to use two server with GlusterFS on top of multiples
>HDDs for qemu images.
>I have think about BTRFS or mdadm.
>Anybody has some experience on this?
>
>Thanks a lot
>
>---
>Gilberto Nunes Ferreira






Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-30 Thread Strahil Nikolov
Keep in mind that if you use each disk as a separate brick, you should use
replica 3 volumes.

Also, mdadm allows you to specify a hot spare, so in case of a disk failure it
will automatically kick in and replace the failed drive.

I'm not sure if LVM has that option, but since it relies on the same mechanism
mdadm uses, it should be possible.

Best Regards,
Strahil Nikolov
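
A minimal sketch of the mdadm hot-spare setup; the device names are only
placeholders:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1        # sdd1 sits idle as the hot spare
  mdadm --detail /dev/md0                  # the spare is listed with state "spare"
  # if an active member fails, md rebuilds onto the spare by itself;
  # the monitor only adds mail alerts and spare sharing between arrays
  mdadm --monitor --scan --daemonise --mail root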

On 29 July 2020 0:10:44 GMT+03:00, Darrell Budic
wrote:
>ZFS isn’t that resource intensive, although it does like RAM.
>
>But why not just add additional bricks? Gluster is kind of built to use
>disks as bricks, and rely on gluster replication to provide redundancy
>in the data. Either replication or distribute-replication can provide
>protection against disk failures.
>
>> On Jul 28, 2020, at 3:27 PM, Gilberto Nunes
> wrote:
>> 
>> But still, have some doubt how LVM will handle in case of disk
>failure!
>> Or even mdadm.
>> Both need intervention when one or more disks die!
>> What I need is something like ZFS but that uses less resources...
>> 
>> Thanks any way
>> 
>> 
>> 
>> ---
>> Gilberto Nunes Ferreira
>> 
>> 
>> 
>> 
>> Em ter., 28 de jul. de 2020 às 17:16, Gilberto Nunes
>mailto:gilberto.nune...@gmail.com>>
>escreveu:
>> Good to know that...
>> Thanks
>> ---
>> Gilberto Nunes Ferreira
>> 
>> 
>> 
>> 
>> Em ter., 28 de jul. de 2020 às 17:08, Alvin Starr > escreveu:
>> Having just been burnt by BTRFS I would stick with XFS and
>LVM/others.
>> 
>> LVM will do disk replication or raid1. I do not believe that
>raid3,4,5,6.. is supported.
>> mdadm does support all the various raid modes and I have used it
>quite reliably for years.
>> You may want to look at the raid456 write-journal but that will
>require an SSD or NVME device to be used effectively.
>> 
>> 
>> On 7/28/20 3:43 PM, Gilberto Nunes wrote:
>>> Hi there
>>> 
>>> 'till now, I am using glusterfs over XFS and so far so good.
>>> Using LVM too
>>> Unfortunately, there is no way with XFS to merge two or more HDD, in
>order to use more than one HDD, like RAID1 or RAID5.
>>> My primary goal is to use two server with GlusterFS on top of
>multiples HDDs for qemu images.
>>> I have think about BTRFS or mdadm.
>>> Anybody has some experience on this?
>>> 
>>> Thanks a lot
>>> 
>>> ---
>>> Gilberto Nunes Ferreira
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>> 
>> -- 
>> Alvin Starr   ||   land:  (647)478-6285
>> Netvel Inc.   ||   Cell:  (416)806-0133
>> al...@netvel.net   ||
>> 
>> 
>> 
>> 
>> 






Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-28 Thread Darrell Budic
ZFS isn’t that resource intensive, although it does like RAM.

But why not just add additional bricks? Gluster is kind of built to use disks
as bricks, relying on Gluster replication to provide redundancy in the data.
Either plain replication or distributed replication can provide protection
against disk failures.
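
A minimal sketch of that layout with three hypothetical servers (srv1-3), two
data disks each and a made-up volume name; the first three bricks form one
replica set and the next three another:

  gluster volume create vmstore replica 3 \
      srv1:/bricks/disk1/brick srv2:/bricks/disk1/brick srv3:/bricks/disk1/brick \
      srv1:/bricks/disk2/brick srv2:/bricks/disk2/brick srv3:/bricks/disk2/brick
  gluster volume start vmstore
  # after swapping a failed disk, point the volume at the new brick and let self-heal rebuild it
  gluster volume replace-brick vmstore srv2:/bricks/disk1/brick \
      srv2:/bricks/disk1-new/brick commit force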

> On Jul 28, 2020, at 3:27 PM, Gilberto Nunes  
> wrote:
> 
> But still, have some doubt how LVM will handle in case of disk failure!
> Or even mdadm.
> Both need intervention when one or more disks die!
> What I need is something like ZFS but that uses less resources...
> 
> Thanks any way
> 
> 
> 
> ---
> Gilberto Nunes Ferreira
> 
> 
> 
> 
> Em ter., 28 de jul. de 2020 às 17:16, Gilberto Nunes 
> mailto:gilberto.nune...@gmail.com>> escreveu:
> Good to know that...
> Thanks
> ---
> Gilberto Nunes Ferreira
> 
> 
> 
> 
> Em ter., 28 de jul. de 2020 às 17:08, Alvin Starr  > escreveu:
> Having just been burnt by BTRFS I would stick with XFS and LVM/others.
> 
> LVM will do disk replication or raid1. I do not believe that raid3,4,5,6.. is 
> supported.
> mdadm does support all the various raid modes and I have used it quite 
> reliably for years.
> You may want to look at the raid456 write-journal but that will require an 
> SSD or NVME device to be used effectively.
> 
> 
> On 7/28/20 3:43 PM, Gilberto Nunes wrote:
>> Hi there
>> 
>> 'till now, I am using glusterfs over XFS and so far so good.
>> Using LVM too
>> Unfortunately, there is no way with XFS to merge two or more HDD, in order 
>> to use more than one HDD, like RAID1 or RAID5.
>> My primary goal is to use two server with GlusterFS on top of multiples HDDs 
>> for qemu images.
>> I have think about BTRFS or mdadm.
>> Anybody has some experience on this?
>> 
>> Thanks a lot
>> 
>> ---
>> Gilberto Nunes Ferreira
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
> 
> -- 
> Alvin Starr   ||   land:  (647)478-6285
> Netvel Inc.   ||   Cell:  (416)806-0133
> al...@netvel.net   ||
> 
> 
> 
> 
> 







Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-28 Thread Gilberto Nunes
But I still have some doubt about how LVM will handle a disk failure!
Or even mdadm.
Both need manual intervention when one or more disks die!
What I need is something like ZFS, but using fewer resources...

Thanks anyway



---
Gilberto Nunes Ferreira
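
For comparison, a minimal sketch of the ZFS behaviour being referred to, with
a made-up pool name; with a hot spare attached and the zed daemon running, a
faulted disk should be resilvered without manual steps:

  zpool create tank mirror /dev/sdb /dev/sdc spare /dev/sdd  # mirror plus a hot spare
  zpool set autoreplace=on tank   # also let a new disk in the same slot take over automatically
  zpool status tank               # shows the spare kicking in and the resilver progress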




On Tue, 28 Jul 2020 at 17:16, Gilberto Nunes <
gilberto.nune...@gmail.com> wrote:

> Good to know that...
> Thanks
> ---
> Gilberto Nunes Ferreira
>
>
>
>
> Em ter., 28 de jul. de 2020 às 17:08, Alvin Starr 
> escreveu:
>
>> Having just been burnt by BTRFS I would stick with XFS and LVM/others.
>>
>> LVM will do disk replication or raid1. I do not believe that
>> raid3,4,5,6.. is supported.
>> mdadm does support all the various raid modes and I have used it quite
>> reliably for years.
>> You may want to look at the raid456 write-journal but that will require
>> an SSD or NVME device to be used effectively.
>>
>>
>> On 7/28/20 3:43 PM, Gilberto Nunes wrote:
>>
>> Hi there
>>
>> 'till now, I am using glusterfs over XFS and so far so good.
>> Using LVM too
>> Unfortunately, there is no way with XFS to merge two or more HDD, in
>> order to use more than one HDD, like RAID1 or RAID5.
>> My primary goal is to use two server with GlusterFS on top of multiples
>> HDDs for qemu images.
>> I have think about BTRFS or mdadm.
>> Anybody has some experience on this?
>>
>> Thanks a lot
>>
>> ---
>> Gilberto Nunes Ferreira
>>
>>
>>
>>
>> 
>>
>>
>>
>>
>>
>> --
>> Alvin Starr   ||   land:  (647)478-6285
>> Netvel Inc.   ||   Cell:  (416)806-0133al...@netvel.net  
>> ||
>>
>>
>>






Re: [Gluster-users] GlusterFS over multiples HDD

2020-07-28 Thread Gilberto Nunes
Good to know that...
Thanks
---
Gilberto Nunes Ferreira




On Tue, 28 Jul 2020 at 17:08, Alvin Starr
wrote:

> Having just been burnt by BTRFS I would stick with XFS and LVM/others.
>
> LVM will do disk replication or raid1. I do not believe that raid3,4,5,6..
> is supported.
> mdadm does support all the various raid modes and I have used it quite
> reliably for years.
> You may want to look at the raid456 write-journal but that will require an
> SSD or NVME device to be used effectively.
>
>
> On 7/28/20 3:43 PM, Gilberto Nunes wrote:
>
> Hi there
>
> 'till now, I am using glusterfs over XFS and so far so good.
> Using LVM too
> Unfortunately, there is no way with XFS to merge two or more HDD, in order
> to use more than one HDD, like RAID1 or RAID5.
> My primary goal is to use two server with GlusterFS on top of multiples
> HDDs for qemu images.
> I have think about BTRFS or mdadm.
> Anybody has some experience on this?
>
> Thanks a lot
>
> ---
> Gilberto Nunes Ferreira
>
>
>
>
> 
>
>
>
>
>
> --
> Alvin Starr   ||   land:  (647)478-6285
> Netvel Inc.   ||   Cell:  (416)806-0133al...@netvel.net   
>||
>
>
>





