[Gluster-users] Gluster 5.5 slower than 3.12.15

2019-04-02 Thread Strahil Nikolov
-qlength: 1 features.shard: on user.cifs: off storage.owner-uid: 36 storage.owner-gid: 36 network.ping-timeout: 30 performance.strict-o-direct: on cluster.granular-entry-heal: enable cluster.enable-shared-storage: enable Network: 1 Gbit/s Filesystem: XFS Best Regards, Strahil Nikolov

Re: [Gluster-users] [ovirt-users] Re: Hosted-Engine constantly dies

2019-04-05 Thread Strahil Nikolov
er dev teams. Best Regards, Strahil Nikolov ___ Gluster-users mailing list Gluster-users@gluster.org https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Re: Gluster performance issues - need advice

2019-01-24 Thread Strahil Nikolov
in order to fix the situation, as the fuse client is the best way to use glusterfs, and it seems the glusterfs-server is not the guilty one. Thanks in advance for your guidance. I have learned so much. Best Regards, Strahil Nikolov From: Strahil To: Amar Tumballi Suryanarayan Cc: Gluster

[Gluster-users] Gluster snapshot fails

2019-04-09 Thread Strahil Nikolov
t2 are on /dev/gluster_vg_ssd/gluster_lv_engine , while arbiter is on /dev/gluster_vg_sda3/gluster_lv_engine. Is that the issue? Should I rename my brick's VG? If so, why is there no mention of it in the documentation? Best Regards, Strahil Nikolov ___ Gluster

Re: [Gluster-users] Gluster snapshot fails

2019-04-11 Thread Strahil Nikolov
    0.16   1.58 As you can see, all bricks are thin LVs and space is not the issue. Can someone hint at how to enable debugging, so the gluster logs can show the reason for that pre-check failure? Best Regards, Strahil Nikolov On Wednesday, April 10, 2019 at 9:05:15 GMT-4, Rafi Kavungal Chun

Re: [Gluster-users] Gluster snapshot fails

2019-04-12 Thread Strahil Nikolov
I hope this is the last update on the issue -> opened a bug https://bugzilla.redhat.com/show_bug.cgi?id=1699309 Best regards, Strahil Nikolov On Friday, April 12, 2019 at 7:32:41 GMT-4, Strahil Nikolov wrote: Hi All, I have tested gluster snapshot without systemd.automo

Re: [Gluster-users] Cannot see all data in mount

2019-05-15 Thread Strahil Nikolov
It seems that I got confused. So you see the files on the bricks (servers), but not when you mount glusterfs on the clients? If so, this is not the sharding feature, as it works the opposite way. Best Regards, Strahil Nikolov On Thursday, May 16, 2019 at 0:35:04 GMT+3, Paul van der

Re: [Gluster-users] Proposing to previous ganesha HA cluster solution back to gluster code as gluster-7 feature

2019-04-30 Thread Strahil Nikolov
of the NFS locks, so no disruption will be felt by the clients. Still, this will be a lot of work to achieve. Best Regards, Strahil Nikolov On Apr 30, 2019 15:19, Jim Kinney wrote: >   > +1! > I'm using nfs-ganesha in my next upgrade so my client systems can use NFS > instead of

Re: [Gluster-users] Gluster 5.6 slow read despite fast local brick

2019-04-22 Thread Strahil Nikolov
cluster.enable-shared-storage: enable Any issues expected when downgrading the version? Best Regards, Strahil Nikolov On Monday, April 22, 2019 at 0:26:51 GMT-4, Strahil wrote: Hello Community, I have been left with the impression that FUSE mounts will read from both local

[Gluster-users] Gluster v6.6 replica 2 arbiter 1 - gfid on arbiter is different

2019-11-13 Thread Strahil Nikolov
in advance for your response. Best Regards, Strahil Nikolov Community Meeting Calendar: APAC Schedule - Every 2nd and 4th Tuesday at 11:30 AM IST Bridge: https://bluejeans.com/118564314 NA/EMEA Schedule - Every 1st and 3rd Tuesday at 01:00 PM EDT Bridge: https://bluejeans.com/11856431

Re: [Gluster-users] Gluster v6.6 replica 2 arbiter 1 - gfid on arbiter is different

2019-11-13 Thread Strahil Nikolov
It seems that whenever I reboot a gluster node, I get this problem - so it's not an arbiter issue. Obviously something is wrong with v6.6, as I never had such issues with v6.5. Any ideas where I should start? Best Regards, Strahil Nikolov On Wednesday, November 13, 2019 at 22:23:38

Re: [Gluster-users] Client Handling of Elastic Clusters

2019-10-16 Thread Strahil Nikolov
and you can keep the number of servers the same... still the server bandwidth will be a limit at some point. I'm not sure how other SDS deal with such elasticity. I guess many users on the list will hate me for saying this, but have you checked CEPH for your needs? Best Regards, Strahil Nikolov

Re: [Gluster-users] Use GlusterFS as storage for images of virtual machines - available issues

2019-11-27 Thread Strahil Nikolov
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh [Install] RequiredBy=shutdown.target Of course systemd has to be reloaded :) Best Regards, Strahil Nikolov On Wednesday, November 27, 2019 at 8:07:52 GMT-5, Sankarshan Mukhopadhyay wrote: On Wed, Nov 27, 2019 at 6:10 PM Rav
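The unit fragment quoted in this snippet can be fleshed out into a complete service. The unit name and everything except the `ExecStop` script and `RequiredBy=shutdown.target` are assumptions in this sketch:

```shell
# Hypothetical unit name; only ExecStop and RequiredBy come from the thread.
cat > /etc/systemd/system/gluster-stop-on-shutdown.service <<'EOF'
[Unit]
Description=Stop all Gluster processes cleanly at shutdown
DefaultDependencies=no
Before=shutdown.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh

[Install]
RequiredBy=shutdown.target
EOF

# "Of course systemd has to be reloaded":
systemctl daemon-reload
systemctl enable gluster-stop-on-shutdown.service
```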

Re: [Gluster-users] GFS performance under heavy traffic

2019-12-19 Thread Strahil Nikolov
. Best Regards, Strahil Nikolov On Thursday, December 19, 2019 at 02:28:55 GMT+2, David Cunningham wrote: Hi Raghavendra and Strahil, We are using GFS version 5.6-1.el7 from the CentOS repository. Unfortunately we can't modify the application and it expects to read and write

Re: [Gluster-users] GlusterFS problems & alternatives

2020-02-11 Thread Strahil Nikolov
fore the vendor provided a new patch), so the issues in Gluster are nothing new, and we should not forget that Gluster is free (and doesn't cost millions like some arrays). The only mitigation is to thoroughly test each patch on a cluster that provides storage for your dev/test clients. I hope you

Re: [Gluster-users] Advice on moving volumes/bricks to new servers

2020-02-29 Thread Strahil Nikolov
ry data. Another way to migrate the data is to: 1. Add the new disks on the old srv1,2,3 2. Add the new disks to the VG 3. pvmove all LVs to the new disks (I prefer to use the '--atomic' option) 4. vgreduce with the old disks 5. pvremove the old disks 6. Then just delete the block devices from the
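The five migration steps above map onto standard LVM commands roughly as follows; the device and VG names are placeholders, not taken from the thread:

```shell
# Assumptions: gluster_vg holds the brick LVs, /dev/sdd is a new disk,
# /dev/sdb is the old disk being retired.
pvcreate /dev/sdd                   # 1. prepare the new disk and ...
vgextend gluster_vg /dev/sdd        # 2. ... add it to the brick VG
pvmove --atomic /dev/sdb /dev/sdd   # 3. move all extents off the old disk
vgreduce gluster_vg /dev/sdb        # 4. drop the old disk from the VG
pvremove /dev/sdb                   # 5. wipe its LVM label, then detach the device
```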

Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-03-02 Thread Strahil Nikolov
Hi Felix, can you test /on a non-prod system/ the latest minor version of gluster v6? Best Regards, Strahil Nikolov On Monday, March 2, 2020 at 21:43:48 GMT+2, Felix Kölzow wrote: Dear Community, this message appears for me too on GlusterFS 6.0. Before that, we had GlusterFS

Re: [Gluster-users] Disk use with GlusterFS

2020-03-06 Thread Strahil Nikolov
ransport-type: tcp >> >>>> Bricks: >> >>>> Brick1: myhost:/nodirectwritedata/gluster/gvol0 >> >>>> Options Reconfigured: >> >>>> transport.address-family: inet >> >>>> nfs.disable: on >> >>

Re: [Gluster-users] Geo-replication

2020-03-02 Thread Strahil Nikolov
ix is added while adding public key to remote >>>> node’s authorized_keys file, So that if anyone gains access using >this key >>>> can access only the gsyncd command. >>>> ``` >>>> command=gsyncd ssh-key…. >>>> ``` >>>> Thanks for your help. >>>> -- >>>> David Cunningham, Voisonics Limited >>>> http://voisonics.com/ >>>> USA: +1 213 221 1092 >>>> New Zealand: +64 (0)28 2558 3782 >>>> — >>>> regards >>>> Aravinda Vishwanathapura >>>> https://kadalu.io Hey David, Why don't you set the B cluster's hostnames in /etc/hosts of all A cluster nodes? Maybe you won't need to rebuild the whole B cluster. I guess the A cluster nodes need to be able to reach all nodes from the B cluster, so you might need to change the firewall settings. Best Regards, Strahil Nikolov Community Meeting Calendar: Schedule - Every Tuesday at 14:30 IST / 09:00 UTC Bridge: https://bluejeans.com/441850968 Gluster-users mailing list Gluster-users@gluster.org https://lists.gluster.org/mailman/listinfo/gluster-users
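The /etc/hosts suggestion at the end of this reply boils down to something like the following on every A-cluster node; the addresses and hostnames are made up for illustration:

```shell
# Map the B cluster's hostnames locally instead of rebuilding it.
cat >> /etc/hosts <<'EOF'
203.0.113.11  b-node1
203.0.113.12  b-node2
203.0.113.13  b-node3
EOF

# The A nodes must reach every B node, so the firewall may need opening,
# e.g. with firewalld's predefined glusterfs service:
firewall-cmd --permanent --add-service=glusterfs
firewall-cmd --reload
```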

Re: [Gluster-users] writing to fuse device failed: No such file or directory

2020-03-03 Thread Strahil Nikolov
emains in v6.0. Actually, we do not have a non-prod gluster system, so >it will take some time > >to do this. > >Regards, > >Felix > > >On 02/03/2020 23:25, Strahil Nikolov wrote: >> Hi Felix, >> >> can you test /on non-prod system/ the latest minor

Re: [Gluster-users] Brick Goes Offline After server reboot/Or Gluster Container is restarted, on which a gluster node is running

2020-02-28 Thread Strahil Nikolov
Gluster-users@gluster.org mailto:Gluster-users@gluster.org >> > https://lists.gluster.org/mailman/listinfo/gluster-users >> > >> > > >> >> >> Met vriendelijke groet, With kind regards, >> >> Jorick Astrego >>

Re: [Gluster-users] Geo-replication

2020-03-01 Thread Strahil Nikolov
f anyone gain access using >this key >>> can access only gsyncd command. >>> >>> ``` >>> command=gsyncd ssh-key…. >>> ``` >>> >>> >>> >>> Thanks for your help. >>> >>> -- >>> David C

Re: [Gluster-users] No possible to mount a gluster volume via /etc/fstab?

2020-01-24 Thread Strahil Nikolov
ith following >> parameters. >> > >> > net.ipv6.conf.all.disable_ipv6 = 1 >> > net.ipv6.conf.default.disable_ipv6 = 1 >> > >> > That did not help. >> > >> > Volumes are configured with inet. >> > >> > sudo gluster volum

[Gluster-users] Upgrade gluster v7.0 to 7.2 posix-acl.c:262:posix_acl_log_permit_denied

2020-01-31 Thread Strahil Nikolov
Hello Community, I am experiencing again the issue with the ACL and none of the fixes , previously stated, are helping out. Bug report -> https://bugzilla.redhat.com/show_bug.cgi?id=1797099 Any ideas would be helpful. Best Regards, Strahil Nikolov Community Meeting Calen

Re: [Gluster-users] [ovirt-users] Re: ACL issue v6.6, v6.7, v7.1, v7.2

2020-02-07 Thread Strahil Nikolov
ome other things should be different. >> >> >> Greetings, >> >>     Paolo >> >> On 06/02/20 23:30, Christian Reiss wrote: >>> Hey, >>> >>> I hit this bug, too. With disastrous results. >>> I second this post. >>

Re: [Gluster-users] [Errno 107] Transport endpoint is not connected

2020-01-30 Thread Strahil Nikolov
e 4.2 and there were 2 older VMs which had snapshots >from >prior versions, while the leaf was in compatibility level 4.2. Note: >the >backup was taken on the engine running 4.3. > >Thanks, Olaf > > > >On Tue, Jan 28, 2020 at 17:31, Strahil Nikolov wrote >: > >> On

Re: [Gluster-users] interpreting heal info and reported entries

2020-01-30 Thread Strahil Nikolov
/mailman/listinfo/gluster-users Hi Ravi, This is the third time an oVirt user (one of them is me, and I think my email is in the list) has reported such an issue. We need a thorough investigation, as this keeps recurring. Best Regards, Strahil Nikolov Community Meeting Calendar: APAC Schedule - Every

Re: [Gluster-users] Gluster Performance Issues

2020-02-20 Thread Strahil Nikolov
mation will allow more experienced administrators and the developers to identify any pattern that could cause the symptoms. Tuning Gluster is one of the hardest topics, so you should prepare yourself for a lot of testing until you reach the optimal settings for your volumes. Best Regards, Strahi

Re: [Gluster-users] It appears that readdir is not cached for FUSE mounts

2020-02-10 Thread Strahil Nikolov
On February 10, 2020 5:32:29 PM GMT+02:00, Matthias Schniedermeyer wrote: >On 10.02.20 16:21, Strahil Nikolov wrote: >> On February 10, 2020 2:25:17 PM GMT+02:00, Matthias Schniedermeyer > wrote: >>> Hi >>> >>> >>> I would describe our basic use

Re: [Gluster-users] Permission denied at some directories/files after a split brain

2020-02-10 Thread Strahil Nikolov
rsion of gluster are you using ? In my case only a downgrade has restored the operation of the cluster, so you should consider that as an option (last, but still an option). You can try to run a find against the fuse and 'find /path/to/fuse -exec setfacl -m u:root:rw {} \;' Maybe that will force

Re: [Gluster-users] geo-replication sync issue

2020-03-11 Thread Strahil Nikolov
2020-03-11 20:08:55.286410] I [master(worker >/srv/media-storage):1441:process] _GMaster: Batch Completed >changelog_end=1583917610 entry_stime=None changelog_start=1583917610 >stime=None duration=153.5185 num_changelogs=1 mode=xsync >[2020-03-11 20:08:55.315442] I [master(worker >/srv/media

Re: [Gluster-users] Erroneous "No space left on device." messages

2020-03-11 Thread Strahil Nikolov
_all_smalldom.m    pe_PB.in >PE_Data_Comparison_glider_sp011_smalldom.m  pe_PB.log >PE_Data_Comparison_glider_sp064_smalldom.m pe_PB_short.in >PeManJob.log        PlotJob > >mseas(DSMccfzR75deg_001b)% ls PeManJob >PeManJob > >mseas(DSMccfzR75deg_001b)% ls

Re: [Gluster-users] geo-replication sync issue

2020-03-11 Thread Strahil Nikolov
-node Active >Hybrid Crawl N/A > >Any idea? please. Thank you. Hi Etem, Have you checked the logs on both source and destination? Maybe they can hint at what the issue is. Best Regards, Strahil Nikolov Community Meeting Calendar: Schedule - Every Tues

Re: [Gluster-users] Stale file handle

2020-03-12 Thread Strahil Nikolov
Email: pha...@mit.edu >Center for Ocean Engineering Phone: (617) 253-6824 >Dept. of Mechanical EngineeringFax:(617) 253-8125 >MIT, Room 5-213http://web.mit.edu/phaley/www/ >77 Massachusetts Avenue >Cambridge, MA 02139-4301 > >

Re: [Gluster-users] geo-replication sync issue

2020-03-12 Thread Strahil Nikolov
(--> >> /lib64/libc.so.6(clone+0x6d)[0x7fc5ccfebbad] ) 0-glusterfs-fuse: >> writing to fuse device failed: No such file or directory >> [2020-03-08 11:40:54.362173] E [fuse-bridge.c:4188:fuse_xattr_cbk] >> 0-glusterfs-fuse: extended attribute not supported by

Re: [Gluster-users] Erroneous "No space left on device." messages

2020-03-10 Thread Strahil Nikolov
; Pid  : 4650 >> File System  : xfs >> Device   : /dev/sda >> Mount Options    : rw >> Inode Size   : 512 >> Disk Space Free  : 325.3GB >> Total Disk Space : 91.0TB >> Inode Count  : 692001992

Re: [Gluster-users] Erroneous "No space left on device." messages

2020-03-10 Thread Strahil Nikolov
M, Pat Haley wrote: >> >> Hi, >> >> I get the following >> >> [root@mseas-data2 bricks]# gluster  volume get data-volume all | grep > >> cluster.min-free >> cluster.min-free-disk 10% >> cluster.min-free-inodes 5% >> >> >> On 3

Re: [Gluster-users] Gluster Performance Issues

2020-03-10 Thread Strahil Nikolov
before, > >the performance is lower than the individual brick performance. Is this >a normal behavior, or what can be done to improve the single-client performance as pointed >out in this case? > > >Regards, > >Felix > > > > >On 20/02/2020 22:26, Stra

Re: [Gluster-users] Replica 2 to replica 3

2020-04-09 Thread Strahil Nikolov
xample mounted on /gluster) from which you can create 6 directories.The arbiter stores only metadata and the SSD random access performance will be the optimal approach. Something like: arbiter:/gluster/data1 arbiter:/gluster/data2 arbiter:/gluster/data3 arbiter:/gluster/data4 arbiter:/gluster/data5
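The single-SSD arbiter layout described here might look like this; host names, volume names, and brick paths are placeholders:

```shell
# One SSD mounted at /gluster on the arbiter node, one directory per volume.
mkdir -p /gluster/data{1..6}

# Example for one of the six volumes; serverA/serverB hold the data bricks.
gluster volume create data1 replica 3 arbiter 1 \
    serverA:/bricks/data1/brick \
    serverB:/bricks/data1/brick \
    arbiter:/gluster/data1
```

Since the arbiter brick stores only metadata (file names and xattrs, no data), one SSD can comfortably serve as the arbiter for several volumes this way.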

Re: [Gluster-users] gluster v6.8: systemd units disabled after install

2020-04-11 Thread Strahil Nikolov
egards, >Hubert > >On Sat, 11 Apr 2020 at 11:12, Strahil Nikolov wrote >: >> >> On April 11, 2020 8:40:47 AM GMT+03:00, Hu Bert > wrote: >> >Hi, >> > >> >so no one has seen the problem of disabled systemd units before? >>

Re: [Gluster-users] gluster v6.8: systemd units disabled after install

2020-04-11 Thread Strahil Nikolov
8 >> >> Gluster-users mailing list >> Gluster-users@gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-users >> > > > > >Community Meeting Calendar: > >Schedule - >Every 2nd and 4th Tuesday at 14:30 IST / 09:0

Re: [Gluster-users] gluster v6.8: systemd units disabled after install

2020-04-11 Thread Strahil Nikolov
known? >> >> >> Regards, >> Hubert > > > > >Community Meeting Calendar: > >Schedule - >Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC >Bridge: https://bluejeans.com/441850968 > >Gluster-users mailing list >Gluster-user

Re: [Gluster-users] [Gluster-devel] Announcing Gluster release 7.5

2020-04-20 Thread Strahil Nikolov
the data to fresh volumes and everything is working. Best Regards, Strahil Nikolov On Monday, April 20, 2020 at 17:02:46 GMT+3, Rinku Kothiya wrote: Hi, The Gluster community is pleased to announce the release of Gluster 7.5 (packages available at [1]). Release notes

Re: [Gluster-users] Working with uid/guid

2020-04-16 Thread Strahil Nikolov
our user, or to use ACLs (maybe with a find -exec). Still, you've got the option of '0777', but then security will be just a word. I think the first one is easier to implement. Best Regards, Strahil Nikolov Community Meeting Calendar: Schedule - Every 2nd and 4th Tuesday at 14:30 IS

Re: [Gluster-users] cannot remove empty directory on gluster file system

2020-03-25 Thread Strahil Nikolov
On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici wrote: >Dear All, > >some users that regularly use our gluster file system are experiencing a >strange error when attempting to remove an empty directory. >All bricks are up and running, no particular error has been detected, >but they are

Re: [Gluster-users] cannot remove empty directory on gluster file system

2020-03-25 Thread Strahil Nikolov
Take a look at Stefan Solbrig's e-mail. Best Regards, Strahil Nikolov On Wednesday, March 25, 2020 at 22:55:23 GMT+2, Mauro Tridici wrote: Hi Strahil, unfortunately, no process is holding the file or directory. Do you know if some other community user could help me? Thank you, Mauro

Re: [Gluster-users] cannot remove empty directory on gluster file system

2020-03-25 Thread Strahil Nikolov
x 2 das oclab_prod 4096 Mar 25 10:02 . drwxr-xr-x 3 das oclab_prod 4096 Mar 25 10:02 .. Any other idea related to this issue? Many thanks, Mauro > On 25 Mar 2020, at 18:32, Strahil Nikolov wrote: > > On March 25, 2020 3:32:59 PM GMT+02:00, Mauro Tridici > wrote: >> Dear All,

Re: [Gluster-users] Red Hat Bugzilla closed for GlusterFS bugs?

2020-04-03 Thread Strahil Nikolov
Community Meeting Calendar: > >Schedule - >Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC >Bridge: https://bluejeans.com/441850968 > >Gluster-users mailing list >Gluster-users@gluster.org >https://lists.gluster.org/mailman/listinfo/gluster-users Everything was moved

Re: [Gluster-users] Can't mount NFS, please help!

2020-04-01 Thread Strahil Nikolov
>> Gluster-users mailing list >> Gluster-users@gluster.org >> https://lists.gluster.org/mailman/listinfo/gluster-users > > > >Erik Jacobson >Software Engineer > >erik.jacob...@hpe.com >+1 612 851 0550 Office > >Eagan, MN >hpe.com > >

Re: [Gluster-users] not support so called “structured data”

2020-04-01 Thread Strahil Nikolov
hus a 'replica 3' volume or a 'replica 3 arbiter 1' volume should be used, and a different set of options is needed (compared to other workloads). Best Regards, Strahil Nikolov Community Meeting Calendar: Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC Bridge: https://b

Re: [Gluster-users] Upgrade from 5.13 to 7.5 full of weird messages

2020-05-05 Thread Strahil Nikolov
nch on which patch would have caused an >increase >in logs! > >-Amar > > >> >> On Sat, May 2, 2020, 12:47 AM Strahil Nikolov >> wrote: >> >>> On May 1, 2020 8:03:50 PM GMT+03:00, Artem Russakovskii < >>> archon...@gmail.com

Re: [Gluster-users] Lightweight read

2020-04-29 Thread Strahil Nikolov
opposite on the brick2 - then only metadata at the Arbiter level can show us which data is good and which has to be fixed. > >On Sat, 25 Apr 2020 at 19:41, Strahil Nikolov >wrote: > >> On April 25, 2020 9:00:30 AM GMT+03:00, David Cunningham < >> dcunning...@voisonics.com

Re: [Gluster-users] Gluster on top of xfs inode size 1024

2020-05-12 Thread Strahil Nikolov
>Gluster-users mailing list >Gluster-users@gluster.org >https://lists.gluster.org/mailman/listinfo/gluster-users Inode size 1024 is the recommended size for Gluster when used with OpenStack (Swift), so it shouldn't cause any issues. Best Regards, Strahil Nikolov Community Mee

Re: [Gluster-users] geo-replication sync issue

2020-03-18 Thread Strahil Nikolov
; Could you try disabling syncing xattrs and check ? >> >> gluster vol geo-rep :: config >sync-xattrs >> false >> >> On Fri, Mar 13, 2020 at 1:42 AM Strahil Nikolov > >> wrote: >> >>> On March 12, 2020 9:41:45 AM GMT+02:00, "Etem Bayoğlu&

Re: [Gluster-users] systemd kill mode

2020-09-03 Thread Strahil Nikolov
, Strahil Nikolov On Thursday, September 3, 2020 at 10:11:42 GMT+3, Ward Poelmans wrote: Hi Strahil, On 2/09/2020 21:30, Strahil Nikolov wrote: > you shouldn't do that, as it is intentional - glusterd is just a management > layer and you might need to restart it in order to recon

Re: [Gluster-users] Upgrade from 6.9 to 7.7 stuck (peer is rejected)

2020-09-06 Thread Strahil Nikolov
Regards, Strahil Nikolov On Sunday, August 23, 2020 at 11:05:27 GMT+3, mabi wrote: Hello, So to be precise I am exactly having the following issue: https://github.com/gluster/glusterfs/issues/1332 I could not wait any longer to find some workarounds or quick fixes so I decided

Re: [Gluster-users] performance

2020-09-06 Thread Strahil Nikolov
Seems that this time your email went to Yahoo's spam folder; I am still too lazy to get my own domain... It is good to know that they fixed your issues. Best Regards, Strahil Nikolov On Friday, September 4, 2020 at 02:00:32 GMT+3, Computerisms Corporation wrote: Hi Strahil

Re: [Gluster-users] Upgrade best practices request

2020-09-02 Thread Strahil Nikolov
the cutover time comes, you can just stop the geo-rep and reconfigure the clients to use the new cluster. Best Regards, Strahil Nikolov On Tuesday, September 1, 2020 at 22:01:49 GMT+3, Thomas Cameron wrote: Howdy, all - I have inherited a system which is running an ancient version

Re: [Gluster-users] gluster volume rebalance making things more unbalanced

2020-09-07 Thread Strahil Nikolov
Hi Pat, I checked https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#rebalancing-volumes and indeed it is possible for that to affect your performance. What is your current status? Did you run a new rebalance? Best Regards, Strahil Nikolov On Thursday, August 27

Re: [Gluster-users] gluster heal performance

2020-09-11 Thread Strahil Nikolov
to an ongoing healing operation. Also, check the logs on all nodes for any errors during the healing - maybe you have some issues that were not noticed before. Best Regards, Strahil Nikolov On Friday, September 11, 2020 at 12:13:06 GMT+3, Martin Bähr wrote: Excerpts from Gionatan Danti's

Re: [Gluster-users] systemd kill mode

2020-09-02 Thread Strahil Nikolov
ll-gluster-processes.sh are provided by the glusterfs-server package. Best Regards, Strahil Nikolov On Wednesday, September 2, 2020 at 21:59:45 GMT+3, Ward Poelmans wrote: Hi, I've been playing with glusterfs on a couple of VMs to get some feeling for it. The setup is 2 bricks with replicat

Re: [Gluster-users] systemd kill mode

2020-09-02 Thread Strahil Nikolov
And it seems gdeploy is deprecated in favour of gluster-ansible -> gluster/gluster-ansible (a core library of Gluster-specific roles and modules for Ansible/Ansible Tower). Best Regards, Strahil Nikolov On Wednesday

Re: [Gluster-users] Memory leak und very slow speed

2020-10-08 Thread Strahil Nikolov
Do you have the option to update your cluster to 8.1? Are your clients in an HCI setup (server & client on the same system)? Best Regards, Strahil Nikolov On Thursday, October 8, 2020 at 17:07:31 GMT+3, Knoth, Benjamin wrote:    Dear community, actually, I'm running

Re: [Gluster-users] Glusterfs as databse store

2020-10-13 Thread Strahil Nikolov
t-o-direct=on performance.read-ahead=off performance.io-cache=off performance.readdir-ahead=off performance.client-io-threads=on server.event-threads=4 client.event-threads=4 performance.read-after-open=yes At least it is a good starting point. Best Regards, Strahil Nikolov On Tuesday, October 13, 2020
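The option list in this snippet (the first entry is truncated; presumably `performance.strict-o-direct`, which also appears in other threads above) would be applied per volume like this; `VOL` is a placeholder name:

```shell
# Suggested starting options for a database-store volume, per the reply above.
gluster volume set VOL performance.strict-o-direct on
gluster volume set VOL performance.read-ahead off
gluster volume set VOL performance.io-cache off
gluster volume set VOL performance.readdir-ahead off
gluster volume set VOL performance.client-io-threads on
gluster volume set VOL server.event-threads 4
gluster volume set VOL client.event-threads 4
gluster volume set VOL performance.read-after-open yes
```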

Re: [Gluster-users] Memory leak und very slow speed

2020-10-13 Thread Strahil Nikolov
Thanks for sharing. Best Regards, Strahil Nikolov On Tuesday, October 13, 2020 at 18:17:23 GMT+3, Benjamin Knoth wrote: Dear all, I added the community repository to update Gluster to 8.1. This fixed my memory leak. But in my logfile I get many errors

Re: [Gluster-users] Change arbiter brick mount point

2020-10-13 Thread Strahil Nikolov
512 /dev/arbiter/disk 3. mount /dev/arbiter/disk /new/path/to/brick Next just add the brick: gluster volume add-brick VOL replica 3 arbiter 1 Arbiter:/new/path/to/brick In both cases (the reset-brick and remove/add-brick) you need to work with a fresh new brick . Best Regards, Strahil Nikolov
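Put together, the replacement-brick procedure sketched in this snippet could look like the following; the LV, mount point, and volume names are placeholders taken from the quoted text:

```shell
mkfs.xfs -f -i size=512 /dev/arbiter/disk   # XFS with 512-byte inodes, as recommended for bricks
mkdir -p /new/path/to/brick
mount /dev/arbiter/disk /new/path/to/brick

# A fresh, empty brick is required for both reset-brick and remove/add-brick:
gluster volume add-brick VOL replica 3 arbiter 1 Arbiter:/new/path/to/brick
```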

Re: [Gluster-users] Low cost, extendable, failure tolerant home cloud storage question

2020-10-04 Thread Strahil Nikolov
need to wipe the fs and 'reset-brick'. Best Regards, Strahil Nikolov Community Meeting Calendar: Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC Bridge: https://bluejeans.com/441850968 Gluster-users mailing list Gluster-users@gluster.org https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Glusterfs as databse store

2020-10-12 Thread Strahil Nikolov
be controlled easily. Best Regards, Strahil Nikolov On Monday, October 12, 2020 at 12:12:18 GMT+3, Alex K wrote: On Mon, Oct 12, 2020 at 9:47 AM Diego Zuccato wrote: > On 10/10/20 16:53, Alex K wrote: > >> Reading from the docs i see that this is not recomme

Re: [Gluster-users] Setup recommendations

2020-10-19 Thread Strahil Nikolov
ements, I would add some SSDs into the game (tier 1+ storage) and use the SSD-based LUNs for LVM caching. Best Regards, Strahil Nikolov Community Meeting Calendar: Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC Bridge: https://bluejeans.com/441850968 Gluster-users mailing list Gluster-users@gluster.org https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] New Brick Not Showing On Remote Mounts

2020-10-19 Thread Strahil Nikolov
Hi Rusty, please provide more details, like: gluster volume status VOL gluster volume info VOL and, from the client: df -h /path/to/mounted/volume P.S.: What was the command you used to add the brick and then rebalance the volume? Best Regards, Strahil Nikolov On Monday, October 19, 2020

Re: [Gluster-users] Setup recommendations

2020-10-19 Thread Strahil Nikolov
but should not go stale). Best Regards, Strahil Nikolov On Monday, October 19, 2020 at 14:40:01 GMT+3, Nico van Royen wrote: Hello, >4 cores is quite low, especially when healing. The 4 cores (and, by default, 8GB RAM) is a standard offering in our situation.  It would be up to

Re: [Gluster-users] high load when copy directory with many files

2020-10-06 Thread Strahil Nikolov
  /proc/cmdline' - tuned profile - Number of clients and TSP nodes in the cluster - Are you using it in HCI mode (where client and server are the same system)?  Best Regards, Strahil Nikolov On Monday, October 5, 2020 at 21:27:28 GMT+3, Adrian Quintero wrote: Hi, Have you tried tuned

Re: [Gluster-users] GlusterFS performance for big files...

2020-08-18 Thread Strahil Nikolov
on community edition is 64MB. Best Regards, Strahil Nikolov On August 18, 2020 at 16:47:01 GMT+03:00, Gilberto Nunes wrote: >>> What's your workload? >I have 6 KVM VMs which have Windows and Linux installed on them. > >>> Read? >>> Write? >iostat (I am usin

Re: [Gluster-users] Removing spurious hostname from peer configuration

2020-08-28 Thread Strahil Nikolov
of VMs. Best Regards, Strahil Nikolov On Friday, August 28, 2020 at 19:57:18 GMT+3, Pat Haley wrote: Hi All, We have a distributed gluster filesystem across 2 servers.  We recently realized that one of the servers (mseas-data3) has 2 hostnames for the other server (mseas

Re: [Gluster-users] performance

2020-08-21 Thread Strahil Nikolov
On August 20, 2020 at 3:46:41 GMT+03:00, Computerisms Corporation wrote: >Hi Strahil, > >so over the last two weeks, the system has been relatively stable. I >have powered off both servers at least once, for about 5 minutes each >time. The server came up, auto-healed what it needed to, so all

Re: [Gluster-users] client side profiling

2020-08-21 Thread Strahil Nikolov
>master# gluster vol profile webisms start >Profile on Volume webisms is already started It seems that it was already started. Can you stop it and check the node's load before starting it again? Best Regards, Strahil Nikolov On August 21, 2020 at 7:44:35 GMT+03:00, Computerisms Corpo

Re: [Gluster-users] gluster volume rebalance making things more unbalanced

2020-08-27 Thread Strahil Nikolov
Sadly I have no idea why rebalance did that, so you should check the logs on all nodes for clues. Is there any reason why you used "force" in that command? Best Regards, Strahil Nikolov On Thursday, August 27, 2020 at 17:32:24 GMT+3, Pat Haley wrote: Hi

Re: [Gluster-users] Wrong directory quota usage

2020-08-15 Thread Strahil Nikolov
to the 'ProjectB' volume's brick directories and remove the dirty flag. When you stat that dir ('du' or 'stat' from fuse should work), the quota should get fixed. Best Regards, Strahil Nikolov On August 14, 2020 at 14:39:49 GMT+03:00, "João Baúto" wrote: >Hi Strahil, > >I have tried

Re: [Gluster-users] Monitoring tools for GlusterFS

2020-08-15 Thread Strahil Nikolov
Best Regards, Strahil Nikolov On August 14, 2020 at 17:08:57 GMT+03:00, Gilberto Nunes wrote: >Yes! I see! >For many small files it is complicated... >Here I am generally using 2 or 3 large files (VM disk images!)... >I think there could at least be some progress bar or percentage about

Re: [Gluster-users] Geo-replication causes OOM

2020-08-15 Thread Strahil Nikolov
It might help narrow down the problem. Best Regards, Strahil Nikolov On August 14, 2020 at 20:22:16 GMT+03:00, Matthew Benstead wrote: >Hi, > >We are building a new storage system, and after geo-replication has >been >running for a few hours the server runs out of memory and oo

Re: [Gluster-users] Description of performance.cache-size

2020-09-30 Thread Strahil Nikolov
Sadly I can't help much here. Is this a hyperconverged setup (host is also a client)? Best Regards, Strahil Nikolov On Tuesday, September 29, 2020 at 18:29:20 GMT+3, Shreyansh Shah wrote: Hi All, Can anyone help me out with this? On Tue, Sep 22, 2020 at 2:59 PM Shreyansh Shah

Re: [Gluster-users] Description of performance.cache-size

2020-09-30 Thread Strahil Nikolov
At least you can bump the cluster op-version (in case you don't plan to add older clients) via: gluster volume set all cluster.op-version 50400 If it happens again, try remounting the client in order to verify that it is not a memory leak. Best Regards, Strahil Nikolov On Wednesday, September 30

Re: [Gluster-users] Description of performance.cache-size

2020-09-30 Thread Strahil Nikolov
to update it to 5.11 (if the Gluster cluster is on 5.11 or higher) and monitor it closely. Best Regards, Strahil Nikolov On Wednesday, September 30, 2020 at 18:22:33 GMT+3, Shreyansh Shah wrote: Hi Strahil, Thanks for taking the time to help me. This is not a hyperconverged setup. We

Re: [Gluster-users] Replica 3 scale out and ZFS bricks

2020-09-17 Thread Strahil Nikolov
On Thursday, September 17, 2020 at 13:16:06 GMT+3, Alexander Iliev wrote: On 9/16/20 9:53 PM, Strahil Nikolov wrote: > On Wednesday, September 16, 2020 at 11:54:57 GMT+3, Alexander Iliev > wrote: >  From what I understood, in order to be able to scale

Re: [Gluster-users] Replica 3 scale out and ZFS bricks

2020-09-19 Thread Strahil Nikolov
, so keep that in mind. You can check  https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-configuring_red_hat_storage_for_enhancing_performance for more details. Best Regards, Strahil Nikolov On Friday, 18 September 2020 at 17:01:30 GMT

Re: [Gluster-users] Upgrading from glusterfs 3.12.15 on CentOS7 to gluster-6

2020-10-02 Thread Strahil Nikolov
to upgrade without downtime to higher versions of Gluster. As far as I remember, I upgraded from that version to 5.5 some time ago, but it should also work with 6.X ... Best Regards, Strahil Nikolov On Friday, 2 October 2020 at 19:19:38 GMT+3, Pasi Kärkkäinen wrote: Hello list, I

Re: [Gluster-users] Readdirp (ls -l) Performance Improvement

2020-05-27 Thread Strahil Nikolov
Hi Rafi, I have a test oVirt 4.3.9 cluster with Gluster v7.5 on CentOS 7. Can you provide the RPMs and I will try to test. Also, please share the switch that disables this behaviour (in case something goes wrong). Best Regards, Strahil Nikolov On 27 May 2020 at 14:54:34 GMT+03:00, RAFI KC

Re: [Gluster-users] File system very slow

2020-05-27 Thread Strahil Nikolov
Also, can you provide a ping between the nodes, so we get an idea of the latency between them? I'm also interested in how much time 'du' takes directly on the bricks. Best Regards, Strahil Nikolov On 27 May 2020 at 10:27:34 GMT+03:00, Karthik Subrahmanya wrote: >Hi, > >Pleas
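The two checks asked for above can be sketched like this; the hostname and brick path are hypothetical placeholders, not values from the thread:

```shell
# Round-trip latency between the gluster nodes (run from each node
# towards its peers; sub-millisecond is typical for a healthy LAN)
ping -c 10 node2.example.com

# Time a directory walk directly on the brick, bypassing Gluster,
# to separate raw disk latency from network/translator overhead
time du -sh /data/brick1/vol1
```

Comparing `du` on the brick against `du` on the FUSE mount is a common way to tell whether slowness comes from the storage itself or from the Gluster layer.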

Re: [Gluster-users] Configuring legacy Gluster NFS

2020-05-25 Thread Strahil Nikolov
On 25 May 2020 at 5:49:00 GMT+03:00, Olivier wrote: >Strahil Nikolov writes: > >> On May 23, 2020 7:29:23 AM GMT+03:00, Olivier > wrote: >>>Hi, >>> >>>I have been struggling with NFS Ganesha: one gluster node with >ganesha >>>serving

Re: [Gluster-users] Configuring legacy Gluster NFS

2020-05-25 Thread Strahil Nikolov
I forgot to mention that you need to verify/set the VMware machines for a high-performance/low-latency workload. On 25 May 2020 at 17:13:52 GMT+03:00, Strahil Nikolov wrote: > > >On 25 May 2020 at 5:49:00 GMT+03:00, Olivier > wrote: >>Strahil Nikolov writes: >> >

Re: [Gluster-users] Readdirp (ls -l) Performance Improvement

2020-05-28 Thread Strahil Nikolov
Hey Rafi, what do you mean by volume configuration and tree structure? Best Regards, Strahil Nikolov On 27 May 2020 at 16:18:36 GMT+03:00, RAFI KC wrote: >Sure, I have back-ported the patch to release-7. Now I will see how I >can build the RPMs. > >On the other hand, if possibl

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-05-30 Thread Strahil Nikolov
move the file away from the slave, does it fix the issue? Best Regards, Strahil Nikolov On 30 May 2020 at 1:10:56 GMT+03:00, David Cunningham wrote: >Hello, > >We're having an issue with a geo-replication process with unusually >high >CPU use and giving "Entry n

Re: [Gluster-users] One error/warning message after upgrade 5.11 -> 6.8

2020-05-30 Thread Strahil Nikolov
Yesterday another oVirt user hit the issue (or a similar one) after a Gluster v6.6 to 6.8 upgrade. I guess Adrian can provide the logs, so we can check whether it is the same issue or not. Best Regards, Strahil Nikolov On 28 May 2020 at 14:15:55 GMT+03:00, Alan Orth wrote: >We upgraded from 5.10 or 5

Re: [Gluster-users] Expecting to achieve atomic read in a FUSE mount of Gluster

2020-05-30 Thread Strahil Nikolov
ncy level 3 (8+3), 12 bricks with redundancy level 4 (8+4). In your case, if 2 bricks fail the volume will remain available without any disruption. Sadly there is no way to convert a replicated volume to a dispersed one, and based on your workload a dispersed volume might not be suitable. Best Regards, Stra

Re: [Gluster-users] Faulty staus in geo-replication session of a sub-volume

2020-05-30 Thread Strahil Nikolov
Hello Naranderan, what OS are you using? Do you have SELinux in enforcing mode (verify via 'sestatus')? Best Regards, Strahil Nikolov On Saturday, 30 May 2020 at 13:33:05 GMT+3, Naranderan Ramakrishnan wrote: Dear Developers/Users, A geo-rep session of a sub-volume
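The SELinux check suggested above can be sketched as follows; the temporary switch to permissive mode and the audit-log search are standard troubleshooting steps, not instructions from the thread:

```shell
# Report whether SELinux is enabled and in enforcing or permissive mode
sestatus

# Temporarily switch to permissive mode to test whether SELinux is
# blocking the geo-replication session (revert with 'setenforce 1')
setenforce 0

# Search recent AVC denials for gluster-related entries
# (requires the audit package)
ausearch -m avc -ts recent | grep -i gluster
```

If geo-replication recovers in permissive mode, the proper fix is an SELinux policy/boolean adjustment rather than leaving enforcement disabled.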

Re: [Gluster-users] GlusterFS saturates server disk IO due to write brick temporary file to ".glusterfs" directory

2020-05-30 Thread Strahil Nikolov
this article (it's about small-file tuning, but it describes the options above): https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.1/html/administration_guide/small_file_performance_enhancements Best Regards, Strahil Nikolov On 29 May 2020 at 22:25:29 GMT+03:00, Qing Wang wrote
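The specific options the message refers to are cut off in this preview, but the small-file knobs covered by the linked Red Hat guide look roughly like the sketch below. The volume name and values are illustrative assumptions; benchmark before applying them in production:

```shell
# Typical small-file tuning options from the Red Hat Gluster Storage
# administration guide (values shown here are examples only)
gluster volume set myvol client.event-threads 4
gluster volume set myvol server.event-threads 4
gluster volume set myvol performance.cache-size 256MB
gluster volume set myvol performance.io-thread-count 32
```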

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-06-02 Thread Strahil Nikolov
looping over some data, causing the CPU hog. Sadly, I can't find instructions for increasing the log level of the geo-rep log. Best Regards, Strahil Nikolov On 2 June 2020 at 6:14:46 GMT+03:00, David Cunningham wrote: >Hi Strahil and Sunny, > >Thank you for the replies. I checked

Re: [Gluster-users] Correct way to migrate brick to new server (Gluster 6.10)

2020-09-16 Thread Strahil Nikolov
Actually, I have used 'replace-brick' several times and had no issues. I guess you could also 'remove-brick replica' for the old brick and later 'add-brick replica' for the new brick ... Best Regards, Strahil Nikolov On Wednesday, 16 September 2020 at 08:41:29 GMT+3, Alex Wakefield wrote: Hi all, We have
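The 'replace-brick' approach mentioned above can be sketched like this; the volume name, hostnames, and brick paths are hypothetical placeholders:

```shell
# Replace a brick on a replicated volume in one step; Gluster then
# self-heals the data onto the new brick from the remaining replicas
gluster volume replace-brick myvol \
    oldserver:/data/brick1/myvol \
    newserver:/data/brick1/myvol \
    commit force

# Watch the self-heal catch up on the new brick
gluster volume heal myvol info
```

On replicated volumes this is generally simpler than the remove-brick/add-brick pair, since the volume stays at full replica count during the migration except for the brick being healed.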

Re: [Gluster-users] Replica 3 scale out and ZFS bricks

2020-09-16 Thread Strahil Nikolov
reason to use ZFS? It uses a lot of memory. Best Regards, Strahil Nikolov

Re: [Gluster-users] gluster replica 3 with third less powerful machine

2020-10-21 Thread Strahil Nikolov
er will pick up the third node to be an arbiter... Is there any parameter to pass during volume creation, or just replica 1 and then gluster will analyze the hosts and pick up the less powerful one? Sorry for the newbie question... Thanks  --- Gilberto Nunes Ferreira On Wed., 21 Oct.
