Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-27 Thread Hu Bert
2018-07-27 8:52 GMT+02:00 Pranith Kumar Karampuri : > > > On Fri, Jul 27, 2018 at 11:53 AM, Hu Bert wrote: >> >> > Do you already have all the 19 directories already created? If not >> > could you find out which of the paths need it and do a stat di

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-27 Thread Hu Bert
> Do you already have all the 19 directories already created? If not could > you find out which of the paths need it and do a stat directly instead of > find? Quite probably not all of them have been created (but counting how many would take very long...). Hm, maybe running stat in a double

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-27 Thread Hu Bert
2018-07-27 9:22 GMT+02:00 Pranith Kumar Karampuri : > > > On Fri, Jul 27, 2018 at 12:36 PM, Hu Bert wrote: >> >> 2018-07-27 8:52 GMT+02:00 Pranith Kumar Karampuri : >> > >> > >> > On Fri, Jul 27, 2018 at 11:53 AM, Hu Bert >> > wrote: >

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-26 Thread Hu Bert
and reasonable to run 2 finds in parallel, maybe on different subdirectories? E.g. running one on $volume/public/ and one on $volume/private/ ? 2018-07-26 11:29 GMT+02:00 Pranith Kumar Karampuri : > > > On Thu, Jul 26, 2018 at 2:41 PM, Hu Bert wrote: >> >> > Sorry, bad
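For illustration, a minimal sketch of two crawls running in parallel, one per top-level directory; the idea of stat'ing every entry to trigger the heal check is from this thread, while the exact find options and the $volume mount point are placeholders:

  find $volume/public/ -exec stat {} \; > /dev/null &
  find $volume/private/ -exec stat {} \; > /dev/null &
  wait

Every stat forces a lookup on that entry, which is what queues it for self-heal if the copies differ.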

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-27 Thread Hu Bert
>> Btw.: i've seen in the munin stats that the disk utilization for >> bricksdd1 on the healthy gluster servers is between 70% (night) and >> almost 99% (daytime). So it looks like the basic problem is the >> disk, which doesn't seem to be able to work any faster? If so (heal) >> performance won't

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-26 Thread Hu Bert
l-window-size). I > think for your environment bumping upto MBs is better. Say 2MB i.e. > 16*128KB? > > Command to do that is: > gluster volume set cluster.data-self-heal-window-size 16 > > > On Thu, Jul 26, 2018 at 10:40 AM, Hu Bert wrote: >> >> Hi Pranith,
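For reference, the full form of that command also takes the volume name (placeholder below); each unit of the window size is 128KB, so 16 corresponds to the 2MB mentioned above:

  gluster volume set <volname> cluster.data-self-heal-window-size 16
  gluster volume get <volname> cluster.data-self-heal-window-size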

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-25 Thread Hu Bert
can do to raise performance a bit. thx in advance :-) 2018-07-24 10:40 GMT+02:00 Pranith Kumar Karampuri : > > > On Mon, Jul 23, 2018 at 4:16 PM, Hu Bert wrote: >> >> Well, over the weekend about 200GB were copied, so now there are >> ~400GB copied to the brick. That's far beyon

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-26 Thread Hu Bert
ers are interested... :-) ) 2018-07-26 10:17 GMT+02:00 Pranith Kumar Karampuri : > > > On Thu, Jul 26, 2018 at 12:59 PM, Hu Bert wrote: >> >> Hi Pranith, >> >> thanks a lot for your efforts and for tracking "my" problem with an issue. >> :-)

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-01 Thread Hu Bert
Hello :-) Just wanted to give a short report... >> It could be saturating in the day. But if enough self-heals are going on, >> even in the night it should have been close to 100%. > > Lowest utilization was 70% over night, but i'll check this > evening/weekend. Also that 'stat...' is running.

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-15 Thread Hu Bert
/gluster13_cpud7eni.png This can't be normal. 2 of the servers under heavy load and one not that much. Does anyone have an explanation of this strange behaviour? Thx :-) 2018-08-14 9:37 GMT+02:00 Hu Bert : > Hi there, > > well, it seems the heal has finally finished. Couldn't see/find any >

[Gluster-users] glusterd.log after server reboot: forced unwinding frame type / Error setting index on brick status rsp dict

2018-08-16 Thread Hu Bert
Good morning, today, after a gluster server reboot (including a brick not coming up, which happens at every reboot), i've seen these error messages in the glusterd.log file. Maybe i've copied+pasted too much, but i hope the maintainers can sort it out :-) [2018-08-16 05:22:18.818910] I [MSGID:

[Gluster-users] Previously replaced brick not coming up after reboot

2018-08-16 Thread Hu Bert
Hi there, 2 times i had to replace a brick on 2 different servers; replace went fine, heal took very long but finally finished. From time to time you have to reboot the server (kernel upgrades), and i've noticed that the replaced brick doesn't come up after the reboot. Status after reboot:

Re: [Gluster-users] Previously replaced brick not coming up after reboot

2018-08-16 Thread Hu Bert
glusterfs 3.12.12 2018-08-16 9:26 GMT+02:00 Serkan Çoban : > What is your gluster version? There was a bug in 3.10, when you reboot > a node some bricks may not come online, but it was fixed in later versions. > > On 8/16/18, Hu Bert wrote: >> Hi there, >> >> 2 times

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-16 Thread Hu Bert
Hi, well, as the situation doesn't get better, we're quite helpless and mostly in the dark, so we're thinking about hiring some professional support. Any hint? :-) 2018-08-15 11:07 GMT+02:00 Hu Bert : > Hello again :-) > > The self heal must have finished as there are no lo

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-14 Thread Hu Bert
Hi there, well, it seems the heal has finally finished. Couldn't see/find any related log message; is there such a message in a specific log file? But i see the same behaviour as when the last heal finished: all CPU cores are consumed by brick processes; not only by the formerly failed bricksdd1,

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-17 Thread Hu Bert
r >> io-threads. Please follow the documentation at >> https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Monitoring%20Workload/ >> section: " >> >> Running GlusterFS Volume Profile Command" >> >> and attach output of "gluster vo
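The documentation referenced there boils down to the profile commands below; the volume name is a placeholder, and profiling should be stopped again afterwards since it adds a little overhead:

  gluster volume profile <volname> start
  # let the normal workload run for a few minutes
  gluster volume profile <volname> info > /tmp/profile.txt
  gluster volume profile <volname> stop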

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-17 Thread Hu Bert
:30 GMT+02:00 Pranith Kumar Karampuri : > There seems to be too many lookup operations compared to any other > operations. What is the workload on the volume? > > On Fri, Aug 17, 2018 at 12:47 PM Hu Bert wrote: >> >> i hope i did get it right. >> >> gluster vo

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-21 Thread Hu Bert
>> >> >> On Mon, Aug 20, 2018 at 3:20 PM Hu Bert wrote: >>> >>> Regarding hardware the machines are identical. Intel Xeon E5-1650 v3 >>> Hexa-Core; 64 GB DDR4 ECC; Dell PERC H330 8 Port SAS/SATA 12 GBit/s >>> RAID Controller; operating system runni

[Gluster-users] Error after upgrade 3.12 -> 4.1 : Maximum supported op-version not set in destination dictionary

2018-08-21 Thread Hu Bert
Hello there, i just installed a replicate setup to test if the upgrade 3.12 -> 4.1 runs smoothly. Well, it did :-) though there is no real usage/load on the test installation, just some test files. I upgraded from 3.12.12-1 to 4.1.2-1 on a debian stretch. Besides some warnings and errors
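Once all nodes and clients are on 4.1, the cluster op-version has to be bumped by hand; a minimal sketch, assuming 40100 as the target (check the supported maximum first):

  gluster volume get all cluster.max-op-version
  gluster volume set all cluster.op-version 40100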

Re: [Gluster-users] Possibly missing two steps in upgrade to 4.1 guide

2018-08-21 Thread Hu Bert
I think point 2 is already covered by the guide; see: "Upgrade procedure for clients" Following are the steps to upgrade clients to the 4.1.x version, NOTE: x is the minor release number for the release >>> Unmount all glusterfs mount points on the client Stop all applications that access the

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-22 Thread Hu Bert
GMT+02:00 Pranith Kumar Karampuri : > > > On Tue, Aug 21, 2018 at 11:40 AM Hu Bert wrote: >> >> Good morning :-) >> >> gluster11: >> ls -l /gluster/bricksdd1/shared/.glusterfs/indices/xattrop/ >> total 0 >> -- 1 root root 0 Aug 14 06

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-20 Thread Hu Bert
> On Fri, Aug 17, 2018 at 1:49 PM Hu Bert wrote: >> >> I don't know what exactly you mean by workload, but the main >> function of the volume is storing (incl. writing, reading) images >> (from hundreds of bytes up to 30 MBs, overall ~7TB). The work is done >

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-20 Thread Hu Bert
perf record --call-graph=dwarf -p-o > then > perf report -i > > > On Mon, Aug 20, 2018 at 2:40 PM Hu Bert wrote: >> >> gluster volume heal shared info | grep -i number >> Number of entries: 0 >> Number of entries: 0 >> Number of entries: 0 >
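The perf invocation above lost its placeholders in the archive; the intended shape is roughly the following, with the brick PID and the output path as placeholders (--call-graph=dwarf only gives useful stacks if debug symbols are installed):

  perf record --call-graph=dwarf -p <pid-of-glusterfsd> -o /tmp/brick-perf.data
  # stop with Ctrl-C after a minute or two
  perf report -i /tmp/brick-perf.data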

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-27 Thread Hu Bert
red -- There are no active volume tasks Very strange. Thanks for reading if you've reached this line :-) 2018-08-23 13:58 GMT+02:00 Pranith Kumar Karampuri : > > > On Wed, Aug 22, 2018 at 12:01 PM Hu Bert wrote

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-28 Thread Hu Bert
c1 68 threads, bricksdd1 85 threads gluster12: bricksda1 65 threads, bricksdb1 60 threads, bricksdc1 61 threads, bricksdd1_new 58 threads gluster13: bricksda1 61 threads, bricksdb1 60 threads, bricksdc1 61 threads, bricksdd1_new 82 threads Don't know if that could be relevant. 2018-08-28 7:04 GMT+

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-28 Thread Hu Bert
:186:dht_layout_search] 0-shared-dht: no subvolume for hash (value) = 2841655539 [2018-08-28 07:19:55.466352] W [MSGID: 109011] [dht-layout.c:186:dht_layout_search] 0-shared-dht: no subvolume for hash (value) = 3049465001 Don't know if that could be related. 2018-08-28 8:54 GMT+02:00 Hu Bert

Re: [Gluster-users] Possibly missing two steps in upgrade to 4.1 guide

2018-08-21 Thread Hu Bert
today i tested an upgrade 3.12.12 -> 4.1.2, and the glustereventsd service was restarted. We use debian stretch; maybe it depends on the operating system? 2018-08-21 16:17 GMT+02:00 mabi : > Oops missed that part at the bottom, thanks Hu Bert! > > Now the only thing missing from the u

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-27 Thread Hu Bert
Changire : > On Thu, Aug 23, 2018 at 5:28 PM, Pranith Kumar Karampuri > wrote: >> >> On Wed, Aug 22, 2018 at 12:01 PM Hu Bert wrote: >>> >>> Just an addition: in general there are no log messages in >>> /var/log/glusterfs/ (if you don't call 'gluster volu

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-20 Thread Hu Bert
-04b1ea4458ba is this behaviour normal? I'd expect these messages on the server with the failed brick, not on the other ones. 2018-07-19 8:31 GMT+02:00 Hu Bert : > Hi there, > > sent this mail yesterday, but somehow it didn't work? Wasn't archived, > so please be indulgent if you receive thi

[Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-20 Thread Hu Bert
Hi there, sent this mail yesterday, but somehow it didn't work? Wasn't archived, so please be indulgent if you receive this mail again :-) We are currently running a replicate setup and are experiencing quite poor performance. It got even worse when within a couple of weeks 2 bricks (disks)

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-23 Thread Hu Bert
that bad? No chance of speeding this up? 2018-07-20 9:41 GMT+02:00 Hu Bert : > hmm... no one any idea? > > Additional question: the hdd on server gluster12 was changed, so far > ~220 GB were copied. On the other 2 servers i see a lot of entries in > glustershd.log, about 312.000 respe

[Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-07-19 Thread Hu Bert
Hi there, sent this mail yesterday, but somehow it didn't work? Wasn't archived, so please be indulgent if you receive this mail again :-) We are currently running a replicate setup and are experiencing quite poor performance. It got even worse when within a couple of weeks 2 bricks (disks)

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-08-31 Thread Hu Bert
gluster servers. 2018-08-28 9:24 GMT+02:00 Hu Bert : > Hm, i noticed that in the shared.log (volume log file) on gluster11 > and gluster12 (but not on gluster13) i now see these warnings: > > [2018-08-28 07:18:57.224367] W [MSGID: 109011] > [dht-layout.c:186:dht_layout_searc

Re: [Gluster-users] Gluter 3.12.12: performance during heal and in general

2018-09-19 Thread Hu Bert
is within brick) - replace glusterfs with $whatever (defeat... :-( ) thx Hubert 2018-09-03 7:55 GMT+02:00 Pranith Kumar Karampuri : > > > On Fri, Aug 31, 2018 at 1:18 PM Hu Bert wrote: >> >> Hi Pranith, >> >> i just wanted to ask if you were able to get any feedb

[Gluster-users] gluster 4.1.6 brick problems: 2 processes for one brick, performance problems

2018-12-12 Thread Hu Bert
Hello, we started with a gluster installation: 3.12.11. 3 servers (gluster11, gluster12, gluster13) and 4 bricks (each hdd == brick, JBOD behind controller) per server: bricksda1, bricksdb1, bricksdc1, bricksdd1; full information: see here: https://pastebin.com/0ndDSstG In the beginning

Re: [Gluster-users] usage of harddisks: each hdd a brick? raid?

2019-01-09 Thread Hu Bert
Hi Mike, > We have similar setup, and I do not test restoring... > How many volumes do you have - one volume on one (*3) disk 10 TB in size > - then 4 volumes? Testing could be quite easy: reset-brick start, then delete partition/fs/etc., reset-brick commit force - and then watch. We only
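For completeness, the reset-brick sequence being discussed looks roughly like this; volume name, host and brick path are placeholders, and the middle step is where the disk, filesystem and mount point get re-created:

  gluster volume reset-brick <volname> <host>:/gluster/bricksdd1 start
  # swap the disk, re-create the filesystem and the mount point
  gluster volume reset-brick <volname> <host>:/gluster/bricksdd1 <host>:/gluster/bricksdd1 commit force
  gluster volume heal <volname> info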

Re: [Gluster-users] usage of harddisks: each hdd a brick? raid?

2019-01-10 Thread Hu Bert
Hi, > > We are also using 10TB disks, heal takes 7-8 days. > > You can play with "cluster.shd-max-threads" setting. It is default 1 I > > think. I am using it with 4. > > Below you can find more info: > > https://access.redhat.com/solutions/882233 > cluster.shd-max-threads: 8 >
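A hedged example of the tuning mentioned above (volume name is a placeholder); more self-heal daemon threads speed up the heal at the cost of extra load on the bricks:

  gluster volume set <volname> cluster.shd-max-threads 4
  gluster volume get <volname> cluster.shd-max-threads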

Re: [Gluster-users] Glusterfs 4.1.6

2019-01-06 Thread Hu Bert
luster/bricksdd1 commit force thx Hubert Am Mo., 7. Jan. 2019 um 08:21 Uhr schrieb Ashish Pandey : > > comments inline > > ________ > From: "Hu Bert" > To: "Ashish Pandey" > Cc: "Gluster Users" > Sent: Monday, January 7,

[Gluster-users] usage of harddisks: each hdd a brick? raid?

2019-01-09 Thread Hu Bert
Hi @all, we have 3 servers, 4 disks (10TB) each, in a replicate 3 setup. We're having some problems after a disk failed; the restore via reset-brick takes way too long (way over a month), disk utilization is at 100%, it doesn't get any faster, some params have already been tweaked. Only about

Re: [Gluster-users] Glusterfs 4.1.6

2019-01-06 Thread Hu Bert
Hi Ashish & all others, if i may jump in... i have a little question if that's ok? replace-brick and reset-brick are different commands for 2 distinct problems? I once had a faulty disk (=brick), it got replaced (hot-swap) and received the same identifier (/dev/sdd again); i followed this guide:
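As a rule of thumb: reset-brick is for re-using the same brick path (e.g. a hot-swapped disk that comes back under the same mount), while replace-brick moves the brick to a new path or host. A minimal sketch of the latter, with placeholder names:

  gluster volume replace-brick <volname> <host>:/gluster/bricksdd1 <host>:/gluster/bricksdd1_new commit force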

Re: [Gluster-users] gluster 4.1.6 brick problems: 2 processes for one brick, performance problems

2018-12-19 Thread Hu Bert
ple operation like ls on a directory with a couple of hundred subdirs takes too long. umount+mount fixes this. but it seems the setup is too messed up to rescue. seems like we have to look for a different/reliable/suitable solution. On Wed, 12 Dec 2018 at 10:41, Hu Bert wrote: > > Hell

Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-21 Thread Hu Bert
ges. Fine :-) Best regards, Hubert Am Mi., 20. März 2019 um 09:39 Uhr schrieb Hu Bert : > > Hi, > > i updated our live systems (debian stretch) from 5.3 -> 5.5 this > morning; update went fine so far :-) > > However, on 3 (of 9) clients, the log entries still appear. The >

Re: [Gluster-users] Lots of connections on clients - appropriate values for various thread parameters

2019-03-29 Thread Hu Bert
md-cache or kernel attribute cache or nl-cache will help to cut > down lookups. > > regards, > Raghavendra > > On Mon, Mar 25, 2019 at 12:13 PM Hu Bert wrote: >> >> Hi Raghavendra, >> >> sorry, this took a while. The last weeks the weather was bad -> less &
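The caches referred to there can be enabled per volume; a minimal sketch, assuming the packaged 'metadata-cache' group file is present and that negative-lookup caching is wanted too (volume name and the lru value are placeholders):

  gluster volume set <volname> group metadata-cache
  gluster volume set <volname> network.inode-lru-limit 200000
  gluster volume set <volname> performance.nl-cache on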

Re: [Gluster-users] Lots of connections on clients - appropriate values for various thread parameters

2019-04-01 Thread Hu Bert
. Regards, Hubert Am Fr., 29. März 2019 um 07:47 Uhr schrieb Hu Bert : > > Hi Raghavendra, > > i'll try to gather the information you need, hopefully this weekend. > > One thing i've done this week: deactivate performance.quick-read > (https://bugzilla.redhat.com/show_bug.c

[Gluster-users] Lots of connections on clients - appropriate values for various thread parameters

2019-03-04 Thread Hu Bert
Good morning, we use gluster v5.3 (replicate with 3 servers, 2 volumes, raid10 as brick) with at the moment 10 clients; 3 of them do heavy I/O operations (apache tomcats, read+write of (small) images). These 3 clients have quite high I/O wait (stats from yesterday) as can be seen here: client:

Re: [Gluster-users] Lots of connections on clients - appropriate values for various thread parameters

2019-03-04 Thread Hu Bert
> > On Mon, Mar 4, 2019 at 3:39 PM Hu Bert wrote: >> >> Good morning, >> >> we use gluster v5.3 (replicate with 3 servers, 2 volumes, raid10 as >> brick) with at the moment 10 clients; 3 of them do heavy I/O >> operations (apache tomcats, read+write of (s

Re: [Gluster-users] Lots of connections on clients - appropriate values for various thread parameters

2019-03-04 Thread Hu Bert
Do you mean "gluster volume heal $volname statistics heal-count"? If yes: 0 for both volumes. Am Mo., 4. März 2019 um 16:08 Uhr schrieb Amar Tumballi Suryanarayan : > > What does self-heal pending numbers show? > > On Mon, Mar 4, 2019 at 7:52 PM Hu Bert wrote: >> &g

Re: [Gluster-users] Lots of connections on clients - appropriate values for various thread parameters

2019-03-04 Thread Hu Bert
Hubert, > > On Mon, 4 Mar 2019 at 10:56, Hu Bert wrote: >> >> Hi Raghavendra, >> >> at the moment iowait and cpu consumption is quite low, the main >> problems appear during the weekend (high traffic, especially on >> sunday), so either we have to

[Gluster-users] gluster 5.3: file or directory not read-/writeable, although it exists - cache?

2019-02-19 Thread Hu Bert
Hello @ll, one of our backend developers told me that, in the tomcat logs, he sees errors that directories on a glusterfs mount aren't readable. Within tomcat the errors look like this: 2019-02-19 07:39:27,124 WARN Path /data/repository/shared/public/staticmap/370/626 is existed but it is not

Re: [Gluster-users] gluster 5.3: file or directory not read-/writeable, although it exists - cache?

2019-02-19 Thread Hu Bert
s. Must be something on the client itself. Am Di., 19. Feb. 2019 um 10:47 Uhr schrieb Hu Bert : > > Hello @ll, > > one of our backend developers told me that, in the tomcat logs, he > sees errors that directories on a glusterfs mount aren't readable. > Within tomcat the error

Re: [Gluster-users] gluster 5.3: file or directory not read-/writeable, although it exists - cache?

2019-02-19 Thread Hu Bert
> -1 (File exists) [2019-02-19 12:26:01.065483] W [fuse-bridge.c:582:fuse_entry_cbk] 0-glusterfs-fuse: 70406931: MKDIR() /images/370/435/37043597 => -1 (File exists) The directory exists -> warning is OK, but why doesn't it appear first? On Tue, 19 Feb 2019 at 14:10, Hu Bert wrote:

Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-04 Thread Hu Bert
(gluster1, gluster2, gluster3) are getting > resolved to. > /etc/resolv.conf would tell which is the default domain searched for the node > names and the DNS servers which respond to the queries. > > > On Tue, Mar 5, 2019 at 12:14 PM Hu Bert wrote: >> >> Good morning,

Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-04 Thread Hu Bert
:/gluster/md4/workdata Number of entries: 10744 Am Di., 5. März 2019 um 08:18 Uhr schrieb Hu Bert : > > Hi Miling, > > well, there are such entries, but those haven't been a problem during > install and the last kernel update+reboot. The entries look like: > > PUBLIC_IP gluster2.
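A hedged sketch of what the /etc/hosts entries should look like so that the peer names resolve to the LAN addresses instead of the public ones (the addresses below are placeholders, not the real ones from this setup):

  10.0.0.11   gluster1
  10.0.0.12   gluster2
  10.0.0.13   gluster3

glusterd resolves the peer names at runtime, so whichever address these names resolve to is the one the daemons and bricks will use.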

[Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-04 Thread Hu Bert
Good morning, i have a replicate 3 setup with 2 volumes, running on version 5.3 on debian stretch. This morning i upgraded one server to version 5.4 and rebooted the machine; after the restart i noticed that: - no brick process is running - gluster volume status only shows the server itself:

Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-04 Thread Hu Bert
ed and non updated node, the > peers are getting rejected. > The bricks aren't coming because of the same issue. > > More about the issue: https://bugzilla.redhat.com/show_bug.cgi?id=1685120 > > On Tue, Mar 5, 2019 at 12:56 PM Hu Bert wrote: > > > > Interestingly: g

Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-05 Thread Hu Bert
t 5.4 should be void of this upgrade > issue. > > In the meantime, you can use 5.3 for this cluster. > Downgrading to 5.3 will work if it was just one node that was upgraded to 5.4 > and the other nodes are still on 5.3. > > On Tue, Mar 5, 2019 at 1:07 PM Hu Bert wrote: > >

Re: [Gluster-users] ganesha-gfapi

2019-03-14 Thread Hu Bert
; a developer told me that a fix will find its way into a 5.x update. Am Mi., 13. März 2019 um 16:34 Uhr schrieb Valerio Luccio : > > On 3/13/19 11:06 AM, Hu Bert wrote: > > > Hi Valerio, > > > > is an already known "behaviour" and maybe a bug: > > https

Re: [Gluster-users] ganesha-gfapi

2019-03-13 Thread Hu Bert
Hi Valerio, this is an already known "behaviour" and maybe a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1674225 Regards, Hubert On Wed, 13 Mar 2019 at 15:43, Valerio Luccio wrote: > > Hi all, > > I recently mounting my gluster from another server using NFS. I started > ganesha and my

Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-20 Thread Hu Bert
> > Sincerely, > Artem > > -- > Founder, Android Police, APK Mirror, Illogical Robot LLC > beerpla.net | +ArtemRussakovskii | @ArtemR > > > On Mon, Mar 18, 2019 at 5:41 AM Hu Bert wrote: >> >> Hi Amar, >> >> if you refer to this bug: >> https

Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-18 Thread Hu Bert
files > > seem to be replicating correctly as well. > > > > So what's actually affected - just the status > > command, or leaving 5.4 on one of the nodes is doing > > some damage to the und

Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-18 Thread Hu Bert
update: upgrade from 5.3 -> 5.5 in a replicate 3 test setup with 2 volumes done. In 'gluster peer status' the peers stay connected during the upgrade, no 'peer rejected' messages. No cksum mismatches in the logs. Looks good :-) Am Mo., 18. März 2019 um 09:54 Uhr schrieb Hu Bert : > > Goo

Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-18 Thread Hu Bert
sday or wednesday. Maybe other users can do an update to 5.4 as well and report back here. Hubert Am Mo., 18. März 2019 um 11:36 Uhr schrieb Amar Tumballi Suryanarayan : > > Hi Hu Bert, > > Appreciate the feedback. Also are the other boiling issues related to logs > fixed now? > &

Re: [Gluster-users] Disabling read-ahead and io-cache for native fuse mounts

2019-02-12 Thread Hu Bert
fyi: we have 3 servers, each with 2 SW RAID10 used as bricks in a replicate 3 setup (so 2 volumes); the default values set by OS (debian stretch) are: /dev/md3 Array Size : 29298911232 (27941.62 GiB 30002.09 GB) /sys/block/md3/queue/read_ahead_kb : 3027 /dev/md4 Array Size : 19532607488
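For reference, the kernel-side read-ahead can be checked and lowered per device, and the corresponding gluster translators can be switched off per volume; the value 128 and the volume name are placeholders:

  cat /sys/block/md3/queue/read_ahead_kb
  echo 128 > /sys/block/md3/queue/read_ahead_kb
  gluster volume set <volname> performance.read-ahead off
  gluster volume set <volname> performance.io-cache off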

Re: [Gluster-users] usage of harddisks: each hdd a brick? raid?

2019-02-06 Thread Hu Bert
er. No information about RAID5/6 possible, wasn't taken into consideration... just my 2 €cents from (still) a gluster amateur :-) Best regards, Hubert Am Di., 22. Jan. 2019 um 07:11 Uhr schrieb Amar Tumballi Suryanarayan : > > > > On Thu, Jan 10, 2019 at 1:56 PM Hu Bert wrote: >> &

Re: [Gluster-users] gluster 5.3: transport endpoint gets disconnected - Assertion failed: GF_MEM_TRAILER_MAGIC

2019-02-06 Thread Hu Bert
Balachandran : > > Hi, > > The client logs indicate that the mount process has crashed. > Please try mounting the volume with the volume option lru-limit=0 and see if > it still crashes. > > Thanks, > Nithya > > On Thu, 24 Jan 2019 at 12:47, Hu Bert wrote: >> >
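A hedged example of what that mount looks like (server and volume names are placeholders); lru-limit=0 switches the fuse client's inode-table limit off again, which is suggested here purely to narrow down the crash:

  mount -t glusterfs -o lru-limit=0 gluster1:/workdata /mnt/workdata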

Re: [Gluster-users] gluster 5.3: file or directory not read-/writeable, although it exists - cache?

2019-02-20 Thread Hu Bert
shows up. First i thought that might be some caching problem, but that seems not quite probable with a directory that's 7 days old. Regards, Hubert Am Mi., 20. Feb. 2019 um 06:12 Uhr schrieb Nithya Balachandran : > > > > On Tue, 19 Feb 2019 at 15:18, Hu Bert wrote: >> &g

[Gluster-users] gluster 5.3: transport endpoint gets disconnected - Assertion failed: GF_MEM_TRAILER_MAGIC

2019-01-23 Thread Hu Bert
Good morning, we currently transfer some data to a new glusterfs volume; to check the throughput of the new volume/setup while the transfer is running i decided to create some files on one of the gluster servers with dd in loop: while true; do dd if=/dev/urandom of=/shared/private/1G.file bs=1M
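The loop got cut off by the archive; it was presumably something along these lines (the count and the rm are assumptions, only the dd part is from the original mail):

  while true; do
    dd if=/dev/urandom of=/shared/private/1G.file bs=1M count=1024
    rm -f /shared/private/1G.file
  done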

Re: [Gluster-users] Upgrade 5.5 -> 5.6: network traffic bug fixed?

2019-04-15 Thread Hu Bert
Hu Bert : > > Good Morning, > > today i updated my replica 3 setup (debian stretch) from version 5.5 > to 5.6, as i thought the network traffic bug (#1673058) was fixed and > i could re-activate 'performance.quick-read' again. See release notes: > > https://review.gluster.o

[Gluster-users] Upgrade 5.5 -> 5.6: network traffic bug fixed?

2019-04-15 Thread Hu Bert
Good Morning, today i updated my replica 3 setup (debian stretch) from version 5.5 to 5.6, as i thought the network traffic bug (#1673058) was fixed and i could re-activate 'performance.quick-read' again. See release notes: https://review.gluster.org/#/c/glusterfs/+/22538/
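The option in question is toggled per volume; a minimal sketch with the volume name as a placeholder:

  gluster volume set <volname> performance.quick-read on
  gluster volume set <volname> performance.quick-read off

According to the follow-ups in this thread, outgoing traffic on the live system exploded again after re-enabling it, so it was switched back off.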

Re: [Gluster-users] Upgrade 5.5 -> 5.6: network traffic bug fixed?

2019-04-16 Thread Hu Bert
t test. Thx, Hubert Am Di., 16. Apr. 2019 um 09:43 Uhr schrieb Hu Bert : > > In my first test on my testing setup the traffic was on a normal > level, so i thought i was "safe". But on my live system the network > traffic was a multiple of the traffic one would expe

Re: [Gluster-users] Advice for setup: SW RAID 6 vs JBOD

2019-06-06 Thread Hu Bert
actly the same as > files, but let's use that for a rough estimate), for an average file > size of about 539 KB per file. > > Thanks a lot for your time and insights! > > On 6/6/19 8:53, Hu Bert wrote: > > Good morning, > > > > my comment won't help you directl

Re: [Gluster-users] Advice for setup: SW RAID 6 vs JBOD

2019-06-06 Thread Hu Bert
Good morning, my comment won't help you directly, but i thought i'd send it anyway... Our first glusterfs setup had 3 servers with 4 disks=bricks (10TB, JBOD) each. Was running fine in the beginning, but then 1 disk failed. The following heal took ~1 month, with a bad performance (quite high

[Gluster-users] gluster 5.6: Gfid mismatch detected

2019-05-22 Thread Hu Bert
Hi @ll, today i updated and rebooted the 3 servers of my replicate 3 setup; after the 3rd one came up again i noticed this error: [2019-05-22 06:41:26.781165] E [MSGID: 108008] [afr-self-heal-common.c:392:afr_gfid_split_brain_source] 0-workdata-replicate-0: Gfid mismatch detected for
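For reference, the usual way to inspect and resolve such a gfid split-brain from the CLI; the volume name matches the log above, while the entry path and the chosen policy (latest-mtime rather than bigger-file or source-brick) are placeholders:

  gluster volume heal workdata info split-brain
  gluster volume heal workdata split-brain latest-mtime <path-of-the-affected-entry>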

Re: [Gluster-users] gluster 5.6: Gfid mismatch detected

2019-05-22 Thread Hu Bert
orkdata statistics heal-count" there are 0 entries left. Files/directories are there. Happened the first time with this setup, but everything ok now. Thx for your fast help :-) Hubert Am Mi., 22. Mai 2019 um 09:32 Uhr schrieb Ravishankar N : > > > On 22/05/19 12:39 PM, Hu Bert wrote:

Re: [Gluster-users] Upgrade 5.5 -> 5.6: network traffic bug fixed?

2019-04-29 Thread Hu Bert
have already restarted the client processes, then there must be > something related to workload in the live system that is triggering a bug in > quick-read. Would need wireshark capture if possible, to debug further. > > Regards, > Poornima > > On Tue, Apr 16, 2019 at 6:25 PM
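A hedged sketch of the capture Poornima asks for, assuming the client reaches the servers via eth0 and that the bricks listen on the usual 49152+ range (interface, server name and port range are assumptions):

  tcpdump -i eth0 -s 0 -w /tmp/gluster-quickread.pcap host gluster1 and \( port 24007 or portrange 49152-49251 \)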

Re: [Gluster-users] Upgrade 5.5 -> 5.6: network traffic bug fixed?

2019-04-16 Thread Hu Bert
ting this. I had done testing on my local setup and the > issue was resolved even with quick-read enabled. Let me test it again. > > Regards, > Poornima > > On Mon, Apr 15, 2019 at 12:25 PM Hu Bert wrote: >> >> fyi: after setting performance.quick-read to off netwo

Re: [Gluster-users] Announcing Gluster release 5.10

2019-10-20 Thread Hu Bert
Good morning, i just wanted to check for version 5.10 for debian stretch - but it doesn't seem to be available. https://download.gluster.org/pub/gluster/glusterfs/5/LATEST/Debian/stretch/amd64/apt/pool/main/g/glusterfs/ -> only version 5.9

Re: [Gluster-users] Announcing Gluster release 5.10

2019-10-21 Thread Hu Bert
rya : > > Hi Hu Bert, > > Thanks for informing about the issue. > Now, you can find correct packages at > https://download.gluster.org/pub/gluster/glusterfs/5/LATEST/Debian/stretch/arm64/apt/pool/main/g/glusterfs/ > > Regards, > > Shwetha > > On Mon, Oct 21, 2019

Re: [Gluster-users] Announcing Gluster release 5.10

2019-10-21 Thread Hu Bert
a Acharya : > > Hu Bert, > > Find my reply inline. > > Regards, > Shwetha > On Mon, Oct 21, 2019 at 1:22 PM Hu Bert wrote: >> >> Hi Shwetha, >> >> thx, now there are the 5.10 packages. But maybe I should've been more >> precise: >>

Re: [Gluster-users] Disk use with GlusterFS

2020-03-05 Thread Hu Bert
Hi, just a guess and easy to test/try: inodes? df -i? regards, Hubert Am Fr., 6. März 2020 um 04:42 Uhr schrieb David Cunningham : > > Hi Aravinda, > > That's what was reporting 54% used, at the same time that GlusterFS was > giving no space left on device errors. It's a bit worrying that

Re: [Gluster-users] Disk use with GlusterFS

2020-03-06 Thread Hu Bert
., 6. März 2020 um 09:20 Uhr schrieb David Cunningham : > > Hi Hu. > > Just to clarify, what should we be looking for with "df -i"? > > > On Fri, 6 Mar 2020 at 18:51, Hu Bert wrote: >> >> Hi, >> >> just a guess and easy to test/try: inodes? df -i?
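To make the suggestion concrete: run df -i against the brick filesystem (path below is a placeholder) and look at the IUse% column; if it sits at or near 100% while df -h still shows free space, the filesystem has run out of inodes, which produces exactly these 'no space left on device' errors:

  df -i /gluster/bricksdd1
  df -h /gluster/bricksdd1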

Re: [Gluster-users] No possible to mount a gluster volume via /etc/fstab?

2020-01-23 Thread Hu Bert
Hi Sherry, maybe name resolution is not working yet at the time the mount from /etc/fstab is attempted? In your case i'd try to place proper entries in /etc/hosts and test it with a reboot. regards Hubert On Fri, 24 Jan 2020 at 02:37, Sherry Reese wrote: > > Hello everyone,
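A hedged example of an fstab line that usually avoids this boot-ordering problem (server, volume and mount point are placeholders); _netdev delays the mount until the network is up, and x-systemd.automount defers it further to first access:

  gluster1:/workdata  /mnt/workdata  glusterfs  defaults,_netdev,x-systemd.automount  0 0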

[Gluster-users] recommendation: gluster version upgrade and/or OS dist-upgrade

2020-02-18 Thread Hu Bert
Hello, i currently have a replicate 3 setup, gluster version 5.11 and debian stretch. In the next weeks i want to migrate to gluster version 6.x and upgrade the OS to debian buster. So... any recommendation of what to do first? First upgrade gluster or the operating system? Thx, Hubert

Re: [Gluster-users] recommendation: gluster version upgrade and/or OS dist-upgrade

2020-02-18 Thread Hu Bert
/dpkg returned an error code (1) so i first did the dist-upgrade; but the buster-glusterfs-packages didn't want to install again, so i had to remove the "old" glusterfs packages and install the buster packages. Strange. Am Di., 18. Feb. 2020 um 08:51 Uhr schrieb Hu Bert : > > Hello,

Re: [Gluster-users] recommendation: gluster version upgrade and/or OS dist-upgrade

2020-02-20 Thread Hu Bert
me it looks like it isn't important which one you do first. Regards, Hubert On Wed, 19 Feb 2020 at 09:50, Michael Böhm wrote: > > > On Tue, 18 Feb 2020 at 08:51, Hu Bert wrote: >> >> Hello, >> >> i currently have a replicate 3 setup, gluster versio

Re: [Gluster-users] To RAID or not to RAID...

2020-01-14 Thread Hu Bert
Hi, our old setup is not really comparable, but i thought i'd drop some lines... we once had a Distributed-Replicate setup with 4 x 3 = 12 disks (10 TB hdd). Simple JBOD, every disk == brick. Was running pretty good, until one of the disks died. The restore (reset-brick) took about a month,

Re: [Gluster-users] gluster v6.8: systemd units disabled after install

2020-04-11 Thread Hu Bert
> > >I guess, to really know the reasoning, the respective package > >maintainers would need to jump in and share their idea behind this > >decision. > > > >Best regards, > >-- > >alexander iliev > > > >On 4/11/20 7:40 AM, Hu Bert wrote: > >

Re: [Gluster-users] gluster 6.8: brick logs flooded by Information messages

2020-04-11 Thread Hu Bert
as i wrote in the opening post: the previous version was 5.11. Best regards, Hubert Am Sa., 11. Apr. 2020 um 13:25 Uhr schrieb Strahil Nikolov : > > On April 11, 2020 2:06:22 PM GMT+03:00, Hu Bert > wrote: > >Hi Strahil, > > > >looking into the mount logs i think

Re: [Gluster-users] gluster v6.8: systemd units disabled after install

2020-04-11 Thread Hu Bert
Hi Strahil, hmm... i still don't think it has anything to do with the mounts not being ready. See other mail :-) Best regards, Hubert On Sat, 11 Apr 2020 at 13:22, Strahil Nikolov wrote: > > On April 11, 2020 1:41:55 PM GMT+03:00, Hu Bert > wrote: > >Hi Strahil, >

Re: [Gluster-users] gluster 6.8: brick logs flooded by Information messages

2020-04-11 Thread Hu Bert
mes between [2020-04-11 11:01:21.791598] and [2020-04-11 11:03:21.444357] Best regards, Hubert Am Sa., 11. Apr. 2020 um 11:11 Uhr schrieb Strahil Nikolov : > > On April 11, 2020 8:35:41 AM GMT+03:00, Hu Bert > wrote: > >Hi, > > > >this week i upgraded from 5.11 to 6.8.

Re: [Gluster-users] gluster v6.8: systemd units disabled after install

2020-04-11 Thread Hu Bert
April 11, 2020 8:40:47 AM GMT+03:00, Hu Bert > wrote: > >Hi, > > > >so no one has seen the problem of disabled systemd units before? > > > >Regards, > >Hubert > > > >Am Mo., 6. Apr. 2020 um 12:30 Uhr schrieb Hu Bert > >: > >> >

Re: [Gluster-users] gluster 6.8: brick logs flooded by Information messages

2020-04-21 Thread Hu Bert
sooner or later for every file/dir the ctime-mdata is set, it should be a matter of time, right? Best regards, Hubert Am Sa., 11. Apr. 2020 um 15:38 Uhr schrieb Hu Bert : > > as i wrote in the opening post: the previous version was 5.11. > > Best regards, > Hubert > > Am Sa.

[Gluster-users] gluster 6.8: brick logs flooded by Information messages

2020-04-10 Thread Hu Bert
Hi, this week i upgraded from 5.11 to 6.8. Since then the brick logs have been flooded with messages like this: [2020-04-11 05:22:48.774688] I [MSGID: 139001] [posix-acl.c:263:posix_acl_log_permit_denied] 0-workdata-access-control: client:

Re: [Gluster-users] gluster v6.8: systemd units disabled after install

2020-04-10 Thread Hu Bert
Hi, so no one has seen the problem of disabled systemd units before? Regards, Hubert Am Mo., 6. Apr. 2020 um 12:30 Uhr schrieb Hu Bert : > > Hello, > > after a server reboot (with a fresh gluster 6.8 install) i noticed > that the gluster services weren't running. >
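For the record, a quick way to check and re-enable the units on debian (unit names as shipped by the gluster packages; glustereventsd only if it is actually used):

  systemctl is-enabled glusterd glustereventsd
  systemctl enable --now glusterd
  systemctl enable --now glustereventsd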

Re: [Gluster-users] One error/warning message after upgrade 5.11 -> 6.8

2020-04-10 Thread Hu Bert
Hi, no one has seen such messages? Regards, Hubert Am Mo., 6. Apr. 2020 um 06:13 Uhr schrieb Hu Bert : > > Hello, > > i just upgraded my servers and clients from 5.11 to 6.8; besides one > connection problem to the gluster download server everything went > fine. > > On

Re: [Gluster-users] Gluster 6.8 & debian

2020-04-01 Thread Hu Bert
building instead >> of doing it manually. It seems there is a bug in the script such that all >> lib packages are excluded. >> Thanks for trying and pointing it out. We are working to resolve this. I >> will update the package once build is complete. >> >> Regards,

[Gluster-users] One error/warning message after upgrade 5.11 -> 6.8

2020-04-05 Thread Hu Bert
Hello, i just upgraded my servers and clients from 5.11 to 6.8; besides one connection problem to the gluster download server everything went fine. On the 3 gluster servers i mount the 2 volumes as well, and only there (and not on all the other clients) there are some messages in the log file of

Re: [Gluster-users] Repository down ?

2020-04-05 Thread Hu Bert
schrieb Hu Bert : > > ok, half an hour later it worked. Not funny during an upgrade. Strange... :-) > > > Regards, > Hubert > > Am Fr., 3. Apr. 2020 um 10:19 Uhr schrieb Hu Bert : > > > > Hi, > > > > i'm currently preparing an upgrade 5.x -> 6.8; the

[Gluster-users] Gluster 6.8 & debian

2020-03-26 Thread Hu Bert
Hello, i just wanted to test an upgrade from version 5.12 to version 6.8, but there are no packages for debian buster in version 6.8. https://download.gluster.org/pub/gluster/glusterfs/6/6.8/Debian/buster/amd64/apt/ This directory is empty. LATEST still links to version 6.7

[Gluster-users] Gluster 6.8: some error messages during op-version-update

2020-04-01 Thread Hu Bert
Hi, i just upgraded a test cluster from version 5.12 to 6.8; that went fine, but iirc after setting the new op-version i saw some error messages: 3 servers: becquerel, dirac, tesla 2 volumes: workload, mounted on /shared/public persistent, mounted on /shared/private server becquerel, volume
