[Gluster-users] writing to fuse device yielded ENOENT

2023-04-18 Thread David Cunningham
in heal pending: 0 Number of entries in split-brain: 0 Number of entries possibly healing: 0 Thank you, -- David Cunningham, Voisonics Limited http://voisonics.com/ USA: +1 213 221 1092 New Zealand: +64 (0)28 2558 3782 Community Meeting Calendar: Schedule - Every 2nd and 4th Tuesday at
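
Note: the figures quoted above come from the heal-info summary command. A minimal sketch, assuming the volume is named gvol0 as elsewhere in these threads:

    gluster volume heal gvol0 info summary

A healthy replica reports 0 for entries in heal pending, in split-brain and possibly healing on every brick.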

Re: [Gluster-users] Big problems after update to 9.6

2023-02-24 Thread David Cunningham
, > Anant > ------ > *From:* Gluster-users on behalf of > David Cunningham > *Sent:* 23 February 2023 9:56 PM > *To:* gluster-users > *Subject:* Re: [Gluster-users] Big problems after update to 9.6 > > > *EXTERNAL: Do not click links or open att

Re: [Gluster-users] Big problems after update to 9.6

2023-02-23 Thread David Cunningham
Is it possible that version 9.1 and 9.6 can't talk to each other? My understanding was that they should be able to. On Fri, 24 Feb 2023 at 10:36, David Cunningham wrote: > We've tried to remove "sg" from the cluster so we can re-install the > GlusterFS node on it, but

Re: [Gluster-users] Big problems after update to 9.6

2023-02-23 Thread David Cunningham
r" to just remove "sg" without trying to contact it? On Fri, 24 Feb 2023 at 10:31, David Cunningham wrote: > Hello, > > We have a cluster with two nodes, "sg" and "br", which were running > GlusterFS 9.1, installed via the Ubuntu package manager. We up

[Gluster-users] Big problems after update to 9.6

2023-02-23 Thread David Cunningham
CTX_ID:46b23c19-5114-4a20-9306-9ea6faf02d51-GRAPH_ID:0-PID:35568-HOST:br.m5voip.com-PC_NAME:gvol0-client-0-RECON_NO:-0 Thanks for your help, -- David Cunningham, Voisonics Limited http://voisonics.com/ USA: +1 213 221 1092 New Zealand: +64 (0)28 2558 3782 Community Meeting Calendar:

Re: [Gluster-users] Conflict resolution

2021-10-20 Thread David Cunningham
Hi Strahil and Ravi, Thank you very much for your replies, that makes sense. On Thu, 21 Oct 2021 at 03:25, Ravishankar N wrote: > Hi David, > > On Wed, Oct 20, 2021 at 6:23 AM David Cunningham < > dcunning...@voisonics.com> wrote: > >> Hello, >> >&

[Gluster-users] Conflict resolution

2021-10-19 Thread David Cunningham
t state? 4. Is the outcome of conflict resolution at a file level the same whether node3 is a full replica or just an arbiter? Thank you very much for any advice, -- David Cunningham, Voisonics Limited http://voisonics.com/ USA: +1 213 221 1092 New Zealand: +64 (0)28 2558 3782 Co

Re: [Gluster-users] Brick offline problem

2021-08-29 Thread David Cunningham
> Strahil Nikolov > > Sent from Yahoo Mail on Android > <https://go.onelink.me/107872968?pid=InProduct&c=Global_Internal_YGrowth_AndroidEmailSig__AndroidUsers&af_wl=ym&af_sub1=Internal&af_sub2=Global_YGrowth&af_sub3=EmailSignature> > > On Sat, Aug 28, 2021

Re: [Gluster-users] Brick offline problem

2021-08-27 Thread David Cunningham
kolov > > Sent from Yahoo Mail on Android > <https://go.onelink.me/107872968?pid=InProduct&c=Global_Internal_YGrowth_AndroidEmailSig__AndroidUsers&af_wl=ym&af_sub1=Internal&af_sub2=Global_YGrowth&af_sub3=EmailSignature> > > On Fri, Aug 27, 2021 at 7:01, Davi

Re: [Gluster-users] Brick offline problem

2021-08-26 Thread David Cunningham
ment}] > [2021-08-25 20:10:44.803984 +] E [MSGID: 114031] > [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk] 0-gvol0-client-1: remote > operation failed. [{path=(null)}, {errno=22}, {error=Invalid argument}] > [2021-08-25 20:20:45.132601 +] E [MSGID: 114031] > [client-rpc-fops

[Gluster-users] Brick offline problem

2021-08-25 Thread David Cunningham
+] E [MSGID: 114031] [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk] 0-gvol0-client-0: remote operation failed. [{path=(null)}, {errno=22}, {error=Invalid argument}] ... repeated... -- David Cunningham, Voisonics Limited http://voisonics.com/ USA: +1 213 221 1092 New Zealand: +64 (0)28 2558 3

Re: [Gluster-users] [EXT] Permission denied closing file when accessing GlusterFS via NFS

2021-08-12 Thread David Cunningham
for a newer > version than yours. > > > > You can check if gluster issue #876 matches your case. > > > > Which version of gluster are you using ? > > > > Best Regards, > > Strahil Nikolov > > > > > > On Thu, Aug 12, 2021 at 6:34, Davi

Re: [Gluster-users] Permission denied closing file when accessing GlusterFS via NFS

2021-08-11 Thread David Cunningham
les are written correctly, with the full file data, and there's no error. So it appears that the problem does not occur if the destination file exists. Does that give anyone a clue as to what's happening? Thanks. On Thu, 12 Aug 2021 at 13:50, David Cunningham wrote: > Hi, > &

Re: [Gluster-users] Permission denied closing file when accessing GlusterFS via NFS

2021-08-11 Thread David Cunningham
rreira wrote: > You need to put those options in the NFS server options, generally in > /etc/exports > --- > Gilberto Nunes Ferreira > (47) 99676-7530 - Whatsapp / Telegram > > > > > > > On Tue, 10 Aug 2021 at 18:24, David Cunningham < > dcunnin
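
Note: for a kernel NFS server the export options live in /etc/exports. A minimal sketch, assuming the Gluster volume is mounted at /mnt/gvol0 on the NFS server and re-exported to a single client address (path and address are illustrative):

    /mnt/gvol0  192.0.2.10(rw,sync,no_subtree_check,no_root_squash)

followed by 'exportfs -ra' to re-read the file without restarting the NFS service.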

Re: [Gluster-users] Permission denied closing file when accessing GlusterFS via NFS

2021-08-11 Thread David Cunningham
USE, right? > > Best Regards, > Strahil Nikolov > > On Wed, Aug 11, 2021 at 0:24, David Cunningham > wrote: > Hi Strahil and Gilberto, > > Thanks very much for your replies. SELinux is disabled on the NFS server > (and the client too), and both have the same UID and GID

Re: [Gluster-users] Permission denied closing file when accessing GlusterFS via NFS

2021-08-10 Thread David Cunningham
Hey David, >> >> can you give the volume info? >> >> Also, I assume SELINUX is in permissive/disabled state. >> >> What about the uid of the user on the nfs client and the nfs server? Is >> it the same? >> >> Best Regards, >> Strahil Nik

Re: [Gluster-users] Read from fastest node only

2021-08-10 Thread David Cunningham
2021 at 22:32, Ravishankar N wrote: > > > On Tue, Aug 10, 2021 at 3:23 PM David Cunningham < > dcunning...@voisonics.com> wrote: > >> Thanks Ravi, so if I understand correctly latency to all the nodes >> remains an issue on all file reads. >> >> > Hi

Re: [Gluster-users] Read from fastest node only

2021-08-10 Thread David Cunningham
Thanks Ravi, so if I understand correctly latency to all the nodes remains an issue on all file reads. On Tue, 10 Aug 2021 at 16:49, Ravishankar N wrote: > > > On Tue, Aug 10, 2021 at 8:07 AM David Cunningham < > dcunning...@voisonics.com> wrote: > >> Hi Gionatan,

[Gluster-users] Permission denied closing file when accessing GlusterFS via NFS

2021-08-09 Thread David Cunningham
960f15), client: CTX_ID:8f69363a-f0f4-44e1-84e9-69dfa77a8164-GRAPH_ID:0-PID:2657-HOST:gfs1.company.com-PC_NAME:gvol0-client-0-RECON_NO:-0, error-xlator: gvol0-access-control [Permission denied] -- David Cunningham, Voisonics Limited http://voisonics.com/ USA: +1 213 221 1092 New Zealand: +64

Re: [Gluster-users] Read from fastest node only

2021-08-09 Thread David Cunningham
e-local all reads which can really be local (ie: the > requested file is available) should not suffer from remote party > latency. > Is that correct? > > Thanks. > > -- > Danti Gionatan > Supporto Tecnico > Assyoma S.r.l. - www.assyoma.it > email: g.da...@assyoma.it -
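
Note: the choose-local behaviour discussed here is controlled by a volume option. A minimal sketch of enabling it, assuming a volume named gvol0:

    gluster volume set gvol0 cluster.choose-local on

With it on, reads are served from a local brick when it holds a good copy; lookups still go to all bricks so the client can identify the good copies.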

Re: [Gluster-users] Read from fastest node only

2021-08-04 Thread David Cunningham
u as long as both data bricks > are running. > > Keep in mind that thin arbiter is less used. For example, I have never > deployed a thin arbiter. > > Best Regards, > Strahil Nikolov > > On Tue, Aug 3, 2021 at 7:40, David Cunningham > wrote: > Hi Strahil, > &g

Re: [Gluster-users] Read from fastest node only

2021-08-02 Thread David Cunningham
only 1 > node is 'remote' , then you can give a try to gluster's thin arbiter (for > the 'remote' node). > > > Best Regards, > Strahil Nikolov > > On Mon, Aug 2, 2021 at 5:02, David Cunningham > wrote: > Hi Ravi and Strahil, > > Thanks aga
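
Note: a thin arbiter is declared when the volume is created. A minimal sketch with hypothetical host names, assuming a release that ships thin-arbiter support:

    gluster volume create gvol0 replica 2 thin-arbiter 1 \
        node1:/data/brick node2:/data/brick ta-host:/data/ta

Unlike a full arbiter, the thin-arbiter brick keeps no per-file metadata, so its latency only matters when one of the data bricks is unavailable.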

Re: [Gluster-users] Read from fastest node only

2021-08-01 Thread David Cunningham
> > > Community Meeting Calendar: > > Schedule - > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC > Bridge: https://meet.google.com/cpu-eiue-hvk > Gluster-users mailing list > Gluster-users@gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-users > >

Re: [Gluster-users] Read from fastest node only

2021-07-29 Thread David Cunningham
to stay! > Regards. > > [1] > https://lists.gluster.org/pipermail/gluster-users/2015-June/022288.html > > > -- > Danti Gionatan > Supporto Tecnico > Assyoma S.r.l. - www.assyoma.it > email: g.da...@assyoma.it - i...@assyoma.it > GPG public key ID: FF5F32A8 >

Re: [Gluster-users] Read from fastest node only

2021-07-27 Thread David Cunningham
ick having the least outstanding read requests. 4 = brick having the least network ping latency. Thanks again. On Tue, 27 Jul 2021 at 19:16, Yaniv Kaul wrote: > > > On Tue, Jul 27, 2021 at 9:50 AM David Cunningham < > dcunning...@voisonics.com> wrote: > >> Hello, >>
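
Note: the values listed above belong to the cluster.read-hash-mode option. A minimal sketch, assuming a volume named gvol0 and a release that supports value 4:

    gluster volume set gvol0 cluster.read-hash-mode 4
    gluster volume get gvol0 cluster.read-hash-mode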

[Gluster-users] Read from fastest node only

2021-07-26 Thread David Cunningham
up the read by simply reading the file from the fastest node. This would be especially beneficial if some of the other nodes have higher latency from the client. Is it possible to do this? Thanks in advance for any assistance. -- David Cunningham, Voisonics Limited http://voisonics.com/ USA: +1

Re: [Gluster-users] Geo-replication adding new master node

2021-06-09 Thread David Cunningham
lication "stop" and then "start" and are pleased to see the two new master nodes are now in "Passive" status. Thank you for your help! On Tue, 1 Jun 2021 at 10:06, David Cunningham wrote: > Hi Aravinda, > > Thank you very much - we will give that a try. >

Re: [Gluster-users] Geo-replication adding new master node

2021-05-31 Thread David Cunningham
Hi Aravinda, Thank you very much - we will give that a try. On Mon, 31 May 2021 at 20:29, Aravinda VK wrote: > Hi David, > > On 31-May-2021, at 10:37 AM, David Cunningham > wrote: > > Hello, > > We have a GlusterFS configuration with mirrored nodes on the master s

[Gluster-users] Geo-replication adding new master node

2021-05-30 Thread David Cunningham
Ideally we would normally have something like: master A -> secondary A master B -> secondary B master C -> secondary C so that any master or secondary node could go offline but geo-replication would keep working. Thank you very much in advance. -- David Cunningham, Voisonics Limited htt

Re: [Gluster-users] Brick offline after upgrade

2021-04-01 Thread David Cunningham
Best Regards, > Strahil Nikolov > > On Tue, Mar 30, 2021 at 4:13, David Cunningham > wrote: > Thank you Strahil. So if we take into account the deprecated options from > all release notes then the direct upgrade should be okay. > > > On Fri, 26 Mar 2021 at 02:01, Strah

Re: [Gluster-users] Brick offline after upgrade

2021-03-29 Thread David Cunningham
elease notes (usually '.0) as some options are deprecated like tiering. > > Best Regards, > Strahil Nikolov > > On Tue, Mar 23, 2021 at 2:47, David Cunningham > wrote: > Hello, > > We ended up restoring the backup since it was easy on a test system. > > Does

Re: [Gluster-users] Brick offline after upgrade

2021-03-22 Thread David Cunningham
between? Thanks in advance. On Sat, 20 Mar 2021 at 09:58, David Cunningham wrote: > Hi Strahil, > > It's as follows. Do you see anything unusual? Thanks. > > root@caes8:~# ls -al /var/lib/glusterd/vols/gvol0/ > total 52 > drwxr-xr-x 3 root root 4096 Mar 18 17:06 . > dr

Re: [Gluster-users] Brick offline after upgrade

2021-03-19 Thread David Cunningham
; failed, review your > volfile again > > What is the content of : > > /var/lib/glusterd/vols/gvol0 ? > > > Best Regards, > > Strahil Nikolov > > On Fri, Mar 19, 2021 at 3:02, David Cunningham > wrote: > Hello, > > We have a single node/brick GlusterFS test

Re: [Gluster-users] GlusterFS mount crash

2021-03-18 Thread David Cunningham
least. > > Regards, > > Xavi > > > On Wed, Mar 10, 2021 at 5:10 AM David Cunningham < > dcunning...@voisonics.com> wrote: > >> Hello, >> >> We have a GlusterFS 5.13 server which also mounts itself with the native >> FUSE client. Recently

[Gluster-users] Brick offline after upgrade

2021-03-18 Thread David Cunningham
d we can't see any error message explaining exactly why. Would anyone have an idea of where to look? Since the logs from the time of the upgrade and reboot are a bit lengthy I've attached them in a text file. Thank you in advance for any advice! -- David Cunningham, Voisonics Limited

[Gluster-users] GlusterFS mount crash

2021-03-09 Thread David Cunningham
point does not exist Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: Please specify a mount point Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: Usage: Mar 9 05:13:54 voip1 mount-shared-storage.sh[20520]: man 8 /sbin/mount.glusterfs -- David Cunningham, Voisonics Limited http://voisoni
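
Note: the usage errors above mean mount.glusterfs was invoked without a mount point. A minimal sketch of a correct invocation, with a hypothetical server name and target directory:

    mkdir -p /mnt/shared
    mount -t glusterfs voip1.example.com:/gvol0 /mnt/shared

The equivalent fstab entry would be 'voip1.example.com:/gvol0 /mnt/shared glusterfs defaults,_netdev 0 0'.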

Re: [Gluster-users] Geo-replication log file not closed

2020-08-30 Thread David Cunningham
these processes are supposed to have gsyncd.log open? If so, how do we tell them to close and re-open their file handle? Thanks in advance! On Tue, 25 Aug 2020 at 15:24, David Cunningham wrote: > Hello, > > We're having an issue with the rotated gsyncd.log not being released.

[Gluster-users] Geo-replication force active server

2020-08-26 Thread David Cunningham
o, then we know it's geo-replication and not the server that's the problem. Thanks in advance, -- David Cunningham, Voisonics Limited http://voisonics.com/ USA: +1 213 221 1092 New Zealand: +64 (0)28 2558 3782 Community Meeting Calendar: Schedule - Every 2nd and 4th Tuesday

Re: [Gluster-users] How safe are major version upgrades?

2020-08-26 Thread David Cunningham
lems here, upgrade took between 10 > to 20 minutes (wait until healing is done) - but no geo replication, > so I can't say anything about that part. > > Best regards, > Hubert > > On Tue, 25 Aug 2020 at 05:47, David Cunningham > wrote: > > > > Hello, >

[Gluster-users] How safe are major version upgrades?

2020-08-24 Thread David Cunningham
if a complete re-install is necessary for safety. We have a maximum window of around 4 hours for this upgrade and would not want any significant risk of an unsuccessful upgrade at the end of that time. Is version 8.0 considered stable? Thanks in advance, -- David Cunningham, Voisonics Limited

[Gluster-users] Geo-replication log file not closed

2020-08-24 Thread David Cunningham
gvol0_nvfs10_gvol0/mnt-nodirectwritedata-gluster-gvol0.log --volfile-server=localhost --volfile-id=gvol0 --client-pid=-1 /tmp/gsyncd-aux-mount-Tq_3sU Perhaps the problem is that the kill -HUP in the logrotate script doesn't act on the right process? If so, does anyone have a command
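
Note: if the HUP cannot be aimed reliably at the right gsyncd process, a copytruncate rotation sidesteps the problem because no process has to reopen its log. A minimal sketch, assuming the default geo-replication log location (paths may differ per install):

    /var/log/glusterfs/geo-replication/*/*.log {
        weekly
        rotate 8
        compress
        copytruncate
        missingok
    }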

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-06-18 Thread David Cunningham
@Sankarshan, > any idea how to enable debug on the python script ? > > > Best Regards, > Strahil Nikolov > > > On 12 June 2020 at 6:49:57 GMT+03:00, David Cunningham < > dcunning...@voisonics.com> wrote: > >Hi Strahil, > > > >Is there a trick to gett

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-06-11 Thread David Cunningham
to discuss the problem? > > > Best Regards, > Strahil Nikolov > > On 11 June 2020 at 3:15:36 GMT+03:00, David Cunningham < > dcunning...@voisonics.com> wrote: > >Hi Strahil, > > > >Thanks for that. I did search for a file with the gfid in the name, on > &

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-06-10 Thread David Cunningham
number. > > Once you have the full path to the file , test: > - Mount with FUSE > - Check file exists ( no '??' for permissions, size, etc) and can be > manipulated (maybe 'touch' can be used ?) > - Find (on all replica sets ) the file and check the gfid >
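
Note: one common way to turn a gfid into a full path on a brick is to follow its hardlink under .glusterfs. A minimal sketch, using the brick path seen elsewhere in these threads and a made-up gfid:

    BRICK=/nodirectwritedata/gluster/gvol0
    GFID=11111111-2222-3333-4444-555555555555
    find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" -not -path "*/.glusterfs/*"

This works for regular files; directories are stored as symlinks under .glusterfs and can simply be read with readlink.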

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-06-09 Thread David Cunningham
was "normal" for the push node (which could be > another one) . > > As this script is python, I guess you can put some debug print > statements in it. > > Best Regards, > Strahil Nikolov > > На 9 юни 2020 г. 5:07:11 GMT+03:00, David Cunningham < > dcunning.

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-06-08 Thread David Cunningham
the debug level in the log. High > CPU usage by a geo-replication process would need to be traced back to > why it really requires that %-age of CPU if it was not doing so > previously. > > On Mon, 8 Jun 2020 at 05:29, David Cunningham > wrote: > > > > Hi Strahil, >

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-06-07 Thread David Cunningham
in geo-replication-slave logs. > > Does the issue still occur? > > Best Regards, > Strahil Nikolov > > On 6 June 2020 at 1:21:55 GMT+03:00, David Cunningham < > dcunning...@voisonics.com> wrote: > >Hi Sunny and Strahil, > > > >Thanks again for your respon

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-06-05 Thread David Cunningham
ct issue. I have a vague feeling that that python script is constantly > looping over some data causing the CPU hog. > > > > Sadly, I can't find an instruction for increasing the log level of the > geo rep log . > > > > > > Best Regards, > > Strahil

Re: [Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-06-01 Thread David Cunningham
of the file on both source and destination, > do they really match or are they different? > > What happens when you move the file away from the slave, does it fix > the issue? > > Best Regards, > Strahil Nikolov > > On 30 May 2020 at 1:10:56 GMT+03:00, Davi

[Gluster-users] Geo-replication: Entry not present on master. Fixing gfid mismatch in slave

2020-05-29 Thread David Cunningham
writedata/gluster/gvol0):1197:process_change] _GMaster: Sucessfully fixed all entry ops with gfid mismatch [2020-05-29 21:57:31.747319] I [master(worker /nodirectwritedata/gluster/gvol0):1954:syncjob] Syncer: Sync Time Taken duration=0.7409 num_files=18job=1 return_code=0 We've verified tha

Re: [Gluster-users] Lightweight read

2020-04-29 Thread David Cunningham
native client. BTW, under normal circumstances when the client checks all bricks, does that include checking an arbiter? Or are arbiters not checked? On Sat, 25 Apr 2020 at 19:41, Strahil Nikolov wrote: > On April 25, 2020 9:00:30 AM GMT+03:00, David Cunningham < > dcunning...@vois

Re: [Gluster-users] Lightweight read

2020-04-24 Thread David Cunningham
are talking about replica volumes, in which case the read > does happen from only one of the replica bricks. The client only sends > lookups to all the bricks to figure out which are the good copies. Post > that, the reads themselves are served from only one of the good copies. > > -

[Gluster-users] Lightweight read

2020-04-23 Thread David Cunningham
available, but it's actually not critical if we end up with an old version of the file in the case of a server down or net-split etc. Significantly improved read performance would be desirable instead. Thanks in advance for any help. -- David Cunningham, Voisonics Limited http://voisonics.com/ US

[Gluster-users] Another transaction is in progress

2020-03-15 Thread David Cunningham
etry for? We're running GlusterFS 5.12. Thanks in advance, -- David Cunningham, Voisonics Limited http://voisonics.com/ USA: +1 213 221 1092 New Zealand: +64 (0)28 2558 3782 Community Meeting Calendar: Schedule - Every Tuesday at 14:30 IST / 09:00 UTC Bridge: https://bluejeans.co

Re: [Gluster-users] Disk use with GlusterFS

2020-03-06 Thread David Cunningham
MT+02:00, David Cunningham < > dcunning...@voisonics.com> wrote: > >Hi Hu. > > > >Just to clarify, what should we be looking for with "df -i"? > > > > > >On Fri, 6 Mar 2020 at 18:51, Hu Bert wrote: > > > >> Hi, > >>

Re: [Gluster-users] Disk use with GlusterFS

2020-03-06 Thread David Cunningham
Hi Hu. Just to clarify, what should we be looking for with "df -i"? On Fri, 6 Mar 2020 at 18:51, Hu Bert wrote: > Hi, > > just a guess and easy to test/try: inodes? df -i? > > regards, > Hubert > > On Fri, 6 Mar 2020 at 04:42, David

Re: [Gluster-users] Disk use with GlusterFS

2020-03-05 Thread David Cunningham
What is it reporting for brick’s `df` output? > > ``` > df /nodirectwritedata/gluster/gvol0 > ``` > > — > regards > Aravinda Vishwanathapura > https://kadalu.io > > On 06-Mar-2020, at 2:52 AM, David Cunningham > wrote: > > Hello, > > A major concern we have

Re: [Gluster-users] Disk use with GlusterFS

2020-03-05 Thread David Cunningham
ng on here? Thanks in advance. On Thu, 5 Mar 2020 at 21:35, David Cunningham wrote: > Hi Aravinda, > > Thanks for the reply. This test server is indeed the master server for > geo-replication to a slave. > > I'm really surprised that geo-replication simply keeps writing

Re: [Gluster-users] Disk use with GlusterFS

2020-03-05 Thread David Cunningham
b.com/gluster/glusterfs/issues/833#issuecomment-594436009 > > If Changelogs files are causing issue, you can use archival tool to remove > processed changelogs. > https://github.com/aravindavk/archive_gluster_changelogs > > — > regards > Aravinda Vishwanathapura > https://kada

[Gluster-users] Disk use with GlusterFS

2020-03-04 Thread David Cunningham
re-pid-check: on changelog.changelog: on -- David Cunningham, Voisonics Limited http://voisonics.com/ USA: +1 213 221 1092 New Zealand: +64 (0)28 2558 3782 Community Meeting Calendar: Schedule - Every Tuesday at 14:30 IST / 09:00 UTC Bridge: https://bluejeans.com/441850968 Gluster-user

Re: [Gluster-users] Geo-replication

2020-03-04 Thread David Cunningham
://kadalu.io > > On 04-Mar-2020, at 4:13 AM, David Cunningham > wrote: > > Hi Strahil, > > The B cluster are communicating with each other via a LAN, and it seems > the A cluster has got B's LAN addresses (which aren't accessible from the > internet including th

Re: [Gluster-users] Geo-replication

2020-03-03 Thread David Cunningham
he B cluster to replicate using public addresses instead of the LAN. Thank you. On Tue, 3 Mar 2020 at 18:07, Strahil Nikolov wrote: > On March 3, 2020 4:13:38 AM GMT+02:00, David Cunningham < > dcunning...@voisonics.com> wrote: > >Hello, > > > >Thanks for that. When

Re: [Gluster-users] Geo-replication

2020-03-02 Thread David Cunningham
> > Please try with push-pem option during Geo-rep create command. > > — > regards > Aravinda Vishwanathapura > https://kadalu.io > > > On 02-Mar-2020, at 6:03 AM, David Cunningham > wrote: > > Hello, > > We've set up geo-replication but it isn
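
Note: push-pem is an argument to the geo-replication create command. A minimal sketch, assuming a master volume gvol0 and a hypothetical secondary host and volume, run as root:

    gluster volume geo-replication gvol0 slave1.example.com::gvol0 create push-pem
    gluster volume geo-replication gvol0 slave1.example.com::gvol0 start

push-pem distributes the SSH keys collected from all master nodes to the secondary, so passwordless SSH only needs to be prepared from the node where the create command is run.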

Re: [Gluster-users] Geo-replication

2020-03-01 Thread David Cunningham
much for any assistance. On Tue, 25 Feb 2020 at 15:46, David Cunningham wrote: > Hi Aravinda and Sunny, > > Thank you for the replies. We have 3 replicating nodes on the master side, > and want to geo-replicate their data to the remote slave side. As I > understand it if the master

Re: [Gluster-users] Geo-replication

2020-02-24 Thread David Cunningham
the replicating master nodes if one of them goes down. Thank you! On Tue, 25 Feb 2020 at 14:32, Aravinda VK wrote: > Hi David, > > > On 25-Feb-2020, at 3:45 AM, David Cunningham > wrote: > > Hello, > > I've a couple of questions on geo-replication that hopeful

[Gluster-users] Geo-replication

2020-02-24 Thread David Cunningham
.With regard to copying SSH keys, presumably the SSH key of all master nodes should be authorized on the geo-replication client side? Thanks for your help. -- David Cunningham, Voisonics Limited http://voisonics.com/ USA: +1 213 221 1092 New Zealand: +64 (0)28 2558 3782 Community Me

Re: [Gluster-users] GFS performance under heavy traffic

2020-01-07 Thread David Cunningham
TU - it's reducing the amount of packets that the kernel has to > process - but requires infrastructure to support that too. You can test by > setting MTU on both sides to 9000 and then run 'tracepath remote-ip'. Also > run a ping with large size without do not fra
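
Note: a quick way to confirm jumbo frames actually pass end to end is to check the path MTU and send a large, unfragmentable ping. A minimal sketch with a placeholder remote address:

    tracepath 203.0.113.5
    # 8972 = 9000 bytes MTU minus 20 (IP) and 8 (ICMP) header bytes
    ping -M do -s 8972 -c 3 203.0.113.5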

Re: [Gluster-users] GFS performance under heavy traffic

2020-01-07 Thread David Cunningham
che - that should not be like that. > > Are you using Jumbo frames (MTU 9000)? > What is your brick's I/O scheduler? > > Best Regards, > Strahil Nikolov > On Jan 7, 2020 01:34, David Cunningham wrote: > > Hi Strahil, > > We may have had a heal since the GFS arbite

Re: [Gluster-users] GFS performance under heavy traffic

2020-01-06 Thread David Cunningham
you decide to reset your gluster volume to the defaults, you can > create a new volume (same type as current one), then get the options for > that volume and put them in a file and then bulk deploy via 'gluster volume > set group custom-group', where the file is located >
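
Note: the bulk-apply mechanism referred to here is the volume 'group' setting. A minimal sketch, assuming a hypothetical group name and one key=value option per line in the file:

    cat > /var/lib/glusterd/groups/custom-group <<'EOF'
    performance.read-ahead=on
    performance.io-thread-count=16
    EOF
    gluster volume set gvol0 group custom-group

Stock group files such as 'virt' normally live in the same directory and use the same format.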

Re: [Gluster-users] GFS performance under heavy traffic

2019-12-26 Thread David Cunningham
~]# gluster volume get all cluster.op-version Option Value -- - cluster.op-version 5 On Fri, 27 Dec 2019 at 14:22, David Cunningham wrote: > Hi Strahil, > > Our volume options are as below. T
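
Note: once every node runs the new version, the cluster op-version can be raised so newer options become available. A minimal sketch:

    gluster volume get all cluster.max-op-version
    gluster volume set all cluster.op-version <value reported by the previous command>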

Re: [Gluster-users] GFS performance under heavy traffic

2019-12-26 Thread David Cunningham
ote: > Hi David, > > On Dec 24, 2019 02:47, David Cunningham wrote: > > > > Hello, > > > > In testing we found that actually the GFS client having access to all 3 > nodes made no difference to performance. Perhaps that's because the 3rd > node that wasn'

Re: [Gluster-users] GFS performance under heavy traffic

2019-12-23 Thread David Cunningham
uster processes in >> '/usr/share/gluster/scripts' which can be used for setting up a systemd >> service to do that for you on shutdown. >> >> Best Regards, >> Strahil Nikolov >> On Dec 20, 2019 23:49, David Cunningham >> wrote: >> >> Hi St

Re: [Gluster-users] GFS performance under heavy traffic

2019-12-22 Thread David Cunningham
for you on shutdown. > > Best Regards, > Strahil Nikolov > On Dec 20, 2019 23:49, David Cunningham wrote: > > Hi Stahil, > > Ah, that is an important point. One of the nodes is not accessible from > the client, and we assumed that it only needed to reach the GFS node

Re: [Gluster-users] GFS performance under heavy traffic

2019-12-20 Thread David Cunningham
> > Best Regards, > Strahil Nikolov > > On Friday, 20 December 2019 at 01:49:56 GMT+2, David Cunningham < > dcunning...@voisonics.com> wrote: > > > Hi Strahil, > > The chart attached to my original email is taken from the GFS server. > > I'm

Re: [Gluster-users] GFS performance under heavy traffic

2019-12-19 Thread David Cunningham
kolov > > > > On Thursday, 19 December 2019 at 02:28:55 GMT+2, David Cunningham < > dcunning...@voisonics.com> wrote: > > > Hi Raghavendra and Strahil, > > We are using GFS version 5.6-1.el7 from the CentOS repository. > Unfortunately we can't mo

Re: [Gluster-users] GFS performance under heavy traffic

2019-12-19 Thread David Cunningham
e client mounts? As it > is mostly static content it would help to use the kernel caching and > read-ahead mechanisms. > > I think the default is enabled. > > Regards, > > Jorick Astrego > On 12/19/19 1:28 AM, David Cunningham wrote: > > Hi Raghavendra and Stra

Re: [Gluster-users] GFS performance under heavy traffic

2019-12-18 Thread David Cunningham
.redhat.com/show_bug.cgi?id=1393419 > > On Wed, Dec 18, 2019 at 2:50 AM David Cunningham < > dcunning...@voisonics.com> wrote: > >> Hello, >> >> We switched a production system to using GFS instead of NFS at the >> weekend, however it didn't go well on

[Gluster-users] GFS performance under heavy traffic

2019-12-17 Thread David Cunningham
n do about it? NFS traffic doesn't exceed 4MBps, so 120MBps for GFS seems awfully high. It would also be good to have faster read performance from GFS, but that's another issue. Thanks in advance for any assistance. -- David Cunningham, Voisonics Limited http://voisonics.com/ USA:

Re: [Gluster-users] Thin-arbiter questions

2019-11-19 Thread David Cunningham
gluster/glusterfs/issues/763 > > Looks like there are some more minor issues in v7.0. I am planning to send > fixes soon, so these can be fixed in v7.1 > > -Amar > > > On Tue, Nov 19, 2019 at 2:49 AM David Cunningham < > dcunning...@voisonics.com> wrote: > >&g

Re: [Gluster-users] Thin-arbiter questions

2019-11-18 Thread David Cunningham
.gluster.org/#/c/glusterfs/+/22992/ - release 7 > https://review.gluster.org/#/c/glusterfs/+/22612/ - master > > > I am trying to come up with modified document/blog for this asap. > > --- > Ashish > > > -- > *From: *"David Cunningha

Re: [Gluster-users] Thin-arbiter questions

2019-07-31 Thread David Cunningham
committing in master won't be enough for it to make it to a release. If >> it has to be a part of release 6 then after being committed into master we >> have to back port it to the release 6 branch and it should get committed in >> that particular branch as well. Only then it w

Re: [Gluster-users] Thin-arbiter questions

2019-07-23 Thread David Cunningham
it has to be a part of release 6 then after being committed into master we > have to back port it to the release 6 branch and it should get committed in > that particular branch as well. Only then it will be a part of the package > released for that branch. > > > On Wed, 19 Jun,

Re: [Gluster-users] Thin-arbiter questions

2019-06-18 Thread David Cunningham
back ported to the particular release branch before tagging, then > it will be a part of the tagging. And this tag is the one used for creating > packaging. This is the procedure for CentOS, Fedora and Debian. > > Regards, > Hari. > > On Tue, 18 Jun, 2019, 4:06 AM Dav

Re: [Gluster-users] Thin-arbiter questions

2019-06-17 Thread David Cunningham
n as we are in last phase of patch reviews. You > can follow this patch - https://review.gluster.org/#/c/glusterfs/+/22612/ > > --- > Ashish > > ------ > *From: *"David Cunningham" > *To: *"Ashish Pandey" > *Cc: *"gluster-u

Re: [Gluster-users] Thin-arbiter questions

2019-06-10 Thread David Cunningham
Hi Ashish and Amar, Is there any news on when thin-arbiter might be in the regular GlusterFS, and the CentOS packages please? Thanks for your help. On Mon, 6 May 2019 at 20:34, Ashish Pandey wrote: > > > -- > *From: *"David Cunningham"

Re: [Gluster-users] Transport endpoint is not connected

2019-06-09 Thread David Cunningham
, 3 June 2019 at 18:16:00 GMT-4, David Cunningham < > dcunning...@voisonics.com> wrote: > > > Hello all, > > We confirmed that the network provider blocking port 49152 was the issue. > Thanks for all the help. > > > On Thu, 30 May 2019 at 16:11, Strahil

Re: [Gluster-users] Transport endpoint is not connected

2019-06-03 Thread David Cunningham
> it's definately a firewall. > > Best Regards, > Strahil Nikolov > On May 30, 2019 01:33, David Cunningham wrote: > > Hi Ravi, > > I think it probably is a firewall issue with the network provider. I was > hoping to see a specific connection failure message w

Re: [Gluster-users] Transport endpoint is not connected

2019-05-29 Thread David Cunningham
;t see a "Connected to gvol0-client-1" in the log. Perhaps a > firewall issue like the last time? Even in the earlier add-brick log from > the other email thread, connection to the 2nd brick was not established. > > -Ravi > On 29/05/19 2:26 PM, David Cunningham wrote: > > Hi

Re: [Gluster-users] Transport endpoint is not connected

2019-05-29 Thread David Cunningham
-heal Daemon on gfs2  N/A  N/A  Y  7634 Task Status of Volume gvol0 -- There are no active volume tasks On Wed, 29 May 2019 at 16:26, Ravishankar N wrote: > > On 29/05/19 6:21 AM, David Cunningham wrote:

[Gluster-users] Transport endpoint is not connected

2019-05-28 Thread David Cunningham
:435:gf_client_unref] 0-gvol0-server: Shutting down connection CTX_ID:30d74196-fece-4380-adc0-338760188b81-GRAPH_ID:0-PID:7718-HOST:gfs2.xxx.com-PC_NAME:gvol0-client-2-RECON_NO:-0 Thanks in advance for any assistance. -- David Cunningham, Voisonics Limited http://voisonics.com/ USA: +1 213 221 1092
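
Note: the resets in this thread turned out to be a blocked brick port (see the follow-ups above), so a quick reachability check from the client is worthwhile. A minimal sketch, assuming the brick port reported by volume status is 49152:

    gluster volume status gvol0        # shows each brick's TCP port
    nc -zv gfs2.xxx.com 24007          # glusterd management port
    nc -zv gfs2.xxx.com 49152          # brick port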

Re: [Gluster-users] add-brick: failed: Commit failed

2019-05-26 Thread David Cunningham
ll issue all along. Thanks for all your help. On Sat, 25 May 2019 at 01:49, Ravishankar N wrote: > Hi David, > On 23/05/19 3:54 AM, David Cunningham wrote: > > Hi Ravi, > > Please see the log attached. > > When I grep -E "Connected to |disconnected from" >

Re: [Gluster-users] add-brick: failed: Commit failed

2019-05-22 Thread David Cunningham
g the add-brick and attach the > gvol0-add-brick-mount.log here. After that, you can change the > client-log-level back to INFO. > > -Ravi > On 22/05/19 11:32 AM, Ravishankar N wrote: > > > On 22/05/19 11:23 AM, David Cunningham wrote: > > Hi Ravi, > > I'd

Re: [Gluster-users] add-brick: failed: Commit failed

2019-05-21 Thread David Cunningham
ol0' on gfs3. > > 4. `gluster volume add-brick gvol0 replica 3 arbiter 1 > gfs3:/nodirectwritedata/gluster/gvol0` from gfs1. > > 5. Check that the files are getting healed on to the new brick. > Thanks, > Ravi > On 22/05/19 6:50 AM, David Cunningham wrote: > > Hi Rav

Re: [Gluster-users] add-brick: failed: Commit failed

2019-05-21 Thread David Cunningham
ily: inet On Wed, 22 May 2019 at 12:43, Ravishankar N wrote: > Hi David, > Could you provide the `getfattr -d -m. -e hex > /nodirectwritedata/gluster/gvol0` output of all bricks and the output of > `gluster volume info`? > > Thanks, > Ravi > On 22/05/19 4:57 AM, David Cunn

Re: [Gluster-users] add-brick: failed: Commit failed

2019-05-21 Thread David Cunningham
thya Balachandran > wrote: > >> >> >> On Fri, 17 May 2019 at 06:01, David Cunningham >> wrote: >> >>> Hello, >>> >>> We're adding an arbiter node to an existing volume and having an issue. >>> Can anyone help? The root cause e

Re: [Gluster-users] add-brick: failed: Commit failed

2019-05-19 Thread David Cunningham
at, 18 May 2019 at 22:34, Strahil wrote: > Just run 'gluster volume heal my_volume info summary'. > > It will report any issues - everything should be 'Connected' and show '0'. > > Best Regards, > Strahil Nikolov > On May 18, 2019 02:01, David Cu

Re: [Gluster-users] add-brick: failed: Commit failed

2019-05-17 Thread David Cunningham
ankar N wrote: > > On 17/05/19 5:59 AM, David Cunningham wrote: > > Hello, > > We're adding an arbiter node to an existing volume and having an issue. > Can anyone help? The root cause error appears to be > "----0001: failed to res

[Gluster-users] add-brick: failed: Commit failed

2019-05-16 Thread David Cunningham
socket --xlator-option *replicate*.node-uuid=2069cfb3-c798-47e3-8cf8-3c584cf7c412 --process-name glustershd root 16856 16735 0 21:21 pts/0 00:00:00 grep --color=auto gluster -- David Cunningham, Voisonics Limited http://voisonics.com/ USA: +1 2

Re: [Gluster-users] Thin-arbiter questions

2019-05-06 Thread David Cunningham
roviding thin-arbiter support on glusterd however, > it is not available right now. > https://review.gluster.org/#/c/glusterfs/+/22612/ > > --- > Ashish > > -- > *From: *"David Cunningham" > *To: *gluster-users@gluster.org > *Sent: *Friday, M

Re: [Gluster-users] Thin-arbiter questions

2019-05-03 Thread David Cunningham
been sent and it only requires reviews. I hope it > should be completed in next 1 month or so. > https://review.gluster.org/#/c/glusterfs/+/22612/ > > --- > Ashish > > ------ > *From: *"David Cunningham" > *To: *"Ashish Pandey" >
