[Gluster-users] Pre Validation failed on 192.168.3.31. Volume gv1 does not exist

2024-01-03 Thread Paul Watson
Hostname: 192.168.3.31 Uuid: 7e085d9f-a0f9-4ed6-a850-44b6ed991081 State: Peer in Cluster (Connected) -- Paul Watson Oninit www.oninit.com Tel: +1 913 364 0360 Cell: +1 913 387 7529 Oninit® is a registered trademark of Oninit LLC If you want to improve, be content to be thought foolish and stupid

Re: [Gluster-users] [External] Re: Problems with gluster distributed mode and numpy memory mapped files

2020-01-07 Thread Jewell, Paul
has been solved. So I guess I would recommend using this "direct-io-mode=disable" when working with numpy files. Thanks, -Paul From: gluster-users-boun...@gluster.org on behalf of Jewell, Paul Sent: Thursday, December 12, 2019 10:52 AM To: glust
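A minimal sketch of that workaround, assuming a hypothetical volume gv0 served from host1 and mounted at /mnt/gluster (names are illustrative, not from the thread):
  # FUSE mount with direct I/O disabled so mmap()-based readers such as numpy work
  mount -t glusterfs -o direct-io-mode=disable host1:/gv0 /mnt/gluster
  # or the equivalent /etc/fstab entry
  host1:/gv0  /mnt/gluster  glusterfs  defaults,_netdev,direct-io-mode=disable  0 0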

Re: [Gluster-users] Problems with gluster distributed mode and numpy memory mapped files

2019-12-12 Thread Jewell, Paul
! ____ From: Jewell, Paul Sent: Monday, December 9, 2019 1:40 PM To: gluster-users@gluster.org Subject: Re: Problems with gluster distributed mode and numpy memory mapped files Hi All, I am using gluster in order to share data between four development servers. It is just

Re: [Gluster-users] Cannot see all data in mount

2019-05-16 Thread Paul van der Vlis
Op 16-05-19 om 05:43 schreef Nithya Balachandran: > > > On Thu, 16 May 2019 at 03:05, Paul van der Vlis <mailto:p...@vandervlis.nl>> wrote: > > Op 15-05-19 om 15:45 schreef Nithya Balachandran: > > Hi Paul, > > > > A few questions: >

Re: [Gluster-users] Cannot see all data in mount

2019-05-15 Thread Paul van der Vlis
Op 15-05-19 om 15:45 schreef Nithya Balachandran: > Hi Paul, > > A few questions: > Which version of gluster are you using? On the server and some clients: glusterfs 4.1.2 On a new client: glusterfs 5.5 > Did this behaviour start recently? As in were the contents of that >

Re: [Gluster-users] Cannot see all data in mount

2019-05-15 Thread Paul van der Vlis
32a Status: Started Snapshot Count: 0 Number of Bricks: 1 Transport-type: tcp Bricks: Brick1: xxx-vpn:/DATA Options Reconfigured: transport.address-family: inet nfs.disable: on (I have edited this a bit for privacy of my customer). I think they have used glusterfs because it can do ACLs. With reg

[Gluster-users] Cannot see all data in mount

2019-05-15 Thread Paul van der Vlis
don't see any data in /data/ALGEMEEN/. I don't see something special in /etc/exports or in /etc/glusterfs on the server. Is there maybe a mechanism in Glusterfs what can exclude data from export? Or is there a way to debug this problem? With regards, Paul van der Vlis # file: VOORBEELD # o

[Gluster-users] cluster brick logs filling after upgrade from 3.6 to 3.12

2018-05-23 Thread Paul Allen
Recently we updated a Gluster replicated setup from 3.6 to 3.12 (stepping through 3.8 first before going to 3.12). Afterwards I noticed the brick logs were filling at an alarming rate on the server we have the NFS service running from: $ sudo tail -20

Re: [Gluster-users] SQLite3 on 3 node cluster FS?

2018-03-08 Thread Paul Anderson
the database operations to prevent data loss. You also can't do any caching in your volume mount on the client side. The performance settings server side appear not to matter, provided you're up to date on client/server code. I hope this helps someone! Paul On Tue, Mar 6, 2018 at 12:32 PM
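The client-side caching referred to above is done by Gluster's performance translators; a hedged sketch of switching them off for a hypothetical volume gv0 (these are standard volume options, but the exact set used in the thread is not shown):
  gluster volume set gv0 performance.write-behind off
  gluster volume set gv0 performance.read-ahead off
  gluster volume set gv0 performance.io-cache off
  gluster volume set gv0 performance.quick-read off
  gluster volume set gv0 performance.stat-prefetch off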

Re: [Gluster-users] gluster debian build repo redirection loop on apt-get update on docker

2018-03-07 Thread Paul Anderson
ile, but on debian, it appears to have to be a real file. Paul On Tue, Mar 6, 2018 at 10:28 PM, Kaleb S. KEITHLEY <kkeit...@redhat.com> wrote: > On 03/06/2018 05:50 PM, Paul Anderson wrote: >> When I follow the directions at >> http://docs.gluster.org/en/latest/Install-G

[Gluster-users] gluster debian build repo redirection loop on apt-get update on docker

2018-03-06 Thread Paul Anderson
ume-yes install glusterfs-client Thanks, Paul ___ Gluster-users mailing list Gluster-users@gluster.org http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] SQLite3 on 3 node cluster FS?

2018-03-06 Thread Paul Anderson
, Raghavendra Gowdappa <rgowd...@redhat.com> wrote: > +Csaba. > > On Tue, Mar 6, 2018 at 2:52 AM, Paul Anderson <p...@umich.edu> wrote: >> >> Raghavendra, >> >> Thanks very much for your reply. >> >> I fixed our data corruption problem by dis

Re: [Gluster-users] SQLite3 on 3 node cluster FS?

2018-03-05 Thread Paul Anderson
would like our test scripts, I can either tar them up and email them or put them in github - either is fine with me. (they rely on current builds of docker and docker-compose) Thanks again!! Paul On Mon, Mar 5, 2018 at 11:26 AM, Raghavendra Gowdappa <rgowd...@redhat.com> wrote: > > &

[Gluster-users] SQLite3 on 3 node cluster FS?

2018-03-05 Thread Paul Anderson
that flushes won't block as would be needed by SQLite3. Does anyone have any suggestions? Any words of wisdom would be much appreciated. Thanks, Paul ___ Gluster-users mailing list Gluster-users@gluster.org http://lists.gluster.org/mailman/listinfo/gluster

[Gluster-users] "linkfile not having link" occurrs sometimes after renaming

2018-01-15 Thread Paul
There are two users u1 & u2 in the cluster. Some files are created by u1, and they are read only for u2. Of course u2 can read these files. Later these files are renamed by u1. Then I switch to the user u2. I find that u2 can't list or access the renamed files. I see these errors in log:

Re: [Gluster-users] A Problem of readdir-optimize

2017-12-29 Thread Paul
server.keepalive-interval: 1 server.keepalive-time: 2 transport.keepalive: 1 client.keepalive-count: 1 client.keepalive-interval: 1 client.keepalive-time: 2 features.cache-invalidation: off network.ping-timeout: 30 user.smb.guest: no user.id: 8148 nfs.disable: on snap-activate-on-create: enable Thanks, Paul

[Gluster-users] A Problem of readdir-optimize

2017-12-28 Thread Paul
don't see this problem. Is there a way to solve this problem? If ls doesn't return the correct file names, Thanks, Paul ___ Gluster-users mailing list Gluster-users@gluster.org http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] xfs_rename error and brick offline

2017-11-22 Thread Paul
Vijay, Yes, I found later that it was a problem with xfs. After upgrading the xfs code, I've not seen this problem again. Thanks a lot! Paul On Fri, Nov 17, 2017 at 12:08 AM, Vijay Bellur <vbel...@redhat.com> wrote: > > > On Thu, Nov 16, 2017 at 6:23 AM, Paul <fly...@gmail.com> wrote:

Re: [Gluster-users] error "Not able to add to index" in brick logs

2017-11-22 Thread Paul Robert Marino
Yes indeed it is probably what's going on. what filesystem are you using and what are the mount options?   Original Message   From: li...@bago.org Sent: November 22, 2017 4:26 PM To: gluster-users@gluster.org Subject: [Gluster-users] error "Not able to add to index" in brick logs in my

[Gluster-users] xfs_rename error and brick offline

2017-11-16 Thread Paul
sks are new and I don't see any low level IO error. Is it a bug related to xfs or GlusterFS? Is there a workaround? Thanks, Paul ___ Gluster-users mailing list Gluster-users@gluster.org http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Ignore failed connection messages during copying files with tiering

2017-11-07 Thread Paul
the problem happens again. Later we find the problem seems to happen when creating directories. The GlusterFS version is 3.11.0. Does anyone know what the problem is? Is it related to tiering? Thanks, Paul ___ Gluster-users mailing list Gluster-users@gluster.org

[Gluster-users] Fwd: Ignore failed connection messages during copying files with tiering

2017-11-04 Thread Paul
happens again. Later we find the problem seems to happen when creating directories. The GlusterFS version is 3.11.0. Does anyone know what the problem is? Is it related to tiering? Thanks, Paul ___ Gluster-users mailing list Gluster-users@gluster.org http

Re: [Gluster-users] [Gluster-devel] BoF - Gluster for VM store use case

2017-10-31 Thread Paul Cuzner
Just wanted to pick up on the EC for vm storage domains option.. > > * Erasure coded volumes with sharding - seen as a good fit for VM disk > > storage > > I am working on this with a customer, we have been able to do 400-500 MB / > sec writes! Normally things max out at ~150-250. The trick
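A sketch of enabling sharding on a VM-image volume, assuming a hypothetical volume gv0; the 64MB shard size is a common choice, not a value taken from the thread:
  gluster volume set gv0 features.shard on
  gluster volume set gv0 features.shard-block-size 64MB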

Re: [Gluster-users] Elasticsearch facing CorruptIndexException exception with GlusterFs 3.10.1

2017-05-08 Thread Abhijit Paul
Pinging for a reply to the previous mail. On Sun, May 7, 2017 at 1:06 AM, Abhijit Paul <er.abhijitp...@gmail.com> wrote: > https://pkalever.wordpress.com/2017/03/14/elasticsearch-with-gluster-block/ > the tested environment used here is Fedora, > but i am using RHEL based Oracle lin

Re: [Gluster-users] Elasticsearch facing CorruptIndexException exception with GlusterFs 3.10.1

2017-05-06 Thread Abhijit Paul
umar Karampuri <pkara...@redhat.com > wrote: > > > On Fri, May 5, 2017 at 5:40 PM, Pranith Kumar Karampuri < > pkara...@redhat.com> wrote: > >> >> >> On Fri, May 5, 2017 at 5:36 PM, Abhijit Paul <er.abhijitp...@gmail.com> >> wrote: &g

Re: [Gluster-users] Elasticsearch facing CorruptIndexException exception with GlusterFs 3.10.1

2017-05-05 Thread Abhijit Paul
5, 2017 at 5:06 PM, Pranith Kumar Karampuri <pkara...@redhat.com > wrote: > Abhijit we just started making the efforts to get all of this stable. > > On Fri, May 5, 2017 at 4:45 PM, Abhijit Paul <er.abhijitp...@gmail.com> > wrote: > >> I yet to try gluster-block

Re: [Gluster-users] Elasticsearch facing CorruptIndexException exception with GlusterFs 3.10.1

2017-05-05 Thread Abhijit Paul
ich > doesn't require solving all these caching issues. > Here's a blog post on the same - https://pkalever.wordpress.com/2017/03/14/elasticsearch-with-gluster-block/ > > Adding Prasanna and Pranith who worked on this, in case you need more info > on this. > > -Krutika >

Re: [Gluster-users] Elasticsearch facing CorruptIndexException exception with GlusterFs 3.10.1

2017-05-04 Thread Abhijit Paul
gi?id=1426548> ? FYI I am using glusterfs 3.10.1 tar.gz Regards, Abhijit On Thu, May 4, 2017 at 10:58 PM, Amar Tumballi <atumb...@redhat.com> wrote: > > > On Thu, May 4, 2017 at 10:41 PM, Abhijit Paul <er.abhijitp...@gmail.com> > wrote: > >> Since i am n

Re: [Gluster-users] Elasticsearch facing CorruptIndexException exception with GlusterFs 3.10.1

2017-05-04 Thread Abhijit Paul
tors to make that work complete. Cced the relevant folks for more >> information. Can you please turn off all the perf xlator options as a work >> around to move forward? >> >> On Wed, May 3, 2017 at 8:04 PM, Abhijit Paul <er.abhijitp...@gmail.com> >> wrote: >

Re: [Gluster-users] Elasticsearch facing CorruptIndexException exception with GlusterFs 3.10.1

2017-05-04 Thread Abhijit Paul
at work complete. Cced the relevant folks for more > information. Can you please turn off all the perf xlator options as a work > around to move forward? > > On Wed, May 3, 2017 at 8:04 PM, Abhijit Paul <er.abhijitp...@gmail.com> > wrote: > >> Dear folks, >> &

[Gluster-users] Elasticsearch facing CorruptIndexException exception with GlusterFs 3.10.1

2017-05-03 Thread Abhijit Paul
Dear folks, I setup Glusterfs(3.10.1) NFS type as persistence volume for Elasticsearch(5.1.2) but currently facing issue with "CorruptIndexException" in Elasticsearch logs and due to that index health turned RED in Elasticsearch. Later found that there was an issue with gluster < 3.10 (

[Gluster-users] Gluster 3.8.9 with NFS-Ganesha HA Errors

2017-03-17 Thread Paul Cammarata
--node=NFSPROD02 --lifetime=forever --name=grace-active --update=1 failed Mar 17 11:47:10 NFSPROD02 ganesha_grace(nfs-grace)[25420]: INFO: crm_attribute --query --node=NFSPROD02 --name=grace-active failed Everything is working properly, but any idea what the problem is here? Paul Cammarata

[Gluster-users] NFS-Ganesha HA reboot

2017-03-13 Thread Paul Cammarata
? Paul Cammarata SIEM System Administrator SecurIT360 530 Beacon Pkwy W, Suite 901 | Birmingham, AL 35209 O: 205.419.9066 x1022 | P: 205.532.9646 | F: 205.449.1425 www.securit360.com<http://www.securit360.com/> | p...@securit360.com<mailto:p...@securit360.com> CONFIDENTIALITY: This emai

[Gluster-users] volume start fails

2017-01-17 Thread Paul Bickerstaff [DATACOM]
Paul ___ Gluster-users mailing list Gluster-users@gluster.org http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] NFS service dying

2017-01-11 Thread Paul Allen
here, I was hoping to have this in production last Friday. If anyone has any ideas I'd be very grateful. -- Paul Allen Inetz System Administrator ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster

[Gluster-users] restoring a volume: best practise for backup and restore

2016-12-04 Thread Paul Bickerstaff [DATACOM]
know what others have found works well. Thanks Paul ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] capacities

2016-11-28 Thread Paul Feuvraux
Hey all, What is the max storage capacity of GlusterFS please? -- Paul Feuvraux <https://super-baleine.github.io/> ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Reliably mounting a gluster volume

2016-10-24 Thread Paul Boven
s also get properly started on boot once the /gluster filesystem is there. Regards, Paul Boven. -- Paul Boven <bo...@jive.eu> +31 (0)521-596547 Unix/Linux/Networking specialist Joint Institute for VLBI in Europe - www.jive.eu VLBI - It's a fringe science ___
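One common ingredient for making the /gluster mount come up reliably at boot is marking it as a network filesystem and naming fallback volfile servers; a hedged sketch for a hypothetical volume gv0 on servers gfs1/gfs2 (option spelling varies across Gluster releases):
  gfs1:/gv0  /gluster  glusterfs  defaults,_netdev,backup-volfile-servers=gfs2  0 0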

[Gluster-users] Reliably mounting a gluster volume

2016-10-21 Thread Paul Boven
ate-0: All subvolumes are down. Going offline until atleast one of them comes back up. Once the machine has fully booted and I log in, simply typing 'mount /gluster' always succeeds. I would really appreciate your help in making this happening on boot without intervention. Regards, Paul Bove

Re: [Gluster-users] CFP for Gluster Developer Summit

2016-08-31 Thread Paul Cuzner
Sounds great! I had to knit together different cli commands in the past for 'gstatus' to provide a view of the cluster - so this is cool. Would it be possible to add an example of the output to the RFE BZ 1353156 <https://bugzilla.redhat.com/show_bug.cgi?id=1353156>? Paul C On We

Re: [Gluster-users] release-3.6 end of life

2016-08-26 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
wntime etc to upgrade my entire clusters, this is not a short term plan and takes careful testing, planning and approval of management to disrupt services that are dependent. Best wishes Paul -- Paul Osborne Senior Systems Engineer Canterbury Christ Church University Tel: 01

[Gluster-users] Data encryption

2016-07-27 Thread Paul Warren
this. I was following http://www.gluster.org/community/documentation/index.php/Features/disk-encryption - but this doesn't exist any more. Thanks Paul ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo

Re: [Gluster-users] [Gluster-devel] Non Shared Persistent Gluster Storage with Kubernetes

2016-07-05 Thread Paul Cuzner
Just to pick up on how the block device is defined. I think sharding is the best option - it's already the 'standard' for virtual disks, and the images files for iSCSI are no different in my mind. They have pretty much the same requirements around sizing, fault tolerance and recovery. Let's keep

Re: [Gluster-users] RAM/Disk ratio question

2016-05-27 Thread Paul Robert Marino
Unfortunately that kind of tuning doesn't have any simple answers, and anyone who says there is should not be listened to. It really depends on your workload and a lot of other factors such as your hardware. for example a 20 platter RAID 1+0 on spinning disks with a wide stripe needs very little

Re: [Gluster-users] How to identify a files shard?

2016-04-24 Thread Paul Cuzner
Just wondering how shards can silently be different across bricks in a replica? Lindsay caught this issue due to her due diligence taking on 'new' tech - and resolved the inconsistency, but tbh this shouldn't be an admin's job :( On Sun, Apr 24, 2016 at 7:06 PM, Krutika Dhananjay

Re: [Gluster-users] Does gluster have a "scrub"?

2016-04-13 Thread Paul Cuzner
Hi Lindsay, As I understand it, the current logic of bitd/scrubd does not address the problem you asked about "a process where all replicas are compared for inconsistencies." bitd/scrubd operate independently within each node, signing each file and validating the checksum - which is part of the
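For reference, the bitd/scrubd machinery described above is enabled and driven per volume; a hedged sketch for a hypothetical volume gv0:
  gluster volume bitrot gv0 enable
  gluster volume bitrot gv0 scrub-frequency weekly
  gluster volume bitrot gv0 scrub status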

Re: [Gluster-users] Blog on Hyperconverged Infrastructure using oVirt and Gluster

2016-01-13 Thread Paul Cuzner
copying the vm's across the storage domains can be done with the storage migrate feature. I did see a few problems in the past with migrating running vm's in this way, but powered off vm's were fine. However, it's not the fastest process though! On Wed, Jan 13, 2016 at 4:28 AM, Krutika Dhananjay

Re: [Gluster-users] 3.6.6 healing issues?

2015-10-16 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
ter actually use? Thanks Paul From: gluster-users-boun...@gluster.org <gluster-users-boun...@gluster.org> on behalf of Osborne, Paul (paul.osbo...@canterbury.ac.uk) <paul.osbo...@canterbury.ac.uk> Sent: 15 October 2015 16:40 To: gluster-us

Re: [Gluster-users] Test results and Performance Tuning efforts ...

2015-10-12 Thread Paul Cuzner
Hi, *IF* you're seeing crashes in glusterd, Atin sent out a workaround that needs to be applied to 3.7.x to avoid the issue (introduced with epoll) add # for epoll issue glusterd crash fix option ping-timeout 0 option event-threads 1 to your glusterd.vol files
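Laid out as it would sit in /etc/glusterfs/glusterd.vol (the two option lines are quoted from the message; the surrounding volume block is the file's usual structure, other options elided):
  volume management
      type mgmt/glusterd
      ...
      # for epoll issue glusterd crash fix
      option ping-timeout 0
      option event-threads 1
  end-volume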

Re: [Gluster-users] Test results and Performance Tuning efforts ...

2015-10-12 Thread Paul Cuzner
wrote: > > On 13 October 2015 at 11:51, Paul Cuzner <pcuz...@redhat.com> wrote: > >> add >> # for epoll issue glusterd crash fix >> option ping-timeout 0 >> option event-threads 1 >> >> to your glusterd.vol files (/etc/glusterfs/glusterd.vol)

Re: [Gluster-users] How to clear volume options

2015-10-10 Thread Paul Cuzner
yep, try gluster vol reset Paul C On Sun, Oct 11, 2015 at 11:30 AM, Lindsay Mathieson < lindsay.mathie...@gmail.com> wrote: > Once set, is there any way to "unset" a volume option, so that it returns > to its default v
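For example, assuming a hypothetical volume gv0 and option name:
  gluster volume reset gv0 performance.write-behind   # reset a single option to its default
  gluster volume reset gv0                             # reset all reconfigured options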

[Gluster-users] Advice for auto-scaling

2015-09-16 Thread Paul Thomas
to support each public instance individually? Paul ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Advice for auto-scaling

2015-09-16 Thread Paul Thomas
Would you run puppet in init.d of the new node to sync infrastructure? Then you could use rundeck to trigger the shared config on each instance, for on demand syncing. On 16/09/15 13:23, Paul Thomas wrote: Hi, I’m new to shared file systems and horizontal cloud scaling. I have already

Re: [Gluster-users] Keeping it Simple replication for HA

2015-09-14 Thread Paul Cuzner
Have you considered the disperse volume? We'd normally advocate 6 servers for a +2 redundancy factor though. Paul C On Tue, Sep 15, 2015 at 5:47 AM, <aa...@ajserver.com> wrote: > Gluster users, > > I am looking to implement GlusterFS on my network for large, expandable, > and
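A sketch of the suggested 4+2 layout, with six hypothetical servers s1..s6 each contributing one brick:
  gluster volume create gv0 disperse 6 redundancy 2 \
      s1:/bricks/b1 s2:/bricks/b1 s3:/bricks/b1 s4:/bricks/b1 s5:/bricks/b1 s6:/bricks/b1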

Re: [Gluster-users] Locking failed - since upgrade to 3.6.4

2015-08-03 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
right... Thanks Paul ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Locking failed - since upgrade to 3.6.4

2015-08-03 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
and servers have been sequentially rebooted in the hope that this would clear any issue - however that does not appear to be the case. Thanks Paul Paul Osborne Senior Systems Engineer Canterbury Christ Church University Tel: 01227 782751 From: Atin Mukherjee

[Gluster-users] 3.5 Debian Wheezy packages?

2015-07-10 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
that this is temporary as I am loath to move forward to Jessie at present due to its immaturity. Many thanks Paul Paul Osborne Senior Systems Engineer Canterbury Christ Church University Tel: 01227 782751 ___ Gluster-users mailing list Gluster-users

[Gluster-users] Invisible during 'ls' but able to 'cd'

2015-06-09 Thread Paul Anderson
files on them. Gluster is replicating them, so I am not sure if this is just a it will take time issue or if there is a real problem. Also potentially unrelated but maybe not is if I ask gluster to do a rebalance glusterd crashes. Thank you for the advice and assistance, Paul

Re: [Gluster-users] seq read performance comparion between libgfapi and fuse

2015-05-25 Thread Paul Guo
On Fri, May 22, 2015 at 06:50:40PM +0800, Paul Guo wrote: Hello, I wrote two simple single-process seq read test case to compare libgfapi and fuse. The logic looks like this. char buf[32768]; while (1) { cnt = read(fd, buf, sizeof(buf)); if (cnt == 0

[Gluster-users] seq read performance comparion between libgfapi and fuse

2015-05-22 Thread Paul Guo
cache). I tested direct io because I suspected that fuse kernel readahead helped more than the read optimization solutions in gluster. I searched a lot but I did not find much about the comparison between fuse and libgfapi. Anyone has known about this and known why? Thanks, Paul

[Gluster-users] slow seek times

2015-05-13 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
seeks_cpu: 9.6 Server 2: seeks: 2175.4 seeks_cpu: 65.4 At this point I am confused why there should be such a difference in seek info and am uncertain how to proceed further. Suggestions are welcome. Thanks Paul ___ Gluster-users

Re: [Gluster-users] [Gluster-devel] Got a slogan idea?

2015-04-07 Thread Paul Robert Marino
Want a storage cluster? Get Gluster! On Tue, Apr 7, 2015 at 3:37 PM, Dustin L. Black dbl...@redhat.com wrote: {Flexible|Adaptive|Versatile} Open Data Store Dustin L. Black, RHCA Principal Technical Account Manager Red Hat, Inc. - Strategic Customer Engagement (o) +1.212.510.4138 (m)

Re: [Gluster-users] iscsi and distributed volume

2015-04-01 Thread Paul Robert Marino
You do realize you would have to put the iSCSI target disk image on the mounted Gluster volume, not directly on the brick. So as long as you have replication your volume would remain accessible. You can not point the iSCSI process directly at the brick or replication and striping won't work

Re: [Gluster-users] tune2fs exited with non-zero exit status

2015-03-24 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
Ah, that is handy to know. Will this patch get applied to the 3.5 release stream or am I going to have to look at moving onto 3.6 at some point? Thanks Paul -- Paul Osborne Senior Systems Engineer Infrastructure Services IT Department Canterbury Christ Church University -Original

[Gluster-users] tune2fs exited with non-zero exit status

2015-03-16 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
it is not an issue for me as this is still proof of concept for what we are doing; what I need to know is whether doing so will stop the continual log churn. Many thanks Paul ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org

Re: [Gluster-users] Geo-replication (v3.5.3)

2015-03-15 Thread Paul Mc Auley
One thing I've noticed is that you need to make sure that the SSH host keys of _each_ of the slave bricks are in the known_hosts of each of the master bricks. Failure to ensure this can cause failure in a non-obvious way. Regards, Paul On 12 March 2015 at 20:29, John Gardeniers jgardeni
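A hedged sketch of pre-seeding those host keys on a master node, assuming hypothetical slave hosts slave1..slave3:
  for h in slave1 slave2 slave3; do ssh-keyscan -H "$h" >> /root/.ssh/known_hosts; done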

[Gluster-users] lost quorum and disable NFS server

2015-03-11 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
to do this? Client side I am not concerned (yet) as I am using autofs with NFS server failover via weighting - this demonstrably deals with loss of a gluster node, but what I do not want is a client continuing to use a gluster node that is off in a world of its own. Thanks Paul -- Paul

[Gluster-users] Debian stable gluster packages

2015-03-09 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
reasonable long term support? Many thanks Paul -- Paul Osborne Senior Systems Engineer Infrastructure Services IT Department Canterbury Christ Church University ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman/listinfo

[Gluster-users] quorum sanity check please

2015-03-09 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
. Does it make sense for me to have web1 mount gfs1 and gfsq, web2 to mount gfs2 and gfsq with weightings set to away from the quorum server? I think this makes sense but experienced folk out there may tell me better. Thanks Paul -- Paul Osborne Senior Systems Engineer Infrastructure Services

Re: [Gluster-users] High latency on FSYNC, INODELK, FINODELK

2015-01-28 Thread Paul E Stallworth
7332621.00 us 1 FSYNC I repeated the test and the results for this brick are similar: 29.60 176.56 us 27.00 us 1008658.00 us 55273 WRITE 62.41 1714773.67 us 10098.00 us 10996032.00 us 12 FSYNC Thanks, Paul Paul Stallworth Housing

[Gluster-users] High latency on FSYNC, INODELK, FINODELK

2015-01-27 Thread Paul E Stallworth
7.98 1843032.83 us 22261.00 us 16023044.00 us 12 FSYNC 8.19 1620932.64 us 27.00 us 17171453.00 us 14 INODELK 9.03 94.52 us 22.00 us 9533.00 us 264526 LOOKUP 73.67 563743.05 us 14.00 us 17173239.00 us 362 FINODELK Thanks, Paul Paul Stallworth Housing Information Technology University of Colorado

[Gluster-users] Non-root user geo-replication in 3.6?

2015-01-05 Thread Paul Mc Auley
the element of setting GLUSTERD_WORKDIR to /var/lib/glusterd and running /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh What is the current situation with this? Thanks, Paul ___ Gluster-users mailing list Gluster-users@gluster.org http://www.gluster.org/mailman

[Gluster-users] Help interpreting profile results

2014-12-19 Thread Paul E Stallworth
I'd like to at least know which direction to head in. Any other tips or resources that you could send my way would also be appreciated. Thanks, Paul Paul Stallworth Housing IT University of Colorado Boulder Boulder, Colorado 80309 T: 303.735.6623

Re: [Gluster-users] # of replica != number pf bricks?

2014-12-11 Thread Paul Robert Marino
Yes -- Sent from my HP Pre3. On Dec 10, 2014 3:54 PM, Michael Schwartzkopff m...@sys4.de wrote: Hi, what happens if the number of replica in a volume is not equal to the number of bricks? Sample: I have a volume with 4 bricks (on four peers) and only want to have two replicas of every file. Is

Re: [Gluster-users] Add space to a volume

2014-12-10 Thread Paul Robert Marino
Of course you can always add space to the volume; that works well. The reason you may want to consider adding bricks is if you enable striping in gluster with the mirroring you will probably see better performance on your reads and writes. -- Sent from my HP Pre3. On Dec 10, 2014 11:21 AM, wodel

[Gluster-users] 3.5.3 NFS locking issues

2014-12-09 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
then why does the gluster documentation not recommend this? (this is not intended as a snarky dig - honest) Thanks Paul Paul Osborne Senior Systems Engineer Infrastructure Services IT Department Canterbury Christ Church University +44 1227 782751

Re: [Gluster-users] Gluster volume not automounted when peer is down

2014-11-25 Thread Paul Robert Marino
volume not automounted when peer is down A much simpler answer is to assign a hostname to multiple IP addresses (round robin dns). When gethostbyname() returns multiple entries, the client will try them all until it's successful. On 11/24/2014 06:23 PM, Paul Robert Marino wrote: This is simple
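The round-robin approach amounts to giving one name several A records and mounting by that name; a sketch with hypothetical names and addresses (zone-file syntax):
  gluster  IN A 192.168.0.11
  gluster  IN A 192.168.0.12
  gluster  IN A 192.168.0.13
  ; clients then mount gluster.example.com:/gv0 and try each address until one answers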

Re: [Gluster-users] Gluster volume not automounted when peer is down

2014-11-24 Thread Paul Robert Marino
This is simple and can be handled in many ways. Some background first. The mount point is a single IP or host name. The only thing the client uses it for is to download a volume file (volfile) describing all the bricks in the cluster. The next thing is it opens connections to all the nodes containing bricks for that

Re: [Gluster-users] NFS crashes - bug 1010241

2014-11-19 Thread Paul Robert Marino
In my experience this usually happens because of NFS lockd trying to traverse a firewall. Turn off NFS locking on the source host and you will be fine. The root cause is not a problem with Gluster; it's actually a deficiency in the NFS RFCs about RPC which has never been properly addressed. -- Sent
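Turning locking off amounts to mounting with nolock on the client side, sketched here against a hypothetical server and export (NFSv3, which is what Gluster's built-in NFS server speaks):
  mount -t nfs -o vers=3,nolock server1:/gv0 /mnt/gv0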

Re: [Gluster-users] iowait - ext4 + ssd journal

2014-11-17 Thread Paul Robert Marino
you are partially correct. The -i and -L options have not been implemented in xfs_grow. That said: 1) The journal size will increase automatically to the appropriate size based on the data area size; it's just that you can't manually specify the size. 2) While it is true you can't switch between internal

Re: [Gluster-users] SNMP monitoring

2014-11-11 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
Hi, I had seen it but didn’t want to go down that route without seeing if there was something obvious that I was missing, hence my mail to the list. Looks like I will be putting that on the list to look at in anger when I get time to do so. Many thanks Paul From: Juan José Pavlik Salles

[Gluster-users] SNMP monitoring

2014-11-10 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
I need, however rather than just try what could be random code, is there anything that the users here can recommend? Thanks Paul ___ Gluster-users mailing list Gluster-users@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] SNMP monitoring

2014-11-10 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
to query it through the command line which I can then call via SNMP – in an ideal world someone will have done that already… Regards Paul From: gluster-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] On Behalf Of Juan José Pavlik Salles Sent: 11 November 2014 00:05

Re: [Gluster-users] iowait - ext4 + ssd journal

2014-11-03 Thread Paul Robert Marino
Use XFS instead of EXT4. There are many very good reasons it's the new default filesystem in RHEL 7. Also SSDs are faster at random IO and small files; however a properly built RAID of spinning disks is still faster at linear reads of large files. In general qemu does large linear reads or at least very
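A commonly cited way to prepare an XFS brick (the 512-byte inode size leaves room for Gluster's extended attributes); the device and mount point are hypothetical:
  mkfs.xfs -i size=512 /dev/sdb1
  mount -o noatime /dev/sdb1 /bricks/brick1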

Re: [Gluster-users] Firewall ports with v 3.5.2 grumble time

2014-10-31 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
that is a fundamental behaviour change should surely be a whole lot easier to find than something that is blatantly wrong. Anyhow thanks for the clarification. Paul From: Joe Julian [mailto:j...@julianfamily.org] Sent: 30 October 2014 19:22 To: Todd Stansell; Osborne, Paul (paul.osbo

[Gluster-users] Firewall ports with v 3.5.2 grumble time

2014-10-30 Thread Osborne, Paul (paul.osbo...@canterbury.ac.uk)
in the documentation and are there any other ports that I should be aware of? Thanks Paul -- Paul Osborne Senior Systems Engineer Infrastructure Services IT Department Canterbury Christ Church University ___ Gluster-users mailing list Gluster-users@gluster.org http

[Gluster-users] WORM seems to be broken.

2014-09-23 Thread Paul Guo
Here are the steps to reproduce this issue. (gluster version 3.5.2) On one server lab1 (There is another server lab2 for replica 2): [root@lab1 ~]# gluster volume set g1 worm on volume set: success [root@lab1 ~]# gluster volume stop g1 Stopping volume will make its data inaccessible. Do you

Re: [Gluster-users] error when using mount point as a brick directory.

2014-09-15 Thread Paul Guo
: Claudio Kuenzler; c...@claudiokuenzler.com; Date: Sep 12, 2014 To: Juan José Pavlik Salles jjpav...@gmail.com; Cc: gluster-users gluster-users@gluster.org; Paul Guo bigpaul...@foxmail.com; Subject: Re: [Gluster-users] error when using mount point as a brick directory. Thanks for the hint about

Re: [Gluster-users] Questions about gluster reblance

2014-09-11 Thread Paul Guo
Hello Shyam. Thanks for the reply. Please see my reply below, starting with [paul:] Please add me to the address list besides gluster-users when replying so that I can reply more easily, since I subscribed to gluster-users in digest mode (no other choice, if I remember correctly). Date: Wed, 10 Sep

Re: [Gluster-users] glusterfs replica volume self heal dir very slow!!why?

2014-09-07 Thread Paul Robert Marino
It's the small files; there is an overhead on any operation on lots of small files, and this is not unique to Gluster. Also, are you using XFS as the underlying filesystem? If you are not, that would play a big part; ext has issues with performance when dealing with small files due to its over-reliance

Re: [Gluster-users] Expanding Volumes and Geo-replication

2014-09-04 Thread Paul Mc Auley
On 04/09/2014 09:24, M S Vishwanath Bhat wrote: On 04/09/14 00:33, Vijaykumar Koppad wrote: On Wed, Sep 3, 2014 at 8:20 PM, M S Vishwanath Bhat vb...@redhat.com mailto:vb...@redhat.com wrote: On 01/09/14 23:09, Paul Mc Auley wrote: Is geo-replication from a replica 3 volume

[Gluster-users] Expanding Volumes and Geo-replication

2014-09-01 Thread Paul Mc Auley
to add the passwordless SSH key back in? (As opposed to the restricted secret.pem) For that matter, in the initial setup is it an expected failure mode that the initial geo-replication create will fail if the slave host's SSH key isn't known? Thanks, Paul

Re: [Gluster-users] NFS to Gluster Hangs

2014-06-10 Thread Paul Robert Marino
I've also seen this happen when there is a firewall in the middle and nfslockd malfunctioned because of it. On Tue, Jun 10, 2014 at 12:20 PM, Gene Liverman glive...@westga.edu wrote: Thanks! I turned off drc as suggested and will have to wait and see how that works. Here are the packages I have

Re: [Gluster-users] [Gluster-devel] [RFC] GlusterFS Operations Guide

2014-06-03 Thread Paul Cuzner
This is a really good initiative Lala. Anything that helps Operations folks always gets my vote :) I've added a few items to the etherpad. Cheers, PC - Original Message - From: Lalatendu Mohanty lmoha...@redhat.com To: gluster-users@gluster.org, gluster-de...@gluster.org

Re: [Gluster-users] User-serviceable snapshots design

2014-05-05 Thread Paul Cuzner
Just one question relating to thoughts around how you apply a filter to the snapshot view from a user's perspective. In the considerations section, it states - We plan to introduce a configurable option to limit the number of snapshots visible under the USS feature. Would it not be possible

Re: [Gluster-users] libgfapi failover problem on replica bricks

2014-04-21 Thread Paul Penev
brick failure. What this means to me: there's a problem in libgfapi, gluster 3.4.2 and 3.4.3 (at least) and/or kvm 1.7.1 (I'm running the latest 1.7 source tree in production). Joe: we're in your hands. I hope you find the problem somewhere. Paul

Re: [Gluster-users] libgfapi failover problem on replica bricks

2014-04-21 Thread Paul Penev
on swap-device (252:0:96626664) Read-error on swap-device (252:0:96626672) Read-error on swap-device (252:0:96626680) Read-error on swap-device (252:0:96626688) This is all. Not much I'm afraid. Paul 2014-04-21 18:21 GMT+02:00 Paul Penev ppqu...@gmail.com: Joe, it will take some time for redo

Re: [Gluster-users] libgfapi failover problem on replica bricks

2014-04-21 Thread Paul Penev
that libgfapi is responsible for maintaining connections to the bricks and to reestablish them as needed (makes sense, but feel free to prove me wrong). Paul ___ Gluster-users mailing list Gluster-users@gluster.org http://supercolony.gluster.org/mailman

Re: [Gluster-users] libgfapi failover problem on replica bricks

2014-04-21 Thread Paul Penev
I sent the brick logs earlier. But I'm not able to produce logs from events in KVM. I can't find any logging or debugging interface. It is somewhat weird. Paul 2014-04-21 18:30 GMT+02:00 Joe Julian j...@julianfamily.org: I don't expect much from the bricks either, but in combination
