Re: [Gluster-users] No gluster NFS server on localhost

2020-01-06 Thread Jim Kinney
I am 99% certain that gluster-nfs was deprecated several major releases back and not dropped in version 7. NFS-ganesha is the recommended replacement. On January 6, 2020 6:28:28 AM EST, DUCARROZ Birgit wrote: >Hi all, > >I installed glusterfs 7.0 and wanted to ask if gluster NFS is still

[Gluster-users] mix of replicated and distributed bricks

2019-12-03 Thread Jim Kinney
All, I need to have a portion of my gluster setup in high-performance mode while the rest is in high-availability mode. I currently have a replica 3 setup with about 180TB on each of three servers. I want to add 4 new servers to act as distributed brick sources (feeding data over a 100G IB

Re: [Gluster-users] untrusted users

2019-10-22 Thread Jim Kinney
I think we can build something on top of it with only a few tweaks. It supports ACLs already. If we could write a layer such that we force a container to be able to mount as the specified user only or not mount at all, we have what we want already. Thanks &

Re: [Gluster-users] untrusted users

2019-10-21 Thread Jim Kinney
Shell access to untrusted users? I would fight that tooth and nail as a sysadmin. Users that are untrusted get their accounts deactivated. If they have no sudo, they can't mount. Make mounts for them in fstab. Set ownership and groups on mount points so each user is restricted to their folder only.
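The fstab approach Jim describes can be sketched as follows; the hostname, volume name and mount path are hypothetical, and the chown/chmod step is what actually restricts each user to their own folder:

```
# /etc/fstab -- fuse mount of a gluster volume (hypothetical names)
server1:/homevol  /data/alice  glusterfs  defaults,_netdev  0 0

# after mounting, lock the directory down to its owner:
#   chown alice:alice /data/alice
#   chmod 0700 /data/alice
```

Because the mount is defined in fstab by root, the untrusted user never needs mount privileges at all.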

Re: [Gluster-users] du output showing corrupt file system

2019-08-22 Thread Jim Kinney
e right and I don't have any problem in accessing issue reported folders but only when running "du" command for the folder it throws error msg. regards, Amudhan. On Wed, Aug 21, 2019 at 6:22 PM Jim Kinney wrote: > Run the du com

Re: [Gluster-users] du output showing corrupt file system

2019-08-21 Thread Jim Kinney
ill a user space symlink error. It's just compounded by gluster. On August 21, 2019 3:49:45 AM EDT, Amudhan P wrote: >it is definitely issue with gluster there is no symlink involved. > > >On Tue, Aug 20, 2019 at 5:08 PM Jim Kinney >wrote: > >> That's not necessarily

Re: [Gluster-users] du output showing corrupt file system

2019-08-20 Thread Jim Kinney
That's not necessarily a gluster issue. Users can create symlinks from a subdirectory up to a parent and that will create a loop. On August 20, 2019 2:22:44 AM EDT, Amudhan P wrote: >Hi, > >Can anyone suggest what could be the error and to fix this issue? > >regards >Amudhan P > >On Sat, Aug
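The user-created symlink loop Jim describes is easy to reproduce on any filesystem, gluster or not; a minimal sketch (paths are made up):

```python
import errno
import os
import tempfile

# A symlink that resolves back to itself: any tool that follows
# links (du, find -L, os.stat) hits ELOOP, which du reports as an
# error even though nothing on disk is corrupt.
workdir = tempfile.mkdtemp()
loop = os.path.join(workdir, "loop")
os.symlink(loop, loop)  # the link's target is the link itself

try:
    os.stat(loop)  # follows the link, exactly as du does
except OSError as e:
    assert e.errno == errno.ELOOP
    print("du-style traversal fails with:", os.strerror(e.errno))

# lstat-style inspection of the link itself still succeeds
print(os.path.islink(loop))  # True
```

This is why the error can look like filesystem corruption from du's point of view while the bricks themselves are healthy.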

Re: [Gluster-users] Advice for setup: SW RAID 6 vs JBOD

2019-06-06 Thread Jim Kinney
I have about 200TB in a gluster replicate only 3-node setup. We stopped using hardware RAID6 after the third drive failed on one array at the same time we replaced the other two and before recovery could complete. 200TB is a mess to resync. So now each hard drive is a single entity. We add 1 drive

Re: [Gluster-users] Proposing to previous ganesha HA cluster solution back to gluster code as gluster-7 feature

2019-04-30 Thread Jim Kinney
+1! I'm using nfs-ganesha in my next upgrade so my client systems can use NFS instead of fuse mounts. Having an integrated, designed-in process to coordinate multiple nodes into an HA cluster will be very welcome. On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan wrote: >Hi all, > >Some of

Re: [Gluster-users] Expanding brick size in glusterfs 3.7.11

2019-04-25 Thread Jim Kinney
I've expanded bricks using lvm and there were no problems at all with gluster seeing the change. The expansion was performed basically simultaneously on both existing bricks of a replica. I would expect the raid expansion to behave similarly. On April 25, 2019 9:05:45 PM EDT, Pat Haley wrote:
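The in-place LVM expansion Jim describes usually looks like this on each brick host; the volume group, LV and mount names are hypothetical, and the same steps must be run on every replica:

```
# grow the logical volume backing the brick
lvextend -L +10T /dev/vg_bricks/lv_brick1
# grow the filesystem to fill it (xfs_growfs for XFS, resize2fs for ext4)
xfs_growfs /bricks/brick1
# gluster picks up the new brick size with no further action:
df -h /bricks/brick1
```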

Re: [Gluster-users] Rsync in place of heal after brick failure

2019-04-01 Thread Jim Kinney
Nice! I didn't use -H -X and the system had to do some clean up. I'll add this in my next migration progress as I move 120TB to new hard drives. On Mon, 2019-04-01 at 14:27 -0400, Tom Fite wrote: > Hi all, > I have a very large (65 TB) brick in a replica 2 volume that needs to > be re-copied from

[Gluster-users] upgrade best practices

2019-03-29 Thread Jim Kinney
Currently running 3.12 on Centos 7.6. Doing cleanups on split-brain and out of sync, need heal files. We need to migrate the three replica servers to gluster v. 5 or 6. Also will need to upgrade about 80 clients as well. Given that a complete removal of gluster will not touch the 200+TB of data

Re: [Gluster-users] Configure SSH -command

2019-03-24 Thread Jim Kinney
If the os is using selinux, a policy change is needed to allow the ssh daemon to connect to the new port. Look at audit2allow for a solution. On March 24, 2019 8:04:27 AM EDT, Andrey Volodin wrote: >you may find some reference here: >http://fatphil.org/linux/ssh_ports.html > >On Sun, Mar 24,
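The SELinux adjustment Jim points to is typically one of these two routes; port 2222 is a hypothetical example:

```
# allow sshd to bind a non-standard port under SELinux
semanage port -a -t ssh_port_t -p tcp 2222

# or derive a local policy module from the logged AVC denials:
ausearch -m avc -ts recent | audit2allow -M sshd-port
semodule -i sshd-port.pp
```

The `semanage port` route is preferred when only the port label is wrong, since it avoids installing a custom policy module.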

[Gluster-users] .glusterfs GFID links

2019-03-20 Thread Jim Kinney
I have half a zillion broken symlinks in the .glusterfs folder on 3 of 11 volumes. It doesn't make sense to me that a GFID should linklike some of the ones below: /data/glusterfs/home/brick/brick/.glusterfs/9e/75/9e75a16f-fe4f-411e- 937d-1a6c4758fd0e -> ../../c7/6f/c76ff719-dde6-41f5-a327-

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Jim Kinney
let us know the output > of `gluster volume info`? > Regards, > Vijay > > On Tue, Mar 19, 2019 at 12:53 PM Jim Kinney > wrote: > > This python will fail when writing to a file in a glusterfs fuse > > mounted directory. > > > > import mmap > >

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Jim Kinney
# note that new content must have same size
mm[6:] = " world!\n"
# ... and read again using standard file methods
mm.seek(0)
print mm.readline()  # prints "Hello world!"
# close the map
mm.close()
On Tue, 2019-03-19 at 12:06 -0400, Jim Kinney wrote: > Nativ
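The snippet above is the tail of the standard library's mmap example; a complete, runnable Python 3 version (file path is made up) shows the pattern that was failing on glusterfs fuse mounts:

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "hello.txt")
with open(path, "wb") as f:
    f.write(b"Hello Python!\n")

with open(path, "r+b") as f:
    # length 0 maps the whole file
    mm = mmap.mmap(f.fileno(), 0)
    print(mm.readline())       # b'Hello Python!\n'
    # new content must be the same size as the slice it replaces
    mm[6:] = b" world!\n"
    # ... and read again using standard file methods
    mm.seek(0)
    print(mm.readline())       # b'Hello  world!\n'
    mm.close()
```

On a local filesystem or an NFS mount this runs cleanly; the thread's complaint is that the `mmap.mmap()` call errors out on a fuse-mounted gluster volume.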

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Jim Kinney
+0530, Amar Tumballi Suryanarayan wrote: > Hi Jim, > > On Tue, Mar 19, 2019 at 6:21 PM Jim Kinney > wrote: > > > > > > > > Issues with glusterfs fuse mounts cause issues with python file > > open for write. We have to use nfs to avoid this. &

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2019-03-19 Thread Jim Kinney
For my uses, the RDMA transport is essential. Much of my storage is used for HPC systems and IB is the network layer. We still use v3.12. Issues with glusterfs fuse mounts cause issues with python file open for write. We have to use nfs to avoid this. Really want to see better back-end

Re: [Gluster-users] Fwd: Added bricks with wrong name and now need to remove them without destroying volume.

2019-02-28 Thread Jim Kinney
bricks, how does the metadata get created so the new cluster volume can find and access the data? It seems like I would be laying the glusterfs on top of hardware and "hiding" the data. On Wed, Feb 27, 2019 at 5:08 PM Jim Kinney wrote: > It

Re: [Gluster-users] Fwd: Added bricks with wrong name and now need to remove them without destroying volume.

2019-02-27 Thread Jim Kinney
ng a brick from the volume, which in my > beginner's mind, would be a straight-forward way of remedying the > problem. Hopefully once the empty bricks are removed, the "missing" > data will be visible again in the volume. > > On Wed, Feb 27, 2019 at 3:59 PM Jim Kinney

Re: [Gluster-users] Fwd: Added bricks with wrong name and now need to remove them without destroying volume.

2019-02-27 Thread Jim Kinney
Keep in mind that gluster is a metadata process. It doesn't really touch the actual volume files. The exception is the .glusterfs and .trashcan folders in the very top directory of the gluster volume. When you create a gluster volume from brick, it doesn't format the filesystem. It uses what's

Re: [Gluster-users] Gluster and bonding

2019-02-25 Thread Jim Kinney
Unless the link between the two switches is set as a dedicated management link, won't that link create a problem? On the dual switch setup I have, there's a dedicated connection that handles inter-switch data. I'm not using bonding or teaming at the servers as I have 40Gb ethernet nics. Gluster

Re: [Gluster-users] Can't write to volume using vim/nano

2019-01-24 Thread Jim Kinney
ain right now. Recommend you to use IPoIB option, and use tcp/socket transport type (which is default). That should mostly fix all the issues. -Amar On Thu, Jan 24, 2019 at 5:31 AM Jim Kinney wrote: > That really sounds like a bug with the sha

Re: [Gluster-users] Can't write to volume using vim/nano

2019-01-23 Thread Jim Kinney
ume ID: b5ef065f-1ba2-481f-8108-e8f6d2d3f036 Status: Started Snapshot Count: 0 Number of Bricks: 6 Transport-type: rdma Bricks: Brick1: pfs01-ib:/mnt/data Brick2: pfs02-ib:/mnt/data Brick3: pfs03-ib:/mnt/data Brick4: pfs04-ib:/mnt/data Brick5:

Re: [Gluster-users] Can't write to volume using vim/nano

2019-01-23 Thread Jim Kinney
Check permissions on the mount. I have multiple dozens of systems mounting 18 "exports" using fuse and it works for multiple user read/write based on user access permissions to the mount point space. /home is mounted for 150+ users plus another dozen+ lab storage spaces. I do manage user access

Re: [Gluster-users] Volume Creation - Best Practices

2018-08-25 Thread Jim Kinney
I use single disks as a physical volume. Each gluster host is identical. As more space is needed for a mount point, a set of disks is added to the logical volume of each host. As my primary need is HA, all of my host nodes are simply replicates. Prior to this config I had a physical raid6

Re: [Gluster-users] Question to utime feature for release 4.1.0

2018-08-15 Thread Jim Kinney
Is this 'or' of atime settings also in 3.10/3.12 versions? If it's set off in gluster but on in mount will atime be updated? On August 15, 2018 2:15:17 PM EDT, Kotresh Hiremath Ravishankar wrote: >Hi David, > >The feature is to provide consistent time attributes (atime, ctime, >mtime) >across

Re: [Gluster-users] gluster 3.12 memory leak

2018-08-13 Thread Jim Kinney
Did this get released yet? Fuse client mounting on a computational cluster that writes a few thousand files a day on a slow day is causing many oomkiller problems. On Tue, 2018-08-07 at 11:33 +0530, Hari Gowtham wrote: > Hi, > The reason for memory leak was found. The patch ( >

Re: [Gluster-users] Memory leak with the libgfapi in 3.12 ?

2018-08-01 Thread Jim Kinney
Hmm. I just had to jump through lots of issues with a gluster 3.12.9 setup under Ovirt. The mounts are stock fuse.glusterfs. The RAM usage had been climbing and I had to move VMs around, put hosts in maintenance mode, do updates, restart. When the VMs were moved back the memory usage dropped back

Re: [Gluster-users] Proposal to mark few features as Deprecated / SunSet from Version 5.0

2018-07-19 Thread Jim Kinney
Too bad the RDMA will be abandoned. It's the perfect transport for intranode processing and data sync. I currently use RDMA on a computational cluster between nodes and gluster storage. The older IB cards will support 10G IP and 40G IB. I've had some success with connectivity but am still

Re: [Gluster-users] storhaug, a utility for setting up gluster+nfs-ganesha highly available NFSv3, NFSv4, NFSv4.1

2018-06-15 Thread Jim Kinney
YAY!!! Glad to see this! Now for a specific use case question: I have a 3-node gluster service in replica 3. Each node has multiple network interfaces: a 40G ethernet and a 40G Infiniband with TCP. The infiniband is a separate IP network from the 40G ethernet. There is no (known) way to bridge the

Re: [Gluster-users] peer detach fails

2018-06-04 Thread Jim Kinney
AH HA! Found the errant 3rd node. In testing to use corosync for NFS a lock volume was created and that was still holding a use of the peer. Dropped that volume and the peer detached as expected. On Thu, 2018-05-31 at 14:41 +0530, Atin Mukherjee wrote: > On Wed, May 30, 2018 at 10:55 PM,

[Gluster-users] peer detach fails

2018-05-30 Thread Jim Kinney
All, I added a third peer for a arbiter brick host to replica 2 cluster. Then I realized I can't use it since it has no infiniband like the other two hosts (infiniband and ethernet for clients). So I removed the new arbiter bricks from all of the volumes. However, I can't detach the peer as it

Re: [Gluster-users] shard corruption bug

2018-05-29 Thread Jim Kinney
and RHEV/KVM data, trying to figure out if it's related. Thanks. On Fri, May 4, 2018 at 11:13 AM, Jim Kinney wrote: > I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left it to settle. No problems. I am now running replic

Re: [Gluster-users] Some more questions

2018-05-09 Thread Jim Kinney
It all depends on how you are set up on the distribute. Think RAID 10 with 4 drives - each pair stripes (distribute) and the pair of pairs replicates. On Wed, 2018-05-09 at 19:34 +, Gandalf Corvotempesta wrote: > Il giorno mer 9 mag 2018 alle ore 21:31 Jim Kinney <jim.kinney@gmail. >

Re: [Gluster-users] Some more questions

2018-05-09 Thread Jim Kinney
. On Wed, 2018-05-09 at 19:25 +, Gandalf Corvotempesta wrote: > Il giorno mer 9 mag 2018 alle ore 21:22 Jim Kinney <jim.kinney@gmail. > com> > ha scritto: > > You can change the replica count. Add a fourth server, add it's > > brick to > > existing volume w

Re: [Gluster-users] Some more questions

2018-05-09 Thread Jim Kinney
On Wed, 2018-05-09 at 18:26 +, Gandalf Corvotempesta wrote: > Ok, some more question as I'm still planning our SDS (but I'm prone > to use > LizardFS, gluster is too inflexible) > > Let's assume a replica 3: > > 1) currently, is not possbile to add a single server and rebalance > like any >

[Gluster-users] 3.12, ganesha and storhaug

2018-05-09 Thread Jim Kinney
All, I am upgrading the storage cluster from 3.8 to 3.10 or 3.12. I have 3.12 on the ovirt cluster. I would like to change the client connection method to NFS/NFS-Ganesha as the FUSE method causes some issues with heavy python users (mmap errors on file open for write). I see that nfs-ganesha

Re: [Gluster-users] shard corruption bug

2018-05-04 Thread Jim Kinney
I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left it to settle. No problems. I am now running replica 4 (preparing to remove a brick and host to replica 3). On Fri, 2018-05-04 at 14:24 +, Gandalf Corvotempesta wrote: > Il giorno ven 4 mag 2018 alle ore 14:06 Jim Kin

Re: [Gluster-users] shard corruption bug

2018-05-04 Thread Jim Kinney
It stopped being an outstanding issue at 3.12.7. I think it's now fixed. On May 4, 2018 6:28:40 AM EDT, Gandalf Corvotempesta wrote: >Hi to all >is the "famous" corruption bug when sharding enabled fixed or still a >work >in progress ?

Re: [Gluster-users] Reconstructing files from shards

2018-04-27 Thread Jim Kinney
For me, the process of copying out the drive file from Ovirt is a tedious, very manual process. Each vm has a single drive file with tens of thousands of shards each. Typical vm size is 100G for me. And it's all mostly sparse. So, yes, a copy out from the gluster share is best. Did the

Re: [Gluster-users] Reconstructing files from shards

2018-04-23 Thread Jim Kinney
Are there any plans to create an unsharding tool? On April 23, 2018 4:11:06 AM EDT, Gandalf Corvotempesta wrote: >2018-04-23 9:34 GMT+02:00 Alessandro Briosi : >> Is it that really so? > >yes, i've opened a bug asking developers to block

Re: [Gluster-users] Reconstructing files from shards

2018-04-22 Thread Jim Kinney
So a stock ovirt with gluster install that uses sharding A. Can't safely have sharding turned off once files are in use B. Can't be expanded with additional bricks Ouch. On April 22, 2018 5:39:20 AM EDT, Gandalf Corvotempesta wrote: >Il dom 22 apr 2018, 10:46

Re: [Gluster-users] Is the size of bricks limiting the size of files I can store?

2018-04-13 Thread Jim Kinney
On April 12, 2018 3:48:32 PM EDT, Andreas Davour <a...@update.uu.se> wrote: >On Mon, 2 Apr 2018, Jim Kinney wrote: > >> On Mon, 2018-04-02 at 20:07 +0200, Andreas Davour wrote: >>> On Mon, 2 Apr 2018, Nithya Balachandran wrote: >>> >>>> On 2 April

Re: [Gluster-users] Is the size of bricks limiting the size of files I can store?

2018-04-02 Thread Jim Kinney
On Mon, 2018-04-02 at 20:07 +0200, Andreas Davour wrote: > On Mon, 2 Apr 2018, Nithya Balachandran wrote: > > > On 2 April 2018 at 14:48, Andreas Davour wrote: > > > > > Hi > > > > > > I've found something that works so weird I'm certain I have > > > missed how > > > gluster

Re: [Gluster-users] Kernel NFS on GlusterFS

2018-03-07 Thread Jim Kinney
Gluster does the sync part better than corosync. It's not an active/passive failover system. It's more all-active. Gluster handles the recovery once all nodes are back online. That requires the client tool chain to understand that a write goes to all storage devices, not just the active one. 3.10 is

Re: [Gluster-users] Convert replica 2 to replica 2+1 arbiter

2018-02-25 Thread Jim Kinney
e...@arnes.si> wrote: > I must ask again, just to be sure. Is what you are proposing definitely supported in v3.8? > Kind regards, > Mitja > On 25/02/2018 13:55, Jim Kinney wrote: >> gluster volume add-brick volname replica 3 arbiter 1 brickhost:brickp

Re: [Gluster-users] Convert replica 2 to replica 2+1 arbiter

2018-02-25 Thread Jim Kinney
gluster volume add-brick volname replica 3 arbiter 1 brickhost:brickpath/to/new/arbitervol Yes. The replica 3 looks odd. Somewhere in 3.12 (?) or not until v4 a change in command will happen so it won't count the arbiter as a replica. On February 25, 2018 5:05:04 AM EST, "Mitja Mihelič"
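The conversion Jim describes, written out with hypothetical volume and host names:

```
# add one arbiter brick to an existing replica 2 volume
gluster volume add-brick myvol replica 3 arbiter 1 arb-host:/bricks/myvol/arbiter
# heal populates the arbiter with file metadata (it stores no file data)
gluster volume heal myvol
# volume info then reports: Number of Bricks: 1 x (2 + 1) = 3
gluster volume info myvol
```

The `replica 3` in the command is the counting oddity discussed in the thread: the arbiter is counted as a third replica even though it holds only metadata.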

Re: [Gluster-users] Replacing a third data node with an arbiter one

2018-01-25 Thread Jim Kinney
On Fri, 2018-01-26 at 07:12 +0530, Sankarshan Mukhopadhyay wrote: > On Fri, Jan 26, 2018 at 7:05 AM, Ravishankar N wrote: > > > > On 01/24/2018 07:20 PM, Hoggins! wrote: > > > > Hello, > > > > The subject says it all. I have a replica 3 cluster : > > > >

Re: [Gluster-users] Reading over than the file size on dispersed volume

2018-01-12 Thread Jim Kinney
On a disperse setup, shouldn't the various nodes have different parts of the files, and thus different md5sums? On Fri, 2018-01-12 at 18:17 +0900, jungeun kim wrote: > Hi All, > > I'm using gluster as a dispersed volume and I'm writing to ask about a very serious thing. > I have 3 servers and

Re: [Gluster-users] Integration of GPU with glusterfs

2018-01-12 Thread Jim Kinney
On January 11, 2018 10:58:28 PM EST, Lindsay Mathieson wrote: >On 12/01/2018 3:14 AM, Darrell Budic wrote: >> It would also add physical resource requirements to future client >> deploys, requiring more than 1U for the server (most likely), and I’m not likely

Re: [Gluster-users] Integration of GPU with glusterfs

2018-01-11 Thread Jim Kinney
I like the idea immensely. As long as the gpu usage can be specified for server-only, client and server, client and server with a client limit of X. Don't want to take gpu cycles away from machine learning for file IO. Also must support multiple GPUs and GPU pinning. Really useful for

[Gluster-users] different names for bricks

2018-01-08 Thread Jim Kinney
I just noticed that gluster volume info foo and gluster volume heal foo statistics use different indices for brick numbers. Info uses 1-based but heal statistics uses 0-based. gluster volume info clifford Volume Name: clifford Type: Distributed-Replicate Volume ID:

Re: [Gluster-users] Syntax for creating arbiter volumes in gluster 4.0

2017-12-20 Thread Jim Kinney
I think the replica 2 arbiter 1 is more clear towards the intent of the configuration. I would also support: replica n brick1,brick2,...,brickN arbiter m brickA,brickB,... as that makes it very clear which brick(s) should be the arbiter(s). On Wed, 2017-12-20 at 15:44 +0530, Ravishankar N wrote: > Hi, > > The existing syntax

Re: [Gluster-users] SAMBA VFS module for GlusterFS crashes

2017-12-05 Thread Jim Kinney
Keep in mind a local disk is 3,6,12 Gbps but a network connection is typically 1Gbps. A local disk quad in raid 10 will outperform a 10G ethernet (especially using SAS drives). On December 5, 2017 6:11:38 AM EST, Riccardo Murri wrote: >Hello, > >I'm trying to set up a

Re: [Gluster-users] Adding a slack for communication?

2017-11-08 Thread Jim Kinney
The archival process of the mailing list makes searching for past issues possible. Slack, and irc in general, is a more closed garden than a public archived mailing list. That said, irc/slack is good for immediate interaction between people, say, gluster user with a nightmare and a

Re: [Gluster-users] gfid entries in volume heal info that do not heal

2017-11-06 Thread Jim Kinney
them have the link count 2? > If the link count is 2 then "find <brickpath> -samefile <brickpath>/.glusterfs/<first two bits of gfid>/<next two bits of gfid>/<full gfid>" > should give you the file path. > > Regards, > Karthik > > On Tue, Oct 24, 2017 at 3:28 AM, Jim Kinney <jim.kin...@gmail.com> > wrote: > > > &
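Karthik's recipe can be tried on any POSIX filesystem, since a `.glusterfs` gfid entry is just a second hard link to the real file; the brick layout and gfid below are fabricated for illustration:

```shell
# simulate a brick: a data file plus gluster's gfid-named hard link
brick=$(mktemp -d)
mkdir -p "$brick/.glusterfs/9e/75" "$brick/data"
echo payload > "$brick/data/file.txt"
ln "$brick/data/file.txt" "$brick/.glusterfs/9e/75/9e75a16f-fake-gfid"

# link count is 2, so the file still exists somewhere under the brick
stat -c %h "$brick/.glusterfs/9e/75/9e75a16f-fake-gfid"

# recover the real path from the gfid entry
find "$brick" -samefile "$brick/.glusterfs/9e/75/9e75a16f-fake-gfid" \
    -not -path '*/.glusterfs/*'
```

When the link count is 1 (as in Jim's case), the data file is gone and only the orphaned gfid entry remains, so `find -samefile` returns nothing outside `.glusterfs`.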

Re: [Gluster-users] Gluster Health Report tool

2017-10-25 Thread Jim Kinney
Very nice! Thanks! On October 25, 2017 8:11:36 AM EDT, Aravinda wrote: >Hi, > >We started a new project to identify issues/misconfigurations in >Gluster nodes. This project is very young and not yet ready for >Production use, Feedback on the existing reports and ideas for

Re: [Gluster-users] gfid entries in volume heal info that do not heal

2017-10-24 Thread Jim Kinney
gfid>//" > should give you the file path. > > Regards, > Karthik > > On Tue, Oct 24, 2017 at 3:28 AM, Jim Kinney <jim.kin...@gmail.com> > wrote: > > > > > > > > I'm not so lucky. ALL of mine show 2 links and none have the attr > > d

Re: [Gluster-users] gfid entries in volume heal info that do not heal

2017-10-23 Thread Jim Kinney
Sent: Monday, October 23, 2017 1:52 AM > > To: Jim Kinney <jim.kin...@gmail.com>; Matt Waymack <mwaymack@nsgdv.c > om> > > Cc: gluster-users <Gluster-users@gluster.org> > > Subject: Re: [Gluster-users] gfid entries in volume heal info that do > not heal

Re: [Gluster-users] gfid entries in volume heal info that do not heal

2017-10-19 Thread Jim Kinney
I've been following this particular thread as I have a similar issue (RAID6 array failed out with 3 dead drives at once while a 12 TB load was being copied into one mounted space - what a mess). I have >700K GFID entries that have no path data. Example: getfattr -d -e hex -m .

Re: [Gluster-users] Access from multiple hosts where users have different uid/gid

2017-10-05 Thread Jim Kinney
Ouch! I use a unified UID/GID process. I personally use FreeIPA. It can also be done with just LDAP or (not recommended for security reasons) NIS+. Barring those, a well-disciplined manual process will work by copying passwd, group, shadow and gshadow files around to all systems. Create new users

[Gluster-users] date/time on gluster-users mailman

2017-09-22 Thread Jim Kinney
For the past week or so (since the spam issue) the gluster-users mail arrives dated 12 hours in the past. Did a fix for the spam upset the time of the server?

[Gluster-users] recover from failed RAID

2017-09-05 Thread Jim Kinney
All, I had a "bad timing" event where I lost 3 drives in a RAID6 array and the structure of all of the LVM pools and nodes was lost. All total, nearly 100TB of storage was scrambled. This array was 1/2 of a redundant (replica 2) gluster config (will be adding additional 3rd soon for split

[Gluster-users] returning from a failed RAID

2017-09-05 Thread Jim Kinney
All, I had a "bad timing" event where I lost 3 drives in a RAID6 array and the structure of all of the LVM pools and nodes was lost. This array was 1/2 of a redundant (replica 2) gluster config (will be adding additional 3rd soon for split brain/redundancy with failure issues). The failed