I am 99% certain that gluster-nfs was deprecated several major releases back
and not dropped in version 7. NFS-Ganesha is the recommended option.
On January 6, 2020 6:28:28 AM EST, DUCARROZ Birgit
wrote:
>Hi all,
>
>I installed glusterfs 7.0 and wanted to ask if gluster NFS is still
All,
I need to have a portion of my gluster setup in high-performance mode
while the rest is in high-availability mode. I currently have a replica
3 setup with about 180TB on each of three servers.
I want to add 4 new servers to act as distributed brick sources
(feeding data over a 100G IB connec
s what we are trying to do here. We like GlusterFS and I think
>we
>can build something on top of it with only a few tweaks. It supports
>ACLs
>already. If we could write a layer such that we force a container to be
>able to mount as the specified user only or not mount at all, we have
Shell access for untrusted users: I would fight that tooth and nail as a
sysadmin. Users that are untrusted get their accounts deactivated.
If they have no sudo, they can't mount. Make mounts for them in fstab. Set
ownership and groups on mount points so each user is restricted to their folder
only. Us
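A rough sketch of what I mean (device, paths, and the username here are made-up examples, not from this setup):

# /etc/fstab -- the admin mounts the volume once, system-wide
/dev/vg_data/lv_projects  /srv/projects  xfs  defaults,noatime  0 2

# per-user directories under the mount point, locked to the owner
mkdir -p /srv/projects/alice
chown alice:alice /srv/projects/alice
chmod 0700 /srv/projects/alice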
issue right and I don't have any problem in
>accessing issue reported folders but only when running "du" command for
>the
>folder it throws error msg.
>
>regards
>Amudhan
>
>
>On Wed, Aug 21, 2019 at 6:22 PM Jim Kinney
>wrote:
>
>> Run the du
's still a user space symlink error. It's just compounded by gluster.
On August 21, 2019 3:49:45 AM EDT, Amudhan P wrote:
>it is definitely issue with gluster there is no symlink involved.
>
>
>On Tue, Aug 20, 2019 at 5:08 PM Jim Kinney
>wrote:
>
>> That
That's not necessarily a gluster issue. Users can create symlinks from a
subdirectory up to a parent and that will create a loop.
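A trivial sketch of how that happens (directory names invented here):

mkdir -p /mnt/glustervol/project/sub
cd /mnt/glustervol/project/sub
ln -s .. up        # "up" now points back at the parent directory
du -L .            # following symlinks (-L) walks the cycle and warns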
On August 20, 2019 2:22:44 AM EDT, Amudhan P wrote:
>Hi,
>
>Can anyone suggest what could be the error and to fix this issue?
>
>regards
>Amudhan P
>
>On Sat, Aug 17
I have about 200TB in a gluster replicate-only 3-node setup. We stopped
using hardware RAID6 after a third drive failed on one array while the
other two were being replaced, before recovery could complete.
200TB is a mess to resync.
So now each hard drive is a single entity. We add 1 drive
+1!
I'm using nfs-ganesha in my next upgrade so my client systems can use NFS
instead of fuse mounts. Having an integrated, designed-in process to coordinate
multiple nodes into an HA cluster will be very welcome.
On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan
wrote:
>Hi all,
>
>Some of yo
I've expanded bricks using LVM and there were no problems at all with gluster
seeing the change. The expansion was performed basically simultaneously on both
existing bricks of a replica. I would expect the raid expansion to behave
similarly.
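For the record, the expansion itself is nothing exotic. A sketch of the steps (VG/LV names, size, volume name, and mount point are placeholders, XFS bricks assumed), run on each replica host:

lvextend -L +10T /dev/vg_bricks/lv_brick1     # grow the LV under the brick
xfs_growfs /bricks/brick1                     # grow the filesystem online, mounted
gluster volume status myvol detail | grep 'Disk Space'   # confirm gluster sees the new size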
On April 25, 2019 9:05:45 PM EDT, Pat Haley wrote:
>
Nice! I didn't use -H -X and the system had to do some clean up.
I'll add this to my next migration process as I move 120TB to new hard
drives.
On Mon, 2019-04-01 at 14:27 -0400, Tom Fite wrote:
> Hi all,
> I have a very large (65 TB) brick in a replica 2 volume that needs to
> be re-copied from s
Currently running 3.12 on CentOS 7.6. Doing cleanups on split-brain and
out-of-sync files that need healing.
We need to migrate the three replica servers to gluster v. 5 or 6. Also
will need to upgrade about 80 clients as well. Given that a complete
removal of gluster will not touch the 200+TB of data on
If the OS is using SELinux, a policy change is needed to allow the ssh daemon
to connect to the new port. Look at audit2allow for a solution.
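For example (the port number is just an illustration), labeling the new port is usually enough, with audit2allow as the fallback for anything else the audit log shows:

semanage port -a -t ssh_port_t -p tcp 2222          # allow sshd on port 2222
# if denials remain, build a local module from the audit log
grep sshd /var/log/audit/audit.log | audit2allow -M sshd_local
semodule -i sshd_local.pp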
On March 24, 2019 8:04:27 AM EDT, Andrey Volodin wrote:
>you may find some reference here:
>http://fatphil.org/linux/ssh_ports.html
>
>On Sun, Mar 24, 201
I have half a zillion broken symlinks in the .glusterfs folder on 3 of
11 volumes. It doesn't make sense to me that a GFID should link like
some of the ones below:
/data/glusterfs/home/brick/brick/.glusterfs/9e/75/9e75a16f-fe4f-411e-
937d-1a6c4758fd0e -> ../../c7/6f/c76ff719-dde6-41f5-a327-
7e13fdf6
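A quick way to list or count the dangling ones, as a sketch using the brick path from the example above:

# symlinks under .glusterfs whose targets no longer exist
find /data/glusterfs/home/brick/brick/.glusterfs -type l -xtype l
find /data/glusterfs/home/brick/brick/.glusterfs -type l -xtype l | wc -l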
let us know the output
> of `gluster volume info`?
> Regards,
> Vijay
>
> On Tue, Mar 19, 2019 at 12:53 PM Jim Kinney
> wrote:
> > This python will fail when writing to a file in a glusterfs fuse
> > mounted directory.
> >
> > import mmap
> >
# note that new content must have same size
mm[6:] = " world!\n"
# ... and read again using standard file methods
mm.seek(0)
print mm.readline()  # prints "Hello world!"
# close the map
mm.close()
On Tue, 2019-03-19 at 12:06 -0400, Jim Kinney wrote:
> Nativ
+0530, Amar Tumballi Suryanarayan wrote:
> Hi Jim,
>
> On Tue, Mar 19, 2019 at 6:21 PM Jim Kinney
> wrote:
> >
> >
> >
> > Issues with glusterfs fuse mounts cause issues with python file
> > open for write. We have to use nfs to avoid this.
&
0">For my uses, the RDMA transport is essential. Much of my storage is used for
HPC systems and IB is the network layer. We still use v3.12.
Issues with glusterfs fuse mounts cause issues with python file open for write.
We have to use nfs to avoid this.
Really want to see better back-end tool
e cluster volume, import bricks, how does the metadata get created so
>the
>new cluster volume can find and access the data? It seems like I would
>be
>laying the glusterfs on top on hardware and "hiding" the data.
>
>
>
>On Wed, Feb 27, 2019 at 5:08 PM
removing a brick from the volume, which in my
> beginner's mind, would be a straight-forward way of remedying the
> problem. Hopefully once the empty bricks are removed, the "missing"
> data will be visible again in the volume.
>
> On Wed, Feb 27, 2019 at 3:59 PM
Keep in mind that gluster is a metadata process. It doesn't really
touch the actual volume files. The exception is the .glusterfs and
.trashcan folders in the very top directory of the gluster volume.
When you create a gluster volume from brick, it doesn't format the
filesystem. It uses what's alre
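A hedged sketch of that point (hostnames, volume name, and paths are invented): the admin formats and mounts the brick, and gluster just layers its metadata on top.

mkfs.xfs -i size=512 /dev/vg_bricks/lv_brick1      # brick formatted by the admin, not gluster
mount /dev/vg_bricks/lv_brick1 /bricks/brick1
# gluster adds only .glusterfs/, .trashcan/ and xattrs on top of what's there
gluster volume create myvol replica 3 \
    host1:/bricks/brick1/data host2:/bricks/brick1/data host3:/bricks/brick1/data
gluster volume start myvol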
Unless the link between the two switches is set as a dedicated management link,
won't that link create a problem? On the dual switch setup I have, there's a
dedicated connection that handles inter-switch data. I'm not using bonding or
teaming at the servers as I have 40Gb ethernet nics. Gluster
rts in that domain right
>now.
>Recommend you to use IPoIB option, and use tcp/socket transport type
>(which
>is default). That should mostly fix all the issues.
>
>-Amar
>
>On Thu, Jan 24, 2019 at 5:31 AM Jim Kinney
>wrote:
>
>> That really sounds like a bug w
> > > Volume Name: gfs
> > > Type: Distribute
> > > Volume ID: b5ef065f-1ba2-481f-8108-e8f6d2d3f036
> > > Status: Started
> > > Snapshot Count: 0
> > > Number of Bricks: 6
> > > Transport-type: rdma
> > > Bricks:
> > > Brick1: pfs01-ib:/mnt/data
> > > Brick2: pfs02-ib:/mnt/data
> > > Brick3: pfs03-ib:/mnt/data
Check permissions on the mount. I have multiple dozens of systems
mounting 18 "exports" using fuse and it works for multiple user
read/write based on user access permissions to the mount point space.
/home is mounted for 150+ users plus another dozen+ lab storage spaces.
I do manage user access wit
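For what it's worth, a sketch of one of those client mounts (server names, volume, and mount point are placeholders); after that it's all plain POSIX ownership and ACLs:

# /etc/fstab on a client
gserver1:/homevol  /home  glusterfs  defaults,_netdev,acl,backup-volfile-servers=gserver2:gserver3  0 0

# ordinary permissions/ACLs decide who can read or write where
setfacl -m u:alice:rwx /home/lab1/shared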
I use single disks as physical volumes. Each gluster host is identical. As
more space is needed for a mount point, a set of disks is added to the logical
volume of each host. As my primary need is HA, all of my host nodes are simply
replicates.
Prior to this config I had a physical raid6 array
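A sketch of that add-a-set-of-disks step (device and VG/LV names are placeholders), run identically on every host; growing the filesystem afterwards is the same xfs_growfs step as in the earlier expansion note:

pvcreate /dev/sdf /dev/sdg                        # new disks become physical volumes
vgextend vg_bricks /dev/sdf /dev/sdg              # add them to the volume group
lvextend -l +100%FREE /dev/vg_bricks/lv_homevol   # hand the space to the mount's LV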
Is this 'or' of atime settings also in the 3.10/3.12 versions?
If it's set off in gluster but on in the mount, will atime be updated?
On August 15, 2018 2:15:17 PM EDT, Kotresh Hiremath Ravishankar
wrote:
>Hi David,
>
>The feature is to provide consistent time attributes (atime, ctime,
>mtime)
>across r
Did this get released yet? Fuse client mounting on a computational
cluster that writes a few thousand files a day on a slow day is causing
many oomkiller problems.
On Tue, 2018-08-07 at 11:33 +0530, Hari Gowtham wrote:
> Hi,
> The reason for memory leak was found. The patch (
> https://review.glus
Hmm. I just had to jump through lots of issues with a gluster 3.12.9
setup under Ovirt. The mounts are stock fuse.glusterfs. The RAM usage
had been climbing and I had to move VMs around, put hosts in
maintenance mode, do updates, restart. When the VMs were moved back the
memory usage dropped back t
Too bad the RDMA will be abandoned. It's the perfect transport for intranode
processing and data sync.
I currently use RDMA on a computational cluster between nodes and gluster
storage. The older IB cards will support 10G IP and 40G IB. I've had some
success with connectivity but am still falte
YAY!!!
Glad to see this!
Now for a specific use case question:
I have a 3-node gluster service in replica 3. Each node has multiple
network interfaces: a 40G ethernet and a 40G Infiniband with TCP. The
infiniband is a separate IP network from the 40G ethernet. There is no
(known) way to bridge the
AH HA! Found the errant 3rd node. In testing corosync for NFS, a lock
volume was created and that was still holding a reference to the peer.
Dropped that volume and the peer detached as expected.
On Thu, 2018-05-31 at 14:41 +0530, Atin Mukherjee wrote:
> On Wed, May 30, 2018 at 10:55 PM,
All,
I added a third peer for an arbiter brick host to a replica 2 cluster.
Then I realized I can't use it since it has no infiniband like the
other two hosts (infiniband and ethernet for clients). So I removed the
new arbiter bricks from all of the volumes. However, I can't detach the
peer as it keep
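For reference, a hedged sketch of the removal side (volume, host, and brick path are placeholders): dropping the arbiter means lowering the replica count back down, and the peer can only be detached once no volume references it.

gluster volume remove-brick myvol replica 2 arbhost:/bricks/arb/myvol force
# repeat for each volume, then:
gluster peer detach arbhost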
hards and RHEV/KVM data, trying to figure out if it's
>related.
>
>Thanks.
>
>On Fri, May 4, 2018 at 11:13 AM, Jim Kinney
>wrote:
>
>> I upgraded my ovirt stack to 3.12.9, added a brick to a volume and
>left it
>> to settle. No problems. I am now running
It all depends on how you are set up on the distribute. Think RAID 10
with 4 drives - each pair stripes (distribute) and the pair of pairs
replicates.
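To make the analogy concrete, a sketch with invented host and brick names: bricks are grouped into replica sets in the order they are listed, and files are then distributed across those sets.

# (h1,h2) replicate, (h3,h4) replicate, files distribute across the two pairs
gluster volume create myvol replica 2 \
    h1:/bricks/b1 h2:/bricks/b1 h3:/bricks/b1 h4:/bricks/b1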
On Wed, 2018-05-09 at 19:34 +, Gandalf Corvotempesta wrote:
> On Wed 9 May 2018 at 21:31, Jim Kinney
> wrote:
nope.
On Wed, 2018-05-09 at 19:25 +, Gandalf Corvotempesta wrote:
> On Wed 9 May 2018 at 21:22, Jim Kinney
> wrote:
> > You can change the replica count. Add a fourth server, add its
> > brick to
>
> existing volume with gluster volu
On Wed, 2018-05-09 at 18:26 +, Gandalf Corvotempesta wrote:
> Ok, some more questions as I'm still planning our SDS (but I'm prone
> to use
> LizardFS, gluster is too inflexible)
>
> Let's assume a replica 3:
>
> 1) currently, is not possbile to add a single server and rebalance
> like any
> o
All,
I am upgrading the storage cluster from 3.8 to 3.10 or 3.12. I have
3.12 on the ovirt cluster. I would like to change the client connection
method to NFS/NFS-Ganesha as the FUSE method causes some issues with
heavy python users (mmap errors on file open for write).
I see that nfs-ganesha was
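The client-side change itself is small; a sketch with placeholder server, volume, and mount point names, assuming the volume is exported through NFS-Ganesha:

# old fuse mount (fstab):
#   gserver1:/homevol  /home  glusterfs  defaults,_netdev  0 0

# NFS mount of the same volume via the ganesha export
mount -t nfs -o vers=4.1,proto=tcp gserver1:/homevol /home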
I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left
it to settle. No problems. I am now running replica 4 (preparing to
remove a brick and host to replica 3).
On Fri, 2018-05-04 at 14:24 +, Gandalf Corvotempesta wrote:
> On Fri 4 May 2018 at 14:06, Jim Kin
It stopped being an outstanding issue at 3.12.7. I think it's now fixed.
On May 4, 2018 6:28:40 AM EDT, Gandalf Corvotempesta
wrote:
>Hi to all
>is the "famous" corruption bug when sharding enabled fixed or still a
>work
>in progress ?
For me, the process of copying out the drive file from Ovirt is a tedious, very
manual process. Each vm has a single drive file with tens of thousands of
shards each. Typical vm size is 100G for me. And it's all mostly sparse. So,
yes, a copy out from the gluster share is best.
Did the outstan
Are there any plans to create an unsharding tool?
On April 23, 2018 4:11:06 AM EDT, Gandalf Corvotempesta
wrote:
>2018-04-23 9:34 GMT+02:00 Alessandro Briosi :
>> Is it that really so?
>
>yes, i've opened a bug asking developers to block removal of sharding
>when volume has data on it or to wri
So a stock ovirt with gluster install that uses sharding
A. Can't safely have sharding turned off once files are in use
B. Can't be expanded with additional bricks
Ouch.
On April 22, 2018 5:39:20 AM EDT, Gandalf Corvotempesta
wrote:
>On Sun 22 Apr 2018 at 10:46, Alessandro Briosi
>wrote:
>
On April 12, 2018 3:48:32 PM EDT, Andreas Davour wrote:
>On Mon, 2 Apr 2018, Jim Kinney wrote:
>
>> On Mon, 2018-04-02 at 20:07 +0200, Andreas Davour wrote:
>>> On Mon, 2 Apr 2018, Nithya Balachandran wrote:
>>>
>>>> On 2 April 2018 at 1
On Mon, 2018-04-02 at 20:07 +0200, Andreas Davour wrote:
> On Mon, 2 Apr 2018, Nithya Balachandran wrote:
>
> > On 2 April 2018 at 14:48, Andreas Davour wrote:
> >
> > > Hi
> > >
> > > I've found something that works so weird I'm certain I have
> > > missed how
> > > gluster is supposed to be u
Gluster does the sync part better than corosync. It's not an
active/passive failover system. It's more all-active. Gluster handles the
recovery once all nodes are back online.
That requires the client tool chain to understand that a write goes to
all storage devices not just the active one.
3.10 is
wrote:
>I must ask again, just to be sure. Is what you are proposing definitely
>
>supported in v3.8?
>
>Kind regards,
>Mitja
>
>On 25/02/2018 13:55, Jim Kinney wrote:
>> gluster volume add-brick volname replica 3 arbiter 1
>> brickhost:brickpath/to/new/arb
gluster volume add-brick volname replica 3 arbiter 1
brickhost:brickpath/to/new/arbitervol
Yes, the replica 3 looks odd. Somewhere in 3.12 (?), or not until v4, a change in
the command will happen so it won't count the arbiter as a replica.
On February 25, 2018 5:05:04 AM EST, "Mitja Mihelič"
wrote
On Fri, 2018-01-26 at 07:12 +0530, Sankarshan Mukhopadhyay wrote:
> On Fri, Jan 26, 2018 at 7:05 AM, Ravishankar N wrote:
> >
> > On 01/24/2018 07:20 PM, Hoggins! wrote:
> >
> > Hello,
> >
> > The subject says it all. I have a replica 3 cluster :
> >
> > gluster> volume info thedude
> >
On a disperse setup, shouldn't the various nodes have different parts of
the files, and thus different md5sums?
On Fri, 2018-01-12 at 18:17 +0900, jungeun kim wrote:
> Hi All,
>
> I'm using gluster as dispersed volume and I send to ask for very
> serious thing.
> I have 3 servers and th
On January 11, 2018 10:58:28 PM EST, Lindsay Mathieson
wrote:
>On 12/01/2018 3:14 AM, Darrell Budic wrote:
>> It would also add physical resource requirements to future client
>> deploys, requiring more than 1U for the server (most likely), and I’m
>
>> not likely to want to do this if I’m try
I like the idea immensely, as long as the GPU usage can be specified as
server-only, client and server, or client and server with a client limit of X.
Don't want to take GPU cycles away from machine learning for file IO.
Also must support multiple GPUs and GPU pinning. Really useful for
encryptio
I just noticed that gluster volume info foo and gluster volume heal
foo statistics use different indices for brick numbers. Info uses
1-based numbering but heal statistics uses 0-based.
gluster volume info clifford
Volume Name: clifford
Type: Distributed-Replicate
Volume ID: 0e33ff98-53e8-40cf-bdb0-3e18406a
I think the replica 2 arbiter 1 is more clear towards the intent of the
configuration.
I would also support :
replica n <brick1>,<brick2>,...,<brickN>, arbiter m <arb1>,<arb2>,...,<arbM>
as that makes it very clear what brick(s) should be the arbiter(s).
On Wed, 2017-12-20 at 15:44 +0530, Ravishankar N wrote:
> Hi,
>
> The existing syntax i
Keep in mind a local disk is 3, 6, or 12 Gbps but a network connection is
typically 1 Gbps. A local disk quad in RAID 10 will outperform 10G ethernet
(especially using SAS drives).
On December 5, 2017 6:11:38 AM EST, Riccardo Murri
wrote:
>Hello,
>
>I'm trying to set up a SAMBA server serving a G
The archival process of the mailing list makes searching for past issues
possible. Slack, and IRC in general, is a more closed garden than a publicly
archived mailing list.
That said, IRC/Slack is good for immediate interaction between people, say, a
gluster user with a nightmare and a knowledgeab
th of them have the link count 2?
> If the link count is 2 then "find <brick-path> -samefile
> <brick-path>/.glusterfs/<first two bits of gfid>/<next two bits of gfid>/<full gfid>"
> should give you the file path.
>
> Regards,
> Karthik
>
> On Tue, Oct 24, 2017 at 3:28 AM, Jim Kinney
> wrote:
> >
> >
> >
> >
Very nice! Thanks!
On October 25, 2017 8:11:36 AM EDT, Aravinda wrote:
>Hi,
>
>We started a new project to identify issues/misconfigurations in
>Gluster nodes. This project is very young and not yet ready for
>Production use, Feedback on the existing reports and ideas for more
>Reports are welcom
efile
> <brick-path>/.glusterfs/<first two bits of gfid>/<next two bits of gfid>/<full gfid>"
> should give you the file path.
>
> Regards,
> Karthik
>
> On Tue, Oct 24, 2017 at 3:28 AM, Jim Kinney
> wrote:
> >
> >
> >
> > I'm not so lucky. ALL of mine show 2 links and none have the attr
> &g
> Sent: Monday, October 23, 2017 1:52 AM
>
> To: Jim Kinney ; Matt Waymack
>
> Cc: gluster-users
>
> Subject: Re: [Gluster-users] gfid entries in volume heal info that do
> not heal
>
>
>
>
> Hi Jim & Matt,
>
> Can you also check for t
I've been following this particular thread as I have a similar issue
(RAID6 array failed out with 3 dead drives at once while a 12 TB load
was being copied into one mounted space - what a mess)
I have >700K GFID entries that have no path data. Example: getfattr -d -e
hex -m . .glusterfs/00/00/a5e
Ouch!
I use a unified UID/GID process. I personally use FreeIPA. It can also be
done with just LDAP or (not recommended for security reasons) NIS+
Barring those, a well-disciplined manual process will work by copying
passwd, group, shadow and gshadow files around to all systems. Create new
users o
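A minimal sketch of that manual copy, assuming rsync over ssh and placeholder hostnames (FreeIPA or LDAP remains the better answer):

# push the identity files from the master host to every node
for h in gnode1 gnode2 gnode3; do
    rsync -a /etc/passwd /etc/group /etc/shadow /etc/gshadow "$h":/etc/
done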
For the past week or so (since the spam issue) the gluster-users mail
arrives dated 12 hours in the past. Did a fix for the spam upset the
time of the server?
All,
I had a "bad timing" event where I lost 3 drives in a RAID6 array and
the structure of all of the LVM pools and nodes was lost. All total,
nearly 100TB of storage was scrambled.
This array was 1/2 of a redundant (replica 2) gluster config (will be
adding additional 3rd soon for split brain/r
All,
I had a "bad timing" event where I lost 3 drives in a RAID6 array and
the structure of all of the LVM pools and nodes was lost.
This array was 1/2 of a redundant (replica 2) gluster config (will be
adding additional 3rd soon for split brain/redundancy with failure
issues).
The failed drives