Hi all,
We had a BoF about Upstream Testing and increasing coverage.
Discussion included:
- More docs on using the gluster-specific libraries.
- Templates, examples, and test-case scripts with common functionality as a
jumping-off point for creating a new test script.
- Reduce the number of
Hi all,
I'm taking a stab at deploying a storage cluster to explore the Halo AFR
feature and running into some trouble. In GCE, I have 4 instances, each with
one 10 GB brick. Two instances are in the US and the other two are in Asia (in
the hope that this will drive up latency sufficiently). The
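The message above describes a 4-node Halo AFR setup split across regions. As a hedged sketch (hostnames, brick paths, the volume name, and the latency threshold are all assumptions, not from the original message), the volume might be created and Halo-enabled like this:

```shell
# Hypothetical sketch of a Halo-enabled 4-way replica volume.
# Halo keeps writes synchronous only to the replicas whose latency is
# below cluster.halo-max-latency; the far replicas lag asynchronously.
gluster volume create gvol replica 4 \
    us-1:/bricks/b1 us-2:/bricks/b1 asia-1:/bricks/b1 asia-2:/bricks/b1

gluster volume set gvol cluster.halo-enabled yes
gluster volume set gvol cluster.halo-max-latency 10   # ms; assumed threshold
gluster volume set gvol cluster.halo-min-replicas 2   # never drop below 2 sync copies
gluster volume start gvol
```

With US/Asia round-trip times well above 10 ms, the two local bricks should form the halo and the remote pair should fall out of the synchronous set.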
Never mind, I fixed it and created a pull request on GitHub.
filename / makefile problems
On 07.11.2017 at 10:51, InterNetX - Juergen Gotteswinter wrote:
> Hi,
>
> I am currently struggling with the gluster restapi (not heketi);
> somehow I am a bit stuck. During startup of the glusterrestd service
> On 6 Nov 2017, at 3:32 pm, Laura Bailey wrote:
>
> Do the users have permission to see/interact with the directories, in
> addition to the files?
Yes, full access to directories and files.
Also testing using the root user.
>
> On Mon, Nov 6, 2017 at 1:55 PM, Nithya
Thanks Atin.
Peer probes were done using FQDN and I was able to make these changes.
The only thing I had to do on the rest of the nodes was to flush nscd; after
that everything was good, and I did not have to restart the gluster services
on those nodes.
- Hemant
On 10/30/17 11:46 AM, Atin
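For reference, the nscd flush mentioned above can be done with nscd's cache-invalidation option (a sketch; run as root on each remaining node):

```shell
# Drop the cached hosts table so glusterd resolves the peer's new
# FQDN/address without a service restart.
nscd --invalidate=hosts   # equivalent short form: nscd -i hosts
```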
Folks,
Currently we have the archive log files from an Oracle DB being written to
a NAS location using NFS. We are planning to move the NAS location to use
the glusterfs file system. Has anybody been able to write the archive log
files of an Oracle DB onto a glusterfs file system/mount point?
If not done
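One possible shape of such a setup, purely as a hedged sketch (the server name, volume name, and mount path below are assumptions, not from the original message):

```shell
# Mount the Gluster volume where the archive logs should land.
mount -t glusterfs gfs-server:/archvol /u01/archlogs

# Then, in SQL*Plus, point the Oracle archive-log destination at it:
#   ALTER SYSTEM SET log_archive_dest_1='LOCATION=/u01/archlogs' SCOPE=BOTH;
```

Whether Oracle's write pattern (direct I/O, fsync frequency) behaves well on a FUSE Gluster mount is exactly the open question the poster is asking.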
Hi, All,
We created a GlusterFS cluster with tiers. The hot tier is a
distributed-replicated set of SSDs. The cold tier is an n*(6+2) disperse volume.
When copying millions of files to the cluster, we see these log messages:
W [socket.c:3292:socket_connect] 0-tierd: Ignore failed connection attempt
on
Hi,
I am using glusterfs 3.10.1 and I am seeing the message below in the
fuse-mount log file.
What does this error mean? Should I worry about it, and how do I resolve
it?
[2017-11-07 11:59:17.218973] W [MSGID: 109005]
[dht-selfheal.c:2113:dht_selfheal_directory] 0-glustervol-dht: Directory
selfheal fail
Hi,
I have always found the default behaviour of subvolume failure a bit
confusing for users or applications that need deterministic failure
behaviour.
When a subvolume fails the clients can still use and access files on
other subvolumes. However, accessing files on a failed subvolume returns
an
We had a BoF about how to do file-level volume encryption.
Coupled with geo-replication, this feature would be useful for secure
off-site archiving/backup/disaster-recovery of Gluster volumes.
TL;DR: It might be possible using the EncFS stacked file system on top of a
Gluster
mount, but it is
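The EncFS-over-Gluster idea sketched above would look roughly like this (mount points and volume name are assumptions; this is an illustration, not a tested recipe):

```shell
# Plain FUSE mount of the Gluster volume.
mount -t glusterfs server1:/gvol /mnt/gvol

# Stack EncFS on top: ciphertext lives (and geo-replicates) on the volume,
# plaintext is only ever visible under the local EncFS mount point.
mkdir -p /mnt/gvol/.enc /mnt/clear
encfs /mnt/gvol/.enc /mnt/clear   # prompts to create the encrypted config/key

# Anything written to /mnt/clear is stored encrypted under /mnt/gvol/.enc,
# so geo-replication ships only ciphertext off-site.
```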