If you are using ZFS as the underlying filesystem:
By default, ZFS stores extended attributes in a hidden directory instead of
extending the file inode, the way XFS does.
There is also a problem in the ZFS on Linux implementation where the function
responsible for deleting files deletes on
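If it helps: ZFS on Linux can instead keep xattrs in system attributes (SA); a rough sketch, with a placeholder dataset name:
zfs get xattr tank/gluster      # "tank/gluster" is a placeholder dataset name
zfs set xattr=sa tank/gluster   # store xattrs as system attributes; applies to newly created files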
ZFS :: glusterfs 3.7.6 :: samba-vfs-glusterfs 3.7
Migrated distributed data from glusterfs 3.5 to a newly installed OS with
glusterfs 3.7.6.
When using the gluster client, folder/file manipulation all works normally.
However, when using the samba-vfs-glusterfs module, file manipulation fails at
deletion. It
Yes, I am using it with 3.7.6; the EC code is in xlators/cluster/ec.
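For reference, a minimal sketch of creating a dispersed (EC) volume with the 3.7 CLI; the host and brick names are placeholders:
gluster volume create disp1 disperse 3 redundancy 1 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3   # placeholder hosts and brick paths
gluster volume start disp1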
On Fri, Feb 12, 2016 at 8:08 AM, jayakrishnan mm wrote:
> Hi
> I am using glusterfs 3.7.6 on Ubuntu 14.04. Is disperse volume supported
> in this version?
>
> I want to know whether the ida (https://forge.gluster.org/dispers
Hi
I am using glusterfs 3.7.6 on Ubuntu 14.04. Is disperse volume supported
in this version?
I want to know whether the ida (https://forge.gluster.org/disperse/ida),
heal (https://forge.gluster.org/disperse/heal) and dfc
(https://forge.gluster.org/disperse/dfc)
translators are offici
Hi Steve,
Here is how quota usage accounting works.
For each file, the following extended attributes are set:
trusted.glusterfs.quota.<parent-gfid>.contri -> This value tells how much size
this file/dir has contributed to its parent (the key contains the gfid of the parent).
For each directory, the following extended attributes ar
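To inspect these attributes on a brick, a getfattr call along these lines can be used (the brick path is a placeholder):
# run as root on the brick; the path below is a placeholder
getfattr -d -m . -e hex /bricks/b1/gv1/somedir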
On 02/11/2016 08:33 PM, Oleksandr Natalenko wrote:
And "API" test.
I used custom API app [1] and did brief file manipulations through it
(create/remove/stat).
Then I performed drop_caches, finished API [2] and got the following
Valgrind log [3].
I believe there are still some leaks occurring
Upgrade steps look good wrt Geo-replication.
Since 3.7.8 was released early due to some issues with 3.7.7, we couldn't
get the following Geo-rep patches into the release, as discussed in
previous mails.
http://review.gluster.org/#/c/13316/
http://review.gluster.org/#/c/13189/
Thanks
regards
A
Hi Bill,
Can you enable the virt profile setting for your volume and see if that
helps? You need to enable this optimization when you create the volume
using oVirt, or use the following command for an existing volume:
# gluster volume set <volname> group virt
-Ravi
On 02/12/2016 05:22 AM, Bill James wrot
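For example, applied to an existing volume it would look roughly like this ("gv1" stands in for the actual volume name):
gluster volume set gv1 group virt   # "gv1" is a placeholder volume name
gluster volume info gv1             # confirm the virt-profile options were applied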
I am sorting a fairly large file (27 million lines) and the output is
being written to my gluster storage. This seems to crash glusterfsd for
3.7.8 as noted below.
Can anyone help?
David
[Thu Feb 11 18:25:24 2016] glusterfsd: page allocation failure. order:5,
mode:0x20
[Thu Feb 11 18:25:24
My apologies, I'm showing how much of a noob I am.
Ignore the last direct-to-gluster numbers, as that wasn't really glusterfs.
[root@ovirt2 test ~]# mount -t glusterfs ovirt2-ks.test.j2noc.com:/gv1
/mnt/tmp/
[root@ovirt2 test ~]# time dd if=/dev/zero of=/mnt/tmp/testfile2 bs=1M
count=1000 oflag=di
I don't know if it helps, but I ran a few more tests, all from the same
hardware node.
The VM:
[root@billjov1 ~]# time dd if=/dev/zero of=/root/testfile bs=1M
count=1000 oflag=direct
1048576000 bytes (1.0 GB) copied, 62.5535 s, 16.8 MB/s
Writing directly to gluster volume:
[root@ovirt2 test ~]#
xml attached.
On 02/11/2016 12:28 PM, Nir Soffer wrote:
On Thu, Feb 11, 2016 at 8:27 PM, Bill James wrote:
thank you for the reply.
We set up gluster using the names associated with the NIC 2 IP.
Brick1: ovirt1-ks.test.j2noc.com:/gluster-store/brick1/gv1
Brick2: ovirt2-ks.test.j2noc.com:/glu
On 2016-02-11 04:27, Dietmar Putz wrote:
And I strongly believe you have to update all your clients too.
Maybe a developer can give you more background information about the
need to do that...
I intend to update all of my clients as soon as possible -- I was more
concerned about whether the o
Hello,
I would like to upgrade my Gluster 3.7.6 installation to Gluster 3.7.8 and have
put together the procedure below. Can anyone check it and let me know if it is
correct or if I am missing anything? Note here that I am using Debian 8 and the
Debian packages from Gluster's APT repository. I
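For what it's worth, a minimal per-node sketch, assuming the 3.7.8 APT repository is already configured and using the package and service names from the Gluster Debian packages:
apt-get update
apt-get install --only-upgrade glusterfs-server glusterfs-client glusterfs-common
service glusterfs-server restart    # restarts glusterd on this node; upgrade one node at a time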
What would happen if I:
- Did not disable quotas
- Did not stop the volume (a 140T volume takes at least 3-4 days to do
any find operations, which is too much downtime)
- Found and removed all xattrs:
trusted.glusterfs.quota.242dcfd9-6aea-4cb8-beb2-c0ed91ad70d3.contri on
the /brick/volumename/modules
-
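A rough sketch of that removal step, run on each brick rather than through the mount (path and xattr key as quoted above):
find /brick/volumename -exec setfattr -x \
    trusted.glusterfs.quota.242dcfd9-6aea-4cb8-beb2-c0ed91ad70d3.contri {} \;
# path and xattr key copied from the question above; entries without the xattr just report an error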
Find my answers inline.
— Bishoy
> On Feb 11, 2016, at 11:42 AM, Atul Yadav wrote:
>
> HI Team,
>
>
> I am totally new to Glusterfs and am evaluating it for my requirements.
>
> Need your valuable input on achieving the below requirements:
> File locking
Gluster uses DLM for locking.
> Perf
Hi Team,
I am totally new to Glusterfs and am evaluating it for my requirements.
Need your valuable input on achieving the below requirements:
File locking
Performance
High Availability
Existing infra details are given below:
CentOS 6.6
glusterfs-server-3.7.8-1.el6.x86_64
glusterfs-client-xlat
Hi Dominique,
I looked at the attached logs. At some point all bricks seem to have gone down,
as I see
[2016-01-31 16:17:20.907680] E [MSGID: 108006] [afr-common.c:3999:afr_notify]
0-cluster1-replicate-0: All subvolumes are down. Going offline until atleast
one of them comes back up.
in the client
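When that message shows up it is worth confirming brick and peer state from one of the servers, e.g. (the volume name "cluster1" is only inferred from the log prefix, not confirmed in this thread):
gluster volume status cluster1   # "cluster1" inferred from the "0-cluster1-replicate-0" log prefix
gluster peer status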
And "API" test.
I used custom API app [1] and did brief file manipulations through it
(create/remove/stat).
Then I performed drop_caches, finished API [2] and got the following
Valgrind log [3].
I believe there are still some leaks occurring in glfs_lresolve() call
chain.
Soumya?
[1] ht
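A rough sketch of how such a run could look (the program name is a placeholder for the custom gfapi app in [1]):
valgrind --leak-check=full --log-file=valgrind.log ./glfs_test_app   # placeholder name for the custom API app
echo 2 > /proc/sys/vm/drop_caches   # reclaim dentry/inode caches while the app is still running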
Hi,
That list of changes/fixes is from 3.7.7, I presume?
Where could I find the changes from 3.7.6 to 3.7.7?
(
https://github.com/gluster/glusterfs/blob/release-3.7/doc/release-notes/3.7.7.md
doesn't exist)
Thank you,
Thib.
On 10 Feb 2016 2:45 p.m., "Kaushal M" wrote:
> Hi all,
>
> Because of a u
Hi Kaleb and Kaushal,
I installed the wheezy packages and they’re working great. Thanks for building
them! It’s very much appreciated.
Ben
NPR | Benjamin Wilson | Sr. Systems Administrator | 202-513-4454 |
bwil...@npr.org
On 2/4/16, 4:19 AM, "gluster-users-boun...@gluster.org on beha
Dear Dave,
On 02/11/2016 10:53 AM, Dave Warren wrote:
> I'm finally hoping to be able to upgrade a relatively ancient gluster
> 3.4.2 on Ubuntu 14.04 to a more modern version of Gluster and want to
> verify that I am prepared.
I haven't done it yet, but I'm in a very similar situation - and
there
Hi Dave,
First of all, I'm not a developer, just a user like you, and recently I did a
gluster update (6 bricks in a distributed-replicated configuration, plus the
same as slave for geo-replication)
on Ubuntu, from 12.04 LTS / GFS 3.4.7 to 14.04 LTS, then 3.5.x, 3.6.7, and 3.7.6.
Most problems I had were regardi
I'm finally hoping to be able to upgrade a relatively ancient gluster
3.4.2 on Ubuntu 14.04 to a more modern version of Gluster and want to
verify that I am prepared.
I currently have two gluster servers and hope to have all volumes in a
mirror configuration prior to upgrading. There are vario
And here goes "rsync" test results (v3.7.8 + two patches by Soumya).
2 volumes involved: source and target.
=== Common indicators ===
slabtop before drop_caches: [1]
slabtop after drop_caches: [2]
=== Source volume (less interesting part) ===
RAM usage before drop_caches: [3]
statedump before
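For reference, indicators like these are typically collected with something along these lines (a sketch; the exact commands are not in the referenced pastes):
slabtop -o | head -n 20             # one-shot slab snapshot
echo 2 > /proc/sys/vm/drop_caches   # reclaim dentry/inode caches
kill -USR1 "$(pidof glusterfs)"     # request a client statedump (assumes a single glusterfs client process)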