I am running into two problems (possibly related?).
1) Every once in a while, when I do a 'rm -rf DIRNAME', it comes back
with an error:
rm: cannot remove `DIRNAME': Directory not empty
If I try the 'rm -rf' again after the error, it deletes the
directory. The issue is that
32-group limitation for all fuse clients.
David (Sent from mobile)
===
David F. Robinson, Ph.D.
President - Corvid Technologies
704.799.6944 x101 [office]
704.252.1310 [cell]
704.799.7974 [fax]
david.robin...@corvidtec.com
http://www.corvidtechnologies.com
clients can no longer use more than 32 groups. If
we take the "acl" option out of the mount on the server, ACLs no longer work on our
clients, but the 32-group limit is gone.
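For reference, the volume options shown later in this thread include `server.manage-gids: on`; enabling that option is the usual server-side workaround sketch, since it makes the bricks resolve a user's group membership themselves instead of trusting the 32-entry list sent over FUSE. Assuming the volume name `homegfs` used elsewhere in this thread:

```shell
# Resolve group membership on the servers instead of trusting the
# 32-entry-limited group list sent by the FUSE client.
# Volume name "homegfs" is taken from elsewhere in this thread.
gluster volume set homegfs server.manage-gids on

# Confirm the option is active
gluster volume info homegfs | grep manage-gids
```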
David (Sent from mobile)
failed: No such file or directory. Path:
(df69a1ee-cc85-47a9-b8ca-a32db565c340)
I am seeing the following on one of my FUSE clients (indy.rst and
indy.rst.old show '??? ???' in the ls output).
Has anyone seen this before? Any idea what causes this for a given
client?
If I try to access the file, I get a stale file handle:
# cp indy.rst dfr.rst
cp: cannot stat `indy.rst': Stale file handle
Not sure why we are seeing this issue again.
Once you figure this out, do you have, or will you have, some kind of tool
to go through and clean up all of these stale links? Or would you just
leave them as they are?
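As a rough idea of what such a cleanup tool might look for: a DHT linkto file on a brick is, as far as I understand, a zero-byte regular file whose permission bits are exactly the sticky bit (ls shows ---------T), carrying a trusted.glusterfs.dht.linkto xattr. A minimal, read-only scanner sketch under that assumption (it only lists candidates, it deletes nothing):

```shell
# A DHT linkto file on a brick is a zero-byte regular file whose permission
# bits are exactly the sticky bit (ls shows ---------T). This read-only
# helper prints candidate paths under a brick directory; it deletes nothing.
list_linkto_candidates() {
    find "$1" -type f -perm 1000 -size 0 -print
}

# Example invocation (path is illustrative):
# list_linkto_candidates /data/brick01bkp/homegfs_bkp
```

Anything it flags should still be checked with getfattr for the linkto xattr before being touched.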
David
-- Original Message --
From: "Raghavendra Gowdappa"
-- Original Message --
From: "Shyam"
To: "David F. Robinson" ; "Gluster Devel"
; "gluster-users@gluster.org"
; "Susant Palai"
Sent: 2/9/2015 11:11:20 AM
Subject: Re: [Gluster-devel] cannot delete non-empty directory
On 02/08/2015 12:19 PM, David F. Robinson wrote:
trusted.glusterfs.dht.linkto="homegfs_bkp-client-1"
# file:
data/brick01bkp/homegfs_bkp/backup.0/old_shelf4/Aegis/!!!Programs/Nextel_Cup/SHR/Backup/shr/Airbox/C24/z_slices/c24-airbox_vr_z=32.5.jpeg
trusted.gfid="d+0ÇxþM¯GxÑ@>Â"
trusted.glusterfs.dht.linkto="homegfs_bkp-client-1"
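For anyone following along, output like the above can be reproduced directly on a brick; a sketch, with the path shortened to an illustrative one (the hex encoding keeps the binary trusted.gfid readable instead of the garbled raw bytes shown above):

```shell
# Dump all gluster xattrs for a suspect file on the brick; hex encoding
# avoids unprintable binary values such as trusted.gfid.
# The path below is illustrative, not the full path from this thread.
getfattr -m . -d -e hex /data/brick01bkp/homegfs_bkp/path/to/file.jpeg
```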
changelog.rollover-time: 15
changelog.fsync-interval: 3
I don't think I understood what you sent enough to give it a try. I'll
wait until it comes out in a beta or release version.
David
-- Original Message --
From: "Ben Turner"
To: "Justin Clift" ; "David F. Robinson"
Cc: "Benjamin Turner&
drwxrws--- 3 root root 41 Feb 4 18:12 ..
-rwxrw---- 2 streadway sbir 42440 Jun 19 2014 ARMOR PACKAGES.one
-rwxrw---- 2 streadway sbir 38184 Jun 19 2014 CURRENT STANDARD ARMORING.one
-- Original Message --
From: "Xavier Hernandez"
To: "David F. Robinson" ; "Benjam
copy that. Thanks for looking into the issue.
David
-- Original Message --
From: "Benjamin Turner"
To: "David F. Robinson"
Cc: "Ben Turner" ; "Pranith Kumar Karampuri"
; "Xavier Hernandez" ;
"gluster-users@gluster.org"
Isn't rsync what geo-rep uses?
David (Sent from mobile)
Should I run my rsync with --block-size set to something other than the default? Is
there an optimal value? I think 128k is the max from my quick search, but I didn't
dig into it thoroughly.
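For concreteness, this is what an explicit block size looks like on the command line; 131072 bytes is just the 128k ceiling mentioned above, not a tuned recommendation, and the paths and host are illustrative:

```shell
# rsync with a fixed delta-transfer block size (-B/--block-size, in bytes).
# 131072 = 128 KiB, the ceiling mentioned above; paths/host are illustrative.
rsync -av --block-size=131072 /source/dir/ user@backuphost:/dest/dir/
```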
David (Sent from mobile)
It was a mix of files from very small to very large, and many terabytes of
data, approximately 20 TB.
David (Sent from mobile)
I'll send you the emails I sent Pranith with the logs. What causes these
disconnects?
David (Sent from mobile)
changelog.changelog: on
changelog.fsync-interval: 3
changelog.rollover-time: 15
server.manage-gids: on
-- Original Message --
From: "Xavier Hernandez"
To: "David F. Robinson" ; "Benjamin
Turner"
Cc: "gluster-users@gluster.org" ; "Gluster
D
I don't recall if that was before or after my upgrade.
I'll forward you an email thread for the current heal issues which are
after the 3.6.2 upgrade...
David
-- Original Message --
From: "Pranith Kumar Karampuri"
To: "David F. Robinson" ;
"gl
wks_backup/homer_backup/logs: trusted.ec.heal: Operation not supported
wks_backup/homer_backup: trusted.ec.heal: Operation not supported
-- Original Message --
From: "Benjamin Turner"
To: "David F. Robinson"
Cc: "Gluster Devel" ;
"gluster-users@gluster.
connection from
gfs01a.corvidtec.com-1369-2015/02/04-00:16:53:613570-homegfs-client-2-0-0
-- Original Message --
From: "Benjamin Turner"
To: "David F. Robinson"
Cc: "Gluster Devel" ;
"gluster-users@gluster.org"
Sent: 2/3/2015 7:12:34 PM
Subject: Re:
nd saw the same
behavior. The files/directories were not shown until I did the "ls" on
the bricks.
David
e all of the user accounts and groups. The
preferred option would be to simply use sssd on my storage systems, but
it doesn't seem to play well with gluster.
David
-- Original Message --
From: "David F. Robinson"
To: "Gluster Devel" ;
"gluster-users@gluster
:sbir/2015.1> cd A15-029
A15-029_proposal_draft_rev1.docx* CB_work/ gun_work/ Refs/
David
I upgraded one of my bricks from 3.6.1 to 3.6.2 and I can no longer do a
'gluster volume heal homegfs info'. It hangs and never returns any
information.
I was trying to ensure that gfs01a had finished healing before upgrading
the other machines (gfs01b, gfs02a, gfs02b) in my configuration (see
entries: 0
Brick gfsib02b.corvidtec.com:/data/brick01b/homegfs
Number of entries: 0
Brick gfsib02a.corvidtec.com:/data/brick02a/homegfs
Number of entries: 0
Brick gfsib02b.corvidtec.com:/data/brick02b/homegfs
Number of entries: 0
rc.local:
/etc/init.d/glusterd restart
(sleep 20; mount -a; mount /backup_nfs/homegfs)&
-- Original Message --
From: "Xavier Hernandez"
To: "David F. Robinson" ; "Kaushal M"
Cc: "Gluster Users" ; "Gluster Devel"
Sent: 1/27/2015 10
Looks like this is working properly.
David
-- Original Message --
From: "Xavier Hernandez"
To: "David F. Robinson" ; "Kaushal M"
Cc: "Gluster Users" ; "Gluster Devel"
Sent: 1/27/2015 10:02:31 AM
Subject: Re: [Gluster-devel] [Gluster-us
Hi,
I had a similar problem once. It happened after doing some unrelated
tests
and did a 'mount -a'. Worked perfectly.
And the errors that were showing up in the logs every 3 seconds stopped.
Thanks for your help. Greatly appreciated.
David
-- Original Message --
From: "Xavier Hernandez"
To: "David F. Robinson" ; "Kaushal M"
failed
(Invalid argument)
-- Original Message --
From: "Kaushal M"
To: "David F. Robinson"
Cc: "Joe Julian" ; "Gluster Users"
; "Gluster Devel"
Sent: 1/27/2015 1:49:56 AM
Subject: Re: Re[2]: [Gluster-devel] [Gluster-users] v3.6.2
-- Original Message --
From: "Joe Julian"
To: "Kaushal M" ; "David F. Robinson"
Cc: "Gluster Users" ; "Gluster Devel"
Sent: 1/27/2015 12:48:49 AM
Subject: Re: [Gluster-devel] [Gluster-users] v3.6.2
If that was true, wouldn't it not
our datacenter. When I powered it back up, NFS through
gluster would no longer start.
David
-- Original Message --
From: "Kaushal M"
To: "David F. Robinson"
Cc: "Atin Mukherjee" ; "Pranith Kumar Karampuri"
; "Justin Clift" ; "Glu
nfs.log attached. Where is glusterd.log?
David
-- Original Message --
From: "Atin Mukherjee"
To: "David F. Robinson" ; "Pranith Kumar
Karampuri" ; "Justin Clift"
Cc: "Gluster Users" ; "Gluster Devel"
Sent: 1/27/2015
To: "Pranith Kumar Karampuri" ; "Justin Clift"
; "David F. Robinson"
Cc: "Gluster Users" ; "Gluster Devel"
Sent: 1/26/2015 11:51:13 PM
Subject: Re: [Gluster-devel] v3.6.2
On 01/27/2015 07:33 AM, Pranith Kumar Karampuri wrote:
On 01/26/2
t] (-->
0-: received signum (0), shutting down
-- Original Message --
From: "Anatoly Pugachev"
To: "David F. Robinson"
Cc: "gluster-users@gluster.org" ; "Gluster
Devel"
Sent: 1/26/2015 2:48:08 PM
Subject: Re: [Gluster-users] v3.6.2
D
To: "David F. Robinson" ;
gluster-users@gluster.org
Sent: 1/26/2015 12:20:16 PM
Subject: Re: [Gluster-users] v3.6.2
Suggestion:
On my CentOS 7 with GlusterFS 3.6.1 (and .2), NFS works normally.
Run the 'rpcinfo -p' command and see if the result is the same or similar.
[root@vmg01 gl
[root@gfs01bkp bricks]# ps -ef | grep rpcbind
rpc       2306     1  0 11:32 ?        00:00:00 rpcbind
root      5265  4638  0 11:55 pts/0    00:00:00 grep rpcbind
-- Original Message --
From: "Joe Julian"
To: "David F. Robinson" ;
"gluster-users@gluster.org"
SELINUXTYPE=targeted
-- Original Message --
From: "Justin Clift"
To: "David F. Robinson"
Cc: "Gluster Users" ; "Gluster Devel"
Sent: 1/26/2015 11:11:15 AM
Subject: Re: [Gluster-devel] v3.6.2
On 26 Jan 2015, at 14:50, David F. Robinson
wrote:
/brick02bkp/homegfs_bkp on port 49155
-- Original Message --
From: "David F. Robinson"
To: "gluster-users@gluster.org" ; "Gluster
Devel"
Sent: 1/26/2015 9:50:09 AM
Subject: v3.6.2
I have a server with v3.6.2 from which I cannot mount using NFS. The
FUSE
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
[2015-01-26 14:42:24.265504] W [socket.c:611:__socket_rwv] 0-management:
readv on /var/run/1f0cee5a2d074e39b32ee5a81c70e68c.socket failed
(Invalid argument)
^C
Also, when I try to NFS mount my gluste
You are correct... Typo on my part. It happened when I installed
3.6.0-beta3.
I'll file the bug report so that fuse installation is dependent on attr
being installed... Thanks...
David
-- Original Message --
From: "Niels de Vos"
To: "David F. Robinson
When I installed the 3.5.3beta on my HPC cluster, I get the following
warnings during the mounts:
WARNING: getfattr not found, certain checks will be skipped..
I do not have attr installed on my compute nodes. Is this something
that I need in order for gluster to work properly, or can this safely be ignored?
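The warning's own wording suggests the mount proceeds with some checks skipped, and since the mount script shells out to getfattr for those checks, installing the attr package on the compute nodes makes it go away. A sketch for RPM-based nodes like the el6 systems in this thread:

```shell
# getfattr is provided by the attr package on RHEL/CentOS-family systems
yum install -y attr

# Confirm the binary the mount script looks for is now present
getfattr --version
```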
backported to 3.5.2...
David
-- Original Message --
From: "M S Vishwanath Bhat"
To: "David F. Robinson" ; "Niels de Vos"
Cc: gluster-users@gluster.org
Sent: 9/3/2014 11:30:09 AM
Subject: Re: [Gluster-users] geo replication help
On 03/09/14 20:31, Dav
Is this bug-fix going to be in the 3.5.3 beta release?
David
-- Original Message --
From: "Niels de Vos"
To: "M S Vishwanath Bhat"
Cc: "David F. Robinson" ;
gluster-users@gluster.org
Sent: 8/15/2014 6:25:04 AM
Subject: Re: [Gluster-users] geo replicati
One other question... Is there a way to set a config variable to turn
off the compression for the rsync?
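For reference, geo-replication exposes a per-session rsync-options config rather than a dedicated compression toggle; a hedged sketch using the volume and slave names from earlier in this thread (rsync accepts --no-OPTION prefixes, so --no-compress should undo an implied -z):

```shell
# Pass extra flags to the rsync processes that geo-replication spawns;
# here, explicitly disabling compression. Volume/slave names are the
# ones used earlier in this thread.
gluster volume geo-replication homegfs gfsib01bkp.corvidtec.com::homegfs_bkp \
    config rsync-options "--no-compress"
```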
David
-- Original Message --
From: "M S Vishwanath Bhat"
To: "David F. Robinson" ;
gluster-users@gluster.org
Sent: 8/13/2014 6:47:11 AM
Subject: Re:
Thanks for the update.
Will this fix also be in 3.5.3?
David (Sent from mobile)
I was hoping someone could help me debug my geo-replication under
gluster 3.5.2.
I am trying to use geo-replication to create a lagged backup of my data.
What I wanted to do was to turn off the geo-replication (gluster volume
geo-replication homegfs gfsib01bkp.corvidtec.com::homegfs_bkp stop)
.x86_64
glusterfs-cli-3.5.0-2.el6.x86_64
glusterfs-rdma-3.5.0-2.el6.x86_64
-- Original Message --
From: "David F. Robinson"
To: gluster-users@gluster.org
Sent: 5/19/2014 10:58:57 AM
Subject: rsync + stale file handle
When I do an rsync to backup my workstations onto a gluster mounted file
system, I end up with thousands of healing problems. The heal status
repeatedly shows the same number of healed/failed during a "gluster
volume heal homegfs info statistics" check. There are over 9,000 files
healed and r