On 02/06/2014 02:59 PM, Justin Dossey wrote:
> An hour of googling didn't turn up the answer, so I'll ask here: do you
> know about when the linux kernel changed to enable inode64 by default? I
> haven't run into the issue Pat had, but I don't want to!
>
It looks like the following commit:
08bf
I wish inode64 use were reported by xfs_info or something.
On Thu, Feb 6, 2014 at 10:57 AM, B
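For what it's worth, one way to check whether inode64 is in effect on an existing
mount is to look at the options the kernel reports for it, since xfs_info does not
show it (a sketch; the device and mount point are taken from other messages in this
thread, and whether inode64 appears when it is only the compiled-in default depends
on the kernel):
# grep mseas-data-0-1 /proc/mounts
If inode64 was passed explicitly at mount time it should show up in that option list.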
On 02/06/2014 11:25 AM, Pat Haley wrote:
Hi Brian,
gluster-0-1 did not recognize the delaylog option,
but when I mounted the disk with nobarrier,inode64
I was able to write to the disk both directly
and from a client through gluster.
Assuming inode64 was the key, was the problem
that XFS could not address the inodes without
64 bit rep
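A sketch of the remount Pat describes, with the device and mount point as given
elsewhere in this thread (stopping the gluster processes on that brick first, so the
filesystem is not busy, is assumed):
# umount /mseas-data-0-1
# mount -o nobarrier,inode64 /dev/sdb1 /mseas-data-0-1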
On 02/05/2014 03:39 PM, Pat Haley wrote:
Hi Brian,
I tried both just using touch to create
an empty file and copying a small (<1kb)
file. Neither worked.
Note: currently the disk served by gluster-0-1
is mounted as
/dev/sdb1 /mseas-data-0-1 xfs defaults 1 0
I have received some advice to change the
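If the advice was about making the mount options persistent across reboots, the
/etc/fstab entry might look something like this (a sketch only; nobarrier and inode64
are the options reported to work in the test described at the top of this thread, not
a confirmed recommendation):
/dev/sdb1  /mseas-data-0-1  xfs  nobarrier,inode64  1 0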
> From the last directory I was able to create on all 3 bricks:
>
>
> gluster-0-1
>
> [root@nas-0-1 test]# getfattr -d -e hex -m . .
> # file: .
> trusted.gfid=0x6131386b3c324551967d05c83b618a7b
> trusted.glusterfs.dht=0x0001
>
>
Jeff Darcy wrote:
For comparison, below is the output of getfattr on each brick and
on a client, starting with the brick causing trouble (gluster-0-1):
gluster-0-1
[root@nas-0-1 mseas-data-0-1]# getfattr -d -e hex -m . .
# file: .
trusted.gfid=0x
On 02/04/2014 02:14 PM, Jeff Darcy wrote:
Hi Jeff,
Where do I find these trusted.glusterfs.dht files?
(or if they aren't files, how do I find the information
you're requesting?)
These are extended attributes, not files, so you can get them
like this:
getfattr -d -e hex -m . $file_or_dir
For comparison, below is the output of g
> Where do I find these trusted.glusterfs.dht files?
> (or if they aren't files, how do I find the information
> you're requesting?)
These are extended attributes, not files, so you can get them
like this:
getfattr -d -e hex -m . $file_or_dir
> If it is glusterfsd starting before the local fi
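A sketch of gathering that for comparison, using the brick paths and hostnames that
appear elsewhere in this thread (the prompts are illustrative, and the client-side
path is whatever directory is being tested under the glusterfs mount):
[root@gluster-0-0 ~]# getfattr -d -e hex -m . /mseas-data-0-0
[root@nas-0-1 ~]# getfattr -d -e hex -m . /mseas-data-0-1
[root@gluster-data ~]# getfattr -d -e hex -m . /data
plus the same getfattr run against the corresponding directory on a client.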
Hi,
I tried to "go behind" gluster and directly
write a file to the nfs filesystem on gluster-0-1.
If I try to write to /mseas-data-0-1 (the file
space served by gluster-0-1) directly I still
get the "No space left on device" error.
(df -h still shows 784G on that disk)
If I try to write to th
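For the record, a minimal version of that direct-write test looks like this (paths as
used elsewhere in the thread; the actual file names Pat used are not shown in the
message):
[root@nas-0-1 ~]# touch /mseas-data-0-1/direct-write-test
[root@nas-0-1 ~]# dd if=/dev/zero of=/mseas-data-0-1/direct-write-test bs=1024 count=1
[root@nas-0-1 ~]# df -h /mseas-data-0-1; df -i /mseas-data-0-1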
Hi Jeffrey,
Two simple questions (reflecting my ignorance):
Where do I find these trusted.glusterfs.dht files?
(or if they aren't files, how do I find the information
you're requesting?)
If it is glusterfsd starting before the local filesystems
are mounted, would a service glusterd restart hav
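A sketch of what that would look like on the affected brick, with the first command
simply confirming that the brick filesystem is actually mounted before the daemon
comes back up (mount point as used elsewhere in this thread):
[root@nas-0-1 ~]# mount | grep mseas-data-0-1
[root@nas-0-1 ~]# service glusterd restart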
Quick update:
rebooting gluster-0-1 did not solve the problem. The one
change I did notice is that now the only place I can write
is to one of the test directories I had previously created
which had not appeared on gluster-0-1. Now, trying to create
a new directory elsewhere results in the "No
Hi,
I am writing to see if there are any suggestions
as to what I should look at next to debug my
problem or what I should try to correct it.
Would rebooting the gluster-0-1 brick help?
A short recap:
In early January the gluster-0-1 brick filled up
(the gluster-data brick was close to
Hi,
A tiny bit more data:
I tried restarting nfs on gluster-0-1. Following
that I was back to the original behavior of getting
"No space left on device" error messages when I tried
to write from a client. I then deleted the dummy test
directories I had made yesterday, followed by trying
to re
Hi,
I now have a change in the behavior of the gluster issue.
This afternoon I deleted a file that I knew was located
on the gluster-0-1 disk (the deleting command was issued
from a client). After that I was able to make a directory
(empty) that appeared on all 3 bricks. My next step was
to cop
Hi Joe,
Sorry to take so long in responding, but we had another emergency
that took all my time...
The subsampled brick log file from gluster-0-1 is available at
http://mseas.mit.edu/download/phaley/GlusterUsers/gluster-0-1/bricks/mseas-data-0-1.log.1
The df results on gluster-0-1 are
[root@
1st, I don't think we need log data from November. Let's try to share
logs that are limited to an example of the error you're trying to get
help with. A 40Mb client log is unnecessary when it's only the last 5k
that's relevant.
2nd, your "server" logs are mostly more client logs, with the exce
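A sketch of trimming a client log before posting it, on the assumption that "the last
5k" means roughly the most recent few thousand lines (the log file name here is
hypothetical):
# tail -n 5000 /var/log/glusterfs/CLIENT-MOUNT.log > /tmp/client-log-tail.log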
Hi Vijay,
I've put the log files in
http://mseas.mit.edu/download/phaley/GlusterUsers/server_logs/
http://mseas.mit.edu/download/phaley/GlusterUsers/client_logs/
On 01/21/2014 10:24 PM, Pat Haley wrote:
Hi Joe,
The peer status on all 3 showed the
proper connections. Doing the killall
and
Hi Joe,
The peer status on all 3 showed the
proper connections. Doing the killall
and restart on all three bricks fixed
the N in the Online column. I then
did have to remount the gluster
filesystem on the client
Unfortunately my original problem remains.
I'm still getting "no space left on
All three.
On 01/21/2014 08:38 AM, Pat Haley wrote:
Hi Joe,
They do appear as connected from the first
brick, checking on the next 2. If they
all show the same, is the "killall glusterfsd"
command simply run from the first brick, or
will I need to try it on all 3 bricks, one
at a time?
Thanks
# gluster peer status
Number of Peers: 2
Hostname
You got lucky. That process could have deleted your volume entirely. The
volume configuration and state is stored in that directory path.
Check gluster peer status on two servers and make sure they're all "Peer
in Cluster (Connected)". If not, peer probe to make them so.
If they are, "killall
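Judging from the follow-up messages, the truncated advice is the usual sequence, run
on each brick in turn and followed by remounting the volume on the client (a sketch;
the client-side server name and mount point below are placeholders, not taken from
the thread):
# killall glusterfsd
# service glusterd restart
and then, on the client:
# umount /MOUNTPOINT && mount -t glusterfs SERVER:/gdata /MOUNTPOINT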
Hi Lala,
The glusterfsd process is running (see below). I also tried
"service iptables stop" (followed by restarting
glusterd) but still have the N in the Online column.
What should I look at next?
Thanks
# ps aux | grep gluster
root 2916 1.4 1.2 4442964 105740 ? Ssl 2013 870:40
Hi,
To try to clean things out more, I took
the following steps
1) On gluster-0-0:
`gluster peer detach gluster-data`, if that fails, `gluster peer
detach gluster-data force`
2) On gluster-data:
`rm -rf /var/lib/glusterd`
`service glusterd restart`
3) Again on gluster-0-0:
'glus
On 01/21/2014 08:34 PM, Pat Haley wrote:
Also, going back to an earlier Email,
should I be concerned that in the output
from "gluster volume status" the
brick "gluster-data:/data" has an "N"
in the "Online" column? Does this suggest
an additional debugging route?
Yes, it means brick i.e. gl
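For reference, a common way on the GlusterFS releases of that era to restart just the
missing brick process without disturbing the others is (a sketch, not necessarily
what the truncated reply went on to recommend):
# gluster volume start gdata force
# gluster volume status gdata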
Also, going back to an earlier Email,
should I be concerned that in the output
from "gluster volume status" the
brick "gluster-data:/data" has an "N"
in the "Online" column? Does this suggest
an additional debugging route?
gluster volume status
Status of volume: gdata
Gluster process
First, another update on my test of writing
a directory with 480 6Mb files. Not only do
over 3/4 of the files appear, but they are
written on all 3 bricks. Again, it is random
which files are not written but what I seem
to see is that files are written to each brick
even after the failures. Doe
On 01/17/2014 07:48 PM, Pat Haley wrote:
>
> Hi Franco,
>
> I checked using df -i on all 3 bricks. No brick is over
> 1% inode usage.
>
It might be worth a quick inode allocation test on the fs for each
brick, regardless. There are other non-obvious scenarios that can cause
inode allocation to
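A quick-and-dirty version of such a test, creating a batch of empty files on the
brick filesystem and watching for failures (a sketch; the path and file count are
arbitrary, and the scratch directory should be removed afterwards):
# mkdir /mseas-data-0-1/.inode-test
# for i in $(seq 1 100000); do touch /mseas-data-0-1/.inode-test/f$i || break; done
# df -i /mseas-data-0-1
# rm -rf /mseas-data-0-1/.inode-test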
It was a long shot; you would have had to have many millions of small files to run
out of inodes, and it also depends on the type of filesystem, but I've been
caught out a few times with ext4.
On 18 Jan 2014 08:48, Pat Haley wrote:
Hi Franco,
I checked using df -i on all 3 bricks. No brick is ove
Hi Franco,
I checked using df -i on all 3 bricks. No brick is over
1% inode usage.
Thanks.
Pat
Have you run out of inodes on the underlying filesystems?
On 18 Jan 2014 05:41, Pat Haley wrote:
Latest updates:
no error messages were found on the log files of the bricks.
The error messages appear on the client log files. Writing
from a second client also has the same errors.
Note that i
Latest updates:
no error messages were found on the log files of the bricks.
The error messages appear on the client log files. Writing
from a second client also has the same errors.
Note that if I try to write a directory with 480 6Mb files
to /projects, over 3/4 of the files are written. I
Hi,
Some additional data
[root@mseas-data save]# gluster volume info
Volume Name: gdata
Type: Distribute
Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Bric
Hi,
We are using gluster to present 3 bricks as a single name space.
We appear to have a situation in which gluster thinks there
is no disk space when there is actually plenty. I have restarted
the glusterd daemons on all three bricks and I still get the
following message
/bin/cp: cannot create
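For context, the usual first check in this situation is to compare what the client
mount reports against each brick (a sketch; brick mount points as they appear
elsewhere in this thread, with the client mount point assumed):
# df -h                      (on the client, for the glusterfs mount)
# df -h /mseas-data-0-0      (on gluster-0-0)
# df -h /mseas-data-0-1      (on gluster-0-1)
# df -h /data                (on gluster-data)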