In case of a failure where one of the servers gets its OS disk
completely wiped, you need to follow this guide:
http://europe.gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server
--
Vikas Gorur
Engineer - Gluster
--
There was some urgency in writing the tool, and I forgot to document that it
won't handle files with a ':' in them. It will be fixed soon.
--
http://github.com/vikasgorur/gfid
The repository contains the tools as well as a README that explains how to
use them.
Your questions and comments are welcome.
--
It will not kill the clients.
--
Could you please file a bug about this on
http://bugs.gluster.com/ ?
--
The mount will _fail_, however, if server1 is offline at the
time of mounting. You can work around that by setting up a DNS name that
round-robins across all servers. Or you could hack up a script that pings a
server before selecting it for the mount.
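As a rough sketch of such a wrapper (server and volume names are placeholders, not from the original thread; this assumes the mount.glusterfs helper that accepts server:/volume):
#!/bin/sh
# Try each server in turn and mount from the first one that answers a ping.
for srv in server1 server2 server3; do
    if ping -c 1 -W 2 $srv > /dev/null 2>&1; then
        mount -t glusterfs $srv:/testvolume /mnt/glusterfs && break
    fi
done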
--
help you increase
performance.
--
Add this line to /etc/sysctl.conf:
vm.swappiness=0
and apply it with:
# sysctl -p
--
You can check by running:
# showmount -e localhost
The output should show:
/vol0/2e8a14e9-83d6-6e68-7c1a-464e6691988b *
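A client can then mount that export with the regular NFS client, along these lines (server name and mount point are placeholders):
# mount -t nfs -o vers=3,nolock server1:/vol0/2e8a14e9-83d6-6e68-7c1a-464e6691988b /mnt/nfs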
--
Then you can set this option on your Gluster volume as a
workaround:
# gluster volume set <volname> performance.quick-read off
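(Replace <volname> with your volume's name. To confirm the option took effect, you can run something like:
# gluster volume info <volname>
which lists the reconfigured options for the volume.)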
--
For now, please use these options in your client
volume file so that Gluster keeps track of attributes properly:
option metadata-change-log on
option metadata-lock-server-count 1
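For reference, a rough sketch of where these options sit in the client volume file (volume and subvolume names are placeholders, not from the original thread):
volume replicated
  type cluster/replicate
  option metadata-change-log on
  option metadata-lock-server-count 1
  subvolumes brick1 brick2
end-volume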
--
client fails. The second
node can then hold the lock and continue with its write.
--
The clients only connect to the management IP. If you
want failover for the NFS server, you'd set up a virtual IP using ucarp
(http://www.ucarp.org) and the clients would use only this virtual IP.
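A rough sketch of such a ucarp setup (addresses, interface, and script paths are placeholders, not from the original thread):
# ucarp --interface=eth0 --srcip=192.168.1.10 --vhid=1 --pass=secret \
    --addr=192.168.1.100 --upscript=/etc/vip-up.sh --downscript=/etc/vip-down.sh &
The up script adds the virtual IP (e.g. ip addr add 192.168.1.100/24 dev eth0), the down script removes it, and the NFS clients mount only 192.168.1.100.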
--
A way to monitor the state would be helpful.
--
Each such file contains
a pointer to the actual file in its extended attributes. It is completely normal
to have these files.
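If you are curious, you can inspect a link file directly on the backend; the pointer is usually kept in a trusted.glusterfs.dht.linkto extended attribute (the attribute name and path here are assumptions and may differ by release):
# getfattr -d -m . -e text /export/brick/path/to/linkfile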
--
Running 'ls -lR' at the root of the mount will trigger a self-heal and copy over
all the files to the new, third server.
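If 'ls -lR' misses anything, a find-based crawl of the mount point does the same job; a rough sketch (mount point is a placeholder):
# find /mnt/glusterfs -noleaf -print0 | xargs -0 stat > /dev/null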
--
Let us know how it goes.
--
to self-heal.
2) readdir (ls) is always sent to the first subvolume. This is necessary to
ensure consistent inode numbers. Perhaps you could ensure that the first
subvolume is local? (Make sure the order of subvolumes is the same on all your
clients.)
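In other words, every client's volume file should list the subvolumes in the same order; a minimal sketch (names are placeholders, not from the original thread):
volume replicated
  type cluster/replicate
  # keep this order identical on every client; the first subvolume serves readdir
  subvolumes brick1 brick2
end-volume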
--
Let us know what numbers you get after making these changes. You should also
do at least a few thousand requests to smooth out random fluctuations; comparing
the times for a single request does not tell us much.
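For instance, a crude way to time a few thousand small requests in one go (path and count are placeholders):
$ time sh -c 'for i in $(seq 1 5000); do cat /mnt/glusterfs/testfile > /dev/null; done'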
--
Could you also post your volume files (server and client)?
--
$ glusterfs-volgen --raid=1 --name=testvolume server_1:/backend server_2:/backend
(You can give an IP address instead of a name like server_1.)
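You can then mount with the generated client volume file, along these lines (this assumes volgen wrote a client file named testvolume-tcp.vol; the exact filename can differ between releases):
$ glusterfs -f testvolume-tcp.vol /mnt/testvolume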
--
password: glusteradmin
Then do 'sudo -s' to get a root shell.
--
be interested if anyone was implementing NFS v4 on top of Gluster.
Gluster should theoretically work on an NFSv4 backend. We would be interested to
hear about it as well if anyone gets it working.
--
This will not work, as NFS does not support extended attributes.
--
You can sync the backends before starting GlusterFS. Self-heal would be overkill
because it holds locks and so on to ensure no one else writes to the same
file while the healing is happening.
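A rough sketch of syncing the backends directly, run while glusterfsd is stopped on the target (host and paths are placeholders, not from the original thread):
# rsync -aHX --numeric-ids server1:/export/brick/ /export/brick/
The -X flag copies the extended attributes that GlusterFS stores on the backend files.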
--
There is no way to disable this.
--
128k is the largest chunk that the kernel
will send to GlusterFS. If you try higher block sizes, the kernel breaks
them up into 128k chunks, and thus you will see no further improvement.
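You can see the plateau yourself with dd at different block sizes; a quick sketch (path is a placeholder):
$ dd if=/dev/zero of=/mnt/glusterfs/ddtest bs=128k count=8192
$ dd if=/dev/zero of=/mnt/glusterfs/ddtest bs=1M count=1024
Both write 1 GB, and the throughput should come out roughly the same, since the kernel splits the 1M writes into 128k chunks before they reach GlusterFS.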
--
If you have a core file from glusterfs (look in /core.*), you can get a
backtrace by running:
# gdb /path/to/glusterfs /path/to/core/file
(gdb) thread apply all bt full
Please send us the output of that command.
--
I hope that answers all your questions.
--
This has been fixed in the 3.x releases.
--
The first server is used for locking.
--
On Feb 24, 2010, at 11:44 AM, Joelly Alexander wrote:
Hi,
I want to set up a two-node GlusterFS with AFR.
My servers have 4x 1Gb interfaces built in, so I thought it might be possible to
connect the two nodes together on two interfaces with bonding mode 0 for
replication and the other two
Harald Stürzebecher wrote:
Would reexporting the GlusterFS volume as described in
http://gluster.com/community/documentation/index.php/Storage_Server_Installation_and_Configuration#Add_NFSv3_Protocol
work in this case?
Re-export will work without problems.
Vikas
Olivier Le Cam wrote:
Does it make sense to go that way? In fact, I don't have all the
equipment required to switch to a (replicated/distributed) 4-brick setup.
This is why it came to my mind to do it step-by-step.
Yes, the steps you outlined will work. Just export your existing volume
using
Olivier Le Cam wrote:
Thanks Vikas. BTW, might it be possible to have the same volume
exported both by regular-NFS and GlusterFS at the same time in order
to migrate my clients smoothly? Is there any risks to get GlusterFS
confused and/or the ext3 volume damaged?
That would be quite risky. If
Fredrik Widlund wrote:
How do I add a brick to a replicate setup with preexisting data. There is old
article from almost 2 years back, and I don't think I will try that one out.
Do you want to increase the number of replicas, say from 2 to 3?
If so, you can add a new subvolume to your
Adrian Revill wrote:
This is an edge case scenario, but one that I am worried about.
Say you have 2 storage nodes as a mirror on a 10G network, and you
write into a client mount also on the 10G network. Then data will be
replicated by the client via the 10G network to the 2 servers.
If one
Mike,
There's a typo in your client volume file. You've specified the same
server twice in your replicate configuration.
volume pair02
type cluster/replicate
subvolumes cf03 cf03
end-volume
That line should be:
subvolumes cf03 cf04
I'm guessing that you killed the server cf03 and
milo...@gmail.com wrote:
I expect that a background process will re-sync the content between
the two server nodes. This hasn't happened (check the timestamps in
the log below).
Self-heal is not done by a background process. When a server comes back
up, self-heal on files will happen the next
milo...@gmail.com wrote:
Any hints about the problem that, with the original
glusterfs-volgen-generated config (with writeback, readahead, etc.), 'ls -lR' does
not start self-healing and I need to run this instead:
find /gtest/ -type f -exec ls -la {} \; > /dev/null
Both 'ls -lR' and 'find' do
Jiann-Ming Su wrote:
In the User Guide:
http://www.gluster.com/community/documentation/index.php/User_Guide#Replicate
The next release of GlusterFS will add the following features:
Both are already in 3.0.0
*Ability to specify the sub-volume from which read operations are to
be done (this
Adrian Revill wrote:
Hi
I am looking at which is the best bonding mode for gigabit links for
the servers. I have a choice of using the 802.3ad (mode4) or
bonding-rr (mode0)
I would prefer to use mode4 but this will only give a single TCP
connection 1Gbit of bandwidth, where mode0 will give
Dmitry Sukhodoyev wrote:
I have a ports directory with a lot of files:
ls -lR ports | wc -l
228990
and not much size:
du -hs ports
739M    ports
Then I try the same commands on glusterfs (afr+dht) and I receive an error: du:
fts_read: No such file or directory. How do I fix it? Volume files for server
and
Dmitry Sukhodoyev wrote:
It is a static directory; no other actions were performed while du was
running.
Can you try this with GlusterFS 3.0 if possible? Are you using
GlusterFS on Solaris? Also, how easy is it to reproduce this?
Vikas
Adrian Revill wrote:
Hi
I am having trouble with AFR, I have two servers set up to mirror, if
I shut down either server and then copy a file into the client mount
then restart the server I get a 0 size file on the newly started
servers backing store. Which I guess is to be expected.
But if i
Nick Birkett wrote:
Yesterday I updated to 3.0.0 (server and clients) and re-configured
the server and client vol files using glusterfs-volgen (renamed some
of the vol names).
Nick,
Thanks for testing. If you still have the hung processes, can you get
the statedump
for us (for both servers
Adrian Revill wrote:
Hi
I have read through the docs and google and I think i am trying to do
this right, but just wanted to be sure i have it correctly configured.
I have 2 servers factory1 and factory2, both clean installs of RHEL5.4
on basic hardware.
I followed the instructions from
Krzysztof Strasburger wrote:
volume replicated
type cluster/replicate
subvolumes sub1 sub2
end-volume
and on host2:
volume replicated
type cluster/replicate
subvolumes sub2 sub1
end-volume
then following (positive) side-effects should occur:
1. After a crash, ls -R would correctly
joel vennin wrote:
Hi! I'm continuing to test glusterfs.
I have a strange problem with replication, in fact. My setup is pretty easy: I have
3 nodes, and I want to do replication across these 3 nodes.
Basic configuration:
volume mega-replicator
type cluster/replicate
subvolumes
Thomas Wakefield wrote:
I want to have two or more servers each serving out 40TB of disk space
(80TB+ of total space). And i am wondering the best way to configure
this amount of disk.
Is it possible to have multiple volumes mounted on single gluster
server, but for the client to see the
- Christian Schab christian.sc...@me.com wrote:
On each Webserver is a local brick used as a type of cache.
Can you explain a bit more about how the local brick is used
and why it's needed?
Now when we change a file on a Fileserver.
Are you changing the file directly on the backend on
- Christian Schab christian.sc...@me.com wrote:
It is just used as another copy of data.
I feel like using such a local brick optimizes the performance very
much.
In that case you should set the option read-subvolume to brick,
so that reads go to the local brick.
No, we change file
- Christian Schab christian.sc...@me.com wrote:
When I use this option, and the file isn't available in brick, does
it automatically load the file from a fileserver?
Yes.
And now the old file is on the Fileserver again ... :-(
Can you try to reproduce this with the GlusterFS
Signed-off-by: Anand V. Avati av...@dev.gluster.com
BUG: 150 (AFR readdir should not failover to other subvolume)
URL: http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=150
and in mainline by:
commit 77d8cfeab52cd19f3b70abd9c3a2c4bbc9219bff
Author: Vikas Gorur vi
- Stephan von Krawczynski sk...@ithnet.com wrote:
Hello all,
I really do wonder if any of you does rsync onto glusterfs. I can't
hardly believe that, because 2.0.6 as every single release before has broken
mtimes on non-empty directories. Am I really the only one recognising this
- Sven Kummer m...@proxworx.org wrote:
I removed the files in the export directory on server1 only for
simulating an hdd crash (files lost) and want the server1 daemon to resync its
export directory with the other server.
Sven,
What is happening in your case is:
- Hiren Joshi j...@moonfruit.com wrote:
Hello all,
I have 2 servers both exporting 36 partitions. The client mirrors the
36 partitions on each server and puts them all into a DHT.
dd if=/dev/zero of=/home/webspace_glust/zeros bs=1024 count=1024000
Takes 8 minutes, compared to 30
- Marko Weber | Zackbummfertig we...@zackbummfertig.de wrote:
Hello,
is the communication between the glusterfs-connected servers secure?
Is SSL / TLS possible to encrypt the data transfer?
No. The general assumption is that GlusterFS runs inside a 'trusted' network
with
- Justin Kolb jk...@realtour.biz wrote:
Thanks for the clarification, it's what I had figured but wanted to
make sure.
Have another question: is it possible to dynamically add AFR nodes
(preferably without bringing down the server processes)?
You should be able to add a volume to the
- John Simmonds john2.simmo...@uwe.ac.uk wrote:
B: Samba's ping_pong test works if I do it on just one GlusterFS
client but if I run it on two simultaneously (like the Samba Wiki on
ping_pong suggests you test), it appears to lock up the ping_pong
program. Anyone got ping_pong test to
- Matt M mister...@gmail.com wrote:
If all the nodes are both client and servers, how many replicate
volumes should I have? I don't want a ton of replication traffic, but
redundancy is important. Maximizing the size of the volume is less
important to me since I'm just reclaiming
- Mathew Eis mat...@eisbox.net wrote:
1) Is the above possible with the current Gluster translators? I've
experimented with various combinations of translators, including
replicate, nufa, writebehind, and io-threads, and while I have been
able to improve performance of some operations,
- Jeff Evans je...@tricab.com wrote:
Not long at all, as this crash was a controlled (induced) one:
Thanks, Jeff. We have been able to reproduce this. You can track the
progress of this bug at:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=144
Vikas
- Hiren Joshi j...@moonfruit.com wrote:
My thinking is both on the client so:
I AFR my nodes.
I then DHT my AFR bricks.
I then mount the DHT vols.
Or would I get better performance the other way around?
DHT over AFR'd pairs is the configuration you want. You can
then add another AFR
Stephan,
Are you absolutely sure you're running 2.0.4? I'm asking because I
_can_ reproduce the problem exactly as you described in 2.0.3 without
the patch mentioned and can see the problem goes away with the patch/2.0.4.
Folks,
If one of you has a little time, could you please test this
and
- Stephan von Krawczynski sk...@ithnet.com wrote:
Maybe I should more precisely explain what I am doing, in case there
are related problems.
I am using a copy of the opensuse 11.1 DVD for that test. I copy the
whole DVD files into a directory called suse on the first server. The
- Hiren Joshi j...@moonfruit.com wrote:
Hi all,
I have a setup with 2 nodes mirrored. If I simulate a disk crash (take a
server node out, clear the data, and restart it), on the first stat of
files on the client it appears to be self-healing, but during this time
(it's syncing about
- Stephan von Krawczynski sk...@ithnet.com wrote:
Hello glusterfs team,
just in case you need this info:
2.0.3rc1 does not set the dir timestamp correctly while healing and
the
client processes exit immediately when starting bonnie.
Can you elaborate a little more? What
- Stephan von Krawczynski sk...@ithnet.com wrote:
Hello,
do you still have my config from the last debugging? We use the very
same testbed with rc1.
Sorry, I don't. Can you tell me briefly what's the config and when
do the directory timestamps end up being inconsistent?
Vikas
--
- maurizio oggiano oggiano.mauri...@gmail.com wrote:
Hi all,
I have a problem with the automatic file replication translator (AFR). I have two
servers, A and B. Both servers have the AFR client configured.
If I stop one server, for example B, the file system managed by AFR is not
available for 30 sec
- Phillip Walsh phil...@symmetry.com wrote:
Hello,
I'm having problems with 2 server setup using replicate to create a 2
system mirror for small HA setup. It seems like a locking issue or
something. The below configuration was based on a tutorial and seemed
solid when testing, however
- Peter Gervai grin...@gmail.com wrote:
However I cannot seem to be able to raise block performance below 64k
(especially around 4k) higher than 2-3MB/s (or 9-10MB/s without WB);
it basically doesn't change if I try to remove other translators.
CPU and network load seems to be low on
- eagleeyes eaglee...@126.com wrote:
Hi:
Is there a limit on the number of servers that can be used as storage in Gluster?
No, there is no limit on the number of servers that can be used as storage.
Vikas
--
- eagleeyes eaglee...@126.com wrote:
Thanks a lot, but in unify, dht, and stripe mode, is there a limit on
the number of servers?
No.
--
- Arend-Jan Wijtzes ajwyt...@wise-guys.nl wrote:
Hi Gluster people,
We are seeing errors when GlusterFS is being accessed after a long
period (days) of inactivity (the FS is used but not from this
machine).
The error is not related to the inactivity. Take a look at these lines of
the
- Arend-Jan Wijtzes ajwyt...@wise-guys.nl wrote:
So there is a timeout, but whatever the cause is, it's triggered by
long term inactivity. We never had any network problems.
Other machines that access the filesystem on a regular basis do not
show this problem. It's only the machine that
- Ville Tuulos tuu...@gmail.com wrote:
Not a single file is synced to the recovered node but I can read the
files ok from other replicas. However, if I create the missing
directories manually in the data directory, the files get synced
correctly.
A nasty implication of this bug is
- Daniel Jordan Bambach d...@lateral.net wrote:
that files in the io-cache will be invalidated after 1 second (by
default), and therefore if the access pattern for files is longer than
this, then the cache won't offer any benefit?
You can set the option cache-timeout to a higher
- Paras Fadte plf...@gmail.com wrote:
Hi,
Does the glusterfs client have any issues with stability? I ran a
glusterfs
setup for about 4 days and encountered an issue of the client suddenly
getting stopped, causing a Transport endpoint not connected error
message while accessing the mount
- Paras Fadte plf...@gmail.com wrote:
I would like to know the common circumstances under which the
glusterfs client would stop. Would it be related to a disk
issue? Space on disk?
You would get the Transport endpoint not connected error when the client
cannot reach the
- Stephan von Krawczynski sk...@ithnet.com wrote:
Hopefully you do not disconnect after one lost ping. Unfortunately
one can
experience some lost packets over the day. There is little you can do
about it.
Some are lost because of real disconnects but most are just glitches
in
switches
- Raja coksu...@spectrum.net.in wrote:
Can you please send me the output of this command run as root?
# getfattr -d -m '.*' -e hex zip/tba48701ma141232158444.zip
(The getfattr tool is part of the 'attr' package in Debian/Ubuntu.
Source can be found here:
- Raja coksu...@spectrum.net.in wrote:
Hi Vikas,
Thank you. It works fine; now I can copy the file as the apache user.
I need help with one more thing: while I am mounting
gluster it shows the whole path (in GlusterFS-2.0.1).
glusterfs -f /etc/glusterfs/glusterfs.vol /opt/home/storage/
- Matthew J. Salerno vagabond_k...@yahoo.com wrote:
I'm still unable to find a resolution. Has anyone else come across
this?
A patch that fixes this is under review. It should be in the repository in a
couple of days.
Vikas
--
A few recent fixes that have gone into the repository should fix this
issue. Can you please check with the latest git (or pre36)?
Vikas
--
2009/4/2 Stas Oskin stas.os...@gmail.com:
Great - hope it gets fixed soon! :)
This issue should be fixed in the latest git. Can you please check?
Vikas
--
2009/4/1 Vu Tong Minh vtm...@fpt.net:
Hello,
I have a system with 4 nodes.
Node 1 for downloading config as client
Node 2, node 3 for uploading config as client
Node 4 for storage config as server
1,2,3's config:
volume storage1
type protocol/client
option
2009/4/1 Vu Tong Minh vtm...@fpt.net:
I tried to load posix_locks on node1 too, but I got error:
Sorry, my bad. Got a little confused.
The problem with your config is that you should export the locks
volume from the server and specify locks as the remote-subvolume in
client.
So your
Stas,
Would it be possible to give us remote access to your cluster sometime
so that I can look at all the issues you are facing?
Vikas
--
2009/3/28 Sean Davis sdav...@mail.nih.gov:
My understanding is that distribute uses a hash to do the distribution. So,
if you copy the same file, you will get the same result every time; that is,
it will go to the same server every time. Hashing is deterministic in that
sense. Vikas or
2009/3/28 Simon Liang sim...@bigair.net.au:
Is there a way to use a round-robin or something, as opposed to hash?
The problem with distribute is that it does not know the disk has run out of
space until it starts copying and runs out of space.
Can this be avoided?
As Sean Davis said in
2009/3/27 Simon sim...@bigair.net.au:
Hi Vikas,
I am currently just testing a basic distribute with 2 servers.
Distribute schedules files to servers based on a hash of the inode
number. In your case it appears that the file has been scheduled to
server B by the hash algorithm.
Vikas
--
2009/3/26 Simon sim...@bigair.net.au:
Hi Guys,
I’ve built a client-side Distributed storage system with 2 servers and I’ve
run into a problem.
ServerA has about 3GB of free space, and ServerB has 600mb.
I’m trying to copy a 1GB file into the storage system, but it keeps trying
to copy
2009/3/26 Stas Oskin stas.os...@gmail.com:
Hi.
We erased all the data from our mount point, but the df still reports
it's almost full:
glusterfs 31G 27G 2.5G 92% /mnt/glusterfs
Running du either in the mount point, or in the back-end directory,
reports 914M.
How do we get the space
2009/3/25 Jeff Lord jl...@mediosystems.com:
I am wondering what the difference between these two might be?
volume replicate
type cluster/replicate
subvolumes gfs01-hq.hq.msrch gfs02-hq.hq.msrch
end-volume
volume replicate
type cluster/afr
subvolumes gfs01-hq.hq.msrch
2009/3/14 Keith Freedman freed...@freeformit.com:
all of a sudden, I'm getting messages such as this:
2009-03-13 23:14:06 C [posix.c:709:pl_forget] posix-locks-home1: Pending
fcntl locks found!
and some processes are hanging waiting presumably for the locks?
any way to find out what files