[Gluster-users] Performance on GlusterFS

2011-06-23 Thread Anish.B.Kumar
Hi

I have set up a 4-node cluster on virtual servers on the RHEL platform.
I am not able to get better performance statistics from GlusterFS than from
the local file system.
Kindly suggest a test run that can be used to differentiate between them.
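
As a minimal sketch of such a comparison (the mount point /mnt/gluster and the
local scratch directory /data/local below are placeholder paths, not taken from
any actual setup), a large sequential write and read with dd on both
filesystems is usually enough for a first look:

# write a 1 GB file to the local filesystem, then to the GlusterFS mount
dd if=/dev/zero of=/data/local/testfile bs=1M count=1024 conv=fsync
dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=1024 conv=fsync

# drop the page cache so the read test is not served from memory
sync; echo 3 > /proc/sys/vm/drop_caches

# read both files back and compare the throughput that dd reports
dd if=/data/local/testfile of=/dev/null bs=1M
dd if=/mnt/gluster/testfile of=/dev/null bs=1M

Tools such as iozone or fio give more detailed numbers (small-file and random
I/O in particular), but the dd pair above already shows the gap between the
local disk and the network-backed GlusterFS mount.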

Regards,
Anish Kumar


"Confidentiality Warning: This message and any attachments are intended only 
for the use of the intended recipient(s). 
are confidential and may be privileged. If you are not the intended recipient. 
you are hereby notified that any 
review. re-transmission. conversion to hard copy. copying. circulation or other 
use of this message and any attachments is 
strictly prohibited. If you are not the intended recipient. please notify the 
sender immediately by return email. 
and delete this message and any attachments from your system.

Virus Warning: Although the company has taken reasonable precautions to ensure 
no viruses are present in this email. 
The company cannot accept responsibility for any loss or damage arising from 
the use of this email or attachment."
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] "Invalid argument" when removing files and "No such file or directory" when creating files

2011-06-23 Thread Amar Tumballi
Hi,

Can you please paste your 'client' log files? They should help us debug why
you are getting such errors. We will also try to re-create the issue and see
if there is anything obvious.
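
For reference, the client (mount) log normally lives under /var/log/glusterfs/
on the machine where the volume is mounted, named after the mount point with
the slashes turned into dashes (this assumes a default install; packaged builds
may place it elsewhere). For a volume mounted at /mnt/tvol, a sketch of
collecting it would be:

# on the client machine
ls /var/log/glusterfs/
tail -n 200 /var/log/glusterfs/mnt-tvol.log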

-Amar

2011/6/24 nuaa_liuben 

> More information: the back-end filesystem is ext4 and the GlusterFS version
> is 3.2.1.
>
> thanks
>
> --
> >From: nuaa_liuben
> >Send Time: 2011-06-23 18:27:00
> >TO: gluster-users
> >CC:
> >Subject: [Gluster-users] "Invalid argument" when removing files and "No
> such file or directory" when creating files
>
> Hi, All,
>
> I created a Distributed-Stripe volume with 4 brick servers; the volume
> configuration is as follows:
>
> Volume Name: tvol
> Type: Distributed-Stripe
> Status: Started
> Number of Bricks: 3 x 4 = 12
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.8.200:/mnt/sdb/tvol
> Brick2: 192.168.8.201:/mnt/sdb/tvol
> Brick3: 192.168.8.202:/mnt/sdb/tvol
> Brick4: 192.168.8.203:/mnt/sdb/tvol
> Brick5: 192.168.8.200:/mnt/sdc/tvol
> Brick6: 192.168.8.201:/mnt/sdc/tvol
> Brick7: 192.168.8.202:/mnt/sdc/tvol
> Brick8: 192.168.8.203:/mnt/sdc/tvol
> Brick9: 192.168.8.200:/mnt/sdd/tvol
> Brick10: 192.168.8.201:/mnt/sdd/tvol
> Brick11: 192.168.8.202:/mnt/sdd/tvol
> Brick12: 192.168.8.203:/mnt/sdd/tvol
>
>
> I mounted the volume using the native glusterfs client, but an "Invalid
> argument" error occurred when removing files, and "No such file or directory"
> when creating files.
>
> [root@client cifs]# touch b
> touch: cannot touch `b': No such file or directory
> [root@client cifs]# rm -rf a
> rm: cannot remove `a': Invalid argument
>
> Has anybody met such problems? Any advice or solutions would be appreciated.
>
> BR,
> liuben
>
> 
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] "Invalid argument" when removing files and "No such file or directory" when creating files

2011-06-23 Thread nuaa_liuben
More information: the back-end filesystem is ext4 and the GlusterFS version is 3.2.1.

thanks




>From:  nuaa_liuben 
>Send Time:  2011-06-23  18:27:00 
>TO: gluster-users 
>CC:   
>Subject: [Gluster-users] "Invalid argument" when removing files and "No
>such file or directory" when creating files
> 
Hi, All,

I created a Distributed-Stripe volume with 4 brick servers; the volume
configuration is as follows:

Volume Name: tvol
Type: Distributed-Stripe
Status: Started
Number of Bricks: 3 x 4 = 12
Transport-type: tcp
Bricks:
Brick1: 192.168.8.200:/mnt/sdb/tvol
Brick2: 192.168.8.201:/mnt/sdb/tvol
Brick3: 192.168.8.202:/mnt/sdb/tvol
Brick4: 192.168.8.203:/mnt/sdb/tvol
Brick5: 192.168.8.200:/mnt/sdc/tvol
Brick6: 192.168.8.201:/mnt/sdc/tvol
Brick7: 192.168.8.202:/mnt/sdc/tvol
Brick8: 192.168.8.203:/mnt/sdc/tvol
Brick9: 192.168.8.200:/mnt/sdd/tvol
Brick10: 192.168.8.201:/mnt/sdd/tvol
Brick11: 192.168.8.202:/mnt/sdd/tvol
Brick12: 192.168.8.203:/mnt/sdd/tvol


I mounted the volume using the native glusterfs client, but an "Invalid argument"
error occurred when removing files, and "No such file or directory" when creating
files.

[root@client cifs]# touch b
touch: cannot touch `b': No such file or directory
[root@client cifs]# rm -rf a
rm: cannot remove `a': Invalid argument

Has anybody met such problems? Any advice or solutions would be appreciated.

BR,
liuben

 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Files present on the backend but have become invisible from clients

2011-06-23 Thread Burnash, James
Hi Anand.

Here is my script (no heckling - I know I do not produce beautiful code :)), 
and below is the invocation I used:

#!/bin/bash
#
# description:  # Gluster server script to check brick extended attributes
#
# J. Burnash 2011
#
# $Id$

ECHO_CMD=""
SORT_CMD="cat"
brick_parent="read-only"
while getopts  "a:b:ehsw" flag
do
case $flag in
a) attribute=$OPTARG;
   OPTARG="";
   ;;
b) brick=$OPTARG;
   OPTARG="";
   ;;
e) ECHO_CMD="-n";
   ;;
s) SORT_CMD="sort -k 2";
   ;;
w) brick_parent="read-write"
   OPTARG="";
   ;;
h) progname=$(basename $0);
   echo "$progname: Usage:";
   echo "$progname <-a attribute> <-b brick> hostname1 
 ... ";
   echo "where  -e puts the hostname at the beginning of the 
output from the requested comand,";
   echo "   -a  sets the extended attribute 
to be displayed,";
   echo "   -b  sets the brick for which 
extended attributes are to be displayed,";
   echo "and-h gives this help message";
   echo "the brick and attribute arguments may be enclosed in 
quotes to allow multiple matches for each host";
   echo "Example: $progname 'service nitemond restart' 
hostname1 "
   exit;
   ;;
*) echo "Unknown argument $flag";
   ;;
esac
# Debug
#echo "Flag=$flag OPTIND=$OPTIND OPTARG=$OPTARG"
done

# Dispose of processed option arguments:
shift $((OPTIND-1)); OPTIND=1

loop_check -f $ECHO_CMD "cd /export/$brick_parent; for brick_root in $brick; do echo -n \$HOSTNAME; getfattr -d -e hex -m $attribute \$brick_root | xargs echo | sed -e \"s/# file:/ /\" ; done" $@ | awk '{print $2,$3,$1}' | $SORT_CMD

Invocation: check_brick_attrs_ro -e -s -a trusted.afr -b 'g0{1,2}' 
jc1letgfs{14,15,17,18}

Thanks,

James Burnash
Unix Engineer
Knight Capital Group

From: Anand Avati [mailto:anand.av...@gmail.com]
Sent: Thursday, June 23, 2011 6:31 AM
To: Jeff Darcy
Cc: Burnash, James; gluster-users@gluster.org
Subject: Re: [Gluster-users] Files present on the backend but have become 
invisible from clients


On Thu, Jun 23, 2011 at 2:10 AM, Jeff Darcy <jda...@redhat.com> wrote:
On 06/22/2011 02:44 PM, Burnash, James wrote:
> g01/pfs-ro1-client-0=0x jc1letgfs17
> g01/pfs-ro1-client-0=0x0608 jc1letgfs18
> g01/pfs-ro1-client-20=0x jc1letgfs14
> g01/pfs-ro1-client-20=0x0200 jc1letgfs15
> g02/pfs-ro1-client-2=0x jc1letgfs17
> g02/pfs-ro1-client-2=0x4504 jc1letgfs18
> g02/pfs-ro1-client-22=0x jc1letgfs14
> g02/pfs-ro1-client-22=0x0200 jc1letgfs15
>
> Would anybody have any insights as to what is going on here? I'm
> seeing attributes in my sleep these days ... that cannot be good!

Can you give your script or explain what each of those fields mean and how they 
fit into your volume configuration? Also can you post your client volfile?

Avati





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Files present on the backend but have become invisible from clients

2011-06-23 Thread Burnash, James
Sorry - volfile:

http://pastebin.com/2hYZw2V9

James Burnash
Unix Engineer
Knight Capital Group

From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Burnash, James
Sent: Thursday, June 23, 2011 2:03 PM
To: 'Anand Avati'; Jeff Darcy
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Files present on the backend but have become 
invisible from clients

Hi Anand.

Here is my script (no heckling - I know I do not produce beautiful code :)), 
and below is the invocation I used:

#!/bin/bash
#
# description:  # Gluster server script to check brick extended attributes
#
# J. Burnash 2011
#
# $Id$

ECHO_CMD=""
SORT_CMD="cat"
brick_parent="read-only"
while getopts  "a:b:ehsw" flag
do
case $flag in
a) attribute=$OPTARG;
   OPTARG="";
   ;;
b) brick=$OPTARG;
   OPTARG="";
   ;;
e) ECHO_CMD="-n";
   ;;
s) SORT_CMD="sort -k 2";
   ;;
w) brick_parent="read-write"
   OPTARG="";
   ;;
h) progname=$(basename $0);
   echo "$progname: Usage:";
   echo "$progname <-a attribute> <-b brick> hostname1 
 ... ";
   echo "where  -e puts the hostname at the beginning of the 
output from the requested comand,";
   echo "   -a  sets the extended attribute 
to be displayed,";
   echo "   -b  sets the brick for which 
extended attributes are to be displayed,";
   echo "and-h gives this help message";
   echo "the brick and attribute arguments may be enclosed in 
quotes to allow multiple matches for each host";
   echo "Example: $progname 'service nitemond restart' 
hostname1 "
   exit;
   ;;
*) echo "Unknown argument $flag";
   ;;
esac
# Debug
#echo "Flag=$flag OPTIND=$OPTIND OPTARG=$OPTARG"
done

# Dispose of processed option arguments:
shift $((OPTIND-1)); OPTIND=1

loop_check -f $ECHO_CMD "cd /export/$brick_parent; for brick_root in $brick; do echo -n \$HOSTNAME; getfattr -d -e hex -m $attribute \$brick_root | xargs echo | sed -e \"s/# file:/ /\" ; done" $@ | awk '{print $2,$3,$1}' | $SORT_CMD

Invocation: check_brick_attrs_ro -e -s -a trusted.afr -b 'g0{1,2}' 
jc1letgfs{14,15,17,18}

Thanks,

James Burnash
Unix Engineer
Knight Capital Group

From: Anand Avati [mailto:anand.av...@gmail.com]
Sent: Thursday, June 23, 2011 6:31 AM
To: Jeff Darcy
Cc: Burnash, James; gluster-users@gluster.org
Subject: Re: [Gluster-users] Files present on the backend but have become 
invisible from clients


On Thu, Jun 23, 2011 at 2:10 AM, Jeff Darcy <jda...@redhat.com> wrote:
On 06/22/2011 02:44 PM, Burnash, James wrote:
> g01/pfs-ro1-client-0=0x jc1letgfs17
> g01/pfs-ro1-client-0=0x0608 jc1letgfs18
> g01/pfs-ro1-client-20=0x jc1letgfs14
> g01/pfs-ro1-client-20=0x0200 jc1letgfs15
> g02/pfs-ro1-client-2=0x jc1letgfs17
> g02/pfs-ro1-client-2=0x4504 jc1letgfs18
> g02/pfs-ro1-client-22=0x jc1letgfs14
> g02/pfs-ro1-client-22=0x0200 jc1letgfs15
>
> Would anybody have any insights as to what is going on here? I'm
> seeing attributes in my sleep these days ... that cannot be good!

Can you give your script or explain what each of those fields mean and how they 
fit into your volume configuration? Also can you post your client volfile?

Avati



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Files present on the backend but have become invisible from clients

2011-06-23 Thread Jeff Darcy
On 06/23/2011 12:38 PM, Burnash, James wrote:
> Well, it took me 3 reads (1 out loud) to process what was going on in
> your explanation - but then it all made sense :-)

Sorry about that. People have told me I can be a bit cryptic when I've
had too much caffeine. Unfortunately I'm worse when I haven't had enough. ;)

> As to your question about xattributes on those 4 directories at the 
> end of your message:
> 
> fs18/g01/pfs-ro1-client-1
> getfattr -d -e hex -m - /export/read-only/g01
> getfattr: Removing leading '/' from absolute path names
> # file: export/read-only/g01
> trusted.afr.pfs-ro1-client-0=0x0608
> trusted.afr.pfs-ro1-client-1=0x
> trusted.gfid=0x0001
> trusted.glusterfs.dht=0x000133303ffb
> trusted.glusterfs.test=0x776f726b696e6700

OK, so much for that theory.  The xattrs for the nodes themselves do
seem to be present and valid, so we wouldn't get past the first step of
the sequence I described.  What I do know is that we're getting to a
place in the code (we call afr_sh_wise_nodes_conflict and it returns
TRUE) where all of the nodes seem to be "accusing" each other even
though no such pattern is apparent from the xattrs.  If the xattrs are
there, then maybe we're looking at the wrong ones.  Does fs18 know that
its g01 is "client-1" in the volfile, and thus should be looking at
trusted.afr.pfs-ro1-client-1 for itself?  Or does it think it should be
looking at trusted.afr.pfs-ro1-client57 - which obviously isn't there?
I guess there might be cases where that could happen because of
inconsistent volfiles, perhaps some kind of DNS issues, maybe even bugs
in the code.  Maybe looking for "pending_key" in a state dump (what you
get if you send SIGUSR1 to glusterfsd) would reveal any such
inconsistencies.
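
As a rough sketch of that check, assuming the dump lands in /tmp/glusterdump.<pid>
(the usual location for this generation of GlusterFS; adjust the path if your
build writes it elsewhere), on each brick server:

# ask every running brick process for a state dump
kill -USR1 $(pidof glusterfsd)

# then look for the pending-key lines in the dumps
grep -i pending_key /tmp/glusterdump.*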

Yeah, I'm grasping at straws here.  It's just not making sense.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Files present on the backend but have become invisible from clients

2011-06-23 Thread Burnash, James
Hi Jeff.

Well, it took me 3 reads (1 out loud) to process what was going on in your
explanation - but then it all made sense :-)

As to your question about xattributes on those 4 directories at the end of your 
message:

fs18/g01/pfs-ro1-client-1
getfattr -d -e hex -m - /export/read-only/g01
getfattr: Removing leading '/' from absolute path names
# file: export/read-only/g01
trusted.afr.pfs-ro1-client-0=0x0608
trusted.afr.pfs-ro1-client-1=0x
trusted.gfid=0x0001
trusted.glusterfs.dht=0x000133303ffb
trusted.glusterfs.test=0x776f726b696e6700

fs15/g01/pfs-ro1-client-21
getfattr -d -e hex -m - /export/read-only/g01
getfattr: Removing leading '/' from absolute path names
# file: export/read-only/g01
trusted.afr.pfs-ro1-client-20=0x0200
trusted.afr.pfs-ro1-client-21=0x
trusted.gfid=0x0001
trusted.glusterfs.dht=0x00014cc85993
trusted.glusterfs.test=0x776f726b696e6700

fs18/g02/pfs-ro1-client-3
getfattr -d -e hex -m - /export/read-only/g02
getfattr: Removing leading '/' from absolute path names
# file: export/read-only/g02
trusted.afr.pfs-ro1-client-2=0x4504
trusted.afr.pfs-ro1-client-3=0x
trusted.gfid=0x0001
trusted.glusterfs.dht=0x00013ffc4cc7
trusted.glusterfs.test=0x776f726b696e6700

fs15/g02/pfs-ro1-client-23
getfattr -d -e hex -m - /export/read-only/g02
getfattr: Removing leading '/' from absolute path names
# file: export/read-only/g02
trusted.afr.pfs-ro1-client-22=0x0200
trusted.afr.pfs-ro1-client-23=0x
trusted.gfid=0x0001
trusted.glusterfs.dht=0x00015994665f
trusted.glusterfs.test=0x776f726b696e6700

Does that help at all?

James Burnash
Unix Engineer
Knight Capital Group


-Original Message-
From: Jeff Darcy [mailto:jda...@redhat.com]
Sent: Wednesday, June 22, 2011 4:41 PM
To: Burnash, James
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Files present on the backend but have become 
invisible from clients

On 06/22/2011 02:44 PM, Burnash, James wrote:
> g01/pfs-ro1-client-0=0x jc1letgfs17
> g01/pfs-ro1-client-0=0x0608 jc1letgfs18
> g01/pfs-ro1-client-20=0x jc1letgfs14
> g01/pfs-ro1-client-20=0x0200 jc1letgfs15
> g02/pfs-ro1-client-2=0x jc1letgfs17
> g02/pfs-ro1-client-2=0x4504 jc1letgfs18
> g02/pfs-ro1-client-22=0x jc1letgfs14
> g02/pfs-ro1-client-22=0x0200 jc1letgfs15
>
> Would anybody have any insights as to what is going on here? I'm
> seeing attributes in my sleep these days ... that cannot be good!

When I look at this, a few things occur to me.  First, those are some pretty 
big metadata-change numbers.  For g02 on fs18, 0x4504 is actually 
0x0445 - about 67M - after byte swapping.  The other thing that seems 
strange is that it always seems to be the second member of a replica pair 
"indicting" the first. Lastly, if you don't see any non-zero xattrs besides 
those above then this is not a normal split-brain situation.  It might be a 
more exotic kind, based on
*missing* xattrs.  Here's the sequence:

* Lookup for '/' on client-0 returns zero opcounts for both client-0
  and client-1

* Lookup for '/' on client-1 returns a non-zero opcount for client-0 and
  *no xattr at all* (i.e. missing) for client-1

* Client-1 is declared "ignorant" in afr_sh_build_pending_matrix

* Client-0's value for client-1 is incremented, again in
  afr_sh_build_pending_matrix

* Client-0 and client-1 are both marked "wise" in afr_sh_mark_sources

* Client-0 and client-1 both receive wisdom=0 in afr_sh_compute_wisdom

* No wisdom=1 nodes are found in afr_sh_wise_nodes_conflict

* The "Unable to self-heal" messages come from afr_sh_metadata_fix

So, what I'd do is check whether the following xattrs are missing:

fs18/g01/pfs-ro1-client-1
fs15/g01/pfs-ro1-client-21
fs18/g02/pfs-ro1-client-3
fs15/g02/pfs-ro1-client-23
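
A quick way to check for exactly those attributes on the matching servers, as a
sketch (the brick paths follow the getfattr examples elsewhere in this thread):

# e.g. on fs18; a missing attribute is reported as "No such attribute"
getfattr -n trusted.afr.pfs-ro1-client-1 -e hex /export/read-only/g01
getfattr -n trusted.afr.pfs-ro1-client-3 -e hex /export/read-only/g02

# and on fs15
getfattr -n trusted.afr.pfs-ro1-client-21 -e hex /export/read-only/g01
getfattr -n trusted.afr.pfs-ro1-client-23 -e hex /export/read-only/g02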

There might be other "false split-brain" scenarios lurking in this code.
 I don't fully understand it, but it seems to go through a lot of paths that 
might not have been fully tested.



[Gluster-users] Difference type of volume in GlusterFS

2011-06-23 Thread Marco Agostini
Hi, I have read the documentation on www.gluster.com but I don't
understand the difference between some of the volume types.

I know that a Replicated volume is similar to RAID 1 across 2 servers, but I
don't understand how Distributed and Striped volumes work.

Can someone help me understand?
Thank you very much.
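
Very briefly: a Distributed volume places each whole file on exactly one brick
(chosen by hashing the file name), so capacity adds up but a file never spans
bricks; a Striped volume cuts every file into fixed-size chunks and spreads the
chunks across the stripe's bricks, so a single large file can use several
bricks at once. A sketch of how the three basic types are created (server names
and brick paths below are placeholders):

# distributed: each file lives, whole, on one of the bricks
gluster volume create dist-vol transport tcp server1:/export/brick server2:/export/brick

# replicated: every file is mirrored on both bricks (the RAID 1-like case)
gluster volume create repl-vol replica 2 transport tcp server1:/export/brick server2:/export/brick

# striped: each file is split into chunks spread across both bricks
gluster volume create stripe-vol stripe 2 transport tcp server1:/export/brick server2:/export/brick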
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Files present on the backend but have become invisible from clients

2011-06-23 Thread Anand Avati
On Thu, Jun 23, 2011 at 2:10 AM, Jeff Darcy  wrote:

> On 06/22/2011 02:44 PM, Burnash, James wrote:
> > g01/pfs-ro1-client-0=0x jc1letgfs17
> > g01/pfs-ro1-client-0=0x0608 jc1letgfs18
> > g01/pfs-ro1-client-20=0x jc1letgfs14
> > g01/pfs-ro1-client-20=0x0200 jc1letgfs15
> > g02/pfs-ro1-client-2=0x jc1letgfs17
> > g02/pfs-ro1-client-2=0x4504 jc1letgfs18
> > g02/pfs-ro1-client-22=0x jc1letgfs14
> > g02/pfs-ro1-client-22=0x0200 jc1letgfs15
> >
> > Would anybody have any insights as to what is going on here? I'm
> > seeing attributes in my sleep these days ... that cannot be good!
>

Can you give your script or explain what each of those fields mean and how
they fit into your volume configuration? Also can you post your client
volfile?

Avati
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] "Invalid argument" when removing files and "No such file or directory" when creating files

2011-06-23 Thread nuaa_liuben
Hi, All,

I created a Distributed-Stripe volume with 4 brick servers; the volume
configuration is as follows:

Volume Name: tvol
Type: Distributed-Stripe
Status: Started
Number of Bricks: 3 x 4 = 12
Transport-type: tcp
Bricks:
Brick1: 192.168.8.200:/mnt/sdb/tvol
Brick2: 192.168.8.201:/mnt/sdb/tvol
Brick3: 192.168.8.202:/mnt/sdb/tvol
Brick4: 192.168.8.203:/mnt/sdb/tvol
Brick5: 192.168.8.200:/mnt/sdc/tvol
Brick6: 192.168.8.201:/mnt/sdc/tvol
Brick7: 192.168.8.202:/mnt/sdc/tvol
Brick8: 192.168.8.203:/mnt/sdc/tvol
Brick9: 192.168.8.200:/mnt/sdd/tvol
Brick10: 192.168.8.201:/mnt/sdd/tvol
Brick11: 192.168.8.202:/mnt/sdd/tvol
Brick12: 192.168.8.203:/mnt/sdd/tvol
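
For context, a 3 x 4 layout like this (three distribute subvolumes, each a
stripe set of 4) would typically come from a create command along the following
lines; this is a sketch inferred from the brick list, not necessarily the
command that was actually used:

gluster volume create tvol stripe 4 transport tcp \
    192.168.8.200:/mnt/sdb/tvol 192.168.8.201:/mnt/sdb/tvol \
    192.168.8.202:/mnt/sdb/tvol 192.168.8.203:/mnt/sdb/tvol \
    192.168.8.200:/mnt/sdc/tvol 192.168.8.201:/mnt/sdc/tvol \
    192.168.8.202:/mnt/sdc/tvol 192.168.8.203:/mnt/sdc/tvol \
    192.168.8.200:/mnt/sdd/tvol 192.168.8.201:/mnt/sdd/tvol \
    192.168.8.202:/mnt/sdd/tvol 192.168.8.203:/mnt/sdd/tvol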


I mounted the volume using the native glusterfs client, but an "Invalid argument"
error occurred when removing files, and "No such file or directory" when creating
files.

[root@client cifs]# touch b
touch: cannot touch `b': No such file or directory
[root@client cifs]# rm -rf a
rm: cannot remove `a': Invalid argument

Has anybody met such problems? Any advice or solutions would be appreciated.

BR,
liuben
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users