Re: [Gluster-users] Very slow ls

2014-02-21 Thread Franco Broi

Amazingly, setting cluster.readdir-optimize has fixed the problem. ls is still
slow, but there's no long pause on the last readdir call.

What does this option do and why isn't it enabled by default?
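For the record, here is roughly how I checked that the pause was gone. `ls -U` lists entries in readdir order without sorting or stat'ing them, so its runtime is dominated by the readdir calls themselves. This is only a sketch against a scratch directory; point DIR at a directory on your own Gluster mount to measure the real thing:

```shell
# Stand-in directory full of subdirectories; replace DIR with a path on
# the Gluster mount to measure the actual volume.
DIR=$(mktemp -d)
for i in $(seq 1 100); do mkdir "$DIR/sub$i"; done

# -U lists in directory order with no sort and no per-entry stat, so the
# elapsed time mostly reflects the readdir/getdents calls.
time ls -U "$DIR" > /dev/null

rm -rf "$DIR"
```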
___
From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on 
behalf of Franco Broi [franco.b...@iongeo.com]
Sent: Friday, February 21, 2014 7:25 PM
To: Vijay Bellur
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Very slow ls

On 21 Feb 2014 22:03, Vijay Bellur  wrote:
>
> On 02/18/2014 12:42 AM, Franco Broi wrote:
> >
> > On 18 Feb 2014 00:13, Vijay Bellur  wrote:
> >  >
> >  > On 02/17/2014 07:00 AM, Franco Broi wrote:
> >  > >
> >  > > I mounted the filesystem with trace logging turned on and can see that
> >  > > after the last successful READDIRP there are a lot of other connections
> >  > > being made by the clients repeatedly, which take minutes to complete.
> >  >
> >  > I did not observe anything specific which points to clients repeatedly
> >  > reconnecting. Can you point to the appropriate line numbers for this?
> >  >
> >  > Can you also please describe the directory structure being referred here?
> >  >
> >
> > I was tailing the log file while the readdir script was running and
> > could see respective READDIRP calls for each readdir, after the last
> > call all the rest of the stuff in the log file was returning nothing but
> > took minutes to complete. This particular example was a directory
> > containing a number of directories, one for each of the READDIRP calls
> > in the log file.
> >
>
> One tuning option that can possibly help:
>
> gluster volume set <volname> cluster.readdir-optimize on
>
> Let us know if there is any improvement after enabling this option.

I'll give it a go, but I think this is a bug and not a performance issue. I've
filed a bug report on Bugzilla.
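For anyone trying the same thing, a sketch of applying and verifying the suggested option ("myvol" is a placeholder volume name, not from the thread):

```shell
# "myvol" is a placeholder volume name; substitute your own.
gluster volume set myvol cluster.readdir-optimize on

# Reconfigured options show up under "Options Reconfigured":
gluster volume info myvol | grep readdir-optimize
```

This needs a live gluster cluster, so treat it as a CLI fragment rather than something to copy-paste blindly.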

>
> Thanks,
> Vijay
>
>




This email and any files transmitted with it are confidential and are intended 
solely for the use of the individual or entity to whom they are addressed. If 
you are not the original recipient or the person responsible for delivering the 
email to the intended recipient, be advised that you have received this email 
in error, and that any use, dissemination, forwarding, printing, or copying of 
this email is strictly prohibited. If you received this email in error, please 
immediately notify the sender and delete the original.





___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users



Re: [Gluster-users] upgrading from gluster-3.2.6 to gluster-3.4.2

2014-02-21 Thread Tamas Papp

On 02/21/2014 05:32 PM, Paul Simpson wrote:
> I too would like to know about this.
>
> I also tried this process on my 3.2.7 cluster and reported my findings
> here:
> http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/
>


Why don't you try to upgrade to 3.3 first?

tamas


Re: [Gluster-users] Java 1.4+ and Gluster - new project libgfapi-java-io

2014-02-21 Thread Harshavardhana
+1

On Fri, Feb 21, 2014 at 2:26 PM, Brad Childs  wrote:
> I would like to announce a new project on Gluster forge - libgfapi-java-io.  
> This project aims at creating a Java 1.4+ interface to gluster using libgfapi 
> interface.
>
> https://forge.gluster.org/libgfapi-java-io
>
> libgfapi-java-io provides:
>
> - Maven compatibility
> - Raw InputStream + OutputStream (very slow)
> - Buffered InputStream and OutputStream (much faster; amortizes the JNI
> call over larger blocks)
> - Full support for the following file and directory functions: delete,
> rename, mkdirs, list(), list(filter), getMod, getUid, getGid, setUid,
> setGid, getAtime, getMtime, getCtime, getBlockSize, length, exists.
> - A very Java/OO structure hiding the libgfapi static calls.  Quite
> similar to the java.io.File class.
>
> I will continue working on and improving the documentation, tests, and
> examples.  Currently the OutputStream is highly performant, beating raw
> FUSE writes, and the InputStream is nearly as performant as raw FUSE
> writes.  I should have the InputStream performance sorted soon.
>
> Of course if you hate old Java and are looking for the cleaner FileSystem 
> implementation of Java 1.7, don't forget Louis' glusterfs-java-filesystem 
> project:  https://forge.gluster.org/glusterfs-java-filesystem
>
>
> -bc
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users



-- 
Religious confuse piety with mere ritual, the virtuous confuse
regulation with outcomes



[Gluster-users] Failover and split-brain.

2014-02-21 Thread 邓尧
Hi all,
I'm new to Gluster and still trying to figure out whether it's suitable
for my project. Gluster has built-in failover support, which is great,
but after reading the user manual I found that split-brain, one of the
most difficult problems in HA design, wasn't mentioned at all.
Does Gluster have this problem during failover? If it does, how can data
corruption be avoided?

Thanks
Yao
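[Not an authoritative answer, but the knobs usually mentioned for limiting split-brain exposure on replicated volumes are the quorum settings. A hedged sketch; "myvol" is a placeholder volume name:]

```shell
# "myvol" is a placeholder.  Client-side quorum: replica writes fail
# rather than diverge when a majority of the replica set is unreachable.
gluster volume set myvol cluster.quorum-type auto

# Server-side quorum: glusterd stops local bricks when the trusted
# storage pool itself loses quorum.
gluster volume set myvol cluster.server-quorum-type server
```

These trade availability for consistency, so check the documentation for your release before enabling them.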

Re: [Gluster-users] Best Practices for different failure scenarios?

2014-02-21 Thread BGM
>> It might be very helpful to have a wiki next to this mailing list,
>> where all the good experience, all the proved solutions for "situations"
>> that are brought up here, could be gathered in a more
>> permanent and straight way.
> 
> +1. It would be very useful to evolve an operations guide for GlusterFS.
> 
>> .
>> To your questions I would add:
>> what's best practice in setting options for performance and/or integrity...
>> (yeah, well, for which use case under which conditions)
>> a mailinglist is very helpful for adhoc probs and questions,
>> but it would be nice to distill the knowledge into a permanent, searchable 
>> form.
>> .
>> sure anybody could set up a wiki, but...
>> it would need the acceptance and participation of an active group
>> to get best results.
>> so IMO the appropriate place would be somewhere close to gluster.org?
>> .
> 
> Would be happy to carry this in doc/ folder of glusterfs.git and collaborate 
> on it if a lightweight documentation format like markdown or asciidoc is used 
> for evolving this guide.

I haven't worked with either of them;
at first glance asciidoc looks easier to me
(assuming it's either/or?).
And (sorry for being blunt, I'm an op, not a dev ;-) you're suggesting
everybody sets up a git repo from which you pull, right?
Well, wouldn't a wiki be much easier, both to contribute to and to access
the information (like wiki.debian.org)?
The git-based solution might be easier to start off with,
but would it reach a big enough community?
Wouldn't a wiki also have a better PR/marketing effect, by being easier to
access?
Just a thought...
best regards,
Bernhard

> 
> -Vijay


Re: [Gluster-users] Message: 4 Split-brain

2014-02-21 Thread Khoi Mai
http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/


I spent 2 hours trying to work down my 4-node gluster split-brain.

Above is a doc from Joe Julian that I found helpful, and he's also very
helpful in the irc #gluster room.  Not sure if you have access to Red Hat,
but this pdf is helpful too:
https://access.redhat.com/site/sites/default/files/attachments/rhstorage_split-brain_20131120_0.pdf

I haven't come up with a clever way to script the procedure yet because I'm
still building my understanding of it.  It comes down to finding the gfid
that doesn't follow suit with the rest of the cluster, then deleting that
copy and its corresponding link.

I can't document all my steps here, so come join the irc #gluster room.
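[Roughly, the manual procedure from Joe's blog boils down to something like the following, run on the brick directory itself. Every path here is a placeholder, and you must be certain which copy is the stale one before deleting anything:]

```shell
# Run on the brick, never through the client mount.
BRICK=/export/brick1             # placeholder brick path
BAD=path/inside/brick/file.txt   # placeholder path of the stale copy

# Inspect the afr changelog and gfid xattrs to decide which replica is stale:
getfattr -m . -d -e hex "$BRICK/$BAD"

# Delete the stale copy AND its gfid hard link under .glusterfs; the link
# path is derived from the first two bytes of trusted.gfid, e.g.
# trusted.gfid=0xaabb... -> .glusterfs/aa/bb/aabb...
rm "$BRICK/$BAD"
rm "$BRICK/.glusterfs/aa/bb/aabb..."   # placeholder gfid path
```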


Khoi Mai
Union Pacific Railroad
Distributed Engineering & Architecture
Project Engineer



**

This email and any attachments may contain information that is confidential 
and/or privileged for the sole use of the intended recipient.  Any use, review, 
disclosure, copying, distribution or reliance by others, and any forwarding of 
this email or its contents, without the express permission of the sender is 
strictly prohibited by law.  If you are not the intended recipient, please 
contact the sender immediately, delete the e-mail and destroy all copies.
**

Re: [Gluster-users] upgrading from gluster-3.2.6 to gluster-3.4.2

2014-02-21 Thread Paul Simpson
I too would like to know about this.

I also tried this process on my 3.2.7 cluster and reported my findings here:
http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/

No reply as of yet...

Not wanting to sound negative, but I do find there's little support from
the Gluster sites/community on issues such as this, especially compared to
other projects I've seen. It's not very reassuring for a file system we're
supposed to trust with our precious data.

For this reason we are looking to migrate away from Gluster after using
it for 3+ years.  It's a shame - and I was a believer (I still think it's
a great idea) - but we can't afford to carry on groping around in the dark
(with sparse documentation, obscure error messages & a low-bandwidth
community) any more. :/

Still interested to hear if / how we can upgrade by hand (if need be).
Would certainly help me in the interim - might even change my mind (not
having seen 3.4 in action)..  :)




Re: [Gluster-users] Best Practices for different failure scenarios?

2014-02-21 Thread Vijay Bellur

On 02/20/2014 03:52 AM, BGM wrote:



On 19.02.2014, at 21:15, James  wrote:


On Wed, Feb 19, 2014 at 3:07 PM, Michael Peek  wrote:

Is there a best practices document somewhere for how to handle standard
problems that crop up?


Short answer, it sounds like you'd benefit from playing with a test
cluster... Would I be correct in guessing that you haven't setup a
gluster pool yet?
You might want to look at:
https://ttboj.wordpress.com/2014/01/08/automatically-deploying-glusterfs-with-puppet-gluster-vagrant/
This way you can try them out easily...
For some of those points... solve them with...


Sort of a crib notes for things like:

1) What do you do if you see that a drive is about to fail?

RAID6

or: zol, raidz
(open to critical comments)
or: brick remove && brick add && volume heal
(it's really just three commands, at least in my experience so far, touch wood)
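[A sketch of one common way that brick swap is done on 3.4-era releases for a replicated volume; replace-brick collapses the remove/add pair. All names are placeholders, and the exact syntax varies by version, so check the docs for yours first:]

```shell
# Placeholders throughout: "myvol" volume, bricks on server1.
# Swap the failing brick for a fresh one in a single step:
gluster volume replace-brick myvol \
    server1:/export/bad-brick server1:/export/new-brick commit force

# Then trigger a full self-heal so the new brick is repopulated:
gluster volume heal myvol full
```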
.
but Michael, I appreciate your _original_ question:
"Is there a best practice document?"
Nope, not that I am aware of.
.
It might be very helpful to have a wiki next to this mailing list,
where all the good experience, all the proved solutions for "situations"
that are brought up here, could be gathered in a more
permanent and straight way.


+1. It would be very useful to evolve an operations guide for GlusterFS.


.
To your questions I would add:
what's best practice in setting options for performance and/or integrity...
(yeah, well, for which use case under which conditions)
a mailinglist is very helpful for adhoc probs and questions,
but it would be nice to distill the knowledge into a permanent, searchable form.
.
sure anybody could set up a wiki, but...
it would need the acceptance and participation of an active group
to get best results.
so IMO the appropriate place would be somewhere close to gluster.org?
.


Would be happy to carry this in doc/ folder of glusterfs.git and 
collaborate on it if a lightweight documentation format like markdown or 
asciidoc is used for evolving this guide.


-Vijay


Re: [Gluster-users] upgrading from gluster-3.2.6 to gluster-3.4.2

2014-02-21 Thread Dmitry Kopelevich
I would like to follow up on my question regarding an upgrade from 3.2.6 
to 3.4.2.
Can anybody tell me whether I'm doing something completely wrong? Am I 
trying to skip too many versions of gluster in my upgrade? Is CentOS 5 
too old for this?


Thanks,

Dmitry

On 2/18/2014 2:51 PM, Dmitry Kopelevich wrote:
I am attempting to upgrade my GlusterFS from 3.2.6 to 3.4.2 using the 
instructions posted at 
http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3. 
These guidelines are for an upgrade to 3.3 but it is stated at 
http://vbellur.wordpress.com/2013/07/15/upgrading-to-glusterfs-3-4 
that they can also be used to upgrade to 3.4.0. So I was hoping that 
they would also work with an upgrade to 3.4.2.


I'm running CentOS 5 and installed the following rpms on the gluster 
servers:


glusterfs-libs-3.4.2-1.el5.x86_64.rpm
glusterfs-3.4.2-1.el5.x86_64.rpm
glusterfs-fuse-3.4.2-1.el5.x86_64.rpm
glusterfs-cli-3.4.2-1.el5.x86_64.rpm
glusterfs-server-3.4.2-1.el5.x86_64.rpm
glusterfs-rdma-3.4.2-1.el5.x86_64.rpm
glusterfs-geo-replication-3.4.2-1.el5.x86_64.rpm

According to the installation guidelines, installation from rpms 
should automatically copy the files from /etc/glusterd to 
/var/lib/glusterd. This didn't happen for me -- the directory 
/var/lib/glusterd contained only empty subdirectories. But the content 
of /etc/glusterd directory has moved to /etc/glusterd/glusterd.


So, I decided to manually copy files from /etc/glusterd/glusterd to 
/var/lib/glusterd and follow step 5 of the installation guidelines 
(which was supposed to be skipped when installing from rpms):


glusterd --xlator-option *.upgrade=on -N

This didn't work (error message: glusterd: No match)
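[One hedged guess about that error: "No match" is the classic csh/tcsh complaint when an unquoted glob matches no files, so the shell may be eating the `*` before glusterd ever sees the option. Quoting it sidesteps filename expansion in any shell:]

```shell
# Quote the glob so the shell passes "*.upgrade=on" through literally
# instead of trying (and failing) to expand it against the filesystem:
glusterd --xlator-option '*.upgrade=on' -N
```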

Then I tried specifying explicitly the name of my volume:

glusterd --xlator-option <volname>.upgrade=on -N

This led to the following messages in file
etc-glusterfs-glusterd.vol.log:


[2014-02-18 17:22:27.146449] I [glusterd.c:961:init] 0-management: 
Using /var/lib/glusterd as working directory
[2014-02-18 17:22:27.149097] I [socket.c:3480:socket_init] 
0-socket.management: SSL support is NOT enabled
[2014-02-18 17:22:27.149126] I [socket.c:3495:socket_init] 
0-socket.management: using system polling thread
[2014-02-18 17:22:29.282665] I 
[glusterd-store.c:1339:glusterd_restore_op_version] 0-glusterd: 
retrieved op-version: 1
[2014-02-18 17:22:29.283478] E 
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown 
key: brick-0
[2014-02-18 17:22:29.283513] E 
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown 
key: brick-1
[2014-02-18 17:22:29.283534] E 
[glusterd-store.c:1858:glusterd_store_retrieve_volume] 0-: Unknown 
key: brick-2

...
and so on for all other bricks.

After that, files nfs.log, glustershd.log, and 
etc-glusterfs-glusterd.vol.log get filled with a large number of 
warning messages and nothing else seems to happen. The following 
messages appear to be relevant:


- Files nfs.log, glustershd.log:

2014-02-18 15:58:01.889847] W [rdma.c:1079:gf_rdma_cm_event_handler] 
0-data-volume-client-2: cma event RDMA_CM_EVENT_ADDR_ERROR, error -2 
(me: peer:)


(the name of my volume is data-volume and its transport type is RDMA)

- File etc-glusterfs-glusterd.vol.log

[2014-02-18 17:22:33.322565] W [socket.c:514:__socket_rwv] 
0-management: readv failed (No data available)


Also, for some reason the time stamps in the log files are incorrect.

Any suggestions for fixing this would be greatly appreciated.

Thanks,

Dmitry
--
Dmitry Kopelevich
Associate Professor
Chemical Engineering Department
University of Florida
Gainesville, FL 32611

Phone:   (352)-392-4422
Fax: (352)-392-9513
E-mail:dkopelev...@che.ufl.edu

