Did a major update yesterday to 3.5.2 on all servers, and I'm happy to
report that it went smoothly and everything seems to be working well. I
also updated ZOL to 0.6.3 running on linux-3.14.16, and the net result of
all the updates is a definite improvement in the speed of ls for the FUSE
client, good
On Mon, Aug 11, 2014 at 4:27 PM, Juan José Pavlik Salles
wrote:
> Hi James, that post was my inspiration actually hahaha.
lol, oh cool. Check out some of the new articles, they are more fun :)
> I just re-checked
> the link and you are right, there's an optional fixed internal bay for
> drives so
On Mon, Aug 11, 2014 at 3:03 PM, Juan José Pavlik Salles
wrote:
> Hi Guys, we are about to get two of these
> http://www.supermicro.com/products/system/4U/6047/SSG-6047R-E1R36N.cfm and
> the seller said that we have to use one of the 36 drives as the OS drive. I'd
> like to avoid using one of the swap
I'm testing the process of adding bricks. I have a replica volume
with two bricks. I mounted this volume on a client, and started
writing data. I then added two more bricks to the volume.
This caused the client to complain pretty loudly.
Here's
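For context, the brick-add sequence being described is roughly the following (a sketch only; the volume name `testvol` and the server/brick paths are placeholders, not taken from the original message):

```shell
# Existing 1x2 replica volume; add a second replica pair,
# turning it into a 2x2 distributed-replicate volume.
gluster volume add-brick testvol replica 2 \
    server3:/bricks/brick1 server4:/bricks/brick1

# Redistribute existing data onto the new bricks.
gluster volume rebalance testvol start
gluster volume rebalance testvol status
```

Running the rebalance while clients are actively writing is exactly the window where complaints tend to show up in the client logs, so the logs from that period are the interesting part.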
- Original Message -
> Nightly builds from release-3.6 are now available at [2]. Testing
> feedback for these nightly builds would be very welcome!
> [2] http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs-3.6/
This seems to have RPMs, but no nightly tarball. Can we a
Red Hat Bugzilla – Bug 1128820 Submitted
https://bugzilla.redhat.com/show_bug.cgi?id=1128820
On Wed, Aug 6, 2014 at 9:27 PM, Harshavardhana
wrote:
> Would you mind opening up a bug and providing the glusterfs server logs -
> also reproduce it while grabbing a tcpdump on the server?
>
> Thanks
>
> On Wed,
The proper way to engineer your system would be to identify the performance of
your most expensive processes and design a system to allow those to perform in
a way that is in line with your expectations.
If you're properly engineering a system, you should know what your performance
expectation
- Original Message -
> hi Ray,
> Reads are served from the bricks which respond the fastest at the
> moment. They are not load-balanced.
Maybe a good feature for 3.7? :)
+ Justin
--
GlusterFS - http://www.gluster.org
An open source, distributed file system scaling to several
petabytes.
Hi,
I guess some good advice is "premature optimization is the root of all
evil". Use the Gluster defaults for your replica volumes; then, when
you inevitably have performance issues, identify bottlenecks logically
and iteratively:
- Raw RAID read/write speeds
- Raw network read/
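The first two checks above can be done with standard tools before blaming Gluster itself (a sketch; the brick path `/bricks/brick1` and the peer hostname `server1` are placeholders):

```shell
# Raw sequential write speed of the brick filesystem (1 GiB test file).
# oflag=direct bypasses the page cache so you measure the disk, not RAM.
dd if=/dev/zero of=/bricks/brick1/ddtest bs=1M count=1024 oflag=direct

# Raw sequential read speed; drop caches first (needs root).
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/bricks/brick1/ddtest of=/dev/null bs=1M iflag=direct
rm /bricks/brick1/ddtest

# Raw network throughput between two peers
# (run "iperf -s" on server1 first, then on this node:)
iperf -c server1 -t 10
```

If the raw numbers are already below your expectations, no amount of Gluster tuning will get them back.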