Hi,
just a question ...
Would SAS disks be better in a situation with lots of seeks when using
GlusterFS?
2014-09-22 23:03 GMT+03:00 Jeff Darcy jda...@redhat.com:
The biggest issue that we are having is that we are talking about
-billions- of small (max 5 MB) files. Seek times are killing
Hi,
SSD has been considered but is not an option due to cost. SAS has
been considered but is not an option due to the relatively small sizes
of the drives. We are *rapidly* growing towards a PB of actual online
storage.
We are exploring raid controllers with onboard SSD cache which may help.
Hi,
SAS 7200 RPM disks are not that small at all (basically the same sizes as SATA).
If I remember right, the reason for switching to SAS here would be that SAS is
full duplex (you can read and write to them at the same time) while SATA is
half duplex (only a read or a write at any one moment).
Hi,
I followed the steps below to run the tests.
The link
http://gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework
does not have info for CentOS 7, so I tried to follow the same steps with
EPEL pointing to release 7.
Here are the steps for CentOS 7:
1. Install EPEL:
$
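(To flesh out step 1, a minimal sketch, assuming the stock epel-release
package is what was meant here; package names and URLs may differ:)

$ sudo yum install epel-release
# or, if epel-release is not in the default repos, install the RPM directly:
$ sudo rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
$ yum repolist | grep epel    # verify the repo shows up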
Hi,
Sorry, I had not checked out the gluster source for the version I am running
to run the test cases.
It is working fine.
Thanks,
Kiran.
On Tue, Sep 23, 2014 at 1:09 PM, Kiran Patil kirantpa...@gmail.com wrote:
Hi,
I followed the steps below to run the tests.
The link
Hi All,
GlusterFS 3.6.0beta1 RPMs for el5-7 (RHEL, CentOS, etc.) and Fedora
(19, 20, 21, 22) are available at download.gluster.org [1].
Please use these for GlusterFest; we welcome your suggestions, comments,
and feedback about this release.
[1]
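(For anyone testing: a minimal sketch of getting the beta onto an el7 box,
assuming the matching repo file from [1] has already been saved under
/etc/yum.repos.d; the exact repo file is not shown here:)

$ sudo yum install glusterfs-server
$ sudo systemctl start glusterd
$ glusterfs --version    # should report 3.6.0beta1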
Hi all,
please join the #gluster-meeting IRC channel on irc.freenode.net to
participate in the following topics:
* Roll call
* Status of last week's action items
* What happens after a bug has been marked Triaged?
* Add distinction between problem reports and enhancement requests
* Group Triage
*
On Tue, Sep 23, 2014 at 11:57:54AM +0200, Niels de Vos wrote:
Hi all,
please join the #gluster-meeting IRC channel on irc.freenode.net to
participate in the following topics:
* Roll call
* Status of last week's action items
* What happens after a bug has been marked Triaged?
* Add
We tried, but the process that hits 100% CPU is glusterfsd, so the impact
on Gluster is still there: that process sits at virtually 100% CPU.
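(In case it helps narrow this down: one way to see where glusterfsd spends
its time, sketched with a placeholder volume name vol1:)

$ top -H -p <pid>    # <pid> = the busy glusterfsd; shows its hottest threads
$ gluster volume profile vol1 start
$ gluster volume profile vol1 info    # per-brick FOP counts and latencies
$ gluster volume profile vol1 stop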
-Original Message-
From: James [mailto:purplei...@gmail.com]
Sent: 22 September 2014 13:25
To: Jocelyn Hotte
Cc:
Here are the steps to reproduce this issue (gluster version 3.5.2):
On one server lab1 (There is another server lab2 for replica 2):
[root@lab1 ~]# gluster volume set g1 worm on
volume set: success
[root@lab1 ~]# gluster volume stop g1
Stopping volume will make its data inaccessible. Do you
Yes, Roman is correct. Also, if you have lots of random IO you're better
off with many smaller SAS drives, because the more spindles you have, the
more random IO you can sustain. This is also why we went with SSD drives:
SAS drives weren't cutting it on the random IO
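(Rough illustrative numbers, assuming on the order of ~100 random IOPS per
7200 RPM spindle: twelve 1 TB drives give you roughly 12 x 100 = ~1200
random IOPS, while three 4 TB drives of the same total capacity manage only
about 300. Same terabytes, a quarter of the random IO.)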
SSD has been considered but is not an option due to cost. SAS has
been considered but is not an option due to the relatively small sizes
of the drives. We are *rapidly* growing towards a PB of actual online
storage.
We are exploring raid controllers with onboard SSD cache which may help.
Hi,
Is the log entry below in my brick logs anything to worry about?
I've seen a hell of a lot of them today.
Thanks
Alex
[2014-09-23 15:11:00.252041] W
[server-resolve.c:420:resolve_anonfd_simple] 0-server: inode for the
gfid (f7e985f2-381d-4fa7-9f7c-f70745f9d5d6) is not found. anonymous fd
creation
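(To gauge how frequent these are, a quick sketch assuming the default brick
log location:)

$ grep -c resolve_anonfd_simple /var/log/glusterfs/bricks/*.log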
I inherited a non-replicated gluster system based on antique hardware.
One of the brick filesystems is flaking out, and remounts read-only. I
repair it and remount it, but this is only postponing the inevitable.
How can I migrate files off a failing brick that intermittently turns
read-only? I
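(For reference, the usual approach is remove-brick with data migration; a
sketch, assuming a distribute-only volume and placeholder names vol1 and
oldserver:/bricks/b1, and that the brick stays readable while it runs:)

# gluster volume remove-brick vol1 oldserver:/bricks/b1 start
# gluster volume remove-brick vol1 oldserver:/bricks/b1 status
(repeat status until it reports completed, then:)
# gluster volume remove-brick vol1 oldserver:/bricks/b1 commit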
Cool, I've not gotten around to testing EL7 yet. :)
Would you have the time / interest to add CentOS 7 steps to the page?
+ Justin
On 23/09/2014, at 8:47 AM, Kiran Patil wrote:
Hi,
Sorry, I had not checked out the gluster source for the version I am running to
run the test cases.
It is
I was having trouble setting up geo-replication, and I finally figured out why.
Gsyncd is trying to use the wrong (but valid) name for the slave server, and
it’s resolving to an address it can’t reach. It does this even though I tried
to set up the geo-replication to a specific IP address, and
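(One workaround sketch, if the name gsyncd insists on cannot be changed: pin
it to the reachable address; slave-hostname and 192.0.2.10 are placeholders:)

# on the master node(s), add to /etc/hosts:
192.0.2.10   slave-hostname

$ getent hosts slave-hostname    # verify what the name now resolves to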
On 09/23/2014 08:56 PM, james.bellin...@icecube.wisc.edu wrote:
I inherited a non-replicated gluster system based on antique hardware.
One of the brick filesystems is flaking out, and remounts read-only. I
repair it and remount it, but this is only postponing the inevitable.
How can I migrate