Update: after a few hours the CPU usage seems to have dropped
to a small value. I did not change anything in the configuration
or unmount / stop anything, as I wanted to see whether this would persist
over a long period of time. Both the client and the self-mounted bricks
Check the client log(s).
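A minimal sketch of that check, assuming a standard FUSE mount (the log file name mirrors the mount point, so /var/log/glusterfs/mnt-gv0.log is an assumed example path, not the poster's actual setup):

```shell
# Assumed example: a client volume mounted at /mnt/gv0 logs to
# /var/log/glusterfs/mnt-gv0.log (the file name mirrors the mount point).
tail -n 50 /var/log/glusterfs/mnt-gv0.log

# Gluster log lines carry a single-letter level (E=error, W=warning)
# between bracketed fields; filter for recent problems:
grep -E ' [EW] ' /var/log/glusterfs/mnt-gv0.log | tail -n 20
```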
Michael Colonno wrote:
>Forgot to mention: on a client system (not a brick) the
>glusterfs process is consuming ~ 68% CPU continuously. This is a much
>less powerful desktop system so the CPU load can't be compared 1:1 with
>the systems comprising the bricks
On 02/01/13 12:46, Michael Colonno wrote:
Gluster gurus ~
I’ve deployed an 8-brick (2x replicate) Gluster 3.3.1 volume on CentOS
6.3 with tcp transport. I was able to build, start, mount, and use the
volume. On each system contributing a brick, however, my CPU usage
(glusterfsd) is hovering aro
On Fri, Feb 01, 2013 at 12:53:24PM -0800, Michael Colonno wrote:
> Forgot to mention: on a client system (not a brick) the glusterfs
> process is consuming ~ 68% CPU continuously. This is a much less powerful
> desktop system so the CPU load can’t be compared 1:1 with the systems
> comp
On 1/31/2013 3:23 PM, Shawn Heisey wrote:
> On 1/31/2013 12:36 PM, Kaleb Keithley wrote:
>> Not sure if you saw this in #gluster on IRC.
>>
>> The other work-around for F18 is to delete
>> /etc/swift/{account,container,object}-server.conf before starting UFO.
>>
>> With that my UFO set-up works as
Forgot to mention: on a client system (not a brick) the
glusterfs process is consuming ~ 68% CPU continuously. This is a much less
powerful desktop system so the CPU load can't be compared 1:1 with the
systems comprising the bricks but still very high. So the issue seems to
exist with b
Gluster gurus ~
I've deployed an 8-brick (2x replicate) Gluster 3.3.1 volume on
CentOS 6.3 with tcp transport. I was able to build, start, mount, and use
the volume. On each system contributing a brick, however, my CPU usage
(glusterfsd) is hovering around 20% (virtuall
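One way to quantify that load with standard Linux tools (nothing gluster-specific is assumed here beyond the daemon names):

```shell
# Snapshot per-process CPU for the gluster daemons (glusterfsd serves
# bricks; glusterfs is the FUSE client / self-mount).
top -b -n 1 | grep -E 'gluster(fsd|fs)'

# Per-thread breakdown for each brick daemon; one spinning thread points
# at a different problem than load spread evenly across io-threads.
for pid in $(pidof glusterfsd); do
    ps -L -p "$pid" -o pid,tid,pcpu,comm
done
```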
Hi, this list is for Gluster.org, which is a self-support community staffed
only by volunteers. If you need immediate help from Red Hat, I recommend you
call them.
Thanks,
JM
yongtaofu wrote:
It's midnight in China, about 2:00, and I can't sleep. I have a glusterfs 3.3
cluster in production with several hundred clients on it. The I/O load is
extremely high 24 hours a day, so file system crashes happen frequently in
the cluster, several times a year (I use xfs). When one
It's a problem with striped volumes in 3.3.1.
It does not appear in 3.3.0 and it is solved in the upcoming 3.4.
Best regards,
Samuel.
On 25 January 2013 14:41, wrote:
> Hi there,
> each time I copy (or dd or similar) a file to a striped replicated volume
> I get an error: the argument is not valid.
>
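A minimal reproduction sketch of the report above, assuming the striped replicated volume is mounted at /mnt/stripe (a placeholder path, not the poster's actual mount point):

```shell
# /mnt/stripe is a placeholder mount point for the striped replicated
# volume. On 3.3.1 this write reportedly fails with "Invalid argument";
# the same command succeeds on 3.3.0.
dd if=/dev/zero of=/mnt/stripe/testfile bs=1M count=16
```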
I suspect you hit the Ext4 bug.
-JM
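A quick way to check what filesystem backs the bricks (the brick path below is an assumed placeholder; this only shows the filesystem type, it does not confirm the bug itself):

```shell
# /data/brick1 is a placeholder for the actual brick directory.
df -T /data/brick1          # "Type" column shows ext4, xfs, zfs, ...
stat -f -c %T /data/brick1  # same information via statfs()
```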
On Fri, Feb 1, 2013 at 7:38 AM, Michael Colonno wrote:
> To close out this thread I rebuilt the entire volume with ZFS
> instead of ext4 and it now appears to work well. I have not loaded it up
> yet
> but the ls / ll hanging problem is gone and I ca