Hi Olivier,
On Tue, Feb 16, 2010 at 4:30 AM, Olivier Le Cam <olivier.le...@crdp.ac-versailles.fr> wrote:
> Hi -
>
> Is there a way to re-balance files over the whole glusterfs storage
> (v3.0.x) when a new distributed volume is added to the existing ones?
>
>
Here you go: scripts which should
Also, 3.0.x has improvements related to io-cache.
Are you observing high cpu usage on client side or server side? If it is on
client side, can you remove all performance translators and observe whether
you still face the same problem? If removing performance translators solves
the problem of high
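For reference, a stripped-down 3.0.x client volfile with no performance translators loaded might look like the sketch below; the hostname and volume names are made up:

```
# Hypothetical client volfile -- no performance translators loaded.
volume remote
  type protocol/client
  option transport-type tcp
  option remote-host server1        # assumed server hostname
  option remote-subvolume brick     # assumed exported volume name
end-volume

# No performance/write-behind, performance/io-cache, performance/read-ahead,
# performance/quick-read or performance/stat-prefetch sections: the mount
# uses "remote" directly, so any remaining high CPU usage cannot be blamed
# on the performance translators.
```

Mounting with this volfile and re-running the workload isolates whether the CPU usage comes from the performance translators or from elsewhere.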
Hi,
On 16.02.2010 01:58, Chad wrote:
> I am new to glusterfs and this list; please let me know if I have made
> any mistakes in posting this to the list.
> I am not sure what your standards are.
>
> I came across glusterfs last week; it was super easy to set up and test
> and is almost exactly
I am new to glusterfs and this list; please let me know if I have made any
mistakes in posting this to the list.
I am not sure what your standards are.
I came across glusterfs last week; it was super easy to set up and test and is
almost exactly what I want/need.
I set up 2 "glusterfs servers"
Hi -
Is there a way to re-balance files over the whole glusterfs storage
(v3.0.x) when a new distributed volume is added to the existing ones?
Thanks in anticipation for any pointers.
--
Olivier Le Cam
Département des Technologies de l'Information et de la Communication
Académie de Versailles
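For anyone finding this in the archives: 3.0.x has no built-in rebalance command, and the commonly suggested approach is a defrag-style script run against a client mount. Below is a rough sketch of the idea only, not the actual scripts; the helper name and the way files are rewritten are assumptions:

```shell
# rebalance_sketch DIR: stat every directory so the distribute translator
# creates its layout on newly added subvolumes, then rewrite each file so
# it lands on whichever brick its name now hashes to.
# Run against a glusterfs client mount, e.g.: rebalance_sketch /mnt/glusterfs
rebalance_sketch() {
  mount_point="$1"

  # Step 1: stat each directory to trigger distribute directory self-heal.
  find "$mount_point" -type d -exec ls -ld {} \; > /dev/null

  # Step 2: snapshot the file list first, then copy each file aside and
  # move it back; the re-created file follows the new distribute layout.
  list=$(mktemp)
  find "$mount_point" -type f > "$list"
  while read -r f; do
    cp -p "$f" "$f.defrag.$$" && mv "$f.defrag.$$" "$f"
  done < "$list"
  rm -f "$list"
}
```

On an ordinary directory tree this is a no-op apart from rewriting the files in place, which is exactly what makes the distribute translator reconsider their placement on a glusterfs mount.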
Hi John,
* replies inline *
On Tue, Feb 16, 2010 at 12:43 AM, John Madden wrote:
> I've made a few swings at using glusterfs for the php session store for a
> heavily-used web app (~6 million pages daily) and I've found time and again
> that cpu usage and odd load characteristics cause glusterfs
I've made a few swings at using glusterfs for the php session store for
a heavily-used web app (~6 million pages daily) and I've found time and
again that cpu usage and odd load characteristics cause glusterfs to be
entirely unsuitable for this use case at least given my configuration.
I posted
Just some experiments with 3.0.2. Following the Gluster doc instructions, the
source compiles fine to 32-bit binaries/libraries (on a 64-bit x86 server).
Performance issues appear when the server is 64-bit x86 osol-0906 with a ZFS
filesystem and a single gigabit e1000 NIC.
1 server + 1 Linux client works at wire speed (>100 Mbytes/s).
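If the 32-bit output is unintended: gcc on (Open)Solaris historically emits 32-bit objects by default even on 64-bit hardware, so a 64-bit build has to be requested explicitly. A sketch, assuming gcc and the usual autotools flow; paths and flags may need adjusting:

```
# Hypothetical build commands -- force a 64-bit build on OpenSolaris.
./configure CFLAGS="-m64" LDFLAGS="-m64"
make && make install

# Verify the result is a 64-bit ELF (install path is an assumption):
file /usr/local/sbin/glusterfsd
```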
Hi Casper,
please find my comments inline.
On Mon, Feb 15, 2010 at 3:38 PM, Casper Langemeijer wrote:
> Hi List!
>
> Raghavendra G, thanks for your reply.
>
> On Mon, 2010-02-15 at 11:11 +0400, Raghavendra G wrote:
> > Could I duplicate files to multiple data bricks in the cluster
> >
Hi List!
Raghavendra G, thanks for your reply.
On Mon, 2010-02-15 at 11:11 +0400, Raghavendra G wrote:
> Could I duplicate files to multiple data bricks in the cluster
> to
> provide a raid5-like setup? I very much want to be able to
> shut down a single machine in t
I do have an idea how it could be worked around.
On each of the 10G servers run a special client that connects like a
normal client, and has the normal configuration.
But this client exposes a server interface. This would mean the server
client would appear as a server with the whole file system
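For what it's worth, the 3.0.x volfile language can already express that shape: one process holding protocol/client subvolumes and re-exporting the combined result through protocol/server. A hypothetical sketch (hostnames and volume names invented):

```
# One glusterfs process: client towards the storage nodes,
# server towards everyone else.
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host storage1       # assumed storage node
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host storage2       # assumed storage node
  option remote-subvolume brick
end-volume

volume mirror
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume reexport
  type protocol/server
  option transport-type tcp
  option auth.addr.mirror.allow *
  subvolumes mirror
end-volume
```

Ordinary clients would then point their protocol/client at this process, seeing the whole file system through it.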
Adrian Revill wrote:
This is an edge case scenario, but one that I am worried about.
Say you have 2 storage nodes as a mirror on a 10G network, and you
write into a client mount also on the 10G network. Then data will be
replicated by the client via the 10G network to the 2 servers.
If one ser
Fredrik Widlund wrote:
How do I add a brick to a replicate setup with preexisting data? There is an
old article from almost 2 years back, and I don't think I will try that one out.
Do you want to increase the number of replicas, say from 2 to 3?
If so, you can add a new subvolume to your repli
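To make that concrete: going from 2 to 3 replicas in a 3.0.x volfile might look like the fragment below (volume and host names are hypothetical). After restarting with the new volfile, a recursive `ls -lR` on the mount triggers self-heal so the new subvolume gets populated:

```
volume client3                      # the newly added brick
  type protocol/client
  option transport-type tcp
  option remote-host server3        # assumed new server
  option remote-subvolume brick
end-volume

volume afr
  type cluster/replicate
  # was: subvolumes client1 client2
  subvolumes client1 client2 client3
end-volume
```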
Hi!
Yes, I've tried re-exporting using unfsd too, but the problem was the same.
What is even more interesting: when I try to upload an ISO image
to the storage using the ESXi interface, everything is fine (image size
>1GB), so there is no problem creating large files.
Quoting "Raghavendra G":
Hi Roma