Hi
I am having trouble with AFR. I have two servers set up to mirror. If I
shut down either server, copy a file into the client mount, and then
restart the server, I get a 0-size file on the newly started server's
backing store. Which I guess is to be expected.
But if I do an ls -lR I do not g
Adrian Revill wrote:
Hi
I am having trouble with AFR. I have two servers set up to mirror. If
I shut down either server, copy a file into the client mount, and then
restart the server, I get a 0-size file on the newly started
server's backing store. Which I guess is to be expected.
But if I
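For anyone hitting the same thing: in this generation of GlusterFS,
replicate (AFR) heals a file the first time a client touches it, so the
usual way to force a full sync is to walk the whole mount from a client.
A minimal sketch, assuming the volume is mounted at /mnt/gluster (the
path is illustrative):

    # stat every file through the client mount so AFR self-heals each one
    ls -lR /mnt/gluster > /dev/null
    # or with find, which copes better with unusual filenames
    find /mnt/gluster -print0 | xargs -0 stat > /dev/null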
I ran some benchmarks last week using 2.0.8. Single server with 8 Intel
e1000e NICs bonded with mode=balance-alb.
All worked fine and I got some good results using 8 clients, all Gigabit.
The benchmarks did 2 passes of IOzone in network mode using 1-8 threads
per client and 1-8 clients. Each c
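A run like this is typically driven with IOzone's cluster mode; a sketch,
assuming a clients.txt listing one client per line (hostname, working
directory, path to the iozone binary), with sizes picked purely for
illustration:

    # write/rewrite (-i 0) and read/reread (-i 1), 8 threads
    # spread across the machines named in clients.txt
    iozone -+m clients.txt -t 8 -s 1g -r 128k -i 0 -i 1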
Nick Birkett wrote:
Yesterday I updated to 3.0.0 (server and clients) and re-configured
the server and client vol files using glusterfs-volgen (renamed some
of the vol names).
Nick,
Thanks for testing. If you still have the hung processes, can you get
the statedump
for us (for both servers an
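For anyone following the thread: a statedump from a glusterfs process of
this vintage can be produced by signalling it; a sketch, assuming it
writes to the default /tmp location:

    # SIGUSR1 asks the process to dump its internal state
    # to /tmp/glusterdump.<pid>
    kill -USR1 $(pidof glusterfsd)   # server side; use pidof glusterfs for the client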
Hi Nick,
Thank you for using Gluster and sending us such a detailed description of the
problem you are seeing. We will try a run with exactly the same switches and
config as you describe and see if we can reproduce this in-house to make
debugging easier.
Regards,
Tejas.
----- Original Message -----
Hi,
I have some questions about the Gluster Storage Platform:
- Is it just a platform to manage the storage bricks and clients running
GlusterFS 3.0?
- Do you install the platform on a single node, or do you install it on
each server you want to manage?
Thx
Hi Anthony,
Thanks for your interest in Gluster Platform. The Platform has 3 components:
the Admin GUI, GlusterFS, and the underlying OS. So it's the same GlusterFS
filesystem at the core of the Platform, along with easy management and
installation, and a fixed underlying OS. You can install this on bare-met
Hi,
I am a new GlusterFS user. I set up a replicated folder on two computers.
When both are running, I can see that the folders on both computers are
synchronized with each other as I write files. However, when I
shut down one:
1) File operations, like the "ls" command, take time to finish on the
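The stall after a node dies is usually the surviving client waiting out
its dead-peer timeout before declaring the other server down. In volfiles
of this era that is tuned in the protocol/client section; a sketch (the
option name and default are worth verifying against your release; the
default is on the order of 42 seconds):

    volume remote1
      type protocol/client
      option transport-type tcp
      option remote-host server1      # illustrative hostname
      option remote-subvolume brick1
      option ping-timeout 10          # assumption: lower = faster failover,
                                      # but more false positives
    end-volume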
Thanks Vikas,
You were right; I commented out the stat-prefetch section and the sync
now works. Perhaps glusterfs-volgen should not put it in for raid 1.
So it looks like, to make a fully redundant system, we need to poll the
client mount points with ls -lR, at least at server startup.
For sc
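For context, volfiles of this kind typically come from an invocation like
the following (hostnames and export paths illustrative):

    glusterfs-volgen --name repstore --raid 1 server1:/export/brick server2:/export/brick

and the workaround above amounts to deleting the generated stat-prefetch
stanza from the client volfile, which looks roughly like this (the
subvolume name varies with the generated file):

    volume statprefetch
        type performance/stat-prefetch
        subvolumes iocache
    end-volume

with whatever referenced statprefetch repointed at its subvolume.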
Adrian Revill wrote:
Thanks Vikas,
You were right; I commented out the stat-prefetch section and the sync
now works. Perhaps glusterfs-volgen should not put it in for raid 1.
We'll review this and see if we can tweak stat-prefetch to allow
replicate syncs
to happen sooner.
So it looks lik
Hi.
I looked through the email archives, and this feature is mentioned in the
GlusterFS platform - any chance it will be present in the open-source
version of GlusterFS 3?
Regards.
On Tue, Dec 22, 2009 at 4:29 PM, Stas Oskin wrote:
> Hi.
>
> I'm looking to evaluate the GlusterFS platform once
I've successfully set up GlusterFS 3.0 with a single server. I brought
up a 2nd server, set up AFR, and have been working through the
mirroring process. I started an "ls -alR" to trigger a complete mirror
of all files between the two servers. After running for about 16 hours
I started getti
I have just tried a re-sync of many GB using ls -lR, and the ls -lR
hangs while the sync happens.
In the release notes for 3.0.0 it says:
In GlusterFS 2.0.x, if self-healing is required, e.g. when a failed
Replicate server recovered,
the first I/O command executed after recovery, such as
Hi Stas,
Good to have you back! Dynamic volumes and on-the-fly adding and removing
of servers is a feature planned for 3.1, which we have tentatively scheduled
for Q2 2010. Our codebase is all open source, including the platform, and the
FS in the platform is the same as the FS-only product.
Hello Yan,
Welcome to the Gluster community. I would suggest trying the 3.0 product for a
better replication experience. There is code in there that makes it faster.
Also, rebuilding of a replica site is an exceptional condition and does not
happen all the time. When it does happen, it can slow acces
Hi Josef,
A testing ISO is available here -
http://ftp.gluster.com/pub/gluster/gluster-platform/3.0/testing/
We plan to officially release it soon. Let us know how it goes.
Regards,
Tejas.
----- Original Message -----
From: "it-news (Josef Lahmer)"
To: gluster-users@gluster.org
Sent: Tuesday
Hi Larry,
What is the client and server configuration? Instead of killing the glusterfs
process, can you do "echo 3 > /proc/sys/vm/drop_caches" and check whether
memory usage comes down?
regards,
On Wed, Dec 23, 2009 at 9:01 PM, Larry Bates wrote:
> I've successfully set up GlusterFS 3.0 with a sing
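A quick before/after check along those lines, assuming the client process
is named glusterfs:

    # RSS before, drop kernel caches, RSS after
    ps -o rss= -p $(pidof glusterfs)
    echo 3 > /proc/sys/vm/drop_caches
    ps -o rss= -p $(pidof glusterfs)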
Good day.
We have been using glusterfs since version 2.0.1-1 and just recently
upgraded our storage
servers to the new version 3.0.0-1.
Before then we never noticed the problems we are experiencing now.
When trying to use the glusterfs share it seems to hang for a few
seconds every time dat
I restarted the mirror and once again the glusterfs client process is just
growing. Echoing the 3 to /proc/sys/vm/drop_caches seems to shrink the memory
footprint, but only by a small amount.
BTW - it is the client process that seems to grow infinitely. Note: I have one
machine that is acting as
Hello,
I plan to build a small compile farm.
What would be the best architecture and mode to use?
Initially, I was planning to have one server (which is also
a client) and three other machines which are only clients.
I am asking myself if having all machines acting both as
client and server woul
What operations are you doing on the mount point? How easy is it to reproduce
the memory leak?
On Thu, Dec 24, 2009 at 5:08 AM, Larry Bates wrote:
> I restarted the mirror and once again the glusterfs client process is
> just growing. Echoing the 3 to /proc/sys/vm/drop_caches seems to shrink the
> memo
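One way to answer both questions at once is to run a single operation type
in a loop against the mount while watching the client's resident size; a
sketch, with paths illustrative:

    # hammer the mount with one workload...
    while true; do cp /tmp/testfile /mnt/gluster/ && rm /mnt/gluster/testfile; done &
    # ...and watch whether the client's RSS keeps climbing
    watch -n 10 'ps -o rss= -p $(pidof glusterfs)'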
Hi Ken,
Do you experience the same issues with all the performance translators
(read-ahead, write-behind, io-cache, stat-prefetch, quick-read) removed, i.e.,
with only stripe in the configuration?
regards,
On Thu, Dec 24, 2009 at 3:06 AM, Ken wrote:
> Good day.
>
> We have been using glusterfs since v
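A stripped-down client volfile of the kind being suggested would contain
only the protocol/client volumes plus cluster/stripe; a sketch, with
hostnames and subvolume names illustrative:

    volume client1
        type protocol/client
        option transport-type tcp
        option remote-host server1
        option remote-subvolume brick1
    end-volume

    volume client2
        type protocol/client
        option transport-type tcp
        option remote-host server2
        option remote-subvolume brick1
    end-volume

    volume stripe0
        type cluster/stripe
        subvolumes client1 client2
    end-volume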
Hi Francois,
On Thu, Dec 24, 2009 at 5:22 AM, Francois Berenger wrote:
> Hello,
>
> I plan to build a small compile farm.
> What would be the best architecture and mode to use?
>
> Initially, I was planning to have one server (which is also
> a client) and three other machines which are only clien