Hi All,
When reading some files we get this error:
md5sum: /path/to/file.xml: Structure needs cleaning
In /var/log/glusterfs/mnt-sharedfs.log we see these errors:
[2013-12-10 08:07:32.256910] W [client-rpc-fops.c:526:client3_3_stat_cbk] 1-testvolume-client-0: remote operation failed: No such
Am 10.12.2013 06:39:47, schrieb Vijay Bellur:
On 12/08/2013 07:06 PM, Nguyen Viet Cuong wrote:
Thanks for sharing.
Btw, I do believe that GlusterFS 3.2.x is much more stable than 3.4.x in production.
This is quite contrary to what we have seen in the community. From a development
I could reproduce this problem while my mount point is running in debug mode.
The logfile is attached.
gr.
Johan Huysmans
On 10-12-13 09:30, Johan Huysmans wrote:
Hi All,
When reading some files we get this error:
md5sum: /path/to/file.xml: Structure needs cleaning
in
Hi guys,
thanks for all these reports. Well, I think I'll change my RAID level
to 6, let the RAID controller build and rebuild all RAID members,
and replicate again with GlusterFS. I get more capacity but I need to
check if the write throughput
Hi,
It seems I have a related problem (just posted this on the mailing list).
Do you already have a solution for this problem?
gr.
Johan Huysmans
On 05-12-13 20:05, Bill Mair wrote:
Hi,
I'm trying to use glusterfs to mirror the ownCloud data area between
2 servers.
They are using Debian
Hi Ben,
For glusterfs would you recommend the enterprise-storage
or throughput-performance tuned profile?
Thanks,
Andrew
On Tue, Dec 10, 2013 at 6:31 AM, Ben Turner btur...@redhat.com wrote:
- Original Message -
From: Ben Turner btur...@redhat.com
To: Heiko Krämer
On 12/10/2013 02:26 PM, Bernhard Glomm wrote:
Am 10.12.2013 06:39:47, schrieb Vijay Bellur:
On 12/08/2013 07:06 PM, Nguyen Viet Cuong wrote:
Thanks for sharing.
Btw, I do believe that GlusterFS 3.2.x is much more stable than
3.4.x in
production.
This
On 12/09/2013 07:21 PM, Alexandru Coseru wrote:
[2013-12-09 13:20:52.066978] E [afr-self-heal-common.c:197:afr_sh_print_split_brain_log] 0-stor1-replicate-0: Unable to self-heal contents of '/' (possible split-brain). Please delete the file from all but the preferred subvolume.- Pending
Greetings,
Legend:
storage-gfs-3-prd - the first gluster.
storage-1-saas - new gluster where the first gluster had to be migrated.
storage-gfs-4-prd - the second gluster (which had to be migrated later).
I've started the replace-brick command:
'gluster volume replace-brick sa_bookshelf
Hi All,
It seems I can easily reproduce the problem.
* on node 1, create a file (touch, cat, ...)
* on node 2, take the md5sum of the file directly (md5sum /path/to/file)
* on node 1, move the file to another name (mv file file1)
* on node 2, take the md5sum of the direct path again (md5sum /path/to/file); this is still
Hi Vijay,
Thank you for your prompt and accurate reply, it is appreciated... I was
starting to worry about which version I should run! I've logged
https://bugzilla.redhat.com/show_bug.cgi?id=1039954 for this issue. It would
be good to sort out some of the documentation on this issue. If I
On Tuesday, December 10, 2013 12:49:25 PM Sharuzzaman Ahmat Raslan wrote:
Hi Harry,
Did you set up NTP on each of the nodes, and sync the time to one single
source?
Yes, this is done by ROCKS and all the nodes have identical time.
(2 admins have checked repeatedly)
Thanks.
On Tue, Dec
On Tuesday, December 10, 2013 10:42:28 AM Vijay Bellur wrote:
On 12/10/2013 10:14 AM, harry mangalam wrote:
Admittedly I should search the source, but I wonder if anyone knows this
offhand.
Background: of our 84 ROCKS (6.1)-provisioned compute nodes, 4 have
picked up an 'advanced
On 12/10/2013 10:57 AM, harry mangalam wrote:
On Tuesday, December 10, 2013 10:42:28 AM Vijay Bellur wrote:
On 12/10/2013 10:14 AM, harry mangalam wrote:
Admittedly I should search the source, but I wonder if anyone
knows this
offhand.
Background: of our 84 ROCKS (6.1)
- Original Message -
From: Andrew Lau and...@andrewklau.com
To: Ben Turner btur...@redhat.com
Cc: gluster-users@gluster.org List gluster-users@gluster.org
Sent: Tuesday, December 10, 2013 5:03:36 AM
Subject: Re: [Gluster-users] Gluster infrastructure question
Hi Ben,
For
Hello;
Apologies for being so direct, but I need some help.
We're a company of 30,000 people, mostly doing postal financial things in
Belgium. In fact, we are the largest company in Belgium.
We're under ever greater pressure on budgets (don't we all), and we're facing a
5-year-old VTL near end of