Hi,
I have a block device at /dev/blkdev. Does Gluster have some kind of
function to mount a volume on this block device?
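For context, Gluster does not consume raw block devices directly: a brick is a directory on an already-mounted filesystem. A minimal sketch of the usual approach, assuming a single-server volume (the brick path, server name, and volume name are hypothetical):

  # Put a filesystem on the device and mount it to serve as a brick
  mkfs.xfs /dev/blkdev
  mkdir -p /export/brick1
  mount /dev/blkdev /export/brick1

  # Create and start a volume backed by that brick
  gluster volume create myvol server1:/export/brick1
  gluster volume start myvol

  # Clients then mount the volume, not the block device
  mount -t glusterfs server1:/myvol /mnt/gluster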
Kind regards
Samuel
I'm trying to integrate Gluster into a
microkernel OS to which FUSE has been ported. But to be able to use FUSE,
some changes have to be made in the call to fuse_main.
Kind regards
Hall Samuel
trying to look for information, but the only approach I've found is to
manually mount gluster on the nova node and use it via FUSE.
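A minimal sketch of that manual approach, assuming a volume vmvol on host gluster1 and the default nova instances path (all names hypothetical):

  # Mount the Gluster volume over FUSE where nova stores instance disks
  mount -t glusterfs gluster1:/vmvol /var/lib/nova/instances

  # Persist it across reboots
  echo 'gluster1:/vmvol /var/lib/nova/instances glusterfs defaults,_netdev 0 0' >> /etc/fstab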
Can anyone provide information on how to set up this environment?
Thanks in advance,
Samuel.
between these 2 versions?
Any hint or link to documentation would be highly appreciated.
Thank you in advance,
Samuel.
increase both the version and the
number of nodes in the current system.
Any answer or hint on where to find the above information is more than welcome.
Thanks in advance,
Samuel.
daemon copies the data?
As a side note, we've got around 150 files with similar issues. Is there
any limit on the maximum number of files the self-heal daemon can handle? Would
it be safe to manually copy the data from one brick to the other?
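A minimal sketch for enumerating the affected files on 3.3, assuming a volume named myvol (name hypothetical):

  # Files still pending self-heal
  gluster volume heal myvol info

  # Only the entries flagged as split brain
  gluster volume heal myvol info split-brain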
Thanks
on the clients by issuing (
http://linux-mm.org/Drop_Caches):
echo 3 > /proc/sys/vm/drop_caches
If the client still sees the file as split brain, you will have to unmount
and mount the gluster volume again.
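Putting both steps together, a minimal sketch (mount point, server, and volume names are hypothetical):

  # Flush dirty pages first, then drop page cache, dentries and inodes
  sync
  echo 3 > /proc/sys/vm/drop_caches

  # If the client still reports split brain, remount the volume
  umount /mnt/gluster
  mount -t glusterfs server1:/myvol /mnt/gluster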
Hope it helps,
Samuel.
On 11 April 2013 01:48, Robert Hajime Lanning wrote:
> On
Dear Patric,
That seemed to do the trick: we can now see the file and access it.
Is there any place where this could be documented?
Thanks a lot,
Samuel
On 8 March 2013 10:40, Patric Uebele wrote:
> Hi Samuel,
>
> can you try to drop caches on the clients:
>
> “echo 3
e read as split brain from clients but we got the
following logs:
"split brain detected during lookup of"
thanks in advance,
Samuel.
On 4 March 2013 14:37, samuel wrote:
> Hi folks,
>
> We have detected a split-brain on a 3.3.0 striped replicated 8-node
> cluster.
> We
"coherent" manner?
Thanks a lot in advance,
Samuel.
             total       used       free     shared    buffers     cached
Mem:       4047680    4012488      35192          0       1088    3713512
-/+ buffers/cache:     297888    3749792
Swap:      3905532      25244    3880288
could it be a memory issue?
Best regards,
Samuel.
It's a problem with striped volumes in 3.3.1.
It does not appear in 3.3.0 and it is solved in the upcoming 3.4.
Best regards,
Samuel.
On 25 January 2013 14:41, wrote:
> Hi there,
> each time I copy (or dd or similar) a file to a striped replicated volume
> I get an error: the ar
Hi all,
Besides our negative experience with this topic, there's this post:
http://community.gluster.org/q/how-i-can-troubleshoot-rdma-performance-issues-in-3-3-0/
So it seems it's expected that version 3.3 does not work with the rdma transport.
Best regards,
Samuel.
On 30 December 2012 14:
Done.
https://bugzilla.redhat.com/show_bug.cgi?id=888174
While testing the system, we found that 3.3.0 enables striped-replicated
volumes and seems to offer the "right" read behaviour in some tests.
Thanks in advance, and please contact me if I can offer further help.
Best regards,
S
 99.94     551292.41 us      10.00 us   1996709.00 us            361    FINODELK
Could anyone provide some information on how to debug this problem? Currently
the volume is not usable due to the horrible delay.
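For reference, per-FOP latency tables like the one above come from the volume profiler; a minimal sketch, assuming a volume named myvol (name hypothetical):

  # Start collecting per-operation latency statistics
  gluster volume profile myvol start

  # Dump the accumulated statistics (tables like the one above)
  gluster volume profile myvol info

  # Stop profiling once done
  gluster volume profile myvol stop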
Thank you very much in advance,
Samuel.
appears in the
gluster logs:
tcp connect to failed (Connection refused)
My question is whether, as I've read, the rdma transport is not available in
3.3 and I have to use 3.2.7. I've also tried 3.4.rc1, with the same problems
as 3.3.
Thanks in advance,
Samuel.
guess using IP as
transport will enable bonding... would that be a workaround?
Thank you for the provided answers,
Samuel.
On 21 September 2012 12:41, Fernando Frediani (Qube) <
fernando.fredi...@qubenet.net> wrote:
> Well, it actually says it is a limitation of the Infiniband driver so
about this subject?
Thanks in advance!
Samuel.
improve performance? Any other options?
Thanks in advance for any hint,
Samuel.
On 19 July 2012 08:44, samuel wrote:
> These are the parameters that are set:
>
> 59: volume nfs-server
> 60: type nfs/server
> 61: option nfs.dynamic-volumes on
> 62: option
PU but it's not the only scenario (deleting a non-empty directory) that
causes the degradation. Sometimes it has happened without any concrete error
in the log files. I'll try to run more tests and offer more debug
information.
Thanks for your answer so far,
Samuel.
On 18 July 2012 21:54, An
d NFS performance? Are there any NFS
parameters to play with that can mitigate this degradation (standard R/W
values drop to a quarter of the usual values)?
Thanks in advance for any help,
Samuel.
her node (delete the file and the hard link) and
launch self-healing, but the file is still not accessible.
Is there any guide or procedure for handling split brains on 3.3?
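A minimal sketch of the manual procedure hinted at above, assuming a volume myvol and a brick at /export/brick1 (all names hypothetical; the .glusterfs hard-link path is derived from the file's gfid xattr):

  # On the brick holding the bad copy: remove the file and its gfid hard link
  rm /export/brick1/path/to/file
  rm /export/brick1/.glusterfs/<first-two-gfid-chars>/<next-two>/<full-gfid>

  # Then trigger self-heal (3.3 commands; "full" crawls the whole volume)
  gluster volume heal myvol
  gluster volume heal myvol full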
Thanks in advance,
Samuel.
by degraded for
the activation of the NFS compatibility?
Thank you in advance.
Samuel.
s issue? Is my guess
anywhere close to reality? Is there any workaround besides using nodes
with a single brick?
Thank you in advance for any help,
Samuel.
I'll read through the provided links.
Thanks very much indeed, Dan!
Samuel.
On 5 March 2012 13:29, Dan Bretherton wrote:
>
> On 03/05/2012 09:39 AM,
> gluster-users-request@gluster.org wrote:
>
>> Date: Mon, 5 Mar 2012 10:29:22 +0100
>> From: samuel
>> Sub
I read somewhere that it was highly advisable to use bricks of the
same size, but I cannot locate where I read that statement. Is it true?
In case it depends on the version, I'm mostly using version 3.2.4.
Thanks in advance,
Samuel.
lose:5668 : cannot close file: Bad file descriptor
Is there any configuration option to prevent this problem? Are there any
enhancements in gluster 3.3 that can prevent the "split brain" issue
from happening again?
Thank you very much in advance for any link or information,
Samuel.
I don't know from which version on, but if you use the native client to
mount the volumes, the IP only needs to be active at mount
time. After that, the native client will transparently handle node
failures.
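A minimal sketch of that behaviour, assuming servers server1/server2 and a volume myvol (names hypothetical):

  # server1 only needs to be reachable while the volume file is fetched;
  # after that the client talks to all bricks directly
  mount -t glusterfs server1:/myvol /mnt/gluster

  # The mount helper can also take a fallback for that initial fetch
  # (backupvolfile-server is available in recent 3.x mount helpers)
  mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/gluster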
Best regards,
Samuel.
On 18 July 2011 13:14, Marcel
all files
compared to NFS but, in theory, in the rest of the scenarios gluster offers better
performance.
Can anyone point me to some documentation on how I can improve gluster's
behaviour? Or any suggestion or idea to improve the storage system?
Thank you very much in advance,
Samuel.
Does anyone know what that means?
Regards.
Samuel
Hi there,
I just want to add that we have exactly the same problem, with many, many
files on our infrastructure.
When I try to delete a file, DHT returns "Invalid argument".
And in the log file:
[2011-01-04 15:20:43.641438] I [dht-common.c:369:dht_revalidate_cbk]
dns-dht: subvolume dns-replicate
Hi all,
I have just migrated my old gluster partition to a fresh one with 4 nodes
with:
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
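For reference, a layout like this ("2 x 2 = 4") is what a replica-2 create over four bricks produces; a minimal sketch (host and brick names hypothetical):

  # Adjacent brick pairs replicate; the two pairs are distributed
  gluster volume create myvol replica 2 \
      node1:/export/brick1 node2:/export/brick1 \
      node3:/export/brick1 node4:/export/brick1
  gluster volume start myvol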
It solves my problems with latency and disk errors (like input/output errors
or file descriptors in a bad state), but I have just m
I have the same errors sometimes.
I attach my entire client and server log files.
-----Original Message-----
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On behalf of Raghavendra G
Sent: Monday, 6 December 2010 08:55
To: Matt Keating
Cc: gluster-users@glust
-timeout: 10
performance.cache-size: 4096MB
What is the log file or the command to view this diagnostic?
Regards.
Samuel Hassine
Hello there,
I have Gluster 3.1.1 and just 2 replicated nodes for one big partition. I
have many "operation not permitted" errors like:
[2010-12-06 12:00:54.637585] W [fuse-bridge.c:648:fuse_setattr_cbk]
glusterfs-fuse: 12221497: SETATTR()
/com/olympe-network/var/lib/mysql/13052_surfyport/
(null)/41872720 (security.capability)
(fuse_loc_fill() failed)
[2010-12-03 16:33:58.596328] W [fuse-bridge.c:2506:fuse_getxattr]
glusterfs-fuse: 252715: GETXATTR (null)/41872720 (security.capability)
(fuse_loc_fill() failed)
What does it mean?
Thanks for your answers.
Regards.
Samuel
users] Abnormal Gluster shutdown
Samuel -
I can't reproduce this issue locally, can you send me operating system
and hardware details for both the Gluster servers and the client?
Thanks,
Craig
-->
Craig Carl
Senior Systems Engineer
Gluster
On 12/02/2010 05:59 AM, Samuel Hassine wr
Hi all,
A GlusterFS partition automatically shuts down when unmounting a bind mount
point with the "-f" option (without it, it works).
How to reproduce:
mounted Gluster partition on /gluster (any config):
df: localhost:/gluster  4.5T  100G  4.4T   3% /gluster
mount: localhost:/gluster
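A fuller sketch of the reproduction, assuming a hypothetical bind-mount target:

  # Gluster volume FUSE-mounted at /gluster
  mount -t glusterfs localhost:/gluster /gluster

  # Bind-mount it somewhere else
  mkdir -p /mnt/bind
  mount --bind /gluster /mnt/bind

  # Forced unmount of the bind mount takes the Gluster mount down;
  # a plain "umount /mnt/bind" does not
  umount -f /mnt/bind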
Hi all,
Our service using GlusterFS has been in production for one week and we are
handling huge traffic. Last night, one of the Gluster clients (on a
physical node with a lot of virtual engines) crashed. Can you give me
more information about the log of the crash?
Here is the log:
pending fram
Hello all,
I have just upgraded from Gluster 3.0 to Gluster 3.1 with a simple
configuration: 2 nodes with replication between them.
I now have many problems during replication and file lookups, like:
on
/com/**/speedcoolandfun/administrator/templates/khepri/images/toolbar/icon-32-upload.pn
Hello there,
I have just upgraded a GlusterFS filesystem in production after one week
of tests in an independent environment. First of all, I want to
congratulate the Gluster team for their work and the new filesystem
capabilities; it is just awesome :)
But I am also very disappointed in the new interna
Hello all,
I've been using gluster for 2 years and I just tested the latest
version (3.1) on a new pool with the .deb. (I'm used to compiling and configuring by hand.)
I have just done:
wget GLUSTER.deb
dpkg -i GLUSTER.deb
/etc/init.d/glusterd start
And I have the following error:
Starting glusterd servi
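When glusterd fails to start like this, its own log is the first place to look; a minimal sketch (the path below is the usual default, but it may differ per packaging):

  # glusterd writes startup errors here by default
  tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log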