I do not have such experience, but if it's business-critical and the devices
are ready to install, register at http://www.gluster.com/ and get 30 days of
free support from the GlusterFS guys, and go on.
-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.
Hello,
I know that GlusterFS does not care about the file systems underlying the
nodes. GlusterFS's power is collecting all nodes under one namespace and
distributing files across all nodes.
That means the file limit depends on the file system you use and the number of nodes.
For example, it should be:
if you use ext3 a
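The arithmetic behind that claim can be sketched as follows; the inode count used here is an illustrative placeholder, not an authoritative ext3 limit:

```python
# Rough capacity estimate for a pure-distribute (DHT) setup:
# each brick's file count is capped by its local filesystem's inode
# count, and DHT spreads files across bricks, so the totals roughly add up.

def cluster_file_capacity(inodes_per_brick, num_bricks):
    """Upper bound on file count across a distribute-only cluster."""
    return inodes_per_brick * num_bricks

# Example: 4 bricks, each formatted with ~1.9M inodes (placeholder value).
print(cluster_file_capacity(1_900_000, 4))  # -> 7600000
```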
ry structure in our application that we rely on GlusterFS to store.
Is there a way of copying files behind GlusterFS' back and have GlusterFS pick
up on the change with a self heal?
It seems that a system designed for scalability should tolerate eh... being
scaled...?
Regards
Roland
Regards
Roland
2009/12/6 Vahriç Muhtaryan
One idea: the docs said that you should create a new directory. Could you create a
new directory after adding another server and copy the files from the old directory to
the new one? Then everything will be distributed.
I know that GlusterFS does not have a reallocation mechanism to redistribute data to a
newly added brick.
Regards
Vahric
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
Hello All,
I really wonder when version 2.1 will be released. I would like to test the
built-in NFS and CIFS options. Does anybody know if there is any RC to test until
it is released?
Regards
Vahric
Any info ?
Regards
Vahric
Hello,
I have the same question in my mind about iSCSI, but I believe we can
solve the problem by installing http://iscsitarget.sourceforge.net/ on the
client side and sharing the GlusterFS area as LUN storage, isn't it?
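As a sketch of that idea with the IET target linked above: export a file that lives on the GlusterFS mount as a fileio-backed LUN. The target name and path below are made-up placeholders, and the exact syntax should be checked against the IET documentation:

```
# /etc/ietd.conf (illustrative fragment)
Target iqn.2009-12.example.com:gluster.lun0
    # Back the LUN with a file stored on the GlusterFS mount
    Lun 0 Path=/mnt/glusterfs/lun0.img,Type=fileio
```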
-Original Message-
From: gluster-users-boun...@gluster.org
[mailto
To understand the issue better, did you try 4 servers with DHT only, 2 servers
with DHT only, or two servers with replication only, to find out the real problem?
Maybe replication or DHT could have a bug.
-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On
Hello All,
Is there any way to redistribute already-distributed data when a new
brick is added to the DHT distributor? I mean, if a new server is added and DHT
is in use, I would like to balance capacity and performance by
redistributing data to the newly added server.
Regards
Vahric
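Why files do not move on their own can be illustrated with a toy hash-placement model. This is not Gluster's actual hash function; it only shows that changing the brick count changes where most names map, which is why an explicit rebalance step is needed:

```python
import zlib

def brick_for(name, num_bricks):
    """Toy DHT placement: a 32-bit hash of the file name, modulo brick count."""
    return zlib.crc32(name.encode()) % num_bricks

names = [f"file{i}.dat" for i in range(1000)]
before = {n: brick_for(n, 3) for n in names}
after = {n: brick_for(n, 4) for n in names}  # one brick added
moved = sum(1 for n in names if before[n] != after[n])
print(f"{moved} of {len(names)} files map to a different brick")
```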
ow and I wanted to be
> sure. I'll keep you posted about the result...
>
> On Sunday, 31 May 2009 at 15:15 +0300, Vahriç Muhtaryan wrote:
> > I'm new to glusterfs but looks like your config file is okay.
> > Why don't you execute your conf and test it ;) g
-Original Message-
From: Shehjar Tikoo [mailto:shehj...@gluster.com]
Sent: Monday, June 01, 2009 7:30 AM
To: Vahriç Muhtaryan
Cc: 'Alpha Electronics'; gluster-users@gluster.org
Subject: Re: [Gluster-users] Could be the bug of Glusterfs? The file system
is unstable and hang
Vahriç Muhta
Thank you very much
I'm waiting for August, thanks again
Vahric
-Original Message-
From: Shehjar Tikoo [mailto:shehj...@gluster.com]
Sent: Monday, June 01, 2009 7:35 AM
To: Vahriç Muhtaryan
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] CIFS GlusterFS 2.2 FUSE and WI
I'm new to GlusterFS, but it looks like your config file is okay.
Why don't you execute your conf and test it? ;) Go ahead.
If something is wrong, glusterfs or glusterfsd will kick you ;)
Please keep me informed whether it works or not! I can simulate the same
environment on my servers.
Regards
Vahric
-Original Me
Hello,
Sorry, I would like to understand all the components correctly and have a
question about DHT.
Instead of looking up the related file on all servers, you are starting to use a
32-bit hash for each file to keep track of where it is stored.
1) Where is this hash stored? On the servers or on the clients? I
Hello,
I'm waiting for CIFS support because of our environment and have a few
questions.
· Is there any deadline for version 2.2?
· I wonder, after CIFS support is available, how will we connect to
GlusterFS from our Windows servers? Is FUSE for Windows ready? Because I am
afrai
Hello,
I installed the new version like you and am running tests to see whether things
work or not. We have the same configuration, but I got a different error: I couldn't
create a directory or file, it was giving "Invalid Argument", and I saw that
one of the servers gave an error like the one below. Still testing.
Hello,
I will install two servers, which will be striped, and one client installation
for a test.
At http://europe.gluster.org/glusterfs/2.0/LATEST/CentOS/ I saw that some RPMs are
there; I'm not sure which RPMs should be installed or not,
for example
on servers
glusterfs-common-2.0.1
Thank you very much !
-Original Message-
From: Liam Slusser [mailto:lslus...@gmail.com]
Sent: Monday, May 25, 2009 5:44 PM
To: Vahriç Muhtaryan
Cc:
Subject: Re: [Gluster-users] raid5 or raid6 level cluster
Currently no, but it's in the roadmap for a future release.
ls
On M
Hello,
Is there any way to create a RAID 6 or RAID 5 level GlusterFS installation?
From the docs I understood that I can do a RAID 1 based GlusterFS installation or
RAID 0 (striping data to all servers) and a RAID 10 based solution, but the RAID 10
based solution is not cost-effective because it needs too much
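The cost argument can be made concrete with a quick usable-capacity comparison across the classic redundancy schemes; the brick count and size below are placeholder numbers:

```python
def usable_capacity(num_bricks, brick_tb, scheme):
    """Usable capacity in TB for some classic redundancy schemes."""
    if scheme == "raid0":    # pure striping/distribute: no redundancy
        return num_bricks * brick_tb
    if scheme == "raid10":   # every brick mirrored once: half the raw space
        return num_bricks * brick_tb / 2
    if scheme == "raid5":    # one brick's worth of parity
        return (num_bricks - 1) * brick_tb
    if scheme == "raid6":    # two bricks' worth of parity
        return (num_bricks - 2) * brick_tb
    raise ValueError(scheme)

# Example: 6 bricks of 1 TB each.
for scheme in ("raid0", "raid10", "raid5", "raid6"):
    print(scheme, usable_capacity(6, 1.0, scheme), "TB")
```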
Thank you for your all answers, all of them are helpful
-Original Message-
From: Shehjar Tikoo [mailto:shehj...@gluster.com]
Sent: Friday, May 22, 2009 6:38 PM
To: Vahriç Muhtaryan
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] About FileSystem
Vahriç Muhtaryan wrote
Hello To All,
I'm interested in GlusterFS. There is something I do not understand. The docs said
that GlusterFS does not need fsck, but I'm not sure.
If I understood correctly, I'm sharing the related directories of the related servers,
for example:
Server 1: /home/disk_space_1 → which is limited with server