Re: [Gluster-users] volume sizes

2009-12-29 Thread Vahriç Muhtaryan
I do not have such experience, but if it is business-critical and the devices
are ready to install, register at http://www.gluster.com/ and get 30 days of
free support from the GlusterFS team.

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Anthony Goddard
Sent: Tuesday, December 29, 2009 11:55 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] volume sizes

First post!
We're looking at setting up six 24-bay storage servers (36TB of JBOD storage
per node) and running GlusterFS over this cluster.
We have RAID cards in these boxes and are trying to decide the best size for
each volume. For example, if we present the OSes (and Gluster) with six 36TB
volumes, I imagine rebuilding one node would take a long time, and there may
be other performance implications. On the other hand, if we present Gluster /
the OSes with six 6TB volumes per node, we might have more trouble managing
the larger number of volumes.

My gut tells me a lot of small (if you can call 6TB small) volumes will be
lower risk and offer faster rebuilds from a failure, though I don't know
what the pros and cons of these two approaches might be.

Any advice would be much appreciated!


Cheers,
Anthony
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



Re: [Gluster-users] What about maximum number of Folder of GlusterFS?

2009-12-29 Thread Vahriç Muhtaryan
Hello,

I know that GlusterFS does not care which file system underlies each node.
GlusterFS's strength is that it collects all nodes under one namespace and
distributes files across them.
That means the folder limit depends on the file system you use and the number
of nodes.

Example:

if you use ext3 and have 5 nodes, you can expect roughly 32,000 * 5 folders;

or, if it is ext4, roughly 64,000 * 5 folders.
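The estimate above can be sketched as follows (a back-of-the-envelope check
only; the node count is an assumption, and 31,998 is ext3's 32,000-link cap
per directory minus the `.` and `..` entries):

```shell
# Rough aggregate-folder estimate: per-directory subdirectory limit of the
# backend filesystem multiplied by the number of DHT nodes. Treat it as an
# upper bound; DHT creates directories on every brick, so the real limit
# for one parent directory may not scale this way.
EXT3_SUBDIR_LIMIT=31998   # ext3 allows 32000 links per dir, minus '.' and '..'
NODES=5
total=$((EXT3_SUBDIR_LIMIT * NODES))
echo "estimated max folders: $total"
```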

Regards
Vahric

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of lesonus
Sent: Wednesday, December 30, 2009 3:40 AM
To: gluster-users@gluster.org
Subject: [Gluster-users] What about maximum number of Folder of GlusterFS?

I know that EXT3 has a maximum of about 32,000 subdirectories per directory,
and EXT4 about 64,000.
What is the maximum number of folders in GlusterFS?
Thanks in advance!
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



Re: [Gluster-users] Adding bricks to DHT

2009-12-07 Thread Vahriç Muhtaryan
Rolan, 

I found something from faqs , pls check it 
http://gluster.com/community/documentation/index.php/GlusterFS_Technical_FAQ#How_do_I_add_a_new_node_to_an_already_running_cluster_of_GlusterFS

Regards
Vahric

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Roland Rabben
Sent: Monday, December 07, 2009 10:14 AM
To: Deyan Chepishev
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Adding bricks to DHT

Can someone confirm whether new files will be stored on the new bricks with this
option enabled, even if I don't create new directories?

If I understand the documentation correctly, it only helps with reading files.
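For reference, the option from the linked page goes on the distribute volume in
the client volfile; a sketch with hypothetical volume names:

```
volume dist
  type cluster/distribute
  option lookup-unhashed yes   # also look on subvolumes the hash doesn't point to
  subvolumes brick1 brick2 brick3 new-brick
end-volume
```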

Thanks,
Roland

2009/12/6 Deyan Chepishev 

> Hello Roland,
>
> I think you should consider the DHT option: lookup-unhashed
>
>
> http://gluster.com/community/documentation/index.php/Translators/clust
> er/distribute
>
>
> Please let us know if this does the job
>
> Regards,
> Deian
>
> Roland Rabben wrote:
>
>> My problem is that I can't change directory names without a lot of hassle.
>> We have a hashed directory structure in our application that we rely 
>> on GlusterFS to store.
>>
>> Is there a way of copying files behind GlusterFS' back and have 
>> GlusterFS pick up on the change with a self heal?
>>
>> It seems that a system designed for scalability should tolerate eh...
>> being
>> scaled...?
>>
>> Regards
>> Roland
>>
>> 2009/12/6 Vahriç Muhtaryan 
>>
>>
>>
>>> One idea , docs said that you should create new directory. Could you 
>>> create a new directory after add another server and copy files from 
>>> old directory to new one, then everything will be distributed.
>>>
>>> I know that glusterfs do not have reallocation mechanism for 
>>> distribute data to new added bricks for achive performance and capcity 
>>> problem.
>>>
>>>
>>> *Now the question is what about adding a new server?
>>>
>>> Currently hash works based on directory level distribution. i.e, a 
>>> given file's parent directory will have information of how the hash 
>>> numbers are mapped to subvolumes. So, adding new node doesn't 
>>> disturb any current setup as the files/dirs present already have its 
>>> information preserved.
>>> Whatever
>>> new directory gets created, will start considering new volume for 
>>> scheduling files.
>>>
>>> Regards
>>> Vahric
>>>
>>> -Original Message-
>>> From: gluster-users-boun...@gluster.org [mailto:
>>> gluster-users-boun...@gluster.org] On Behalf Of Roland Rabben
>>> Sent: Sunday, December 06, 2009 6:29 PM
>>> To: gluster-users@gluster.org
>>> Subject: [Gluster-users] Adding bricks to DHT
>>>
>>> Hi
>>> I have a GlusterFS DHT system that I need to expand in a few days.
>>> Reading
>>> up on the documentation it seems to me that adding bricks won't 
>>> solve our problem because of how the hash works.
>>> I have a fairly static folder structure, but the number of files are 
>>> growing fast. From what I understand, adding bricks will not allow 
>>> new files to be stored on the new bricks unless they are stored in 
>>> new directories.
>>>
>>> So my questions are:
>>> 1. How can I make sure the new bricks are used?
>>> 2. Is there a way to "rebalance" content over the new and old bricks?
>>>
>>> I can't tolerate much downtime on my FS.
>>>
>>> I am using GlusterFS 2.06.
>>>
>>> Regards
>>> Roland Rabben
>>>
>>>
>>>
>>>
>
>


--
Roland Rabben
Founder & CEO Jotta AS
Cell: +47 90 85 85 39
Phone: +47 21 04 29 00
Email: rol...@jotta.no

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Adding bricks to DHT

2009-12-06 Thread Vahriç Muhtaryan
I guess there are many people on the list with more experience than me, but as
far as I have seen, GlusterFS does not have such functionality. It is scalable,
of course, in the sense that you can install 100 servers at the beginning.

In my view you should consider two things: first, use Gluster to create a
parallel storage cluster; second, use server cases that can take 12, 16, or 24
disk slots with hardware RAID.

Because Gluster is file-system independent, you can use XFS and grow the
partition as you want, or use LVM to increase capacity.

I hope the developers read this and add the functionality in a future version,
because if we are talking about scalable clustered storage, it should be
expandable at any time, and it needs a backend reallocation mechanism to keep
node load equal.

Regards
Vahric  

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Roland Rabben
Sent: Sunday, December 06, 2009 7:29 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Adding bricks to DHT

My problem is that I can't change directory names without a lot of hassle.
We have a hashed directory structure in our application that we rely on 
GlusterFS to store.

Is there a way of copying files behind GlusterFS' back and have GlusterFS pick 
up on the change with a self heal?

It seems that a system designed for scalability should tolerate eh... being 
scaled...?

Regards
Roland

2009/12/6 Vahriç Muhtaryan 

> One idea , docs said that you should create new directory. Could you 
> create a new directory after add another server and copy files from 
> old directory to new one, then everything will be distributed.
>
> I know that glusterfs do not have reallocation mechanism for 
> distribute data to new added bricks for achive performance and capcity 
> problem.
>
>
> *Now the question is what about adding a new server?
>
> Currently hash works based on directory level distribution. i.e, a 
> given file's parent directory will have information of how the hash 
> numbers are mapped to subvolumes. So, adding new node doesn't disturb 
> any current setup as the files/dirs present already have its 
> information preserved. Whatever new directory gets created, will start 
> considering new volume for scheduling files.
>
> Regards
> Vahric
>
> -Original Message-
> From: gluster-users-boun...@gluster.org [mailto:
> gluster-users-boun...@gluster.org] On Behalf Of Roland Rabben
> Sent: Sunday, December 06, 2009 6:29 PM
> To: gluster-users@gluster.org
> Subject: [Gluster-users] Adding bricks to DHT
>
> Hi
> I have a GlusterFS DHT system that I need to expand in a few days. 
> Reading up on the documentation it seems to me that adding bricks 
> won't solve our problem because of how the hash works.
> I have a fairly static folder structure, but the number of files are 
> growing fast. From what I understand, adding bricks will not allow new 
> files to be stored on the new bricks unless they are stored in new 
> directories.
>
> So my questions are:
> 1. How can I make sure the new bricks are used?
> 2. Is there a way to "rebalance" content over the new and old bricks?
>
> I can't tolerate much downtime on my FS.
>
> I am using GlusterFS 2.06.
>
> Regards
> Roland Rabben
>
>


--
Roland Rabben
Founder & CEO Jotta AS
Cell: +47 90 85 85 39
Phone: +47 21 04 29 00
Email: rol...@jotta.no

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Adding bricks to DHT

2009-12-06 Thread Vahriç Muhtaryan
One idea: the docs say you should create a new directory. Could you create a
new directory after adding another server and copy the files from the old
directory into the new one? Then everything will be distributed.

As far as I know, GlusterFS does not have a reallocation mechanism to
redistribute data to newly added bricks to address performance and capacity.


*Now the question is what about adding a new server?

Currently the hash works with directory-level distribution, i.e. a given file's
parent directory holds the information about how hash ranges are mapped to
subvolumes. So adding a new node doesn't disturb the current setup, since the
files/dirs already present keep their information. Any newly created directory
will start considering the new volume when scheduling files.
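The workaround just described — make a fresh directory after the new brick is in
place, so its layout covers all subvolumes, then copy into it — might look like
this on a client (all paths here are stand-ins for the real GlusterFS mount):

```shell
# Sketch with stand-in paths; on a real deployment these would live under
# the GlusterFS client mountpoint, and the new directory's layout would
# include the newly added brick.
mnt=$(mktemp -d)                      # stand-in for /mnt/gluster
mkdir -p "$mnt/data"; echo hi > "$mnt/data/file1"
mkdir "$mnt/data-new"                 # new dir gets a layout over ALL bricks
cp -rp "$mnt/data/." "$mnt/data-new/"
ls "$mnt/data-new"
# after verifying the copy, the old directory could be removed and the new
# one renamed into its place
```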
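A toy illustration of the directory-level placement described above (this is
not Gluster's actual hash function; `cksum` is only a stand-in for a 32-bit
hash, and the subvolume count is assumed):

```shell
# Map a filename to one of N subvolumes via a 32-bit hash of its name.
# In real DHT the parent directory records which hash ranges map to which
# subvolumes; the modulo here is only a simplification of that mapping.
file="report.txt"
subvols=4
h=$(printf '%s' "$file" | cksum | cut -d' ' -f1)   # 32-bit CRC as stand-in
idx=$((h % subvols))
echo "file '$file' -> subvolume $idx"
```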

Regards
Vahric

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Roland Rabben
Sent: Sunday, December 06, 2009 6:29 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Adding bricks to DHT

Hi
I have a GlusterFS DHT system that I need to expand in a few days. Reading up 
on the documentation it seems to me that adding bricks won't solve our problem 
because of how the hash works.
I have a fairly static folder structure, but the number of files are growing 
fast. From what I understand, adding bricks will not allow new files to be 
stored on the new bricks unless they are stored in new directories.

So my questions are:
1. How can I make sure the new bricks are used?
2. Is there a way to "rebalance" content over the new and old bricks?

I can't tolerate much downtime on my FS.

I am using GlusterFS 2.06.

Regards
Roland Rabben

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] When will GlusterFS 2.1 be released?

2009-11-05 Thread Vahriç Muhtaryan
Regards

Vahric

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Anybody know when 2.1 will be released?

2009-08-29 Thread Vahriç Muhtaryan
Hello all,

I'm really wondering when version 2.1 will be released. I would like to test
the built-in NFS and CIFS options. Does anybody know if there is an RC to test
until it is released?

 

Regards

Vahric

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] When will glusterfs 2.0.3 be released?

2009-07-07 Thread Vahriç Muhtaryan
Any info ?

 

Regards

Vahric

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Need a quick answer on "Distributed Replicated Storage" questions

2009-06-18 Thread Vahriç Muhtaryan
Hello,

I have the same question in my mind about iSCSI, but I believe we can work
around the problem by installing http://iscsitarget.sourceforge.net/ on the
client side and sharing the GlusterFS area as a LUN, can't we?
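A sketch of that idea in the iSCSI Enterprise Target config format (the target
name and path are hypothetical): a file living on the GlusterFS mount is
exported as a fileio LUN.

```
# /etc/ietd.conf (fragment)
Target iqn.2009-06.example.com:gluster.lun0
    Lun 0 Path=/mnt/glusterfs/luns/lun0.img,Type=fileio
```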

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Anand Babu
Sent: Thursday, June 18, 2009 11:36 AM
To: Liam Slusser
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Need a quick answer on "Distributed Replicated
Storage" questions

We made good progress with unfs3 integration using booster model. GlusterFS
and unfs3 (modified version) will run in single address space using booster
library.
This feature is scheduled for 2.1. We will try to have a pre-release
available
soon (in weeks). GlusterFS v2.2 will have a native NFS protocol translator.

iSCSI exporting requires mmap support. You can create image files and
losetup them
as devices. Then it will be possible to export as iSCSI volumes. We just
fixed
a bug that caused poor mmap write performance. Work is on the way. We will
keep
you updated.
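The image-file approach mentioned above might look like this (paths are
stand-ins; on a real setup the image would sit on the GlusterFS mount, and the
losetup/iSCSI steps require root):

```shell
# Create a sparse 1 GiB image to back a LUN; dd with count=0 and a seek
# offset just truncates the file to the target size without writing data.
img=$(mktemp)                         # stand-in for a file on the mount
dd if=/dev/zero of="$img" bs=1M count=0 seek=1024 2>/dev/null
ls -l "$img"
# next steps (root required):
#   losetup /dev/loop0 "$img"
#   then export /dev/loop0 through an iSCSI target implementation
```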

--
Anand Babu Periasamy
GPG Key ID: 0x62E15A31
Blog [http://unlocksmith.org]
GlusterFS [http://www.gluster.org]
GNU/Linux [http://www.gnu.org]


Liam Slusser wrote:
>  
> Jonathan,
>  
> You can export a Gluster mount via a client with a NFS server however 
> the performance is pretty poor.  As far as i know there is no way to 
> export it with iSCSI.
>  
> Your best option is to use a single/dual Linux/Solaris iscsi server to 
> boot strap all your systems in xenServer and then use Gluster and fuse 
> to mount your /data drive once the system is up and running.
>  
> liam
> 
> On Mon, Jun 15, 2009 at 5:15 PM, Jonathan Bayles  > wrote:
> 
> Hi all,
> 
> I am attempting to prevent my company from having to buy a SAN to
> backend our virtualization platform(xenServer). Right now we have a
> light workload and 4 dell 2950's (6disks, 1 controller each) to
> leverage against the storage side. I like what I see in regard to
> the "Distributed Replicated Storage" where you essentially create a
> RAID 10 of bricks. This would work very well for me. The question
> is, how do I serve this storage paradigm to a front end that's
> expecting an NFS share or an iSCSI target? Does gluster enable me to
> access the entire cluster from a single IP? Or is it something I
> could run on a centos cluster (luci and ricci) and use the cluster
> suite to present the glustered file system in the form of an NFS
share?
> 
> Let me back up and state my needs/assumptions:
> 
> * A storage cluster with the capacity equal to at least 1
> node(assuming all nodes are the same).
> 
> * I need to be able to lose/take down any one brick in the cluster
> at any time without a loss of data.
> 
> * I need more than the throughput of a single server, if not in
> overall speed, then in width.
> 
> * I need to be able to add more bricks in and have the expectation
> of increased storage capacity and throughput.
> 
> * I need to present the storage as a single entity as an NFS share
> or a iSCSI target.
> 
> If there are any existing models out there please point me too them,
> I don't mind doing the work I just don't want to re-invent the
> wheel. Thanks in advance for your time and effort, I know what its
> like to have to answer newbie questions!
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org 
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
> 
> 
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users




Re: [Gluster-users] Horrible performance with small files (DHT/AFR)

2009-06-03 Thread Vahriç Muhtaryan
To better understand the issue, did you try 4 servers with DHT only, 2 servers
with DHT only, or two servers with replication only, to find out the real
problem? Maybe replication or DHT has a bug.

-Original Message-
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Benjamin Krein
Sent: Wednesday, June 03, 2009 11:00 PM
To: Jasper van Wanrooy - Chatventure
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Horrible performance with small files (DHT/AFR)

The current boxes I'm using for testing are as follows:

  * 2x dual-core Opteron ~2GHz (x86_64)
  * 4GB RAM
  * 4x 7200 RPM 73GB SATA - RAID1+0 w/3ware hardware controllers

The server storage directories live in /home/clusterfs where /home is  
an ext3 partition mounted with noatime.

These servers are not virtualized.  They are running Ubuntu 8.04 LTS  
Server x86_64.

The files I'm copying are all <2k javascript files (plain text) stored  
in 100 hash directories in each of 3 parent directories:

/home/clusterfs/
   + parentdir1/
   |   + 00/
   |   | ...
   |   + 99/
   + parentdir2/
   |   + 00/
   |   | ...
   |   + 99/
   + parentdir3/
       + 00/
       | ...
       + 99/

There are ~10k of these <2k javascript files distributed throughout  
the above directory structure totaling approximately 570MB.  My tests  
have been copying that entire directory structure from a client  
machine into the glusterfs mountpoint on the client.

Observing IO on both the client box & all the server boxes via iostat  
shows that the disks are doing *very* little work.  Observing the CPU/ 
memory load with top or htop shows that none of the boxes are CPU or  
memory bound.  Observing the bandwidth in/out of the network interface  
shows <1MB/s throughput (we have a fully gigabit LAN!) which usually  
drops down to <150KB/s during the copy.

scp'ing the same directory structure from the same client to one of  
the same servers will work at ~40-50MB/s sustained as a comparison.   
Here is the results of copying the same directory structure using  
rsync to the same partition:

# time rsync -ap * b...@cfs1:~/cache/
b...@cfs1's password:

real0m23.566s
user0m8.433s
sys 0m4.580s
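For anyone reproducing these numbers, a self-contained harness for the
many-small-files case might look like this (local temp paths as stand-ins;
pointing `dst` at the GlusterFS mountpoint, and wrapping the copy with `time`,
would give figures comparable to the rsync/scp runs above):

```shell
# Generate 100 small files and copy them into a target directory.
src=$(mktemp -d); dst=$(mktemp -d)    # stand-ins; use the GlusterFS mount as dst
for i in $(seq 1 100); do
    head -c 512 /dev/zero > "$src/f$i"   # ~0.5 KB per file, like the JS cache
done
cp -rp "$src/." "$dst/"               # wrap with `time` to measure
n=$(ls "$dst" | wc -l)
echo "$n files copied"
```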

Ben

On Jun 3, 2009, at 3:16 PM, Jasper van Wanrooy - Chatventure wrote:

> Hi Benjamin,
>
> That's not good news. What kind of hardware do you use? Is it  
> virtualised? Or do you use real boxes?
> What kind of files are you copying in your test? What performance do  
> you have when copying it to a local dir?
>
> Best regards Jasper
>
> - Original Message -
> From: "Benjamin Krein" 
> To: "Jasper van Wanrooy - Chatventure" 
> Cc: "Vijay Bellur" , gluster-users@gluster.org
> Sent: Wednesday, 3 June, 2009 19:23:51 GMT +01:00 Amsterdam /  
> Berlin / Bern / Rome / Stockholm / Vienna
> Subject: Re: [Gluster-users] Horrible performance with small files  
> (DHT/AFR)
>
> I reduced my config to only 2 servers (had to donate 2 of the 4 to
> another project).  I now have a single server using DHT (for future
> scaling) and AFR to a mirrored server.  Copy times are much better,
> but still pretty horrible:
>
> # time cp -rp * /mnt/
>
> real  21m11.505s
> user  0m1.000s
> sys   0m6.416s
>
> Ben
>
> On Jun 3, 2009, at 3:13 AM, Jasper van Wanrooy - Chatventure wrote:
>
>> Hi Benjamin,
>>
>> Did you also try with a lower thread-count. Actually I'm using 3
>> threads.
>>
>> Best Regards Jasper
>>
>>
>> On 2 jun 2009, at 18:25, Benjamin Krein wrote:
>>
>>> I do not see any difference with autoscaling removed.  Current
>>> server config:
>>>
>>> # webform flat-file cache
>>>
>>> volume webform_cache
>>> type storage/posix
>>> option directory /home/clusterfs/webform/cache
>>> end-volume
>>>
>>> volume webform_cache_locks
>>> type features/locks
>>> subvolumes webform_cache
>>> end-volume
>>>
>>> volume webform_cache_brick
>>> type performance/io-threads
>>> option thread-count 32
>>> subvolumes webform_cache_locks
>>> end-volume
>>>
>>> <>
>>>
>>> # GlusterFS Server
>>> volume server
>>> type protocol/server
>>> option transport-type tcp
>>> subvolumes dns_public_brick dns_private_brick webform_usage_brick
>>> webform_cache_brick wordpress_uploads_brick subs_exports_brick
>>> option auth.addr.dns_public_brick.allow 10.1.1.*
>>> option auth.addr.dns_private_brick.allow 10.1.1.*
>>> option auth.addr.webform_usage_brick.allow 10.1.1.*
>>> option auth.addr.webform_cache_brick.allow 10.1.1.*
>>> option auth.addr.wordpress_uploads_brick.allow 10.1.1.*
>>> option auth.addr.subs_exports_brick.allow 10.1.1.*
>>> end-volume
>>>
>>> # time cp -rp * /mnt/
>>>
>>> real70m13.672s
>>> user0m1.168s
>>> sys 0m8.377s
>>>
>>> NOTE: the above test was also done during peak hours when the LAN/
>>> dev server were in use which would cause some of the extra time.
>>> This is still WAY too much, though.
>>>
>>> Ben
>>>
>>>
>>> On Jun 1, 2009, at 1:40 PM, Vijay Bellur wrote:
>>>
 Hi Benjamin,

 Could you please try by turning autoscali

[Gluster-users] Feature Requests About Redistribution

2009-06-01 Thread Vahriç Muhtaryan
Hello All, 

 

Is there any way to redistribute already-distributed data when a new brick is
added to the DHT distributor? That is, if a new server is added and DHT is in
use, I would like to balance capacity and performance by redistributing data
to the newly added server.

 

Regards

Vahric

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Configuration validation needed

2009-06-01 Thread Vahriç Muhtaryan
Maybe you should run some I/O tests with Iometer to find out.

What kind of RAID configuration will you have on each physical server? RAID 5,
RAID 6, or RAID 10?

Also, what kind of application will you use? DHT is file-based, and using it
for MySQL or MSSQL might not make you happy, but maybe the stripe translator
will.

If your files are big, jumbo frames could help you get more throughput.

RAM matters too: if you have plenty of RAM and you read more than you write,
it can help as well.

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Julien Cornuwel
Sent: Monday, June 01, 2009 9:31 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Configuration validation needed

My test servers were not available today, so I had to test my design on
virtual machines :-/

All I can tell is : it works (after correcting minor errors in the
config file). But I have no idea what performances I can expect on real
servers...

Le lundi 01 juin 2009 à 02:35 +0200, Julien Cornuwel a écrit :
> I asked because I cannot perform tests until tomorrow and I wanted to be
> sure. I'll keep you posted about the result...
> 
> Le dimanche 31 mai 2009 à 15:15 +0300, Vahriç Muhtaryan a écrit :
> > I'm new to glusterfs but looks like your config file is okay.
> > Why don't you execute your conf and test it ;) go ahead.
> > İf something wrong glusterfs or glusterfsd will kick you ;)
> > 
> > Pls keep informed , its work or not ! I can simulate same environment on my 
> > servers.
> > 
> > Regards
> > Vahric
> > 
> > -Original Message-
> > From: gluster-users-boun...@gluster.org 
> > [mailto:gluster-users-boun...@gluster.org] On Behalf Of Julien Cornuwel
> > Sent: Sunday, May 31, 2009 2:55 PM
> > To: gluster-users@gluster.org
> > Subject: [Gluster-users] Configuration validation needed
> > 
> > Hi,
> > 
> > 
> > I will soon have to deploy a glusterFS across several servers. Here are my 
> > goals :
> > 
> > - Every server can access the all filesystem
> > - Every file must be replicated
> > 
> > I used a combination of Replicate and Distribute to be able to add new 
> > servers. 
> > 
> > Each server storage is divided in two bricks (primary and secondary).
> > First server's primary storage is replicated on second server's secondary. 
> > Second server's primary is replicated on third servers secondary and so on.
> > 
> > And then, I use Distribute across these replicated volumes. I think this 
> > should work, but could you please review the attached configuration file 
> > for errors ?
> > 
> > 
> > Regards,
> > 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users




Re: [Gluster-users] Could be the bug of Glusterfs? The file system is unstable and hang

2009-06-01 Thread Vahriç Muhtaryan
Hello, 

I replaced

-        if (priv->incoming.buf_p)
-            free (priv->incoming.buf_p);

with

+        if (priv->incoming.iobuf)
+            iobuf_unref (priv->incoming.iobuf);

and it's working.

Now I will test to see what's going on!

Regards
Vahric

-Original Message-
From: Shehjar Tikoo [mailto:shehj...@gluster.com] 
Sent: Monday, June 01, 2009 7:30 AM
To: Vahriç Muhtaryan
Cc: 'Alpha Electronics'; gluster-users@gluster.org
Subject: Re: [Gluster-users] Could be the bug of Glusterfs? The file system
is unstable and hang

Vahriç Muhtaryan wrote:
> 
> Hello,
> 
>  
> 
> I was installed new version like you and making test for something 
> should be or not . We have same configuration but I got differnet error, 
> I couldn't create directory or file , "it was giving Invalid Argument" 
> and I saw that one of server give an error like below , still testing 

Hi

If you're comfortable with applying a patch, you could
test with the following patch. We've seen the crash mentioned by you
and this patch fixes it for us.

http://patches.gluster.com/patch/436/

Please confirm if it does the same for you.

Thanks
Shehjar

> 
>  
> 
> pending frames:
> 
> frame : type(1) op(WRITE)
> 
>  
> 
> patchset: 5c1d9108c1529a1155963cb1911f8870a674ab5b
> 
> signal received: 6
> 
> configuration details:argp 1
> 
> backtrace 1
> 
> db.h 1
> 
> dlfcn 1
> 
> fdatasync 1
> 
> libpthread 1
> 
> llistxattr 1
> 
> setfsid 1
> 
> spinlock 1
> 
> epoll.h 1
> 
> xattr.h 1
> 
> st_atim.tv_nsec 1
> 
> package-string: glusterfs 2.0.1
> 
> [0xfa9420]
> 
> /lib/libc.so.6(abort+0x101)[0x218691]
> 
> /lib/libc.so.6[0x24f24b]
> 
> /lib/libc.so.6[0x2570f1]
> 
> /lib/libc.so.6(cfree+0x90)[0x25abc0]
> 
>
/usr/local/lib/glusterfs/2.0.1/transport/socket.so(__socket_reset+0x3e)[0xc8
155e]
> 
>
/usr/local/lib/glusterfs/2.0.1/transport/socket.so(socket_event_poll_err+0x3
b)[0xc8303b]
> 
>
/usr/local/lib/glusterfs/2.0.1/transport/socket.so(socket_event_handler+0x8b
)[0xc833bb]
> 
> /usr/local/lib/libglusterfs.so.0[0x9820ca]
> 
> /usr/local/lib/libglusterfs.so.0(event_dispatch+0x21)[0x980fb1]
> 
> glusterfsd(main+0xdf3)[0x804b1a3]
> 
> /lib/libc.so.6(__libc_start_main+0xdc)[0x203e8c]
> 
> glusterfsd[0x8049911]
> 
> -


___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] CIFS GlusterFS 2.2 FUSE and WINDOWS

2009-05-31 Thread Vahriç Muhtaryan
Thank you very much.
I'll wait for August. Thanks again.

Vahric

-Original Message-
From: Shehjar Tikoo [mailto:shehj...@gluster.com] 
Sent: Monday, June 01, 2009 7:35 AM
To: Vahriç Muhtaryan
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] CIFS GlusterFS 2.2 FUSE and WINDOWS

Vahriç Muhtaryan wrote:
> Hello,
> 
> 
> 
> I'm waiting CIFS support for becase of our environment and have a few
>  questions .
> 
> 
> 
> · I.s there any deadline  for version 2.2 ?
> 

2.1 is roughly due in August so you could estimate 2.2 to be
available roughly 3 months after.

> · I wonder after CIFS support available how we will connect
> to glusterfs from our windows servers ? FUSE for windows ready  ?
> because I

Windows client will be able to connect to GlusterFS through
a CIFS front-end. At the server, the CIFS will run inside the
GlusterFS server binary so FUSE will not be required.

These are, of course, early ideas on how we want to implement it.
Things could change when we get down to it.

-Shehjar

> afraid when see this sentence "Is it really true that there are no
> FUSE port for Windows? If anyone knows of one add it to this list,
> please." I.n sourceforge. Will be any recommendation after CIFS
> support and version 2.2 released by glusterfs developers ?
> 
> 
> 
> Regards
> 
> Vahric
> 
> 
> 
> 
> 
> ___ Gluster-users mailing
> list Gluster-users@gluster.org 
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Configuration validation needed

2009-05-31 Thread Vahriç Muhtaryan
I'm new to GlusterFS, but your config file looks okay.
Why don't you run your conf and test it ;) go ahead.
If something is wrong, glusterfs or glusterfsd will kick you ;)

Please keep us informed whether it works or not! I can simulate the same
environment on my servers.

Regards
Vahric

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Julien Cornuwel
Sent: Sunday, May 31, 2009 2:55 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Configuration validation needed

Hi,


I will soon have to deploy a glusterFS across several servers. Here are my 
goals :

- Every server can access the whole filesystem
- Every file must be replicated

I used a combination of Replicate and Distribute to be able to add new servers. 

Each server storage is divided in two bricks (primary and secondary).
First server's primary storage is replicated on second server's secondary. 
Second server's primary is replicated on the third server's secondary, and so on.

And then, I use Distribute across these replicated volumes. I think this should 
work, but could you please review the attached configuration file for errors ?
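The chained layout described above might be sketched in the client volfile like
this (volume names are hypothetical; each serverN_primary / serverN_secondary
would be a protocol/client volume pointing at that server's export):

```
volume repl1
  type cluster/replicate
  subvolumes server1_primary server2_secondary
end-volume

volume repl2
  type cluster/replicate
  subvolumes server2_primary server3_secondary
end-volume

volume repl3
  type cluster/replicate
  subvolumes server3_primary server1_secondary
end-volume

volume dist
  type cluster/distribute
  subvolumes repl1 repl2 repl3
end-volume
```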


Regards,


___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] About DHT Translater

2009-05-30 Thread Vahriç Muhtaryan
Hello,

 

Sorry, I would like to understand all the components correctly, and I have a
question about DHT.

Instead of looking up a file on all servers, you use a 32-bit hash per file to
track where it is stored.

1)  Where is this hash stored? On the servers or on the clients? I mean, do
clients already know the hash and request the file directly from the related
server, or do all backend servers have all the hash information and redirect
requests to the right server?

2)  Where and in which format are the hashes stored? Can I see them?

3)  It looks like, if I add a new server to the farm, there is no way to
redistribute already-stored data to the new server to balance disk usage
across all servers, because files are already hashed and there is no rehash
mechanism after data is rearranged, right? If so, this could cause design
problems, because if people don't know the trend of their data, they can do
the wrong thing and cannot optimize it later, can't they?

 

Regards

Vahric
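The lookup scheme asked about above (a 32-bit hash of the file name selecting a brick) can be sketched in a few lines. This is a toy illustration, not GlusterFS's actual implementation: the real DHT uses its own hash function and stores per-brick hash ranges as extended attributes on directories, which is (roughly) the answer to question 1 — the mapping lives on the bricks and clients learn it at lookup time.

```python
def hash32(name: str) -> int:
    # Toy 32-bit FNV-1a hash of the file NAME (not its contents);
    # GlusterFS uses a different hash, but any stable 32-bit hash
    # illustrates the idea.
    h = 0x811C9DC5
    for b in name.encode():
        h = ((h ^ b) * 0x01000193) & 0xFFFFFFFF
    return h

def pick_brick(filename: str, bricks: list) -> str:
    # Each brick owns an equal slice of the 32-bit hash space; the hash
    # deterministically selects the owning brick, so a client can route
    # a request without querying every server.
    slot = hash32(filename) * len(bricks) // (1 << 32)
    return bricks[slot]

bricks = ["server1:/brick", "server2:/brick", "server3:/brick"]
print(pick_brick("report.pdf", bricks))
```

This sketch also shows why question 3 is on point: adding a brick changes `len(bricks)`, so existing files no longer hash to the brick they live on unless the ranges are recomputed and data is migrated.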

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] CIFS GlusterFS 2.2 FUSE and WINDOWS

2009-05-30 Thread Vahriç Muhtaryan
 Hello,

 

I'm waiting for CIFS support because of our environment, and I have a few
questions.

 

· Is there any deadline for version 2.2?

· I wonder, after CIFS support is available, how will we connect to
glusterfs from our Windows servers? Is FUSE for Windows ready? I got
worried when I saw this sentence on SourceForge: "Is it really true that there are no FUSE port
for Windows? If anyone knows of one add it to this list, please." Will
there be any recommendation from the glusterfs developers after CIFS
support and version 2.2 are released?

 

Regards

Vahric

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Could be the bug of Glusterfs? The file system is unstable and hang

2009-05-30 Thread Vahriç Muhtaryan
 

 

Hello,

 

I installed the new version like you and am running tests. We have the same
configuration, but I got a different error: I couldn't create a directory
or file (it gave "Invalid Argument"), and I saw one of the servers log the
error below. Still testing.

 

pending frames:
frame : type(1) op(WRITE)

patchset: 5c1d9108c1529a1155963cb1911f8870a674ab5b
signal received: 6
configuration details:argp 1
backtrace 1
db.h 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 2.0.1
[0xfa9420]
/lib/libc.so.6(abort+0x101)[0x218691]
/lib/libc.so.6[0x24f24b]
/lib/libc.so.6[0x2570f1]
/lib/libc.so.6(cfree+0x90)[0x25abc0]
/usr/local/lib/glusterfs/2.0.1/transport/socket.so(__socket_reset+0x3e)[0xc8155e]
/usr/local/lib/glusterfs/2.0.1/transport/socket.so(socket_event_poll_err+0x3b)[0xc8303b]
/usr/local/lib/glusterfs/2.0.1/transport/socket.so(socket_event_handler+0x8b)[0xc833bb]
/usr/local/lib/libglusterfs.so.0[0x9820ca]
/usr/local/lib/libglusterfs.so.0(event_dispatch+0x21)[0x980fb1]
glusterfsd(main+0xdf3)[0x804b1a3]
/lib/libc.so.6(__libc_start_main+0xdc)[0x203e8c]
glusterfsd[0x8049911]
-

 

From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Alpha Electronics
Sent: Friday, May 29, 2009 10:32 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] Could be the bug of Glusterfs? The file system is
unstable and hang

 

We are testing GlusterFS before recommending it to enterprise clients.
We found that the file system always hangs after running for about 2 days;
after killing the server-side process and restarting, everything goes back
to normal.

 Here is the spec and error logged:
GlusterFS version:  v2.0.1

Client volume:
volume brick_1
  type protocol/client
  option transport-type tcp/client
  option remote-port  # Non-default port
  option remote-host server1
  option remote-subvolume brick
end-volume

volume brick_2
  type protocol/client
  option transport-type tcp/client
  option remote-port  # Non-default port
  option remote-host server2
  option remote-subvolume brick
end-volume

volume bricks
  type cluster/distribute
  subvolumes brick_1 brick_2
end-volume

Error logged on client side through /var/log/glusterfs.log
[2009-05-29 14:58:55] E [client-protocol.c:292:call_bail] brick_1: bailing
out frame LK(28) frame sent = 2009-05-29 14:28:54. frame-timeout = 1800
[2009-05-29 14:58:55] W [fuse-bridge.c:2284:fuse_setlk_cbk] glusterfs-fuse:
106850788: ERR => -1 (Transport endpoint is not connected)
Error logged on the server side:
[2009-05-29 14:59:15] E [client-protocol.c:292:call_bail] brick_2: bailing
out frame LK(28) frame sent = 2009-05-29 14:29:05. frame-timeout = 1800
[2009-05-29 14:59:15] W [fuse-bridge.c:2284:fuse_setlk_cbk] glusterfs-fuse:
106850860: ERR => -1 (Transport endpoint is not connected)

There is also an error message logged on the server side after 1 hour in
/var/log/messages:
May 29 16:04:16 server2 winbindd[3649]: [2009/05/29 16:05:16, 0]
lib/util_sock.c:write_data(564)
May 29 16:04:16 server2 winbindd[3649]:   write_data: write failure. Error =
Connection reset by peer
May 29 16:04:16 server2 winbindd[3649]: [2009/05/29 16:05:16, 0]
libsmb/clientgen.c:write_socket(158)
May 29 16:04:16 server2 winbindd[3649]:   write_socket: Error writing 104
bytes to socket 18: ERRNO = Connection reset by peer
May 29 16:04:16 server2 winbindd[3649]: [2009/05/29 16:05:16, 0]
libsmb/clientgen.c:cli_send_smb(188)
May 29 16:04:16 server2 winbindd[3649]:   Error writing 104 bytes to client.
-1 (Connection reset by peer)
May 29 16:04:16 server2 winbindd[3649]: [2009/05/29 16:05:16, 0]
libsmb/cliconnect.c:cli_session_setup_spnego(859)
May 29 16:04:16 server2 winbindd[3649]:   Kinit failed: Cannot contact any
KDC for requested realm

 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] About Installation

2009-05-27 Thread Vahriç Muhtaryan
Hello,

 

I will install two servers, which will be striped, and one client for
testing.

 

http://europe.gluster.org/glusterfs/2.0/LATEST/CentOS/ — I saw some RPMs
there, but I'm not sure which ones should be installed.

 

for example 

 

on servers 

 

   glusterfs-common-2.0.1-1.el5.x86_64.rpm
   glusterfs-devel-2.0.1-1.el5.x86_64.rpm
   glusterfs-server-2.0.1-1.el5.x86_64.rpm

 

on the client, only the glusterfs-client-2.0.1-1.el5.x86_64.rpm package?

Or, since my virtual servers run 32-bit CentOS, would installing
glusterfs-2.0.1.tar.gz from source on all servers and clients be enough
instead of these files?

 

Regards

Vahric

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] raid5 or raid6 level cluster

2009-05-25 Thread Vahriç Muhtaryan
Thank you very much ! 

-Original Message-
From: Liam Slusser [mailto:lslus...@gmail.com] 
Sent: Monday, May 25, 2009 5:44 PM
To: Vahriç Muhtaryan
Cc: 
Subject: Re: [Gluster-users] raid5 or raid6 level cluster

Currently no, but it's in the roadmap for a future release.

ls



On May 25, 2009, at 1:57 AM, Vahriç Muhtaryan   
wrote:

> Hello,
>
>
>
> Is there any way to create a raid6 or raid5 level glusterfs
> installation?
>
>
>
> From the docs I understood that I can do a raid1-based glusterfs
> installation or raid0 (striping data to all servers) or a raid10-based
> solution, but raid10 is not cost effective because it needs too many
> servers.
>
>
>
> Do you have a plan to keep one or two servers as parity for the whole
> glusterfs system?
>
>
>
> Regards
>
> Vahric
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] raid5 or raid6 level cluster

2009-05-25 Thread Vahriç Muhtaryan
Hello,

 

Is there any way to create a raid6 or raid5 level glusterfs installation?

 

From the docs I understood that I can do a raid1-based glusterfs
installation or raid0 (striping data to all servers) or a raid10-based
solution, but raid10 is not cost effective because it needs too many servers.

 

Do you have a plan to keep one or two servers as parity for the whole
glusterfs system?

 

Regards

Vahric

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] About FileSystem

2009-05-22 Thread Vahriç Muhtaryan
Thank you for all your answers; all of them are helpful.

-Original Message-
From: Shehjar Tikoo [mailto:shehj...@gluster.com] 
Sent: Friday, May 22, 2009 6:38 PM
To: Vahriç Muhtaryan
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] About FileSystem

Vahriç Muhtaryan wrote:
> Hello To All,
> 
> 
> 
> I'm interested in glusterfs. There is something I do not understand: the
> docs say that glusterfs does not need fsck, but I'm not sure.
> 
> If I understood correctly, I'm sharing directories on the related
> servers, for example:
> 
> 
> 
> Server 1: /home/disk_space_1 -> which is limited by the disk space of
> the server where the disk_space_1 folder resides; this folder is on the
> node's file system, which is xfs, ext3, or another
> 
> Server 2: /home/disk_space_1 -> which is limited by the disk space of
> the server where the disk_space_1 folder resides; this folder is on the
> node's file system, which is xfs, ext3, or another
> 
> 
> 
> Does this mean a file check can always happen, because a bit of the
> glusterfs system is on Server 1, so an fsck is always possible?
> 
> 

That is correct, but that is a function (or limitation) of the local,
on-disk file system, not of GlusterFS. In fact, if you use the replicate
translator you can have multiple copies of your data on different
machines, so it is possible for you to stop relying on an fsck for data
availability.

> 
> 
> Second, I understood that glusterfs sits on the servers' file systems,
> and because of this the capacity is Server1 + Server2 + ... + ServerN,
> am I right?
> 
> 

That depends on how you configure it. What you say above, is valid
in case you use distribute or stripe translator but is not valid
if you use the replicate translator.
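To make that capacity rule concrete, here is a toy sketch; the brick sizes and replica count below are made-up numbers, not figures from this thread:

```python
def usable_capacity(brick_sizes_tb, replica_count=1):
    # Toy model of GlusterFS-style aggregation: with distribute (or
    # stripe) the volume aggregates every brick; with replicate, each
    # file is stored replica_count times, so usable capacity divides
    # by that factor.
    return sum(brick_sizes_tb) / replica_count

# Pure distribute over three 36 TB bricks: capacity is the sum.
print(usable_capacity([36, 36, 36]))            # 108.0
# Distribute over two replica pairs of 36 TB bricks: half is usable.
print(usable_capacity([36, 36, 36, 36], 2))     # 72.0
```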

> 
> Is there any way to support iSCSI?

Not through GlusterFS.

> 
> 
> 
> It looks like NFS is supported, and NFSv3 and CIFS will be supported in
> the next release. Does that mean glusterfs will support CIFS easily,
> without Samba?
> 
> 

Yes. When CIFS support comes in, it will be usable without requiring
Samba.

Regards
Shehjar

> 
> Thanks & Regards
> 
> Vahric
> 
> 
> 
> 
> 
> 
> ___ Gluster-users mailing
>  list Gluster-users@gluster.org 
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] About FileSystem

2009-05-22 Thread Vahriç Muhtaryan
Hello To All, 

 

I'm interested in glusterfs. There is something I do not understand: the
docs say that glusterfs does not need fsck, but I'm not sure.

If I understood correctly, I'm sharing directories on the related servers,
for example:

 

Server 1: /home/disk_space_1 -> which is limited by the disk space of the
server where the disk_space_1 folder resides; this folder is on the node's
file system, which is xfs, ext3, or another

Server 2: /home/disk_space_1 -> which is limited by the disk space of the
server where the disk_space_1 folder resides; this folder is on the node's
file system, which is xfs, ext3, or another

 

Does this mean a file check can always happen, because a bit of the
glusterfs system is on Server 1, so an fsck is always possible?

 

Second, I understood that glusterfs sits on the servers' file systems, and
because of this the capacity is Server1 + Server2 + ... + ServerN, am I right?

 

Is there any way to support iSCSI?

 

It looks like NFS is supported, and NFSv3 and CIFS will be supported in the
next release. Does that mean glusterfs will support CIFS easily, without Samba?

 

Thanks & Regards

Vahric

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users