Re: [Gluster-users] cannot create a new volume with a brick that used to be part of a deleted volume?

2012-09-21 Thread Dr. Jörg Petersen

Hello,

   what I regularly do:
1) Create a btrfs snapshot of each brick
2) Reassemble the snapshots into a new (snapshot) Gluster volume

When reassembling the snapshots I have to remove all xattrs and the
.glusterfs directory.
Since btrfs is painfully slow at deleting, I would prefer an option to
reuse the content, which should be valid for the new (snapshot)
Gluster volume...
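For reference, clearing that metadata by hand looks roughly like the following sketch. The xattr names (trusted.glusterfs.volume-id, trusted.gfid) and the .glusterfs directory are the conventional ones glusterd checks at volume-create time, but verify against your GlusterFS version before relying on this:

```python
import errno
import os
import shutil

def reset_brick(brick):
    """Sketch, not an official procedure: strip GlusterFS metadata from a
    snapshotted brick directory so it can be offered to a new volume."""
    for attr in ("trusted.glusterfs.volume-id", "trusted.gfid"):
        try:
            os.removexattr(brick, attr)
        except OSError as e:
            # An absent xattr (or lacking CAP_SYS_ADMIN on a demo dir) is
            # fine; anything else is a real error.
            if e.errno not in (errno.ENODATA, errno.ENOTSUP,
                               errno.EPERM, errno.EACCES):
                raise
    # Drop the internal hardlink namespace; the new volume rebuilds it.
    shutil.rmtree(os.path.join(brick, ".glusterfs"), ignore_errors=True)
```

Note this only avoids the xattr/.glusterfs cleanup being forgotten; it does not avoid the slow btrfs deletes the poster is complaining about.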


Greetings,
Jörg


On 20.09.2012 20:56, Doug Hunley wrote:

On Thu, Sep 20, 2012 at 2:47 PM, Joe Julian j...@julianfamily.org wrote:

Because it's a vastly higher priority to preserve data. Just because I
delete a volume doesn't mean I want the data deleted. In fact, more often
than not, it's quite the opposite. The barrier to data loss is high, and it
should remain high.

OK, again I'll ask: what is a typical scenario for me as a gluster
admin to delete a volume, then add one (or more) of its former
bricks to another volume and keep that data intact? I can't think of
a real-world example, but I assume they exist, as this is the model
taken by Gluster.



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster's read performance

2012-09-21 Thread Joe Julian
"Small files" is sort of a misconception. Initial file ops include a small 
amount of overhead: on a lookup, the filename is hashed, the DHT subvolume 
is selected, and the request is sent to that subvolume. If it's a replica, the 
request is sent to each replica in that subvolume set (usually 2), and all 
the replicas have to respond. If one or more have pending flags or there's an 
attribute mismatch, either some self-heal action has to take place, or a 
split-brain is determined. If the file doesn't exist on that subvolume, the 
same must be done on all the subvolumes. If the file is found, a link file is 
made on the expected DHT subvolume pointing to the place we found the file. 
This will make finding it faster the next time. Once the file is found and is 
determined to be clean, the file system can move on to the next file 
operation.
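That lookup flow can be sketched as follows. The subvolume names and the MD5 stand-in hash are hypothetical; real DHT reportedly uses a Davies-Meyer hash mapped onto per-directory layout ranges, not a simple modulo:

```python
import hashlib

# Hypothetical subvolume names for a 3-subvolume distribute setup.
SUBVOLUMES = ["replica-0", "replica-1", "replica-2"]

def dht_subvolume(filename):
    """Pick the expected subvolume by hashing the file name (sketch only)."""
    digest = hashlib.md5(filename.encode()).hexdigest()
    return SUBVOLUMES[int(digest, 16) % len(SUBVOLUMES)]

def lookup(filename, files_on):
    """Mimic the lookup flow described above: try the hashed subvolume
    first, and only fan out to every subvolume on a miss.
    Returns (subvolume or None, whether a full fan-out was needed)."""
    expected = dht_subvolume(filename)
    if filename in files_on.get(expected, set()):
        return expected, False              # found where the hash predicted
    for sub in SUBVOLUMES:                  # miss: every subvolume is asked
        if filename in files_on.get(sub, set()):
            # a real client would now create a link file on `expected`
            return sub, True
    return None, True                       # ENOENT after a full fan-out
```

The point of the sketch: a hit on the expected subvolume is one round-trip, while a miss (or a file that landed elsewhere) touches every subvolume, which is where the per-file overhead comes from.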

PHP applications, specifically, normally have a lot of small files that are 
opened for every page query so per-page, that overhead adds up. PHP also 
queries a lot of files that just don't exist. Your single page might query 200 
files that just aren't there. They're in a different portion of the search 
path, or they're a plugin that's not used, etc.

NFS mitigates that effect by using FS-Cache in the kernel. It stores directories 
and stats, preventing the call to the actual filesystem. This also means, of 
course, that an image that was just uploaded through a different server isn't 
going to exist on this one until the cache times out. Stale data in a 
multi-client system has to be expected with a caching client.

Jeff Darcy created a test translator that caches negative lookups which he said 
also mitigated the PHP problem pretty nicely.
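The idea of such a negative-lookup cache, in miniature (a hypothetical sketch, not the actual translator code):

```python
import time

class NegativeLookupCache:
    """Remember recent ENOENTs so repeated probes for missing files (e.g.
    PHP include-path scans) skip the cluster round-trip until a TTL expires."""

    def __init__(self, ttl=1.0):
        self.ttl = ttl
        self._misses = {}            # path -> time the miss was recorded

    def lookup(self, path, backend_lookup):
        cached = self._misses.get(path)
        if cached is not None and time.time() - cached < self.ttl:
            return None              # cached ENOENT: no backend call
        result = backend_lookup(path)
        if result is None:
            self._misses[path] = time.time()   # remember the miss
        else:
            self._misses.pop(path, None)       # file appeared; invalidate
        return result
```

As with any cache, a file created on another client within the TTL would still look absent here, the same stale-data trade-off as the FS-Cache case above.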

If you have control over your app, things like absolute pathing for PHP or 
leaving file descriptors open can also avoid overhead. Also, optimizing the 
number of times you open a file or the number of files to open can help.

So "small files" refers to the percentage of total file-op time that's spent on 
overhead vs. actual data retrieval.

Chandan Kumar chandank.ku...@gmail.com wrote:

Hello All,

I am new to gluster and evaluating it for my production environment. After
reading some blogs and googling, I learned that an NFS mount at the client gives
better read performance for small files, while the glusterfs/FUSE mount is
better for large write operations.

Now my questions are

1) What do we mean by small files? 1KB/1MB/1GB?
2) If I am using an NFS mount at the client, I am most likely losing the high
availability feature of gluster, unlike a FUSE mount, where if the primary goes
down I don't need to worry about availability.

Basically my production environment will mostly have read operations of
files ranging from 400KB to 5MB and they will be concurrently read by
different threads.

Thanks,
Chandan



[Gluster-users] infiniband bonding

2012-09-21 Thread samuel
Hi folks,

Reading this post:
http://community.gluster.org/q/port-bonding-link-aggregation-transport-rdma-ib-verbs/

It says that gluster 3.2 does not support bonding of infiniband ports.

Does anyone know whether 3.3 has changed this limitation? Is there any
other place to find information about this subject?

Thanks in advance!

Samuel.


Re: [Gluster-users] infiniband bonding

2012-09-21 Thread Fernando Frediani (Qube)
Well, it actually says it is a limitation of the InfiniBand driver, so nothing 
to do with Gluster, I guess. If the driver allows it, then in theory it should 
not be a problem for Gluster.

Fernando

From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of samuel
Sent: 21 September 2012 10:56
To: gluster-users@gluster.org
Subject: [Gluster-users] infiniband bonding

Hi folks,

Reading this post: 
http://community.gluster.org/q/port-bonding-link-aggregation-transport-rdma-ib-verbs/

It says that gluster 3.2 does not support bonding of infiniband ports.

Does anyone know whether 3.3 has changed this limitation? Is there any other 
place to find information about this subject?

Thanks in advance!

Samuel.


[Gluster-users] gluster 3.2x and 3.3 compatible?

2012-09-21 Thread Gregor Burck
Hi,

I'm testing gluster on two Debian machines (3.2.7) and one Ubuntu machine (3.2.5).
So far it works. Then I tried upgrading the Debian machines to 3.3.0 from 
experimental. After that, I can't use the Ubuntu one as a peer.

Isn't that compatible?

Bye

Gregor
-- 





Re: [Gluster-users] gluster 3.2x and 3.3 compatible?

2012-09-21 Thread Joe Julian

Sorry, no. That was a design goal, but it wasn't possible.

On 09/21/2012 03:44 AM, Gregor Burck wrote:

Hi,

I'm testing gluster on two Debian machines (3.2.7) and one Ubuntu machine (3.2.5).
So far it works. Then I tried upgrading the Debian machines to 3.3.0 from 
experimental. After that, I can't use the Ubuntu one as a peer.

Isn't that compatible?

Bye

Gregor





Re: [Gluster-users] cannot create a new volume with a brick that used to be part of a deleted volume?

2012-09-21 Thread Doug Hunley
Yes, I think an option (--force, maybe) that says "I know I'm about to lose
data, that's what I want" sounds like a reasonable compromise. And the
given examples have clarified the current behavior in my mind. Thank you
for the replies, everyone. I'm better informed now.

Sent from my Google Nexus 7
On Sep 21, 2012 2:11 AM, Joe Julian j...@julianfamily.org wrote:

 Adding a --yes-i-know-what-im-doing type option is something I would get
 behind (and have suggested, myself). File a bug report as an enhancement
 request.

 Dr. Jörg Petersen joerg.h.peter...@googlemail.com wrote:

 Hello,
 
 what I regularly do:
 1) Create a btrfs snapshot of each brick
 2) Reassemble the snapshots into a new (snapshot) Gluster volume
 
 When reassembling the snapshots I have to remove all xattrs and the
 .glusterfs directory.
 Since btrfs is painfully slow at deleting, I would prefer an option to
 reuse the content, which should be valid for the new (snapshot)
 Gluster volume...
 
 Greetings,
 Jörg
 
 
 On 20.09.2012 20:56, Doug Hunley wrote:
  On Thu, Sep 20, 2012 at 2:47 PM, Joe Julian j...@julianfamily.org
 wrote:
  Because it's a vastly higher priority to preserve data. Just because I
  delete a volume doesn't mean I want the data deleted. In fact, more
 often
  than not, it's quite the opposite. The barrier to data loss is high,
 and it
  should remain high.
  OK, again I'll ask: what is a typical scenario for me as a gluster
  admin to delete a volume, then add one (or more) of its former
  bricks to another volume and keep that data intact? I can't think of
  a real-world example, but I assume they exist, as this is the model
  taken by Gluster.
 
 



Re: [Gluster-users] Performance optimization tips Gluster 3.3? (small files / directory listings)

2012-09-21 Thread Alex
Hi Brian, I'm just wondering if you had any luck with figuring out performance
limitations of your setup. I'm testing a similar configuration, so any tips or
recommendations would be much appreciated. Thanks, --Alex

