On Mon, 29-03-2010 at 12:38 +0100, Ed W wrote:
> I don't see this as a gluster issue - it's a fundamental limitation of
> whether you want an ack for network-based operations. Many people
> switch to Fibre Channel or similar for the IO for exactly this reason.
> If you can drop the la
3) ACK sent once X server machines have received the request (to
RAM). Data loss is possible if all server machines are lost before they
write the request to disk. A good compromise of speed vs. reliability
guarantees.
This functionality can be achieved by loading the write-behind translator.
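As a rough sketch of what "loading the write-behind translator" looks like, here is a fragment of a 3.0-era client-side volume file. The volume names (`remote1`, `writebehind`) are placeholders and the option values are illustrative only; check the option names against your GlusterFS version:

```
# Assumed upstream volume "remote1" (protocol/client) defined earlier
# in the same volfile.

volume writebehind
  type performance/write-behind
  option cache-size 1MB       # how much data may be buffered before flushing
  option flush-behind off     # off: close() waits for pending writes to complete
  subvolumes remote1
end-volume
```

With write-behind in the stack, write() calls are acknowledged to the application once buffered on the client, which is exactly the ack-before-disk behaviour discussed in this thread.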
On Wed, Mar 31, 2010 at 1:40 PM, Ed W wrote:
> On 31/03/2010 06:14, Tom Lanyon wrote:
>> On 31/03/2010, at 2:36 PM, Raghavendra G wrote:
>>> Current design of write-behind acknowledges writes (to applications) even
>>> when they've not hit the disk. Can you please explain how this design is
>>> different (if it is different) from the idea you've explained above?
On 31/03/2010 06:14, Tom Lanyon wrote:
> On 31/03/2010, at 2:36 PM, Raghavendra G wrote:
>> Current design of write-behind acknowledges writes (to applications) even
>> when they've not hit the disk. Can you please explain how this design is
>> different (if it is different) from the idea you've explained above?
On 31/03/2010, at 2:36 PM, Raghavendra G wrote:
> Current design of write-behind acknowledges writes (to applications) even
> when they've not hit the disk. Can you please explain how this design is
> different (if it is different) from the idea you've explained above?
Is this gluster method of w
Hi Ed,
On Mon, Mar 29, 2010 at 3:38 PM, Ed W wrote:
> On 26/03/2010 18:22, Ramiro Magallanes wrote:
>> You could run the genfiles script simultaneously (my English is really
>> poor, we can change the subject of this mail to something like "poor
>> performance and poor English" xDDD) but it's not like a threaded
>> application (iozone rulez).
On 26/03/2010 18:22, Ramiro Magallanes wrote:
You could run the genfiles script simultaneously (my English is really
poor, we can change the subject of this mail to something like "poor
performance and poor English" xDDD) but it's not like a threaded
application (iozone rulez).
If I run 3 processes o
On Fri, 26-03-2010 at 17:44, Ian Rogers wrote:
> Hi Ramiro
>
> ideas off the top of my head:
>
> Get rid of performance/quick-read - it has a memory leak bug due to be
> fixed in gluster v3.0.5
Thanks for the tip ;-)
> If the files are going to be accessed by a program (which doesn't list
> the directories often) rather than a user (who might) then you can get
> rid of perfor
On Fri, 26-03-2010 at 18:17 +0100, Stephan von Krawczynski wrote:
> Can you check how things look when using ext3 instead of xfs?
Yes, sure, and thanks for your suggestion!
That was the first test I made: with ext3 the
numbers are better on each node (local mode).
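One quick, rough way to compare raw write throughput across differently formatted bricks (e.g. one ext3, one xfs) is a synchronous dd run against each mount point. The `BRICK` path below is an assumption; point it at each brick in turn:

```shell
# Assumed mount point of the brick under test (adjust to your layout).
BRICK=${BRICK:-/tmp}

# Write 64 MiB, fsync at the end, and keep dd's throughput summary line.
dd if=/dev/zero of="$BRICK/ddtest.bin" bs=1M count=64 conv=fsync 2>&1 | tail -n 1

# Remove the test file.
rm -f "$BRICK/ddtest.bin"
```

This only measures sequential writes to the local filesystem, not gluster itself, but it isolates whether the ext3-vs-xfs difference shows up below the translator stack.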
Hi Ramiro
ideas off the top of my head:
Get rid of performance/quick-read - it has a memory leak bug due to be
fixed in gluster v3.0.5
If the files are going to be accessed by a program (which doesn't list
the directories often) rather than a user (who might) then you can get
rid of perfor
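Dropping quick-read amounts to leaving the `performance/quick-read` volume out of the client volfile stack. A minimal sketch, assuming 3.0-era volfile syntax with placeholder names (`remote1`, `iocache`, `server1`, `brick1`):

```
# Client-side volfile with performance/quick-read deliberately omitted.

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1       # placeholder server name
  option remote-subvolume brick1   # placeholder exported brick
end-volume

volume iocache
  type performance/io-cache
  option cache-size 64MB           # illustrative value
  subvolumes remote1
end-volume

# Stack tops out at io-cache; no quick-read volume is defined,
# so the leaky translator never loads.
```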
Can you check how things look when using ext3 instead of xfs?
On Fri, 26 Mar 2010 18:04:07 +0100
Ramiro Magallanes wrote:
> Hello there!
>
> I'm working on a 6-node cluster, with new SuperMicro hardware.
> The cluster has to store millions of JPGs (about 200k-4MB), and little
> text files.
Hello there!
I'm working on a 6-node cluster, with new SuperMicro hardware.
The cluster has to store millions of JPGs (about 200k-4MB), and little
text files.
Each node is:
- Single Xeon(R) CPU E5405 @ 2.00GHz (4 cores)
- 4 GB RAM
- 64-bit distro-based (Debian L