Sabuj Pattanayek,
It seems this issue has been fixed in the latest 3.0.x releases. Can you try
with the latest release?
regards,
On Wed, Mar 3, 2010 at 11:05 AM, Harshavardhana wrote:
> On 03/02/2010 10:43 PM, Sabuj Pattanayek wrote:
>
>> Hi,
>>
>> I've got this strange problem where a striped endpoi
For small-block I/O, write-behind should have helped, since it
acknowledges writes to the application even before it has received acks from
the server (and writes in the background after acking the application),
thereby reducing the latency of a single write call.
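For reference, a minimal client-side volfile with write-behind layered over protocol/client might look like the sketch below. This uses 3.0-era volfile syntax; the host, brick, and volume names are placeholders, and the option values are illustrative, so verify the exact option names against your release's documentation.

```
# Sketch of a client volfile with only write-behind (assumed names/values)
volume remote
  type protocol/client
  option transport-type tcp
  option remote-host server1          # placeholder server hostname
  option remote-subvolume posix-brick # placeholder exported brick name
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 4MB               # illustrative value
  option flush-behind on
  subvolumes remote
end-volume
```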
@Jeremy, can you perform dd test writing
Are other operations like chown, chmod, etc. succeeding? A stat call is
generally received as a lookup in glusterfs (the lookup callback returns a stat
structure), and lookup is sent to all subvolumes, whereas calls like open are
sent only to the nodes where the file is present.
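The distinction above is why the two calls can disagree on a broken mount. The sketch below runs against /tmp only so it works anywhere; on the failing glusterfs mount described in this thread, the stat line can succeed (lookup fans out to all subvolumes) while the open line fails (open is routed only to the node believed to hold the file). The path is a placeholder.

```shell
# Placeholder path; substitute a file on the glusterfs mount to test there.
F=/tmp/lookup_vs_open_demo
echo data > "$F"
stat "$F" > /dev/null && echo "lookup ok"   # stat -> lookup, all subvolumes
cat "$F"  > /dev/null && echo "open ok"     # open -> only the owning node
rm -f "$F"
```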
On Wed, Mar 31, 2010 at 3:4
Hi Dean,
It seems you are trying to re-export glusterfs using Samba (as CIFS). I
think CIFS relies on POSIX ACLs to implement permissions. Glusterfs does not
provide ACL support (yet). This may be the cause of your problems.
regards,
On Wed, Mar 31, 2010 at 6:18 PM, Dean Bruhn wrote:
> Raghav
Hi Oliver,
On Wed, Mar 31, 2010 at 12:23 PM, Olivier Le Cam <
olivier.le...@crdp.ac-versailles.fr> wrote:
> Hi -
>
>
> * Can you test dd performance with only write-behind translator (along
>> with
>> protocol/client) in client volume configuration?
>>
>
> That makes no difference with either on
On Wed, Mar 31, 2010 at 1:40 PM, Ed W wrote:
> On 31/03/2010 06:14, Tom Lanyon wrote:
>
>> On 31/03/2010, at 2:36 PM, Raghavendra G wrote:
>>
>>
>>
>>> Current design of write-behind acknowledges writes (to applications) even
>>> when they've not hit the disk. Can you please explain how this desi
Raghavendra,
The server doesn't provide any logs for the issue, at least through the
admin console, and I don't have shell access since it is the Platform, not glusterfs.
With OS X I can provide more info, but I will have to bring the cluster back
up. I broke the cluster trying to test the restor
Paul Kölle wrote:
Hmm, if I read this right, there is no difference in blocksize:
(Olivier) Ubuntu (corrected):
# dd if=/dev/zero of=/mnt/file_test count=262144 bs=1024
262144+0 records in
262144+0 records out
268435456 bytes (268 MB) copied, 5,56942 s, 48,2 MB/s
(Olivier) Lenny:
lenny:# dd i
Hi gluster developers,
I have encountered a situation where a file cannot be found,
but it does exist and is on the correct node. The file can
be stat()-ed but not opened. After a Gluster restart the file
is accessible again.
Glusterfs: 3.0.3 with altered hashing function (by me).
== On the
On Wednesday 31 March 2010, Jeremy Enos wrote:
> That too, is what I'm guessing is happening. Besides official
> confirmation of what's going on, I'm mainly just after an answer as to
> if there is a way to solve it, and make a locally mounted single disk
> Gluster fs perform even close to as well
On 31/03/2010 06:14, Tom Lanyon wrote:
On 31/03/2010, at 2:36 PM, Raghavendra G wrote:
Current design of write-behind acknowledges writes (to applications) even
when they've not hit the disk. Can you please explain how this design is
different (if it is different) from the idea you've expla
That too, is what I'm guessing is happening. Besides official
confirmation of what's going on, I'm mainly just after an answer as to
if there is a way to solve it, and make a locally mounted single disk
Gluster fs perform even close to as well as a single local disk
directly, including for cac
Thanks for the links- those are interesting numbers. Looks like small
block i/o performance stinks there relative to NFS too. Given the
performance I'm seeing, I doubt much has changed, but it certainly would
be interesting to see the tests re-run.
Jeremy
On 3/29/2010 1:47 PM, Ian Roger
I did supply more context than you mention- it was a tar file of X MB w/
Y files, and compares to disk w/ cache or w/o cache by Z seconds. This
should give some idea of the task I was attempting and even a basis for
replication of the test.
Jeremy
On 3/23/2010 12:37 PM, Ian Rogers wrote:
>
> We are trying to find a way to force a set of subdirectories to be
> written to a selected brick. I see xlators/cluster/map in the source,
> and am wondering if it would do what we want, but I can't find any
> documentation for it.
Yes! cluster/map can do it for you.
> It appears as thoug
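A cluster/map volfile fragment for pinning directories to bricks might look roughly like the sketch below. Note this is an assumption-heavy sketch: the `map-directory` option name and its value syntax are my guesses from memory, so check xlators/cluster/map in the 3.0 source for the real option key; the brick and path names are placeholders.

```
# Hypothetical cluster/map fragment; option name/syntax unverified
volume map0
  type cluster/map
  # assumed option: directory-to-subvolume mapping
  option map-directory /projects/a:brick-a,/projects/b:brick-b
  subvolumes brick-a brick-b
end-volume
```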
I too am searching for a way to get Gluster to use a cache effectively.
So far, no performance translator, variation of translator parameters,
or placement of translators on the server or client side has made any
difference. :(
Jeremy
On 3/31/2010 3:23 AM, Olivier Le Cam wrote:
Hi -
* Can yo
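For completeness, a typical client-side caching stack from that era layers io-cache over write-behind, as in the sketch below. The volume names and option values are placeholders, and `writebehind` is assumed to be a write-behind volume defined earlier in the same volfile; verify the option names against your release before using this.

```
# Sketch: io-cache layered over an assumed "writebehind" volume
volume iocache
  type performance/io-cache
  option cache-size 256MB   # illustrative value
  option cache-timeout 1    # seconds before cached data is revalidated
  subvolumes writebehind
end-volume
```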
Hi -
* Can you test dd performance with only write-behind translator (along with
protocol/client) in client volume configuration?
That makes no difference with either only the write-behind translator or all
of the default translators configured by genvol. Without it, results are
about 10 times worse.
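The dd comparison being discussed can be reproduced along these lines: identical total bytes written at a small and a large block size, so any per-call latency (which write-behind is meant to hide) shows up in the small-block run. The /tmp path is a stand-in so the sketch runs anywhere; point OUT at the glusterfs mount (e.g. a file under /mnt) to measure the real effect.

```shell
# Placeholder target; substitute a path on the glusterfs mount to test it.
OUT=/tmp/dd_test_file

# 4 MB written as 4096 x 1 KB writes (small-block case)
dd if=/dev/zero of="$OUT" bs=1024 count=4096 2>&1 | tail -n 1

# Same 4 MB written as 4 x 1 MB writes (large-block case)
dd if=/dev/zero of="$OUT" bs=1M count=4 2>&1 | tail -n 1

rm -f "$OUT"
```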