I'm not sure if this is related, but it is worth noting: when write-behind 
sees a short write, it sometimes ignores the ENOSPC or EDQUOT it received 
from the brick and returns a generic EIO to the application instead.
https://bugzilla.redhat.com/show_bug.cgi?id=986812

Regards,
 -Prashanth Pai

----- Original Message -----
From: "Raghavendra G" <raghaven...@gluster.com>
To: "Emmanuel Dreyfus" <m...@netbsd.org>
Cc: "Gluster Devel" <gluster-devel@gluster.org>
Sent: Wednesday, October 15, 2014 11:50:36 AM
Subject: Re: [Gluster-devel] quota

On Tue, Oct 14, 2014 at 9:18 AM, Emmanuel Dreyfus <m...@netbsd.org> wrote: 


Vijay Bellur <vbel...@redhat.com> wrote: 

> You would need to set features.soft-timeout and features.hard-timeout 
> values to 0 when testing with lower values of directory quota. 

It works more as expected this way, but there are still oddities: for 
instance, once the quota is reached, I can still append small chunks to a 
file, and do so many times. 

A few debug printfs tell me this is because of write-behind: the writing 
process gets success, then glusterfs attempts to write - and fails. This 
means we silently discard data, which does not look nice. Is this the 
expected behavior, or is it a NetBSD bug? 
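
For illustration, here is a minimal sketch (not from the original thread; the 
path, chunk size and loop count are made up) of the kind of writer that shows 
this behaviour. With write-behind enabled, each small append past the quota can 
still return success to the caller, because the write is only cached; the 
brick's EDQUOT arrives later and, if nothing checks for it, the data is 
silently dropped.

/* Minimal sketch: append small chunks to a file on a quota-limited
 * GlusterFS mount. The path and sizes are hypothetical. With write-behind
 * enabled, write() below can keep returning success even after the quota
 * is exceeded, because the data is only cached client-side. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char buf[128] = "x";          /* small chunk */
    int fd = open("/mnt/gluster/testfile", O_WRONLY | O_CREAT | O_APPEND, 0644);
    int i;

    if (fd < 0) {
        perror("open");
        return 1;
    }
    for (i = 0; i < 1000; i++) {
        ssize_t n = write(fd, buf, sizeof(buf));
        printf("write %d returned %zd\n", i, n);   /* success reported here */
    }
    close(fd);    /* return value ignored: any deferred EDQUOT is lost */
    return 0;
}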

It happens in other environments too: applications receive success for writes 
that never made it to the brick. Write-behind propagates the failure of a 
cached write to the application in the first file operation issued after the 
write to the brick fails. In the worst case, the application only sees a 
non-zero return value from close() of the fd. However, not all applications 
check the return value of close, so write failures can go unnoticed by the 
application. 

This behaviour is POSIX-conformant. From man 2 close: 

<man 2 close> 

NOTES 
Not checking the return value of close() is a common but nevertheless serious 
programming error. It is quite possible that errors on a previous write(2) 
operation are first reported at the final close(). Not checking the return 
value when closing the file may lead to silent loss of data. This can 
especially be observed with NFS and with disk quota. 

A successful close does not guarantee that the data has been successfully 
saved to disk, as the kernel defers writes. It is not common for a file system 
to flush the buffers when the stream is closed. If you need to be sure that 
the data is physically stored use fsync(2). (It will depend on the disk 
hardware at this point.) 

</man 2 close> 
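
To make the man page advice concrete, here is a small sketch (hypothetical 
path, error handling reduced to perror for brevity) of a writer that does 
catch the deferred failure: whatever write-behind held back is reported at 
fsync() or, at the latest, at close(). Note that, per the write-behind issue 
mentioned at the top of this thread, the errno seen there may be a generic 
EIO rather than EDQUOT/ENOSPC.

/* Sketch: check fsync() and close() so that errors deferred by
 * write-behind (EDQUOT, ENOSPC, or possibly EIO) are actually seen. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char data[] = "some payload\n";
    int fd = open("/mnt/gluster/testfile", O_WRONLY | O_CREAT | O_APPEND, 0644);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (write(fd, data, sizeof(data) - 1) < 0)
        perror("write");     /* immediate failure, if any */

    if (fsync(fd) < 0)
        perror("fsync");     /* cached writes flushed; deferred errors surface here */

    if (close(fd) < 0)
        perror("close");     /* ...or, at the latest, here */

    return 0;
}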




I assume it is expected but undesirable behavior: couldn't we check the 
quota space left, and avoid write-behind if we are going to hit the 
barrier? 

-- 
Emmanuel Dreyfus 
http://hcpnet.free.fr/pubz 
m...@netbsd.org 



-- 
Raghavendra G 

_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
