On 17.06.2014 13:46, Paolo Bonzini wrote:
> On 17/06/2014 13:37, Peter Lieven wrote:
>> On 17.06.2014 13:15, Paolo Bonzini wrote:
>>> On 17/06/2014 08:14, Peter Lieven wrote:


>>>> BTW, while debugging a case with a bigger storage supplier I found
>>>> that open-iscsi seems to exhibit exactly this non-deterministic behaviour.
>>>> I have a 3TB LUN. If I access sectors below 2TB it uses READ10/WRITE10,
>>>> and if I go beyond 2TB it switches to READ16/WRITE16.

>>> Isn't that exactly what your latest patch does for >64K sector writes? :)

>> Not exactly, we choose the default by checking the LUN size: 10-byte CDBs
>> for LUNs below 2TB and 16-byte CDBs otherwise.

> Yeah, I meant introducing the non-determinism.

>> My latest patch makes an exception if a request is bigger than 64K sectors
>> and switches to 16-byte requests. Such requests would otherwise end in an
>> I/O error.
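
For illustration, the combined rule could look roughly like this; a minimal
sketch with a hypothetical helper name, not the actual block/iscsi.c code:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper: decide whether a write needs a 16-byte CDB.
 * The default depends only on the LUN capacity, but a request longer
 * than 65535 sectors cannot be encoded in WRITE10's 16-bit TRANSFER
 * LENGTH field and must be promoted to WRITE16. */
static bool iscsi_use_write16(uint64_t lun_sectors, uint32_t xfer_sectors)
{
    if (lun_sectors > (1ULL << 32)) {
        /* Some LBA on this LUN would not fit into WRITE10's 32-bit LBA
         * field, i.e. the LUN is larger than 2TB with 512-byte sectors. */
        return true;
    }
    if (xfer_sectors > UINT16_MAX) {
        /* The request itself would overflow WRITE10's length field. */
        return true;
    }
    return false;
}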

> It could also be split at the block layer, like we do for unmap. I think
> there's also a maximum transfer size somewhere in the VPD; we could switch
> to READ16/WRITE16 if it is >64K sectors.
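
The limit meant here is presumably the MAXIMUM TRANSFER LENGTH field in the
Block Limits VPD page (page code 0xb0). A minimal sketch of extracting it,
assuming page points at the raw data returned by INQUIRY with EVPD=1:

#include <stdint.h>

/* MAXIMUM TRANSFER LENGTH is a big-endian 32-bit field at bytes 8..11
 * of the Block Limits VPD page, counted in logical blocks; a value of
 * zero means the device does not report a limit. */
static uint32_t vpd_b0_max_xfer_len(const uint8_t *page)
{
    return ((uint32_t)page[8]  << 24) |
           ((uint32_t)page[9]  << 16) |
           ((uint32_t)page[10] <<  8) |
            (uint32_t)page[11];
}

If that field is valid and reports >64K sectors, switching the default to
READ16/WRITE16 up front would avoid the ambiguity.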

It seems there might be a real-world case where Linux issues >32MB write
requests. Maybe someone familiar with btrfs can advise.
I see iSCSI Protocol Errors in my logs:

Sep  1 10:10:14 libiscsi:0 PDU header: 01 a1 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 07 06 8f 30 00 00 00 00 06 00 00 00 0a 2a 00 01 09 9e 50 00 47 98 00 00 00 00 00 00 00 [XXX]
Sep  1 10:10:14 qemu-2.0.0: iSCSI: Failed to write10 data to iSCSI lun. Request was rejected with reason: 0x04 (Protocol Error)

Looking at the header, the xferlen in the iSCSI PDU is 110047232 bytes, which
is 214936 sectors of 512 bytes. 214936 % 65536 = 18328 (0x4798), which is
exactly the number of blocks in the SCSI WRITE10 CDB above.
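
The truncation is easy to reproduce in a few lines of C (a standalone demo
using the numbers from the PDU above, not QEMU code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t xferlen_bytes = 0x068f3000;      /* 110047232, from the PDU */
    uint32_t sectors = xferlen_bytes / 512;   /* 214936 */
    uint16_t write10_len = (uint16_t)sectors; /* wraps to 214936 % 65536 */

    printf("sectors=%u write10_len=%u (0x%04x)\n",
           sectors, write10_len, write10_len);
    return 0;
}

This prints sectors=214936 write10_len=18328 (0x4798), and 47 98 is exactly
what shows up in the CDB above.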

Can someone advise whether this is something btrfs can cause, or whether I
have to blame the customer for issuing very big write requests with Direct I/O?

The user sees something like this in the log:
[34640.489284] BTRFS: bdev /dev/vda2 errs: wr 8232, rd 0, flush 0, corrupt 0, gen 0
[34640.490379] end_request: I/O error, dev vda, sector 17446880
[34640.491251] end_request: I/O error, dev vda, sector 5150144
[34640.491290] end_request: I/O error, dev vda, sector 17472080
[34640.492201] end_request: I/O error, dev vda, sector 17523488
[34640.492201] end_request: I/O error, dev vda, sector 17536592
[34640.492201] end_request: I/O error, dev vda, sector 17599088
[34640.492201] end_request: I/O error, dev vda, sector 17601104
[34640.685611] end_request: I/O error, dev vda, sector 15495456
[34640.685650] end_request: I/O error, dev vda, sector 7138216

Thanks,
Peter
