#3561: massively parallel "recheck" operations perform far below system optimum
----------------------------+--------------------------------------
 Reporter:  brainchild      |       Type:  bug
   Status:  new             |   Priority:  minor
Milestone:  needs verified  |  Component:  Core
  Version:  2.0.5           |   Keywords:  recheck, batch, parallel
----------------------------+--------------------------------------
 Presently, when a large number of torrents are submitted for "recheck" at
 once, the application dispatches the operation immediately for a very
 large number of them (I have observed more than 60 running concurrently).
 Such a high degree of concurrency is likely to create I/O bottlenecks on
 most consumer hardware; a limit of 4, 3, or even 2 simultaneous
 operations would likely be closer to optimal for throughput. With a
 relatively inexpensive checksum algorithm, I/O rather than CPU is likely
 to be the limiting factor, in which case each additional file dispatched
 for concurrent processing, beyond a small number, tends to degrade
 performance.
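 As a rough illustration of the proposed change, rechecks could be fed
 through a small fixed-size worker pool rather than dispatched all at
 once. This is only a sketch: `MAX_CONCURRENT_RECHECKS` and
 `recheck_torrent()` are hypothetical names, not Deluge's actual API, and
 the hashing is a stand-in for the real piece-by-piece verification.

```python
from concurrent.futures import ThreadPoolExecutor
import hashlib

# Hypothetical limit; not an existing Deluge setting.
MAX_CONCURRENT_RECHECKS = 3

def recheck_torrent(path):
    # Stand-in for the real piece-by-piece hash verification:
    # read the file in 1 MiB chunks and hash it.
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return path, h.hexdigest()

def recheck_all(paths):
    # Only MAX_CONCURRENT_RECHECKS files are read at any one time;
    # the remainder wait in the executor's queue, which keeps disk
    # access sequential enough for a spinning magnetic array.
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_RECHECKS) as pool:
        return list(pool.map(recheck_torrent, paths))
```

 The point is simply that the queue, not the disk scheduler, absorbs the
 backlog when 60+ torrents are submitted together.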

 As a point of reference, I have tested on a consumer storage device that
 is normally capable of read speeds exceeding 200 Mb/s from its redundant
 magnetic array, but during these batch operations (performed locally) it
 has generally been limited to less than a tenth of that rate, with
 negligible CPU usage and extremely high system load.

 For solid-state storage, results may be different, but are unlikely to be
 helped by such an enormous number of concurrent operations.

--
Ticket URL: <https://dev.deluge-torrent.org/ticket/3561>
Deluge <https://deluge-torrent.org/>
Deluge Project
