> From: "[email protected]" <[email protected]>
> The current default behaviour of Qt sockets is that they allocate an
> unbounded amount of memory if the application is not reading all data
> from the socket but the event loop is running.
> In the worst case, this causes memory allocation failure, resulting in a
> crash or app exit due to a bad_alloc exception.
> 
> We could change the default read buffer size from 0 (unbounded) to a
> sensible default value, e.g. 64k (to be benchmarked).
> Applications wanting the old behaviour could explicitly call
> setReadBufferSize(0), as in the sketch below.
> Applications that already set a read buffer size would be unaffected.
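> 
> For illustration, the opt-out would look roughly like this (untested
> sketch; 64k stands in for whatever value the benchmarking settles on,
> and process() is a placeholder for the application's handler):
> 
>     QTcpSocket *socket = new QTcpSocket;
> 
>     // Keep the old unbounded behaviour explicitly:
>     socket->setReadBufferSize(0);
> 
>     // ...or accept a cap and drain the buffer as data arrives, so the
>     // peer is not stalled indefinitely:
>     socket->setReadBufferSize(64 * 1024);
>     QObject::connect(socket, &QIODevice::readyRead, [socket]() {
>         process(socket->readAll());
>     });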
> 
> The same change would be required at the QNetworkAccessManager level, as
> there is no point applying flow control at the socket and having
> unbounded memory allocations in the QNetworkReply.
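> 
> QNetworkReply already exposes the same knob, so a capped reply would
> look something like this (sketch; nam is an existing
> QNetworkAccessManager and url a QUrl; once the buffer is full the reply
> stops reading from the network):
> 
>     QNetworkReply *reply = nam->get(QNetworkRequest(url));
>     reply->setReadBufferSize(64 * 1024);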
> 
> This is a behavioural change that would certainly cause regressions.
> e.g. any application that waits for the QNetworkReply::finished() signal
> and then calls readAll() would break if the object being downloaded is
> larger than the buffer size: the transfer stalls once the buffer fills,
> and finished() is never emitted.
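> 
> The robust pattern, which works under any buffer cap, is to drain in
> readyRead() and treat finished() only as end-of-stream. Roughly (data
> being an application-side QByteArray):
> 
>     QObject::connect(reply, &QIODevice::readyRead, [&]() {
>         data.append(reply->readAll()); // frees buffer space immediately
>     });
>     QObject::connect(reply, &QNetworkReply::finished, [&]() {
>         // data now holds the complete object
>     });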
> 
> On the other hand, we can't enable outbound flow control in sockets by
> default (we don't want write() to block).
> Applications need to use the bytesWritten() signal, as sketched below.
> QNetworkAccessManager already implements outbound flow control for
> uploads.
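> 
> The bytesWritten()-based pattern for writing from a QIODevice would be
> roughly this (sketch; source and the chunk size are assumptions):
> 
>     const qint64 CHUNK = 64 * 1024;
>     QObject::connect(socket, &QIODevice::bytesWritten, [&](qint64) {
>         // Top up only when the send queue has drained, so memory use
>         // stays bounded instead of queueing the whole payload at once.
>         if (socket->bytesToWrite() == 0 && !source.atEnd())
>             socket->write(source.read(CHUNK));
>     });
>     socket->write(source.read(CHUNK)); // prime the pump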
> 
> Is making applications safe against this kind of overflow by default
> worth the compatibility breaks?

Not sure. Is it a big problem in practice? Or is it better to just 
continue as is, and let the applications that do have a problem set the 
buffer size to something reasonable for them instead?

I'd probably suggest that we instead improve the diagnostics on that 
worst-case failure, to help devs fix the problems in their programs.
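
Something like a loud warning once the buffer passes a sanity threshold 
would go a long way. An application can approximate that today with a 
watchdog (sketch; the threshold and interval are arbitrary):

    QTimer *watchdog = new QTimer(socket);
    QObject::connect(watchdog, &QTimer::timeout, [socket]() {
        // Unread data piling up usually means a missing readyRead handler.
        if (socket->bytesAvailable() > 64 * 1024 * 1024)
            qWarning() << "socket has" << socket->bytesAvailable()
                       << "unread bytes; is anything draining it?";
    });
    watchdog->start(5000);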

$0.02

Ben