On Wed, May 24, 2017 at 04:09:35PM +0200, Michal Privoznik wrote:
On 05/24/2017 02:42 PM, Richard W.M. Jones wrote:
On Tue, May 23, 2017 at 05:07:40PM +0200, Michal Privoznik wrote:
Because:

https://www.redhat.com/archives/libvir-list/2017-May/msg00088.html

I don't think this is a reason at all.

Libguestfs uses an RPC system which was modelled on the libvirt one,
and has exactly the same problem with message size limits, except
smaller -- 4MB and we've never had to increase it.

So you're basically doing what I described in point a): transforming one
problem into another one, in this case a limit on the maximum number of 4MB
messages.

We get around this by batching operations over messages as necessary
(e.g. [1]).  This adds a little complexity in the implementation of the
API, but the point is that the complexity is entirely hidden from users
of the APIs.
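
For illustration, here is a minimal, hypothetical sketch of that kind of
batching: a large buffer is split into messages that each fit a fixed
per-message limit.  The function names, the stub sender and the 4MB constant
are only assumptions for the example, not the actual libguestfs or libvirt
RPC code.

#include <stdio.h>
#include <stddef.h>

#define MAX_MSG_SIZE (4 * 1024 * 1024)   /* assumed 4MB per-message limit */

/* Stub standing in for whatever the RPC layer uses to send one framed
 * message; here it only reports the chunk size. */
static int
send_one_message(const char *buf, size_t len)
{
    printf("sending one message of %zu bytes\n", len);
    (void)buf;
    return 0;
}

/* Split 'total' bytes of 'data' into as many messages as needed. */
static int
send_in_chunks(const char *data, size_t total)
{
    size_t offset = 0;

    while (offset < total) {
        size_t chunk = total - offset;
        if (chunk > MAX_MSG_SIZE)
            chunk = MAX_MSG_SIZE;

        if (send_one_message(data + offset, chunk) < 0)
            return -1;
        offset += chunk;
    }
    return 0;
}

int
main(void)
{
    static char data[10 * 1024 * 1024];   /* 10MB reply -> three messages */
    return send_in_chunks(data, sizeof data);
}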

Exactly. A little complexity, in your case. In our case it would be
slightly more complex IMO (although I've never tried to write the code,
so I can't say for sure). But more importantly, why even bother when we
can just raise the message size limit?
The limits are there so that if one side starts sending malicious
packets it won't eat all the memory on the other side. Well, what if the
attacker is slightly more ingenious and sends N messages that each fit
the size limit for a single message? I don't really see a difference
between raising the limit for one message and splitting the data into
multiple messages.
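
For illustration: whichever way the sender splits the data, the receiving
side only stays bounded if it also enforces a cumulative cap across the
whole exchange.  A rough, hypothetical sketch of such a check (the struct,
function and cap are illustrative, not real libvirt code):

#include <errno.h>
#include <stddef.h>

#define MAX_TOTAL_SIZE (256UL * 1024 * 1024)   /* assumed overall cap */

struct stream_state {
    size_t received;          /* bytes accepted so far for this one call */
};

static int
accept_chunk(struct stream_state *st, size_t len)
{
    /* Reject the stream once the cumulative size would exceed the cap,
     * regardless of how the sender split it into messages. */
    if (len > MAX_TOTAL_SIZE - st->received)
        return -ENOMEM;
    st->received += len;
    return 0;
}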


As I understand it, the only direction this would go is daemon -> client,
and the daemon cannot transfer more messages than it has data for.  The only
thing we would need to make sure doesn't happen is the daemon keeping the
allocated data while the client requests yet more data to be allocated.
Basically, error out if the client calls yet another API that uses this
mechanism *and* there is still some data allocated and not yet read.
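
A rough sketch of that guard (the names here are made up for illustration,
not the actual libvirt daemon code): the daemon rejects a new request of
this kind while a previous reply is still allocated and unread.

#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

struct client_conn {
    void *pending_data;        /* reply buffer the client has not drained yet */
    size_t pending_len;
};

/* Start serving a large reply; refuse if the previous one is still pending. */
static int
start_bulk_reply(struct client_conn *conn, void *data, size_t len)
{
    if (conn->pending_data != NULL)
        return -EBUSY;         /* client must finish reading the old reply first */

    conn->pending_data = data;
    conn->pending_len = len;
    return 0;
}

/* Called once the client has read the whole reply. */
static void
finish_bulk_reply(struct client_conn *conn)
{
    free(conn->pending_data);
    conn->pending_data = NULL;
    conn->pending_len = 0;
}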

Michal


