On Jan 23, 2012, at 4:45 PM, Merlin Moncure wrote:

> On Mon, Jan 23, 2012 at 2:00 PM, A.M. <age...@themactionfaction.com> wrote:
>> One simple way clients could detect the binary encoding at startup would be 
>> to pass known test parameters and match against the returned values. If the 
>> client cannot match the response, then it should choose the text 
>> representation.
>> 
>> Alternatively, the 16-bit int in the Bind and RowDescription messages could 
>> be incremented to indicate a new format and then clients can specify the 
>> highest "version" of the binary format which they support.
> 
> Prefer the version.  But why send this over and over with each bind?
> Wouldn't you negotiate that when connecting? Most likely, optionally,
> doing as much as you can from the server version?  Personally I'm not
> really enthusiastic about a solution that adds a non-avoidable penalty
> to all queries.
> 
> Also, a small nit: this problem is not specific to binary formats.
> Text formats can and do change, albeit rarely, with predictable
> headaches for the client.  I see no reason to deal with text/binary
> differently.  The only difference between text/binary wire formats in
> my eyes are that the text formats are documented.
> 
> merlin


In terms of backwards compatibility (to support the widest range of clients), 
wouldn't it make sense to freeze each format option? That way, an updated text 
version would also receive a new int16 format identifier, and the client would 
simply pass its preferred format. This could also allow for multiple in-flight 
formats; for example, if a client anticipates a large inbound bytea column, it 
could specify format X, which indicates that the server should gzip the result 
before sending. That same format may not be preferable on a different request.

Cheers,
M




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers