Re: [HACKERS] V3 protocol gets out of sync on messages that cause allocation failures

2004-10-18 Thread Tom Lane
I wrote:
> Yeah.  The intent of the protocol design was that the recipient could
> skip over the correct number of bytes even if it didn't have room to
> buffer them, but the memory allocation mechanism in the backend makes
> it difficult to actually do that.  Now that we have PG_TRY, though,
> it might not be out of reach to do it right.

And indeed it wasn't.  Patch committed.
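
Roughly, the idea is for pq_getmessage() to discard the message body
before re-raising the error. A from-memory sketch only -- the discard
helper's name and the error wording here are approximations, not the
exact committed code (see src/backend/libpq/pqcomm.c for that):

/* In pq_getmessage(), after the length word has been read into len: */
if (len > 0)
{
    /*
     * Allocate space for the message body.  If that fails, still eat
     * the right number of bytes off the wire so the protocol stream
     * stays in sync, then re-raise the out-of-memory error.
     */
    PG_TRY();
    {
        enlargeStringInfo(s, len);
    }
    PG_CATCH();
    {
        if (pq_discardbytes(len) == EOF)
            ereport(COMMERROR,
                    (errcode(ERRCODE_PROTOCOL_VIOLATION),
                     errmsg("incomplete message from client")));
        PG_RE_THROW();
    }
    PG_END_TRY();

    /* Normal path: read the body into the now-large-enough buffer */
    if (pq_getbytes(s->data, len) == EOF)
        return EOF;
    s->len = len;
    s->data[len] = '\0';
}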

regards, tom lane


Re: [HACKERS] V3 protocol gets out of sync on messages that cause allocation failures

2004-10-18 Thread Tom Lane
Oliver Jowett <[EMAIL PROTECTED]> writes:
> What appears to be happening is that the backend goes into error 
> recovery as soon as the allocation fails (just after reading the message 
> length), and never does the read() of the body of the Bind message. So 
> it falls out of sync, and tries to interpret the guts of the Bind as a 
> new message. Bad server, no biscuit.

Yeah.  The intent of the protocol design was that the recipient could
skip over the correct number of bytes even if it didn't have room to
buffer them, but the memory allocation mechanism in the backend makes
it difficult to actually do that.  Now that we have PG_TRY, though,
it might not be out of reach to do it right.  Something like

PG_TRY();
{
    buf = palloc(N);
}
PG_CATCH();
{
    /* read and discard N bytes from the client */
    /* then re-raise the out-of-memory error */
    PG_RE_THROW();
}
PG_END_TRY();
/* normal read path */

I'm not sure how many places would need to be touched to make this
actually happen; if memory serves, the "read a packet" code extends
over multiple logical levels.

regards, tom lane


[HACKERS] V3 protocol gets out of sync on messages that cause allocation failures

2004-10-17 Thread Oliver Jowett
(Tom: this is not as severe a problem as I first thought)
If a client sends a V3 message that is sufficiently large to cause a 
memory allocation failure on the backend when allocating space to read 
the message, the backend gets out of sync with the protocol stream.

For example, sending this:
 FE=> Parse(stmt=null,query="SELECT $1",oids={17})
 FE=> Bind(stmt=null,portal=null,$1=<>)
provokes this:
ERROR:  out of memory
DETAIL:  Failed on request of size 1073741823.
FATAL:  invalid frontend message type 0
What appears to be happening is that the backend goes into error 
recovery as soon as the allocation fails (just after reading the message 
length), and never does the read() of the body of the Bind message. So 
it falls out of sync, and tries to interpret the guts of the Bind as a 
new message. Bad server, no biscuit.
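
To spell out the failure mode, here is a purely illustrative pseudo-C
sketch of the read loop (not the actual backend code; read_fully() and
dispatch_message() are made-up names):

for (;;)
{
    char    msgtype;
    int32   len;

    read_fully(sock, &msgtype, 1);      /* 'B' for Bind */
    read_fully(sock, &len, 4);
    len = ntohl(len) - 4;               /* huge value claimed by the client */

    char   *buf = palloc(len);          /* fails: ERROR, out of memory */

    /*
     * Error recovery longjmps back to the top of the loop at this point,
     * so the len bytes of the Bind body are never read.  On the next
     * iteration the first byte of that body is interpreted as a message
     * type: "FATAL: invalid frontend message type 0".
     */
    read_fully(sock, buf, len);
    dispatch_message(msgtype, buf, len);
}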

I was concerned that this was exploitable in applications that pass 
hostile binary parameters as protocol-level parameters, but it doesn't 
seem possible as the bytes at the start of a Bind are not under the 
control of the attacker and don't form a valid message.

The CopyData message could probably be exploited, but it seems unlikely 
that (security-conscious) applications will pass hostile data directly 
in a CopyData message.

I haven't looked at a fix for this in detail (I'm not really familiar 
with the backend's error-recovery path), but it seems like one easy 
option is to treat all errors that occur while a message is still 
being read as FATAL?
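
Purely as a hypothetical sketch (variable names illustrative, and the
real read path is spread over several functions), that would amount to
turning any error raised mid-message into a connection-terminating one:

PG_TRY();
{
    pq_getmessage(&input_message, 0);   /* length word + body */
}
PG_CATCH();
{
    /*
     * We no longer know where the next message boundary is, so drop
     * the connection rather than trying to resynchronize.
     */
    ereport(FATAL,
            (errcode(ERRCODE_PROTOCOL_VIOLATION),
             errmsg("lost protocol sync while reading message body")));
}
PG_END_TRY();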

-O