I'm reading the protocol spec, and I'm really having a hard time coming up with a plausible scenario that would cause this problem.

Within a paired readObject and writeObject, it's easy to cause this. But it's very hard for an object written by Foo.writeObject() to be read by Bar.readObject(), because of TC_ENDBLOCKDATA enforcement at the boundary. Every time application code gets to write stuff to the stream, the spec puts a TC_ENDBLOCKDATA marker at the end to catch such errors.
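To illustrate what I mean (the Mismatched class below is a made-up sketch, not code from Jenkins): a readObject that under-reads its own writeObject's data is silently contained at the TC_ENDBLOCKDATA marker, and over-reading fails with an exception instead of consuming bytes that belong to the next object.

    import java.io.*;

    class Mismatched implements Serializable {
        private static final long serialVersionUID = 1L;

        private void writeObject(ObjectOutputStream out) throws IOException {
            out.defaultWriteObject();
            out.writeInt(1);
            out.writeInt(2);   // extra int that readObject never consumes
        }

        private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            in.readInt();      // reads only one of the two ints
            // The unread int is skipped when the stream hits TC_ENDBLOCKDATA,
            // so the next object in the stream still deserializes correctly.
            // Trying to read a third int here would fail with an exception
            // rather than pulling in bytes that belong to the next object.
        }
    }

    public class BlockDataDemo {
        public static void main(String[] args) throws Exception {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(buf)) {
                oos.writeObject(new Mismatched());
                oos.writeObject("next object");      // written right after
            }
            try (ObjectInputStream ois = new ObjectInputStream(
                    new ByteArrayInputStream(buf.toByteArray()))) {
                ois.readObject();                     // Mismatched, under-reads
                System.out.println(ois.readObject()); // still prints "next object"
            }
        }
    }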

The other variable-length places (such as how field values for an object are read) are dictated by the class descriptor that the writer sends, so the reader can't get out of sync there.
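For illustration (a hypothetical Point class), the descriptor the writer sends enumerates each serializable field and its type, and that list is all the reader uses to know how many bytes of field data follow:

    import java.io.*;

    class Point implements Serializable {
        private static final long serialVersionUID = 1L;
        int x;
        double y;
    }

    public class DescriptorDemo {
        public static void main(String[] args) {
            ObjectStreamClass desc = ObjectStreamClass.lookup(Point.class);
            for (ObjectStreamField f : desc.getFields()) {
                // Prints "x int" and "y double"; the reader derives the
                // field-data length (4 + 8 bytes here) from this descriptor,
                // so it cannot get out of sync while reading field values.
                System.out.println(f.getName() + " " + f.getType().getSimpleName());
            }
        }
    }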

Even arbitrary byte[] data gets enveloped in TC_BLOCKDATA, so you can't make the reader get out of sync by reading more bytes from read(byte[]) than you wrote. Ditto for primitive values.
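A minimal sketch of that (again a made-up RawBytes class): raw bytes written from writeObject() are framed as TC_BLOCKDATA, so a readObject() that asks for more bytes than were written only gets what is inside the block, never the framing of the next object.

    import java.io.*;

    class RawBytes implements Serializable {
        private static final long serialVersionUID = 1L;

        private void writeObject(ObjectOutputStream out) throws IOException {
            out.defaultWriteObject();
            out.write(new byte[]{1, 2, 3, 4});           // 4 bytes of raw data
        }

        private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            byte[] buf = new byte[100];
            int n = in.read(buf);                        // asks for up to 100 bytes
            System.out.println("got " + n + " bytes");   // prints 4 (or fewer), never 100
            // A further read() returns -1 at the block boundary instead of
            // spilling into the next object's bytes.
        }
    }

    public class OverReadDemo {
        public static void main(String[] args) throws Exception {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(buf)) {
                oos.writeObject(new RawBytes());
                oos.writeObject("still intact");
            }
            try (ObjectInputStream ois = new ObjectInputStream(
                    new ByteArrayInputStream(buf.toByteArray()))) {
                ois.readObject();                         // triggers readObject above
                System.out.println(ois.readObject());     // prints "still intact"
            }
        }
    }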

So unless the sequence of bytes is tampered with after it leaves ObjectOutputStream and before it is read by ObjectInputStream, the framing logic feels ironclad. And such stream corruption would normally occur at a boundary between data bursts, not in the middle like this.

I'm filing this ticket here to see if anyone else has seen the same problem.
