> +        ChunkedInputStreamEnumeration(InputStream inputStream, int chunkedBlockSize) {
> +            this.inputStream = new BufferedInputStream(inputStream, chunkedBlockSize);
> +            buffer = new byte[chunkedBlockSize];
> +            lastChunked = false;
> +        }
> +
> +        @Override
> +        public boolean hasMoreElements() {
> +            return !lastChunked;
> +        }
> +
> +        @Override
> +        public InputStream nextElement() {
> +            int bytesRead;
> +            try {
> +                bytesRead = inputStream.read(buffer, 0, buffer.length);

bytesRead can be less than buffer.length!

This caused problems in our tests, where the underlying inputStream only delivers 8k blocks: the pre-calculated content length (based on 64k chunks) was smaller than the content length that was actually sent (based on the 8k chunks read from the InputStream).

That later triggers HttpURLConnection's "too many bytes written" exception.

We fixed this on our side with a wrapping InputStream that always returns the requested length when read(buf, off, len) is invoked, performing additional reads on the underlying inputStream as needed (roughly as sketched below).

This way, the expected 64k blocks were sent and the pre-calculated content length matched the actual content length.
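
For reference, our wrapper does roughly the following (a minimal sketch; the class and variable names are ours and not part of this PR): it extends FilterInputStream and keeps reading from the delegate until the requested length is available or the stream ends.

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

class FullyReadingInputStream extends FilterInputStream {

    FullyReadingInputStream(InputStream delegate) {
        super(delegate);
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int total = 0;
        while (total < len) {
            // 'in' is the delegate stream held by FilterInputStream
            int n = in.read(buf, off + total, len - total);
            if (n == -1) {
                // end of stream: return what we have, or -1 if nothing was read at all
                return total == 0 ? -1 : total;
            }
            total += n;
        }
        return total;
    }
}
```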

Maybe a loop should be added here so that exactly buffer.length bytes (i.e. chunkedBlockSize) are read from the inputStream, like we did in our InputStream wrapper; see the sketch below.
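
Concretely, the loop could look like this self-contained sketch (untested against this PR; the helper name is just illustrative). nextElement() would then use something like readFully(inputStream, buffer) instead of the single read(buffer, 0, buffer.length) call:

```java
import java.io.IOException;
import java.io.InputStream;

final class ChunkReads {

    /**
     * Reads from 'in' until 'buffer' is full or the stream ends.
     * Returns the number of bytes read, or -1 if the stream was already at EOF.
     */
    static int readFully(InputStream in, byte[] buffer) throws IOException {
        int total = 0;
        while (total < buffer.length) {
            int n = in.read(buffer, total, buffer.length - total);
            if (n == -1) {
                return total == 0 ? -1 : total;
            }
            total += n;
        }
        return total;
    }
}
```

If I remember correctly, Guava's ByteStreams.read(InputStream, byte[], int, int) has the same "read until len bytes or EOF" semantics, so that might be usable directly since jclouds already depends on Guava.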



