[ 
http://issues.apache.org/jira/browse/AXIS-2084?page=comments#action_12323097 ] 

Robert Hook commented on AXIS-2084:
-----------------------------------

Poot. 1.2 doesn't seem to be readily available anywhere. However, I grabbed the 
current build (labelled as 1.3) and the problem goes away. We'll see what else 
goes wrong.

> Dime attachments: Type_Length of the final record chunk must be zero
> --------------------------------------------------------------------
>
>          Key: AXIS-2084
>          URL: http://issues.apache.org/jira/browse/AXIS-2084
>      Project: Apache Axis
>         Type: Bug
>   Components: Serialization/Deserialization
>     Versions: 1.2, 1.2.1
>  Environment: Microsoft XP
>     Reporter: Coralia Silvana Popa
>     Assignee: Davanum Srinivas
>  Attachments: DimeBodyPart.java, DimeBodyPartDiff.txt, 
> DimeBodyPartDiff_2.txt, DimeBodyPart_2.java, EchoAttachment.java
>
> Large files sent as DIME attachments are not correctly serialized.
> When reading a series of chunked records, the parser assumes that the first 
> record without the CF flag set is the final record chunk; in this case, 
> it's the last record in my sample. The record type is specified only in the 
> first record chunk; all remaining chunks must have the TYPE_T field and 
> all remaining header fields (except for the DATA_LENGTH field) set to zero.
> It seems that TYPE_LENGTH (and possibly other header fields) is not set to 
> zero for the last chunk. The code works correctly when there is only one 
> chunk.
> The problem is in the class org.apache.axis.attachments.DimeBodyPart, in 
> the method void send(java.io.OutputStream os, byte position, 
> DynamicContentDataHandler dh, final long maxchunk).
> I suggest the following code to fix this problem:
> void send(java.io.OutputStream os, byte position,
>           DynamicContentDataHandler dh, final long maxchunk)
>         throws java.io.IOException {
>
>     BufferedInputStream in = new BufferedInputStream(dh.getInputStream());
>
>     final int myChunkSize = dh.getChunkSize();
>
>     byte[] buffer1 = new byte[myChunkSize];
>     byte[] buffer2 = new byte[myChunkSize];
>
>     int bytesRead1 = 0, bytesRead2 = 0;
>     bytesRead1 = in.read(buffer1);
>
>     if (bytesRead1 < 0) {
>         sendHeader(os, position, 0, (byte) 0);
>         os.write(pad, 0, dimePadding(0));
>         return;
>     }
>     byte chunknext = 0;
>     do {
>         bytesRead2 = in.read(buffer2);
>
>         if (bytesRead2 < 0) {
>             // Last record... do not set the chunk bit.
>             // buffer1 contains the last chunked record!
>             sendChunk(os, position, buffer1, 0, bytesRead1, chunknext);
>             break;
>         }
>
>         sendChunk(os, position, buffer1, 0, bytesRead1,
>                   (byte) (CHUNK | chunknext));
>         chunknext = CHUNK_NEXT;
>         // Now that we have written out buffer1, copy buffer2 into buffer1.
>         System.arraycopy(buffer2, 0, buffer1, 0, myChunkSize);
>         bytesRead1 = bytesRead2;
>
>     } while (bytesRead2 > 0);
> }
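The key idea in the suggested fix is double-buffered read-ahead: buffer2 is filled one read before buffer1 is emitted, so the loop knows whether the chunk it is about to write is the last one (and therefore must not carry the CF/chunk flag). The sketch below isolates that loop in a self-contained class so the flag sequence can be checked in isolation; the class name, flag constants, and the chunkFlags helper are hypothetical illustrations, not Axis API.

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical standalone sketch of the double-buffered read-ahead loop
// from the suggested fix. Flag values are assumptions for illustration.
public class DimeChunkSketch {
    static final byte CHUNK = 0x01;      // assumed: "more chunks follow" (CF) bit
    static final byte CHUNK_NEXT = 0x02; // assumed: "continuation of a chunk series" bit

    // Returns the flag byte that would be emitted for each chunk of the input.
    static List<Byte> chunkFlags(byte[] data, int chunkSize) throws IOException {
        List<Byte> flags = new ArrayList<>();
        BufferedInputStream in = new BufferedInputStream(new ByteArrayInputStream(data));
        byte[] buffer1 = new byte[chunkSize];
        byte[] buffer2 = new byte[chunkSize];
        int bytesRead1 = in.read(buffer1);
        if (bytesRead1 < 0) {
            return flags; // empty payload: a single zero-length record, no chunks
        }
        byte chunknext = 0;
        int bytesRead2;
        do {
            bytesRead2 = in.read(buffer2); // read ahead to detect the final chunk
            if (bytesRead2 < 0) {
                // Last chunk: the CHUNK (CF) bit must NOT be set here.
                flags.add(chunknext);
                break;
            }
            flags.add((byte) (CHUNK | chunknext));
            chunknext = CHUNK_NEXT;
            System.arraycopy(buffer2, 0, buffer1, 0, chunkSize);
            bytesRead1 = bytesRead2;
        } while (bytesRead2 > 0);
        return flags;
    }
}
```

For a 5-byte payload with a 2-byte chunk size this yields three chunks: the first with CHUNK set, the middle with CHUNK | CHUNK_NEXT, and the final one with only CHUNK_NEXT, matching the rule that the final record chunk must not carry the chunk bit. A payload that fits in one chunk yields a single record with a zero flag.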

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira
