[ http://issues.apache.org/jira/browse/AXIS-2084?page=comments#action_12316338 ]
Tom Ziemer commented on AXIS-2084:
----------------------------------

Hi,

I was using Axis 1.2 Final on our server in combination with a) an Axis client and b) a .NET (2.0 Beta) client to send attachments. With the Axis client we never encountered any problems, no matter how large the files were (1-20 MB). With the .NET client we could only receive attachments up to 1 MB; for larger files we got the exception mentioned in the original bug description.

After switching to the CVS version of Axis (as of 07/21/05) the problem is worse: the Axis client still works as expected, but the .NET client cannot receive any attachments at all (no matter how small they are) and always throws this exception:

"WSE311: The Type_T of non_chunked record must not be 0 (Type_T=Unchanged)." [Source: Microsoft.Web.Services2]

We are using version 1.29 of org.apache.axis.attachments.DimeBodyPart.

Tom

> Dime attachments: Type_Length of the final record chunk must be zero
> ---------------------------------------------------------------------
>
>          Key: AXIS-2084
>          URL: http://issues.apache.org/jira/browse/AXIS-2084
>      Project: Apache Axis
>         Type: Bug
>   Components: Serialization/Deserialization
>     Versions: 1.2, 1.2.1
>  Environment: Microsoft XP
>     Reporter: Coralia Silvana Popa
>     Assignee: Davanum Srinivas
> Attachments: DimeBodyPart.java, DimeBodyPartDiff.txt, EchoAttachment.java
>
> Large files sent as DIME attachments are not correctly serialized.
> When reading a series of chunked records, the parser assumes that the first
> record without the CF flag set is the final record of the series; in this
> case, it is the last record in my sample. The record type is specified only
> in the first record chunk, and all remaining chunks must have the TYPE_T
> field and all remaining header fields (except for the DATA_LENGTH field)
> set to zero. It seems that Type_Length (and maybe other header fields) is
> not set to 0 for the last chunk. The code works correctly when there is
> only one chunk.
> The problem is in class org.apache.axis.attachments.DimeBodyPart, in method
> void send(java.io.OutputStream os, byte position, DynamicContentDataHandler dh, final long maxchunk).
> I suggest the following code to fix this problem:
>
>     void send(java.io.OutputStream os, byte position, DynamicContentDataHandler dh,
>               final long maxchunk)
>         throws java.io.IOException {
>
>         BufferedInputStream in = new BufferedInputStream(dh.getInputStream());
>
>         final int myChunkSize = dh.getChunkSize();
>
>         byte[] buffer1 = new byte[myChunkSize];
>         byte[] buffer2 = new byte[myChunkSize];
>
>         int bytesRead1 = 0, bytesRead2 = 0;
>         bytesRead1 = in.read(buffer1);
>
>         if (bytesRead1 < 0) {
>             // empty input: send an empty, unchunked record
>             sendHeader(os, position, 0, (byte) 0);
>             os.write(pad, 0, dimePadding(0));
>             return;
>         }
>
>         byte chunknext = 0;
>         do {
>             // read one buffer ahead so we know whether buffer1 holds the last chunk
>             bytesRead2 = in.read(buffer2);
>
>             if (bytesRead2 < 0) {
>                 // last record... do not set the chunk (CF) bit.
>                 // buffer1 contains the last chunked record!
>                 sendChunk(os, position, buffer1, 0, bytesRead1, chunknext);
>                 break;
>             }
>
>             sendChunk(os, position, buffer1, 0, bytesRead1, (byte) (CHUNK | chunknext));
>             chunknext = CHUNK_NEXT;
>
>             // now that buffer1 has been written out, copy buffer2 into buffer1
>             System.arraycopy(buffer2, 0, buffer1, 0, myChunkSize);
>             bytesRead1 = bytesRead2;
>
>         } while (bytesRead2 > 0);
>     }
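For reference, a minimal sketch of the chunk-header rules the report describes, assuming the 12-byte fixed header layout of the DIME draft specification; the class, method, and constant names below are illustrative only and are not the actual org.apache.axis.attachments.DimeBodyPart code:

    import java.io.IOException;
    import java.io.OutputStream;

    /**
     * Illustrative sketch only: writes the 12-byte fixed DIME header for one
     * chunk of a record. On every chunk after the first, TYPE_T is set to 0
     * (UNCHANGED) and ID_LENGTH/TYPE_LENGTH are zeroed; only DATA_LENGTH is
     * carried on every chunk. CF is set on all chunks except the last.
     * Field layout and TYPE_T values follow my reading of the DIME draft.
     */
    class DimeChunkHeaderSketch {

        static final int VERSION = 1;
        static final int TYPE_T_URI = 0x02;        // type given as absolute URI
        static final int TYPE_T_UNCHANGED = 0x00;  // only legal on continuation chunks

        static void writeHeader(OutputStream os,
                                boolean firstChunk,
                                boolean lastChunk,
                                int idLength,
                                int typeLength,
                                int dataLength) throws IOException {
            byte[] h = new byte[12];

            // Octet 0: VERSION (5 bits), then MB, ME, CF flags.
            // MB/ME are omitted (left 0) for brevity in this sketch.
            int cf = lastChunk ? 0 : 1;            // CF set on all but the final chunk
            h[0] = (byte) ((VERSION << 3) | cf);

            // Octet 1: TYPE_T in the high nibble; 0 (UNCHANGED) after the first chunk.
            h[1] = (byte) ((firstChunk ? TYPE_T_URI : TYPE_T_UNCHANGED) << 4);

            // Octets 2-3 (OPTIONS_LENGTH) stay 0 here.
            // ID_LENGTH and TYPE_LENGTH appear only on the first chunk;
            // continuation and final chunks must carry zeros in these fields.
            putShort(h, 4, firstChunk ? idLength : 0);
            putShort(h, 6, firstChunk ? typeLength : 0);

            // DATA_LENGTH is the only length field set on every chunk.
            putInt(h, 8, dataLength);

            os.write(h);
        }

        private static void putShort(byte[] b, int off, int v) {
            b[off] = (byte) (v >>> 8);
            b[off + 1] = (byte) v;
        }

        private static void putInt(byte[] b, int off, int v) {
            b[off] = (byte) (v >>> 24);
            b[off + 1] = (byte) (v >>> 16);
            b[off + 2] = (byte) (v >>> 8);
            b[off + 3] = (byte) v;
        }
    }

Under these rules a single, non-chunked record (first and last chunk at once) keeps its real TYPE_T value with CF cleared, which appears to be the condition the WSE311 check in the .NET client enforces.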
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira
