[ https://issues.apache.org/jira/browse/HDDS-732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16687894#comment-16687894 ]
Elek, Marton commented on HDDS-732:
-----------------------------------

Thanks [~candychencan] for working on this issue. I executed the unit tests and they work well, but I have two comments:

1. First of all, the read(byte b[], int off, int len) call can be faster than read() in some cases (depending on the underlying InputStream). If I understood the original comment from [~bharatviswa] correctly, the goal here is to use read(byte[],int,int) on the originalStream inside our own read(byte[],int,int), which this patch adds very well. I think it could be done by adjusting line 82 to read max(remainingData, currentLen) bytes with read(byte[],int,int).

2. The other is minor: even if we have this excellent read(byte[],int,int), some naive client may still use the simple read(). With the modification in the unit test we test only the new read method. I would test both of the reads.

> Add read method which takes offset and length in SignedChunkInputStream
> -----------------------------------------------------------------------
>
>                 Key: HDDS-732
>                 URL: https://issues.apache.org/jira/browse/HDDS-732
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>            Reporter: Bharat Viswanadham
>            Assignee: chencan
>            Priority: Major
>         Attachments: HDDS-732.001.patch, HDDS-732.002.patch
>
>
> This Jira is created from the comments in HDDS-693
>
> {quote}We have only read(), we don't have read(byte[] b, int off, int len),
> we might see some slow operation during put with SignedInputStream.
> {quote}
> 100% agree. I didn't check any performance numbers yet, but we need to do it
> sooner or later. I would implement this method in a separate jira as it adds
> more complexity, and as of now I would like to support the mkdir operations of
> the s3a unit tests (where the size is 0).
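The SignedChunkInputStream internals are not shown in this thread, so as a purely illustrative sketch of the two review points (bulk reads delegating to the underlying stream, and keeping the single-byte read() correct for naive clients), something like the following wrapper could work. The class name, the remainingData field, and the chunk-length constructor argument are all hypothetical, not the actual patch; this sketch also bounds the bulk read with min() rather than max(), on the assumption that a read must not run past the current chunk's remaining bytes.

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/**
 * Illustrative sketch only: a wrapper stream that forwards bulk reads to the
 * underlying stream instead of looping over single-byte read() calls.
 * Names (remainingData, chunkLength) are hypothetical and do not reflect
 * the real SignedChunkInputStream implementation.
 */
public class BulkReadSketch extends FilterInputStream {

  /** Bytes left in the current (hypothetical) chunk. */
  private int remainingData;

  public BulkReadSketch(InputStream originalStream, int chunkLength) {
    super(originalStream);
    this.remainingData = chunkLength;
  }

  @Override
  public int read() throws IOException {
    // Slow path: one byte at a time, kept correct for naive clients.
    if (remainingData <= 0) {
      return -1;
    }
    int b = in.read();
    if (b >= 0) {
      remainingData--;
    }
    return b;
  }

  @Override
  public int read(byte[] b, int off, int len) throws IOException {
    // Fast path: delegate one bulk read to the underlying stream,
    // bounded by the bytes remaining in the current chunk.
    if (remainingData <= 0) {
      return -1;
    }
    int n = in.read(b, off, Math.min(remainingData, len));
    if (n > 0) {
      remainingData -= n;
    }
    return n;
  }

  public static void main(String[] args) throws IOException {
    byte[] data = "signed-chunk-payload".getBytes();
    // Exercise both paths, as the review suggests testing both reads.
    try (BulkReadSketch s =
        new BulkReadSketch(new ByteArrayInputStream(data), data.length)) {
      System.out.println((char) s.read());      // single-byte path
      byte[] buf = new byte[8];
      int n = s.read(buf, 0, buf.length);       // bulk path
      System.out.println(n + ":" + new String(buf, 0, n));
    }
  }
}
```

A unit test in this spirit would assert on the contents returned by both read() and read(byte[],int,int), so a regression in either path is caught.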
-- This message was sent by Atlassian JIRA (v7.6.3#76005) --------------------------------------------------------------------- To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org