[ https://issues.apache.org/jira/browse/HDFS-15442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
AMC-team updated HDFS-15442:
----------------------------
Description:
In the current implementation of checkpoint image transfer, if the file length is bigger than the configured value dfs.image.transfer.chunksize, chunked streaming mode is used to avoid internal buffering. This mode should be used only if more than chunkSize bytes of data are present to upload; otherwise the upload may sometimes not happen.

{code:java}
// TransferFsImage.java
int chunkSize = (int) conf.getLongBytes(
    DFSConfigKeys.DFS_IMAGE_TRANSFER_CHUNKSIZE_KEY,
    DFSConfigKeys.DFS_IMAGE_TRANSFER_CHUNKSIZE_DEFAULT);
if (imageFile.length() > chunkSize) {
  // using chunked streaming mode to support upload of 2GB+ files and to
  // avoid internal buffering.
  // this mode should be used only if more than chunkSize data is present
  // to upload. otherwise upload may not happen sometimes.
  connection.setChunkedStreamingMode(chunkSize);
}
{code}

There is no validation of this parameter, so a user may accidentally set it to an invalid value. For example, if the user sets chunkSize to a negative value, chunked streaming mode will always be used. Inside setChunkedStreamingMode(chunkSize) there is correction code: if the chunk length is <= 0, it is replaced with DEFAULT_CHUNK_SIZE.

{code:java}
public void setChunkedStreamingMode(int chunklen) {
  if (connected) {
    throw new IllegalStateException("Can't set streaming mode: already connected");
  }
  if (fixedContentLength != -1 || fixedContentLengthLong != -1) {
    throw new IllegalStateException("Fixed length streaming mode set");
  }
  chunkLength = chunklen <= 0 ? DEFAULT_CHUNK_SIZE : chunklen;
}
{code}

However, that correction comes too late: if the user sets dfs.image.transfer.chunksize to a value <= 0, then even images whose imageFile.length() is smaller than DEFAULT_CHUNK_SIZE will use chunked streaming mode, and the upload may fail as described above.
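The ordering problem can be illustrated with a toy check (the class and method names are hypothetical; only the comparison mirrors the one in TransferFsImage):

```java
public class ChunkedModeDemo {
    // Mirrors the decision in TransferFsImage: chunked streaming is chosen
    // whenever the image is larger than the configured chunk size.
    static boolean usesChunkedMode(long imageLength, int chunkSize) {
        return imageLength > chunkSize;
    }

    public static void main(String[] args) {
        // With a mis-configured negative chunk size, even a tiny image takes
        // the chunked path. setChunkedStreamingMode() later replaces -1 with
        // DEFAULT_CHUNK_SIZE, but the mode decision has already been made.
        System.out.println(usesChunkedMode(1024, -1));    // true
        System.out.println(usesChunkedMode(1024, 65536)); // false
    }
}
```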
*(This scenario may be rare or even impossible, but at least we can prevent users from setting this parameter to an extremely small value.)*

*How to fix:*
Add checking or correction code right after parsing the config value, before the value is actually used (in the if statement and in setChunkedStreamingMode).
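A minimal sketch of the proposed guard, as a standalone helper (the class and method names are hypothetical; the 64 KB fallback stands in for DFSConfigKeys.DFS_IMAGE_TRANSFER_CHUNKSIZE_DEFAULT):

```java
// Hypothetical helper showing the proposed check: correct a non-positive
// configured chunk size right after parsing, before it is compared against
// imageFile.length() and passed to setChunkedStreamingMode().
public class ChunkSizeCheck {
    // Stand-in for DFSConfigKeys.DFS_IMAGE_TRANSFER_CHUNKSIZE_DEFAULT.
    static final int DEFAULT_CHUNK_SIZE = 64 * 1024;

    static int sanitizeChunkSize(long configured) {
        if (configured <= 0 || configured > Integer.MAX_VALUE) {
            // Fall back to the default here, rather than relying on
            // HttpURLConnection.setChunkedStreamingMode() to do it after
            // the imageFile.length() > chunkSize decision has already run.
            return DEFAULT_CHUNK_SIZE;
        }
        return (int) configured;
    }
}
```

In TransferFsImage this would amount to wrapping the conf.getLongBytes(...) result before the if statement, so both the mode decision and the chunk length see the same corrected value.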
> dfs.image.transfer.chunksize should have check code before use
> --------------------------------------------------------------
>
> Key: HDFS-15442
> URL: https://issues.apache.org/jira/browse/HDFS-15442
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: AMC-team
> Priority: Major

--
This message was sent by Atlassian Jira
(v8.3.4#803005)