[jira] [Updated] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file

2020-07-01 Thread Igloo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo updated HDFS-15445:
-
Description: 
*Problem:*

In our production environment we store files in HDFS using the zstd compressor. Recently we found that one specific file can make the zstandard compressor fail with a "generic error".

We can reproduce the issue with that specific file (attached as badcase.data).

!image-2020-06-30-11-51-18-026.png|width=1031,height=230!

 

*Analysis:*

ZStandardCompressor uses a single bufferSize (taken from zstd's recommended compress output buffer size) for both inBufferSize and outBufferSize.

!image-2020-06-30-11-35-46-859.png|width=1027,height=387!

However, zstd actually provides two separate recommendations: one for the input buffer size and one for the output buffer size.

!image-2020-06-30-11-39-17-861.png!
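For reference, the zstd streaming API exposes the two recommendations through separate functions. A minimal sketch against the plain C API (not the Hadoop JNI wrapper) that prints the values for the linked zstd build:

{code:c}
#include <stdio.h>
#include <zstd.h>

/* zstd recommends different sizes for the input and output sides of a
 * streaming compression context; the exact values depend on the zstd version. */
int main(void) {
    size_t in_size  = ZSTD_CStreamInSize();   /* recommended input buffer size  */
    size_t out_size = ZSTD_CStreamOutSize();  /* recommended output buffer size */
    printf("recommended input: %zu, output: %zu\n", in_size, out_size);
    return 0;
}
{code}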

 

*Workaround*

One workaround is to use the input/output buffer sizes recommended by the zstd library. This avoids the problem, although we do not yet know why.

zstd recommended input buffer size: 131072 (128 * 1024)

zstd recommended output buffer size: 131591

!image-2020-06-30-11-42-44-585.png|width=1023,height=196!
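To make the workaround concrete, the following is a minimal streaming-compression sketch that sizes its buffers with the two recommended values (plain zstd C API with reduced error handling; the Hadoop codec goes through JNI, so this only illustrates the buffer-size usage, not the Hadoop code path):

{code:c}
#include <stdio.h>
#include <stdlib.h>
#include <zstd.h>

/* Compress fin into fout using the buffer sizes zstd itself recommends. */
static void compress_stream(FILE *fin, FILE *fout) {
    size_t const in_size  = ZSTD_CStreamInSize();   /* e.g. 131072 */
    size_t const out_size = ZSTD_CStreamOutSize();  /* e.g. 131591 */
    void *in_buf  = malloc(in_size);
    void *out_buf = malloc(out_size);
    ZSTD_CStream *cstream = ZSTD_createCStream();
    ZSTD_initCStream(cstream, 3 /* compression level */);

    size_t n;
    while ((n = fread(in_buf, 1, in_size, fin)) > 0) {
        ZSTD_inBuffer input = { in_buf, n, 0 };
        while (input.pos < input.size) {
            ZSTD_outBuffer output = { out_buf, out_size, 0 };
            size_t ret = ZSTD_compressStream(cstream, &output, &input);
            if (ZSTD_isError(ret)) {
                fprintf(stderr, "zstd error: %s\n", ZSTD_getErrorName(ret));
                exit(1);
            }
            fwrite(out_buf, 1, output.pos, fout);
        }
    }

    /* Flush internal buffers and write the frame epilogue. */
    size_t remaining;
    do {
        ZSTD_outBuffer output = { out_buf, out_size, 0 };
        remaining = ZSTD_endStream(cstream, &output);
        fwrite(out_buf, 1, output.pos, fout);
    } while (remaining > 0);

    ZSTD_freeCStream(cstream);
    free(in_buf);
    free(out_buf);
}
{code}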

 

 

 

 

 

 

 

  was:
*Problem:*

In our production environment we store files in HDFS using the zstd compressor. Recently we found that one specific file can lead to zstandard compressor failures.

We can reproduce the issue with that specific file (attached as badcase.data).

!image-2020-06-30-11-51-18-026.png|width=699,height=156!

*Analysis:*

ZStandardCompressor uses a single bufferSize (taken from zstd's recommended compress output buffer size) for both inBufferSize and outBufferSize.

!image-2020-06-30-11-35-46-859.png|width=475,height=179!

However, zstd actually provides two separate recommendations: one for the input buffer size and one for the output buffer size.

!image-2020-06-30-11-39-17-861.png!

*Workaround*

One workaround is to use the input/output buffer sizes recommended by the zstd library. This avoids the problem, although we do not yet know why.

zstd recommended input buffer size: 131072 (128 * 1024)

zstd recommended output buffer size: 131591

!image-2020-06-30-11-42-44-585.png!

 

 

 

 

 

 

 


> ZStandardCodec compression mail fail(generic error) when encounter specific 
> file
> 
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: HDFS-15445.patch, badcase.data, 
> image-2020-06-30-11-35-46-859.png, image-2020-06-30-11-39-17-861.png, 
> image-2020-06-30-11-42-44-585.png, image-2020-06-30-11-51-18-026.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
> !image-2020-06-30-11-51-18-026.png|width=1031,height=230!
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=1027,height=387!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> zstd recommended input buffer size:  1301072 (128 * 1024)
> zstd recommended ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png|width=1023,height=196!
>  
>  
>  
>  
>  
>  
>  






[jira] [Updated] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file

2020-06-30 Thread Igloo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo updated HDFS-15445:
-
Attachment: (was: 15445.patch)

> ZStandardCodec compression mail fail(generic error) when encounter specific 
> file
> 
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: HDFS-15445.patch, badcase.data, 
> image-2020-06-30-11-35-46-859.png, image-2020-06-30-11-39-17-861.png, 
> image-2020-06-30-11-42-44-585.png, image-2020-06-30-11-51-18-026.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
> !image-2020-06-30-11-51-18-026.png|width=699,height=156!
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> zstd recommended input buffer size:  1301072 (128 * 1024)
> zstd recommended ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Updated] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file

2020-06-30 Thread Igloo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo updated HDFS-15445:
-
Attachment: HDFS-15445.patch
Status: Patch Available  (was: Open)

> ZStandardCodec compression mail fail(generic error) when encounter specific 
> file
> 
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: 15445.patch, HDFS-15445.patch, badcase.data, 
> image-2020-06-30-11-35-46-859.png, image-2020-06-30-11-39-17-861.png, 
> image-2020-06-30-11-42-44-585.png, image-2020-06-30-11-51-18-026.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
> !image-2020-06-30-11-51-18-026.png|width=699,height=156!
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> zstd recommended input buffer size:  1301072 (128 * 1024)
> zstd recommended ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Updated] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file

2020-06-30 Thread Igloo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo updated HDFS-15445:
-
Status: Open  (was: Patch Available)

> ZStandardCodec compression mail fail(generic error) when encounter specific 
> file
> 
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: 15445.patch, badcase.data, 
> image-2020-06-30-11-35-46-859.png, image-2020-06-30-11-39-17-861.png, 
> image-2020-06-30-11-42-44-585.png, image-2020-06-30-11-51-18-026.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
> !image-2020-06-30-11-51-18-026.png|width=699,height=156!
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> zstd recommended input buffer size:  1301072 (128 * 1024)
> zstd recommended ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Comment Edited] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file

2020-06-29 Thread Igloo (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17148330#comment-17148330
 ] 

Igloo edited comment on HDFS-15445 at 6/30/20, 5:50 AM:


This issue may lead to HBase regionserver crashes if an HBase table uses COMPRESSION => "ZSTD".

 

https://issues.apache.org/jira/browse/HBASE-16710


was (Author: igloo1986):
the issue may lead to HBase regionserver crashes, if HBase uses

> ZStandardCodec compression mail fail(generic error) when encounter specific 
> file
> 
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: 15445.patch, badcase.data, 
> image-2020-06-30-11-35-46-859.png, image-2020-06-30-11-39-17-861.png, 
> image-2020-06-30-11-42-44-585.png, image-2020-06-30-11-51-18-026.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
> !image-2020-06-30-11-51-18-026.png|width=699,height=156!
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> zstd recommended input buffer size:  1301072 (128 * 1024)
> zstd recommended ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Commented] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file

2020-06-29 Thread Igloo (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17148330#comment-17148330
 ] 

Igloo commented on HDFS-15445:
--

the issue may lead to HBase regionserver crashes, if HBase uses

> ZStandardCodec compression mail fail(generic error) when encounter specific 
> file
> 
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: 15445.patch, badcase.data, 
> image-2020-06-30-11-35-46-859.png, image-2020-06-30-11-39-17-861.png, 
> image-2020-06-30-11-42-44-585.png, image-2020-06-30-11-51-18-026.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
> !image-2020-06-30-11-51-18-026.png|width=699,height=156!
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> zstd recommended input buffer size:  1301072 (128 * 1024)
> zstd recommended ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Issue Comment Deleted] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file

2020-06-29 Thread Igloo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo updated HDFS-15445:
-
Comment: was deleted

(was: the issue may lead to HBase regionserver crashes, if HBase uses )

> ZStandardCodec compression mail fail(generic error) when encounter specific 
> file
> 
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: 15445.patch, badcase.data, 
> image-2020-06-30-11-35-46-859.png, image-2020-06-30-11-39-17-861.png, 
> image-2020-06-30-11-42-44-585.png, image-2020-06-30-11-51-18-026.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
> !image-2020-06-30-11-51-18-026.png|width=699,height=156!
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> zstd recommended input buffer size:  1301072 (128 * 1024)
> zstd recommended ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Commented] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file

2020-06-29 Thread Igloo (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17148329#comment-17148329
 ] 

Igloo commented on HDFS-15445:
--

the issue may lead to HBase regionserver crashes, if HBase uses

> ZStandardCodec compression mail fail(generic error) when encounter specific 
> file
> 
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: 15445.patch, badcase.data, 
> image-2020-06-30-11-35-46-859.png, image-2020-06-30-11-39-17-861.png, 
> image-2020-06-30-11-42-44-585.png, image-2020-06-30-11-51-18-026.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
> !image-2020-06-30-11-51-18-026.png|width=699,height=156!
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> zstd recommended input buffer size:  1301072 (128 * 1024)
> zstd recommended ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Updated] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file

2020-06-29 Thread Igloo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo updated HDFS-15445:
-
Attachment: 15445.patch
Status: Patch Available  (was: Open)

> ZStandardCodec compression mail fail(generic error) when encounter specific 
> file
> 
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: 15445.patch, badcase.data, 
> image-2020-06-30-11-35-46-859.png, image-2020-06-30-11-39-17-861.png, 
> image-2020-06-30-11-42-44-585.png, image-2020-06-30-11-51-18-026.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
> !image-2020-06-30-11-51-18-026.png|width=699,height=156!
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> zstd recommended input buffer size:  1301072 (128 * 1024)
> zstd recommended ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Updated] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file

2020-06-29 Thread Igloo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo updated HDFS-15445:
-
Status: Open  (was: Patch Available)

> ZStandardCodec compression mail fail(generic error) when encounter specific 
> file
> 
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: badcase.data, image-2020-06-30-11-35-46-859.png, 
> image-2020-06-30-11-39-17-861.png, image-2020-06-30-11-42-44-585.png, 
> image-2020-06-30-11-51-18-026.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
> !image-2020-06-30-11-51-18-026.png|width=699,height=156!
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> zstd recommended input buffer size:  1301072 (128 * 1024)
> zstd recommended ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Updated] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file

2020-06-29 Thread Igloo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo updated HDFS-15445:
-
Status: Patch Available  (was: Open)

> ZStandardCodec compression mail fail(generic error) when encounter specific 
> file
> 
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: badcase.data, image-2020-06-30-11-35-46-859.png, 
> image-2020-06-30-11-39-17-861.png, image-2020-06-30-11-42-44-585.png, 
> image-2020-06-30-11-51-18-026.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
> !image-2020-06-30-11-51-18-026.png|width=699,height=156!
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> zstd recommended input buffer size:  1301072 (128 * 1024)
> zstd recommended ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Updated] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file

2020-06-29 Thread Igloo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo updated HDFS-15445:
-
Description: 
*Problem:*

In our production environment we store files in HDFS using the zstd compressor. Recently we found that one specific file can lead to zstandard compressor failures.

We can reproduce the issue with that specific file (attached as badcase.data).

!image-2020-06-30-11-51-18-026.png|width=699,height=156!

*Analysis:*

ZStandardCompressor uses a single bufferSize (taken from zstd's recommended compress output buffer size) for both inBufferSize and outBufferSize.

!image-2020-06-30-11-35-46-859.png|width=475,height=179!

However, zstd actually provides two separate recommendations: one for the input buffer size and one for the output buffer size.

!image-2020-06-30-11-39-17-861.png!

*Workaround*

One workaround is to use the input/output buffer sizes recommended by the zstd library. This avoids the problem, although we do not yet know why.

zstd recommended input buffer size: 131072 (128 * 1024)

zstd recommended output buffer size: 131591

!image-2020-06-30-11-42-44-585.png!

 

 

 

 

 

 

 

  was:
*Problem:*

In our production environment we store files in HDFS using the zstd compressor. Recently we found that one specific file can lead to zstandard compressor failures.

We can reproduce the issue with that specific file (attached as badcase.data).

!image-2020-06-30-11-51-18-026.png|width=699,height=156!

*Analysis:*

ZStandardCompressor uses a single bufferSize (taken from zstd's recommended compress output buffer size) for both inBufferSize and outBufferSize.

!image-2020-06-30-11-35-46-859.png|width=475,height=179!

However, zstd actually provides two separate recommendations: one for the input buffer size and one for the output buffer size.

!image-2020-06-30-11-39-17-861.png!

*Workaround*

One workaround is to use the input/output buffer sizes recommended by the zstd library. This avoids the problem, although we do not yet know why.

input buffer size: 131072 (128 * 1024)

output buffer size: 131591

!image-2020-06-30-11-42-44-585.png!

 

 

 

 

 

 

 


> ZStandardCodec compression mail fail(generic error) when encounter specific 
> file
> 
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: badcase.data, image-2020-06-30-11-35-46-859.png, 
> image-2020-06-30-11-39-17-861.png, image-2020-06-30-11-42-44-585.png, 
> image-2020-06-30-11-51-18-026.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
> !image-2020-06-30-11-51-18-026.png|width=699,height=156!
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> zstd recommended input buffer size:  1301072 (128 * 1024)
> zstd recommended ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Updated] (HDFS-15445) ZStandardCodec compression may fail (generic error) when encountering a specific file

2020-06-29 Thread Igloo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo updated HDFS-15445:
-
Summary: ZStandardCodec compression may fail (generic error) when encountering a specific file  (was: ZStandardCodec compression may fail when encountering a specific file)

> ZStandardCodec compression mail fail(generic error) when encounter specific 
> file
> 
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: badcase.data, image-2020-06-30-11-35-46-859.png, 
> image-2020-06-30-11-39-17-861.png, image-2020-06-30-11-42-44-585.png, 
> image-2020-06-30-11-51-18-026.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
> !image-2020-06-30-11-51-18-026.png|width=699,height=156!
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> input buffer size:  1301072 (128 * 1024)
> ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Commented] (HDFS-15445) ZStandardCodec compression may fail when encountering a specific file

2020-06-29 Thread Igloo (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17148293#comment-17148293
 ] 

Igloo commented on HDFS-15445:
--

I will work on the issue.

> ZStandardCodec compression mail fail when encounter specific file
> -
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: badcase.data, image-2020-06-30-11-35-46-859.png, 
> image-2020-06-30-11-39-17-861.png, image-2020-06-30-11-42-44-585.png, 
> image-2020-06-30-11-51-18-026.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
> !image-2020-06-30-11-51-18-026.png|width=699,height=156!
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> input buffer size:  1301072 (128 * 1024)
> ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Updated] (HDFS-15445) ZStandardCodec compression may fail when encountering a specific file

2020-06-29 Thread Igloo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo updated HDFS-15445:
-
Attachment: image-2020-06-30-11-51-18-026.png

> ZStandardCodec compression mail fail when encounter specific file
> -
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: badcase.data, image-2020-06-30-11-35-46-859.png, 
> image-2020-06-30-11-39-17-861.png, image-2020-06-30-11-42-44-585.png, 
> image-2020-06-30-11-51-18-026.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> input buffer size:  1301072 (128 * 1024)
> ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Updated] (HDFS-15445) ZStandardCodec compression may fail when encountering a specific file

2020-06-29 Thread Igloo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo updated HDFS-15445:
-
Description: 
*Problem:*

In our production environment we store files in HDFS using the zstd compressor. Recently we found that one specific file can lead to zstandard compressor failures.

We can reproduce the issue with that specific file (attached as badcase.data).

!image-2020-06-30-11-51-18-026.png|width=699,height=156!

*Analysis:*

ZStandardCompressor uses a single bufferSize (taken from zstd's recommended compress output buffer size) for both inBufferSize and outBufferSize.

!image-2020-06-30-11-35-46-859.png|width=475,height=179!

However, zstd actually provides two separate recommendations: one for the input buffer size and one for the output buffer size.

!image-2020-06-30-11-39-17-861.png!

*Workaround*

One workaround is to use the input/output buffer sizes recommended by the zstd library. This avoids the problem, although we do not yet know why.

input buffer size: 131072 (128 * 1024)

output buffer size: 131591

!image-2020-06-30-11-42-44-585.png!

 

 

 

 

 

 

 

  was:
*Problem:*

In our production environment we store files in HDFS using the zstd compressor. Recently we found that one specific file can lead to zstandard compressor failures.

We can reproduce the issue with that specific file (attached as badcase.data).

*Analysis:*

ZStandardCompressor uses a single bufferSize (taken from zstd's recommended compress output buffer size) for both inBufferSize and outBufferSize.

!image-2020-06-30-11-35-46-859.png|width=475,height=179!

However, zstd actually provides two separate recommendations: one for the input buffer size and one for the output buffer size.

!image-2020-06-30-11-39-17-861.png!

*Workaround*

One workaround is to use the input/output buffer sizes recommended by the zstd library. This avoids the problem, although we do not yet know why.

input buffer size: 131072 (128 * 1024)

output buffer size: 131591

!image-2020-06-30-11-42-44-585.png!

 

 

 

 

 

 

 


> ZStandardCodec compression mail fail when encounter specific file
> -
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: badcase.data, image-2020-06-30-11-35-46-859.png, 
> image-2020-06-30-11-39-17-861.png, image-2020-06-30-11-42-44-585.png, 
> image-2020-06-30-11-51-18-026.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
> !image-2020-06-30-11-51-18-026.png|width=699,height=156!
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> input buffer size:  1301072 (128 * 1024)
> ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Updated] (HDFS-15445) ZStandardCodec compression may fail when encountering a specific file

2020-06-29 Thread Igloo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo updated HDFS-15445:
-
Description: 
*Problem:*

In our production environment we store files in HDFS using the zstd compressor. Recently we found that one specific file can lead to zstandard compressor failures.

We can reproduce the issue with that specific file (attached as badcase.data).

*Analysis:*

ZStandardCompressor uses a single bufferSize (taken from zstd's recommended compress output buffer size) for both inBufferSize and outBufferSize.

!image-2020-06-30-11-35-46-859.png|width=475,height=179!

However, zstd actually provides two separate recommendations: one for the input buffer size and one for the output buffer size.

!image-2020-06-30-11-39-17-861.png!

*Workaround*

One workaround is to use the input/output buffer sizes recommended by the zstd library. This avoids the problem, although we do not yet know why.

input buffer size: 131072 (128 * 1024)

output buffer size: 131591

!image-2020-06-30-11-42-44-585.png!

 

 

 

 

 

 

 

  was:
*Problem:*

In our production environment we store files in HDFS using the zstd compressor. Recently we found that one specific file can lead to zstandard compressor failures.

We can reproduce the issue with that specific file (attached as badcase.data).

*Analysis:*

ZStandardCompressor uses a single bufferSize (taken from zstd's recommended compress output buffer size) for both inBufferSize and outBufferSize.

!image-2020-06-30-11-35-46-859.png|width=475,height=179!

However, zstd actually provides two separate recommendations: one for the input buffer size and one for the output buffer size.

!image-2020-06-30-11-39-17-861.png!

*Workaround*

One workaround is to use the input/output buffer sizes recommended by the zstd library.

input buffer size: 131072 (128 * 1024)

output buffer size: 131591

 

 

 

 

 

 

 


> ZStandardCodec compression mail fail when encounter specific file
> -
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: badcase.data, image-2020-06-30-11-35-46-859.png, 
> image-2020-06-30-11-39-17-861.png, image-2020-06-30-11-42-44-585.png
>
>
> *Problem:* 
> In our production environment,  we put file in hdfs with zstd compressor, 
> recently, we find that a specific file may leads to zstandard compressor 
> failures. 
> And we can reproduce the issue with specific file(attached file: badcase.data)
>  
> *Analysis*: 
> ZStandarCompressor use buffersize( From zstd recommended compress out buffer 
> size)  for both inBufferSize and outBufferSize 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd indeed provides two separately recommending inputBufferSize and 
> outputBufferSize  
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround,  using recommended in/out buffer size provided by zstd lib 
> can avoid the problem, but we don't know why. 
> input buffer size:  1301072 (128 * 1024)
> ouput buffer size: 131591 
> !image-2020-06-30-11-42-44-585.png!
>  
>  
>  
>  
>  
>  
>  






[jira] [Updated] (HDFS-15445) ZStandardCodec compression may fail when encountering a specific file

2020-06-29 Thread Igloo (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo updated HDFS-15445:
-
Attachment: image-2020-06-30-11-42-44-585.png

> ZStandardCodec compression mail fail when encounter specific file
> -
>
> Key: HDFS-15445
> URL: https://issues.apache.org/jira/browse/HDFS-15445
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 2.6.5
> Environment: zstd 1.3.3
> hadoop 2.6.5 
>  
> --- 
> a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
> @@ -62,10 +62,8 @@
>  @BeforeClass
>  public static void beforeClass() throws Exception {
>  CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
> - uncompressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt").toURI());
> - compressedFile = new File(TestZStandardCompressorDecompressor.class
> - .getResource("/zstd/test_file.txt.zst").toURI());
> + uncompressedFile = new File("/tmp/badcase.data");
> + compressedFile = new File("/tmp/badcase.data.zst");
>Reporter: Igloo
>Priority: Blocker
> Attachments: badcase.data, image-2020-06-30-11-35-46-859.png, 
> image-2020-06-30-11-39-17-861.png, image-2020-06-30-11-42-44-585.png
>
>
> *Problem:* 
> In our production environment, we write files to HDFS with the zstd compressor. 
> Recently, we found that a specific file can lead to zstandard compressor 
> failures. 
> We can reproduce the issue with a specific file (attached as badcase.data).
>  
> *Analysis*: 
> ZStandardCompressor uses a single buffer size (zstd's recommended compression 
> output buffer size) for both inBufferSize and outBufferSize, 
> !image-2020-06-30-11-35-46-859.png|width=475,height=179!
> but zstd actually provides two separate recommendations, one for the input 
> buffer size and one for the output buffer size: 
> !image-2020-06-30-11-39-17-861.png!
>  
> *Workaround*
> One workaround: use the recommended input/output buffer sizes provided by the 
> zstd library. 
> input buffer size: 131072 (128 * 1024)
> output buffer size: 131591 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15445) ZStandardCodec compression may fail when encountering a specific file

2020-06-29 Thread Igloo (Jira)
Igloo created HDFS-15445:


 Summary: ZStandardCodec compression may fail when encountering a 
specific file
 Key: HDFS-15445
 URL: https://issues.apache.org/jira/browse/HDFS-15445
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 2.6.5
 Environment: zstd 1.3.3

hadoop 2.6.5 

 

--- 
a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
+++ 
b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/zstd/TestZStandardCompressorDecompressor.java
@@ -62,10 +62,8 @@
 @BeforeClass
 public static void beforeClass() throws Exception {
 CONFIGURATION.setInt(IO_FILE_BUFFER_SIZE_KEY, 1024 * 64);
- uncompressedFile = new File(TestZStandardCompressorDecompressor.class
- .getResource("/zstd/test_file.txt").toURI());
- compressedFile = new File(TestZStandardCompressorDecompressor.class
- .getResource("/zstd/test_file.txt.zst").toURI());
+ uncompressedFile = new File("/tmp/badcase.data");
+ compressedFile = new File("/tmp/badcase.data.zst");
Reporter: Igloo
 Attachments: badcase.data, image-2020-06-30-11-35-46-859.png, 
image-2020-06-30-11-39-17-861.png

*Problem:* 

In our production environment, we write files to HDFS with the zstd compressor. 
Recently, we found that a specific file can lead to zstandard compressor 
failures. 

We can reproduce the issue with a specific file (attached as badcase.data).
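
A minimal self-contained reproduction sketch (assuming a Hadoop build that ships 
ZStandardCodec, the native zstd library loaded, and the attached file copied to 
/tmp/badcase.data); it simply round-trips the file through the codec, which is 
where the generic error shows up for us. It mirrors the change to 
TestZStandardCompressorDecompressor shown in the Environment section above.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.ZStandardCodec;
import org.apache.hadoop.util.ReflectionUtils;

public class ZstdBadCaseRepro {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Instantiate the codec the same way the compression framework does.
    CompressionCodec codec =
        ReflectionUtils.newInstance(ZStandardCodec.class, conf);
    try (InputStream in = new FileInputStream("/tmp/badcase.data");
         OutputStream out = codec.createOutputStream(
             new FileOutputStream("/tmp/badcase.data.zst"))) {
      // Compressing this particular input fails with a generic zstd error.
      IOUtils.copyBytes(in, out, 64 * 1024);
    }
  }
}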

 

*Analysis*: 

ZStandardCompressor uses a single buffer size (zstd's recommended compression 
output buffer size) for both inBufferSize and outBufferSize, 

!image-2020-06-30-11-35-46-859.png|width=475,height=179!

but zstd actually provides two separate recommendations, one for the input buffer 
size and one for the output buffer size: 

!image-2020-06-30-11-39-17-861.png!

 

*Workaround*

One workaround: use the recommended input/output buffer sizes provided by the zstd 
library. 

input buffer size: 131072 (128 * 1024)

output buffer size: 131591 
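
For illustration only, a sketch of the shape of the workaround, not the actual 
Hadoop patch: size the uncompressed (input) and compressed (output) direct buffers 
separately, using the two values reported above, instead of reusing the output 
recommendation for both. Field names only mimic ZStandardCompressor's style; in a 
real fix the sizes should come from what the native zstd library reports.

import java.nio.ByteBuffer;

public class SeparateZstdBuffersSketch {
  // zstd's recommended streaming input buffer size (128 KiB), as observed
  // with zstd 1.3.3 in our environment.
  private static final int IN_BUFFER_SIZE = 128 * 1024;   // 131072
  // zstd's recommended streaming output buffer size, as observed.
  private static final int OUT_BUFFER_SIZE = 131591;

  // Size the two direct buffers independently rather than using one value
  // (the output recommendation) for both, which is what triggers the error.
  private final ByteBuffer uncompressedDirectBuf =
      ByteBuffer.allocateDirect(IN_BUFFER_SIZE);
  private final ByteBuffer compressedDirectBuf =
      ByteBuffer.allocateDirect(OUT_BUFFER_SIZE);
}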



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-13288) Why we don't add a harder lease expiration limit.

2018-03-14 Thread Igloo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo resolved HDFS-13288.
--
Resolution: Invalid

> Why we don't add a harder lease expiration limit.
> -
>
> Key: HDFS-13288
> URL: https://issues.apache.org/jira/browse/HDFS-13288
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.5
>Reporter: Igloo
>Priority: Minor
>
> Currently there is a soft lease expiry timeout (1 minute by default) and a hard 
> expiry timeout (60 minutes by default). 
> In our production environment, a client began writing a file a long time (more 
> than one year) ago. When writing finished and the client tried to close the 
> output stream, the close failed (with some IOException, etc.). But the client 
> process is a background service and never exits, so the lease has not been 
> released for more than a year.
> The problem is that the lease on the file stays occupied, so we have to call 
> recoverLease on the file before decommissioning or appending.
>  
> So I am wondering why we don't add an even harder lease expiry timeout: when a 
> lease lasts too long (maybe one month), revoke it. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13288) Why we don't add a harder lease expiration limit.

2018-03-14 Thread Igloo (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16399878#comment-16399878
 ] 

Igloo commented on HDFS-13288:
--

[~vinayrpet] 

Got your point: "Namenode renews the lease for whole client, not per file."

You are right, thanks!

 

 

> Why we don't add a harder lease expiration limit.
> -
>
> Key: HDFS-13288
> URL: https://issues.apache.org/jira/browse/HDFS-13288
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.5
>Reporter: Igloo
>Priority: Minor
>
> Currently there is a soft lease expiry timeout (1 minute by default) and a hard 
> expiry timeout (60 minutes by default). 
> In our production environment, a client began writing a file a long time (more 
> than one year) ago. When writing finished and the client tried to close the 
> output stream, the close failed (with some IOException, etc.). But the client 
> process is a background service and never exits, so the lease has not been 
> released for more than a year.
> The problem is that the lease on the file stays occupied, so we have to call 
> recoverLease on the file before decommissioning or appending.
>  
> So I am wondering why we don't add an even harder lease expiry timeout: when a 
> lease lasts too long (maybe one month), revoke it. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13288) Why we don't add a harder lease expiration limit.

2018-03-14 Thread Igloo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo updated HDFS-13288:
-
Description: 
Currently there is a soft lease expiry timeout (1 minute by default) and a hard 
expiry timeout (60 minutes by default). 

In our production environment, a client began writing a file a long time (more 
than one year) ago. When writing finished and the client tried to close the output 
stream, the close failed (with some IOException, etc.). But the client process is 
a background service and never exits, so the lease has not been released for more 
than a year.

The problem is that the lease on the file stays occupied, so we have to call 
recoverLease on the file before decommissioning or appending.
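
For context, a minimal sketch of the manual recovery we fall back to today 
(assuming fs.defaultFS points at the cluster and the caller has permission on the 
file); recoverLease() may need to be retried until it reports the file as closed.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class ForceLeaseRecovery {
  public static void main(String[] args) throws Exception {
    Path stuckFile = new Path(args[0]);
    // Assumes the default file system is HDFS.
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    // Ask the NameNode to start lease recovery; true means the file is now
    // closed and the lease has been released.
    boolean closed = dfs.recoverLease(stuckFile);
    while (!closed) {
      Thread.sleep(1000L);   // give block recovery a moment to finish
      closed = dfs.recoverLease(stuckFile);
    }
    System.out.println("Lease released for " + stuckFile);
  }
}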

 

So I am wondering why we don't add an even harder lease expiry timeout: when a 
lease lasts too long (maybe one month), revoke it. 

 

  was:
Currently there is a soft lease expiry timeout (1 minute by default) and a hard 
expiry timeout (60 minutes by default). 

In our production environment, a client began writing a file a long time (more 
than one year) ago. When writing finished and the client tried to close the output 
stream, the close failed (with some IOException, etc.). But the client process is 
a background service and never exits, so the lease has not been released for more 
than a year.

The problem is that the lease on the file stays occupied; we have to call 
recoverLease on the file.

So I am wondering why we don't add an even harder lease expiry timeout: when a 
lease lasts too long (maybe one month), revoke it. 

 


> Why we don't add a harder lease expiration limit.
> -
>
> Key: HDFS-13288
> URL: https://issues.apache.org/jira/browse/HDFS-13288
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.5
>Reporter: Igloo
>Priority: Minor
>
> Currently there is a soft lease expiry timeout (1 minute by default) and a hard 
> expiry timeout (60 minutes by default). 
> In our production environment, a client began writing a file a long time (more 
> than one year) ago. When writing finished and the client tried to close the 
> output stream, the close failed (with some IOException, etc.). But the client 
> process is a background service and never exits, so the lease has not been 
> released for more than a year.
> The problem is that the lease on the file stays occupied, so we have to call 
> recoverLease on the file before decommissioning or appending.
>  
> So I am wondering why we don't add an even harder lease expiry timeout: when a 
> lease lasts too long (maybe one month), revoke it. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13288) Why we don't add a harder lease expiration limit.

2018-03-14 Thread Igloo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igloo updated HDFS-13288:
-
Description: 
Currently there is a soft lease expiry timeout (1 minute by default) and a hard 
expiry timeout (60 minutes by default). 

In our production environment, a client began writing a file a long time (more 
than one year) ago. When writing finished and the client tried to close the output 
stream, the close failed (with some IOException, etc.). But the client process is 
a background service and never exits, so the lease has not been released for more 
than a year.

The problem is that the lease on the file stays occupied; we have to call 
recoverLease on the file.

So I am wondering why we don't add an even harder lease expiry timeout: when a 
lease lasts too long (maybe one month), revoke it. 

 

  was:
Currently there is a soft lease expiry timeout (1 minute by default) and a hard 
expiry timeout (60 minutes by default). 

In our production environment, a client began writing a file a long time (more 
than one year) ago. When writing finished and the client tried to close the output 
stream, the close failed (with some IOException, etc.). But the client process is 
a background service and never exits, so the lease has not been released for more 
than a year.

The problem is that the lease on the file stays occupied; we have to call 
recoverLease on the file.

So I am wondering why we don't add an even harder lease expiry timeout: when a 
lease lasts too long (maybe one month), revoke it. 

 


> Why we don't add a harder lease expiration limit.
> -
>
> Key: HDFS-13288
> URL: https://issues.apache.org/jira/browse/HDFS-13288
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.5
>Reporter: Igloo
>Priority: Minor
>
> Currently there is a soft lease expiry timeout (1 minute by default) and a hard 
> expiry timeout (60 minutes by default). 
> In our production environment, a client began writing a file a long time (more 
> than one year) ago. When writing finished and the client tried to close the 
> output stream, the close failed (with some IOException, etc.). But the client 
> process is a background service and never exits, so the lease has not been 
> released for more than a year.
> The problem is that the lease on the file stays occupied; we have to call 
> recoverLease on the file.
> So I am wondering why we don't add an even harder lease expiry timeout: when a 
> lease lasts too long (maybe one month), revoke it. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13288) Why we don't add a harder lease expiration limit.

2018-03-14 Thread Igloo (JIRA)
Igloo created HDFS-13288:


 Summary: Why we don't add a harder lease expiration limit.
 Key: HDFS-13288
 URL: https://issues.apache.org/jira/browse/HDFS-13288
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.6.5
Reporter: Igloo


Currently there is a soft lease expiry timeout (1 minute by default) and a hard 
expiry timeout (60 minutes by default). 

In our production environment, a client began writing a file a long time (more 
than one year) ago. When writing finished and the client tried to close the output 
stream, the close failed (with some IOException, etc.). But the client process is 
a background service and never exits, so the lease has not been released for more 
than a year.

The problem is that the lease on the file stays occupied; we have to call 
recoverLease on the file.

So I am wondering why we don't add an even harder lease expiry timeout: when a 
lease lasts too long (maybe one month), revoke it. 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org