[
https://issues.apache.org/jira/browse/HADOOP-6768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12874572#action_12874572
]
Xiao Kang commented on HADOOP-6768:
---
Another case found for this issue.
In mapreduce t
[
https://issues.apache.org/jira/browse/HADOOP-6768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Kang updated HADOOP-6768:
--
Attachment: HADOOP-6768.patch
Patch attached.
A. change org.apache.hadoop.ipc.Client
1. add call.set
RPC client can respond more efficiently when sendParam() gets an IOException
-
Key: HADOOP-6768
URL: https://issues.apache.org/jira/browse/HADOOP-6768
Project: Hadoop Common
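A minimal sketch of the idea in the patch description above, assuming a simplified Call object; the names below (SendParamSketch, RequestWriter, waitForCompletion) are illustrative stand-ins, not the actual org.apache.hadoop.ipc.Client code. The point is that a sendParam() that hits an IOException marks the call finished with that exception, so the waiting thread returns immediately instead of blocking until the RPC timeout.

import java.io.IOException;

public class SendParamSketch {

    interface RequestWriter {
        void write() throws IOException;   // serialize the call and write it to the socket
    }

    static class Call {
        private boolean done;
        private IOException error;

        synchronized void setException(IOException e) {
            error = e;
            done = true;
            notifyAll();                   // wake the thread blocked in waitForCompletion()
        }

        synchronized void waitForCompletion() throws IOException, InterruptedException {
            while (!done) {
                wait();                    // without the early wake-up, this blocks until timeout
            }
            if (error != null) {
                throw error;
            }
        }
    }

    static void sendParam(Call call, RequestWriter writer) {
        try {
            writer.write();
        } catch (IOException e) {
            call.setException(e);          // fail fast instead of leaving the call pending
        }
    }
}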
[
https://issues.apache.org/jira/browse/HADOOP-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12855353#action_12855353
]
Xiao Kang commented on HADOOP-6683:
---
This patch does not add any new function and the tes
[
https://issues.apache.org/jira/browse/HADOOP-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Kang updated HADOOP-6683:
--
Release Note: Improve the buffer utilization of ZlibCompressor to avoid
invoking a JNI call per write request
[
https://issues.apache.org/jira/browse/HADOOP-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12854325#action_12854325
]
Xiao Kang commented on HADOOP-6687:
---
Which revision is the patch against? Since no such
[
https://issues.apache.org/jira/browse/HADOOP-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12854316#action_12854316
]
Xiao Kang commented on HADOOP-6683:
---
A comparison test was performed on a 1.8GB web log
[
https://issues.apache.org/jira/browse/HADOOP-6663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Kang updated HADOOP-6663:
--
Release Note: Fix EOF exception in BlockDecompressorStream when
decompressing a previously compressed empty file
[
https://issues.apache.org/jira/browse/HADOOP-6663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Kang updated HADOOP-6663:
--
Attachment: BlockDecompressorStream.java.patch
Thank you for your advice.
New patch attached with ap
[
https://issues.apache.org/jira/browse/HADOOP-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Kang updated HADOOP-6662:
--
Resolution: Duplicate
Status: Resolved (was: Patch Available)
Duplicate of HADOOP-6683.
>
[
https://issues.apache.org/jira/browse/HADOOP-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Kang updated HADOOP-6683:
--
Attachment: ZlibCompressor.java.patch
Patch attached.
> the first optimization: ZlibCompressor does n
[
https://issues.apache.org/jira/browse/HADOOP-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853692#action_12853692
]
Xiao Kang commented on HADOOP-4196:
---
Thank you. Subtask HADOOP-6683 created.
> Possibl
the first optimization: ZlibCompressor does not fully utilize the buffer
Key: HADOOP-6683
URL: https://issues.apache.org/jira/browse/HADOOP-6683
Project: Hadoop Common
[
https://issues.apache.org/jira/browse/HADOOP-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12853155#action_12853155
]
Xiao Kang commented on HADOOP-6672:
---
The profiling data was obtained from an internal version c
[
https://issues.apache.org/jira/browse/HADOOP-6663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Kang updated HADOOP-6663:
--
Attachment: BlockDecompressorStream.java.patch
New patch attached, including a test case.
> BlockDecomp
[
https://issues.apache.org/jira/browse/HADOOP-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12852833#action_12852833
]
Xiao Kang commented on HADOOP-6672:
---
Sun JDK's implementation of writeInt() writeLong()
[
https://issues.apache.org/jira/browse/HADOOP-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12852829#action_12852829
]
Xiao Kang commented on HADOOP-6672:
---
Comparing the two screenshots as follows:
1. In scree
[
https://issues.apache.org/jira/browse/HADOOP-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Kang updated HADOOP-6672:
--
Attachment: screenshot-2.jpg
screenshot-2, after applying the patch
> BytesWritable.write(buf) uses much
[
https://issues.apache.org/jira/browse/HADOOP-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Kang updated HADOOP-6672:
--
Attachment: BytesWritable.java.patch
Patch attached.
Transfer the int to a four-byte buffer and use
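A rough sketch of the approach described above (not the attached BytesWritable.java.patch; the class and method names are illustrative): pack the length into a reusable four-byte buffer and emit it with a single write(byte[]) instead of DataOutput.writeInt(), which in the Sun JDK writes the int one byte at a time.

import java.io.DataOutput;
import java.io.IOException;

public class LengthPrefixWriter {

    private final byte[] lengthBytes = new byte[4];   // reused across calls

    public void writeLengthAndBytes(DataOutput out, byte[] data, int size)
            throws IOException {
        // Big-endian encoding, matching DataOutput.writeInt().
        lengthBytes[0] = (byte) (size >>> 24);
        lengthBytes[1] = (byte) (size >>> 16);
        lengthBytes[2] = (byte) (size >>> 8);
        lengthBytes[3] = (byte) size;
        out.write(lengthBytes, 0, 4);                 // one call instead of four single-byte writes
        out.write(data, 0, size);
    }
}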
[
https://issues.apache.org/jira/browse/HADOOP-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12852715#action_12852715
]
Xiao Kang commented on HADOOP-4196:
---
Thanks to Hong Tang for noticing the duplication of anothe
[
https://issues.apache.org/jira/browse/HADOOP-6672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Kang updated HADOOP-6672:
--
Attachment: screenshot-1.jpg
A YJP screenshot attached.
Comparing as follows:
BytesWritable.write(
BytesWritable.write(buf) uses much more CPU in writeInt() than write(buf)
Key: HADOOP-6672
URL: https://issues.apache.org/jira/browse/HADOOP-6672
Project: Hadoop Common
[
https://issues.apache.org/jira/browse/HADOOP-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Kang updated HADOOP-6662:
--
Status: Patch Available (was: Reopened)
> hadoop zlib compression does not fully utilize the buffer
>
[
https://issues.apache.org/jira/browse/HADOOP-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Kang reopened HADOOP-6662:
---
Thanks to Hong Tang. The patch is not attached in HADOOP-4196 and the issue is
still unresolved in release
[
https://issues.apache.org/jira/browse/HADOOP-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Kang updated HADOOP-6662:
--
Attachment: ZlibCompressor.patch
Patch attached.
needsInput() checks the uncompressedDirectBuf; if it
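A minimal sketch of the buffering idea, assuming a direct buffer named uncompressedDirectBuf as in ZlibCompressor; the surrounding compressor class and the JNI deflate call are omitted, so this is illustrative rather than the attached ZlibCompressor.patch.

import java.nio.ByteBuffer;

public class BufferingCompressorSketch {

    private final ByteBuffer uncompressedDirectBuf = ByteBuffer.allocateDirect(64 * 1024);

    // Report that more input is wanted while the direct buffer still has room,
    // so small writes accumulate in the buffer instead of each triggering a
    // JNI deflate call of its own.
    public boolean needsInput() {
        return uncompressedDirectBuf.remaining() > 0;
    }

    // Copy caller bytes into the direct buffer, up to its remaining capacity;
    // the caller re-offers the rest once the buffer has been drained by deflate.
    public int setInput(byte[] b, int off, int len) {
        int toCopy = Math.min(len, uncompressedDirectBuf.remaining());
        uncompressedDirectBuf.put(b, off, toCopy);
        return toCopy;
    }
}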
[
https://issues.apache.org/jira/browse/HADOOP-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12851277#action_12851277
]
Xiao Kang commented on HADOOP-6662:
---
Thank you for your notice! However, no patch attache
[
https://issues.apache.org/jira/browse/HADOOP-6663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiao Kang updated HADOOP-6663:
--
Attachment: BlockDecompressorStream.patch
Patch attached.
Return -1 to indicate EOF when reading a 0
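A small sketch of the fix idea only, not the attached BlockDecompressorStream.patch: when the block header says the original data length is 0 (the empty-file case), report end-of-stream instead of attempting to read compressed data that was never written. The helper name below is hypothetical.

import java.io.DataInputStream;
import java.io.IOException;

public class EmptyBlockHandlingSketch {

    // Returns the original (uncompressed) length of the next block,
    // or -1 when the length prefix is 0, i.e. the source file was empty.
    static int readBlockLength(DataInputStream rawIn) throws IOException {
        int originalLen = rawIn.readInt();   // length prefix written by the block compressor
        if (originalLen == 0) {
            return -1;                       // signal EOF instead of reading a missing chunk
        }
        return originalLen;
    }
}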
[
https://issues.apache.org/jira/browse/HADOOP-6663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12850835#action_12850835
]
Xiao Kang commented on HADOOP-6663:
---
The EOF exception is caused as follows:
BlockCompresso
BlockDecompressorStream get EOF exception when decompressing the file
compressed from empty file
Key: HADOOP-6663
URL: https://issues.apache.org/jira/browse/HADOOP-66
hadoop zlib compression does not fully utilize the buffer
-
Key: HADOOP-6662
URL: https://issues.apache.org/jira/browse/HADOOP-6662
Project: Hadoop Common
Issue Type: Improvement