Thanks Marton for the first release of Ozone.
+1 (non-binding)
- Verified the signature
- Built from source
- Tested ozone shell and ozone fs commands using Robot
- Deployed pseudo ozone cluster and verified basic shell commands
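The signature/checksum verification step the voters describe can be sketched with a stand-in artifact (the file names below are illustrative, not the actual release artifacts):

```shell
# Stand-in for a downloaded release artifact
echo "release contents" > ozone-src.tar.gz

# Record and verify a SHA-256 checksum, as a release checker would
sha256sum ozone-src.tar.gz > ozone-src.tar.gz.sha256
sha256sum -c ozone-src.tar.gz.sha256   # prints "ozone-src.tar.gz: OK" on success

# A detached GPG signature would be checked similarly:
#   gpg --verify ozone-src.tar.gz.asc ozone-src.tar.gz
```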
> On Sep 26, 2018, at 10:29 AM, Shashikant Banerjee
> wrote:
>
Hi Marton,
+1 (non-binding)
1. Verified the signature.
2. Built from source.
3. Ran Robot tests to verify all RPC and REST commands.
4. Deployed a pseudo Ozone cluster and verified basic commands.
Thanks
Shashi
On 9/26/18, 8:26 AM, "Bharat Viswanadham" wrote:
Hi Marton,
Thank You for
Hi Wei-Chiu,
At the beginning, we noted HDFS-9260, which changed the structure of the
stored blocks. With repeated block additions/removals under this structure,
there will be some performance degradation.
But we think this will reach a stable state, and HDFS-9260 isn't the root cause
from our point of view.
Yiqun,
Is this related to HDFS-9260?
Note that HDFS-9260 was backported since CDH5.7 and above.
I'm interested to learn more. Did you observe clients failing to close files
due to an insufficient number of block replicas? Did the NN fail over?
Did you have GC logging enabled? Any chance to take a heap dump?
Hi Kihwal, following are my responses.
Yes, we are using CMS: young gen size 36 GB (maybe a little large), old gen 84
GB (0.75 occupancy threshold triggering full GC), total heap size 120 GB. The NN
does a young gen collection every 30~40s.
Thanks Yiqun
From: Kihwal Lee [mailto:kih...@oath.com]
Sent:
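As an illustration only, the heap layout Yiqun describes (120 GB heap, 36 GB young gen, CMS with a 0.75 old-gen occupancy trigger) would correspond roughly to NameNode JVM options like the following; the exact flags and log path are my assumption, not the poster's actual configuration:

```shell
# Illustrative HADOOP_NAMENODE_OPTS matching the sizes quoted in the mail
HADOOP_NAMENODE_OPTS="-Xms120g -Xmx120g \
  -XX:NewSize=36g -XX:MaxNewSize=36g \
  -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=75 \
  -XX:+UseCMSInitiatingOccupancyOnly \
  -Xloggc:/var/log/hadoop/nn-gc.log \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
```

GC logging (the last three flags) is what would answer Kihwal's question about collection frequency when the NN slows down.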
Correction: My vote is NON-BINDING. Sorry for the confusion.
Thanks,
Hanisha
On 9/25/18, 7:29 PM, "Hanisha Koneru" wrote:
>Thanks Marton for putting together the first RC for Ozone.
>
>+1 (binding)
>
>Verified the following:
> - Verified the signature
> - Built from Source
> -
Hi Marton,
Thank You for the first ozone release.
+1 (non-binding)
1. Verified signatures.
2. Built from source.
3. Ran a docker cluster using the docker files from the Ozone tarball. Tested
ozone shell commands.
4. Ran an ozone-hdfs cluster and verified Ozone is started as a plugin when the
datanode boots.
Thanks Marton for putting together the first RC for Ozone.
+1 (binding)
Verified the following:
- Verified the signature
- Built from Source
- Deployed Pseudo HDFS and Ozone clusters and verified basic operations
Thanks,
Hanisha
On 9/19/18, 2:49 PM, "Elek, Marton" wrote:
>Hi
Tsz Wo Nicholas Sze created HDDS-554:
Summary: In XceiverClientSpi, implements sendCommand(..) using
sendCommandAsync(..)
Key: HDDS-554
URL: https://issues.apache.org/jira/browse/HDDS-554
Nilotpal Nandi created HDDS-553:
---
Summary: copyFromLocal subcommand failed
Key: HDDS-553
URL: https://issues.apache.org/jira/browse/HDDS-553
Project: Hadoop Distributed Data Store
Issue Type:
Ajay Kumar created HDFS-13941:
-
Summary: make storageId in BlockPoolTokenSecretManager.checkAccess
optional
Key: HDFS-13941
URL: https://issues.apache.org/jira/browse/HDFS-13941
Project: Hadoop HDFS
Are you using CMS? How big is young gen?
How often does the NN do young gen collection when it is slow?
On Tue, Sep 25, 2018 at 4:04 AM Lin,Yiqun(vip.com)
wrote:
> Hi hdfs developers:
>
> We meet a bad problem after rolling upgrade our hadoop version from
> 2.5.0-cdh5.3.2 to 2.6.0-cdh5.13.1.
Hi Marton,
+1 (binding)
1. Verified the signature.
2. Verified the checksums (MD5 and SHA*).
3. Built from source.
4. Ran all RPC and REST commands against the cluster via Robot.
5. Tested the OzoneFS functionality.
Thank you very much for creating the first release of Ozone.
--Anu
On 9/19/18,
For more details, see
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/907/
[Sep 24, 2018 6:50:28 AM] (sunilg) YARN-8742. [UI2] Container logs on
Application / Service pages on UI2
[Sep 24, 2018 3:49:47 PM] (sunilg) HDFS-13937. Multipart Uploader APIs to be
marked as
[
https://issues.apache.org/jira/browse/HDFS-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Steve Loughran resolved HDFS-8878.
--
Resolution: Duplicate
> An HDFS built-in DistCp
>
>
>
Steve Loughran created HDFS-13940:
-
Summary: Implement Multipart-aware discp-equivalent
Key: HDFS-13940
URL: https://issues.apache.org/jira/browse/HDFS-13940
Project: Hadoop HDFS
Issue Type:
Shashikant Banerjee created HDDS-552:
Summary: Partial Block Commits should happen via Ratis while
closing the container in case of no failures
Key: HDDS-552
URL:
Shashikant Banerjee created HDDS-551:
Summary: Fix the close container status check in
CloseContainerCommandHandler
Key: HDDS-551
URL: https://issues.apache.org/jira/browse/HDDS-551
Project:
Shashikant Banerjee created HDDS-550:
Summary: Synchronize PutBlock calls per Container in
ContainerStateMachine
Key: HDDS-550
URL: https://issues.apache.org/jira/browse/HDDS-550
Project: Hadoop
Takanobu Asanuma created HDFS-13939:
---
Summary: [JDK10] Javadoc build fails on JDK 10 in
hadoop-hdfs-project
Key: HDFS-13939
URL: https://issues.apache.org/jira/browse/HDFS-13939
Project: Hadoop
Hi hdfs developers:
We met a bad problem after rolling upgrading our Hadoop version from
2.5.0-cdh5.3.2 to 2.6.0-cdh5.13.1. The problem is that we find the NN running
slow periodically (around once a week). Concretely, for example, we start up
the NN on Monday and it runs fast. But time coming to
Namit Maheshwari created HDDS-549:
-
Summary: Documentation for key rename is missing in keycommands.md
Key: HDDS-549
URL: https://issues.apache.org/jira/browse/HDDS-549
Project: Hadoop Distributed