[jira] [Commented] (HDDS-1175) Serve read requests directly from RocksDB

2019-05-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831257#comment-16831257
 ] 

Anu Engineer commented on HDDS-1175:


[~hanishakoneru] Sorry for commenting so late. I have not been looking at the HA 
patches. I have a concern here.

bq. On OM leader, we run a periodic role check to verify its leader status.
This means that, at the end of the day, it is possible that we do not "know" 
for sure whether we are the leader. This suffers from a time-of-check vs. 
time-of-use problem: one OM might think that it is the leader when it really is 
not.

Many other systems have used a notion of a "Leader Lease" to avoid this problem. 
Another way I have been thinking of to solve this issue is to read from any 2 
nodes, and if the values of the key do not agree, use the later version of the 
key.

Without one of these approaches, OM HA will weaken the strict serializability 
guarantees of the current OM (that is, OM without HA). Thought I would flag 
this here for your consideration.
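
To make the second idea concrete, here is a minimal sketch of the reconciliation 
step, assuming a hypothetical versioned value (the VersionedValue class and its 
version field are illustrative stand-ins for whatever update/transaction id the 
OM key metadata carries; this is not the OM client API):

{code:java}
/** Illustration only: read the same key from two OMs and keep the later version. */
public final class TwoNodeRead {

  /** Hypothetical value-plus-version pair; a stand-in for OM key metadata. */
  public static final class VersionedValue {
    final byte[] value;
    final long version;   // stand-in for an update/transaction id on the key

    public VersionedValue(byte[] value, long version) {
      this.value = value;
      this.version = version;
    }
  }

  /** If the two reads disagree, prefer the later version; if they agree, either copy is fine. */
  public static VersionedValue resolve(VersionedValue a, VersionedValue b) {
    if (a == null) {
      return b;
    }
    if (b == null) {
      return a;
    }
    return a.version >= b.version ? a : b;
  }

  public static void main(String[] args) {
    VersionedValue stale = new VersionedValue("old".getBytes(), 7);
    VersionedValue fresh = new VersionedValue("new".getBytes(), 9);
    System.out.println(new String(resolve(stale, fresh).value));   // prints "new"
  }
}
{code}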


> Serve read requests directly from RocksDB
> -
>
> Key: HDDS-1175
> URL: https://issues.apache.org/jira/browse/HDDS-1175
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1175.001.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We can serve read requests directly from the OM's RocksDB instead of going 
> through the Ratis server. OM should first check its role, and only if it is 
> the leader can it serve read requests. 
> There can be a scenario where an OM loses its leader status but does not know 
> about the new election in the ring. This OM could serve stale reads for the 
> duration of the heartbeat timeout, but this should be acceptable (similar to 
> how the Standby Namenode could possibly serve stale reads until it figures out 
> the new status).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1478) Provide k8s resources files for prometheus and performance tests

2019-05-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16831156#comment-16831156
 ] 

Anu Engineer commented on HDDS-1478:


+1, thank you for getting this done. Please feel free to commit.

> Provide k8s resources files for prometheus and performance tests
> 
>
> Key: HDDS-1478
> URL: https://issues.apache.org/jira/browse/HDDS-1478
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Similar to HDDS-1412, we can further improve the available k8s resources by 
> providing example resources to:
> 1) install prometheus
> 2) execute freon test and check the results.






[jira] [Updated] (HDDS-973) HDDS/Ozone fail to build on Windows

2019-04-29 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-973:
--
Fix Version/s: (was: 0.4.0)

> HDDS/Ozone fail to build on Windows
> ---
>
> Key: HDDS-973
> URL: https://issues.apache.org/jira/browse/HDDS-973
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Xiaoyu Yao
>Priority: Major
> Attachments: HDDS-973.001.patch
>
>
> Thanks [~Sammi] for reporting the issue on building hdds/ozone with Windows 
> OS. I can repro it locally and will post a fix shortly. 






[jira] [Updated] (HDDS-973) HDDS/Ozone fail to build on Windows

2019-04-29 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-973:
--
Target Version/s: 0.5.0  (was: 0.4.0)

> HDDS/Ozone fail to build on Windows
> ---
>
> Key: HDDS-973
> URL: https://issues.apache.org/jira/browse/HDDS-973
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-973.001.patch
>
>
> Thanks [~Sammi] for reporting the issue on building hdds/ozone with Windows 
> OS. I can repro it locally and will post a fix shortly. 






[jira] [Reopened] (HDDS-973) HDDS/Ozone fail to build on Windows

2019-04-29 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reopened HDDS-973:
---

This commit breaks the documentation build on ozone-0.4.0; thanks to [~ajayydv] 
for finding and reverting it. [~elek], care to take a look?

Here is the error message on macOS:

{noformat}
[INFO] --- exec-maven-plugin:1.6.0:exec (default) @ hadoop-hdds-docs ---
Error: unknown command "0.4.0-SNAPSHOT" for "hugo"
Run 'hugo --help' for usage.
/Users/aengineer/diskBalancer/hadoop-hdds/docs/target
{noformat}

> HDDS/Ozone fail to build on Windows
> ---
>
> Key: HDDS-973
> URL: https://issues.apache.org/jira/browse/HDDS-973
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-973.001.patch
>
>
> Thanks [~Sammi] for reporting the issue on building hdds/ozone with Windows 
> OS. I can repro it locally and will post a fix shortly. 






[jira] [Commented] (HDDS-1452) All chunk writes should happen to a single file for a block in datanode

2019-04-26 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16827159#comment-16827159
 ] 

Anu Engineer commented on HDDS-1452:


I agree it is not orthogonal. I was thinking we can skip step one completely if 
we do the second one, since the code changes are in exactly the same place. 
Most object stores and file systems use extent-based allocation and writes, and 
Ozone would benefit from moving to some kind of extent-based system. In fact, 
it would be best if we could allocate extents on SSD, keep the data in those 
extents for 24 hours, and move it to spinning disks later. This is similar to 
what ZFS does, and you automatically get SSD caching. If you are writing to a 
spinning disk, all writes are sequential, which increases the write speed.
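
A rough sketch of the extent idea, purely for illustration (the ExtentWriter 
name, the directory layout, and the 24-hour window are assumptions, not existing 
datanode code): chunks are appended sequentially to one large extent file created 
on an SSD directory, and the extent is moved to a spinning-disk directory once it 
is old enough.

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.Duration;
import java.time.Instant;

/** Illustrative extent writer: sequential appends on an SSD tier, migrated to HDD later. */
public class ExtentWriter implements AutoCloseable {
  private final Path hddDir;
  private final Path extentFile;
  private final FileChannel channel;
  private final Instant createdAt = Instant.now();

  public ExtentWriter(Path ssdDir, Path hddDir) throws IOException {
    this.hddDir = Files.createDirectories(hddDir);
    Files.createDirectories(ssdDir);
    this.extentFile = ssdDir.resolve("extent-" + System.nanoTime() + ".data");
    this.channel = FileChannel.open(extentFile,
        StandardOpenOption.CREATE_NEW, StandardOpenOption.APPEND);
  }

  /** Append one chunk sequentially; the returned offset is what block metadata would record. */
  public synchronized long append(byte[] chunk) throws IOException {
    long offset = channel.size();
    channel.write(ByteBuffer.wrap(chunk));
    return offset;
  }

  /** Move the extent to the spinning-disk directory once it is older than the given age. */
  public synchronized Path migrateIfOlderThan(Duration age) throws IOException {
    if (Instant.now().isBefore(createdAt.plus(age))) {
      return extentFile;                     // still hot, keep it on the SSD tier
    }
    channel.close();
    return Files.move(extentFile, hddDir.resolve(extentFile.getFileName()));
  }

  @Override
  public synchronized void close() throws IOException {
    channel.close();
  }
}
{code}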

> All chunk writes should happen to a single file for a block in datanode
> ---
>
> Key: HDDS-1452
> URL: https://issues.apache.org/jira/browse/HDDS-1452
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.5.0
>
>
> Currently, all chunks of a block are written to individual chunk files in the 
> datanode. The idea here is to write all the individual chunks to a single file 
> in the datanode.






[jira] [Commented] (HDDS-1454) GC other system pause events can trigger pipeline destroy for all the nodes in the cluster

2019-04-23 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16824203#comment-16824203
 ] 

Anu Engineer commented on HDDS-1454:


One of the hard-learned lessons from my previous job is that systems like SCM 
should not make massive changes. Say we are closing down more than 30% of all 
pipelines: emit a warning and wait for human intervention, or slow down to a 
degree that the close is controlled. In fact, we should have a pipeline/container 
close rate, that is, we will not do more than x amount per unit time. It is also 
good to have a big red button, so that if the system gets into this state, the 
admin has the ability to stop this activity by SCM.
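
As a hedged sketch of that kind of guard rail (the 30% threshold, the Guava 
RateLimiter, and the "big red button" flag below are illustrative assumptions, 
not anything that exists in SCM today):

{code:java}
import com.google.common.util.concurrent.RateLimiter;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

/** Illustrative guard around destructive SCM actions such as closing pipelines. */
public class PipelineCloseGuard {
  private final RateLimiter closeRate;          // max closes per second
  private final double maxFractionPerDecision;  // e.g. 0.3 == 30% of all pipelines
  private final AtomicBoolean adminStop = new AtomicBoolean(false); // the "big red button"

  public PipelineCloseGuard(double closesPerSecond, double maxFractionPerDecision) {
    this.closeRate = RateLimiter.create(closesPerSecond);
    this.maxFractionPerDecision = maxFractionPerDecision;
  }

  public void pressBigRedButton() { adminStop.set(true); }

  /** Close pipelines only if the batch is small enough, the admin has not stopped us,
   *  and we stay under the configured close rate. Returns how many were actually closed. */
  public int closeAll(List<String> toClose, int totalPipelines, PipelineCloser closer) {
    if (adminStop.get()) {
      System.err.println("Admin stop is set; refusing to close pipelines.");
      return 0;
    }
    if (totalPipelines > 0 && (double) toClose.size() / totalPipelines > maxFractionPerDecision) {
      System.err.println("Refusing to close " + toClose.size() + " of " + totalPipelines
          + " pipelines in one decision; waiting for human intervention.");
      return 0;
    }
    int closed = 0;
    for (String pipelineId : toClose) {
      closeRate.acquire();          // throttle to the configured closes-per-second
      closer.close(pipelineId);
      closed++;
    }
    return closed;
  }

  /** Hypothetical callback standing in for the real pipeline manager. */
  public interface PipelineCloser {
    void close(String pipelineId);
  }
}
{code}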

> GC other system pause events can trigger pipeline destroy for all the nodes 
> in the cluster
> --
>
> Key: HDDS-1454
> URL: https://issues.apache.org/jira/browse/HDDS-1454
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.3.0
>Reporter: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> In a MiniOzoneChaosCluster run it was observed that events like GC pauses or 
> any other pauses in SCM can mark all the datanodes as stale in SCM. This will 
> trigger multiple pipeline destroy and will render the system unusable. 






[jira] [Commented] (HDDS-1301) Optimize recursive ozone filesystem apis

2019-04-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823567#comment-16823567
 ] 

Anu Engineer commented on HDDS-1301:


Thank you, I will sync with you, but I will write and post a design document 
that explains all these changes and the possible changes in the output 
committer. Then based on your feedback, we can shape the Ozone manager API.

> Optimize recursive ozone filesystem apis
> 
>
> Key: HDDS-1301
> URL: https://issues.apache.org/jira/browse/HDDS-1301
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1301.001.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This Jira aims to optimise recursive apis in ozone file system. These are the 
> apis which have a recursive flag which requires an operation to be performed 
> on all the children of the directory. The Jira would add support for 
> recursive apis in Ozone manager in order to reduce the number of rpc calls to 
> Ozone Manager. Also currently these operations are not atomic. This Jira 
> would make all the operations in ozone filesystem atomic.






[jira] [Commented] (HDDS-1452) All chunks should happen to a single file for a block in datanode

2019-04-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823407#comment-16823407
 ] 

Anu Engineer commented on HDDS-1452:


{quote}Why impose such a requirement? Doing EC of 1 KB files is probably a 
terrible idea even from the perspective of disk usage
{quote}
We will not be doing EC on the 1 KB files; we will be doing EC at the data-file 
level. That is, if you store lots of data into a data file, we can EC that file. 
The size of the individual keys is irrelevant from the Ozone point of view. HDDS 
can do the EC at the data-file level and be completely independent of the sizes 
in question.

 

Now if we have 1 GB of data – some arbitrary number that is large – then EC 
makes sense. This is one of the advantages of Ozone over HDFS.
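
To make the point concrete with a toy example (single XOR parity instead of a 
real Reed-Solomon codec; the stripe and cell sizes are arbitrary assumptions): 
the parity below is computed over fixed-size stripes of a container's data file, 
so the size of the individual keys never enters the picture.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/** Toy XOR (single-parity) coding over a container data file, independent of key sizes. */
public final class FileLevelParity {

  /** One parity cell per stripe of k data cells; a short final stripe is zero-padded. */
  public static byte[] xorParity(byte[] data, int k, int cellSize) {
    int stripeSize = k * cellSize;
    int stripes = (data.length + stripeSize - 1) / stripeSize;
    byte[] parity = new byte[stripes * cellSize];
    for (int s = 0; s < stripes; s++) {
      for (int cell = 0; cell < k; cell++) {
        int base = s * stripeSize + cell * cellSize;
        for (int i = 0; i < cellSize && base + i < data.length; i++) {
          parity[s * cellSize + i] ^= data[base + i];
        }
      }
    }
    return parity;
  }

  public static void main(String[] args) throws IOException {
    Path dataFile = Paths.get(args[0]);               // e.g. a closed container's data file
    byte[] parity = xorParity(Files.readAllBytes(dataFile), 2, 1024 * 1024);
    Files.write(Paths.get(args[0] + ".parity"), parity);
  }
}
{code}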

> All chunks should happen to a single file for a block in datanode
> -
>
> Key: HDDS-1452
> URL: https://issues.apache.org/jira/browse/HDDS-1452
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.5.0
>
>
> Currently, all chunks of a block are written to individual chunk files in the 
> datanode. The idea here is to write all the individual chunks to a single file 
> in the datanode.






[jira] [Commented] (HDDS-1452) All chunks should happen to a single file for a block in datanode

2019-04-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823378#comment-16823378
 ] 

Anu Engineer commented on HDDS-1452:


{quote}If a container is full of 1KB files it may not be a good candidate for 
Erasure Coding. If your entire 
{quote}
In the Ozone world, it should not matter, especially since we plan to EC at the 
level of data files or containers. The actual EC would not work on RocksDB, but 
it should work on all containers, irrespective of the data size of the actual 
keys.
{quote}cluster is full of 1KB files then we have other serious problems, of 
course.
{quote}
Hopefully, Ozone will just be able to handle this scenario; we might need 
many Ozone Managers, but a single SCM and a few datanodes. I am not advocating 
this model, but it is something that I am sure we will run into eventually, 
especially since we are an object store; the HDFS use case is different, but in 
the Ozone world I think we will have to be prepared for this eventuality.

> All chunks should happen to a single file for a block in datanode
> -
>
> Key: HDDS-1452
> URL: https://issues.apache.org/jira/browse/HDDS-1452
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.5.0
>
>
> Currently, all chunks of a block are written to individual chunk files in the 
> datanode. The idea here is to write all the individual chunks to a single file 
> in the datanode.






[jira] [Commented] (HDDS-1452) All chunks should happen to a single file for a block in datanode

2019-04-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823374#comment-16823374
 ] 

Anu Engineer commented on HDDS-1452:


Possibly; I don't know which would be the better option, one single large file or 
RocksDB. Either way, when we do this, we need to make sure that we do not end up 
with a single-block-to-single-file mapping. It is better to have the ability to 
control the data size of the files.

One downside of keeping the 1 KB files in RocksDB is that erasure coding might 
become harder: with separate data files we can take a closed container, erasure 
code all the data files, and leave the metadata in RocksDB. That is my only 
concern with leaving 1 KB values inside RocksDB; and also we will have to 
benchmark how it will work out.

 

> All chunks should happen to a single file for a block in datanode
> -
>
> Key: HDDS-1452
> URL: https://issues.apache.org/jira/browse/HDDS-1452
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.5.0
>
>
> Currently, all chunks of a block are written to individual chunk files in the 
> datanode. The idea here is to write all the individual chunks to a single file 
> in the datanode.






[jira] [Commented] (HDDS-1452) All chunks should happen to a single file for a block in datanode

2019-04-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823359#comment-16823359
 ] 

Anu Engineer commented on HDDS-1452:


Just a thought: would it make sense to write to data files until they become, 
say, 1 GB? That way we can direct any chunk write to a file until that file is 
large enough. This addresses the use case where we are writing, say, 1 KB Ozone 
keys. In the current proposal, if I write only 1 KB keys, would we end up with 
1 KB block files? Just a thought, since you are planning to address this issue.
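
A small sketch of that idea, under the assumption of a roughly 1 GB roll-over 
threshold (the class name and the on-disk naming are made up for illustration): 
every chunk write appends into the currently open data file and gets back an 
offset and length to store in metadata, so writing lots of 1 KB keys never 
produces 1 KB files on disk.

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Illustrative chunk appender that packs many small chunks into large data files. */
public class PackedChunkFile {
  private static final long ROLL_SIZE = 1L << 30;   // start a new file at ~1 GB

  private final Path dir;
  private FileChannel current;
  private Path currentPath;
  private int fileIndex = 0;

  public PackedChunkFile(Path dir) throws IOException {
    this.dir = Files.createDirectories(dir);
    roll();
  }

  /** Append one chunk; the returned (file, offset, length) is what metadata would record. */
  public synchronized ChunkLocation append(byte[] chunk) throws IOException {
    if (current.size() >= ROLL_SIZE) {
      roll();
    }
    long offset = current.size();
    current.write(ByteBuffer.wrap(chunk));
    return new ChunkLocation(currentPath, offset, chunk.length);
  }

  private void roll() throws IOException {
    if (current != null) {
      current.close();
    }
    currentPath = dir.resolve("block-data-" + (fileIndex++) + ".data");
    current = FileChannel.open(currentPath,
        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
  }

  /** Simple record of where a chunk landed. */
  public static final class ChunkLocation {
    public final Path file;
    public final long offset;
    public final int length;
    ChunkLocation(Path file, long offset, int length) {
      this.file = file; this.offset = offset; this.length = length;
    }
  }
}
{code}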

> All chunks should happen to a single file for a block in datanode
> -
>
> Key: HDDS-1452
> URL: https://issues.apache.org/jira/browse/HDDS-1452
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.5.0
>
>
> Currently, all chunks of a block are written to individual chunk files in the 
> datanode. The idea here is to write all the individual chunks to a single file 
> in the datanode.






[jira] [Commented] (HDDS-1425) Ozone compose files are not compatible with the latest docker-compose

2019-04-22 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823276#comment-16823276
 ] 

Anu Engineer commented on HDDS-1425:


We plan to release 0.4.1 and 0.4.2 soon, so this is helpful. Thanks for the 
patch and commit.

 

> Ozone compose files are not compatible with the latest docker-compose
> -
>
> Key: HDDS-1425
> URL: https://issues.apache.org/jira/browse/HDDS-1425
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.5.0, 0.4.1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> I upgraded my docker-compose to the latest available one (1.24.0)
> But after the upgrade I can't start the docker-compose based cluster any more:
> {code}
> ./test.sh 
> -
> Executing test(s): [basic]
>   Cluster type:  ozone
>   Compose file:  
> /home/elek/projects/hadoop-review/hadoop-ozone/dist/target/ozone-0.4.0-SNAPSHOT/smoketest/../compose/ozone/docker-compose.yaml
>   Output dir:
> /home/elek/projects/hadoop-review/hadoop-ozone/dist/target/ozone-0.4.0-SNAPSHOT/smoketest/result
>   Command to rerun:  ./test.sh --keep --env ozone basic
> -
> ERROR: In file 
> /home/elek/projects/hadoop-review/hadoop-ozone/dist/target/ozone-0.4.0-SNAPSHOT/compose/ozone/docker-config:
>  environment variable name 'LOG4J2.PROPERTIES_appender.rolling.file 
> {code}
> It turned out that the LOG4J2.PROPERTIES_appender.rolling.file line 
> contains an unnecessary space, which the latest docker-compose no longer 
> accepts.






[jira] [Commented] (HDDS-1301) Optimize recursive ozone filesystem apis

2019-04-18 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821611#comment-16821611
 ] 

Anu Engineer commented on HDDS-1301:


{quote}I think we need to move beyond thinking rename is a low cost way to 
commit work.
{quote}
Yes, 100% agree with that; thanks for the comment. (y)

> Optimize recursive ozone filesystem apis
> 
>
> Key: HDDS-1301
> URL: https://issues.apache.org/jira/browse/HDDS-1301
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1301.001.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This Jira aims to optimise recursive apis in ozone file system. These are the 
> apis which have a recursive flag which requires an operation to be performed 
> on all the children of the directory. The Jira would add support for 
> recursive apis in Ozone manager in order to reduce the number of rpc calls to 
> Ozone Manager. Also currently these operations are not atomic. This Jira 
> would make all the operations in ozone filesystem atomic.






[jira] [Comment Edited] (HDDS-1301) Optimize recursive ozone filesystem apis

2019-04-18 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821598#comment-16821598
 ] 

Anu Engineer edited comment on HDDS-1301 at 4/19/19 1:11 AM:
-

[~ljain], I am still looking at the code and just thinking aloud: if we support 
this feature and allow a bucket-level rename (I am sure Owen, Gopal, and Steve 
are going to be really happy), would we not get into a situation where an 
application can take a lock on a directory with millions of sub-keys? The added 
danger is that we have trained all our HDFS users that rename is an atomic and 
fast operation, so applications designed to work against HDFS will assume that 
rename is a cheap operation and attempt to do it. 

I am just wondering how long it would take to rename a directory with one 
million keys; or more specifically, if rename for us is an O( n ) operation and 
for normal file systems it is O(1), do we run into the danger of a DDoS of Ozone 
via rename? Should we support a completely different operation on the client 
side – for lack of a better word, I am going to call it "File Commit" – where 
the user can collect a set of file names and send them to us, with the prefix 
either already replaced or with advice to the server to replace the prefix? That 
way, we will still allow bucket operations to continue, and we will only be 
concerned with the list of Ozone keys that the client gave us. 

From the OzoneFS point of view, it might simply be renaming a small set of 
files under a directory (say 100K files, in a bucket with 10 million keys), so 
in the "File Commit" operation, the client specifies the file names explicitly 
or via regex, and we will collect all those files and replace the prefix. I am 
hoping that OzoneFS will be able to simulate a rename operation via "File 
Commit". What do you think? Should we get on a call and brainstorm this? 

 

 

 

 


was (Author: anu):
[~ljain], I am still looking at the code and just thinking aloud: if we support 
this feature and allow a bucket-level rename (I am sure Owen, Gopal, and Steve 
are going to be really happy), would we not get into a situation where an 
application can take a lock on a directory with millions of sub-keys? The added 
danger is that we have trained all our HDFS users that rename is an atomic and 
fast operation, so applications designed to work against HDFS will assume that 
rename is a cheap operation and attempt to do it. 

I am just wondering how long it would take to rename a directory with one 
million keys; or more specifically, if rename for us is an O(n) operation and 
for normal file systems it is O(1), do we run into the danger of a DDoS of Ozone 
via rename? Should we support a completely different operation on the client 
side – for lack of a better word, I am going to call it "File Commit" – where 
the user can collect a set of file names and send them to us, with the prefix 
either already replaced or with advice to the server to replace the prefix? That 
way, we will still allow bucket operations to continue, and we will only be 
concerned with the list of Ozone keys that the client gave us. 

From the OzoneFS point of view, it might simply be renaming a small set of 
files under a directory (say 100K files, in a bucket with 10 million keys), so 
in the "File Commit" operation, the client specifies the file names explicitly 
or via regex, and we will collect all those files and replace the prefix. I am 
hoping that OzoneFS will be able to simulate a rename operation via "File 
Commit". What do you think? Should we get on a call and brainstorm this? 

 

 

 

 

> Optimize recursive ozone filesystem apis
> 
>
> Key: HDDS-1301
> URL: https://issues.apache.org/jira/browse/HDDS-1301
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1301.001.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This Jira aims to optimise recursive apis in ozone file system. These are the 
> apis which have a recursive flag which requires an operation to be performed 
> on all the children of the directory. The Jira would add support for 
> recursive apis in Ozone manager in order to reduce the number of rpc calls to 
> Ozone Manager. Also currently these operations are not atomic. This Jira 
> would make all the operations in ozone filesystem atomic.






[jira] [Commented] (HDDS-1301) Optimize recursive ozone filesystem apis

2019-04-18 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821598#comment-16821598
 ] 

Anu Engineer commented on HDDS-1301:


[~ljain], I am still looking at the code and just thinking aloud: if we support 
this feature and allow a bucket-level rename (I am sure Owen, Gopal, and Steve 
are going to be really happy), would we not get into a situation where an 
application can take a lock on a directory with millions of sub-keys? The added 
danger is that we have trained all our HDFS users that rename is an atomic and 
fast operation, so applications designed to work against HDFS will assume that 
rename is a cheap operation and attempt to do it. 

I am just wondering how long it would take to rename a directory with one 
million keys; or more specifically, if rename for us is an O(n) operation and 
for normal file systems it is O(1), do we run into the danger of a DDoS of Ozone 
via rename? Should we support a completely different operation on the client 
side – for lack of a better word, I am going to call it "File Commit" – where 
the user can collect a set of file names and send them to us, with the prefix 
either already replaced or with advice to the server to replace the prefix? That 
way, we will still allow bucket operations to continue, and we will only be 
concerned with the list of Ozone keys that the client gave us. 

From the OzoneFS point of view, it might simply be renaming a small set of 
files under a directory (say 100K files, in a bucket with 10 million keys), so 
in the "File Commit" operation, the client specifies the file names explicitly 
or via regex, and we will collect all those files and replace the prefix. I am 
hoping that OzoneFS will be able to simulate a rename operation via "File 
Commit". What do you think? Should we get on a call and brainstorm this? 

 

 

 

 

> Optimize recursive ozone filesystem apis
> 
>
> Key: HDDS-1301
> URL: https://issues.apache.org/jira/browse/HDDS-1301
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1301.001.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This Jira aims to optimise recursive apis in ozone file system. These are the 
> apis which have a recursive flag which requires an operation to be performed 
> on all the children of the directory. The Jira would add support for 
> recursive apis in Ozone manager in order to reduce the number of rpc calls to 
> Ozone Manager. Also currently these operations are not atomic. This Jira 
> would make all the operations in ozone filesystem atomic.






[jira] [Commented] (HDDS-1301) Optimize recursive ozone filesystem apis

2019-04-18 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821588#comment-16821588
 ] 

Anu Engineer commented on HDDS-1301:


[~ljain] The patch looks good, but I am going to open a discussion with a few 
other people since I need some advice. It has nothing really to do with this 
patch; I thought this discussion would be interesting to you too.

 

[~gopalv], based on your feedback we are optimizing some features and access 
patterns of O3FS. One of the issues is rename: we can do a single-file rename 
within a bucket. I know it would be better if we could collect a large number of 
files and rename them transactionally.

 

Thoughts, Comments?

 

cc: [~ste...@apache.org], [~jnp], [~arpitagarwal]

> Optimize recursive ozone filesystem apis
> 
>
> Key: HDDS-1301
> URL: https://issues.apache.org/jira/browse/HDDS-1301
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1301.001.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This Jira aims to optimise recursive apis in ozone file system. These are the 
> apis which have a recursive flag which requires an operation to be performed 
> on all the children of the directory. The Jira would add support for 
> recursive apis in Ozone manager in order to reduce the number of rpc calls to 
> Ozone Manager. Also currently these operations are not atomic. This Jira 
> would make all the operations in ozone filesystem atomic.






[jira] [Commented] (HDDS-373) Ozone genconf tool must generate ozone-site.xml with sample values instead of a template

2019-04-18 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16821400#comment-16821400
 ] 

Anu Engineer commented on HDDS-373:
---

No, we need this. The configs will not go away. Thanks.

 

> Ozone genconf tool must generate ozone-site.xml with sample values instead of 
> a template
> 
>
> Key: HDDS-373
> URL: https://issues.apache.org/jira/browse/HDDS-373
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
> Attachments: HDDS-373.001.patch
>
>
> As discussed with [~anu], currently, the genconf tool generates a template 
> ozone-site.xml. This is not very useful for new users as they would have to 
> understand what values should be set for the minimal configuration properties.
> This Jira proposes to modify the ozone-default.xml which is leveraged by 
> genconf tool to generate ozone-site.xml
>  
> Further, as suggested by [~arpitagarwal], we must add a {{--pseudo}} option 
> to generate configs for starting pseudo-cluster. This should be useful for 
> quick dev-testing.






[jira] [Commented] (HDDS-1447) Fix CheckStyle warnings

2019-04-17 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820689#comment-16820689
 ] 

Anu Engineer commented on HDDS-1447:


Thank you, I really appreciate your attention to detail and your help in making 
Ozone better. +1, I will commit this as soon as I have a Jenkins run. 

> Fix CheckStyle warnings 
> 
>
> Key: HDDS-1447
> URL: https://issues.apache.org/jira/browse/HDDS-1447
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
> Attachments: HDDS-1447.001.patch
>
>
> We had a full acceptance test + unit test build for 
> [HDDS-1433|https://issues.apache.org/jira/browse/HDDS-1433]: 
> [https://ci.anzix.net/job/ozone/16677/] gave 3 warnings belonging to Ozone.
> *Modules:*
>  * [Apache Hadoop Ozone 
> Client|https://ci.anzix.net/job/ozone/16677/checkstyle/new/moduleName.1350159737/]
>  ** KeyOutputStream.java:319
>  ** KeyOutputStream.java:622
>  * [Apache Hadoop Ozone Integration 
> Tests|https://ci.anzix.net/job/ozone/16677/checkstyle/new/moduleName.-1713756601/]
>  ** ContainerTestHelper.java:731






[jira] [Comment Edited] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-04-17 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820302#comment-16820302
 ] 

Anu Engineer edited comment on HDFS-13596 at 4/17/19 5:03 PM:
--

Not to add more noise, but this might be a good opportunity for us to learn a 
trick or two from our brethren in the HBase land.  HBase has this nice ability 
to upgrade to 3.0, but will not enable 3.0 features unless another command or 
setting is applied. We actually have a very similar situation here, we might 
have changes in Edit logs, but let us not allow that feature to be used until 
after the main step is completely done and we have some way of verifying that 
nothing is broken. Then you can enable the full 3.0 features once the full 
upgrade is done. (Thanks to [~ccondit] for educating me on how good HBase in 
doing this, and letting me know that Ozone should probably learn from that 
experience).

 

if we do this, Rolling upgrade would be two steps, upgrade, then enable 3.0 
features like EC.  Till enable call is done, HDFS will not allow 3.0 features 
like EC.

 


was (Author: anu):
Not to add more noise, but this might be a good opportunity for us to learn a 
trick or two from our brethren in the HBase land.  HBase has this nice ability 
to upgrade to 3.0, but will not enable 3.0 features unless another command or 
setting is applied. We actually have a very similar situation here, we might 
have changes in Edit logs, but let us not allow that feature to be used until 
after the main step is completely done and we have some way of verifying that 
nothing is broken. Then you can enable the full 3.0 features once the full 
upgrade is done. (Thanks to [~ccondit] for educating me on how good HBase does 
this, and letting me know that Ozone should probably learn from that 
experience).

 

if we do this, Rolling upgrade would be two steps, upgrade, then enable 3.0 
features like EC.  Till enable call is done, HDFS will not allow 3.0 features 
like EC.

 

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, 
> HDFS-13596.003.patch, HDFS-13596.004.patch, HDFS-13596.005.patch, 
> HDFS-13596.006.patch, HDFS-13596.007.patch
>
>
> After rollingUpgrade of the NN from 2.x to 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> 

[jira] [Comment Edited] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-04-17 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820302#comment-16820302
 ] 

Anu Engineer edited comment on HDFS-13596 at 4/17/19 5:03 PM:
--

Not to add more noise, but this might be a good opportunity for us to learn a 
trick or two from our brethren in the HBase land.  HBase has this nice ability 
to upgrade to 3.0, but will not enable 3.0 features unless another command or 
setting is applied. We actually have a very similar situation here, we might 
have changes in Edit logs, but let us not allow that feature to be used until 
after the main step is completely done and we have some way of verifying that 
nothing is broken. Then you can enable the full 3.0 features once the full 
upgrade is done. (Thanks to [~ccondit] for educating me on how good HBase is in 
doing this, and letting me know that Ozone should probably learn from that 
experience).

 

if we do this, Rolling upgrade would be two steps, upgrade, then enable 3.0 
features like EC.  Till enable call is done, HDFS will not allow 3.0 features 
like EC.

 


was (Author: anu):
Not to add more noise, but this might be a good opportunity for us to learn a 
trick or two from our brethren in the HBase land.  HBase has this nice ability 
to upgrade to 3.0, but will not enable 3.0 features unless another command or 
setting is applied. We actually have a very similar situation here, we might 
have changes in Edit logs, but let us not allow that feature to be used until 
after the main step is completely done and we have some way of verifying that 
nothing is broken. Then you can enable the full 3.0 features once the full 
upgrade is done. (Thanks to [~ccondit] for educating me on how good HBase in 
doing this, and letting me know that Ozone should probably learn from that 
experience).

 

if we do this, Rolling upgrade would be two steps, upgrade, then enable 3.0 
features like EC.  Till enable call is done, HDFS will not allow 3.0 features 
like EC.

 

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, 
> HDFS-13596.003.patch, HDFS-13596.004.patch, HDFS-13596.005.patch, 
> HDFS-13596.006.patch, HDFS-13596.007.patch
>
>
> After rollingUpgrade of the NN from 2.x to 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> 

[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-04-17 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820302#comment-16820302
 ] 

Anu Engineer commented on HDFS-13596:
-

Not to add more noise, but this might be a good opportunity for us to learn a 
trick or two from our brethren in the HBase land.  HBase has this nice ability 
to upgrade to 3.0, but will not enable 3.0 features unless another command or 
setting is applied. We actually have a very similar situation here, we might 
have changes in Edit logs, but let us not allow that feature to be used until 
after the main step is complete done and we have some way of verifying that 
nothing is broken. Then you can enable the full 3.0 features once the full 
upgrade is done. (Thanks to [~ccondit] for educating me on how good HBase does 
this, and letting me know that Ozone should probably learn from that 
experience).

 

if we do this, Rolling upgrade would be two steps, upgrade, then enable 3.0 
features like EC.  Till enable call is done, HDFS will not allow 3.0 features 
like EC.

 

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, 
> HDFS-13596.003.patch, HDFS-13596.004.patch, HDFS-13596.005.patch, 
> HDFS-13596.006.patch, HDFS-13596.007.patch
>
>
> After rollingUpgrade of the NN from 2.x to 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:937)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:910)
>  at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
>  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
> 2018-05-17 19:10:06,522 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception 
> loading fsimage
> java.io.IOException: java.lang.IllegalStateException: Cannot skip to less 
> than the current value (=16389), where newValue=16388
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.resetLastInodeId(FSDirectory.java:1945)
>  at 
> 

[jira] [Comment Edited] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-04-17 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820302#comment-16820302
 ] 

Anu Engineer edited comment on HDFS-13596 at 4/17/19 5:02 PM:
--

Not to add more noise, but this might be a good opportunity for us to learn a 
trick or two from our brethren in the HBase land.  HBase has this nice ability 
to upgrade to 3.0, but will not enable 3.0 features unless another command or 
setting is applied. We actually have a very similar situation here, we might 
have changes in Edit logs, but let us not allow that feature to be used until 
after the main step is completely done and we have some way of verifying that 
nothing is broken. Then you can enable the full 3.0 features once the full 
upgrade is done. (Thanks to [~ccondit] for educating me on how good HBase does 
this, and letting me know that Ozone should probably learn from that 
experience).

 

if we do this, Rolling upgrade would be two steps, upgrade, then enable 3.0 
features like EC.  Till enable call is done, HDFS will not allow 3.0 features 
like EC.

 


was (Author: anu):
Not to add more noise, but this might be a good opportunity for us to learn a 
trick or two from our brethren in the HBase land.  HBase has this nice ability 
to upgrade to 3.0, but will not enable 3.0 features unless another command or 
setting is applied. We actually have a very similar situation here, we might 
have changes in Edit logs, but let us not allow that feature to be used until 
after the main step is complete done and we have some way of verifying that 
nothing is broken. Then you can enable the full 3.0 features once the full 
upgrade is done. (Thanks to [~ccondit] for educating me on how good HBase does 
this, and letting me know that Ozone should probably learn from that 
experience).

 

if we do this, Rolling upgrade would be two steps, upgrade, then enable 3.0 
features like EC.  Till enable call is done, HDFS will not allow 3.0 features 
like EC.

 

> NN restart fails after RollingUpgrade from 2.x to 3.x
> -
>
> Key: HDFS-13596
> URL: https://issues.apache.org/jira/browse/HDFS-13596
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: Hanisha Koneru
>Assignee: Fei Hui
>Priority: Critical
> Attachments: HDFS-13596.001.patch, HDFS-13596.002.patch, 
> HDFS-13596.003.patch, HDFS-13596.004.patch, HDFS-13596.005.patch, 
> HDFS-13596.006.patch, HDFS-13596.007.patch
>
>
> After rollingUpgrade of the NN from 2.x to 3.x, if the NN is restarted, it fails 
> while replaying edit logs.
>  * After NN is started with rollingUpgrade, the layoutVersion written to 
> editLogs (before finalizing the upgrade) is the pre-upgrade layout version 
> (so as to support downgrade).
>  * When writing transactions to log, NN writes as per the current layout 
> version. In 3.x, erasureCoding bits are added to the editLog transactions.
>  * So any edit log written after the upgrade and before finalizing the 
> upgrade will have the old layout version but the new format of transactions.
>  * When NN is restarted and the edit logs are replayed, the NN reads the old 
> layout version from the editLog file. When parsing the transactions, it 
> assumes that the transactions are also from the previous layout and hence 
> skips parsing the erasureCoding bits.
>  * This cascades into reading the wrong set of bits for other fields and 
> leads to NN shutting down.
> Sample error output:
> {code:java}
> java.lang.IllegalArgumentException: Invalid clientId - length is 0 expected 
> length 16
>  at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:74)
>  at org.apache.hadoop.ipc.RetryCache$CacheEntry.(RetryCache.java:86)
>  at 
> org.apache.hadoop.ipc.RetryCache$CacheEntryWithPayload.(RetryCache.java:163)
>  at 
> org.apache.hadoop.ipc.RetryCache.addCacheEntryWithPayload(RetryCache.java:322)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheEntryWithPayload(FSNamesystem.java:960)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:397)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:249)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:158)
>  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:888)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:745)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:323)
>  at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1086)
>  at 
> 

[jira] [Commented] (HDDS-1266) [Ozone upgrade] Support Upgrading HDFS clusters to use Ozone

2019-04-17 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820160#comment-16820160
 ] 

Anu Engineer commented on HDDS-1266:


[~linyiqun] Thanks for the feedback, and please keep it coming. I will update 
the design document based on your feedback, as well as add a planned user 
experience document soon. 

 

> [Ozone upgrade] Support Upgrading HDFS clusters to use Ozone
> 
>
> Key: HDDS-1266
> URL: https://issues.apache.org/jira/browse/HDDS-1266
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: InPlaceUpgradesForOzone.pdf
>
>
> This is the master JIRA to support upgrading existing HDFS clusters to have 
> Ozone running concurrently. One of the requirements is that we support 
> upgrading from HDFS to Ozone, without a full data copy. This requirement is 
> called "In Place upgrade", the end result of such an upgrade would be to have 
> the HDFS data appear in Ozone as if Ozone has taken a snap-shot of the HDFS 
> data. Once upgrade is complete, Ozone and HDFS will act as independent 
> systems. I will post a design document soon.






[jira] [Commented] (HDDS-1266) [Ozone upgrade] Support Upgrading HDFS clusters to use Ozone

2019-04-11 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815993#comment-16815993
 ] 

Anu Engineer commented on HDDS-1266:


{quote}Another aspect I forgot to mention in my last comment: the doc covers a 
lot about how to do the upgrade, but what will the downgrade behavior be?
{quote}
Very good catch, I will add that as a section. Basically it would boil down to 
deleting each container and then deleting SCM and OM. That would take us back 
to an HDFS cluster. But I think we should have this section in the document.

 

> [Ozone upgrade] Support Upgrading HDFS clusters to use Ozone
> 
>
> Key: HDDS-1266
> URL: https://issues.apache.org/jira/browse/HDDS-1266
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: InPlaceUpgradesForOzone.pdf
>
>
> This is the master JIRA to support upgrading existing HDFS clusters to have 
> Ozone running concurrently. One of the requirements is that we support 
> upgrading from HDFS to Ozone, without a full data copy. This requirement is 
> called "In Place upgrade", the end result of such an upgrade would be to have 
> the HDFS data appear in Ozone as if Ozone has taken a snap-shot of the HDFS 
> data. Once upgrade is complete, Ozone and HDFS will act as independent 
> systems. I will post a design document soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1431) Linkage error thrown if ozone-fs-legacy jar is on the classpath

2019-04-11 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815990#comment-16815990
 ] 

Anu Engineer commented on HDDS-1431:


[~elek] FYI.

> Linkage error thrown if ozone-fs-legacy jar is on the classpath
> ---
>
> Key: HDDS-1431
> URL: https://issues.apache.org/jira/browse/HDDS-1431
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Priority: Major
>
> Having hadoop-ozone-filesystem-lib-legacy-0.5.0-SNAPSHOT.jar on the classpath 
> along with the current jar results in the classloader throwing an error on an 
> fs write operation, as below:
> {code}
> 2019-04-11 16:06:54,127 ERROR [OzoneClientAdapterFactory] Can't initialize 
> the ozoneClientAdapter
> java.lang.reflect.InvocationTargetException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterFactory.lambda$createAdapter$1(OzoneClientAdapterFactory.java:66)
> at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterFactory.createAdapter(OzoneClientAdapterFactory.java:116)
> at 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterFactory.createAdapter(OzoneClientAdapterFactory.java:62)
> at 
> org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:92)
> at 
> org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:146)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3326)
> at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:532)
> at org.notmysock.repl.Works$CopyWorker.run(Works.java:252)
> at org.notmysock.repl.Works$CopyWorker.call(Works.java:287)
> at org.notmysock.repl.Works$CopyWorker.call(Works.java:207)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.LinkageError: loader constraint violation: loader 
> (instance of org/apache/hadoop/fs/ozone/FilteredClassLoader) previously 
> initiated loading for a different t
> ype with name "org/apache/hadoop/crypto/key/KeyProvider"
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
> at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at 
> org.apache.hadoop.fs.ozone.FilteredClassLoader.loadClass(FilteredClassLoader.java:72)
> at java.lang.Class.getDeclaredMethods0(Native Method)
> at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
> at java.lang.Class.privateGetPublicMethods(Class.java:2902)
> at java.lang.Class.getMethods(Class.java:1615)
> at sun.misc.ProxyGenerator.generateClassFile(ProxyGenerator.java:451)
> at sun.misc.ProxyGenerator.generateProxyClass(ProxyGenerator.java:339)
> at java.lang.reflect.Proxy$ProxyClassFactory.apply(Proxy.java:639)
> at java.lang.reflect.Proxy$ProxyClassFactory.apply(Proxy.java:557)
> at java.lang.reflect.WeakCache$Factory.get(WeakCache.java:230)
> at java.lang.reflect.WeakCache.get(WeakCache.java:127)
> at java.lang.reflect.Proxy.getProxyClass0(Proxy.java:419)
> at java.lang.reflect.Proxy.newProxyInstance(Proxy.java:719)
> at 
> 

[jira] [Commented] (HDFS-14234) Limit WebHDFS to specifc user, host, directory triples

2019-04-10 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16814961#comment-16814961
 ] 

Anu Engineer commented on HDFS-14234:
-

[~clayb] The patch looks quite good; here are some very minor comments.

 
1. DatanodeHTTPserver.java: there are some minor checkstyle fixes needed.
2. Are the changes in hadoop-env.sh accidental?
3. For production purposes, should we remove the log4j.properties settings for 
the web handlers?
4. I am not sure if this is possible in real life, but from the test case it 
is possible to trigger a NullPointerException:
{code}
httpRequest =
    new DefaultFullHttpRequest(HttpVersion.HTTP_1_1,
        HttpMethod.GET,
        WebHdfsFileSystem.PATH_PREFIX + "/user/myName/fooFile");
{code}
If we send a request without a query portion, it looks like 
{{HostRestrictingAuthorizationFilter.handleInteraction}} will throw a 
java.lang.NullPointerException.
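
To make the concern concrete, here is a tiny, self-contained sketch (plain 
java.net.URI, deliberately not the filter code itself) of why a request URI 
without a query portion needs a null guard. The class name and the defensive 
pattern at the end are purely illustrative, not a suggestion of the actual fix:
{code}
import java.net.URI;

public class QueryNullCheckExample {
  public static void main(String[] args) throws Exception {
    URI withQuery = new URI("/webhdfs/v1/user/myName/fooFile?op=OPEN");
    URI withoutQuery = new URI("/webhdfs/v1/user/myName/fooFile");

    System.out.println(withQuery.getQuery());    // prints "op=OPEN"
    System.out.println(withoutQuery.getQuery()); // prints "null"

    // Any unguarded call on the query string blows up for the second request,
    // so a hypothetical filter needs a null check before parsing parameters.
    String query = withoutQuery.getQuery();
    String op = (query == null) ? null : query.replaceFirst(".*op=", "");
    System.out.println("op = " + op);
  }
}
{code}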

> Limit WebHDFS to specifc user, host, directory triples
> --
>
> Key: HDFS-14234
> URL: https://issues.apache.org/jira/browse/HDFS-14234
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Clay B.
>Assignee: Clay B.
>Priority: Trivial
> Attachments: 
> 0001-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0002-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0003-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch
>
>
> For those who have multiple network zones, it is useful to prevent certain 
> zones from downloading data from WebHDFS while still allowing uploads. This 
> can enable functionality of HDFS as a dropbox for data - data goes in but can 
> not be pulled back out. (Motivation further presented in [StrangeLoop 2018 Of 
> Data Dropboxes and Data 
> Gloveboxes|https://www.thestrangeloop.com/2018/of-data-dropboxes-and-data-gloveboxes.html]).
> Ideally, one could limit the datanodes from returning data via an 
> [{{OPEN}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File]
>  but still allow things such as 
> [{{GETFILECHECKSUM}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_File_Checksum]
>  and 
> {{[{{CREATE}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File]}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14234) Limit WebHDFS to specifc user, host, directory triples

2019-04-10 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16814911#comment-16814911
 ] 

Anu Engineer commented on HDFS-14234:
-

Hi [~clayb] ,

I have looked at the patch, and I think it would be good to be able to load the 
RestCsrfPreventionFilterHandler to preserve the existing behavior. I will add 
my review comments in a while. 

Thanks

Anu

 

> Limit WebHDFS to specifc user, host, directory triples
> --
>
> Key: HDFS-14234
> URL: https://issues.apache.org/jira/browse/HDFS-14234
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Clay B.
>Assignee: Clay B.
>Priority: Trivial
> Attachments: 
> 0001-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0002-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0003-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch
>
>
> For those who have multiple network zones, it is useful to prevent certain 
> zones from downloading data from WebHDFS while still allowing uploads. This 
> can enable functionality of HDFS as a dropbox for data - data goes in but can 
> not be pulled back out. (Motivation further presented in [StrangeLoop 2018 Of 
> Data Dropboxes and Data 
> Gloveboxes|https://www.thestrangeloop.com/2018/of-data-dropboxes-and-data-gloveboxes.html]).
> Ideally, one could limit the datanodes from returning data via an 
> [{{OPEN}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File]
>  but still allow things such as 
> [{{GETFILECHECKSUM}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_File_Checksum]
>  and 
> {{[{{CREATE}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File]}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1266) [Ozone upgrade] Support Upgrading HDFS clusters to use Ozone

2019-04-10 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16814615#comment-16814615
 ] 

Anu Engineer commented on HDDS-1266:


Hi [~linyiqun],

Thank you for the very careful reading. You are right on both counts. I will 
add some more comments.
{quote}When do the mapping for HDFS blockId to Ozone blockId, there is one 
precondition that HDFS blockId should be unique. The HDFS block id is unique in 
a single block pool (namepsace). But with multiple block pool case, I suppose 
we cannot promise this.
{quote}
You are right: if we have 2 different block pools, we cannot guarantee block 
uniqueness. One way to deal with that is to create a namespace-like feature – 
that is, BlockPool1.blockId and BlockPool2.blockId. Another way to deal with 
this issue is to do the upgrades in a serial fashion, that is, upgrade 
HDFS.BlockPool1 to Ozone, then upgrade HDFS.BlockPool2. Discussion of both is 
missing in the current proposal, since the doc was already 20 pages long. We 
will write up smaller, more focused design docs to complement this document, 
so that we can dive deeper into issues like multiple block pools.
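
For illustration only, here is a minimal sketch of the namespace-like option, 
where every HDFS block id is qualified by its block pool id before it is mapped 
into Ozone. The class and method names below are made up for this comment and 
are not from the design doc:
{code}
import java.util.Objects;

public final class PoolQualifiedBlockId {
  private final String blockPoolId; // e.g. "BP-1234-10.0.0.1-1550000000000"
  private final long blockId;       // unique only within a single block pool

  public PoolQualifiedBlockId(String blockPoolId, long blockId) {
    this.blockPoolId = Objects.requireNonNull(blockPoolId);
    this.blockId = blockId;
  }

  /** Collision-free key across pools, e.g. "BP-1234-10.0.0.1-1550000000000/1073741825". */
  public String toOzoneKey() {
    return blockPoolId + "/" + blockId;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof PoolQualifiedBlockId)) {
      return false;
    }
    PoolQualifiedBlockId other = (PoolQualifiedBlockId) o;
    return blockId == other.blockId && blockPoolId.equals(other.blockPoolId);
  }

  @Override
  public int hashCode() {
    return Objects.hash(blockPoolId, blockId);
  }
}
{code}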
{quote}From the design doc, we cannot sync the metadata change between HDFS and 
Ozone during upgrade time. That is to say that for one system metadata change 
cannot reflect in another system. Based on this, seems we can't well supported 
concurrently running HDFS and Ozone to provider the service, except one case 
that we upgrade some specific HDFS folders that will only accessed by Ozone.
{quote}
This came up multiple times during the initial chats. [~ccondit] had suggested 
that one way to solve that problem is to listen to the changes via the Journal 
Node. That way, we know what changes are happening on the HDFS side, and when 
we verify Ozone against HDFS after the upgrade, we know what to expect.

There is still the problem of a Datanode trying to make a hard link to an 
already-deleted HDFS block, since the block did exist when HDFS was first 
scanned. The current plan is for the datanode to report that information back, 
and we will try to verify it against HDFS at that point in time. The default 
catch-all, as you have seen from the design doc, is to fall back to DistCp if 
the in-place upgrade detects any errors.
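
To sketch that verification idea (all names here are hypothetical, not from the 
design doc or the code base; this is only an illustration of the check described 
above): the datanodes report blocks they could not hard-link because the block 
file was already gone, and we then ask HDFS whether each of those blocks is 
still referenced. A block that is no longer referenced was legitimately deleted 
while the upgrade ran; a block that is still referenced is a real inconsistency 
and triggers the DistCp fallback.
{code}
import java.util.List;

interface NamespaceView {
  boolean isBlockReferenced(String blockPoolId, long blockId);
}

public class UpgradeVerification {
  /** Returns true if the upgrade looks consistent, false if we should fall back to DistCp. */
  public static boolean verify(List<Long> unlinkedBlockIds, String blockPoolId,
      NamespaceView hdfs) {
    int inconsistencies = 0;
    for (long blockId : unlinkedBlockIds) {
      // Still referenced by HDFS but missing on disk for us: a genuine problem.
      if (hdfs.isBlockReferenced(blockPoolId, blockId)) {
        inconsistencies++;
      }
      // Otherwise the file was deleted during the upgrade; nothing to do.
    }
    return inconsistencies == 0;
  }
}
{code}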

 

> [Ozone upgrade] Support Upgrading HDFS clusters to use Ozone
> 
>
> Key: HDDS-1266
> URL: https://issues.apache.org/jira/browse/HDDS-1266
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: InPlaceUpgradesForOzone.pdf
>
>
> This is the master JIRA to support upgrading existing HDFS clusters to have 
> Ozone running concurrently. One of the requirements is that we support 
> upgrading from HDFS to Ozone, without a full data copy. This requirement is 
> called "In Place upgrade", the end result of such an upgrade would be to have 
> the HDFS data appear in Ozone as if Ozone has taken a snap-shot of the HDFS 
> data. Once upgrade is complete, Ozone and HDFS will act as independent 
> systems. I will post a design document soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1383) [Ozone Upgrade] Create the project skeleton with CLI interface

2019-04-08 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16812911#comment-16812911
 ] 

Anu Engineer commented on HDDS-1383:


+1, thanks for the skeleton.

> [Ozone Upgrade] Create the project skeleton with CLI interface
> --
>
> Key: HDDS-1383
> URL: https://issues.apache.org/jira/browse/HDDS-1383
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: upgrade
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Ozone In-Place upgrade tool is a tool to upgrade hdfs data to ozone data 
> without data movement.
> In this jira I will create a skeleton project with the cli interface without 
> any business logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1266) [Ozone upgrade] Support Upgrading HDFS clusters to use Ozone

2019-04-08 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16812893#comment-16812893
 ] 

Anu Engineer commented on HDDS-1266:


[~shv] I have attached the design doc for the in-place upgrade. You are the 
first person who asked for this feature; I would appreciate any feedback you 
might have on the design, especially if you see any issues from the HDFS point 
of view that are not addressed.

[~clayb] This is the upgrade design I had talked about in the community 
meetings. Please do let me know your thoughts, questions, or comments.

[~daryn] If you have time, I would appreciate any comments you have on the 
HDFS-to-Ozone in-place upgrades. 

 

Thanks

Anu

 

> [Ozone upgrade] Support Upgrading HDFS clusters to use Ozone
> 
>
> Key: HDDS-1266
> URL: https://issues.apache.org/jira/browse/HDDS-1266
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: InPlaceUpgradesForOzone.pdf
>
>
> This is the master JIRA to support upgrading existing HDFS clusters to have 
> Ozone running concurrently. One of the requirements is that we support 
> upgrading from HDFS to Ozone, without a full data copy. This requirement is 
> called "In Place upgrade", the end result of such an upgrade would be to have 
> the HDFS data appear in Ozone as if Ozone has taken a snap-shot of the HDFS 
> data. Once upgrade is complete, Ozone and HDFS will act as independent 
> systems. I will post a design document soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1266) [Ozone upgrade] Support Upgrading HDFS clusters to use Ozone

2019-04-08 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1266:
---
Attachment: InPlaceUpgradesForOzone.pdf

> [Ozone upgrade] Support Upgrading HDFS clusters to use Ozone
> 
>
> Key: HDDS-1266
> URL: https://issues.apache.org/jira/browse/HDDS-1266
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: InPlaceUpgradesForOzone.pdf
>
>
> This is the master JIRA to support upgrading existing HDFS clusters to have 
> Ozone running concurrently. One of the requirements is that we support 
> upgrading from HDFS to Ozone, without a full data copy. This requirement is 
> called "In Place upgrade", the end result of such an upgrade would be to have 
> the HDFS data appear in Ozone as if Ozone has taken a snap-shot of the HDFS 
> data. Once upgrade is complete, Ozone and HDFS will act as independent 
> systems. I will post a design document soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1329) Update documentation for Ozone-0.4.0 release

2019-04-05 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1329:
---
   Resolution: Fixed
Fix Version/s: 0.4.0
   Status: Resolved  (was: Patch Available)

[~ajayydv] Thanks for the contribution. [~xyao], [~arpitagarwal] Thanks for the 
reviews. I have committed this patch to the trunk and ozone-0.4.0 branches.

> Update documentation for Ozone-0.4.0 release
> 
>
> Key: HDDS-1329
> URL: https://issues.apache.org/jira/browse/HDDS-1329
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.4.0
>
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> We need to update the documentation of Ozone for all the new features which 
> are part of the 0.4.0 release. This is a 0.4.0 blocker JIRA.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14234) Limit WebHDFS to specifc user, host, directory triples

2019-04-05 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16811253#comment-16811253
 ] 

Anu Engineer commented on HDFS-14234:
-

bq. should I write support for dfs.webhdsf.rest-csrf.enabled to still load the 
RestCsrfPreventionFilterHandler to preserve existing behavior and not allow one 
to pick where in the filter chain the CsrfPreventionFilter gets loaded?

I have not looked at the patch yet, but if we have a way of loading the CSRF 
filter so that it does not break during an upgrade, I think we are good to go. 
The exact order does not matter for CSRF, IMHO.

> Limit WebHDFS to specifc user, host, directory triples
> --
>
> Key: HDFS-14234
> URL: https://issues.apache.org/jira/browse/HDFS-14234
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Clay B.
>Assignee: Clay B.
>Priority: Trivial
> Attachments: 
> 0001-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0002-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch, 
> 0003-HDFS-14234.-Limit-WebHDFS-to-specifc-user-host-direc.patch
>
>
> For those who have multiple network zones, it is useful to prevent certain 
> zones from downloading data from WebHDFS while still allowing uploads. This 
> can enable functionality of HDFS as a dropbox for data - data goes in but can 
> not be pulled back out. (Motivation further presented in [StrangeLoop 2018 Of 
> Data Dropboxes and Data 
> Gloveboxes|https://www.thestrangeloop.com/2018/of-data-dropboxes-and-data-gloveboxes.html]).
> Ideally, one could limit the datanodes from returning data via an 
> [{{OPEN}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Open_and_Read_a_File]
>  but still allow things such as 
> [{{GETFILECHECKSUM}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_File_Checksum]
>  and 
> {{[{{CREATE}}|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Create_and_Write_to_a_File]}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1294) ExcludeList shoud be a RPC Client config so that multiple streams can avoid the same error.

2019-04-05 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1294:
---
Target Version/s: 0.5.0

> ExcludeList shoud be a RPC Client config so that multiple streams can avoid 
> the same error.
> ---
>
> Key: HDDS-1294
> URL: https://issues.apache.org/jira/browse/HDDS-1294
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: MiniOzoneChaosCluster
> Attachments: HDDS-1294.000.patch, HDDS-1294.001.patch
>
>
> ExcludeList right now is a per-BlockOutputStream value; this can result in 
> multiple keys created from the same client running into the same exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1282) TestFailureHandlingByClient causes a jvm exit

2019-04-05 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1282:
---
Target Version/s: 0.5.0

> TestFailureHandlingByClient causes a jvm exit
> -
>
> Key: HDDS-1282
> URL: https://issues.apache.org/jira/browse/HDDS-1282
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
> Attachments: HDDS-1282.001.patch, 
> org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient-output.txt
>
>
> The test causes a JVM exit because it exits prematurely.
> {code}
> [ERROR] org.apache.hadoop.ozone.client.rpc.TestFailureHandlingByClient
> [ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: 
> ExecutionException The forked VM terminated without properly saying goodbye. 
> VM crash or System.exit called?
> [ERROR] Command was /bin/sh -c cd 
> /Users/msingh/code/apache/ozone/oz_new1/hadoop-ozone/integration-test && 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_171.jdk/Contents/Home/jre/bin/java 
> -Xmx2048m -XX:+HeapDumpOnOutOfMemoryError -jar 
> /Users/msingh/code/apache/ozone/oz_new1/hadoop-ozone/integration-test/target/surefire/surefirebooter5405606309417840457.jar
>  
> /Users/msingh/code/apache/ozone/oz_new1/hadoop-ozone/integration-test/target/surefire
>  2019-03-13T23-31-09_018-jvmRun1 surefire5934599060460829594tmp 
> surefire_1202723709650989744795tmp
> [ERROR] Error occurred in starting fork, check output in log
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1340) Add List Containers API for Recon

2019-04-05 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1340:
---
Target Version/s: 0.5.0

> Add List Containers API for Recon
> -
>
> Key: HDDS-1340
> URL: https://issues.apache.org/jira/browse/HDDS-1340
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Recon server should support "/containers" API that lists all the containers



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1329) Update documentation for Ozone-0.4.0 release

2019-04-01 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16807079#comment-16807079
 ] 

Anu Engineer commented on HDDS-1329:


# Architecture page for Security – a high-level overview of how Ozone security 
works.
 # Setting up a secure cluster – pages on how to set up a secure cluster.
 # Command-line tools – the new CLIs, if we have any.
 # General documentation update.

This does not block the RC0 of the Ozone release and voting; we will get this 
done this week.

 

> Update documentation for Ozone-0.4.0 release
> 
>
> Key: HDDS-1329
> URL: https://issues.apache.org/jira/browse/HDDS-1329
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Blocker
>
> We need to update the documentation of Ozone for all the new features which 
> are part of the 0.4.0 release. This is a 0.4.0 blocker JIRA.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1356) Wrong response code in s3g in case of an invalid access key

2019-04-01 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1356:
---
Target Version/s: 0.5.0  (was: 0.4.0)

> Wrong response code in s3g in case of an invalid access key
> ---
>
> Key: HDDS-1356
> URL: https://issues.apache.org/jira/browse/HDDS-1356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: S3
>Reporter: Elek, Marton
>Priority: Major
>
> In case of a wrong aws credential the s3g returns with HTTP 500:
> {code}
> [hadoop@om-0 keytabs]$ aws s3api --endpoint=http://s3g-0.s3g:9878 
> create-bucket --bucket qwe
> An error occurred (500) when calling the CreateBucket operation (reached max 
> retries: 4): Internal Server Error
> {code}
> And throws an exception server side:
> {code}
> s3g-0 s3g 3ff4582bec94fee02ae4babcd4294c5a1c46cf7a6f750bfd5de4e894e41663c5, 
> signature=73ea5e939f47de1389e26624c91444d6b88fa70c64e5ee1e39e6804269736a99, 
> awsAccessKeyId=scm/om-0.om.perf.svc.cluster.lo...@example.co
> s3g-0 s3g at 
> org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1511)
> s3g-0 s3g at org.apache.hadoop.ipc.Client.call(Client.java:1457)
> s3g-0 s3g at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> s3g-0 s3g at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> s3g-0 s3g at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> s3g-0 s3g at com.sun.proxy.$Proxy77.submitRequest(Unknown Source)
> s3g-0 s3g at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown 
> Source)
> s3g-0 s3g at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> s3g-0 s3g at java.lang.reflect.Method.invoke(Method.java:498)
> s3g-0 s3g at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> s3g-0 s3g at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> s3g-0 s3g at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> s3g-0 s3g at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> s3g-0 s3g at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> s3g-0 s3g at com.sun.proxy.$Proxy77.submitRequest(Unknown Source)
> s3g-0 s3g at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> s3g-0 s3g at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> s3g-0 s3g at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> s3g-0 s3g at java.lang.reflect.Method.invoke(Method.java:498)
> s3g-0 s3g at 
> org.apache.hadoop.hdds.tracing.TraceAllMethod.invoke(TraceAllMethod.java:66)
> s3g-0 s3g at com.sun.proxy.$Proxy77.submitRequest(Unknown Source)
> s3g-0 s3g at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:284)
> s3g-0 s3g at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceList(OzoneManagerProtocolClientSideTranslatorPB.java:1097)
> s3g-0 s3g at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.getScmAddressForClient(RpcClient.java:219)
> s3g-0 s3g at 
> org.apache.hadoop.ozone.client.rpc.RpcClient.(RpcClient.java:148)
> s3g-0 s3g at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> s3g-0 s3g at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> s3g-0 s3g at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> s3g-0 s3g at 
> java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> s3g-0 s3g at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:291)
> s3g-0 s3g at 
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClient(OzoneClientFactory.java:92)
> s3g-0 s3g at 
> org.apache.hadoop.ozone.s3.OzoneClientProducer.getClient(OzoneClientProducer.java:108)
> s3g-0 s3g at 
> org.apache.hadoop.ozone.s3.OzoneClientProducer.createClient(OzoneClientProducer.java:68)
> s3g-0 s3g at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> s3g-0 s3g at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> s3g-0 s3g at 
> 

[jira] [Updated] (HDDS-1344) Update Ozone website

2019-03-28 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1344:
---
Fix Version/s: 0.4.0

> Update Ozone website
> 
>
> Key: HDDS-1344
> URL: https://issues.apache.org/jira/browse/HDDS-1344
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Fix For: 0.4.0
>
> Attachments: HDDS-1344.001.patch
>
>
> Just a minor patch that updates the landing page and FAQ of the Ozone website.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1344) Update Ozone website

2019-03-28 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1344:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Update Ozone website
> 
>
> Key: HDDS-1344
> URL: https://issues.apache.org/jira/browse/HDDS-1344
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-1344.001.patch
>
>
> Just a minor patch that updates the landing page and FAQ of the Ozone website.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1346) Remove hard-coded version ozone-0.5.0 from ReadMe of ozonesecure-mr docker-compose

2019-03-27 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803553#comment-16803553
 ] 

Anu Engineer commented on HDDS-1346:


+1, thanks for the fix.

 

> Remove hard-coded version ozone-0.5.0 from ReadMe of ozonesecure-mr 
> docker-compose
> --
>
> Key: HDDS-1346
> URL: https://issues.apache.org/jira/browse/HDDS-1346
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As we are releasing ozone-0.4, we should not have hard-coded ozone-0.5 for 
> trunk. 
> The proposal is to use the following to replace it:
> {{cd $(git rev-parse --show-toplevel)/hadoop-ozone/dist/target/ozone-*-SNAPSHOT/compose/ozone}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1344) Update Ozone website

2019-03-27 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16803260#comment-16803260
 ] 

Anu Engineer commented on HDDS-1344:


[~dineshchitlangia] and [~ajayydv] Thank you for the review and approval. I 
will commit this now.

 

> Update Ozone website
> 
>
> Key: HDDS-1344
> URL: https://issues.apache.org/jira/browse/HDDS-1344
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-1344.001.patch
>
>
> Just a minor patch that updates the landing page and FAQ of the Ozone website.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1345) Genconfig does not generate LOG4j configs

2019-03-27 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1345:
---
Issue Type: Improvement  (was: Task)

> Genconfig does not generate LOG4j configs
> -
>
> Key: HDDS-1345
> URL: https://issues.apache.org/jira/browse/HDDS-1345
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.3.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Major
>
> Genconfig does not generate Log4J configs. These are needed for Ozone configs 
> to work correctly. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1345) Genconfig does not generate LOG4j configs

2019-03-27 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-1345:
--

Assignee: Hrishikesh Gadre

> Genconfig does not generate LOG4j configs
> -
>
> Key: HDDS-1345
> URL: https://issues.apache.org/jira/browse/HDDS-1345
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Affects Versions: 0.3.0
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Major
>
> Genconfig does not generate Log4J configs. These are needed for Ozone configs 
> to work correctly. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1345) Genconfig does not generate LOG4j configs

2019-03-27 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-1345:
--

 Assignee: (was: Hrishikesh Gadre)
Affects Version/s: (was: 0.3.0)
   0.3.0
 Target Version/s: 0.5.0  (was: 0.5.0)
 Workflow: patch-available, re-open possible  (was: 
no-reopen-closed, patch-avail)
  Key: HDDS-1345  (was: HADOOP-16215)
  Project: Hadoop Distributed Data Store  (was: Hadoop Common)

> Genconfig does not generate LOG4j configs
> -
>
> Key: HDDS-1345
> URL: https://issues.apache.org/jira/browse/HDDS-1345
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Affects Versions: 0.3.0
>Reporter: Hrishikesh Gadre
>Priority: Major
>
> Genconfig does not generate Log4J configs. These are needed for Ozone configs 
> to work correctly. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1344) Update Ozone website

2019-03-27 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1344:
---
Status: Patch Available  (was: Open)

> Update Ozone website
> 
>
> Key: HDDS-1344
> URL: https://issues.apache.org/jira/browse/HDDS-1344
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-1344.001.patch
>
>
> Just a minor patch that updates the landing page and FAQ of the Ozone website.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1344) Update Ozone website

2019-03-27 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1344:
---
Attachment: HDDS-1344.001.patch

> Update Ozone website
> 
>
> Key: HDDS-1344
> URL: https://issues.apache.org/jira/browse/HDDS-1344
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Anu Engineer
>Assignee: Anu Engineer
>Priority: Major
> Attachments: HDDS-1344.001.patch
>
>
> Just a minor patch that updates the landing page and FAQ of the Ozone website.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1344) Update Ozone website

2019-03-27 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1344:
--

 Summary: Update Ozone website
 Key: HDDS-1344
 URL: https://issues.apache.org/jira/browse/HDDS-1344
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Anu Engineer
Assignee: Anu Engineer


Just a minor patch that updates the landing page and FAQ of the Ozone website.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1329) Update documentation for Ozone-0.4.0 release

2019-03-22 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1329:
--

 Summary: Update documentation for Ozone-0.4.0 release
 Key: HDDS-1329
 URL: https://issues.apache.org/jira/browse/HDDS-1329
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Anu Engineer
Assignee: Anu Engineer


We need to update the documentation of Ozone for all the new features which are 
part of the 0.4.0 release. This is a 0.4.0 blocker JIRA.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1326) putkey operation failed with java.lang.ArrayIndexOutOfBoundsException

2019-03-22 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1326:
---
Target Version/s: 0.4.0
Priority: Blocker  (was: Major)

> putkey operation failed with java.lang.ArrayIndexOutOfBoundsException
> -
>
> Key: HDDS-1326
> URL: https://issues.apache.org/jira/browse/HDDS-1326
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Shashikant Banerjee
>Priority: Blocker
>
> steps taken :
> ---
>  # trying to write key in 40 node cluster.
>  # write failed.
> client output
> ---
>  
> {noformat}
> e530-491c-ab03-3b1c34d1a751:c80390, 
> 974a806d-bf7d-4f1b-adb4-d51d802d368a:c80390, 
> 469bd8c4-5da2-43bb-bc4b-7edd884931e5:c80390]
> 2019-03-22 10:56:19,592 [main] WARN - Encountered exception {}
> java.io.IOException: Unexpected Storage Container Exception: 
> java.util.concurrent.ExecutionException: 
> java.util.concurrent.CompletionException: 
> org.apache.ratis.protocol.StateMachineException: 
> org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException 
> from Server 5d3eb91f-e530-491c-ab03-3b1c34d1a751: Container 1269 in CLOSED 
> state
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:511)
>  at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.close(BlockOutputStreamEntry.java:144)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleFlushOrClose(KeyOutputStream.java:565)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:329)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:273)
>  at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
>  at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:96)
>  at 
> org.apache.hadoop.ozone.web.ozShell.keys.PutKeyHandler.call(PutKeyHandler.java:111)
>  at 
> org.apache.hadoop.ozone.web.ozShell.keys.PutKeyHandler.call(PutKeyHandler.java:53)
>  at picocli.CommandLine.execute(CommandLine.java:919)
>  at picocli.CommandLine.access$700(CommandLine.java:104)
>  at picocli.CommandLine$RunLast.handle(CommandLine.java:1083)
>  at picocli.CommandLine$RunLast.handle(CommandLine.java:1051)
>  at 
> picocli.CommandLine$AbstractParseResultHandler.handleParseResult(CommandLine.java:959)
>  at picocli.CommandLine.parseWithHandlers(CommandLine.java:1242)
>  at picocli.CommandLine.parseWithHandler(CommandLine.java:1181)
>  at org.apache.hadoop.hdds.cli.GenericCli.execute(GenericCli.java:61)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.execute(Shell.java:82)
>  at org.apache.hadoop.hdds.cli.GenericCli.run(GenericCli.java:52)
>  at org.apache.hadoop.ozone.web.ozShell.Shell.main(Shell.java:93)
> Caused by: java.util.concurrent.ExecutionException: 
> java.util.concurrent.CompletionException: 
> org.apache.ratis.protocol.StateMachineException: 
> org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException 
> from Server 5d3eb91f-e530-491c-ab03-3b1c34d1a751: Container 1269 in CLOSED 
> state
>  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
>  at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.waitOnFlushFutures(BlockOutputStream.java:529)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.handleFlush(BlockOutputStream.java:481)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.close(BlockOutputStream.java:496)
>  ... 19 more
> Caused by: java.util.concurrent.CompletionException: 
> org.apache.ratis.protocol.StateMachineException: 
> org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException 
> from Server 5d3eb91f-e530-491c-ab03-3b1c34d1a751: Container 1269 in CLOSED 
> state
>  at 
> org.apache.ratis.client.impl.RaftClientImpl.handleStateMachineException(RaftClientImpl.java:402)
>  at 
> org.apache.ratis.client.impl.RaftClientImpl.lambda$sendAsync$3(RaftClientImpl.java:198)
>  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
>  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
>  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
>  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
>  at 
> org.apache.ratis.client.impl.RaftClientImpl$PendingAsyncRequest.setReply(RaftClientImpl.java:95)
>  at 
> org.apache.ratis.client.impl.RaftClientImpl$PendingAsyncRequest.setReply(RaftClientImpl.java:75)
>  at 
> org.apache.ratis.util.SlidingWindow$RequestMap.setReply(SlidingWindow.java:127)
>  at 
> 

[jira] [Updated] (HDDS-1303) Support ACL for Ozone

2019-03-21 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1303:
---
Target Version/s: 0.5.0  (was: 0.4.0)

> Support ACL for Ozone
> -
>
> Key: HDDS-1303
> URL: https://issues.apache.org/jira/browse/HDDS-1303
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Anu Engineer
>Priority: Major
>
> add native acl support for OM operations



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1303) Support ACL for Ozone

2019-03-21 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-1303:
--

Assignee: Anu Engineer  (was: Ajay Kumar)

> Support ACL for Ozone
> -
>
> Key: HDDS-1303
> URL: https://issues.apache.org/jira/browse/HDDS-1303
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Anu Engineer
>Priority: Major
>
> add native acl support for OM operations



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1314) TestOzoneDelegationTokenSecretManager is failing because of NPE

2019-03-21 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-1314:
--

Assignee: Ajay Kumar

> TestOzoneDelegationTokenSecretManager is failing because of NPE
> ---
>
> Key: HDDS-1314
> URL: https://issues.apache.org/jira/browse/HDDS-1314
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Security
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Ajay Kumar
>Priority: Major
>
> TestOzoneDelegationTokenSecretManager is failing because of NPE
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.ozone.security.TestOzoneDelegationTokenSecretManager.tearDown(TestOzoneDelegationTokenSecretManager.java:133)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1318) Fix MalformedTracerStateStringException on DN logs

2019-03-21 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-1318:
--

Assignee: Xiaoyu Yao

> Fix MalformedTracerStateStringException on DN logs
> --
>
> Key: HDDS-1318
> URL: https://issues.apache.org/jira/browse/HDDS-1318
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> Have seen many warnings on DN logs. This ticket is opened to track the 
> investigation and fix for this.
> {code}
> 2019-03-20 19:01:33 WARN 
> PropagationRegistry$ExceptionCatchingExtractorDecorator:60 - Error when 
> extracting SpanContext from carrier. Handling gracefully.
> io.jaegertracing.internal.exceptions.MalformedTracerStateStringException: 
> String does not match tracer state format: 
> 2c919331-9a51-4bc4-acee-df57a8dcecf0
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:42)
>  at org.apache.hadoop.hdds.tracing.StringCodec.extract(StringCodec.java:32)
>  at 
> io.jaegertracing.internal.PropagationRegistry$ExceptionCatchingExtractorDecorator.extract(PropagationRegistry.java:57)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:208)
>  at io.jaegertracing.internal.JaegerTracer.extract(JaegerTracer.java:61)
>  at io.opentracing.util.GlobalTracer.extract(GlobalTracer.java:143)
>  at 
> org.apache.hadoop.hdds.tracing.TracingUtil.importAndCreateScope(TracingUtil.java:96)
>  at 
> org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:73)
>  at 
> org.apache.hadoop.ozone.container.common.transport.server.GrpcXceiverService$1.onNext(GrpcXceiverService.java:61)
>  at 
> org.apache.ratis.thirdparty.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:248)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.ratis.thirdparty.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
>  at 
> org.apache.ratis.thirdparty.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>  at 
> org.apache.hadoop.hdds.tracing.GrpcServerInterceptor$1.onMessage(GrpcServerInterceptor.java:46)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:263)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:686)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>  at 
> org.apache.ratis.thirdparty.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-1317) KeyOutputStream#write throws ArrayIndexOutOfBoundsException when running RandomWrite MR examples

2019-03-21 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer reassigned HDDS-1317:
--

Assignee: Xiaoyu Yao

> KeyOutputStream#write throws ArrayIndexOutOfBoundsException when running 
> RandomWrite MR examples
> 
>
> Key: HDDS-1317
> URL: https://issues.apache.org/jira/browse/HDDS-1317
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.4.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> Repro steps:
> {code} 
> jar $HADOOP_MAPRED_HOME/hadoop-mapreduce-examples-*.jar randomwriter 
> -Dtest.randomwrite.total_bytes=1000  o3fs://bucket1.vol1/randomwrite.out
> {code}
>  
> Error Stack:
> {code}
> 2019-03-20 19:02:37 INFO Job:1686 - Task Id : 
> attempt_1553108378906_0002_m_00_0, Status : FAILED
> Error: java.lang.ArrayIndexOutOfBoundsException: -5
>  at java.util.ArrayList.elementData(ArrayList.java:422)
>  at java.util.ArrayList.get(ArrayList.java:435)
>  at 
> org.apache.hadoop.hdds.scm.storage.BufferPool.getBuffer(BufferPool.java:45)
>  at 
> org.apache.hadoop.hdds.scm.storage.BufferPool.allocateBufferIfNeeded(BufferPool.java:59)
>  at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:215)
>  at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:130)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:311)
>  at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:273)
>  at 
> org.apache.hadoop.fs.ozone.OzoneFSOutputStream.write(OzoneFSOutputStream.java:46)
>  at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
>  at java.io.DataOutputStream.write(DataOutputStream.java:107)
>  at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1444)
>  at 
> org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat$1.write(SequenceFileOutputFormat.java:83)
>  at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:670)
>  at 
> org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>  at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
>  at 
> org.apache.hadoop.examples.RandomWriter$RandomMapper.map(RandomWriter.java:199)
>  at 
> org.apache.hadoop.examples.RandomWriter$RandomMapper.map(RandomWriter.java:165)
>  at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
>  at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> {code}
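
An ArrayIndexOutOfBoundsException with a negative index out of ArrayList.get means the
caller computed the list index itself and let it drop below zero: on older JDKs such as
the one in this trace, a negative index passes ArrayList's size check and only fails at
the backing-array access in elementData. The sketch below is a hypothetical illustration
of that failure shape only; it is not the actual
org.apache.hadoop.hdds.scm.storage.BufferPool code, and the int byte counter, the
division by buffer size, and the overflow scenario are all assumptions made up to mirror
the frames in the stack trace above.

{code}
// Hypothetical sketch only: NOT the real Ozone BufferPool. It shows one way a
// caller-computed index can become negative and surface as
// ArrayIndexOutOfBoundsException: -5 from ArrayList.elementData, as in the trace.
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class BufferPoolSketch {
  private final List<ByteBuffer> bufferList = new ArrayList<>();
  private final int bufferSize;

  public BufferPoolSketch(int bufferSize) {
    this.bufferSize = bufferSize;
  }

  // Loosely mirrors the getBuffer(...) frame: no bounds check before the list access,
  // so a negative index blows up inside ArrayList.
  ByteBuffer getBuffer(int index) {
    return bufferList.get(index);
  }

  // Loosely mirrors allocateBufferIfNeeded(...): the index is derived from a running
  // byte count. If that count is an int (an assumption here), it can overflow or be
  // corrupted, and the derived index goes negative.
  ByteBuffer allocateBufferIfNeeded(int writtenBytes) {
    int index = writtenBytes / bufferSize;  // negative once writtenBytes is negative
    if (index == bufferList.size()) {
      bufferList.add(ByteBuffer.allocate(bufferSize));
    }
    return getBuffer(index);
  }

  public static void main(String[] args) {
    BufferPoolSketch pool = new BufferPoolSketch(4 * 1024 * 1024);
    pool.allocateBufferIfNeeded(0);                     // normal path: allocates buffer 0
    pool.allocateBufferIfNeeded(-5 * 4 * 1024 * 1024);  // bad counter: index -5, then AIOOBE
  }
}
{code}

Whatever the real root cause turns out to be, a fix of the same shape applies: validate
the computed index (or the counter it is derived from) before the list access, so the
error is reported at the write path rather than deep inside ArrayList.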



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14377) Incorrect unit abbreviations shown for fmt_bytes

2019-03-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796459#comment-16796459
 ] 

Anu Engineer edited comment on HDFS-14377 at 3/19/19 10:14 PM:
---

[~dannytbecker] That would mean the UI says GB and means 1500 while the command line 
says GB and means 1536 – I am not sure that would fly. Even this current change, I am 
sure, is eventually going to be objected to by someone saying the UI shows MiB while 
some tool somewhere prints MB. If you would like to make a change like this, you will 
end up sweeping the code base to see the usage (there are quite a few places that I am 
familiar with) and then trying to convert all of them (but that runs into the Hadoop 
app-compat rules, so touching any tool that writes output to the screen is a complex 
problem by itself).

Then of course, you have still not met the folks who would tell you that KB and MB 
look much better than KiB and MiB, etc.

In other words, it is not an easy problem or a single-line fix – unfortunately, a 
change like this is one of the hardest to make in the Hadoop world. Trust me, I have 
been there, done that, and backed off after the community hit me on my head 
:). I am just saving you some pain.
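
For concreteness, the 1500 vs. 1536 above is just the decimal prefix versus the binary 
prefix applied to the same nominal "1.5 tera":

{noformat}
1.5 TB  (decimal, SI)  = 1.5 x 10^12 bytes = 1500 x 10^9 bytes = 1500 GB
1.5 TiB (binary, IEC)  = 1.5 x 2^40  bytes = 1536 x 2^30 bytes = 1536 GiB
{noformat}

A tool that divides by 1024 but prints "GB" therefore reports 1536 where a strictly 
decimal reader expects 1500, which is exactly the ambiguity being discussed.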

 

 


was (Author: anu):
[~dannytbecker] That would mean that UI says GB and it means 1500, command line 
says GB and it would mean 1536 – I am not sure that would fly. Even this 
current change, I am sure is going to be objected eventually by someone saying 
the UI is showing up as MiB and some tool somewhere prints out in MB. If you 
would like to make a change like this, you will end up sweeping the code base 
to see the usage (there are quite I few places that I am familiar with) and 
then trying to convert all of them (but that runs into the Hadoop app-compact 
rules, so touching any tool that has an output to screen is a complex problem 
by itself).

Then of-course, you have still not met the folks who would tell you that KB, MB 
looks much better than KiB and MiB etc.

In other words, it is not an easy problem or a single line fix – Unfortunately, 
a change like this is one of the hardest to make in the Hadoop world, trust me, 
been there, done and that and backed off since the community hit me on my head 
:). I am just saving you some pain 

 

 

> Incorrect unit abbreviations shown for fmt_bytes
> 
>
> Key: HDFS-14377
> URL: https://issues.apache.org/jira/browse/HDFS-14377
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Trivial
> Attachments: HDFS-14377.000.patch
>
>
> The function fmt_bytes shows the abbreviations for Terabyte, Petabyte, etc., 
> the standard metric-system units for data storage. The function, however, 
> divides by a factor of 1024, which is the factor used for Pebibyte, Tebibyte, 
> etc. Change the abbreviations from TB, PB, etc. to TiB, PiB, etc.
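
The description above pins down the intended behaviour completely: keep the 
divide-by-1024 loop and switch the labels to the IEC names. The sketch below is 
illustrative only; the real fmt_bytes is a JavaScript helper in the HDFS web UI, so the 
Java class and method names here are made up for the example.

{code}
// Illustrative sketch only; not the real (JavaScript) fmt_bytes helper.
// Point of the patch: a value reduced by factors of 1024 should carry the
// binary (IEC) unit names KiB, MiB, GiB, TiB, PiB rather than KB, MB, ...
import java.util.Locale;

public final class FmtBytesSketch {
  private static final String[] IEC_UNITS = {"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"};

  static String fmtBytes(long bytes) {
    double value = bytes;
    int unit = 0;
    while (value >= 1024 && unit < IEC_UNITS.length - 1) {
      value /= 1024;                     // binary factor, therefore binary unit name
      unit++;
    }
    return String.format(Locale.ROOT, "%.2f %s", value, IEC_UNITS[unit]);
  }

  public static void main(String[] args) {
    // 1,649,267,441,664 bytes = 1.5 * 2^40, printed as "1.50 TiB" (= 1536 GiB).
    // With decimal prefixes the same quantity would be about 1.65 TB, which is
    // why keeping the "TB" label next to a divide-by-1024 loop is misleading.
    System.out.println(fmtBytes(1649267441664L));
  }
}
{code}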



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14377) Incorrect unit abbreviations shown for fmt_bytes

2019-03-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796459#comment-16796459
 ] 

Anu Engineer edited comment on HDFS-14377 at 3/19/19 10:18 PM:
---

[~dannytbecker] That would mean the UI says GB and means 1500 while the command line 
says GB and means 1536 – I am not sure that would fly. Even this current change, I am 
sure, is eventually going to be objected to by someone saying the UI shows MiB while 
some tool somewhere prints MB. If you would like to make a change like this, you will 
end up sweeping the code base to see the usage (there are quite a few places that I am 
familiar with) and then trying to convert all of them (but that runs into the Hadoop 
app-compat rules, so touching any tool that writes output to the screen is a complex 
problem by itself).

Then of course, you have still not met the folks who would tell you that KB and MB 
look much better than KiB and MiB, etc.

In other words, it is not an easy problem or a single-line fix – unfortunately, a 
change like this is one of the hardest to make in the Hadoop world. Trust me, I have 
been there, done that, and backed off after the community hit me on my head :). 
I am just saving you some pain.

 

 


was (Author: anu):
[~dannytbecker] That would mean that UI says GB and it means 1500, command line 
says GB and it would mean 1536 – I am not sure that would fly. Even this 
current change, I am sure is going to be objected eventually by someone saying 
the UI is showing up as MiB and some tool somewhere prints out in MB. If you 
would like to make a change like this, you will end up sweeping the code base 
to see the usage (there are quite a few places that I am familiar with) and 
then trying to convert all of them (but that runs into the Hadoop app-compact 
rules, so touching any tool that has an output to screen is a complex problem 
by itself).

Then of-course, you have still not met the folks who would tell you that KB, MB 
looks much better than KiB and MiB etc.

In other words, it is not an easy problem or a single line fix – Unfortunately, 
a change like this is one of the hardest to make in the Hadoop world, trust me, 
been there, done and that and backed off since the community hit me on my head 
:). I am just saving you some pain 

 

 

> Incorrect unit abbreviations shown for fmt_bytes
> 
>
> Key: HDFS-14377
> URL: https://issues.apache.org/jira/browse/HDFS-14377
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Trivial
> Attachments: HDFS-14377.000.patch
>
>
> The function fmt_bytes shows the abbreviations for Terabyte, Petabyte, etc., 
> the standard metric-system units for data storage. The function, however, 
> divides by a factor of 1024, which is the factor used for Pebibyte, Tebibyte, 
> etc. Change the abbreviations from TB, PB, etc. to TiB, PiB, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14377) Incorrect unit abbreviations shown for fmt_bytes

2019-03-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796459#comment-16796459
 ] 

Anu Engineer commented on HDFS-14377:
-

[~dannytbecker] That would mean the UI says GB and means 1500 while the command line 
says GB and means 1536 – I am not sure that would fly. Even this current change, I am 
sure, is eventually going to be objected to by someone saying the UI shows MiB while 
some tool somewhere prints MB. If you would like to make a change like this, you will 
end up sweeping the code base to see the usage (there are quite a few places that I am 
familiar with) and then trying to convert all of them (but that runs into the Hadoop 
app-compat rules, so touching any tool that writes output to the screen is a complex 
problem by itself).

Then of course, you have still not met the folks who would tell you that KB and MB 
look much better than KiB and MiB, etc.

In other words, it is not an easy problem or a single-line fix – unfortunately, a 
change like this is one of the hardest to make in the Hadoop world. Trust me, I have 
been there, done that, and backed off after the community hit me on my head 
:). I am just saving you some pain.

 

 

> Incorrect unit abbreviations shown for fmt_bytes
> 
>
> Key: HDFS-14377
> URL: https://issues.apache.org/jira/browse/HDFS-14377
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Trivial
> Attachments: HDFS-14377.000.patch
>
>
> The function fmt_bytes shows the abbreviations for Terabyte, Petabyte, etc., 
> the standard metric-system units for data storage. The function, however, 
> divides by a factor of 1024, which is the factor used for Pebibyte, Tebibyte, 
> etc. Change the abbreviations from TB, PB, etc. to TiB, PiB, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14377) Incorrect unit abbreviations shown for fmt_bytes

2019-03-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796309#comment-16796309
 ] 

Anu Engineer edited comment on HDFS-14377 at 3/19/19 5:38 PM:
--

[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
debated whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
 *First of all, I am a friend of Danny. He may not know me, but trust me on 
this.*

From my experiences, this is how this patch is going to evolve. 
Someone will look at this (let us assume it is me for the time being) and will 
comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, 
I have always wanted Hadoop to be ISO compliant. 
+1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, 
would you please fix the command line tool xyz, 
that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) 
decides to make changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the app-compact rules. 
You *cannot* make this change. This can *only* be done 
in a major revision, oh, btw, if you are planning to 
make this change only in the UI, that is now inconsistent. 
Some places of the HDFS speaks ISO and parts speak 
"custom" memory Units. That is a mess. 
I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. 
@Erik, what do you think?

Erik: Oh.. that is right, let us open a new branch, 
Hadoop 4.0  to commit this.

Engineer: Perhaps we should start a discussion thread 
in the mailing list to open a new branch? 

Anu: good, Idea, let me fire off a thread.
[100 reply thread ensues -- and finally, after 4 months, 
a new branch is opened, and Danny is sitting there 
wondering what the s$*t did I do? it was a one-line patch]

Anu: @Danny, Could you please rebase this patch, 
and btw, we found three other tools that need the fix. 
Can you please take care of that while you are at this? 

Gandalf(some wise-committer in Hadoop): 
This change impacts Hadoop Common, that means this 
change also impacts, Spark, Sqoop, YARN, HBase 
and Kitchen Sink. You cannot make this change 
without considering the downstream impact.




{noformat}
Danny being my friend and a new contributor, I was just trying to be nice and 
helpful and steering him gently away from the mine field he was about to step 
in. I did not even start the discussion on whether we should be a 
"traditionalist" vs. "modernist" and use "KB" vs. "KiB". I am sure some other 
committer will add that perspective.

Given all this, *I am +1 on this change*. I hope my parody of our lives will 
motivate us to stay away from a long discussion on the merits of this one-line 
patch.

Yesterday night, I was truly in a good mood, having just seen how humanity 
saves dragons and was generally feeling good and charitable. In that moment of 
weakness, I decided to be kind to Danny and save him some pain.

Danny, I hope you see the wisdom in being my friend and hopefully you will be 
nice enough to buy me some beer when we finally meet.

Ps. Truly, I have nothing better to do :( that is a sad state of my life :(
 I need to find something better to do than comment on random JIRAs.


was (Author: anu):
[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
debated whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
 *First of all, I am a friend of Danny. He may not know me, but trust me on 
this.*

From my experiences, this is how this patch is going to evolve. 
 Someone will look at this(at us assume it is me for time being) and will 
comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, 
I have always wanted Hadoop to be ISO compliant. 
+1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, 
would you please fix the command line tool xyz, 
that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) 
decides to make changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the app-compact rules. 
You *cannot* make this change. This can *only* be done 
in a major revision, oh, btw, if you are planning to 
make this change only in the UI, that is now inconsistent. 
Some places of the HDFS speaks ISO and parts speak 
"custom" memory Units. That is a mess. 
I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. 
@Erik, what do you think?

Erik: Oh.. that is 

[jira] [Comment Edited] (HDFS-14377) Incorrect unit abbreviations shown for fmt_bytes

2019-03-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796309#comment-16796309
 ] 

Anu Engineer edited comment on HDFS-14377 at 3/19/19 5:28 PM:
--

[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
thought whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
 *
 First of all, I am a friend of Danny. He may not know me, but trust me on 
this.*

From my experiences, this is how is patch is going to evolve. 
 Someone will look at this(at us assume it is me for time being) and will 
comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, 
I have always wanted Hadoop to be ISO compliant. 
+1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, 
would you please fix the command line tool xyz, 
that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) 
decides to make changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the apt-compact rules. 
You *cannot* make this change. This can *only* be done 
in a major revision, oh, btw, if you are planning to 
make this change only in the UI, that is now inconsistent. 
Some places of the HDFS speaks ISO and parts speak 
"custom" memory Units. That is a mess. 
I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. 
@Erik, what do you think?

Erik: Oh.. that is right, let us open a new branch, 
Hadoop 4.0  to commit this.

Engineer: Perhaps we should start a discussion thread 
in the mailing list to open a new branch? 

Anu: good, Idea, let me fire off a thread.
[100 reply thread ensues -- and finally, after 4 months, 
a new branch is opened, and Danny is sitting there 
wondering what the s$*t did I do? it was a one-line patch]

Anu: @Danny, Could you please rebase this patch, 
and btw, we found three other tools that need the fix. 
Can you please take care of that while you are at this? 

Gandalf(some wise-committer in Hadoop): 
This change impacts Hadoop Common, that means this 
change also impacts, Spark, Sqoop, YARN, HBase 
and Kitchen Sink. You cannot make this change 
without considering the downstream impact.




{noformat}
Danny being my friend and a new contributor, I was just trying to be nice and 
helpful and steering him gently away from the mine field was about to step in. 
I did not even start the discussion on whether we should be a "traditionalist" 
vs. "modernist" and use "KB" vs. "KiB". I am sure some other committer will add 
that perspective.

Given all this,* I am +1 on this change*, I hope my parody of our lives will 
motivate us to stay away from a long discussion on the merits of this one line 
patch.

Yesterday night, I was truly in a good mood, having just seen how humanity 
saves dragons and was generally feeling good and charitable. In that moment of 
weakness, I decided to be kind of Danny and save him some pain.

Danny, I hope you see the wisdom in being my friend and hopefully you will be 
nice enough to buy me some beer when we finally meet.

Ps. Truly, I have nothing better to do :( that is a sad state of my life :(
 I need to find something better to do than comment on random JIRAs.


was (Author: anu):
[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
thought whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
*
First of all, I am a friend of Danny. He may not know me, but trust me on this.*

From my experiences, this is how is patch is going to evolve. 
Someone will look at this(at us assume it is me for time being) and will comment

{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, I have always wanted 
Hadoop to be ISO compliant. +1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, would you please fix 
the command line tool xyz, that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) decides to make 
changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the apt-compact rules. You *cannot* make this 
change. This can *only* be done in a major revision, oh, btw, if you are 
planning to make this change only in the UI, that is now inconsistent. Some 
places of the HDFS speaks ISO and parts speak "custom" memory Units. That is a 
mess. I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. @Erik, what do you think?

Erik: Oh.. that is right, let us 

[jira] [Comment Edited] (HDFS-14377) Incorrect unit abbreviations shown for fmt_bytes

2019-03-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796309#comment-16796309
 ] 

Anu Engineer edited comment on HDFS-14377 at 3/19/19 5:34 PM:
--

[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
debated whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
 *First of all, I am a friend of Danny. He may not know me, but trust me on 
this.*

From my experiences, this is how this patch is going to evolve. 
 Someone will look at this(at us assume it is me for time being) and will 
comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, 
I have always wanted Hadoop to be ISO compliant. 
+1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, 
would you please fix the command line tool xyz, 
that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) 
decides to make changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the app-compact rules. 
You *cannot* make this change. This can *only* be done 
in a major revision, oh, btw, if you are planning to 
make this change only in the UI, that is now inconsistent. 
Some places of the HDFS speaks ISO and parts speak 
"custom" memory Units. That is a mess. 
I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. 
@Erik, what do you think?

Erik: Oh.. that is right, let us open a new branch, 
Hadoop 4.0  to commit this.

Engineer: Perhaps we should start a discussion thread 
in the mailing list to open a new branch? 

Anu: good, Idea, let me fire off a thread.
[100 reply thread ensues -- and finally, after 4 months, 
a new branch is opened, and Danny is sitting there 
wondering what the s$*t did I do? it was a one-line patch]

Anu: @Danny, Could you please rebase this patch, 
and btw, we found three other tools that need the fix. 
Can you please take care of that while you are at this? 

Gandalf(some wise-committer in Hadoop): 
This change impacts Hadoop Common, that means this 
change also impacts, Spark, Sqoop, YARN, HBase 
and Kitchen Sink. You cannot make this change 
without considering the downstream impact.




{noformat}
Danny being my friend and a new contributor, I was just trying to be nice and 
helpful and steering him gently away from the mine field he was about to step 
in. I did not even start the discussion on whether we should be a 
"traditionalist" vs. "modernist" and use "KB" vs. "KiB". I am sure some other 
committer will add that perspective.

Given all this,*I am +1 on this change*, I hope my parody of our lives will 
motivate us to stay away from a long discussion on the merits of this one line 
patch.

Yesterday night, I was truly in a good mood, having just seen how humanity 
saves dragons and was generally feeling good and charitable. In that moment of 
weakness, I decided to be kind to Danny and save him some pain.

Danny, I hope you see the wisdom in being my friend and hopefully you will be 
nice enough to buy me some beer when we finally meet.

Ps. Truly, I have nothing better to do :( that is a sad state of my life :(
 I need to find something better to do than comment on random JIRAs.


was (Author: anu):
[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
debated whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
 *First of all, I am a friend of Danny. He may not know me, but trust me on 
this.*

From my experiences, this is how this patch is going to evolve. 
 Someone will look at this(at us assume it is me for time being) and will 
comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, 
I have always wanted Hadoop to be ISO compliant. 
+1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, 
would you please fix the command line tool xyz, 
that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) 
decides to make changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the app-compact rules. 
You *cannot* make this change. This can *only* be done 
in a major revision, oh, btw, if you are planning to 
make this change only in the UI, that is now inconsistent. 
Some places of the HDFS speaks ISO and parts speak 
"custom" memory Units. That is a mess. 
I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. 
@Erik, what do you think?

Erik: Oh.. that is 

[jira] [Comment Edited] (HDFS-14377) Incorrect unit abbreviations shown for fmt_bytes

2019-03-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796309#comment-16796309
 ] 

Anu Engineer edited comment on HDFS-14377 at 3/19/19 5:33 PM:
--

[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
debated whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
 *First of all, I am a friend of Danny. He may not know me, but trust me on 
this.*

From my experiences, this is how this patch is going to evolve. 
 Someone will look at this(at us assume it is me for time being) and will 
comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, 
I have always wanted Hadoop to be ISO compliant. 
+1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, 
would you please fix the command line tool xyz, 
that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) 
decides to make changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the app-compact rules. 
You *cannot* make this change. This can *only* be done 
in a major revision, oh, btw, if you are planning to 
make this change only in the UI, that is now inconsistent. 
Some places of the HDFS speaks ISO and parts speak 
"custom" memory Units. That is a mess. 
I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. 
@Erik, what do you think?

Erik: Oh.. that is right, let us open a new branch, 
Hadoop 4.0  to commit this.

Engineer: Perhaps we should start a discussion thread 
in the mailing list to open a new branch? 

Anu: good, Idea, let me fire off a thread.
[100 reply thread ensues -- and finally, after 4 months, 
a new branch is opened, and Danny is sitting there 
wondering what the s$*t did I do? it was a one-line patch]

Anu: @Danny, Could you please rebase this patch, 
and btw, we found three other tools that need the fix. 
Can you please take care of that while you are at this? 

Gandalf(some wise-committer in Hadoop): 
This change impacts Hadoop Common, that means this 
change also impacts, Spark, Sqoop, YARN, HBase 
and Kitchen Sink. You cannot make this change 
without considering the downstream impact.




{noformat}
Danny being my friend and a new contributor, I was just trying to be nice and 
helpful and steering him gently away from the mine field was about to step in. 
I did not even start the discussion on whether we should be a "traditionalist" 
vs. "modernist" and use "KB" vs. "KiB". I am sure some other committer will add 
that perspective.

Given all this,* I am +1 on this change*, I hope my parody of our lives will 
motivate us to stay away from a long discussion on the merits of this one line 
patch.

Yesterday night, I was truly in a good mood, having just seen how humanity 
saves dragons and was generally feeling good and charitable. In that moment of 
weakness, I decided to be kind of Danny and save him some pain.

Danny, I hope you see the wisdom in being my friend and hopefully you will be 
nice enough to buy me some beer when we finally meet.

Ps. Truly, I have nothing better to do :( that is a sad state of my life :(
 I need to find something better to do than comment on random JIRAs.


was (Author: anu):
[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
debated whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
 *First of all, I am a friend of Danny. He may not know me, but trust me on 
this.*

From my experiences, this is how this patch is going to evolve. 
 Someone will look at this(at us assume it is me for time being) and will 
comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, 
I have always wanted Hadoop to be ISO compliant. 
+1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, 
would you please fix the command line tool xyz, 
that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) 
decides to make changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the app-compact rules. 
You *cannot* make this change. This can *only* be done 
in a major revision, oh, btw, if you are planning to 
make this change only in the UI, that is now inconsistent. 
Some places of the HDFS speaks ISO and parts speak 
"custom" memory Units. That is a mess. 
I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. 
@Erik, what do you think?

Erik: Oh.. that is right, 

[jira] [Comment Edited] (HDFS-14377) Incorrect unit abbreviations shown for fmt_bytes

2019-03-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796309#comment-16796309
 ] 

Anu Engineer edited comment on HDFS-14377 at 3/19/19 5:26 PM:
--

[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
thought whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
*
First of all, I am a friend of Danny. He may not know me, but trust me on this.*

From my experiences, this is how is patch is going to evolve. 
Someone will look at this(at us assume it is me for time being) and will comment

{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, I have always wanted 
Hadoop to be ISO compliant. +1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, would you please fix 
the command line tool xyz, that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) decides to make 
changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the apt-compact rules. You *cannot* make this 
change. This can *only* be done in a major revision, oh, btw, if you are 
planning to make this change only in the UI, that is now inconsistent. Some 
places of the HDFS speaks ISO and parts speak "custom" memory Units. That is a 
mess. I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. @Erik, what do you think?

Erik: Oh.. that is right, let us open a new branch, Hadoop 4.0  to commit this.

Engineer: Perhaps we should start a discussion thread in the mailing list to 
open a new branch? 

Anu: good, Idea, let me fire off a thread.
[ 100 reply thread ensues -- and finally, after 4 months, a new branch is 
opened, and Danny is sitting there wondering what the s$*t did I do? it was a 
one-line patch]

Anu: @Danny, Could you please rebase this patch, and btw, we found three other 
tools that need the fix. Can you please take care of that while you are at 
this? 

Gandalf(some wise-committer in Hadoop) : This change impacts Hadoop Common, 
that means this change also impacts, Spark, Sqoop, YARN, HBase and Kitchen 
Sink. You cannot make this change without considering the downstream impact.




{noformat}
Danny being my friend and a new contributor, I was just trying to be nice and 
helpful and  steering him gently away from the mine field was about to step in. 
I did not even start the discussion on whether we should be a "traditionalist" 
vs. "modernist" and use "KB" vs. "KiB". I am sure some other committer will add 
that perspective.

Given all this,* I am +1 on this change*, I hope my parody of our lives will 
motivate us to stay away from a long discussion on the merits of this one line 
patch. 

Yesterday night, I was truly in a good mood, having just seen how humanity 
saves dragons and was generally feeling good and charitable. In that moment of 
weakness, I decided to be kind of Danny and save him some pain.

Danny, I hope you see the wisdom in being my friend and hopefully you will be 
nice enough to buy me some beer when we finally meet.

Ps. Truly, I have nothing better to do :(  that is a sad state of my life :(
I need to find something better to do than comment on random JIRAs.


was (Author: anu):
[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
thought whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
*
First of all, I am a friend of Danny. He may not know me, but trust me on this.*

From my experiences, this is how is patch is going to evolve. 
Someone will look at this(at us assume it is me for time being) and will comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, I have always wanted 
Hadoop to be ISO compliant. +1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, would you please fix 
the command line tool xyz, that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) decides to make 
changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the apt-compact rules. You *cannot* make this 
change. This can *only* be done in a major revision, oh, btw, if you are 
planning to make this change only in the UI, that is now inconsistent. Some 
places of the HDFS speaks ISO and parts speak "custom" memory Units. That is a 
mess. I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. @Erik, what do you think?

Erik: Oh.. that is right, let us open a new 

[jira] [Comment Edited] (HDFS-14377) Incorrect unit abbreviations shown for fmt_bytes

2019-03-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796309#comment-16796309
 ] 

Anu Engineer edited comment on HDFS-14377 at 3/19/19 5:32 PM:
--

[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
debated whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
 *First of all, I am a friend of Danny. He may not know me, but trust me on 
this.*

From my experiences, this is how this patch is going to evolve. 
 Someone will look at this(at us assume it is me for time being) and will 
comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, 
I have always wanted Hadoop to be ISO compliant. 
+1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, 
would you please fix the command line tool xyz, 
that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) 
decides to make changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the app-compact rules. 
You *cannot* make this change. This can *only* be done 
in a major revision, oh, btw, if you are planning to 
make this change only in the UI, that is now inconsistent. 
Some places of the HDFS speaks ISO and parts speak 
"custom" memory Units. That is a mess. 
I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. 
@Erik, what do you think?

Erik: Oh.. that is right, let us open a new branch, 
Hadoop 4.0  to commit this.

Engineer: Perhaps we should start a discussion thread 
in the mailing list to open a new branch? 

Anu: good, Idea, let me fire off a thread.
[100 reply thread ensues -- and finally, after 4 months, 
a new branch is opened, and Danny is sitting there 
wondering what the s$*t did I do? it was a one-line patch]

Anu: @Danny, Could you please rebase this patch, 
and btw, we found three other tools that need the fix. 
Can you please take care of that while you are at this? 

Gandalf(some wise-committer in Hadoop): 
This change impacts Hadoop Common, that means this 
change also impacts, Spark, Sqoop, YARN, HBase 
and Kitchen Sink. You cannot make this change 
without considering the downstream impact.




{noformat}
Danny being my friend and a new contributor, I was just trying to be nice and 
helpful and steering him gently away from the mine field was about to step in. 
I did not even start the discussion on whether we should be a "traditionalist" 
vs. "modernist" and use "KB" vs. "KiB". I am sure some other committer will add 
that perspective.

Given all this,* I am +1 on this change*, I hope my parody of our lives will 
motivate us to stay away from a long discussion on the merits of this one line 
patch.

Yesterday night, I was truly in a good mood, having just seen how humanity 
saves dragons and was generally feeling good and charitable. In that moment of 
weakness, I decided to be kind of Danny and save him some pain.

Danny, I hope you see the wisdom in being my friend and hopefully you will be 
nice enough to buy me some beer when we finally meet.

Ps. Truly, I have nothing better to do :( that is a sad state of my life :(
 I need to find something better to do than comment on random JIRAs.


was (Author: anu):
[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
debated whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
 *First of all, I am a friend of Danny. He may not know me, but trust me on 
this.*

From my experiences, this is how this patch is going to evolve. 
 Someone will look at this(at us assume it is me for time being) and will 
comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, 
I have always wanted Hadoop to be ISO compliant. 
+1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, 
would you please fix the command line tool xyz, 
that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) 
decides to make changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the apt-compact rules. 
You *cannot* make this change. This can *only* be done 
in a major revision, oh, btw, if you are planning to 
make this change only in the UI, that is now inconsistent. 
Some places of the HDFS speaks ISO and parts speak 
"custom" memory Units. That is a mess. 
I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. 
@Erik, what do you think?

Erik: Oh.. that is right, 

[jira] [Comment Edited] (HDFS-14377) Incorrect unit abbreviations shown for fmt_bytes

2019-03-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796309#comment-16796309
 ] 

Anu Engineer edited comment on HDFS-14377 at 3/19/19 5:31 PM:
--

[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
debated whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
 *First of all, I am a friend of Danny. He may not know me, but trust me on 
this.*

From my experiences, this is how this patch is going to evolve. 
 Someone will look at this(at us assume it is me for time being) and will 
comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, 
I have always wanted Hadoop to be ISO compliant. 
+1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, 
would you please fix the command line tool xyz, 
that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) 
decides to make changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the apt-compact rules. 
You *cannot* make this change. This can *only* be done 
in a major revision, oh, btw, if you are planning to 
make this change only in the UI, that is now inconsistent. 
Some places of the HDFS speaks ISO and parts speak 
"custom" memory Units. That is a mess. 
I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. 
@Erik, what do you think?

Erik: Oh.. that is right, let us open a new branch, 
Hadoop 4.0  to commit this.

Engineer: Perhaps we should start a discussion thread 
in the mailing list to open a new branch? 

Anu: good, Idea, let me fire off a thread.
[100 reply thread ensues -- and finally, after 4 months, 
a new branch is opened, and Danny is sitting there 
wondering what the s$*t did I do? it was a one-line patch]

Anu: @Danny, Could you please rebase this patch, 
and btw, we found three other tools that need the fix. 
Can you please take care of that while you are at this? 

Gandalf(some wise-committer in Hadoop): 
This change impacts Hadoop Common, that means this 
change also impacts, Spark, Sqoop, YARN, HBase 
and Kitchen Sink. You cannot make this change 
without considering the downstream impact.




{noformat}
Danny being my friend and a new contributor, I was just trying to be nice and 
helpful and steering him gently away from the mine field was about to step in. 
I did not even start the discussion on whether we should be a "traditionalist" 
vs. "modernist" and use "KB" vs. "KiB". I am sure some other committer will add 
that perspective.

Given all this,* I am +1 on this change*, I hope my parody of our lives will 
motivate us to stay away from a long discussion on the merits of this one line 
patch.

Yesterday night, I was truly in a good mood, having just seen how humanity 
saves dragons and was generally feeling good and charitable. In that moment of 
weakness, I decided to be kind of Danny and save him some pain.

Danny, I hope you see the wisdom in being my friend and hopefully you will be 
nice enough to buy me some beer when we finally meet.

Ps. Truly, I have nothing better to do :( that is a sad state of my life :(
 I need to find something better to do than comment on random JIRAs.


was (Author: anu):
[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
debated whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
 *First of all, I am a friend of Danny. He may not know me, but trust me on 
this.*

From my experiences, this is how is patch is going to evolve. 
 Someone will look at this(at us assume it is me for time being) and will 
comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, 
I have always wanted Hadoop to be ISO compliant. 
+1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, 
would you please fix the command line tool xyz, 
that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) 
decides to make changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the apt-compact rules. 
You *cannot* make this change. This can *only* be done 
in a major revision, oh, btw, if you are planning to 
make this change only in the UI, that is now inconsistent. 
Some places of the HDFS speaks ISO and parts speak 
"custom" memory Units. That is a mess. 
I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. 
@Erik, what do you think?

Erik: Oh.. that is right, 

[jira] [Comment Edited] (HDFS-14377) Incorrect unit abbreviations shown for fmt_bytes

2019-03-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796309#comment-16796309
 ] 

Anu Engineer edited comment on HDFS-14377 at 3/19/19 5:30 PM:
--

[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
debated whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
 *
 First of all, I am a friend of Danny. He may not know me, but trust me on 
this.*

From my experiences, this is how is patch is going to evolve. 
 Someone will look at this(at us assume it is me for time being) and will 
comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, 
I have always wanted Hadoop to be ISO compliant. 
+1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, 
would you please fix the command line tool xyz, 
that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) 
decides to make changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the apt-compact rules. 
You *cannot* make this change. This can *only* be done 
in a major revision, oh, btw, if you are planning to 
make this change only in the UI, that is now inconsistent. 
Some places of the HDFS speaks ISO and parts speak 
"custom" memory Units. That is a mess. 
I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. 
@Erik, what do you think?

Erik: Oh.. that is right, let us open a new branch, 
Hadoop 4.0  to commit this.

Engineer: Perhaps we should start a discussion thread 
in the mailing list to open a new branch? 

Anu: good, Idea, let me fire off a thread.
[100 reply thread ensues -- and finally, after 4 months, 
a new branch is opened, and Danny is sitting there 
wondering what the s$*t did I do? it was a one-line patch]

Anu: @Danny, Could you please rebase this patch, 
and btw, we found three other tools that need the fix. 
Can you please take care of that while you are at this? 

Gandalf(some wise-committer in Hadoop): 
This change impacts Hadoop Common, that means this 
change also impacts, Spark, Sqoop, YARN, HBase 
and Kitchen Sink. You cannot make this change 
without considering the downstream impact.




{noformat}
Danny being my friend and a new contributor, I was just trying to be nice and 
helpful and steering him gently away from the mine field was about to step in. 
I did not even start the discussion on whether we should be a "traditionalist" 
vs. "modernist" and use "KB" vs. "KiB". I am sure some other committer will add 
that perspective.

Given all this,* I am +1 on this change*, I hope my parody of our lives will 
motivate us to stay away from a long discussion on the merits of this one line 
patch.

Yesterday night, I was truly in a good mood, having just seen how humanity 
saves dragons and was generally feeling good and charitable. In that moment of 
weakness, I decided to be kind of Danny and save him some pain.

Danny, I hope you see the wisdom in being my friend and hopefully you will be 
nice enough to buy me some beer when we finally meet.

Ps. Truly, I have nothing better to do :( that is a sad state of my life :(
 I need to find something better to do than comment on random JIRAs.


was (Author: anu):
[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
thought whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
 *
 First of all, I am a friend of Danny. He may not know me, but trust me on 
this.*

From my experiences, this is how is patch is going to evolve. 
 Someone will look at this(at us assume it is me for time being) and will 
comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, 
I have always wanted Hadoop to be ISO compliant. 
+1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, 
would you please fix the command line tool xyz, 
that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) 
decides to make changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the apt-compact rules. 
You *cannot* make this change. This can *only* be done 
in a major revision, oh, btw, if you are planning to 
make this change only in the UI, that is now inconsistent. 
Some places of the HDFS speaks ISO and parts speak 
"custom" memory Units. That is a mess. 
I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. 
@Erik, what do you think?

Erik: Oh.. that is right, 

[jira] [Comment Edited] (HDFS-14377) Incorrect unit abbreviations shown for fmt_bytes

2019-03-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796309#comment-16796309
 ] 

Anu Engineer edited comment on HDFS-14377 at 3/19/19 5:30 PM:
--

[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
debated whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
 *First of all, I am a friend of Danny. He may not know me, but trust me on 
this.*

From my experiences, this is how is patch is going to evolve. 
 Someone will look at this(at us assume it is me for time being) and will 
comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, 
I have always wanted Hadoop to be ISO compliant. 
+1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, 
would you please fix the command line tool xyz, 
that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) 
decides to make changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the apt-compact rules. 
You *cannot* make this change. This can *only* be done 
in a major revision, oh, btw, if you are planning to 
make this change only in the UI, that is now inconsistent. 
Some places of the HDFS speaks ISO and parts speak 
"custom" memory Units. That is a mess. 
I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. 
@Erik, what do you think?

Erik: Oh.. that is right, let us open a new branch, 
Hadoop 4.0  to commit this.

Engineer: Perhaps we should start a discussion thread 
in the mailing list to open a new branch? 

Anu: good, Idea, let me fire off a thread.
[100 reply thread ensues -- and finally, after 4 months, 
a new branch is opened, and Danny is sitting there 
wondering what the s$*t did I do? it was a one-line patch]

Anu: @Danny, Could you please rebase this patch, 
and btw, we found three other tools that need the fix. 
Can you please take care of that while you are at this? 

Gandalf(some wise-committer in Hadoop): 
This change impacts Hadoop Common, that means this 
change also impacts, Spark, Sqoop, YARN, HBase 
and Kitchen Sink. You cannot make this change 
without considering the downstream impact.




{noformat}
Danny being my friend and a new contributor, I was just trying to be nice and 
helpful and steering him gently away from the mine field was about to step in. 
I did not even start the discussion on whether we should be a "traditionalist" 
vs. "modernist" and use "KB" vs. "KiB". I am sure some other committer will add 
that perspective.

Given all this,* I am +1 on this change*, I hope my parody of our lives will 
motivate us to stay away from a long discussion on the merits of this one line 
patch.

Yesterday night, I was truly in a good mood, having just seen how humanity 
saves dragons and was generally feeling good and charitable. In that moment of 
weakness, I decided to be kind of Danny and save him some pain.

Danny, I hope you see the wisdom in being my friend and hopefully you will be 
nice enough to buy me some beer when we finally meet.

Ps. Truly, I have nothing better to do :( that is a sad state of my life :(
 I need to find something better to do than comment on random JIRAs.


was (Author: anu):
[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
debated whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
 *
 First of all, I am a friend of Danny. He may not know me, but trust me on 
this.*

From my experiences, this is how is patch is going to evolve. 
 Someone will look at this(at us assume it is me for time being) and will 
comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, 
I have always wanted Hadoop to be ISO compliant. 
+1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, 
would you please fix the command line tool xyz, 
that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) 
decides to make changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the apt-compact rules. 
You *cannot* make this change. This can *only* be done 
in a major revision, oh, btw, if you are planning to 
make this change only in the UI, that is now inconsistent. 
Some places of the HDFS speaks ISO and parts speak 
"custom" memory Units. That is a mess. 
I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. 
@Erik, what do you think?

Erik: Oh.. that is right, 

[jira] [Commented] (HDFS-14377) Incorrect unit abbreviations shown for fmt_bytes

2019-03-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796309#comment-16796309
 ] 

Anu Engineer commented on HDFS-14377:
-

[~xkrogen]/ [~dannytbecker],

Thanks for the links and the patch. I am ambivalent about this change. I 
thought whether I should reply and then I remembered that +friends don't let 
friends drive drunk.+ It is also quite obvious from this reply that I have 
nothing of real value to do on a Tuesday morning :).
*
First of all, I am a friend of Danny. He may not know me, but trust me on this.*

From my experiences, this is how this patch is going to evolve. 
Someone will look at this (let us assume it is me for the time being) and will comment
{noformat}
Anu: Danny, Thanks for the patch. This looks Awesome, I have always wanted 
Hadoop to be ISO compliant. +1,  Pending Jenkins.



Erik: Danny, this is so awesome, while you are at this, would you please fix 
the command line tool xyz, that still prints KB and MB instead of KiB and MiB.

Danny being good-natured and nice (remember he is my friend) decides to make 
changes to the tool xyz and puts up patch v2.



Engineer: Btw, Danny this breaks the apt-compact rules. You *cannot* make this 
change. This can *only* be done in a major revision, oh, btw, if you are 
planning to make this change only in the UI, that is now inconsistent. Some 
places of the HDFS speaks ISO and parts speak "custom" memory Units. That is a 
mess. I really think we should do this in the next release.



Anu: oh...I did not think of that, you have a point. @Erik, what do you think?

Erik: Oh.. that is right, let us open a new branch, Hadoop 4.0  to commit this.

Engineer: Perhaps we should start a discussion thread in the mailing list to 
open a new branch? 

Anu: good, Idea, let me fire off a thread.
[ 100 reply thread ensues -- and finally, after 4 months, a new branch is 
opened, and Danny is sitting there wondering what the s$*t did I do? it was a 
one-line patch]

Anu: @Danny, Could you please rebase this patch, and btw, we found three other 
tools that need the fix. Can you please take care of that while you are at 
this? 

Gandalf(some wise-committer in Hadoop) : This change impacts Hadoop Common, 
that means this change also impacts, Spark, Sqoop, YARN, HBase and Kitchen 
Sink. You cannot make this change without considering the downstream impact.




Danny being my friend and a new contributor, I was just trying to be nice and 
helpful and steer him gently away from the minefield he was about to step into. 
I did not even start the discussion on whether we should be a "traditionalist" 
vs. a "modernist" and use "KB" vs. "KiB". I am sure some other committer will 
add that perspective.

Given all this, *I am +1 on this change*. I hope my parody of our lives will 
motivate us to stay away from a long discussion on the merits of this one-line 
patch.

Yesterday night, I was truly in a good mood, having just seen how humanity 
saves dragons, and was generally feeling good and charitable. In that moment of 
weakness, I decided to be kind to Danny and save him some pain.

Danny, I hope you see the wisdom in being my friend and hopefully you will be 
nice enough to buy me some beer when we finally meet.

Ps. Truly, I have nothing better to do :(  that is a sad state of my life :(
I need to find something better to do than comment on random JIRAs.

> Incorrect unit abbreviations shown for fmt_bytes
> 
>
> Key: HDFS-14377
> URL: https://issues.apache.org/jira/browse/HDFS-14377
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Danny Becker
>Assignee: Danny Becker
>Priority: Trivial
> Attachments: HDFS-14377.000.patch
>
>
> The function fmt_bytes shows the abbreviations for Terabyte, Petabyte, etc., 
> the standard metric-system units for data storage. The function, however, 
> divides by a factor of 1024, which is the factor used for Tebibyte, Pebibyte, 
> etc. Change the abbreviations from TB, PB, etc. to TiB, PiB, etc.
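
For illustration only, here is a minimal Java rendition of what the JIRA asks for: a fmt_bytes-style formatter that divides by 1024 and therefore labels the result with binary (IEC) prefixes. This is a hypothetical sketch, not the actual HDFS web UI code.
{code:java}
public final class FmtBytes {
  // Binary (IEC) prefixes, matching the divide-by-1024 arithmetic below.
  private static final String[] UNITS = {"B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"};

  public static String fmtBytes(long bytes) {
    double value = bytes;
    int i = 0;
    // Each division by 1024 moves to the next binary prefix.
    while (value >= 1024 && i < UNITS.length - 1) {
      value /= 1024;
      i++;
    }
    return String.format("%.2f %s", value, UNITS[i]);
  }

  public static void main(String[] args) {
    // 2^40 bytes formats as "1.00 TiB" rather than "1 TB".
    System.out.println(fmtBytes(1099511627776L));
  }
}
{code}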



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14377) Incorrect unit abbreviations shown for fmt_bytes

2019-03-19 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795740#comment-16795740
 ] 

Anu Engineer commented on HDFS-14377:
-

[~dannytbecker] Would you be kind enough to provide a standards-body reference 
that says these are the right definitions, and not what is in the code? I know 
different books, articles, etc. have different usage patterns. So unless we are 
really sure about this -- and there really is a standards body saying this is 
how it should be done, let us not do it. I know some hard disk vendors like 
this, but most software people don't.

> Incorrect unit abbreviations shown for fmt_bytes
> 
>
> Key: HDFS-14377
> URL: https://issues.apache.org/jira/browse/HDFS-14377
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Danny Becker
>Priority: Trivial
> Attachments: HDFS-14377.000.patch
>
>
> The function fmt_bytes shows the abbreviations for Terabyte, Petabyte, etc., 
> the standard metric-system units for data storage. The function, however, 
> divides by a factor of 1024, which is the factor used for Tebibyte, Pebibyte, 
> etc. Change the abbreviations from TB, PB, etc. to TiB, PiB, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1302) Fix SCM CLI does not list container with id 1

2019-03-18 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1302:
---
Target Version/s: 0.5.0  (was: 0.4.0)

> Fix SCM CLI does not list container with id 1
> -
>
> Key: HDDS-1302
> URL: https://issues.apache.org/jira/browse/HDDS-1302
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Vivek Ratnavel Subramanian
>Priority: Minor
>
> In HDDS-1263, listing containers starting from container ID 1 was handled by 
> changing the actual logic of listContainers in ScmContainerManager.java. But 
> with this change, the behavior now contradicts the javadoc.
> From [~nandakumar131]'s comments:
> https://issues.apache.org/jira/browse/HDDS-1263?focusedCommentId=16794865=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16794865
>  
> I agree this will be the way to fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1278) [Ozone Upgrade] Add Support for HDFS Container protocol in the Ozone upgrade planner

2019-03-13 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1278:
--

 Summary: [Ozone Upgrade] Add Support for HDFS Container protocol 
in the Ozone upgrade planner
 Key: HDDS-1278
 URL: https://issues.apache.org/jira/browse/HDDS-1278
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: upgrade
Reporter: Anu Engineer


Once the Ozone Manager DB (the namespace) and the SCM DB (the block space DB) 
are constructed, we need to communicate to the data nodes what the new "HDFS 
containers" will look like. This communication will be done by the Ozone 
planner, in conjunction with SCM and OM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1280) [Ozone Upgrade] Add support for OM and SCM to run in upgrade mode.

2019-03-13 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1280:
--

 Summary: [Ozone Upgrade] Add support for OM and SCM to run in 
upgrade mode.
 Key: HDDS-1280
 URL: https://issues.apache.org/jira/browse/HDDS-1280
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: upgrade
Reporter: Anu Engineer


Both SCM and OM need to be aware that they are running in upgrade mode, and 
disallow normal system operations. We need to define which operations can be 
performed, and by whom, while SCM and OM are in upgrade mode.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1279) [Ozone Upgrade] Add support for generating block tokens to communicate with data nodes for upgrade.

2019-03-13 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1279:
--

 Summary: [Ozone Upgrade] Add support for generating block tokens 
to communicate with data nodes for upgrade.
 Key: HDDS-1279
 URL: https://issues.apache.org/jira/browse/HDDS-1279
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: upgrade
Reporter: Anu Engineer


With security support in Ozone, the planner needs to get tokens from OM or SCM 
to be able to communicate with data nodes. While the upgrade is happening, SCM 
or OM might not be fully initialized. This JIRA tracks how we will solve the 
issue of upgrading a secure cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1277) [Ozone Upgrade] Add Support for HDFS containers in Datanode

2019-03-13 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1277:
--

 Summary: [Ozone Upgrade] Add Support for HDFS containers in 
Datanode
 Key: HDDS-1277
 URL: https://issues.apache.org/jira/browse/HDDS-1277
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Anu Engineer


Today, HDDS containers are constructed when data is written to a data node. 
But when upgrading HDFS blocks, the data is already present on the HDFS data 
nodes. This JIRA proposes that we create a new "HDFS container" type that can 
be instructed to make hard links to existing HDFS data and send container 
reports to SCM.
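
To illustrate the hard-link idea (no data copy), a minimal sketch; the paths and the container layout here are hypothetical, not the actual HDDS datanode code:
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class HdfsBlockLinker {
  public static void main(String[] args) throws IOException {
    // Hypothetical paths: an existing finalized HDFS block file and a chunk
    // path inside an "HDFS container" directory on the same volume.
    Path hdfsBlock = Paths.get("/data/hdfs/current/BP-1/finalized/blk_1073741825");
    Path containerChunk = Paths.get("/data/hdds/containers/42/chunks/blk_1073741825");

    Files.createDirectories(containerChunk.getParent());
    // Hard link: both paths now refer to the same on-disk data, so no bytes
    // are copied. Requires both paths to live on the same filesystem.
    Files.createLink(containerChunk, hdfsBlock);
  }
}
{code}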



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1276) [Ozone Upgrade] Convert BlocktoNode map to Ozone SCM DB

2019-03-13 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1276:
--

 Summary: [Ozone Upgrade] Convert BlocktoNode map to Ozone SCM DB
 Key: HDDS-1276
 URL: https://issues.apache.org/jira/browse/HDDS-1276
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Anu Engineer


HDDS is the block layer for Ozone. So in this step, we will map the HDFS 
blocks to Ozone blocks (without a data copy; please see the parent JIRA for the 
design document). The SCM container DB keeps track of the HDFS blocks in the 
cluster by making them part of SCM containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1275) [Ozone Upgrade] Create a mover for executing the move plan

2019-03-13 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1275:
--

 Summary: [Ozone Upgrade] Create a mover for executing the move plan
 Key: HDDS-1275
 URL: https://issues.apache.org/jira/browse/HDDS-1275
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Anu Engineer


Once a set of moves is planned and reviewed by the admin, he/she might decide 
to execute those moves. This JIRA will allow a move plan to be executed against 
an HDFS cluster before the Ozone upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1270) [Ozone Upgrade] Create the protobuf definitions for persisting the block to node map

2019-03-13 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1270:
--

 Summary: [Ozone Upgrade] Create the protobuf definitions for 
persisting the block to node map
 Key: HDDS-1270
 URL: https://issues.apache.org/jira/browse/HDDS-1270
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Anu Engineer


In a large cluster, there is a lot of HDFS metadata. This information needs to 
be read and processed by the Ozone Planner tool. However, unlike the Namenode, 
the Ozone Planner can use secondary storage and need not keep all this metadata 
in memory. With the help of SSD/HDD, our aim is to make sure that the Ozone 
upgrade planner can be run from a normal laptop (if needed). In other words, 
the system will compute the information about the HDFS cluster and the Ozone 
cluster, but with very small resource requirements.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1271) [Ozone Upgrade] Add support for persisting the blockMap to RocksDB

2019-03-13 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1271:
--

 Summary: [Ozone Upgrade] Add support for persisting the blockMap 
to RocksDB
 Key: HDDS-1271
 URL: https://issues.apache.org/jira/browse/HDDS-1271
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: upgrade
Reporter: Anu Engineer


HDDS-1270 explains the rationale of using RocksDB for storing and processing 
the HDFS metadata. This JIRA is for adding the RocksDB support to the Ozone 
upgrade Planner, so that HDFS metadata can be persisted and computations can 
run offline.
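
As a rough illustration of spilling the block map to RocksDB instead of keeping it in memory (the class name and key layout are hypothetical, not the actual upgrade-planner code):
{code:java}
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

/** Hypothetical key-value store for the HDFS block-to-node map. */
public final class BlockMapStore implements AutoCloseable {
  static {
    RocksDB.loadLibrary();
  }

  private final Options options;
  private final RocksDB db;

  public BlockMapStore(String dbPath) throws RocksDBException {
    options = new Options().setCreateIfMissing(true);
    db = RocksDB.open(options, dbPath);
  }

  /** Persist the (serialized) locations of one block; RocksDB keeps it on disk. */
  public void putBlock(byte[] blockId, byte[] locations) throws RocksDBException {
    db.put(blockId, locations);
  }

  /** Read the locations back while computing the upgrade plan offline. */
  public byte[] getBlock(byte[] blockId) throws RocksDBException {
    return db.get(blockId);
  }

  @Override
  public void close() {
    db.close();
    options.close();
  }
}
{code}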



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1274) [Ozone Upgrade] Create a move plan

2019-03-13 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1274:
--

 Summary: [Ozone Upgrade] Create a move plan
 Key: HDDS-1274
 URL: https://issues.apache.org/jira/browse/HDDS-1274
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Anu Engineer


The admin can choose to move 'X' amount of data. If the user chooses to move, 
say, 10 TB of data for optimal packing, this step will help the administrator 
see what those planned moves are. This is again an informational view for the 
administrator.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1273) [Ozone Upgrade] Compute data moves at various sizes

2019-03-13 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1273:
--

 Summary: [Ozone Upgrade] Compute data moves at various sizes
 Key: HDDS-1273
 URL: https://issues.apache.org/jira/browse/HDDS-1273
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: upgrade
Reporter: Anu Engineer


Even though the base case of the HDFS to Ozone upgrade supports in-place 
upgrades, in some cases it might make sense to move a tiny fraction of the data 
in the HDFS cluster before any upgrade takes place. This JIRA will provide 
information to admins about how various data moves influence the final layout 
of the Ozone cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1272) [Ozone Upgrade] Create HDFS cluster Status

2019-03-13 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1272:
--

 Summary: [Ozone Upgrade] Create HDFS cluster Status
 Key: HDDS-1272
 URL: https://issues.apache.org/jira/browse/HDDS-1272
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: upgrade
Reporter: Anu Engineer


The most important part of upgrading HDFS to Ozone is getting good visibility 
into the current cluster. We should be able to answer questions that admins 
might have; the admin might decide to upgrade to Ozone only directories which 
are old, or might decide to upgrade directories with millions and millions of 
files. The ability to ask these questions and understand the answers is vital. 
This JIRA proposes to add an HDFS metadata explorer that will allow admins to 
make informed decisions while upgrading to Ozone.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1269) [Ozone Upgrade] Add the ability to read block information from Namenode.

2019-03-13 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1269:
--

 Summary: [Ozone Upgrade] Add the ability to read block information 
from Namenode.
 Key: HDDS-1269
 URL: https://issues.apache.org/jira/browse/HDDS-1269
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: upgrade
Reporter: Anu Engineer


When upgrading an HDFS cluster, the user can choose to upgrade with zero data 
moves. However, it is also a good point to evaluate what-if conditions and make 
optimal decisions for both HDFS and Ozone. The first step in this process is to 
learn information about HDFS: the FSImage provides the namespace information, 
and this JIRA will provide the blocks and data nodes part of HDFS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1268) [Ozone Upgrade] Add ability to read the FSImage from Namenode.

2019-03-13 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1268:
--

 Summary: [Ozone Upgrade] Add ability to read the FSImage from 
Namenode.
 Key: HDDS-1268
 URL: https://issues.apache.org/jira/browse/HDDS-1268
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: upgrade
Reporter: Anu Engineer


The Ozone upgrade planner should be able to show the current state of the HDFS 
cluster, including paths, EC and TDE. This will allow the user to choose what 
parts of the cluster to upgrade to Ozone and what the new path names should be. 
In order to do this, the Ozone upgrade planner should support the ability to 
connect to the Namenode, read the FSImage and process it offline.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1267) [Ozone Upgrade] Create a tool/background service called Planner

2019-03-13 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1267:
--

 Summary: [Ozone Upgrade] Create a tool/background service called 
Planner
 Key: HDDS-1267
 URL: https://issues.apache.org/jira/browse/HDDS-1267
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: upgrade
Reporter: Anu Engineer


The first part of doing an HDFS to Ozone upgrade is understanding the current 
cluster information and the post-upgrade state. The Ozone upgrade planner will 
read the information of an HDFS cluster and allow the user to fine-tune what 
the resulting HDFS and Ozone clusters should look like. This tool is primarily 
a visualizer for HDFS and an orchestrator for the upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1266) [Ozone upgrade] Support Upgrading HDFS clusters to use Ozone

2019-03-13 Thread Anu Engineer (JIRA)
Anu Engineer created HDDS-1266:
--

 Summary: [Ozone upgrade] Support Upgrading HDFS clusters to use 
Ozone
 Key: HDDS-1266
 URL: https://issues.apache.org/jira/browse/HDDS-1266
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
Reporter: Anu Engineer
Assignee: Anu Engineer


This is the master JIRA to support upgrading existing HDFS clusters to have 
Ozone running concurrently. One of the requirements is that we support 
upgrading from HDFS to Ozone without a full data copy. This requirement is 
called "In Place upgrade"; the end result of such an upgrade would be to have 
the HDFS data appear in Ozone as if Ozone had taken a snapshot of the HDFS 
data. Once the upgrade is complete, Ozone and HDFS will act as independent 
systems. I will post a design document soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1138) OzoneManager should return the pipeline info of the allocated block along with block info

2019-03-08 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788450#comment-16788450
 ] 

Anu Engineer commented on HDDS-1138:


[~xyao] Thanks, this is amazing, thanks for taking care of this. Appreciate it.


> OzoneManager should return the pipeline info of the allocated block along 
> with block info
> -
>
> Key: HDDS-1138
> URL: https://issues.apache.org/jira/browse/HDDS-1138
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-1138.001.patch
>
>
> Currently, while a block is allocated from OM, the request is forwarded to 
> SCM. However, even though the pipeline information is present with the OM at 
> block allocation time, this information is not passed back to the client.
> This optimization will help in reducing the number of hops for the client by 
> removing 1 RPC round trip for each block allocated.
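
For illustration only, the shape of the idea is roughly the following; the class and field names are hypothetical and not the actual OM protocol types:
{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Hypothetical block-allocation response that carries the pipeline as well. */
public final class AllocatedBlockResponse {
  private final String blockId;              // the allocated block
  private final List<String> pipelineNodes;  // datanodes of the pipeline, in order

  public AllocatedBlockResponse(String blockId, List<String> pipelineNodes) {
    this.blockId = blockId;
    this.pipelineNodes =
        Collections.unmodifiableList(new ArrayList<>(pipelineNodes));
  }

  public String getBlockId() {
    return blockId;
  }

  /** With the pipeline included, the client can write without a separate SCM call. */
  public List<String> getPipelineNodes() {
    return pipelineNodes;
  }
}
{code}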



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1227) Avoid extra buffer copy during checksum computation in write Path

2019-03-08 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1227:
---
Target Version/s: 0.5.0  (was: 0.4.0)

> Avoid extra buffer copy during checksum computation in write Path
> -
>
> Key: HDDS-1227
> URL: https://issues.apache.org/jira/browse/HDDS-1227
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pushed-to-craterlake
> Fix For: 0.4.0
>
>
> The code here does a buffer copy to compute the checksum. This needs to be 
> avoided.
> {code:java}
> /**
>  * Computes checksum for give data.
>  * @param byteString input data in the form of ByteString.
>  * @return ChecksumData computed for input data.
>  */
> public ChecksumData computeChecksum(ByteString byteString)
> throws OzoneChecksumException {
>   return computeChecksum(byteString.toByteArray());
> }
> {code}
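
One possible way to avoid the copy, shown as an illustrative sketch only (using CRC32 rather than Ozone's actual Checksum implementation), is to hand the checksum a read-only ByteBuffer view of the ByteString:
{code:java}
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

import com.google.protobuf.ByteString;

public final class NoCopyChecksum {
  /** Computes a CRC32 over the ByteString without materialising a byte[] copy. */
  public static long crc32(ByteString data) {
    ByteBuffer view = data.asReadOnlyByteBuffer(); // zero-copy view of the bytes
    CRC32 crc = new CRC32();
    crc.update(view);                              // CRC32.update(ByteBuffer), Java 8+
    return crc.getValue();
  }
}
{code}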



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1227) Avoid extra buffer copy during checksum computation in write Path

2019-03-08 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1227:
---
Fix Version/s: (was: 0.4.0)

> Avoid extra buffer copy during checksum computation in write Path
> -
>
> Key: HDDS-1227
> URL: https://issues.apache.org/jira/browse/HDDS-1227
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pushed-to-craterlake
>
> The code here does a buffer copy to compute the checksum. This needs to be 
> avoided.
> {code:java}
> /**
>  * Computes checksum for give data.
>  * @param byteString input data in the form of ByteString.
>  * @return ChecksumData computed for input data.
>  */
> public ChecksumData computeChecksum(ByteString byteString)
> throws OzoneChecksumException {
>   return computeChecksum(byteString.toByteArray());
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1227) Avoid extra buffer copy during checksum computation in write Path

2019-03-08 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1227:
---
Labels: pushed-to-craterlake  (was: )

> Avoid extra buffer copy during checksum computation in write Path
> -
>
> Key: HDDS-1227
> URL: https://issues.apache.org/jira/browse/HDDS-1227
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pushed-to-craterlake
> Fix For: 0.4.0
>
>
> The code here does a buffer copy to compute the checksum. This needs to be 
> avoided.
> {code:java}
> /**
>  * Computes checksum for give data.
>  * @param byteString input data in the form of ByteString.
>  * @return ChecksumData computed for input data.
>  */
> public ChecksumData computeChecksum(ByteString byteString)
> throws OzoneChecksumException {
>   return computeChecksum(byteString.toByteArray());
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1138) OzoneManager should return the pipeline info of the allocated block along with block info

2019-03-08 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16788352#comment-16788352
 ] 

Anu Engineer commented on HDDS-1138:


[~xyao] Can you please see if this works for the security issue?

> OzoneManager should return the pipeline info of the allocated block along 
> with block info
> -
>
> Key: HDDS-1138
> URL: https://issues.apache.org/jira/browse/HDDS-1138
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Manager
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
> Attachments: HDDS-1138.001.patch
>
>
> Currently, while a block is allocated from OM, the request is forwarded to 
> SCM. However, even though the pipeline information is present with the OM at 
> block allocation time, this information is not passed back to the client.
> This optimization will help in reducing the number of hops for the client by 
> removing 1 RPC round trip for each block allocated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1131) destroy pipeline failed with PipelineNotFoundException

2019-03-08 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1131:
---
Labels: pushed-to-craterlake test-badlands  (was: test-badlands)

> destroy pipeline failed with PipelineNotFoundException
> --
>
> Key: HDDS-1131
> URL: https://issues.apache.org/jira/browse/HDDS-1131
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nilotpal Nandi
>Assignee: Nanda kumar
>Priority: Major
>  Labels: pushed-to-craterlake, test-badlands
>
> steps taken :
> 
>  # created 12 datanodes cluster and running workload on all the nodes
> exceptions seen in scm log
> 
> {noformat}
> 2019-02-18 07:17:51,112 INFO 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineUtils: destroying 
> pipeline:PipelineID=01d3ef2a-912c-4fc0-80b6-012343d76adb with 
> group-012343D76ADB:[a40a7b01-a30b-469c-b373-9fcb20a126ed:172.27.54.212:9858, 
> 8c77b16b-8054-49e3-b669-1ff759cfd271:172.27.23.196:9858, 
> 943007c8-4fdd-4926-89e2-2c8c52c05073:172.27.76.72:9858]
> 2019-02-18 07:17:51,112 INFO 
> org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler: Close 
> container Event triggered for container : #40
> 2019-02-18 07:17:51,113 INFO 
> org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler: Close 
> container Event triggered for container : #41
> 2019-02-18 07:17:51,114 INFO 
> org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler: Close 
> container Event triggered for container : #42
> 2019-02-18 07:22:51,127 WARN 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineUtils: Pipeline destroy 
> failed for pipeline=PipelineID=01d3ef2a-912c-4fc0-80b6-012343d76adb 
> dn=a40a7b01-a30b-469c-b373-9fcb20a126ed{ip: 172.27.54.212, host: 
> ctr-e139-1542663976389-62237-01-07.hwx.site}
> 2019-02-18 07:22:51,139 WARN 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineUtils: Pipeline destroy 
> failed for pipeline=PipelineID=01d3ef2a-912c-4fc0-80b6-012343d76adb 
> dn=8c77b16b-8054-49e3-b669-1ff759cfd271{ip: 172.27.23.196, host: 
> ctr-e139-1542663976389-62237-01-15.hwx.site}
> 2019-02-18 07:22:51,149 WARN 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineUtils: Pipeline destroy 
> failed for pipeline=PipelineID=01d3ef2a-912c-4fc0-80b6-012343d76adb 
> dn=943007c8-4fdd-4926-89e2-2c8c52c05073{ip: 172.27.76.72, host: 
> ctr-e139-1542663976389-62237-01-06.hwx.site}
> 2019-02-18 07:22:51,150 ERROR 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineUtils: Destroy pipeline 
> failed for pipeline:PipelineID=01d3ef2a-912c-4fc0-80b6-012343d76adb with 
> group-012343D76ADB:[a40a7b01-a30b-469c-b373-9fcb20a126ed:172.27.54.212:9858, 
> 8c77b16b-8054-49e3-b669-1ff759cfd271:172.27.23.196:9858, 
> 943007c8-4fdd-4926-89e2-2c8c52c05073:172.27.76.72:9858]
> org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException: 
> PipelineID=01d3ef2a-912c-4fc0-80b6-012343d76adb not found
>  at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.getPipeline(PipelineStateMap.java:112)
>  at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.removePipeline(PipelineStateMap.java:247)
>  at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateManager.removePipeline(PipelineStateManager.java:90)
>  at 
> org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.removePipeline(SCMPipelineManager.java:261)
>  at 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineUtils.destroyPipeline(RatisPipelineUtils.java:103)
>  at 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineUtils.lambda$finalizeAndDestroyPipeline$1(RatisPipelineUtils.java:133)
>  at 
> org.apache.ratis.util.TimeoutScheduler.lambda$onTimeout$0(TimeoutScheduler.java:85)
>  at 
> org.apache.ratis.util.TimeoutScheduler.lambda$onTimeout$1(TimeoutScheduler.java:104)
>  at org.apache.ratis.util.LogUtils.runAndLog(LogUtils.java:50)
>  at org.apache.ratis.util.LogUtils$1.run(LogUtils.java:91)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, 

[jira] [Updated] (HDDS-1131) destroy pipeline failed with PipelineNotFoundException

2019-03-08 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1131:
---
Target Version/s: 0.5.0  (was: 0.4.0)

> destroy pipeline failed with PipelineNotFoundException
> --
>
> Key: HDDS-1131
> URL: https://issues.apache.org/jira/browse/HDDS-1131
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Reporter: Nilotpal Nandi
>Assignee: Nanda kumar
>Priority: Major
>  Labels: test-badlands
>
> steps taken :
> 
>  # created 12 datanodes cluster and running workload on all the nodes
> exceptions seen in scm log
> 
> {noformat}
> 2019-02-18 07:17:51,112 INFO 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineUtils: destroying 
> pipeline:PipelineID=01d3ef2a-912c-4fc0-80b6-012343d76adb with 
> group-012343D76ADB:[a40a7b01-a30b-469c-b373-9fcb20a126ed:172.27.54.212:9858, 
> 8c77b16b-8054-49e3-b669-1ff759cfd271:172.27.23.196:9858, 
> 943007c8-4fdd-4926-89e2-2c8c52c05073:172.27.76.72:9858]
> 2019-02-18 07:17:51,112 INFO 
> org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler: Close 
> container Event triggered for container : #40
> 2019-02-18 07:17:51,113 INFO 
> org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler: Close 
> container Event triggered for container : #41
> 2019-02-18 07:17:51,114 INFO 
> org.apache.hadoop.hdds.scm.container.CloseContainerEventHandler: Close 
> container Event triggered for container : #42
> 2019-02-18 07:22:51,127 WARN 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineUtils: Pipeline destroy 
> failed for pipeline=PipelineID=01d3ef2a-912c-4fc0-80b6-012343d76adb 
> dn=a40a7b01-a30b-469c-b373-9fcb20a126ed{ip: 172.27.54.212, host: 
> ctr-e139-1542663976389-62237-01-07.hwx.site}
> 2019-02-18 07:22:51,139 WARN 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineUtils: Pipeline destroy 
> failed for pipeline=PipelineID=01d3ef2a-912c-4fc0-80b6-012343d76adb 
> dn=8c77b16b-8054-49e3-b669-1ff759cfd271{ip: 172.27.23.196, host: 
> ctr-e139-1542663976389-62237-01-15.hwx.site}
> 2019-02-18 07:22:51,149 WARN 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineUtils: Pipeline destroy 
> failed for pipeline=PipelineID=01d3ef2a-912c-4fc0-80b6-012343d76adb 
> dn=943007c8-4fdd-4926-89e2-2c8c52c05073{ip: 172.27.76.72, host: 
> ctr-e139-1542663976389-62237-01-06.hwx.site}
> 2019-02-18 07:22:51,150 ERROR 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineUtils: Destroy pipeline 
> failed for pipeline:PipelineID=01d3ef2a-912c-4fc0-80b6-012343d76adb with 
> group-012343D76ADB:[a40a7b01-a30b-469c-b373-9fcb20a126ed:172.27.54.212:9858, 
> 8c77b16b-8054-49e3-b669-1ff759cfd271:172.27.23.196:9858, 
> 943007c8-4fdd-4926-89e2-2c8c52c05073:172.27.76.72:9858]
> org.apache.hadoop.hdds.scm.pipeline.PipelineNotFoundException: 
> PipelineID=01d3ef2a-912c-4fc0-80b6-012343d76adb not found
>  at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.getPipeline(PipelineStateMap.java:112)
>  at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.removePipeline(PipelineStateMap.java:247)
>  at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateManager.removePipeline(PipelineStateManager.java:90)
>  at 
> org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.removePipeline(SCMPipelineManager.java:261)
>  at 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineUtils.destroyPipeline(RatisPipelineUtils.java:103)
>  at 
> org.apache.hadoop.hdds.scm.pipeline.RatisPipelineUtils.lambda$finalizeAndDestroyPipeline$1(RatisPipelineUtils.java:133)
>  at 
> org.apache.ratis.util.TimeoutScheduler.lambda$onTimeout$0(TimeoutScheduler.java:85)
>  at 
> org.apache.ratis.util.TimeoutScheduler.lambda$onTimeout$1(TimeoutScheduler.java:104)
>  at org.apache.ratis.util.LogUtils.runAndLog(LogUtils.java:50)
>  at org.apache.ratis.util.LogUtils$1.run(LogUtils.java:91)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1198) Rename chill mode to safe mode

2019-03-06 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786149#comment-16786149
 ] 

Anu Engineer commented on HDDS-1198:


Please go ahead. 

> Rename chill mode to safe mode
> --
>
> Key: HDDS-1198
> URL: https://issues.apache.org/jira/browse/HDDS-1198
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: newbie
>
> Let's go back to calling it safe mode. HDFS admins already understand what it 
> means.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1226) ozone-filesystem jar missing in hadoop classpath

2019-03-05 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784999#comment-16784999
 ] 

Anu Engineer commented on HDDS-1226:


[~elek]

> ozone-filesystem jar missing in hadoop classpath
> 
>
> Key: HDDS-1226
> URL: https://issues.apache.org/jira/browse/HDDS-1226
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem, Ozone Manager
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> hadoop-ozone-filesystem-lib-*.jar is missing in hadoop classpath.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1171) Add benchmark for OM and OM client in Genesis

2019-03-05 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1171:
---
   Resolution: Fixed
Fix Version/s: 0.5.0
   0.4.0
   Status: Resolved  (was: Patch Available)

[~ljain] Thanks for the contribution. While committing I have cleaned up the 
following CheckStyle issues.
{noformat}
./hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkOzoneManager.java:180:
state.om.allocateBlock(omKeyArgs, openKeySession.getId(), new 
ExcludeList());: Line is longer than 80 characters (found 81). [LineLength]
./hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java:27:import
 org.apache.hadoop.hdds.protocol.DatanodeDetails;:8: Unused import - 
org.apache.hadoop.hdds.protocol.DatanodeDetails. [UnusedImports]
./hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java:32:import
 org.apache.hadoop.hdds.scm.pipeline.PipelineID;:8: Unused import - 
org.apache.hadoop.hdds.scm.pipeline.PipelineID. [UnusedImports]
./hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java:35:import
 org.apache.hadoop.hdds.scm.server.SCMStorageConfig;:8: Unused import - 
org.apache.hadoop.hdds.scm.server.SCMStorageConfig. [UnusedImports]
./hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java:37:import
 org.apache.hadoop.hdds.server.ServerUtils;:8: Unused import - 
org.apache.hadoop.hdds.server.ServerUtils. [UnusedImports]
./hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java:38:import
 org.apache.hadoop.ozone.OzoneConsts;:8: Unused import - 
org.apache.hadoop.ozone.OzoneConsts. [UnusedImports]
./hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java:39:import
 org.apache.hadoop.ozone.common.Storage;:8: Unused import - 
org.apache.hadoop.ozone.common.Storage. [UnusedImports]
./hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java:41:import
 org.apache.hadoop.utils.MetadataStore;:8: Unused import - 
org.apache.hadoop.utils.MetadataStore. [UnusedImports]
./hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java:42:import
 org.apache.hadoop.utils.MetadataStoreBuilder;:8: Unused import - 
org.apache.hadoop.utils.MetadataStoreBuilder. [UnusedImports]
./hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java:48:import
 java.util.UUID;:8: Unused import - java.util.UUID. [UnusedImports]
./hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java:49:import
 java.util.List;:8: Unused import - java.util.List. [UnusedImports]
./hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java:50:import
 java.util.ArrayList;:8: Unused import - java.util.ArrayList. [UnusedImports]
./hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java:53:import
 static 
org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_DB_CACHE_SIZE_DEFAULT;:15: 
Unused import - 
org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_DB_CACHE_SIZE_DEFAULT. 
[UnusedImports]
./hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java:54:import
 static 
org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_DB_CACHE_SIZE_MB;:15: Unused 
import - org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_DB_CACHE_SIZE_MB. 
[UnusedImports]
./hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/genesis/BenchMarkSCM.java:57:import
 static org.apache.hadoop.ozone.OzoneConsts.SCM_PIPELINE_DB;:15: Unused import 
- org.apache.hadoop.ozone.OzoneConsts.SCM_PIPELINE_DB. [UnusedImports]
{noformat}

I have committed this patch to the trunk and ozone-0.4 branches. 

> Add benchmark for OM and OM client in Genesis
> -
>
> Key: HDDS-1171
> URL: https://issues.apache.org/jira/browse/HDDS-1171
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 0.4.0, 0.5.0
>
> Attachments: HDDS-1171.001.patch, HDDS-1171.002.patch, 
> HDDS-1171.003.patch
>
>
> This Jira aims to add benchmark for OM and OM client in Genesis.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1136) Add metric counters to capture the RocksDB checkpointing statistics.

2019-03-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1136:
---
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

Thanks for your contribution.  I have committed this to the trunk branch.

> Add metric counters to capture the RocksDB checkpointing statistics.
> 
>
> Key: HDDS-1136
> URL: https://issues.apache.org/jira/browse/HDDS-1136
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: HDDS-1136-000.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> As per the discussion with [~anu] on HDDS-1085, this JIRA tracks the effort 
> to add metric counters to capture RocksDB checkpointing performance. 
> From [~anu]'s comments, it might be interesting to have 3 counters – or a map 
> of counters.
> * How much time are we taking for each CheckPoint
> * How much time are we taking for each Tar operation – along with sizes
> * How much time are we taking for the transfer.
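
A minimal illustrative sketch of the three measurements listed above, using plain counters (the real Recon code may well use the Hadoop metrics2 framework instead):
{code:java}
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

/** Hypothetical holder for RocksDB checkpointing statistics. */
public final class CheckpointMetrics {
  private final AtomicLong checkpointCount = new AtomicLong();
  private final LongAdder checkpointTimeMs = new LongAdder();
  private final LongAdder tarTimeMs = new LongAdder();
  private final LongAdder tarSizeBytes = new LongAdder();
  private final LongAdder transferTimeMs = new LongAdder();

  public void recordCheckpoint(long millis) {
    checkpointCount.incrementAndGet();
    checkpointTimeMs.add(millis);
  }

  public void recordTar(long millis, long sizeBytes) {
    tarTimeMs.add(millis);
    tarSizeBytes.add(sizeBytes);
  }

  public void recordTransfer(long millis) {
    transferTimeMs.add(millis);
  }

  /** Average checkpoint time, useful for spotting slow checkpoints over time. */
  public long averageCheckpointMs() {
    long n = checkpointCount.get();
    return n == 0 ? 0 : checkpointTimeMs.sum() / n;
  }
}
{code}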



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1136) Add metric counters to capture the RocksDB checkpointing statistics.

2019-03-04 Thread Anu Engineer (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HDDS-1136:
---
Target Version/s: 0.5.0
 Component/s: Ozone Recon

> Add metric counters to capture the RocksDB checkpointing statistics.
> 
>
> Key: HDDS-1136
> URL: https://issues.apache.org/jira/browse/HDDS-1136
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1136-000.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> As per the discussion with [~anu] on HDDS-1085, this JIRA tracks the effort 
> to add metric counters to capture RocksDB checkpointing performance. 
> From [~anu]'s comments, it might be interesting to have 3 counters – or a map 
> of counters.
> * How much time are we taking for each CheckPoint
> * How much time are we taking for each Tar operation – along with sizes
> * How much time are we taking for the transfer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1214) Enable tracing for the datanode read/write path

2019-03-04 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783719#comment-16783719
 ] 

Anu Engineer commented on HDDS-1214:


There are a bunch of CheckStyle and Findbugs issues. Thanks for taking a look 
at them.

> Enable tracing for the datanode read/write path
> ---
>
> Key: HDDS-1214
> URL: https://issues.apache.org/jira/browse/HDDS-1214
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> HDDS-1150 introduced distributed tracing for Ozone components, but we have no 
> trace context propagation between the clients and Ozone Datanodes.
> As we use Grpc and Ratis on this RPC path, full tracing could be quite 
> complex: we would need to propagate the trace id in Ratis and include it in 
> all the log entries.
> I propose a simplified solution here: trace only the StateMachine operations.
> As Ratis is a library, we provide the implementation of the appropriate Raft 
> elements, especially the StateMachine and the raft messages. We can add the 
> tracing information to the raft messages (in fact, we already have this 
> field) and restore the tracing context during the StateMachine operations.
> This approach is very simple (only a few lines of code) and can show the 
> time of the real write/read operations, but it cannot show the internals of 
> the Ratis operations.
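
As a rough sketch of the propagation idea (assuming the io.opentracing 0.33 API; Ozone's actual TracingUtil and the Ratis message field may differ):
{code:java}
import java.util.HashMap;
import java.util.Map;

import io.opentracing.Span;
import io.opentracing.SpanContext;
import io.opentracing.Tracer;
import io.opentracing.propagation.Format;
import io.opentracing.propagation.TextMapAdapter;
import io.opentracing.util.GlobalTracer;

public final class TraceContextPropagation {

  /** Client side: serialise the active trace context so it can ride inside the raft message. */
  public static Map<String, String> captureContext() {
    Tracer tracer = GlobalTracer.get();
    Map<String, String> carrier = new HashMap<>();
    Span active = tracer.activeSpan();
    if (active != null) {
      tracer.inject(active.context(), Format.Builtin.TEXT_MAP, new TextMapAdapter(carrier));
    }
    return carrier;
  }

  /** StateMachine side: restore the context and wrap the operation in a child span. */
  public static Span startChildSpan(String operation, Map<String, String> carrier) {
    Tracer tracer = GlobalTracer.get();
    SpanContext parent = tracer.extract(Format.Builtin.TEXT_MAP, new TextMapAdapter(carrier));
    return tracer.buildSpan(operation).asChildOf(parent).start();
  }
}
{code}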



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1171) Add benchmark for OM and OM client in Genesis

2019-03-04 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16783680#comment-16783680
 ] 

Anu Engineer commented on HDDS-1171:


Sorry, we might need a rebase to commit this. Could you please rebase this 
patch? Thanks in advance.

> Add benchmark for OM and OM client in Genesis
> -
>
> Key: HDDS-1171
> URL: https://issues.apache.org/jira/browse/HDDS-1171
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: HDDS-1171.001.patch, HDDS-1171.002.patch
>
>
> This Jira aims to add benchmark for OM and OM client in Genesis.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


