[jira] [Created] (HDDS-4398) Ozone TLP - move site to separated repository

2020-10-28 Thread Marton Elek (Jira)
Marton Elek created HDDS-4398:
-

 Summary: Ozone TLP - move site to separated repository
 Key: HDDS-4398
 URL: https://issues.apache.org/jira/browse/HDDS-4398
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Marton Elek
Assignee: Marton Elek


Ozone has been approved to become a separate Apache top-level project, which 
requires a separate repository for ozone.apache.org.

Initially the Ozone site was developed in 
https://github.com/apache/ozone-site, but after a while it was moved to 
https://github.com/apache/hadoop-site/tree/asf-site/ozone.

Under this jira:

1. the original repository will be cleaned up
2. the latest ozone site will be moved back from the hadoop-site repository to 
the separate repository
3. hadoop-site can be updated to link to the new site (but keeping the old 
content so that all the old links keep working)
4. .asf.yaml will be added

As this requires multiple technical commits but doesn't introduce new changes, 
PRs may not be created for each commit.
 





--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4312) findbugs check succeeds despite compile error

2020-10-09 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4312:
--
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> findbugs check succeeds despite compile error
> -
>
> Key: HDDS-4312
> URL: https://issues.apache.org/jira/browse/HDDS-4312
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.1.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> The findbugs check has been silently failing but reporting success for some 
> time now.  The problem is that {{findbugs.sh}} determines its exit code based 
> on the number of findbugs failures.  If the {{compile}} step fails, the exit 
> code is 0, i.e. success.
> {code:title=https://github.com/apache/hadoop-ozone/runs/1210535433#step:3:866}
> 2020-10-02T18:37:57.0699502Z [ERROR] Failed to execute goal on project 
> hadoop-hdds-client: Could not resolve dependencies for project 
> org.apache.hadoop:hadoop-hdds-client:jar:1.1.0-SNAPSHOT: Could not find 
> artifact org.apache.hadoop:hadoop-hdds-common:jar:tests:1.1.0-SNAPSHOT in 
> apache.snapshots.https 
> (https://repository.apache.org/content/repositories/snapshots) -> [Help 1]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4311) Type-safe config design doc points to OM HA

2020-10-09 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4311:
--
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Type-safe config design doc points to OM HA
> ---
>
> Key: HDDS-4311
> URL: https://issues.apache.org/jira/browse/HDDS-4311
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
> Fix For: 1.1.0
>
>
> The abstract and links for 
> http://hadoop.apache.org/ozone/docs/1.0.0/design/typesafeconfig.html are 
> wrong; they reference the OM HA design doc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-3814) Drop a column family through debug ldb tool

2020-10-09 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-3814.
---
Fix Version/s: 1.1.0
   Resolution: Fixed

> Drop a column family through debug ldb tool
> ---
>
> Key: HDDS-3814
> URL: https://issues.apache.org/jira/browse/HDDS-3814
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Affects Versions: 1.1.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4209) S3A Filesystem does not work with Ozone S3 in file system compat mode

2020-10-09 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17210831#comment-17210831
 ] 

Marton Elek commented on HDDS-4209:
---

If not, it might be better to add this information to the documentation.

There is an open pull request for this jira. Shall we close it?

> S3A Filesystem does not work with Ozone S3 in file system compat mode
> -
>
> Key: HDDS-4209
> URL: https://issues.apache.org/jira/browse/HDDS-4209
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: OzoneS3, S3A, pull-request-available
>
> When *ozone.om.enable.filesystem.paths* is enabled
>  
> hdfs dfs -mkdir -p s3a://b12345/d11/d12 -> Success
> hdfs dfs -put /tmp/file1 s3a://b12345/d11/d12/file1 -> fails with the error below
>  
> {code:java}
> 2020-09-04 03:53:51,377 ERROR 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest: Key creation 
> failed. Volume:s3v, Bucket:b1234, Keyd11/d12/file1._COPYING_. Exception:{}
> NOT_A_FILE org.apache.hadoop.ozone.om.exceptions.OMException: Can not create 
> file: cp/k1._COPYING_ as there is already file in the given path
>  at 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.validateAndUpdateCache(OMKeyCreateRequest.java:256)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:428)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$applyTransaction$1(OzoneManagerStateMachine.java:246)
>  at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}
> *Reason for this*
>  The S3A filesystem creates an empty file when it creates a directory.
> *Now entries in Ozone KeyTable after create directory*
>  d11/
>  d11/d12
> Because of this, OMFileRequest.VerifyInFilesPath fails with 
> FILE_EXISTS_IN_GIVEN_PATH, because d11/d12 is considered a file, not a 
> directory. (In Ozone, directories currently end with a trailing "/".)
> So, when d11/d12/file is created, we check that the parent exists; d11/d12 is 
> considered a file, and the request fails with NOT_A_FILE.
> When the setting is disabled, it works fine: during key creation we do not 
> check any filesystem semantics and do not create intermediate directories.
> {code:java}
> [root@bvoz-1 ~]# hdfs dfs -mkdir -p s3a://b12345/d11/d12
> [root@bvoz-1 ~]# hdfs dfs -put /etc/hadoop/conf/ozone-site.xml 
> s3a://b12345/d11/d12/k1
> [root@bvoz-1 ~]# hdfs dfs -ls s3a://b12345/d11/d12
> Found 1 items
> -rw-rw-rw-   1 systest systest   2373 2020-09-04 04:45 
> s3a://b12345/d11/d12/k1
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4300) Remove no longer needed class DatanodeAdminNodeDetails

2020-10-05 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4300.
---
Fix Version/s: 1.1.0
   Resolution: Fixed

> Remove no longer needed class DatanodeAdminNodeDetails
> --
>
> Key: HDDS-4300
> URL: https://issues.apache.org/jira/browse/HDDS-4300
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 1.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> DatanodeAdminNodeDetails was added earlier in the decommission branch to 
> track metrics, the decommission state, and the maintenance end time. 
> After enhancing NodeStatus to hold the maintenance expiry time, this class is 
> no longer needed, and it duplicates information which is stored in other 
> existing places.
> This change removes it; metrics etc. can be added later in a different 
> way.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4300) Remove no longer needed class DatanodeAdminNodeDetails

2020-10-05 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17208087#comment-17208087
 ] 

Marton Elek commented on HDDS-4300:
---

Nice jira number ;-)

> Remove no longer needed class DatanodeAdminNodeDetails
> --
>
> Key: HDDS-4300
> URL: https://issues.apache.org/jira/browse/HDDS-4300
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 1.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>
> DatanodeAdminNodeDetails was added earlier in the decommission branch to 
> track metrics, the decommission state, and the maintenance end time. 
> After enhancing NodeStatus to hold the maintenance expiry time, this class is 
> no longer needed, and it duplicates information which is stored in other 
> existing places.
> This change removes it; metrics etc. can be added later in a different 
> way.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4156) add hierarchical layout to Chinese doc

2020-10-05 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4156.
---
Fix Version/s: 1.1.0
   Resolution: Fixed

> add hierarchical layout to Chinese doc
> --
>
> Key: HDDS-4156
> URL: https://issues.apache.org/jira/browse/HDDS-4156
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Xiang Zhang
>Assignee: Zheng Huang-Mu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> The English doc was updated significantly in 
> https://issues.apache.org/jira/browse/HDDS-4042, and its flat structure 
> became more hierarchical. To keep consistency, we need to update the 
> Chinese doc too.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4242) Copy PrefixInfo proto to new project hadoop-ozone/interface-storage

2020-10-05 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4242.
---
Fix Version/s: 1.1.0
   Resolution: Fixed

> Copy PrefixInfo proto to new project hadoop-ozone/interface-storage
> ---
>
> Key: HDDS-4242
> URL: https://issues.apache.org/jira/browse/HDDS-4242
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Rui Wang
>Assignee: Rui Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4264) Uniform naming conventions of Ozone Shell Options.

2020-10-05 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4264.
---
Target Version/s: 1.1.0
  Resolution: Fixed

> Uniform naming conventions of Ozone Shell Options.
> --
>
> Key: HDDS-4264
> URL: https://issues.apache.org/jira/browse/HDDS-4264
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: mingchao zhao
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
> Attachments: image-2020-09-22-14-51-18-968.png
>
>
> Among the current Ozone shell commands, some options use camelCase names and 
> some use '-' separated names. We need to unify the naming conventions.
> See the usage [documentation of 
> Picocli|https://picocli.info/#command-methods], which uses '-' separated 
> names. So I'm going to unify the naming conventions here.
>  !image-2020-09-22-14-51-18-968.png! 
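For illustration, a small sketch of the '-' separated style with Picocli 
(the option and field names here are made up, not actual Ozone shell options):

{code:java}
import picocli.CommandLine.Option;

public class ListOptions {
  // '-' separated name, matching the Picocli documentation style
  @Option(names = "--start-item", description = "continue listing from this item")
  private String startItem;

  // the camelCase alternative (e.g. --startItem) is what we want to avoid
}
{code}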



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4271) Avoid logging chunk content in Ozone Insight

2020-10-05 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4271.
---
Fix Version/s: 1.1.0
   Resolution: Fixed

> Avoid logging chunk content in Ozone Insight
> 
>
> Key: HDDS-4271
> URL: https://issues.apache.org/jira/browse/HDDS-4271
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> HDDS-2660 added an insight point for the datanode dispatcher.  At trace level 
> it logs all chunk content, which can be huge and contain control characters, 
> so I think we should avoid it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4097) S3/Ozone Filesystem inter-op

2020-10-05 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4097:
--
Attachment: Ozone filesystem path enabled v3.xlsx

> S3/Ozone Filesystem inter-op
> 
>
> Key: HDDS-4097
> URL: https://issues.apache.org/jira/browse/HDDS-4097
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Attachments: Ozone FileSystem Paths Enabled.docx, Ozone filesystem 
> path enabled v3.xlsx, Ozone filesystem path enabled.xlsx
>
>
> This Jira is to implement the changes required to use Ozone buckets when data 
> is ingested via S3 and the bucket/volume is used via OzoneFileSystem. The 
> initial implementation was done as part of HDDS-3955. A few APIs missed the 
> changes during the implementation of HDDS-3955. The attached design document 
> discusses each API and what changes are required.
> The Excel sheet has information about each API: from which interfaces the OM 
> API is used, and what changes are required for the API to support 
> interoperability.
> Note: The proposal for delete/rename is still under discussion, not yet 
> finalized. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4299) Display Ratis version with ozone version

2020-10-01 Thread Marton Elek (Jira)
Marton Elek created HDDS-4299:
-

 Summary: Display Ratis version with ozone version
 Key: HDDS-4299
 URL: https://issues.apache.org/jira/browse/HDDS-4299
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek


During development Ozone uses snapshot releases of Ratis. It can be 
useful to print out the exact version of the Ratis build in use as part of the 
output of "ozone version".

Ratis versions are part of the jar files since RATIS-1050.

It can make testing easier, as it's easier to check which Ratis version is 
used.
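For illustration, a minimal sketch of reading a version from the jar manifest 
(one possible way, not the actual Ozone implementation; the exact lookup may 
differ):

{code:java}
public class RatisVersionPrinter {
  public static void main(String[] args) {
    // Ratis jars carry version info since RATIS-1050, though the exact
    // mechanism may differ; this may print "null" if the jar manifest
    // lacks an Implementation-Version entry.
    Package ratisPackage = org.apache.ratis.server.RaftServer.class.getPackage();
    System.out.println("Ratis " + ratisPackage.getImplementationVersion());
  }
}
{code}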



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4298) Use an interface in Ozone client instead of XceiverClientManager

2020-09-30 Thread Marton Elek (Jira)
Marton Elek created HDDS-4298:
-

 Summary: Use an interface in Ozone client instead of 
XceiverClientManager
 Key: HDDS-4298
 URL: https://issues.apache.org/jira/browse/HDDS-4298
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek


XceiverClientManager is used everywhere in the Ozone client (Key/Block 
Input/OutputStream) to get a client when required.

To make it easier to create genesis/real unit tests, it would be better to use 
a generic interface instead of XceiverClientManager, which would make it easy 
to replace the manager with a mock implementation.
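For illustration, a minimal sketch of such an interface (the name and method 
set are assumptions, not the final API):

{code:java}
import java.io.IOException;
import org.apache.hadoop.hdds.scm.XceiverClientSpi;
import org.apache.hadoop.hdds.scm.pipeline.Pipeline;

// Key/Block Input/OutputStream would depend on this narrow interface, so
// unit tests can inject a mock instead of the heavyweight XceiverClientManager.
public interface XceiverClientFactory {
  XceiverClientSpi acquireClient(Pipeline pipeline) throws IOException;
  void releaseClient(XceiverClientSpi client, boolean invalidate);
}
{code}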



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4290) Enable insight point for SCM heartbeat protocol

2020-09-29 Thread Marton Elek (Jira)
Marton Elek created HDDS-4290:
-

 Summary: Enable insight point for SCM heartbeat protocol
 Key: HDDS-4290
 URL: https://issues.apache.org/jira/browse/HDDS-4290
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Marton Elek
Assignee: Marton Elek


The registration of the already implemented insight point seems to be missing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4288) the icon of hadoop-ozone is bigger than ever

2020-09-29 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17203905#comment-17203905
 ] 

Marton Elek commented on HDDS-4288:
---

Thanks for reporting this issue. I think it's not related to HDDS-4166, but 
rather to the Jenkins migration of Apache INFRA. The new Jenkins adds more 
secure HTTP headers:

{code}
< Content-Security-Policy: sandbox; default-src 'none'; img-src 'self'; 
style-src 'self';
< X-WebKit-CSP: sandbox; default-src 'none'; img-src 'self'; style-src 'self';
{code}

IMHO the inline styles which are used in the current code are blocked:

{code}

{code}

While it's not a production issue, we can move the custom styles to the CSS to 
make the page compatible with the new Jenkins.

> the icon of hadoop-ozone is bigger than ever
> 
>
> Key: HDDS-4288
> URL: https://issues.apache.org/jira/browse/HDDS-4288
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.0
> Environment: web : chrome /firefox /safari
>Reporter: Shiyou xin
>Assignee: Marton Elek
>Priority: Trivial
> Attachments: 1751601366944_.pic.jpg
>
>
> It could be a by-product of the introduction of the issue: 
> https://issues.apache.org/jira/browse/HDDS-4166



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-4288) the icon of hadoop-ozone is bigger than ever

2020-09-29 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek reassigned HDDS-4288:
-

Assignee: Marton Elek

> the icon of hadoop-ozone is bigger than ever
> 
>
> Key: HDDS-4288
> URL: https://issues.apache.org/jira/browse/HDDS-4288
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.0
> Environment: web : chrome /firefox /safari
>Reporter: Shiyou xin
>Assignee: Marton Elek
>Priority: Trivial
> Attachments: 1751601366944_.pic.jpg
>
>
> It could be a by-product of the introduction of the issue: 
> https://issues.apache.org/jira/browse/HDDS-4166



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4215) update freon doc.

2020-09-29 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4215.
---
Target Version/s: 1.1.0
  Resolution: Fixed

> update freon doc.
> -
>
> Key: HDDS-4215
> URL: https://issues.apache.org/jira/browse/HDDS-4215
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: mingchao zhao
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
>
> At present the link to the Freon introduction document points to 0.4.0; now 
> that 1.0 has been released, the URL needs to be updated to 1.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4289) Throw exception from hadoop2 filesystem jar in HA environment

2020-09-29 Thread Marton Elek (Jira)
Marton Elek created HDDS-4289:
-

 Summary: Throw exception from hadoop2 filesystem jar in HA 
environment
 Key: HDDS-4289
 URL: https://issues.apache.org/jira/browse/HDDS-4289
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: OM HA
Reporter: Marton Elek


Thanks to Tamas Pleszkan for reporting this problem.

ozone-filesystem-hadoop2 doesn't support OM HA (today), as the 
Hadoop3OmTransport it uses relies on FailoverProxyProvider, which is not 
available in hadoop2.

Long-term we need a custom failover mechanism, but this jira suggests improving 
the error handling: `Hadoop27OmTransportFactory` should throw an 
exception if HA is used.
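For illustration, a minimal sketch of the fast-fail check (the helper and the 
config-key handling are assumptions, not the actual Ozone code):

{code:java}
import java.util.Map;

public class Hadoop27HaCheck {

  // Hypothetical helper: an OM service id that has node definitions
  // (ozone.om.nodes.<serviceId>) indicates an HA setup.
  static boolean isHaServiceId(Map<String, String> conf, String omServiceId) {
    return conf.keySet().stream()
        .anyMatch(key -> key.startsWith("ozone.om.nodes." + omServiceId));
  }

  static void checkHaSupported(Map<String, String> conf, String omServiceId) {
    if (isHaServiceId(conf, omServiceId)) {
      throw new UnsupportedOperationException(
          "OM HA is not supported by the hadoop2 filesystem jar, "
              + "please use the hadoop3 filesystem jar instead");
    }
  }
}
{code}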

Used command:

{code}
spark-submit --master yarn --deploy-mode client --executor-memory 1g --conf 
"spark.yarn.access.hadoopFileSystems=o3fs://bucket.hdfs.ozone1/" --jars 
"/opt/cloudera/parcels/CDH-7.1.3-1.cdh7.1.3.p0.4992530/jars/hadoop-ozone-filesystem-hadoop2-0.5.0.7.1.3.0-100.jar"
 SparkWordCount.py o3fs://bucket.hdfs.ozone1/words 2
{code}

Current exception:

{code}
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
 OM:om2 is not the leader. Suggested leader is OM:om1.
{code}

Expected exception: an UnsupportedOperationException with a meaningful hint to 
use the hadoop3 filesystem jar.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4102) Normalize Keypath for lookupKey

2020-09-28 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4102:
--
Target Version/s: 1.1.0
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

> Normalize Keypath for lookupKey
> ---
>
> Key: HDDS-4102
> URL: https://issues.apache.org/jira/browse/HDDS-4102
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> When ozone.om.enable.filesystem.paths is enabled, OM normalizes the path and 
> stores the key name.
> Now when the user tries to read the file from S3 using the key name which was 
> used to create the key, it will return the error KEY_NOT_FOUND.
> The issue is that lookupKey needs to normalize the path when 
> ozone.om.enable.filesystem.paths is enabled. This is a common API used by 
> S3/FS. 
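For illustration, the kind of normalization lookupKey would need (a sketch 
using java.nio, not the actual OM code):

{code:java}
import java.nio.file.Paths;

public class KeyNameNormalizer {
  // e.g. "a//b/./c" -> "a/b/c", matching what OM stored at key creation time
  static String normalize(String keyName) {
    return Paths.get(keyName).normalize().toString();
  }
}
{code}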



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4285) Read is slow due to the frequent usage of UGI.getCurrentUserCall()

2020-09-28 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4285:
--
Description: 
The Ozone read operation turned out to be slow, mainly because we do a new 
UGI.getCurrentUser call for the block token on each of the calls.

We need to cache the block token / UGI.getCurrentUser() call to make it faster.

 !image-2020-09-28-16-19-17-581.png! 

To reproduce:

Checkout: https://github.com/elek/hadoop-ozone/tree/mocked-read

{code}
cd hadoop-ozone/client

export 
MAVEN_OPTS=-agentpath:/home/elek/prog/async-profiler/build/libasyncProfiler.so=start,file=/tmp/profile-%t-%p.svg

mvn compile exec:java 
-Dexec.mainClass=org.apache.hadoop.ozone.client.io.TestKeyOutputStreamUnit 
-Dexec.classpathScope=test
{code}

  was:
The Ozone read operation turned out to be slow, mainly because we do a new 
UGI.getCurrentUser call for the block token on each of the calls.

We need to cache the block token / UGI.getCurrentUser() call to make it faster.

 !image-2020-09-28-16-19-17-581.png! 


> Read is slow due to the frequent usage of UGI.getCurrentUserCall()
> --
>
> Key: HDDS-4285
> URL: https://issues.apache.org/jira/browse/HDDS-4285
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
> Attachments: image-2020-09-28-16-19-17-581.png, 
> profile-20200928-161631-180518.svg
>
>
> The Ozone read operation turned out to be slow, mainly because we do a new 
> UGI.getCurrentUser call for the block token on each of the calls.
> We need to cache the block token / UGI.getCurrentUser() call to make it faster.
>  !image-2020-09-28-16-19-17-581.png! 
> To reproduce:
> Checkout: https://github.com/elek/hadoop-ozone/tree/mocked-read
> {code}
> cd hadoop-ozone/client
> export 
> MAVEN_OPTS=-agentpath:/home/elek/prog/async-profiler/build/libasyncProfiler.so=start,file=/tmp/profile-%t-%p.svg
> mvn compile exec:java 
> -Dexec.mainClass=org.apache.hadoop.ozone.client.io.TestKeyOutputStreamUnit 
> -Dexec.classpathScope=test
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4285) Read is slow due to the frequent usage of UGI.getCurrentUserCall()

2020-09-28 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4285:
--
Attachment: profile-20200928-161631-180518.svg

> Read is slow due to the frequent usage of UGI.getCurrentUserCall()
> --
>
> Key: HDDS-4285
> URL: https://issues.apache.org/jira/browse/HDDS-4285
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
> Attachments: image-2020-09-28-16-19-17-581.png, 
> profile-20200928-161631-180518.svg
>
>
> The Ozone read operation turned out to be slow, mainly because we do a new 
> UGI.getCurrentUser call for the block token on each of the calls.
> We need to cache the block token / UGI.getCurrentUser() call to make it faster.
>  !image-2020-09-28-16-19-17-581.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4285) Read is slow due to the frequent usage of UGI.getCurrentUserCall()

2020-09-28 Thread Marton Elek (Jira)
Marton Elek created HDDS-4285:
-

 Summary: Read is slow due to the frequent usage of 
UGI.getCurrentUserCall()
 Key: HDDS-4285
 URL: https://issues.apache.org/jira/browse/HDDS-4285
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek
 Attachments: image-2020-09-28-16-19-17-581.png, 
profile-20200928-161631-180518.svg

The Ozone read operation turned out to be slow, mainly because we do a new 
UGI.getCurrentUser call for the block token on each of the calls.

We need to cache the block token / UGI.getCurrentUser() call to make it faster.

 !image-2020-09-28-16-19-17-581.png! 
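For illustration, a minimal sketch of the proposed caching (the class and 
field names are made up):

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

class CachedCurrentUser {
  private UserGroupInformation cachedUgi;

  // Resolve UGI.getCurrentUser() once per stream instead of once per call.
  UserGroupInformation get() throws IOException {
    if (cachedUgi == null) {
      cachedUgi = UserGroupInformation.getCurrentUser();
    }
    return cachedUgi;
  }
}
{code}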



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4234) Add important comment to ListVolumes logic

2020-09-23 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4234.
---
   Fix Version/s: (was: 1.0.0)
  1.1.0
Target Version/s: 1.1.0  (was: 0.5.0)
  Resolution: Fixed

> Add important comment to ListVolumes logic
> --
>
> Key: HDDS-4234
> URL: https://issues.apache.org/jira/browse/HDDS-4234
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: Xie Lei
>Priority: Major
> Fix For: 1.1.0
>
> Attachments: image-2020-09-11-11-23-50-504.png
>
>
> When running the following command, the statistic for list requests is 2:
> {code:java}
> ozone sh volume ls
> {code}
>  
>  
> !image-2020-09-11-11-23-50-504.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4236) Move "Om*Codec.java" to new project hadoop-ozone/interface-storage

2020-09-23 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4236.
---
Fix Version/s: 1.1.0
   Resolution: Fixed

> Move "Om*Codec.java" to new project hadoop-ozone/interface-storage
> --
>
> Key: HDDS-4236
> URL: https://issues.apache.org/jira/browse/HDDS-4236
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Rui Wang
>Assignee: Rui Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> This is the first step to separate storage and RPC proto files. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4270) Add more reusable byteman scripts to debug ofs/o3fs performance

2020-09-23 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4270:
--
Description: 
I am using https://byteman.jboss.org to debug the performance of spark + teragen 
with different scripts. Some byteman scripts are already shared by HDDS-4095 or 
HDDS-342 but it seems to be a good practice to share the newer scripts to make 
it possible to reproduce performance problems.

For using byteman with Ozone, see this video:
https://www.youtube.com/watch?v=_4eYsH8F50E&list=PLCaV-jpCBO8U_WqyySszmbmnL-dhlzF6o&index=5

  was:
I am using https://byteman.jboss.org to debug the performance of spark + teragen 
with different scripts. Some byteman scripts are already shared by HDDS-4095 or 
HDDS-342 but it seems to be a good practice to share the newer scripts to make 
it possible to reproduce performance problems.




> Add more reusable byteman scripts to debug ofs/o3fs performance
> ---
>
> Key: HDDS-4270
> URL: https://issues.apache.org/jira/browse/HDDS-4270
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>
> I am using https://byteman.jboss.org to debug the performance of spark + 
> teragen with different scripts. Some byteman scripts are already shared by 
> HDDS-4095 or HDDS-342 but it seems to be a good practice to share the newer 
> scripts to make it possible to reproduce performance problems.
> For using byteman with Ozone, see this video:
> https://www.youtube.com/watch?v=_4eYsH8F50E&list=PLCaV-jpCBO8U_WqyySszmbmnL-dhlzF6o&index=5



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4270) Add more reusable byteman scripts to debug ofs/o3fs performance

2020-09-23 Thread Marton Elek (Jira)
Marton Elek created HDDS-4270:
-

 Summary: Add more reusable byteman scripts to debug ofs/o3fs 
performance
 Key: HDDS-4270
 URL: https://issues.apache.org/jira/browse/HDDS-4270
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek


I am using https://byteman.jboss.org to debug the performance of spark + teragen 
with different scripts. Some byteman scripts are already shared by HDDS-4095 or 
HDDS-342 but it seems to be a good practice to share the newer scripts to make 
it possible to reproduce performance problems.





--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-3102) ozone getconf command should use the GenericCli parent class

2020-09-18 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-3102.
---
Fix Version/s: 1.1.0
   Resolution: Fixed

> ozone getconf command should use the GenericCli parent class
> 
>
> Key: HDDS-3102
> URL: https://issues.apache.org/jira/browse/HDDS-3102
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Reporter: Marton Elek
>Assignee: Rui Wang
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 1.1.0
>
>
> org.apache.hadoop.ozone.freon.OzoneGetConf implements a tool to print out 
> current configuration values.
> For all the other CLI tools we have already started to use picocli and the 
> GenericCli parent class.
> To provide a better user experience we should migrate this tool to use 
> GenericCli (move it to the tools project and remove freon from the package 
> name).
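For illustration, a rough sketch of a GenericCli-based tool (the class layout 
is an assumption, not the final patch):

{code:java}
import org.apache.hadoop.hdds.cli.GenericCli;
import picocli.CommandLine.Command;

@Command(name = "getconf",
    description = "Print Ozone configuration values")
public class OzoneGetConf extends GenericCli {
  public static void main(String[] args) {
    new OzoneGetConf().run(args);
  }
}
{code}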



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4255) Remove unused Ant and Jdiff dependency versions

2020-09-17 Thread Marton Elek (Jira)
Marton Elek created HDDS-4255:
-

 Summary: Remove unused Ant and Jdiff dependency versions
 Key: HDDS-4255
 URL: https://issues.apache.org/jira/browse/HDDS-4255
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek


Versions of Ant and JDiff are not used in the Ozone project, but we have some 
version declarations (inherited from the Hadoop parent pom, which was used as a 
base for the main pom.xml).

As the (unused) Ant version has security issues, I would remove them to avoid 
any confusion.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3927) Rename Ozone OM,DN,SCM runtime options to conform to naming conventions

2020-09-13 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-3927:
--
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Rename Ozone OM,DN,SCM runtime options to conform to naming conventions
> ---
>
> Key: HDDS-3927
> URL: https://issues.apache.org/jira/browse/HDDS-3927
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Similar to {{HDFS_NAMENODE_OPTS}}, {{HDFS_DATANODE_OPTS}}, etc., we should 
> have {{OZONE_MANAGER_OPTS}}, {{OZONE_DATANODE_OPTS}} to allow adding JVM args 
> for GC tuning and debugging.
> Update 1:
> [~bharat] mentioned we already have some equivalents for OM and Ozone DNs:
> - 
> [HDFS_OM_OPTS|https://github.com/apache/hadoop-ozone/blob/bc7786a2fafb2d36923506f8de6c25fcfd26d55b/hadoop-ozone/dist/src/shell/ozone/ozone#L157]
>  for Ozone OM. This looks like a typo, should begin with HDDS
> - 
> [HDDS_DN_OPTS|https://github.com/apache/hadoop-ozone/blob/bc7786a2fafb2d36923506f8de6c25fcfd26d55b/hadoop-ozone/dist/src/shell/ozone/ozone#L108]
>  for Ozone DNs
> Update 2:
> - HDFS_OM_OPTS -> OZONE_OM_OPTS
> - HDDS_DN_OPTS -> OZONE_DATANODE_OPTS
> - HDFS_STORAGECONTAINERMANAGER_OPTS -> OZONE_SCM_OPTS
> The new names conform to {{hadoop_subcommand_opts}}. Thanks [~elek] for 
> pointing this out.
> Objective:
> Rename the environment variables to be in accordance with the convention, and 
> keep the compatibility by deprecating the old variable names.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4238) Test AWS S3 client compatibility with fs incompatible keys

2020-09-13 Thread Marton Elek (Jira)
Marton Elek created HDDS-4238:
-

 Summary: Test AWS S3 client compatibility with fs incompatible keys
 Key: HDDS-4238
 URL: https://issues.apache.org/jira/browse/HDDS-4238
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek


There is a discussion in HDDS-4097 to define the ofs and s3 behavior (how to 
normalize / store keys).

Keys which have FS-compatible names (like a/b/v/d) can be handled easily, but 
there are corner cases with fs-incompatible paths (like a/bd or a/b/../c).

This patch creates a new robot test suite to test these cases (based on the 
behavior of AWS S3).

Note: based on the discussion in HDDS-4097 there are cases where the new test 
fails (with specific settings we can prefer ofs/o3fs compatibility/full view 
instead of 100% s3 compatibility)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4119) Improve performance of the BufferPool management of Ozone client

2020-09-11 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4119.
---
Fix Version/s: 1.1.0
   Resolution: Fixed

> Improve performance of the BufferPool management of Ozone client
> 
>
> Key: HDDS-4119
> URL: https://issues.apache.org/jira/browse/HDDS-4119
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Teragen was reported to be slow with a low number of mappers compared to HDFS.
> In my test (one pipeline, 3 yarn nodes) a 10 g teragen with HDFS took ~3 mins, 
> but with Ozone it took 6 mins. It could be fixed by using more mappers, but 
> when I investigated the execution I found a few problems regarding the 
> BufferPool management:
>  1. IncrementalChunkBuffer is slow, and it might not be required as BufferPool 
> itself is incremental
>  2. For each write operation bufferPool.allocateBufferIfNeeded is called, 
> which can be a slow operation (positions have to be calculated)
>  3. There is no explicit support for write(byte) operations
> In the flamegraph it's clearly visible that with a low number of mappers the 
> client is busy with buffer operations. After the patch the rpc call and the 
> checksum calculation take the majority of the time. 
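For illustration, a sketch of the direction of the buffer fix (names and sizes 
are illustrative, not the actual patch):

{code:java}
import java.nio.ByteBuffer;

class CurrentBufferCache {
  private ByteBuffer current;

  // Fast path for write(byte): no pool position calculation per call;
  // the allocation (slow path) runs only when the buffer is full.
  void write(byte b) {
    if (current == null || !current.hasRemaining()) {
      current = allocateNextBuffer();
    }
    current.put(b);
  }

  private ByteBuffer allocateNextBuffer() {
    return ByteBuffer.allocate(4 * 1024 * 1024); // assumed 4 MB chunk size
  }
}
{code}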



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4233) Interrupted exception printed out from DatanodeStateMachine

2020-09-10 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4233:
--
Summary: Interrupted exception printed out from DatanodeStateMachine  
(was: Interrupted execption printed out from DatanodeStateMachine)

> Interrupted exception printed out from DatanodeStateMachine
> 
>
> Key: HDDS-4233
> URL: https://issues.apache.org/jira/browse/HDDS-4233
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>
> A strange exception is visible in the log during normal run:
> {code}
> 2020-09-10 11:31:41 WARN  DatanodeStateMachine:245 - Interrupt the execution.
> java.lang.InterruptedException: sleep interrupted
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:243)
> at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:405)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> The most common reason for this is the triggering of a new HB request.
> As this is normal behavior, we shouldn't log the exception at WARN level.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4233) Interrupted execption printed out from DatanodeStateMachine

2020-09-10 Thread Marton Elek (Jira)
Marton Elek created HDDS-4233:
-

 Summary: Interrupted execption printed out from 
DatanodeStateMachine
 Key: HDDS-4233
 URL: https://issues.apache.org/jira/browse/HDDS-4233
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek


A strange exception is visible in the log during normal run:

{code}
2020-09-10 11:31:41 WARN  DatanodeStateMachine:245 - Interrupt the execution.
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.start(DatanodeStateMachine.java:243)
at 
org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$startDaemon$0(DatanodeStateMachine.java:405)
at java.lang.Thread.run(Thread.java:748)
{code}


The most common reason for this is the triggering of a new HB request.

As this is normal behavior, we shouldn't log the exception at WARN level.
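For illustration, a sketch of the suggested handling (the log message and 
method are illustrative):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class HeartbeatSleep {
  private static final Logger LOG = LoggerFactory.getLogger(HeartbeatSleep.class);

  // Treat the interrupt as normal control flow: log below WARN and
  // restore the interrupt flag for the caller.
  void sleepUntilNextHeartbeat(long intervalMs) {
    try {
      Thread.sleep(intervalMs);
    } catch (InterruptedException e) {
      LOG.debug("Sleep interrupted, most likely a triggered heartbeat", e);
      Thread.currentThread().interrupt();
    }
  }
}
{code}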




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4209) S3A Filesystem does not work with Ozone S3

2020-09-09 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192839#comment-17192839
 ] 

Marton Elek commented on HDDS-4209:
---

I checked it with the aws s3 API, and after the first step I can see a good 
entry:

{code}
aws s3api list-objects --bucket ozonetest --prefix=o11
{
    "Contents": [
        {
            "Key": "o11/o12/",
            "LastModified": "2020-09-09T12:34:36.000Z",
            "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
            "Size": 0,
            "StorageClass": "STANDARD",
            "Owner": {
                "DisplayName": "e1428",
                "ID": "b8c021b4343e316b28b545df160c6720479a998001ebf7019328b64417fe152d"
            }
        }
    ]
}
{code}

As the `/` suffix is added to the path, the intermediate directory creation 
logic can understand that it's a directory and can reuse the object, IMHO.
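For illustration, the convention boils down to a check like this (a sketch 
only, not the actual OM code):

{code:java}
public class KeyTypeCheck {
  // Ozone stores directories with a trailing "/", so an S3A directory
  // marker object such as "o11/o12/" is recognized as a directory.
  static boolean isDirectoryKey(String keyName) {
    return keyName.endsWith("/");
  }
}
{code}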

> S3A Filesystem does not work with Ozone S3
> --
>
> Key: HDDS-4209
> URL: https://issues.apache.org/jira/browse/HDDS-4209
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> When *ozone.om.enable.filesystem.paths* is enabled
>  
> hdfs dfs -mkdir -p s3a://b12345/d11/d12 -> Success
> hdfs dfs -put /tmp/file1 s3a://b12345/d11/d12/file1 -> fails with the error below
>  
> {code:java}
> 2020-09-04 03:53:51,377 ERROR 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest: Key creation 
> failed. Volume:s3v, Bucket:b1234, Keyd11/d12/file1._COPYING_. Exception:{}
> NOT_A_FILE org.apache.hadoop.ozone.om.exceptions.OMException: Can not create 
> file: cp/k1._COPYING_ as there is already file in the given path
>  at 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.validateAndUpdateCache(OMKeyCreateRequest.java:256)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:428)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$applyTransaction$1(OzoneManagerStateMachine.java:246)
>  at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}
> *Reason for this*
>  The S3A filesystem creates an empty file when it creates a directory.
> *Now entries in Ozone KeyTable after create directory*
>  d11/
>  d11/d12
> Because of this, OMFileRequest.VerifyInFilesPath fails with 
> FILE_EXISTS_IN_GIVEN_PATH, because d11/d12 is considered a file, not a 
> directory. (In Ozone, directories currently end with a trailing "/".)
> So, when d11/d12/file is created, we check that the parent exists; d11/d12 is 
> considered a file, and the request fails with NOT_A_FILE.
> When the setting is disabled, it works fine: during key creation we do not 
> check any filesystem semantics and do not create intermediate directories.
> {code:java}
> [root@bvoz-1 ~]# hdfs dfs -mkdir -p s3a://b12345/d11/d12
> [root@bvoz-1 ~]# hdfs dfs -put /etc/hadoop/conf/ozone-site.xml 
> s3a://b12345/d11/d12/k1
> [root@bvoz-1 ~]# hdfs dfs -ls s3a://b12345/d11/d12
> Found 1 items
> -rw-rw-rw-   1 systest systest   2373 2020-09-04 04:45 
> s3a://b12345/d11/d12/k1
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4155) Directory and filename can end up with same name in a path

2020-09-08 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192405#comment-17192405
 ] 

Marton Elek commented on HDDS-4155:
---

> s3a does not handle it. It just displays both if they exist.

Thanks for the comment. I fixed the previous comment:

{quote}> We will not "support" them, they will just exist

Sorry, I am not sure what this answer means exactly. Can you please define 
what ofs/o3fs behaviour you are suggesting?

With the previous example I tried to explain that this behavior is independent 
of s3 strict compatibility, as s3a can *display* both a dir and a file with the 
same name, even if the S3 compatibility is strict (== it can work together with 
AWS S3).{quote}

> Directory and filename can end up with same name in a path
> --
>
> Key: HDDS-4155
> URL: https://issues.apache.org/jira/browse/HDDS-4155
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Scenario:
> Create a key via S3, and create a directory through Fs.
>  # open key -> /a/b/c
>  # CreateDirectory -> /a/b/c
>  # CommitKey -> /a/b/c
> So, now in Ozone we will have a directory and a file with the name "c".
> When created through the Fs interface:
>  # create file -> /a/b/c
>  # CreateDirectory -> /a/b/c
>  # CommitKey -> /a/b/c
> So, now in Ozone we will have a directory and a file with the name "c".
>  
>  # InitiateMPU /a/b/c
>  # Create Part1 /a/b/c
>  # Commit Part1 /a/b/c
>  # Create Directory /a/b/c
>  # Complete MPU /a/b/c
> So, now in Ozone we will have a directory and a file with the name "c". In 
> MPU this is one example scenario.
>  
> A few proposals/ideas to solve this:
>  # Check during commit whether a directory already exists with the same name. 
> The disadvantage is that we fail at the last stage, after the user has 
> uploaded the entire data. (A file system with a create in progress acts 
> similarly. Scenario: 1. vi t1, 2. mkdir t1, 3. save t1: fails with ""t1" is a 
> directory".)
>  # During directory creation, check whether there are any open key creations 
> with the same name, and fail.
>  
> None of the above approaches is final; this Jira is opened to discuss this 
> issue and come up with a solution.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4155) Directory and filename can end up with same name in a path

2020-09-08 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192015#comment-17192015
 ] 

Marton Elek commented on HDDS-4155:
---

Thanks for the answer, [~arp].

> We will not "support" them, they will just exist

Sorry, I am not sure what this answer means exactly. Can you please define 
what ofs/o3fs behaviour you are suggesting?


With the previous example I tried to explain that this behavior is independent 
of s3 strict compatibility, as s3a can handle both a dir and a file with the 
same name, even if the S3 compatibility is strict (== it can work together with 
AWS S3)

> Directory and filename can end up with same name in a path
> --
>
> Key: HDDS-4155
> URL: https://issues.apache.org/jira/browse/HDDS-4155
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Scenario:
> Create a key via S3, and create a directory through Fs.
>  # open key -> /a/b/c
>  # CreateDirectory -> /a/b/c
>  # CommitKey -> /a/b/c
> So, now in Ozone we will have a directory and a file with the name "c".
> When created through the Fs interface:
>  # create file -> /a/b/c
>  # CreateDirectory -> /a/b/c
>  # CommitKey -> /a/b/c
> So, now in Ozone we will have a directory and a file with the name "c".
>  
>  # InitiateMPU /a/b/c
>  # Create Part1 /a/b/c
>  # Commit Part1 /a/b/c
>  # Create Directory /a/b/c
>  # Complete MPU /a/b/c
> So, now in Ozone we will have a directory and a file with the name "c". In 
> MPU this is one example scenario.
>  
> A few proposals/ideas to solve this:
>  # Check during commit whether a directory already exists with the same name. 
> The disadvantage is that we fail at the last stage, after the user has 
> uploaded the entire data. (A file system with a create in progress acts 
> similarly. Scenario: 1. vi t1, 2. mkdir t1, 3. save t1: fails with ""t1" is a 
> directory".)
>  # During directory creation, check whether there are any open key creations 
> with the same name, and fail.
>  
> None of the above approaches is final; this Jira is opened to discuss this 
> issue and come up with a solution.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4203) Publish docker image for ozone 1.0.0

2020-09-08 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4203.
---
Fix Version/s: 1.0.0
   Resolution: Fixed

> Publish docker image for ozone 1.0.0
> 
>
> Key: HDDS-4203
> URL: https://issues.apache.org/jira/browse/HDDS-4203
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.0.0
>
>
> The Docker image is based on the voted and approved artifacts, which are 
> available. We can create the image.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-3441) Enable TestKeyManagerImpl test cases

2020-09-07 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-3441.
---
Fix Version/s: 1.1.0
   Resolution: Fixed

> Enable TestKeyManagerImpl test cases
> 
>
> Key: HDDS-3441
> URL: https://issues.apache.org/jira/browse/HDDS-3441
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Nanda kumar
>Assignee: Aryan Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Fix and enable TestKeyManagerImpl test cases



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4216) Separate storage / RPC proto files

2020-09-07 Thread Marton Elek (Jira)
Marton Elek created HDDS-4216:
-

 Summary: Separate storage / RPC proto files
 Key: HDDS-4216
 URL: https://issues.apache.org/jira/browse/HDDS-4216
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek


The change of HDDS-3792 separated the client/admin/tools proto files and 
introduced a new way to check the backward compatibility of the proto files.

To make it easier to check the compatibility of persistent data (proto 
structures persisted in RocksDB) we need to separate the RPC and storage proto 
files:

 1. Create a hadoop-ozone/interface-storage project
 2. Clone the required proto files and generate java with a different package name
 3. Copy all the Codec implementations together with the code



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4204) upgrade docker environment does not work with KEEP_RUNNING=true

2020-09-07 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4204:
--
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> upgrade docker environment does not work with KEEP_RUNNING=true
> ---
>
> Key: HDDS-4204
> URL: https://issues.apache.org/jira/browse/HDDS-4204
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker, test
>Affects Versions: 1.0.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Ozone {{upgrade}} Docker Compose environment fails if run with 
> {{KEEP_RUNNING=true}}.  The variable is applied to both runs (pre- and 
> post-upgrade), but pre-upgrade containers should be stopped anyway, since 
> they will be replaced by the new ones.
> {code}
> $ cd hadoop-ozone/dist/target/ozone-1.1.0-SNAPSHOT/compose/upgrade
> $ KEEP_RUNNING=true ./test.sh
> ...
> Failed: IO error: While lock file: scm.db/LOCK: Resource temporarily 
> unavailable
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4202) Upgrade ratis to 1.1.0-ea949f1-SNAPSHOT

2020-09-07 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4202.
---
Fix Version/s: 1.1.0
   Resolution: Fixed

> Upgrade ratis to 1.1.0-ea949f1-SNAPSHOT
> ---
>
> Key: HDDS-4202
> URL: https://issues.apache.org/jira/browse/HDDS-4202
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4193) Range used by S3 MultipartUpload copy-from-source should be inclusive

2020-09-07 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4193.
---
Fix Version/s: 1.0.1
   Resolution: Fixed

> Range used by S3 MultipartUpload copy-from-source should be inclusive
> -
>
> Key: HDDS-4193
> URL: https://issues.apache.org/jira/browse/HDDS-4193
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.0.1
>
>
> S3 API provides a feature to copy a specific range from an existing key.
> Based on the documentation, this range definition is inclusive:
> https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part-copy.html
> {quote}
> -copy-source-range (string)
> The range of bytes to copy from the source object. The range value must 
> use the form bytes=first-last, where the first and last are the zero-based 
> byte offsets to copy. For example, bytes=0-9 indicates that you want to copy 
> the first 10 bytes of the source. You can copy a range only if the source 
> object is greater than 5 MB.
> {quote}
> But as visible from our [robot test|http://example.com], in our case we use 
> an exclusive range:
> {code}
> upload-part-copy ... --copy-source-range bytes=0-10485758
> upload-part-copy ... --copy-source-range bytes=10485758-10485760
> {code}
> Based on this AWS documentation it will return a (10485758 + 1) + 3 bytes 
> long key, which is impossible if our original source key is just 10485760.
> I think the right usage to get the original key is the following:
> {code}
> upload-part-copy ... --copy-source-range bytes=0-10485757
> upload-part-copy ... --copy-source-range bytes=10485758-10485759
> {code}
> (Note: this bug was found with the script in HDDS-4194, which showed that 
> AWS S3 works in a different way.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4198) Compile Ozone with multiple Java versions

2020-09-07 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4198:
--
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Compile Ozone with multiple Java versions
> -
>
> Key: HDDS-4198
> URL: https://issues.apache.org/jira/browse/HDDS-4198
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Add matrix build in GitHub Actions for compiling Ozone with both Java 8 and 
> 11.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4155) Directory and filename can end up with same name in a path

2020-09-06 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17191280#comment-17191280
 ] 

Marton Elek commented on HDDS-4155:
---

Does it mean that we are planning to support files and directories with the 
same name in ofs/o3fs (!!!)  when strict s3 compatibility is configured?

> Directory and filename can end up with same name in a path
> --
>
> Key: HDDS-4155
> URL: https://issues.apache.org/jira/browse/HDDS-4155
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Scenario:
> Create a key via S3, and create a directory through Fs.
>  # open key -> /a/b/c
>  # CreateDirectory -> /a/b/c
>  # CommitKey -> /a/b/c
> So now in Ozone we will have a directory and a file named "c".
> When created through the Fs interface:
>  # create file -> /a/b/c
>  # CreateDirectory -> /a/b/c
>  # CommitKey -> /a/b/c
> So now in Ozone we will have a directory and a file named "c".
>  
>  # InitiateMPU /a/b/c
>  # Create Part1 /a/b/c
>  # Commit Part1 /a/b/c
>  # Create Directory /a/b/c
>  # Complete MPU /a/b/c
> So now in Ozone we will have a directory and a file named "c". In MPU this is 
> one example scenario.
>  
> A few proposals/ideas to solve this:
>  # Check during commit whether a directory already exists with the same name. 
> The disadvantage is that we fail only at the last stage, after the user has 
> uploaded all the data. (A file system with a create in progress acts 
> similarly. Scenario: 1. vi t1 2. mkdir t1 3. Save t1: fail, "t1" is a 
> directory.)
>  # During create directory, check whether there is any open key creation with 
> the same name and fail if so.
>  
> None of the above approaches is final; this Jira is opened to discuss this 
> issue and come up with a solution.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4209) S3A Filesystem does not work with Ozone S3

2020-09-04 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17190993#comment-17190993
 ] 

Marton Elek commented on HDDS-4209:
---

Nice catch, thanks for the report [~bharat].

It seems to be related to HDDS-4155 and HDDS-4097. 

I think the intermediate directory creation should be more permissive. As it 
is allowed, from the object-store point of view, to create a file and a 
directory with the same name, we can just create them.

But it's a nice corner case. It should be added to the design of HDDS-4097.

> S3A Filesystem does not work with Ozone S3
> --
>
> Key: HDDS-4209
> URL: https://issues.apache.org/jira/browse/HDDS-4209
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Blocker
>
> When *ozone.om.enable.filesystem.paths* is enabled
>  
> hdfs dfs -mkdir -p s3a://b12345/d11/d12 -> Success
> hdfs dfs -put /tmp/file1 s3a://b12345/d11/d12/file1 -> fails with below error
>  
> {code:java}
> 2020-09-04 03:53:51,377 ERROR 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest: Key creation 
> failed. Volume:s3v, Bucket:b1234, Keyd11/d12/file1._COPYING_. Exception:{}
> NOT_A_FILE org.apache.hadoop.ozone.om.exceptions.OMException: Can not create 
> file: cp/k1._COPYING_ as there is already file in the given path
>  at 
> org.apache.hadoop.ozone.om.request.key.OMKeyCreateRequest.validateAndUpdateCache(OMKeyCreateRequest.java:256)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerRequestHandler.handleWriteRequest(OzoneManagerRequestHandler.java:227)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.runCommand(OzoneManagerStateMachine.java:428)
>  at 
> org.apache.hadoop.ozone.om.ratis.OzoneManagerStateMachine.lambda$applyTransaction$1(OzoneManagerStateMachine.java:246)
>  at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}
> *Reason for this*
>  The S3A filesystem, when it creates a directory, creates an empty file.
> *Entries in the Ozone KeyTable after create directory*
>  d11/
>  d11/d12
> Because of this, OMFileRequest.VerifyInFilesPath fails with 
> FILE_EXISTS_IN_GIVEN_PATH, because d11/d12 is considered a file, not a 
> directory. (In Ozone currently, directories end with a trailing "/".)
> So when d11/d12/file is created, we check whether the parent exists; d11/d12 
> is considered a file, and the request fails with NOT_A_FILE.
> When the flag is disabled it works fine: during key create we do not check 
> any filesystem semantics and do not create intermediate directories.
> {code:java}
> [root@bvoz-1 ~]# hdfs dfs -mkdir -p s3a://b12345/d11/d12
> [root@bvoz-1 ~]# hdfs dfs -put /etc/hadoop/conf/ozone-site.xml 
> s3a://b12345/d11/d12/k1
> [root@bvoz-1 ~]# hdfs dfs -ls s3a://b12345/d11/d12
> Found 1 items
> -rw-rw-rw-   1 systest systest   2373 2020-09-04 04:45 
> s3a://b12345/d11/d12/k1
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4171) DEFAULT_PIPELIME_LIMIT typo in MiniOzoneCluster.java

2020-09-04 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17190990#comment-17190990
 ] 

Marton Elek commented on HDDS-4171:
---

I am not familiar with any similar spell check. My IntelliJ usually shows 
spelling errors, but some of them are false positives.

But we can add any new check if you have ideas for tools. The contract is 
simple:

Create a new bash script under hadoop-ozone/dev-support/checks which returns 
255 in case of any error and prints the error to stderr (and -- optionally -- 
to the top-level $REPORT_DIR).

> DEFAULT_PIPELIME_LIMIT typo in MiniOzoneCluster.java
> 
>
> Key: HDDS-4171
> URL: https://issues.apache.org/jira/browse/HDDS-4171
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Xiang Zhang
>Assignee: Xiang Zhang
>Priority: Minor
>
> [https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/MiniOzoneCluster.java#L277]
> I believe it should be DEFAULT_PIPELINE_LIMIT



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4155) Directory and filename can end up with same name in a path

2020-09-04 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17190989#comment-17190989
 ] 

Marton Elek commented on HDDS-4155:
---

Thanks for the answer, [~arp].

Sorry if I used the wrong word in my question ("support"); let's talk about 
the behavior instead of how to name it (bug or not).

Let's say I have an S3 bucket:

{code}
 aws s3api list-objects --bucket ozonetest --prefix=a/b | jq '.Contents[] | .Key'
"a/b/../c"
"a/b/../e"
"a/b/./c"
"a/b/./f"
"a/b//c"
"a/b//y"
"a/b/c"
"a/b/c/file1"
"a/b/h"
"a/b/h/"
"a/b/i/"
"a/b/x/README.md"
{code}

Using s3a we can see that some (invalid) paths are hidden, while others are 
shown (for example the directory and the file with the same name):

{code}
drwxrwxrwx   - elek elek  0 2020-09-05 08:23 s3a://ozonetest/a/b/a
-rw-rw-rw-   1 elek elek   3841 2020-08-28 11:18 s3a://ozonetest/a/b/c
drwxrwxrwx   - elek elek  0 2020-09-05 08:23 s3a://ozonetest/a/b/c
-rw-rw-rw-   1 elek elek   3841 2020-09-02 12:34 s3a://ozonetest/a/b/h
drwxrwxrwx   - elek elek  0 2020-09-05 08:23 s3a://ozonetest/a/b/h
drwxrwxrwx   - elek elek  0 2020-09-05 08:23 s3a://ozonetest/a/b/i
drwxrwxrwx   - elek elek  0 2020-09-05 08:23 s3a://ozonetest/a/b/x
{code}

I don't think it's an "unfixable" bug; it's a decision about the mapping: 
display both the file and the directory instead of hiding one (or throwing an 
exception).

I understand that HDFS couldn't support it, as it's not an object store. But 
Ozone is an object store, and I asked (without the intention to suggest 
anything different) **why** we choose to follow the behavior of HDFS instead 
of S3A, which seems to be closer to S3FS.

Second question: what is the behavior of some other object store connectors 
(ADLS, Google)?

In general (as I wrote in the other thread) I think we should consider some 
level of compatibility between S3A (and other object store connectors) and 
ofs/o3fs, to support a seamless move between on-prem and cloud in 
hybrid-cloud setups.

> Directory and filename can end up with same name in a path
> --
>
> Key: HDDS-4155
> URL: https://issues.apache.org/jira/browse/HDDS-4155
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Scenario:
> Create a key via S3, and create a directory through Fs.
>  # open key -> /a/b/c
>  # CreateDirectory -> /a/b/c
>  # CommitKey -> /a/b/c
> So now in Ozone we will have a directory and a file named "c".
> When created through the Fs interface:
>  # create file -> /a/b/c
>  # CreateDirectory -> /a/b/c
>  # CommitKey -> /a/b/c
> So now in Ozone we will have a directory and a file named "c".
>  
>  # InitiateMPU /a/b/c
>  # Create Part1 /a/b/c
>  # Commit Part1 /a/b/c
>  # Create Directory /a/b/c
>  # Complete MPU /a/b/c
> So now in Ozone we will have a directory and a file named "c". In MPU this is 
> one example scenario.
>  
> A few proposals/ideas to solve this:
>  # Check during commit whether a directory already exists with the same name. 
> The disadvantage is that we fail only at the last stage, after the user has 
> uploaded all the data. (A file system with a create in progress acts 
> similarly. Scenario: 1. vi t1 2. mkdir t1 3. Save t1: fail, "t1" is a 
> directory.)
>  # During create directory, check whether there is any open key creation with 
> the same name and fail if so.
>  
> None of the above approaches is final; this Jira is opened to discuss this 
> issue and come up with a solution.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4168) Remove reference to Skaffold in the README in dist/

2020-09-04 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4168.
---
Fix Version/s: 1.1.0
   Resolution: Fixed

> Remove reference to Skaffold in the README in dist/
> ---
>
> Key: HDDS-4168
> URL: https://issues.apache.org/jira/browse/HDDS-4168
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Alex Scammon
>Assignee: Alex Scammon
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> I got sidetracked when I ran into the README in the dist folder because it 
> referenced Skaffold which hasn't been used in a while.
> So that others don't get confused as I did,  I created a PR  to fix the 
> wayward reference:
>  * [https://github.com/apache/hadoop-ozone/pull/1360]
> This issue is merely to track the PR.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4197) Failed to load existing service definition files: ...SubcommandWithParent

2020-09-04 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4197.
---
Fix Version/s: 1.1.0
   Resolution: Fixed

> Failed to load existing service definition files: ...SubcommandWithParent
> -
>
> Key: HDDS-4197
> URL: https://issues.apache.org/jira/browse/HDDS-4197
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: jdk11, pull-request-available
> Fix For: 1.1.0
>
>
> {code}
> [INFO] Apache Hadoop HDDS Tools ... FAILURE
> ...
> [ERROR] Failed to load existing service definition files: 
> java.nio.file.NoSuchFileException: 
> hadoop-hdds/tools/target/classes/META-INF/services/org.apache.hadoop.hdds.cli.SubcommandWithParent
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4155) Directory and filename can end up with same name in a path

2020-09-04 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17190662#comment-17190662
 ] 

Marton Elek commented on HDDS-4155:
---

Yesterday I learned that s3a supports this: it can display a file and a 
directory with the same name.

I am wondering what the problem with it is. It seems that a Hadoop Compatible 
File System can handle this and display both.

It seems very strange to me, as this is something which should be avoided 
with a POSIX fs, but if Hadoop can support it...

> Directory and filename can end up with same name in a path
> --
>
> Key: HDDS-4155
> URL: https://issues.apache.org/jira/browse/HDDS-4155
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Scenario:
> Create a key via S3, and create a directory through Fs.
>  # open key -> /a/b/c
>  # CreateDirectory -> /a/b/c
>  # CommitKey -> /a/b/c
> So now in Ozone we will have a directory and a file named "c".
> When created through the Fs interface:
>  # create file -> /a/b/c
>  # CreateDirectory -> /a/b/c
>  # CommitKey -> /a/b/c
> So now in Ozone we will have a directory and a file named "c".
>  
>  # InitiateMPU /a/b/c
>  # Create Part1 /a/b/c
>  # Commit Part1 /a/b/c
>  # Create Directory /a/b/c
>  # Complete MPU /a/b/c
> So now in Ozone we will have a directory and a file named "c". In MPU this is 
> one example scenario.
>  
> A few proposals/ideas to solve this:
>  # Check during commit whether a directory already exists with the same name. 
> The disadvantage is that we fail only at the last stage, after the user has 
> uploaded all the data. (A file system with a create in progress acts 
> similarly. Scenario: 1. vi t1 2. mkdir t1 3. Save t1: fail, "t1" is a 
> directory.)
>  # During create directory, check whether there is any open key creation with 
> the same name and fail if so.
>  
> None of the above approaches is final; this Jira is opened to discuss this 
> issue and come up with a solution.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4205) Fix or disable coverage upload to codecov (pull requests)

2020-09-03 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17190198#comment-17190198
 ] 

Marton Elek commented on HDDS-4205:
---

Thanks for the idea. I am fine with that. In this case we should upload data 
only from the branch build (not from the PR): codecov will eventually get 
updated, but we won't have the PR comments...
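
A minimal sketch of how the upload step could be restricted to branch builds 
(the step below is an assumption about our workflow, not the actual patch):

{code}
  # hypothetical sketch for the post-commit workflow
  - name: Upload coverage to codecov
    # run only for push (branch) builds, skip pull requests
    if: github.event_name == 'push'
    run: bash <(curl -s https://codecov.io/bash)
{code}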

> Fix or disable coverage upload to codecov (pull requests)
> -
>
> Key: HDDS-4205
> URL: https://issues.apache.org/jira/browse/HDDS-4205
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>
> I would like to start a conversation about disabling the coverage data 
> upload to codecov.
>  1. It seems to be unreliable; HTTP 400 is a very common answer. I checked, 
> and @v1 is the latest version of the action.
> {code}
>  ->  Pinging Codecov
> https://codecov.io/upload/v4?package=bash-20200825-997b141&token=secret&branch=HDDS-4193&commit=361ddccd0666a8bd24240cda2652a903d4b69013&build=237476928&build_url=http%3A%2F%2Fgithub.com%2Fapache%2Fhadoop-ozone%2Factions%2Fruns%2F237476928&name=codecov-umbrella&tag=&slug=apache%2Fhadoop-ozone&service=github-actions&flags=&pr=1384&job=&cmd_args=f,n,F,Z
> HTTP 400
> {code} 
> 2. The reported coverage data is meaningless, as there is no good mechanism 
> to use the latest stable build as the baseline.
> I think we should disable codecov (sonar is still available) until we have a 
> solution for these issues.
> [~adoroszlai], [~vivekratnavel]: What do you think?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4205) Fix or disable coverage upload to codecov (pull requests)

2020-09-03 Thread Marton Elek (Jira)
Marton Elek created HDDS-4205:
-

 Summary: Fix or disable coverage upload to codecov (pull requests)
 Key: HDDS-4205
 URL: https://issues.apache.org/jira/browse/HDDS-4205
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek


I would like to start a conversation about disabling the coverage data upload 
to codecov.

 1. It seems to be unreliable; HTTP 400 is a very common answer. I checked, 
and @v1 is the latest version of the action.

{code}
 ->  Pinging Codecov
https://codecov.io/upload/v4?package=bash-20200825-997b141&token=secret&branch=HDDS-4193&commit=361ddccd0666a8bd24240cda2652a903d4b69013&build=237476928&build_url=http%3A%2F%2Fgithub.com%2Fapache%2Fhadoop-ozone%2Factions%2Fruns%2F237476928&name=codecov-umbrella&tag=&slug=apache%2Fhadoop-ozone&service=github-actions&flags=&pr=1384&job=&cmd_args=f,n,F,Z
HTTP 400
{code} 

2. The reported coverage data is meaningless, as there is no good mechanism to 
use the latest stable build as the baseline.

I think we should disable codecov (sonar is still available) until we have a 
solution for these issues.

[~adoroszlai], [~vivekratnavel]: What do you think?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4203) Publish docker image for ozone 1.0.0

2020-09-03 Thread Marton Elek (Jira)
Marton Elek created HDDS-4203:
-

 Summary: Publish docker image for ozone 1.0.0
 Key: HDDS-4203
 URL: https://issues.apache.org/jira/browse/HDDS-4203
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek


The Docker image is based on the voted and approved artifacts, which are 
available. We can create the image.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4198) Compile Ozone with multiple Java versions

2020-09-03 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17190008#comment-17190008
 ] 

Marton Elek commented on HDDS-4198:
---

(((I am wondering what the plan is for Java 8 support in Ozone. But that 
requires a wider discussion.)))

> Compile Ozone with multiple Java versions
> -
>
> Key: HDDS-4198
> URL: https://issues.apache.org/jira/browse/HDDS-4198
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>
> Add matrix build in GitHub Actions for compiling Ozone with both Java 8 and 
> 11.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4150) recon.api.TestEndpoints is flaky

2020-09-03 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17189954#comment-17189954
 ] 

Marton Elek commented on HDDS-4150:
---

Turning off this test until we find the problem, as it still causes 
false-positive feedback on PRs.

(today: 
https://github.com/elek/hadoop-ozone/runs/1062735360?check_suite_focus=true)

> recon.api.TestEndpoints is flaky
> 
>
> Key: HDDS-4150
> URL: https://issues.apache.org/jira/browse/HDDS-4150
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Marton Elek
>Assignee: Vivek Ratnavel Subramanian
>Priority: Blocker
>
> Failed on the PR:
> https://github.com/apache/hadoop-ozone/pull/1349
> And on the master:
> https://github.com/elek/ozone-build-results/blob/master/2020/08/25/2533/unit/hadoop-ozone/recon/org.apache.hadoop.ozone.recon.api.TestEndpoints.txt
> and here:
> https://github.com/elek/ozone-build-results/blob/master/2020/08/22/2499/unit/hadoop-ozone/recon/org.apache.hadoop.ozone.recon.api.TestEndpoints.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4097) S3/Ozone Filesystem inter-op

2020-09-03 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17189947#comment-17189947
 ] 

Marton Elek commented on HDDS-4097:
---

Thanks for the feedback [~arp]

I have three comments:

 *first*: sorry if it was not clear from the document, but the doc proposes 
only minimal changes; it mostly explains the situation and the motivations. In 
fact it is very close to the current path. The main differences (see the 
config sketch below):

1. ozone.om.enable.filesystem.paths is proposed to be true by default --> a 
one-line change
2. normalization patches should depend on a *different* configuration key (in 
the document, it's ozone.keyspace.scheme) --> a minimal change
3. the error handling of the fileList operation should be checked (invalid 
paths should be ignored) --> requires some work which should be done anyway...
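
A sketch of the two keys in ozone-site.xml (ozone.keyspace.scheme is only the 
key proposed in the document, not an existing setting):

{code}
<!-- hypothetical sketch of the proposed default -->
<property>
  <name>ozone.om.enable.filesystem.paths</name>
  <value>true</value>
</property>
<!-- normalization would be controlled by the separate, proposed key
     ozone.keyspace.scheme instead of the flag above -->
{code}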
 
*second part*:
 
{quote}

There is a set of use-cases where files are ingested via S3 and accessed via 
HCFS. E.g. someone ingesting logs into Hive external tables via fluentd/S3. For 
those use cases, the key names must look like valid paths and we need to ensure 
they are checked and normalized appropriately and directory prefixes created.

There is another set of pure object store use cases where the paths are random 
strings and may have arbitrary characters including {{/}} or other characters 
which are not valid in an FS path. Ingestion of such keys should be successful 
and must not fail.

These are mutually exclusive use cases...
{quote}

Thanks for summarizing it in this way, as it clearly shows how my proposal is 
strongly related and not a separate discussion:

In the first point there are 4 statements:

 1. There is a set of use-cases where files are ingested via S3 and accessed 
via HCFS. E.g. someone ingesting logs into Hive external tables via fluentd/S3
 2. For those use cases, the key names must look like valid paths
 3. and we need to ensure they are checked
 4. and normalized appropriately

I agree with 1 and 2. But *IF* we do the check (3) on the client side (which 
means that a valid file path must be sent by the s3 client to make it visible 
from Hive) and 4 is optional, then they are *not mutually exclusive* anymore, 
which is a HUGE win from a compatibility point of view.

(see my previous comments: when I used the XOR expression I talked about the 
same mutually exclusive behavior)

With the proposed approach you can use Ozone (by default) in multiple ways at 
the same (!) time:
 1. you can ingest keys (which are valid file paths), and they will be visible 
from HIVE
 2. and you can ingest random keys (invalid paths); the invalid keys won't be 
visible from HIVE but remain visible from S3

The two functionalities can work side by side, which is very important IMHO 
(see the goals section in the document).

And at the same time, a stricter behavior can be turned on at any time (see 
"Handling of the incompatible paths" in the document).

*third*

One additional comment: the proposed approach mimics the behavior of S3A. I 
think it's important to provide similar functionality, as one of our promises 
is that with Ozone it's possible to move between on-prem and the cloud in 
both directions.

> S3/Ozone Filesystem inter-op
> 
>
> Key: HDDS-4097
> URL: https://issues.apache.org/jira/browse/HDDS-4097
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: Ozone FileSystem Paths Enabled.docx, Ozone filesystem 
> path enabled.xlsx
>
>
> This Jira is to implement changes required to use Ozone buckets when data is 
> ingested via S3 and use the bucket/volume via OzoneFileSystem. Initial 
> implementation for this is done as part of HDDS-3955. There are few API's 
> which have missed the changes during the implementation of HDDS-3955. 
> Attached design document which discusses each API,  and what changes are 
> required.
> Excel sheet has information about each API, from what all interfaces the OM 
> API is used, and what changes are required for the API to support 
> inter-operability.
> Note: The proposal for delete/rename is still under discussion, not yet 
> finalized. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4193) Range used by S3 MultipartUpload copy-from-source should be inclusive

2020-09-02 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4193:
--
Description: 
S3 API provides a feature to copy a specific range from an existing key.

Based on the documentation, this range definition is inclusive:

https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part-copy.html

{quote}
-copy-source-range (string)

The range of bytes to copy from the source object. The range value must use 
the form bytes=first-last, where the first and last are the zero-based byte 
offsets to copy. For example, bytes=0-9 indicates that you want to copy the 
first 10 bytes of the source. You can copy a range only if the source object is 
greater than 5 MB.
{quote}

But as visible from our [robot test|http://example.com], in our case we use 
an exclusive range:

{code}
upload-part-copy ... --copy-source-range bytes=0-10485758
upload-part-copy ... --copy-source-range bytes=10485758-10485760
{code}

Based on this AWS documentation it will return a (10485758 + 1) + 3 bytes 
long key, which is impossible if our original source key is just 10485760.

I think the right usage to get the original key is the following:

{code}
upload-part-copy ... --copy-source-range bytes=0-10485757
upload-part-copy ... --copy-source-range bytes=10485758-10485759
{code}

(Note: this bug was found with the script in HDDS-4194, which showed that AWS 
S3 works in a different way.)

  was:
S3 API provides a feature to copy a specific range from an existing key.

Based on the documentation, this range definitions is inclusive:

https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part-copy.html

{quote}
-copy-source-range (string)

The range of bytes to copy from the source object. The range value must use 
the form bytes=first-last, where the first and last are the zero-based byte 
offsets to copy. For example, bytes=0-9 indicates that you want to copy the 
first 10 bytes of the source. You can copy a range only if the source object is 
greater than 5 MB.
{quote}

But as it's visible from our [robot test|http://example.com], in our case we 
use exclusive range:

{code}
upload-part-copy ... --copy-source-range bytes=0-10485758
upload-part-copy ... --copy-source-range bytes=10485758-10485760
{code}

Based on this AWS documentation it will return with a (10485758 + 1) + 3 bytes 
long key,  which is impossible if our original source key is just 10485760.

I think the right usage to get the original key is the following:

{code}
upload-part-copy ... --copy-source-range bytes=0-10485757
upload-part-copy ... --copy-source-range bytes=10485758-10485759
{code}



> Range used by S3 MultipartUpload copy-from-source should be inclusive
> -
>
> Key: HDDS-4193
> URL: https://issues.apache.org/jira/browse/HDDS-4193
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Blocker
>
> S3 API provides a feature to copy a specific range from an existing key.
> Based on the documentation, this range definition is inclusive:
> https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part-copy.html
> {quote}
> -copy-source-range (string)
> The range of bytes to copy from the source object. The range value must 
> use the form bytes=first-last, where the first and last are the zero-based 
> byte offsets to copy. For example, bytes=0-9 indicates that you want to copy 
> the first 10 bytes of the source. You can copy a range only if the source 
> object is greater than 5 MB.
> {quote}
> But as visible from our [robot test|http://example.com], in our case we use 
> an exclusive range:
> {code}
> upload-part-copy ... --copy-source-range bytes=0-10485758
> upload-part-copy ... --copy-source-range bytes=10485758-10485760
> {code}
> Based on this AWS documentation it will return a (10485758 + 1) + 3 bytes 
> long key, which is impossible if our original source key is just 10485760.
> I think the right usage to get the original key is the following:
> {code}
> upload-part-copy ... --copy-source-range bytes=0-10485757
> upload-part-copy ... --copy-source-range bytes=10485758-10485759
> {code}
> (Note: this bug was found with the script in HDDS-4194, which showed that 
> AWS S3 works in a different way.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4193) Range used by S3 MultipartUpload copy-from-source should be inclusive

2020-09-02 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4193:
--
Description: 
S3 API provides a feature to copy a specific range from an existing key.

Based on the documentation, this range definition is inclusive:

https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part-copy.html

{quote}
-copy-source-range (string)

The range of bytes to copy from the source object. The range value must use 
the form bytes=first-last, where the first and last are the zero-based byte 
offsets to copy. For example, bytes=0-9 indicates that you want to copy the 
first 10 bytes of the source. You can copy a range only if the source object is 
greater than 5 MB.
{quote}

But as visible from our [robot test|http://example.com], in our case we use 
an exclusive range:

{code}
upload-part-copy ... --copy-source-range bytes=0-10485758
upload-part-copy ... --copy-source-range bytes=10485758-10485760
{code}

Based on this AWS documentation it will return a (10485758 + 1) + 3 bytes 
long key, which is impossible if our original source key is just 10485760.

I think the right usage to get the original key is the following:

{code}
upload-part-copy ... --copy-source-range bytes=0-10485757
upload-part-copy ... --copy-source-range bytes=10485758-10485759
{code}


  was:
S3 API provides a feature to copy a specific range from an existing key.

Based on the documentation, this range definitions is inclusive:

https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part-copy.html

{quote}
-copy-source-range (string)

The range of bytes to copy from the source object. The range value must use 
the form bytes=first-last, where the first and last are the zero-based byte 
offsets to copy. For example, bytes=0-9 indicates that you want to copy the 
first 10 bytes of the source. You can copy a range only if the source object is 
greater than 5 MB.
{quote}

But as it's visible from our [robot test|http://example.com], in our case we 
use exclusive range:

{code}
upload-part-copy ... --copy-source-range bytes=0-10485758
upload-part-copy ... --copy-source-range bytes=10485758-10485760
{code}

Based on this AWS documentation it should return with a (10485758 + 1) + 3 
bytes long key,  which is impossible if our original source key is just 
10485760.

I think the right usage to get the original key is the following:

{code}
upload-part-copy ... --copy-source-range bytes=0-10485757
upload-part-copy ... --copy-source-range bytes=10485758-10485759
{code}



> Range used by S3 MultipartUpload copy-from-source should be inclusive
> -
>
> Key: HDDS-4193
> URL: https://issues.apache.org/jira/browse/HDDS-4193
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Blocker
>
> S3 API provides a feature to copy a specific range from an existing key.
> Based on the documentation, this range definitions is inclusive:
> https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part-copy.html
> {quote}
> -copy-source-range (string)
> The range of bytes to copy from the source object. The range value must 
> use the form bytes=first-last, where the first and last are the zero-based 
> byte offsets to copy. For example, bytes=0-9 indicates that you want to copy 
> the first 10 bytes of the source. You can copy a range only if the source 
> object is greater than 5 MB.
> {quote}
> But as it's visible from our [robot test|http://example.com], in our case we 
> use exclusive range:
> {code}
> upload-part-copy ... --copy-source-range bytes=0-10485758
> upload-part-copy ... --copy-source-range bytes=10485758-10485760
> {code}
> Based on this AWS documentation it will return with a (10485758 + 1) + 3 
> bytes long key,  which is impossible if our original source key is just 
> 10485760.
> I think the right usage to get the original key is the following:
> {code}
> upload-part-copy ... --copy-source-range bytes=0-10485757
> upload-part-copy ... --copy-source-range bytes=10485758-10485759
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4193) Range used by S3 MultipartUpload copy-from-source should be inclusive

2020-09-02 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4193:
--
Summary: Range used by S3 MultipartUpload copy-from-source should be 
inclusive  (was: Range used by S3 MultipartUpload copy-from-source should be 
incusive)

> Range used by S3 MultipartUpload copy-from-source should be inclusive
> -
>
> Key: HDDS-4193
> URL: https://issues.apache.org/jira/browse/HDDS-4193
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Blocker
>
> S3 API provides a feature to copy a specific range from an existing key.
> Based on the documentation, this range definition is inclusive:
> https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part-copy.html
> {quote}
> -copy-source-range (string)
> The range of bytes to copy from the source object. The range value must 
> use the form bytes=first-last, where the first and last are the zero-based 
> byte offsets to copy. For example, bytes=0-9 indicates that you want to copy 
> the first 10 bytes of the source. You can copy a range only if the source 
> object is greater than 5 MB.
> {quote}
> But as visible from our [robot test|http://example.com], in our case we use 
> an exclusive range:
> {code}
> upload-part-copy ... --copy-source-range bytes=0-10485758
> upload-part-copy ... --copy-source-range bytes=10485758-10485760
> {code}
> Based on this AWS documentation it should return a (10485758 + 1) + 3 bytes 
> long key, which is impossible if our original source key is just 10485760.
> I think the right usage to get the original key is the following:
> {code}
> upload-part-copy ... --copy-source-range bytes=0-10485757
> upload-part-copy ... --copy-source-range bytes=10485758-10485759
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4194) Create a script to check AWS S3 compatibility

2020-09-02 Thread Marton Elek (Jira)
Marton Elek created HDDS-4194:
-

 Summary: Create a script to check AWS S3 compatibility
 Key: HDDS-4194
 URL: https://issues.apache.org/jira/browse/HDDS-4194
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek


Ozone S3G implements the REST interface of the AWS S3 protocol. Our 
robot-test-based scripts check if it's possible to use Ozone S3 with the AWS 
client tool.

But occasionally we should check if our robot test definitions are valid: the 
robot tests should be executed using a real AWS endpoint and bucket(s), and 
all the test cases should pass.

This patch provides a simple shell script to make this cross-check easier.
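
A minimal sketch of how such a cross-check could be invoked (the endpoint, 
bucket, variable names and test path are assumptions, not the actual script 
interface):

{code}
# hypothetical sketch: run the s3 robot definitions against a real AWS endpoint
export AWS_ACCESS_KEY_ID=<real key>
export AWS_SECRET_ACCESS_KEY=<real secret>
robot --variable ENDPOINT_URL:https://s3.us-east-2.amazonaws.com \
      --variable BUCKET:ozone-aws-compat-check \
      smoketest/s3
{code}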



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4193) Range used by S3 MultipartUpload copy-from-source should be incusive

2020-09-02 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4193:
--
Issue Type: Bug  (was: Improvement)

> Range used by S3 MultipartUpload copy-from-source should be incusive
> 
>
> Key: HDDS-4193
> URL: https://issues.apache.org/jira/browse/HDDS-4193
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Blocker
>
> S3 API provides a feature to copy a specific range from an existing key.
> Based on the documentation, this range definition is inclusive:
> https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part-copy.html
> {quote}
> -copy-source-range (string)
> The range of bytes to copy from the source object. The range value must 
> use the form bytes=first-last, where the first and last are the zero-based 
> byte offsets to copy. For example, bytes=0-9 indicates that you want to copy 
> the first 10 bytes of the source. You can copy a range only if the source 
> object is greater than 5 MB.
> {quote}
> But as visible from our [robot test|http://example.com], in our case we use 
> an exclusive range:
> {code}
> upload-part-copy ... --copy-source-range bytes=0-10485758
> upload-part-copy ... --copy-source-range bytes=10485758-10485760
> {code}
> Based on this AWS documentation it should return a (10485758 + 1) + 3 bytes 
> long key, which is impossible if our original source key is just 10485760.
> I think the right usage to get the original key is the following:
> {code}
> upload-part-copy ... --copy-source-range bytes=0-10485757
> upload-part-copy ... --copy-source-range bytes=10485758-10485759
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4193) Range used by S3 MultipartUpload copy-from-source should be incusive

2020-09-02 Thread Marton Elek (Jira)
Marton Elek created HDDS-4193:
-

 Summary: Range used by S3 MultipartUpload copy-from-source should 
be incusive
 Key: HDDS-4193
 URL: https://issues.apache.org/jira/browse/HDDS-4193
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek


S3 API provides a feature to copy a specific range from an existing key.

Based on the documentation, this range definition is inclusive:

https://docs.aws.amazon.com/cli/latest/reference/s3api/upload-part-copy.html

{quote}
-copy-source-range (string)

The range of bytes to copy from the source object. The range value must use 
the form bytes=first-last, where the first and last are the zero-based byte 
offsets to copy. For example, bytes=0-9 indicates that you want to copy the 
first 10 bytes of the source. You can copy a range only if the source object is 
greater than 5 MB.
{quote}

But as visible from our [robot test|http://example.com], in our case we use 
an exclusive range:

{code}
upload-part-copy ... --copy-source-range bytes=0-10485758
upload-part-copy ... --copy-source-range bytes=10485758-10485760
{code}

Based on this AWS documentation it should return a (10485758 + 1) + 3 bytes 
long key, which is impossible if our original source key is just 10485760.

I think the right usage to get the original key is the following:

{code}
upload-part-copy ... --copy-source-range bytes=0-10485757
upload-part-copy ... --copy-source-range bytes=10485758-10485759
{code}
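
To illustrate the inclusive arithmetic, a small sketch deriving those 
first/last offsets for the 10485760-byte key above:

{code}
# sketch: derive inclusive byte ranges for a 10485760-byte source key
SIZE=10485760      # total key size
PART=10485758      # bytes per part (example size from above)
FIRST=0
N=1
while [ "$FIRST" -lt "$SIZE" ]; do
  LAST=$((FIRST + PART - 1))
  [ "$LAST" -ge "$SIZE" ] && LAST=$((SIZE - 1))
  echo "part $N: bytes=$FIRST-$LAST"
  FIRST=$((LAST + 1))
  N=$((N + 1))
done
# prints: part 1: bytes=0-10485757
#         part 2: bytes=10485758-10485759
{code}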




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4097) S3/Ozone Filesystem inter-op

2020-09-02 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17189338#comment-17189338
 ] 

Marton Elek commented on HDDS-4097:
---

Deleted the attached odt format as the conversion didn't work very well.

Markdown format can be read from here: 
https://github.com/elek/ozone-notes/blob/master/content/design/s3_hcfs.md

Can be commented on hackmd (https://hackmd.io/@elek/BJft-yaXP)

Or I can create a pull request where everybody can comment on it line by line.

> S3/Ozone Filesystem inter-op
> 
>
> Key: HDDS-4097
> URL: https://issues.apache.org/jira/browse/HDDS-4097
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: Ozone FileSystem Paths Enabled.docx, Ozone filesystem 
> path enabled.xlsx
>
>
> This Jira is to implement changes required to use Ozone buckets when data is 
> ingested via S3 and use the bucket/volume via OzoneFileSystem. Initial 
> implementation for this is done as part of HDDS-3955. There are few API's 
> which have missed the changes during the implementation of HDDS-3955. 
> Attached design document which discusses each API,  and what changes are 
> required.
> Excel sheet has information about each API, from what all interfaces the OM 
> API is used, and what changes are required for the API to support 
> inter-operability.
> Note: The proposal for delete/rename is still under discussion, not yet 
> finalized. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4097) S3/Ozone Filesystem inter-op

2020-09-02 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4097:
--
Attachment: (was: s3_hcfs.odt)

> S3/Ozone Filesystem inter-op
> 
>
> Key: HDDS-4097
> URL: https://issues.apache.org/jira/browse/HDDS-4097
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: Ozone FileSystem Paths Enabled.docx, Ozone filesystem 
> path enabled.xlsx
>
>
> This Jira is to implement changes required to use Ozone buckets when data is 
> ingested via S3 and use the bucket/volume via OzoneFileSystem. Initial 
> implementation for this is done as part of HDDS-3955. There are few API's 
> which have missed the changes during the implementation of HDDS-3955. 
> Attached design document which discusses each API,  and what changes are 
> required.
> Excel sheet has information about each API, from what all interfaces the OM 
> API is used, and what changes are required for the API to support 
> inter-operability.
> Note: The proposal for delete/rename is still under discussion, not yet 
> finalized. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4165) GitHub Actions cache does not work outside of workspace

2020-09-02 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4165:
--
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> GitHub Actions cache does not work outside of workspace
> ---
>
> Key: HDDS-4165
> URL: https://issues.apache.org/jira/browse/HDDS-4165
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Ozone source is checked out for _acceptance_ and _kubernetes_ checks to 
> {{/mnt}}, outside of {{GITHUB_WORKSPACE}}, and only after _Cache_ steps.  
> Therefore no files are found for which hash would be computed to be included 
> in cache keys.
> {code:title=https://github.com/apache/hadoop-ozone/blob/44acf78aec6c3a4e1c5fea3a43971144c6da9a4c/.github/workflows/post-commit.yml#L167-L171}
>   - name: Cache for maven dependencies
> uses: actions/cache@v2
> with:
>   path: ~/.m2/repository
>   key: maven-repo-${{ hashFiles('**/pom.xml') }}
> {code}
> Cache key is always {{maven-repo-}}:
> {code:title=https://github.com/apache/hadoop-ozone/runs/1042358389#step:2:10}
> Cache restored from key: maven-repo-
> {code}
> The same old cache is used for all builds, even if dependencies are changed 
> in {{pom.xml}}, gradually resulting in more and more downloads during builds:
> {code:title=https://github.com/apache/hadoop-ozone/runs/1036271227#step:9:680}
> [INFO] Downloaded from central: 
> https://repo.maven.apache.org/maven2/info/picocli/picocli/4.4.0/picocli-4.4.0.jar
>  (389 kB at 2.4 MB/s)
> [INFO] Downloaded from central: 
> https://repo.maven.apache.org/maven2/org/apache/ratis/ratis-server/1.0.0/ratis-server-1.0.0.jar
>  (380 kB at 2.4 MB/s)
> [INFO] Downloaded from central: 
> https://repo.maven.apache.org/maven2/org/apache/logging/log4j/log4j-api/2.13.3/log4j-api-2.13.3.jar
>  (292 kB at 1.9 MB/s)
> [INFO] Downloaded from central: 
> https://repo.maven.apache.org/maven2/org/apache/ratis/ratis-proto/1.0.0/ratis-proto-1.0.0.jar
>  (1.2 MB at 6.6 MB/s)
> [INFO] Downloaded from central: 
> https://repo.maven.apache.org/maven2/org/apache/logging/log4j/log4j-core/2.13.3/log4j-core-2.13.3.jar
>  (1.7 MB at 8.5 MB/s)
> {code}
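
A sketch of the usual remedy (an assumption about the fix, not taken from the 
actual patch): check out the sources before the cache step, so that hashFiles 
can find the pom.xml files:

{code}
  # hypothetical sketch: checkout must precede the cache step
  - name: Checkout sources
    uses: actions/checkout@v2
  - name: Cache for maven dependencies
    uses: actions/cache@v2
    with:
      path: ~/.m2/repository
      key: maven-repo-${{ hashFiles('**/pom.xml') }}
      restore-keys: maven-repo-
{code}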



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4097) S3/Ozone Filesystem inter-op

2020-09-02 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17189181#comment-17189181
 ] 

Marton Elek commented on HDDS-4097:
---

[~arp] suggested yesterday that I formalize my proposal with more information:

I uploaded a file with the details (see the latest attachment).

(For online reading, you can use this link: 
https://github.com/elek/ozone-notes/blob/master/content/design/s3_hcfs.md)

Please let me know what you think.

> S3/Ozone Filesystem inter-op
> 
>
> Key: HDDS-4097
> URL: https://issues.apache.org/jira/browse/HDDS-4097
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: Ozone FileSystem Paths Enabled.docx, Ozone filesystem 
> path enabled.xlsx, s3_hcfs.odt
>
>
> This Jira is to implement changes required to use Ozone buckets when data is 
> ingested via S3 and use the bucket/volume via OzoneFileSystem. Initial 
> implementation for this is done as part of HDDS-3955. There are few API's 
> which have missed the changes during the implementation of HDDS-3955. 
> Attached design document which discusses each API,  and what changes are 
> required.
> Excel sheet has information about each API, from what all interfaces the OM 
> API is used, and what changes are required for the API to support 
> inter-operability.
> Note: The proposal for delete/rename is still under discussion, not yet 
> finalized. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4097) S3/Ozone Filesystem inter-op

2020-09-02 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4097:
--
Attachment: s3_hcfs.odt

> S3/Ozone Filesystem inter-op
> 
>
> Key: HDDS-4097
> URL: https://issues.apache.org/jira/browse/HDDS-4097
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: Ozone FileSystem Paths Enabled.docx, Ozone filesystem 
> path enabled.xlsx, s3_hcfs.odt
>
>
> This Jira is to implement the changes required to use Ozone buckets when data is 
> ingested via S3 and the bucket/volume is used via OzoneFileSystem. The initial 
> implementation for this was done as part of HDDS-3955. A few APIs missed these 
> changes during the implementation of HDDS-3955. The attached design document 
> discusses each API and what changes are required.
> The Excel sheet has information about each API, the interfaces from which the OM 
> API is used, and what changes are required for the API to support 
> inter-operability.
> Note: The proposal for delete/rename is still under discussion, not yet 
> finalized. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4181) Add acceptance tests for upgrade, finalization and downgrade

2020-09-02 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17189082#comment-17189082
 ] 

Marton Elek commented on HDDS-4181:
---

Thanks, I was sure you were aware of it, but adding some cross-references was a 
low-hanging fruit to make navigation easier for everybody ;-)

> Add acceptance tests for upgrade, finalization and downgrade
> 
>
> Key: HDDS-4181
> URL: https://issues.apache.org/jira/browse/HDDS-4181
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Priority: Major
> Fix For: 1.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4097) S3/Ozone Filesystem inter-op

2020-09-01 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17188544#comment-17188544
 ] 

Marton Elek commented on HDDS-4097:
---

> Unfortunately there is no way you can guarantee that. A filesystem client 
> will need all the intermediate directories to exist for navigating the tree.

Is there any problem with always creating the intermediate directories? I see some 
possible minor performance problems, but since RocksDB is already the fastest 
part, this shouldn't be a blocker. Especially as we can support both S3 and HCFS 
with this approach.
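
To make the suggestion concrete, a minimal sketch (plain Java, not Ozone's actual 
OM code; the helper name is made up) of how the implied parent directories of a 
key could be enumerated on the write path:

{code:java}
import java.util.ArrayList;
import java.util.List;

public class ImpliedParents {

  /** For "a/b/c/file1" returns ["a/", "a/b/", "a/b/c/"]. */
  static List<String> impliedParents(String keyName) {
    List<String> parents = new ArrayList<>();
    int slash = keyName.indexOf('/');
    while (slash != -1) {
      parents.add(keyName.substring(0, slash + 1));
      slash = keyName.indexOf('/', slash + 1);
    }
    return parents;
  }

  public static void main(String[] args) {
    // On each key commit the OM could upsert these entries as cheap
    // RocksDB point writes before persisting the key itself.
    impliedParents("a/b/c/file1").forEach(System.out::println);
  }
}
{code}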

> S3/Ozone Filesystem inter-op
> 
>
> Key: HDDS-4097
> URL: https://issues.apache.org/jira/browse/HDDS-4097
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: Ozone FileSystem Paths Enabled.docx, Ozone filesystem 
> path enabled.xlsx
>
>
> This Jira is to implement the changes required to use Ozone buckets when data is 
> ingested via S3 and the bucket/volume is used via OzoneFileSystem. The initial 
> implementation for this was done as part of HDDS-3955. A few APIs missed these 
> changes during the implementation of HDDS-3955. The attached design document 
> discusses each API and what changes are required.
> The Excel sheet has information about each API, the interfaces from which the OM 
> API is used, and what changes are required for the API to support 
> inter-operability.
> Note: The proposal for delete/rename is still under discussion, not yet 
> finalized. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4097) S3/Ozone Filesystem inter-op

2020-09-01 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17188536#comment-17188536
 ] 

Marton Elek commented on HDDS-4097:
---

[~arp] It was discussed in more detail during the community sync (the 
recording is shared on the ozone-dev mailing list).

In short, my proposal is the following:

1. With simple, acceptable key names (/a/b/c, /a/b/c/d) *both s3 and HCFS 
should work out of the box, without any additional settings*. (Based on my 
understanding this is not true today, as we need to turn on 
`ozone.om.enable.filesystem.paths` to get intermediate directories.)

2. There are some conflicts between the AWS S3 and HCFS interfaces. We need a new 
option to express how to resolve the conflicts. Let's say we have an 
ozone.key.compatibility setting (see the sketch after this list).

 a) ozone.key.compatibility=aws means that we enable (almost) everything which 
is allowed by AWS S3, but we cannot show all the keys in the Hadoop 
interface. For example, if a directory and a key are created with the same prefix 
(possible with AWS S3), HCFS will show only the directory, not the key. 

 b) ozone.key.compatibility=hadoop is the opposite: we validate the path 
and throw an exception on the s3 interface if a dir and a key are created with 
the same name.
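
As a sketch only (ozone.key.compatibility is the hypothetical name proposed 
above, not an existing Ozone property), the choice could be wired up like any 
other configuration value:

{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;

public class CompatibilityModeSketch {
  public static void main(String[] args) {
    OzoneConfiguration conf = new OzoneConfiguration();
    // Hypothetical property from the proposal above:
    //   "aws"    -> accept (almost) everything AWS S3 accepts;
    //               HCFS may hide conflicting keys
    //   "hadoop" -> validate paths; the s3 endpoint rejects
    //               dir/key name conflicts
    conf.set("ozone.key.compatibility", "hadoop");
    System.out.println(conf.get("ozone.key.compatibility", "aws"));
  }
}
{code}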

> S3/Ozone Filesystem inter-op
> 
>
> Key: HDDS-4097
> URL: https://issues.apache.org/jira/browse/HDDS-4097
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: Ozone FileSystem Paths Enabled.docx, Ozone filesystem 
> path enabled.xlsx
>
>
> This Jira is to implement the changes required to use Ozone buckets when data is 
> ingested via S3 and the bucket/volume is used via OzoneFileSystem. The initial 
> implementation for this was done as part of HDDS-3955. A few APIs missed these 
> changes during the implementation of HDDS-3955. The attached design document 
> discusses each API and what changes are required.
> The Excel sheet has information about each API, the interfaces from which the OM 
> API is used, and what changes are required for the API to support 
> inter-operability.
> Note: The proposal for delete/rename is still under discussion, not yet 
> finalized. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4167) Acceptance test logs missing if SCM fails to exit safe mode

2020-09-01 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4167:
--
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Acceptance test logs missing if SCM fails to exit safe mode
> ---
>
> Key: HDDS-4167
> URL: https://issues.apache.org/jira/browse/HDDS-4167
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> The acceptance test sometimes fails due to SCM not coming out of safe mode. If 
> this happens, the cluster is stopped without running Robot tests. The {{rebot}} 
> command to process test results fails due to missing input, and the acceptance 
> check is abruptly stopped without fetching docker logs or running tests in 
> other environments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-3102) ozone getconf command should use the GenericCli parent class

2020-09-01 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek reassigned HDDS-3102:
-

Assignee: (was: Marton Elek)

> ozone getconf command should use the GenericCli parent class
> 
>
> Key: HDDS-3102
> URL: https://issues.apache.org/jira/browse/HDDS-3102
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Reporter: Marton Elek
>Priority: Major
>  Labels: newbie
>
> org.apache.hadoop.ozone.freon.OzoneGetConf implements a tool to print out 
> current configuration values.
> For all the other CLI tools we have already started to use picocli and the 
> GenericCli parent class.
> To provide a better user experience we should migrate the tool to use 
> GenericCli (+ move it to the tools project + remove freon from the package 
> name)
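
A rough sketch of what the migrated entry point could look like (class and 
command names are illustrative; GenericCli is the existing parent class 
mentioned above):

{code:java}
import org.apache.hadoop.hdds.cli.GenericCli;
import picocli.CommandLine.Command;

/** Hypothetical replacement for the freon-based getconf tool. */
@Command(name = "ozone getconf",
    description = "Prints the current configuration values",
    mixinStandardHelpOptions = true)
public class OzoneGetConfCli extends GenericCli {

  public static void main(String[] args) {
    new OzoneGetConfCli().run(args);
  }
}
{code}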



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4119) Improve performance of the BufferPool management of Ozone client

2020-09-01 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4119:
--
Target Version/s: 1.0.1  (was: 1.1.0)

> Improve performance of the BufferPool management of Ozone client
> 
>
> Key: HDDS-4119
> URL: https://issues.apache.org/jira/browse/HDDS-4119
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Blocker
>  Labels: pull-request-available
>
> Teragen is reported to be slow with a low number of mappers compared to HDFS.
> In my test (one pipeline, 3 YARN nodes) a 10 GB teragen with HDFS took ~3 
> minutes, but with Ozone it took 6 minutes. It could be worked around by using 
> more mappers, but when I investigated the execution I found a few problems 
> regarding the BufferPool management.
>  1. IncrementalChunkBuffer is slow, and it might not be required as BufferPool 
> itself is incremental
>  2. For each write operation bufferPool.allocateBufferIfNeeded is called, 
> which can be a slow operation (positions have to be calculated)
>  3. There is no explicit support for write(byte) operations
> In the flame graph it's clearly visible that with a low number of mappers the 
> client is busy with buffer operations. After the patch, the RPC call and the 
> checksum calculation account for the majority of the time. 
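
For illustration only (this is not the BufferPool API, just the shape of a fix 
for points 2 and 3): cache the current buffer so the per-write hot path avoids 
recomputing positions on every call:

{code:java}
import java.nio.ByteBuffer;

/** Sketch: resolve the current buffer once and reuse it until it fills
 *  up, instead of calling an allocate-if-needed check on every write. */
class CachedBufferWriter {
  private final int bufferSize;
  private ByteBuffer current; // cached; swapped only when full

  CachedBufferWriter(int bufferSize) {
    this.bufferSize = bufferSize;
  }

  void write(byte b) {
    if (current == null || !current.hasRemaining()) {
      current = ByteBuffer.allocate(bufferSize); // slow path, taken rarely
    }
    current.put(b); // hot path: a single bounds check + store
  }
}
{code}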



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4185) Remove IncrementalByteBuffer from Ozone client

2020-09-01 Thread Marton Elek (Jira)
Marton Elek created HDDS-4185:
-

 Summary: Remove IncrementalByteBuffer from Ozone client
 Key: HDDS-4185
 URL: https://issues.apache.org/jira/browse/HDDS-4185
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek


During the teragen test it was identified that the IncrementalByteBuffer is one 
of the biggest bottlenecks. 

In the PR for HDDS-4119 a long conversation started about whether it can be 
removed or whether we need another solution to optimize it.

This jira is opened to continue the discussion and either remove or optimize 
the IncrementalByteBuffer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4181) Add acceptance tests for upgrade, finalization and downgrade

2020-09-01 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17188244#comment-17188244
 ] 

Marton Elek commented on HDDS-4181:
---

Related: HDDS-3855 which shows an initial attempt.

> Add acceptance tests for upgrade, finalization and downgrade
> 
>
> Key: HDDS-4181
> URL: https://issues.apache.org/jira/browse/HDDS-4181
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Priority: Major
> Fix For: 1.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-4181) Add acceptance tests for upgrade, finalization and downgrade

2020-09-01 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17188244#comment-17188244
 ] 

Marton Elek edited comment on HDDS-4181 at 9/1/20, 8:10 AM:


Related: HDDS-3855 which shows an initial version of upgrade acceptance test.


was (Author: elek):
Related: HDDS-3855 which shows an initial attempt.

> Add acceptance tests for upgrade, finalization and downgrade
> 
>
> Key: HDDS-4181
> URL: https://issues.apache.org/jira/browse/HDDS-4181
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Priority: Major
> Fix For: 1.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-3867) Extend the chunkinfo tool to display information from all nodes in the pipeline.

2020-08-31 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-3867.
---
Fix Version/s: 0.7.0
   Resolution: Fixed

> Extend the chunkinfo tool to display information from all nodes in the 
> pipeline.
> 
>
> Key: HDDS-3867
> URL: https://issues.apache.org/jira/browse/HDDS-3867
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Tools
>Reporter: Sadanand Shenoy
>Assignee: Sadanand Shenoy
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>
> Currently the chunk-info tool 
> ([HDDS-3134|https://issues.apache.org/jira/browse/HDDS-3134]) inside ozone 
> debug displays information (the chunk/block files that constitute the key) only 
> from the first node of the pipeline. The plan here is to extend it to all the 
> replicas.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4097) S3/Ozone Filesystem inter-op

2020-08-28 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186412#comment-17186412
 ] 

Marton Elek commented on HDDS-4097:
---

Well, that's a valid question, but I think it might be possible to support it 
on some level. 

Let's say we create implicit dir entries with a flag (implicit=?).

Create Key /a/b/c/file1

Created real keys:
/a (implicit=true)
/a/b (implicit=true)
/a/b/c (implicit=true)
/a/b/c/file1 (implicit=false)

In this case you can still create a dir (!) entry a/b/c explicitly; we just need 
to flip /a/b/c to (implicit=false). We can also hide all the implicit entries by 
default when keys are listed from the s3 endpoint (see the sketch below).

Storage can be the same, but o3fs/ofs and s3g can display different 
representations.
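
A minimal sketch of the display split (the KeyEntry type and the implicit flag 
are assumptions for illustration, not OM's real schema): both endpoints read the 
same table, and the s3 listing just filters:

{code:java}
import java.util.List;
import java.util.stream.Collectors;

class KeyEntry {
  final String name;
  final boolean implicit; // true if created only as an intermediate dir

  KeyEntry(String name, boolean implicit) {
    this.name = name;
    this.implicit = implicit;
  }
}

class ListingViews {
  /** s3 view: hide entries that exist only as implicit directories. */
  static List<KeyEntry> listForS3(List<KeyEntry> all) {
    return all.stream()
        .filter(e -> !e.implicit)
        .collect(Collectors.toList());
  }

  /** o3fs/ofs view: show everything, directories included. */
  static List<KeyEntry> listForFs(List<KeyEntry> all) {
    return all;
  }
}
{code}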

The harder question is when /a/b/c/file1 and /a/b/c files are created, both 
with content. As far as I see there are multiple options:

 1. Reject it from the s3 interface (AWS incompatible)
 2. Persist both keys but show only one from the hcfs interface (e.g. show only 
the file without subkeys / dirs, or show only the dirs)
 3. We can also show a technical file on the hcfs interface (like .CONFLICT) in 
addition to the dir, which can solve this problem

And we have a similar question about the normalization.

I am not sure what the best decision is here (1, 2, 3 or other). But I have 
some concerns that we're giving up compatibility without collecting 
all the incompatibilities which will be introduced, or without checking what 
the minimum level of required compatibility support is.

They might have been collected and considered, but I missed it.

In general, I think trying to be S3 compatible is a very important goal (as we 
are an object store), but I also understand that providing file system semantics 
is important. What I am interested in is supporting both, with just a 
minimal level of difference from S3.  

> S3/Ozone Filesystem inter-op
> 
>
> Key: HDDS-4097
> URL: https://issues.apache.org/jira/browse/HDDS-4097
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: Ozone FileSystem Paths Enabled.docx, Ozone filesystem 
> path enabled.xlsx
>
>
> This Jira is to implement the changes required to use Ozone buckets when data is 
> ingested via S3 and the bucket/volume is used via OzoneFileSystem. The initial 
> implementation for this was done as part of HDDS-3955. A few APIs missed these 
> changes during the implementation of HDDS-3955. The attached design document 
> discusses each API and what changes are required.
> The Excel sheet has information about each API, the interfaces from which the OM 
> API is used, and what changes are required for the API to support 
> inter-operability.
> Note: The proposal for delete/rename is still under discussion, not yet 
> finalized. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4097) S3/Ozone Filesystem inter-op

2020-08-27 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17186060#comment-17186060
 ] 

Marton Elek commented on HDDS-4097:
---

When do you suggest turning on a compatibility flag? What are the reasons for 
turning it on or off?

I have a partial understanding, but as I understand it, it's better to turn it on 
if I want to use ofs/o3fs for serious work. Which would create an XOR 
relationship: you can either have good AWS compatibility or good file 
system semantics. 

At least this is my understanding based on the description of HDDS-3955. If I 
don't enable the flag, a file which is written via fluentd + s3 couldn't be 
read properly from o3fs (at least some directories are missing). Therefore, I have 
to enable this flag if I want to use AWS + ofs. But I can lose the AWS 
compatibility with this change.

I am wondering if it's possible to get both: AWS compatibility and full file 
system semantics. For example, s3g could filter out the intermediate directory key 
entries if they are not explicitly created. Or we could move some of the 
normalization to the ofs/o3fs side.

I am not sure what the right approach is, but I think giving up AWS 
compatibility is a big decision which breaks one of our promises.

What are the possible alternatives which can keep AWS compatibility?

> S3/Ozone Filesystem inter-op
> 
>
> Key: HDDS-4097
> URL: https://issues.apache.org/jira/browse/HDDS-4097
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: Ozone FileSystem Paths Enabled.docx, Ozone filesystem 
> path enabled.xlsx
>
>
> This Jira is to implement the changes required to use Ozone buckets when data is 
> ingested via S3 and the bucket/volume is used via OzoneFileSystem. The initial 
> implementation for this was done as part of HDDS-3955. A few APIs missed these 
> changes during the implementation of HDDS-3955. The attached design document 
> discusses each API and what changes are required.
> The Excel sheet has information about each API, the interfaces from which the OM 
> API is used, and what changes are required for the API to support 
> inter-operability.
> Note: The proposal for delete/rename is still under discussion, not yet 
> finalized. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2411) Create DataChunkValidator Freon test

2020-08-27 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek reassigned HDDS-2411:
-

Assignee: François Risch

> Create DataChunkValidator Freon test
> 
>
> Key: HDDS-2411
> URL: https://issues.apache.org/jira/browse/HDDS-2411
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: freon
>Reporter: Marton Elek
>Assignee: François Risch
>Priority: Major
>  Labels: newbie, pull-request-available
>
> HDDS-2327 introduced a new load test which generates a lot of WriteChunk 
> requests.
> As with the other freon tests (for example 
> HadoopFsGenerator/HadoopFsValidator) we need another load test for the 
> validation/read path.
> It should be almost the same as DatanodeChunkGenerator, but it should read the 
> first chunk and compare all the others to it (very similar to HadoopFsValidator 
> or OzoneClientKeyValidator)
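
A sketch of the validation idea (this is not the actual Freon API; it just shows 
the read-one-reference-chunk-and-compare pattern described above):

{code:java}
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

class ChunkValidatorSketch {

  /** Reads the first chunk as the reference and verifies that every
   *  other chunk has the same content digest. */
  static boolean allChunksMatch(byte[][] chunks)
      throws NoSuchAlgorithmException {
    MessageDigest md = MessageDigest.getInstance("MD5");
    byte[] reference = md.digest(chunks[0]); // digest() also resets md
    for (int i = 1; i < chunks.length; i++) {
      if (!Arrays.equals(reference, md.digest(chunks[i]))) {
        return false; // mismatching chunk found
      }
    }
    return true;
  }
}
{code}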



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4097) S3/Ozone Filesystem inter-op

2020-08-27 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17185733#comment-17185733
 ] 

Marton Elek commented on HDDS-4097:
---

> Marton Elek this comment is unclear. What do you mean by keep 100% AWS 
> compatibility?

Ozone s3g should have the same behavior as an AWS S3 endpoint. Same error codes, 
same output, etc. 

The S3 endpoint can be used from many external tools (Python, Go, TensorFlow, 
goofys...). To avoid any unexpected problems, we should mimic S3 as closely as 
possible. (IMHO)
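
For example, a stock AWS SDK client should work against s3g without any 
Ozone-specific code; a minimal sketch (endpoint, region and credentials are 
placeholders; 9878 is s3g's default port):

{code:java}
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3gSmokeTest {
  public static void main(String[] args) {
    // Point the unmodified AWS SDK (v1) at the Ozone S3 gateway.
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(
            new EndpointConfiguration("http://localhost:9878", "us-east-1"))
        .enablePathStyleAccess()
        .build();
    s3.listBuckets().forEach(b -> System.out.println(b.getName()));
  }
}
{code}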

> S3/Ozone Filesystem inter-op
> 
>
> Key: HDDS-4097
> URL: https://issues.apache.org/jira/browse/HDDS-4097
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: Ozone FileSystem Paths Enabled.docx, Ozone filesystem 
> path enabled.xlsx
>
>
> This Jira is to implement the changes required to use Ozone buckets when data is 
> ingested via S3 and the bucket/volume is used via OzoneFileSystem. The initial 
> implementation for this was done as part of HDDS-3955. A few APIs missed these 
> changes during the implementation of HDDS-3955. The attached design document 
> discusses each API and what changes are required.
> The Excel sheet has information about each API, the interfaces from which the OM 
> API is used, and what changes are required for the API to support 
> inter-operability.
> Note: The proposal for delete/rename is still under discussion, not yet 
> finalized. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4140) Auto-close /pending pull requests after 21 days of inactivity

2020-08-27 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4140.
---
Fix Version/s: 0.7.0
   Resolution: Fixed

> Auto-close /pending pull requests after 21 days of inactivity
> -
>
> Key: HDDS-4140
> URL: https://issues.apache.org/jira/browse/HDDS-4140
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>
> Earlier we introduced a way to mark inactive pull requests with the "pending" 
> label (with the help of the /pending comment).
> This pull request introduces a new scheduled build which closes the "pending" 
> pull requests after 21 days of inactivity.
> IMPORTANT: Only the pull requests which are pending on the author will be 
> closed.
> We should NEVER close a pull request which is waiting for the attention of 
> a committer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4152) Archive container logs for kubernetes check

2020-08-27 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4152:
--
Fix Version/s: 0.7.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Archive container logs for kubernetes check
> ---
>
> Key: HDDS-4152
> URL: https://issues.apache.org/jira/browse/HDDS-4152
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>
> The _kubernetes_ check archives only Robot results. It should also include logs 
> from all pods, similar to the compose-based acceptance tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4056) Convert OzoneAdmin to pluggable model

2020-08-27 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4056:
--
Fix Version/s: 0.7.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Convert OzoneAdmin to pluggable model
> -
>
> Key: HDDS-4056
> URL: https://issues.apache.org/jira/browse/HDDS-4056
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>
> Ozone Shell's {{OzoneAdmin}} implements the {{WithScmClient}} interface to be 
> able to provide an SCM client to sub-commands. We can convert it to a 
> {{Mixin}}, which would allow converting {{OzoneAdmin}} to the pluggable model 
> introduced by HDDS-4046.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-3972) Add option to limit number of items while displaying through ldb tool.

2020-08-27 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-3972.
---
Fix Version/s: 0.7.0
   Resolution: Fixed

> Add option to limit number of items while displaying through ldb tool.
> --
>
> Key: HDDS-3972
> URL: https://issues.apache.org/jira/browse/HDDS-3972
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Tools
>Reporter: Sadanand Shenoy
>Assignee: Sadanand Shenoy
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>
> This Jira aims to add an option to the ldb tool to limit the number of 
> displayed items.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4153) Increase default timeout in kubernetes tests

2020-08-26 Thread Marton Elek (Jira)
Marton Elek created HDDS-4153:
-

 Summary: Increase default timeout in kubernetes tests
 Key: HDDS-4153
 URL: https://issues.apache.org/jira/browse/HDDS-4153
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek


Kubernetes tests sometimes time out (e.g. here: 
https://github.com/elek/ozone-build-results/tree/master/2020/08/26/2562/kubernetes)

Based on the log, SCM couldn't move out of safe mode. It's either a real 
issue or the GitHub environment is sometimes slow.

To make it clear what the problem is, I propose increasing the default timeout 
from 90 sec to 300 sec (5 min).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4150) recon.api.TestEndpoints is flaky

2020-08-26 Thread Marton Elek (Jira)
Marton Elek created HDDS-4150:
-

 Summary: recon.api.TestEndpoints is flaky
 Key: HDDS-4150
 URL: https://issues.apache.org/jira/browse/HDDS-4150
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Marton Elek



Failed on the PR:
https://github.com/apache/hadoop-ozone/pull/1349

And on the master:

https://github.com/elek/ozone-build-results/blob/master/2020/08/25/2533/unit/hadoop-ozone/recon/org.apache.hadoop.ozone.recon.api.TestEndpoints.txt

and here:

https://github.com/elek/ozone-build-results/blob/master/2020/08/22/2499/unit/hadoop-ozone/recon/org.apache.hadoop.ozone.recon.api.TestEndpoints.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4150) recon.api.TestEndpoints is flaky

2020-08-26 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17185128#comment-17185128
 ] 

Marton Elek commented on HDDS-4150:
---

Might be related to HDDS-4009.

> recon.api.TestEndpoints is flaky
> 
>
> Key: HDDS-4150
> URL: https://issues.apache.org/jira/browse/HDDS-4150
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Marton Elek
>Priority: Blocker
>
> Failed on the PR:
> https://github.com/apache/hadoop-ozone/pull/1349
> And on the master:
> https://github.com/elek/ozone-build-results/blob/master/2020/08/25/2533/unit/hadoop-ozone/recon/org.apache.hadoop.ozone.recon.api.TestEndpoints.txt
> and here:
> https://github.com/elek/ozone-build-results/blob/master/2020/08/22/2499/unit/hadoop-ozone/recon/org.apache.hadoop.ozone.recon.api.TestEndpoints.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4097) S3/Ozone Filesystem inter-op

2020-08-25 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17184936#comment-17184936
 ] 

Marton Elek commented on HDDS-4097:
---

> S3 does allow creating files and directories with the same name, and also it 
> does not follow/check any filesystem semantics. So, if we want object-store 
> semantics, this flag provides a fallback. So, I think we still need this, as 
> we cannot keep 100% AWS S3 compatibility.

Can you please add more information about this? In current Ozone we guarantee 
100% AWS compatibility and HCFS compatibility at the same time. This is one of 
the biggest selling points (in addition to the scalability). If I understood 
correctly, with this approach we either provide 100% AWS compatibility *OR* 
proper file system semantics.

I think before the implementation it should be explained in more detail 
 
 1. why it is impossible to provide AWS compatibility (what is the exact case, 
what are the other options, etc.)
 2. what is the use case for turning on this setting and why it is required

Or (repeating my previous comment):

bq. Why do we need that specific setting at all? If we can provide 100% AWS s3 
compatibility with the new approach, why is it required to be optional? Do you 
see any disadvantage of the new approach?

*I would prefer to keep 100% AWS compatibility even with the new approach, 
unless we have very strong arguments why it is impossible*

bq. Prefix table work changes the format in which it stores the key, but to 
break the path into components we still need to normalize the path when 

Thanks for explaining it. I got it.

> S3/Ozone Filesystem inter-op
> 
>
> Key: HDDS-4097
> URL: https://issues.apache.org/jira/browse/HDDS-4097
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: Ozone FileSystem Paths Enabled.docx, Ozone filesystem 
> path enabled.xlsx
>
>
> This Jira is to implement the changes required to use Ozone buckets when data is 
> ingested via S3 and the bucket/volume is used via OzoneFileSystem. The initial 
> implementation for this was done as part of HDDS-3955. A few APIs missed these 
> changes during the implementation of HDDS-3955. The attached design document 
> discusses each API and what changes are required.
> The Excel sheet has information about each API, the interfaces from which the OM 
> API is used, and what changes are required for the API to support 
> inter-operability.
> Note: The proposal for delete/rename is still under discussion, not yet 
> finalized. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4111) Keep the CSI.zh.md consistent with CSI.md

2020-08-25 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4111.
---
Fix Version/s: 0.7.0
   Resolution: Fixed

> Keep the CSI.zh.md consistent with CSI.md 
> --
>
> Key: HDDS-4111
> URL: https://issues.apache.org/jira/browse/HDDS-4111
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-3654) Let backgroundCreator create pipeline for the support replication factors alternately

2020-08-25 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-3654.
---
Fix Version/s: 0.7.0
   Resolution: Fixed

> Let backgroundCreator create pipeline for the support replication factors 
> alternately
> -
>
> Key: HDDS-3654
> URL: https://issues.apache.org/jira/browse/HDDS-3654
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4112) Improve SCM webui page performance

2020-08-25 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4112.
---
Fix Version/s: 0.7.0
   Resolution: Fixed

> Improve SCM webui page performance
> --
>
> Key: HDDS-4112
> URL: https://issues.apache.org/jira/browse/HDDS-4112
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>
> The current SCM page now sends two JMX requests and gets the same result.
> One is jmx?qry=Hadoop:service=*,name=*,component=ServerRuntime
> The other is 
> jmx?qry=Hadoop:service=StorageContainerManager,name=StorageContainerManagerInfo,component=ServerRuntime
> Now, I propose to remove the second one, using ctrl.overview to reference the 
> result.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4145) Bump version to 1.1.0-SNAPSHOT on master

2020-08-25 Thread Marton Elek (Jira)
Marton Elek created HDDS-4145:
-

 Summary: Bump version to 1.1.0-SNAPSHOT on master
 Key: HDDS-4145
 URL: https://issues.apache.org/jira/browse/HDDS-4145
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek


s/0.6.0-SNAPSHOT/1.1.0-SNAPSHOT/g



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Deleted] (HDDS-4134) KAMALJIT CHAKRÀBORTY / AMANDACHAKRABORTYKAMALJIT 1h982839-h264-c1759kamaljitc16197266amandachakrabo...@mit.edu

2020-08-25 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek deleted HDDS-4134:
--





--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4074) [OFS] Implement AbstractFileSystem for RootedOzoneFileSystem

2020-08-25 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4074.
---
Resolution: Fixed

> [OFS] Implement AbstractFileSystem for RootedOzoneFileSystem
> 
>
> Key: HDDS-4074
> URL: https://issues.apache.org/jira/browse/HDDS-4074
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Filesystem
>Reporter: Attila Doroszlai
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>
> Extracted from HDDS-3805: introduce an implementation of 
> {{AbstractFileSystem}}, similar to {{OzFs}}, for {{RootedOzoneFileSystem}}.
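
For reference, the usual pattern for such a binding is a thin 
{{DelegateToFileSystem}} subclass; a sketch (the class name and constructor 
details are illustrative, modeled on how {{OzFs}} wraps the FileSystem 
implementation):

{code:java}
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.DelegateToFileSystem;
import org.apache.hadoop.fs.ozone.RootedOzoneFileSystem;

/** Sketch: AbstractFileSystem binding for the rooted (ofs) scheme. */
public class RootedOzFsSketch extends DelegateToFileSystem {
  public RootedOzFsSketch(URI theUri, Configuration conf)
      throws IOException, URISyntaxException {
    super(theUri, new RootedOzoneFileSystem(), conf, "ofs", false);
  }
}
{code}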



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4074) [OFS] Implement AbstractFileSystem for RootedOzoneFileSystem

2020-08-25 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-4074:
--
Fix Version/s: 0.7.0

> [OFS] Implement AbstractFileSystem for RootedOzoneFileSystem
> 
>
> Key: HDDS-4074
> URL: https://issues.apache.org/jira/browse/HDDS-4074
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Filesystem
>Reporter: Attila Doroszlai
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>
> Extracted from HDDS-3805: introduce an implementation of 
> {{AbstractFileSystem}}, similar to {{OzFs}}, for {{RootedOzoneFileSystem}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4144) Update version info in hadoop client dependency readme

2020-08-25 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4144.
---
Fix Version/s: 0.6.0
   Resolution: Fixed

> Update version info in hadoop client dependency readme
> --
>
> Key: HDDS-4144
> URL: https://issues.apache.org/jira/browse/HDDS-4144
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
> Fix For: 0.6.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4140) Auto-close /pending pull requests after 21 days of inactivity

2020-08-24 Thread Marton Elek (Jira)
Marton Elek created HDDS-4140:
-

 Summary: Auto-close /pending pull requests after 21 days of 
inactivity
 Key: HDDS-4140
 URL: https://issues.apache.org/jira/browse/HDDS-4140
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: build
Reporter: Marton Elek
Assignee: Marton Elek


Earlier we introduced a way to mark inactive pull requests with the "pending" 
label (with the help of the /pending comment).

This pull request introduces a new scheduled build which closes the "pending" 
pull requests after 21 days of inactivity.

IMPORTANT: Only the pull requests which are pending on the author will be 
closed.

We should NEVER close a pull request which is waiting for the attention of a 
committer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4097) S3/Ozone Filesystem inter-op

2020-08-24 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17183281#comment-17183281
 ] 

Marton Elek commented on HDDS-4097:
---

Thanks for explaining it, [~bharat]. I am closer now, but I still have some questions:

bq. >> 1. What does it mean from compatibility point of view? Will it work 
exactly the same way as Amazon S3? Does it mean that we start to support a 
different semantic when ozone.om.enable.filesystem.paths is turned on?

bq. > Yes, when ozone.om.enable.filesystem.paths is enabled, paths are treated as 
filesystem paths, so we check file system semantics and normalize the path.

Does it mean that turning on `ozone.om.enable.filesystem.paths` breaks AWS s3 
compatibility?
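
To make the concern concrete, a small sketch (using Hadoop's generic Path 
normalization as a stand-in, not Ozone's actual normalization code): once keys 
are normalized on the write path, a raw S3 key name may no longer round-trip:

{code:java}
import org.apache.hadoop.fs.Path;

public class NormalizationSketch {
  public static void main(String[] args) {
    String rawS3Key = "a//b/c";  // a perfectly legal AWS S3 key name
    String normalized = new Path("/" + rawS3Key).toUri().getPath();
    // Prints "/a/b/c": a later GET for the original "a//b/c" would miss.
    System.out.println(normalized);
  }
}
{code}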

bq. (And also planning to make this bucket level property, instead of 
cluster-wide, not yet finalized)

This bucket-level setting sounds very cool.

bq.  Related to 1 + 2. Is it possible to create the intermediate "dir" keys but 
remove them from the list when listed from S3?

bq. Yes, it can be. But right now when this property is enabled, we show all 
intermediate directories also. Arpit Agarwal brought up the point that if we don't 
show intermediate keys, then when a user tries to create a key with that 
intermediate path it will fail, and the user will be confused: intermediate 
paths are not shown, yet the user is not able to create a key.

bq. From a usability point of view, we can show intermediate dirs. Do you see any 
advantage or any other favorable points in hiding those in the list operation? We 
can revisit this if required.

I am fine with showing them on the o3fs/o3/ofs interfaces, but I would prefer to 
keep 100% AWS S3 compatibility. If that means we need to hide the 
intermediate directories *from the s3 output*, we might need that change.

bq. Not sure, what is meant here. Any more info will help to answer the 
question.

The prefix table effort creates prefixes for each parent directory (AFAIK). Do we 
need this code once the prefix table works? Will this concept be changed after 
the prefix table is in use?


And one more question:

Why do we need that specific setting at all? If we can provide 100% AWS s3 
compatibility with the new approach, why is it required to be optional? Do you 
see any disadvantage of the new approach?

It seems harder to test both of the approaches... 

> S3/Ozone Filesystem inter-op
> 
>
> Key: HDDS-4097
> URL: https://issues.apache.org/jira/browse/HDDS-4097
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: Ozone FileSystem Paths Enabled.docx, Ozone filesystem 
> path enabled.xlsx
>
>
> This Jira is to implement the changes required to use Ozone buckets when data is 
> ingested via S3 and the bucket/volume is used via OzoneFileSystem. The initial 
> implementation for this was done as part of HDDS-3955. A few APIs missed these 
> changes during the implementation of HDDS-3955. The attached design document 
> discusses each API and what changes are required.
> The Excel sheet has information about each API, the interfaces from which the OM 
> API is used, and what changes are required for the API to support 
> inter-operability.
> Note: The proposal for delete/rename is still under discussion, not yet 
> finalized. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org


