[ 
https://issues.apache.org/jira/browse/HDFS-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16234770#comment-16234770
 ] 

Steve Loughran commented on HDFS-7240:
--------------------------------------

I'm starting with hadoop-common and hadoop-ozone; more to follow on Thursday.

For now, the biggest issue I have is that OzoneException needs to become an IOE, 
simplifying exception handling all round, preserving information, not losing 
stack traces, and generally leading to happy support teams as well as 
developers. Changing the base class isn't itself traumatic, but it will ripple 
into the client code, as there's no longer much need to catch & wrap 
things.


Other: What's your scale limit? I see a single PUT for the upload, and GET path > 
tmp in open(). Is there a test for different sizes of file?

h2. hadoop-common

h3. Config


I've filed some comments on the newly created HADOOP-15007, "Stabilize and 
document Configuration <tag> element", to cover making sure that the tests & 
docs for this are in place.

* HDFSPropertyTag: s/DEPRICATED/DEPRECATED/
* OzonePropertyTag: s/there/their/ 
* OzoneConfigProperty.toString() is going to be "key valuenull" if there is no 
tag defined. Missing space?
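A null-safe {{toString()}} along the lines suggested; the class and field names here are illustrative sketches, not the actual Ozone ones:

```java
// Hedged sketch of the toString() fix: separate value and tag with a
// space and skip the tag when unset, instead of emitting "key valuenull".
class ConfigProperty {
  final String key;
  final String value;
  final String tag;   // may be null when no tag is defined

  ConfigProperty(String key, String value, String tag) {
    this.key = key;
    this.value = value;
    this.tag = tag;
  }

  @Override
  public String toString() {
    return key + " " + value + (tag == null ? "" : " " + tag);
  }
}
```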



h3. FileUtils

minor: imports are all shuffled about compared to trunk & branch-2; revert.

h3. OzoneException

This is its own exception, not an IOE, and at least in OzoneFileSystem the 
process to build an IOE from it invariably loses the inner stack trace and all 
meaningful information about the exception type. Equally, OzoneBucket catches 
all forms of IOException and converts them to an {{OzoneRestClientException}}. 

We don't need to do this. 

It will lose stack trace data, cause confusion, and is already making the client 
code over-complex: catching IOEs, wrapping them in OzoneException, then catching 
OzoneException and converting back to an IOE, at which point all the core 
information is lost. 

1. Make this a subclass of IOE, consistent with the rest of our code; then 
clients can throw it up untouched, except in the special cases where they need to 
perform some form of exception handling.
2. Except for (any?) special cases, pass up IOEs raised in the http client 
as-is.
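A minimal sketch of what the subclassing could look like; the extra fields and constructors here are illustrative, not the actual Ozone ones:

```java
import java.io.IOException;

// Hedged sketch: OzoneException as an IOException subclass, so clients can
// rethrow it untouched. The httpCode/resource fields are illustrative.
class OzoneException extends IOException {
  private final int httpCode;      // e.g. the REST response code, 0 if unset
  private final String resource;   // illustrative extra context

  OzoneException(int httpCode, String resource, String message) {
    super(message);
    this.httpCode = httpCode;
    this.resource = resource;
  }

  OzoneException(String message, Throwable cause) {
    super(message, cause);         // preserves the inner stack trace
    this.httpCode = 0;
    this.resource = null;
  }

  int getHttpCode() { return httpCode; }
  String getResource() { return resource; }
}
```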


Also.
* Confused by the overriding of message/getMessage. Is it for serialization? 
* Consider adding a setMessage(String format, String... args) and calling 
String.format: it would tie in with uses in the code.
* Override initCause() so that setMessage() is called to record the nested 
exception (hence the full stack survives), handling the case where the 
exception returns null for getMessage():

{code}
@Override
public OzoneException initCause(Throwable t) {
  super.initCause(t);
  setMessage(t.getMessage() != null ? t.getMessage() : t.toString());
  return this;
}
{code}

h2. OzoneFileSystem

h3. general


* Various places use LOG.info("text " + something); they should all move to 
LOG.info("text {}", something).
* Once OzoneException -> IOE, you can cut the catch and translate here.
* Qualify paths before all uses. That's needed to stop them being relative, and 
to catch things like someone calling ozfs.rename("o3://bucket/src", 
"s3a://bucket/dest") or delete("s3a://bucket/path"), as well as problems 
with validation happening before paths are made absolute.



* {{RenameIterator.iterate()}} it's going to log @ warn whenever it can't 
delete a temp file because it doesn't exist, which may be a distraction in 
failures. Better: {{if(!tmpFile.delete() && tmpFile.exists())}}, as that will 
only warn if the temp file is actually there. 
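The path-qualification point above can be sketched with {{java.net.URI}} standing in for Hadoop's Path, purely for illustration: reject paths whose scheme doesn't match this filesystem's, then resolve relative paths against the working directory.

```java
import java.net.URI;

// Hedged sketch of qualification: wrong-scheme paths are rejected,
// relative paths are made absolute, and ".." segments are collapsed
// (which is also what defeats rename-into-self tricks).
class PathQualifier {
  private final URI fsUri;        // e.g. o3://bucket/
  private final URI workingDir;   // current working directory, absolute

  PathQualifier(URI fsUri, URI workingDir) {
    this.fsUri = fsUri;
    this.workingDir = workingDir;
  }

  URI qualify(URI path) {
    if (path.getScheme() != null && !path.getScheme().equals(fsUri.getScheme())) {
      throw new IllegalArgumentException(
          "Wrong filesystem: " + path + "; expected scheme " + fsUri.getScheme());
    }
    // resolve() makes relative paths absolute; normalize() collapses ".."
    return workingDir.resolve(path).normalize();
  }
}
```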

h3. OzoneFileSystem.rename()

Rename() is the operation to fear on an object store. I haven't looked at it in 
full detail. 
* Qualify all the paths before doing directory validation. Otherwise you can 
defeat the "don't rename into self" checks with rename("/path/src", 
"/path/../path/src/dest").
* Log @ debug all the paths taken before returning, so you can debug if needed. 
* S3A rename ended up having a special RenameFailedException() which 
innerRename() raises, with text and return code. Outer rename logs the text and 
returns the return code. This means that all failing paths have an exception 
clearly thrown, and when we eventually make rename/3 public, it's lined up to 
throw exceptions back to the caller. Consider copying this code.
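The S3A pattern described above can be sketched as follows; the names follow S3A's shape but the bodies here are illustrative stubs, not the real implementation:

```java
import java.io.IOException;

// Hedged sketch: innerRename() throws a RenameFailedException carrying the
// message and return code; the outer rename() logs it and downgrades to a
// boolean, while a future rename/3 could rethrow it to the caller.
class RenameFailedException extends IOException {
  private final boolean exitCode;   // value the outer rename() should return

  RenameFailedException(String src, String dst, String reason) {
    super("rename " + src + " to " + dst + " failed: " + reason);
    this.exitCode = false;
  }

  boolean getExitCode() { return exitCode; }
}

class RenameDemo {
  boolean rename(String src, String dst) {
    try {
      innerRename(src, dst);
      return true;
    } catch (RenameFailedException e) {
      System.out.println(e.getMessage());  // rename/2: log and downgrade
      return e.getExitCode();              // rename/3 could rethrow instead
    } catch (IOException e) {
      return false;
    }
  }

  private void innerRename(String src, String dst) throws IOException {
    // illustrative validation: refuse to rename a path under itself
    if (dst.startsWith(src + "/")) {
      throw new RenameFailedException(src, dst, "destination is under source");
    }
    // ... actual copy-and-delete steps would go here ...
  }
}
```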

h3. OzoneFileSystem.delete

* qualify path before use
* don't log at error if you can't delete a nonexistent path; delete() is used 
everywhere for silent cleanup. Cut it

h3. OzoneFileSystem.ListStatusIterator

* make status field final

h3. OzoneFileSystem.mkdir

Liked your algorithm here; took me a moment to understand how rollback didn't 
need to track all created directories. nice.
* do qualify path first.

h3. OzoneFileSystem.getFileStatus

{{getKeyInfo()}} catches all exceptions and maps them to null, which is 
interpreted as "not found" and eventually surfaces as an FNFE. This is 
misleading if the failure was for any other reason.

Once OzoneException -> IOException, {{getKeyInfo()}} should only catch & 
downgrade the explicit not found (404?) responses.
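A self-contained sketch of that narrowed catch; the types here are stand-ins for illustration, not the Ozone client classes:

```java
import java.io.IOException;

// Hedged sketch: only the explicit not-found response maps to null
// (-> FNFE upstream); every other failure must propagate.
class NotFoundDemo {
  static class RestException extends IOException {
    final int httpCode;
    RestException(int code, String msg) { super(msg); this.httpCode = code; }
  }

  interface KeyLookup {
    String getKey(String key) throws IOException;
  }

  static String getKeyInfo(KeyLookup client, String key) throws IOException {
    try {
      return client.getKey(key);
    } catch (RestException e) {
      if (e.httpCode == 404) {
        return null;            // genuinely not found: caller raises FNFE
      }
      throw e;                  // auth failures, 5xx etc. must surface
    }
  }
}
```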

h3. OzoneFileSystem.listKeys()

Unless this needs to be tagged @VisibleForTesting, make it private. 


h2. OzoneOutputStream

* Implement StreamCapabilities and declare that hsync/hflush are not supported.
* Unless there is no limit on the size of a PUT request, or multipart uploads 
are supported, consider having the stream's {{write(int)}} method fail when the 
limit is reached. That way, things will at least fail fast.
* after close, set backupStream = null.
* {{flush()}} should be a no-op if called on a closed stream, so {{if (closed) 
return}}
* {{write()}} must fail if called on a closed stream.
* Again, OzoneException -> IOE translation which could/should be eliminated.
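The closed-stream semantics above can be sketched as follows; this mimics StreamCapabilities with a plain {{hasCapability}} method so the sketch stays free of Hadoop dependencies, and the names are illustrative:

```java
import java.io.IOException;
import java.io.OutputStream;

// Hedged sketch: flush() is a no-op once closed, write() fails, close()
// releases the backing stream, and hsync/hflush are declared unsupported.
class SketchOutputStream extends OutputStream {
  private OutputStream backupStream;
  private boolean closed;

  SketchOutputStream(OutputStream backing) { this.backupStream = backing; }

  public boolean hasCapability(String capability) {
    return false;                      // no hsync/hflush support
  }

  @Override
  public void write(int b) throws IOException {
    if (closed) {
      throw new IOException("stream is closed");
    }
    backupStream.write(b);
  }

  @Override
  public void flush() throws IOException {
    if (closed) {
      return;                          // silent no-op after close
    }
    backupStream.flush();
  }

  @Override
  public void close() throws IOException {
    if (!closed) {
      closed = true;
      backupStream.close();
      backupStream = null;             // drop the buffer reference
    }
  }
}
```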


h2. OzoneInputStream

* You have chosen an interesting solution to the "efficient seek" problem here: 
download the entire file and then seek around. While this probably works for a 
first release, larger files will hit problems in both disk space consumed and 
time to download.

* Again, OzoneException -> IOE translation which could/should be eliminated.

h2. Testing

* Implement something like {{AbstractSTestS3AHugeFiles}} for scale tests, again 
with the ability to specify on the maven build how big the files to be created 
are. Developers should be able to ask for a test run with an 8GB test write, 
read and seek, to see what happens.
* Add a subclass of {{org.apache.hadoop.fs.FileSystemContractBaseTest}}, 
ideally {{org.apache.hadoop.fs.FSMainOperationsBaseTest}}. These test things 
which the newer contract tests haven't yet reimplemented.
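One way to wire the maven-configurable size, following the S3A huge-files pattern: the scale test reads a size from a system property that the build can set. The property name here is a hypothetical example, not an existing one.

```java
// Hedged sketch: parse a human-readable size ("8G", "512M", or raw bytes)
// from a system property a maven profile could set with -D.
class ScaleConfig {
  static long hugeFileSize() {
    String size = System.getProperty("ozone.scale.test.huge.filesize", "1M");
    long multiplier = 1;
    String digits = size;
    char unit = size.charAt(size.length() - 1);
    if (unit == 'M' || unit == 'G') {
      multiplier = (unit == 'M') ? 1L << 20 : 1L << 30;
      digits = size.substring(0, size.length() - 1);
    }
    return Long.parseLong(digits) * multiplier;
  }
}
```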

h3. TestOzoneFileInterfaces

* Needs a {{Timeout}} rule for test timeouts.
* all your assertEquals arguments are the wrong way round: expected first, 
actual second. sorry.
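Why the order matters: JUnit builds the failure message positionally, so swapped arguments report the actual value as "expected". This mini helper mimics JUnit 4's message format for illustration.

```java
// Hedged illustration of assertEquals(expected, actual) ordering: the
// failure report reads "expected:<...> but was:<...>", so e.g.
// assertEquals(blockSize, status.getBlockSize()) and not the reverse.
class AssertOrderDemo {
  static String failureMessage(Object expected, Object actual) {
    return "expected:<" + expected + "> but was:<" + actual + ">";
  }
}
```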


> Object store in HDFS
> --------------------
>
>                 Key: HDFS-7240
>                 URL: https://issues.apache.org/jira/browse/HDFS-7240
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Jitendra Nath Pandey
>            Assignee: Jitendra Nath Pandey
>            Priority: Major
>         Attachments: HDFS-7240.001.patch, HDFS-7240.002.patch, 
> HDFS-7240.003.patch, HDFS-7240.003.patch, HDFS-7240.004.patch, 
> Ozone-architecture-v1.pdf, Ozonedesignupdate.pdf, ozone_user_v0.pdf
>
>
> This jira proposes to add object store capabilities into HDFS. 
> As part of the federation work (HDFS-1052) we separated block storage as a 
> generic storage layer. Using the Block Pool abstraction, new kinds of 
> namespaces can be built on top of the storage layer i.e. datanodes.
> In this jira I will explore building an object store using the datanode 
> storage, but independent of namespace metadata.
> I will soon update with a detailed design document.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
