Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "HCFS/Progress" page has been changed by JayVyas:
https://wiki.apache.org/hadoop/HCFS/Progress?action=diff&rev1=20&rev2=21

+ '''How to update the HCFS contract and test classes as the FileSystem 
evolves.'''
+ 
+ When we define new file system behaviours, it's critical to update the contract documentation and tests.  This is not particularly difficult, however, because the HCFS contracts consist only of:
+ 
+  1. Unit tests, which are extended for individual file systems and driven by fields from the XML contract files.
+  1. XML files which define a filesystem's semantics
+  1. A series of .md files, which define a semi-formal specification.
+ 
+ The steps to extend any FileSystem's semantics are now simple and explicit.
+ 
+ 1) Check out the guide 
[[https://github.com/apache/hadoop-common/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/filesystem/extending.md|here]],
 and update the .md contract files.
+ 
+ 2) Update the existing unit tests (test/java/org/apache/hadoop/fs/contract/) 
where relevant.
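+ 
+ As a hedged sketch of step 2: a test for a new behaviour can key off a contract option, so that it skips cleanly on file systems which do not declare that option.  The option name supports-frobnication below is purely hypothetical, and the fragment assumes the AbstractFSContractTestBase / ContractTestUtils APIs introduced by HADOOP-9361:
+ 
+ {{{
+  // fragment: inside a test class extending AbstractFSContractTestBase
+  @Test
+  public void testFrobnication() throws Throwable {
+    // skip when this filesystem's contract XML does not declare
+    // the (hypothetical) option fs.contract.supports-frobnication
+    if (!getContract().isSupported("supports-frobnication", false)) {
+      ContractTestUtils.skip("FS does not support frobnication");
+    }
+    // ... exercise the new behaviour against getFileSystem() here
+  }
+ }}}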
+ 
  '''Hadoop FileSystem and FileContext work, largely concluded.'''
  
+ Good news!  With https://issues.apache.org/jira/browse/HADOOP-9361, we are now able to test Hadoop FileSystems in an unambiguous and declarative manner, using a combination of:
  
 * An XML file to define FileSystem semantics.  This file needs to be loaded in your unit tests: the contract defines the semantics of your file system, and the unit tests then test against the parameters you define.  For example,
+ 
  {{{
     <property>
     <name>fs.contract.supports-unix-permissions</name>
@@ -15, +31 @@

 * The standard contract test super classes bundled into Hadoop.  These are built into the hadoop-common tests jar (hadoop-common-3.0.0-SNAPSHOT-tests.jar).
  
 * Custom classes extending each of the above super classes.  To do this, you manually create classes that extend the super classes from the hadoop tests jar, like so:
+ 
  {{{
   public class TestMyHCFSBaseContract extends AbstractFSContract
   public class TestMyHCFSCreateTests extends AbstractContractCreateTest
@@ -25, +42 @@

  }}}
 And so on.  (All the classes you can override are in org.apache.hadoop.fs.contract, and you can scan the existing Hadoop source code for examples of how to override them properly.)
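 
 As a fuller (but still hedged) sketch, here is what a minimal contract class plus one bound test class might look like, modeled on the LocalFSContract bundled with hadoop-common.  The class names, the myhcfs scheme, and the contract/myhcfs.xml resource path are illustrative only:
 {{{
 import java.io.IOException;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.contract.AbstractContractCreateTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
 
 public class MyHCFSContract extends AbstractFSContract {
 
   // contract XML holding the fs.contract.* properties (illustrative path)
   public static final String CONTRACT_XML = "contract/myhcfs.xml";
 
   public MyHCFSContract(Configuration conf) {
     super(conf);
     // load the declarative contract definition into the configuration
     addConfResource(CONTRACT_XML);
   }
 
   @Override
   public String getScheme() {
     return "myhcfs";
   }
 
   @Override
   public FileSystem getTestFileSystem() throws IOException {
     // sketch: return the filesystem under test
     // (here, whatever fs.defaultFS points at)
     return FileSystem.get(getConf());
   }
 }
 
 // in a separate file: each test class just binds a superclass to the contract
 public class TestMyHCFSCreateTests extends AbstractContractCreateTest {
   @Override
   protected AbstractFSContract createContract(Configuration conf) {
     return new MyHCFSContract(conf);
   }
 }
 }}}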
  
+ The completion of this coherent and flexible test framework allows us to expand upon and customize Hadoop file system work.  To extend the contract tests, or to add new semantics, there is a clear path: the .md files, which live inside the existing hadoop-common source code.  See the src/site/markdown/filesystem/.... files to do so.  These can easily be browsed here:
  
  
https://github.com/apache/hadoop-common/tree/trunk/hadoop-common-project/hadoop-common/src/site/markdown/filesystem
  
@@ -50, +65 @@

 * BIGTOP-1089: Scale testing as a universal HCFS integration test, confirming that the whole ecosystem works together at the FS interface level.  (Scale testing / updating to 50 input splits is pending from the OrangeFS community.)
  
 In another thread, we will work to improve coverage of RawLocalFileSystem (LocalFs/LocalFileSystem).
  
  '''Hadoop FileSystem Validation Workstream (2013)'''
  
