[jira] [Updated] (HDFS-8154) Extract WebHDFS protocol out as a specification to allow easier clients and servers

2016-11-27 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8154:
--
Assignee: (was: Jakob Homan)

> Extract WebHDFS protocol out as a specification to allow easier clients and 
> servers
> ---
>
> Key: HDFS-8154
> URL: https://issues.apache.org/jira/browse/HDFS-8154
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>
> WebHDFS would be more useful if there were a programmatic description of its 
> interface, which would allow one to more easily create servers and clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-9191) Typo in Hdfs.java. NoSuchElementException is misspelled

2015-10-02 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-9191:
--
Comment: was deleted

(was: Thanks for the tutorial!  I'm going to write up my notes and share them.

Good luck to you at your next gig!  You know that if you take your bike to 
work, you can ride to Seattle and back over the bridge at lunch.  Or there's a 
restaurant at the top of Mercer Island, the Roanoke Inn, that's a great lunch 
stop.  You cannot kayak there, though.  :)

You can also ride to the Factoria mall area via the bike trail for lunch.  It 
will get you out of the office.

Cathy

)

> Typo in  Hdfs.java.  NoSuchElementException is misspelled
> -
>
> Key: HDFS-9191
> URL: https://issues.apache.org/jira/browse/HDFS-9191
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Catherine Palmer
>Assignee: Catherine Palmer
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: hdfs-9191.patch
>
>
> Line 241 NoSuchElementException has a typo





[jira] [Updated] (HDFS-9191) Typo in Hdfs.java. NoSuchElementException is misspelled

2015-10-02 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-9191:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

+1.  Since it's a comment change, not waiting for Jenkins.  Thanks for the 
contribution, Catherine!

> Typo in  Hdfs.java.  NoSuchElementException is misspelled
> -
>
> Key: HDFS-9191
> URL: https://issues.apache.org/jira/browse/HDFS-9191
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: HDFS
>Reporter: Catherine Palmer
>Assignee: Catherine Palmer
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: hdfs-9191.patch
>
>
> Line 241 NoSuchElementException has a typo





[jira] [Updated] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-09-03 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8939:
--
  Resolution: Fixed
Assignee: Chris Nauroth  (was: Jakob Homan)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

+1.  I've committed this to branch-2 and trunk.  Thanks, Chris.  Resolving.

> Test(S)WebHdfsFileContextMainOperations failing on branch-2
> ---
>
> Key: HDFS-8939
> URL: https://issues.apache.org/jira/browse/HDFS-8939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0
>Reporter: Jakob Homan
>Assignee: Chris Nauroth
> Fix For: 2.8.0
>
> Attachments: HDFS-8939-branch-2.001.patch, 
> HDFS-8939-branch-2.002.patch, HDFS-8939-branch-2.003.patch, 
> HDFS-8939.003.patch
>
>
> After HDFS-8180, TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
> instantiating the wrapped WebHDFSFileSystems, because {{getDefaultPort}} is 
> trying to access a conf that was never provided.  In the constructors of both 
> WebHdfs and SWebhdfs, the underlying (S)WebHdfsFileSystems are instantiated 
> and never have a chance to have their {{setConf}} methods called:
> {code}  SWebHdfs(URI theUri, Configuration conf)
>   throws IOException, URISyntaxException {
>     super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
>   }{code}
> The test passes on trunk because HDFS-5321 removed the call to the 
> Configuration instance as part of {{getDefaultPort}}.  HDFS-5321 was applied 
> to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
> how branch-2 versus trunk handles default values (branch-2 pulls them from 
> configs if specified; trunk just returns the hard-coded value from the 
> constants file).
> I've fixed this to behave like trunk and return just the hard-coded value, 
> which causes the test to pass.
> There is no WebHdfsFileSystem constructor that takes a Config, which would 
> be another way to fix this.
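The NPE described above can be reproduced in miniature outside Hadoop.  Below 
is a minimal sketch (all class and key names are hypothetical, not the actual 
HDFS code) of a Configurable-style object whose conf-reading method blows up 
when {{setConf}} was never called, alongside the trunk-style fix of returning 
the hard-coded constant:

```java
import java.util.HashMap;
import java.util.Map;

class MiniConf {
    private final Map<String, Integer> values = new HashMap<>();
    void setInt(String key, int v) { values.put(key, v); }
    int getInt(String key, int dflt) { return values.getOrDefault(key, dflt); }
}

class MiniWebFs {
    static final int DEFAULT_PORT = 50070;  // hard-coded constant, trunk style
    private MiniConf conf;                  // stays null unless setConf is called

    void setConf(MiniConf conf) { this.conf = conf; }

    // branch-2 style: consults the conf, so it NPEs when no conf was provided
    int getDefaultPortFromConf() {
        return conf.getInt("dfs.http.port", DEFAULT_PORT);
    }

    // trunk style (the fix): return the constant; no conf needed
    int getDefaultPort() {
        return DEFAULT_PORT;
    }
}
```

A wrapped instance constructed directly, as in the {{SWebHdfs}} constructor 
above, never gets {{setConf}} called, so the branch-2-style accessor throws.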





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-08-29 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8155:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8
   Status: Resolved  (was: Patch Available)

The test failures, numerous though they are, are unrelated and are happening to 
other JIRAs as well (HDFS-8983).  Having received the rare double-Chris(D|N) 
+1, I've committed this to trunk and branch-2.  Resolving.

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Fix For: 2.8
>
> Attachments: HDFS-8155-1.patch, HDFS-8155.002.patch, 
> HDFS-8155.003.patch, HDFS-8155.004.patch, HDFS-8155.005.patch, 
> HDFS-8155.006.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-08-28 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8155:
--
Status: Open  (was: Patch Available)

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-8155-1.patch, HDFS-8155.002.patch, 
> HDFS-8155.003.patch, HDFS-8155.004.patch, HDFS-8155.005.patch, 
> HDFS-8155.006.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-08-28 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8155:
--
Attachment: HDFS-8155.006.patch

Fixed ChrisN's final comment.  Yeah, Jenkins is being weird for me.  I ran 
through all the HDFS tests manually and, except for a couple of non-repeatable, 
unrelated failures, everything passed.  I'll let Jenkins run again, but unless 
it finds something real, I'll go ahead and commit this later today.  Thanks.

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-8155-1.patch, HDFS-8155.002.patch, 
> HDFS-8155.003.patch, HDFS-8155.004.patch, HDFS-8155.005.patch, 
> HDFS-8155.006.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-08-28 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8155:
--
Status: Patch Available  (was: Open)

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-8155-1.patch, HDFS-8155.002.patch, 
> HDFS-8155.003.patch, HDFS-8155.004.patch, HDFS-8155.005.patch, 
> HDFS-8155.006.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-08-27 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8155:
--
Status: Patch Available  (was: Open)

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-8155-1.patch, HDFS-8155.002.patch, 
> HDFS-8155.003.patch, HDFS-8155.004.patch, HDFS-8155.005.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-08-27 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8155:
--
Attachment: HDFS-8155.005.patch

Fixed ChrisD's comment.

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-8155-1.patch, HDFS-8155.002.patch, 
> HDFS-8155.003.patch, HDFS-8155.004.patch, HDFS-8155.005.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-08-27 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8155:
--
Status: Open  (was: Patch Available)

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-8155-1.patch, HDFS-8155.002.patch, 
> HDFS-8155.003.patch, HDFS-8155.004.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-08-27 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8155:
--
Status: Patch Available  (was: Open)

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-8155-1.patch, HDFS-8155.002.patch, 
> HDFS-8155.003.patch, HDFS-8155.004.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-08-27 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8155:
--
Attachment: HDFS-8155.004.patch

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-8155-1.patch, HDFS-8155.002.patch, 
> HDFS-8155.003.patch, HDFS-8155.004.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-08-27 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8155:
--
Status: Open  (was: Patch Available)

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-8155-1.patch, HDFS-8155.002.patch, 
> HDFS-8155.003.patch, HDFS-8155.004.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-08-27 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8155:
--
Status: Patch Available  (was: Open)

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-8155-1.patch, HDFS-8155.002.patch, 
> HDFS-8155.003.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-08-27 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8155:
--
Attachment: HDFS-8155.003.patch

New patch addressing checkstyle and ChrisN's review.

bq. ConfRefreshTokenBasedAccessTokenProvider, 
ConfRefreshTokenBasedAccessTokenProvider: There are no timeouts specified in 
the calls to the refresh URL. Timeouts can be controlled by calling 
client.setConnectTimeout and client.setReadTimeout.
Done.
bq. AccessTokenProvider: Optional - consider extending Configured so that it 
inherits the implementations of getConf and setConf for free.
Configured sets the conf as part of the constructor, which breaks the way ATP 
implementations set their values.  I kept it as Configurable just to avoid code 
churn.
bq. WebHDFS.md: Typo: "toekns" instead of "tokens"
Done.
bq. Please address the javac and checkstyle warnings.
Done.  My local test-patch run is happy (at last).
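For reference, the timeout knobs look like this on the JDK's own HTTP client 
(a hedged sketch only; the patch configures its own OAuth2 HTTP client via 
{{setConnectTimeout}}/{{setReadTimeout}} as the review describes, and the URL 
here is a placeholder):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class TokenEndpointTimeouts {
    public static void main(String[] args) throws Exception {
        // openConnection() does not touch the network, so this is safe offline.
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://example.com/oauth2/token").openConnection();
        // Without explicit timeouts, a hung token endpoint can block the caller forever.
        conn.setConnectTimeout(30_000);  // ms allowed to establish the connection
        conn.setReadTimeout(30_000);     // ms allowed waiting for data once connected
        System.out.println(conn.getConnectTimeout() + "/" + conn.getReadTimeout());
    }
}
```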

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-8155-1.patch, HDFS-8155.002.patch, 
> HDFS-8155.003.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-08-27 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8155:
--
Status: Open  (was: Patch Available)

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-8155-1.patch, HDFS-8155.002.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-08-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8155:
--
Attachment: HDFS-8155.002.patch

Revised patch based on ChrisD's comments.  Still applies to trunk and branch-2.

bq. Instead of adding its own Time classes under utils, it may make sense to 
move Clock from YARN to Common. Or use the Timer class in Common, if that's 
sufficient.
Done. The Clock in YARN is public/stable so can't be moved.  There are about 
six other clocks or timers floating around.  I picked the most useful public 
one in Common.
bq. If the timing classes could use existing code, the Utils class could be 
package-private in o.a.h.hdfs.web.oauth2
Done.  They did and disappeared.  I ended up moving the AccessTokenTimer out of 
the utils package, which caused that package to vanish in a puff of smoke.  
Utils is now package-private.
bq. Please add scope/visibility annotations to classes
Done.
bq. The protected, mutable fields in the AccessTokenProvider classes can be 
private. nextRefreshMSSinceEpoch in Timer, also (constructor should explicitly 
init to 0 instead of self-ref)
Done.
bq. Instead of requiring initialize, would it make sense for 
AccessTokenProvider implementations to implement Configurable as appropriate?
Done.  There's quite a lot going on in setConf, but oh well.
bq. What is the expectation for multithreaded access for these classes?
Individual implementations should handle whatever is appropriate.  For the two 
provided ones I've synchronized the refresh method, which should work.
bq. If AccessTokenProvider were an abstract class, could the impls share more 
of the code in refresh? Superficially, they look very similar...
I've gone back and forth on how much to share.  I'd like to leave them a bit 
separate for now and see how many other implementations are provided and how 
common they are.  Since the code is public/evolving, it will be easy to update 
things as necessary.  As part of the Configurable change, the class did become 
abstract rather than an interface, which will make future changes easier.
bq. Should refresh throw IOException instead of IllegalArgumentException, since 
AccessTokenProvider::getAccessToken supports it?
Done.
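The synchronization pattern discussed above can be sketched as follows (names 
and signatures are illustrative, not the actual Hadoop classes): an abstract 
provider whose synchronized accessor keeps concurrent callers from racing to 
refresh a token near expiry.

```java
// Hypothetical sketch of an abstract token provider with a synchronized
// accessor; implementations would fetch from their refresh URL in fetchToken().
abstract class SketchAccessTokenProvider {
    private String accessToken;  // cached token, null until first fetch
    private long expiresAtMs;    // epoch ms after which the token is stale

    /** Returns a valid token, refreshing first if the cached one expired. */
    public synchronized String getAccessToken() {
        long now = System.currentTimeMillis();
        if (accessToken == null || now >= expiresAtMs) {
            accessToken = fetchToken();           // e.g. hit the refresh URL
            expiresAtMs = now + tokenLifetimeMs();
        }
        return accessToken;
    }

    /** Implementations obtain a fresh token from their token endpoint. */
    protected abstract String fetchToken();

    /** How long a freshly fetched token stays valid. */
    protected abstract long tokenLifetimeMs();
}
```

Synchronizing the accessor rather than each field keeps the check-then-refresh 
sequence atomic, which matches the per-implementation approach described above.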

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-8155-1.patch, HDFS-8155.002.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-08-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8155:
--
Status: Patch Available  (was: Open)

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-8155-1.patch, HDFS-8155.002.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Reopened] (HDFS-8943) Read apis in ByteRangeInputStream does not read all the bytes specified when chunked transfer-encoding is used in the server

2015-08-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan reopened HDFS-8943:
---

We need a fix for this; let's not close it until we've reached one.

> Read apis in ByteRangeInputStream does not read all the bytes specified when 
> chunked transfer-encoding is used in the server
> 
>
> Key: HDFS-8943
> URL: https://issues.apache.org/jira/browse/HDFS-8943
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.7.1
>Reporter: Shradha Revankar
>Assignee: Shradha Revankar
> Attachments: HDFS-8943.000.patch
>
>
> With the default WebHDFS server implementation, the read APIs in 
> ByteRangeInputStream work as expected, reading the correct number of bytes, 
> for these APIs:
> {{public int read(byte b[], int off, int len)}}
> {{public int read(long position, byte[] buffer, int offset, int length)}}
> But when a custom WebHDFS server implementation is plugged in which uses 
> chunked Transfer-Encoding, these APIs read only the first chunk.  A simple 
> fix would be to loop and read until the specified number of bytes is 
> reached, similar to {{readFully()}}.





[jira] [Commented] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-08-24 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14710298#comment-14710298
 ] 

Jakob Homan commented on HDFS-8939:
---

btw, [~aw], any idea why Jenkins isn't picking this up to run against branch-2? 
 I believe it's named correctly according to 
[HowToContribute|https://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch].

> Test(S)WebHdfsFileContextMainOperations failing on branch-2
> ---
>
> Key: HDFS-8939
> URL: https://issues.apache.org/jira/browse/HDFS-8939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Fix For: 2.8.0
>
> Attachments: HDFS-8939-branch-2.001.patch, 
> HDFS-8939-branch-2.002.patch
>
>
> After HDFS-8180, TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
> instantiating the wrapped WebHDFSFileSystems, because {{getDefaultPort}} is 
> trying to access a conf that was never provided.  In the constructors of both 
> WebHdfs and SWebhdfs, the underlying (S)WebHdfsFileSystems are instantiated 
> and never have a chance to have their {{setConf}} methods called:
> {code}  SWebHdfs(URI theUri, Configuration conf)
>   throws IOException, URISyntaxException {
>     super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
>   }{code}
> The test passes on trunk because HDFS-5321 removed the call to the 
> Configuration instance as part of {{getDefaultPort}}.  HDFS-5321 was applied 
> to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
> how branch-2 versus trunk handles default values (branch-2 pulls them from 
> configs if specified; trunk just returns the hard-coded value from the 
> constants file).
> I've fixed this to behave like trunk and return just the hard-coded value, 
> which causes the test to pass.
> There is no WebHdfsFileSystem constructor that takes a Config, which would 
> be another way to fix this.





[jira] [Updated] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-08-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8939:
--
Status: Open  (was: Patch Available)

> Test(S)WebHdfsFileContextMainOperations failing on branch-2
> ---
>
> Key: HDFS-8939
> URL: https://issues.apache.org/jira/browse/HDFS-8939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Fix For: 2.8.0
>
> Attachments: HDFS-8939-branch-2.001.patch, 
> HDFS-8939-branch-2.002.patch
>
>
> After HDFS-8180, TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
> instantiating the wrapped WebHDFSFileSystems, because {{getDefaultPort}} is 
> trying to access a conf that was never provided.  In the constructors of both 
> WebHdfs and SWebhdfs, the underlying (S)WebHdfsFileSystems are instantiated 
> and never have a chance to have their {{setConf}} methods called:
> {code}  SWebHdfs(URI theUri, Configuration conf)
>   throws IOException, URISyntaxException {
>     super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
>   }{code}
> The test passes on trunk because HDFS-5321 removed the call to the 
> Configuration instance as part of {{getDefaultPort}}.  HDFS-5321 was applied 
> to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
> how branch-2 versus trunk handles default values (branch-2 pulls them from 
> configs if specified; trunk just returns the hard-coded value from the 
> constants file).
> I've fixed this to behave like trunk and return just the hard-coded value, 
> which causes the test to pass.
> There is no WebHdfsFileSystem constructor that takes a Config, which would 
> be another way to fix this.





[jira] [Updated] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-08-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8939:
--
Status: Patch Available  (was: Open)

> Test(S)WebHdfsFileContextMainOperations failing on branch-2
> ---
>
> Key: HDFS-8939
> URL: https://issues.apache.org/jira/browse/HDFS-8939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Fix For: 2.8.0
>
> Attachments: HDFS-8939-branch-2.001.patch, 
> HDFS-8939-branch-2.002.patch
>
>
> After HDFS-8180, TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
> instantiating the wrapped WebHDFSFileSystems, because {{getDefaultPort}} is 
> trying to access a conf that was never provided.  In the constructors of both 
> WebHdfs and SWebhdfs, the underlying (S)WebHdfsFileSystems are instantiated 
> and never have a chance to have their {{setConf}} methods called:
> {code}  SWebHdfs(URI theUri, Configuration conf)
>   throws IOException, URISyntaxException {
>     super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
>   }{code}
> The test passes on trunk because HDFS-5321 removed the call to the 
> Configuration instance as part of {{getDefaultPort}}.  HDFS-5321 was applied 
> to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
> how branch-2 versus trunk handles default values (branch-2 pulls them from 
> configs if specified; trunk just returns the hard-coded value from the 
> constants file).
> I've fixed this to behave like trunk and return just the hard-coded value, 
> which causes the test to pass.
> There is no WebHdfsFileSystem constructor that takes a Config, which would 
> be another way to fix this.





[jira] [Updated] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-08-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8939:
--
Attachment: HDFS-8939-branch-2.002.patch

Branch-2 has a test for these custom properties, which of course fails with 
their removal.  I've updated the patch to remove the test, but I'm getting a 
bit concerned about the backwards compatibility of this change.  Overriding 
the defaults via config is of course wonky and is correct to remove; I'm not 
convinced it's safe to remove it in branch-2, however...

> Test(S)WebHdfsFileContextMainOperations failing on branch-2
> ---
>
> Key: HDFS-8939
> URL: https://issues.apache.org/jira/browse/HDFS-8939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Fix For: 2.8.0
>
> Attachments: HDFS-8939-branch-2.001.patch, 
> HDFS-8939-branch-2.002.patch
>
>
> After HDFS-8180, TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
> instantiating the wrapped WebHDFSFileSystems, because {{getDefaultPort}} is 
> trying to access a conf that was never provided.  In the constructors of both 
> WebHdfs and SWebhdfs, the underlying (S)WebHdfsFileSystems are instantiated 
> and never have a chance to have their {{setConf}} methods called:
> {code}  SWebHdfs(URI theUri, Configuration conf)
>   throws IOException, URISyntaxException {
>     super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
>   }{code}
> The test passes on trunk because HDFS-5321 removed the call to the 
> Configuration instance as part of {{getDefaultPort}}.  HDFS-5321 was applied 
> to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
> how branch-2 versus trunk handles default values (branch-2 pulls them from 
> configs if specified; trunk just returns the hard-coded value from the 
> constants file).
> I've fixed this to behave like trunk and return just the hard-coded value, 
> which causes the test to pass.
> There is no WebHdfsFileSystem constructor that takes a Config, which would 
> be another way to fix this.





[jira] [Assigned] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-08-21 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan reassigned HDFS-8939:
-

Assignee: Jakob Homan

> Test(S)WebHdfsFileContextMainOperations failing on branch-2
> ---
>
> Key: HDFS-8939
> URL: https://issues.apache.org/jira/browse/HDFS-8939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Fix For: 2.8.0
>
> Attachments: HDFS-8939-branch-2.001.patch
>
>
> After HDFS-8180, TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
> instantiating the wrapped WebHDFSFileSystems, because {{getDefaultPort}} is 
> trying to access a conf that was never provided.  In the constructors of both 
> WebHdfs and SWebhdfs, the underlying (S)WebHdfsFileSystems are instantiated 
> and never have a chance to have their {{setConf}} methods called:
> {code}  SWebHdfs(URI theUri, Configuration conf)
>   throws IOException, URISyntaxException {
>     super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
>   }{code}
> The test passes on trunk because HDFS-5321 removed the call to the 
> Configuration instance as part of {{getDefaultPort}}.  HDFS-5321 was applied 
> to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
> how branch-2 versus trunk handles default values (branch-2 pulls them from 
> configs if specified; trunk just returns the hard-coded value from the 
> constants file).
> I've fixed this to behave like trunk and return just the hard-coded value, 
> which causes the test to pass.
> There is no WebHdfsFileSystem constructor that takes a Config, which would 
> be another way to fix this.





[jira] [Updated] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-08-21 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8939:
--
Status: Patch Available  (was: Open)

> Test(S)WebHdfsFileContextMainOperations failing on branch-2
> ---
>
> Key: HDFS-8939
> URL: https://issues.apache.org/jira/browse/HDFS-8939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0
>Reporter: Jakob Homan
> Fix For: 2.8.0
>
> Attachments: HDFS-8939-branch-2.001.patch
>
>
> After HDFS-8180, TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
> instantiating the wrapped WebHDFSFileSystems because {{getDefaultPort}} is 
> trying to access a conf that was never provided.  In the constructors of 
> both WebHdfs and SWebhdfs, the underlying (S)WebHdfsFileSystems are 
> instantiated and never have a chance to have their {{setConf}} methods 
> called:
> {code}  SWebHdfs(URI theUri, Configuration conf)
>   throws IOException, URISyntaxException {
> super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
>   }{code}
> The test passes on trunk because HDFS-5321 removed the call to the 
> Configuration instance as part of {{getDefaultPort}}.  HDFS-5321 was applied 
> to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
> how branch-2 versus trunk handles default values (branch-2 pulls them from 
> configs if specified, trunk just returns the hard-coded value from the 
> constants file).
> I've fixed this to behave like trunk and return just the hard-coded value, which 
> causes the test to pass.
> Adding a WebHdfsFileSystem constructor that takes a Config would be another 
> way to fix this.





[jira] [Updated] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-08-21 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8939:
--
Attachment: HDFS-8939-branch-2.001.patch

Match getDefaultPort behavior on trunk.

> Test(S)WebHdfsFileContextMainOperations failing on branch-2
> ---
>
> Key: HDFS-8939
> URL: https://issues.apache.org/jira/browse/HDFS-8939
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.8.0
>Reporter: Jakob Homan
> Fix For: 2.8.0
>
> Attachments: HDFS-8939-branch-2.001.patch
>
>
> After HDFS-8180, TestWebHdfsFileContextMainOperations and 
> TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
> instantiating the wrapped WebHDFSFileSystems because {{getDefaultPort}} is 
> trying to access a conf that was never provided.  In the constructors of 
> both WebHdfs and SWebhdfs, the underlying (S)WebHdfsFileSystems are 
> instantiated and never have a chance to have their {{setConf}} methods 
> called:
> {code}  SWebHdfs(URI theUri, Configuration conf)
>   throws IOException, URISyntaxException {
> super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
>   }{code}
> The test passes on trunk because HDFS-5321 removed the call to the 
> Configuration instance as part of {{getDefaultPort}}.  HDFS-5321 was applied 
> to branch-2 but reverted in HDFS-6632, so there's a bit of a difference in 
> how branch-2 versus trunk handles default values (branch-2 pulls them from 
> configs if specified, trunk just returns the hard-coded value from the 
> constants file).
> I've fixed this to behave like trunk and return just the hard-coded value, which 
> causes the test to pass.
> Adding a WebHdfsFileSystem constructor that takes a Config would be another 
> way to fix this.





[jira] [Created] (HDFS-8939) Test(S)WebHdfsFileContextMainOperations failing on branch-2

2015-08-21 Thread Jakob Homan (JIRA)
Jakob Homan created HDFS-8939:
-

 Summary: Test(S)WebHdfsFileContextMainOperations failing on 
branch-2
 Key: HDFS-8939
 URL: https://issues.apache.org/jira/browse/HDFS-8939
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.8.0
Reporter: Jakob Homan
 Fix For: 2.8.0


After HDFS-8180, TestWebHdfsFileContextMainOperations and 
TestSWebHdfsFileContextMainOperations are failing with runtime NPEs while 
instantiating the wrapped WebHDFSFileSystems because {{getDefaultPort}} is 
trying to access a conf that was never provided.  In the constructors of both 
WebHdfs and SWebhdfs, the underlying (S)WebHdfsFileSystems are instantiated 
and never have a chance to have their {{setConf}} methods called:
{code}  SWebHdfs(URI theUri, Configuration conf)
  throws IOException, URISyntaxException {
super(theUri, new SWebHdfsFileSystem(), conf, SCHEME, false);
  }{code}

The test passes on trunk because HDFS-5321 removed the call to the 
Configuration instance as part of {{getDefaultPort}}.  HDFS-5321 was applied to 
branch-2 but reverted in HDFS-6632, so there's a bit of a difference in how 
branch-2 versus trunk handles default values (branch-2 pulls them from configs 
if specified, trunk just returns the hard-coded value from the constants file).

I've fixed this to behave like trunk and return just the hard-coded value, which 
causes the test to pass.

Adding a WebHdfsFileSystem constructor that takes a Config would be another way 
to fix this.
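The failure mode can be illustrated with a minimal standalone sketch (stub `Conf` and `WebHdfsSketch` classes with an illustrative config key and port constant, not the real Hadoop types): a conf-backed {{getDefaultPort}} throws an NPE when {{setConf}} was never called on the wrapped filesystem, while the trunk-style hard-coded default cannot.

```java
import java.util.HashMap;
import java.util.Map;

// Stub standing in for Hadoop's Configuration; key/port below are illustrative.
class Conf {
    private final Map<String, Integer> values = new HashMap<>();
    void setInt(String key, int v) { values.put(key, v); }
    int getInt(String key, int dflt) { return values.getOrDefault(key, dflt); }
}

class WebHdfsSketch {
    static final int DEFAULT_PORT = 50070; // hypothetical constants-file value
    private Conf conf;                     // stays null until setConf() is called

    void setConf(Conf conf) { this.conf = conf; }

    // branch-2 style: NPEs if the wrapped filesystem never had setConf() called
    int getDefaultPortFromConf() {
        return conf.getInt("dfs.http.port", DEFAULT_PORT);
    }

    // trunk style (the behavior the patch matches): no conf access, no NPE
    int getDefaultPortHardCoded() {
        return DEFAULT_PORT;
    }
}
```

Calling {{getDefaultPortFromConf}} on a freshly constructed instance throws NullPointerException; {{getDefaultPortHardCoded}} always succeeds, which is why matching trunk fixes the test.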








[jira] [Updated] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-08-18 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8435:
--
Status: Patch Available  (was: Open)

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch, 
> HDFS-8435.002.patch, HDFS-8435.003.patch, HDFS-8435.004.patch, 
> HDFS-8435.005.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-08-18 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8435:
--
Status: Open  (was: Patch Available)

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch, 
> HDFS-8435.002.patch, HDFS-8435.003.patch, HDFS-8435.004.patch, 
> HDFS-8435.005.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-08-18 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8435:
--
Attachment: HDFS-8435.005.patch

One last patch that suppresses the warning rather than accepting it.  Again, 
test failures are spurious.

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch, 
> HDFS-8435.002.patch, HDFS-8435.003.patch, HDFS-8435.004.patch, 
> HDFS-8435.005.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-08-18 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8435:
--
Status: Patch Available  (was: Open)

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch, 
> HDFS-8435.002.patch, HDFS-8435.003.patch, HDFS-8435.004.patch, 
> HDFS-8435.005.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-08-18 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8435:
--
Status: Open  (was: Patch Available)

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch, 
> HDFS-8435.002.patch, HDFS-8435.003.patch, HDFS-8435.004.patch, 
> HDFS-8435.005.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-08-17 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8435:
--
Status: Patch Available  (was: Open)

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch, 
> HDFS-8435.002.patch, HDFS-8435.003.patch, HDFS-8435.004.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-08-17 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8435:
--
Attachment: HDFS-8435.004.patch

Fixed javadoc and whitespace complaints.  Unfortunately, as we're adding a 
deprecated API to WebHDFS, the javac warning is unavoidable.  Unit tests that 
failed/timed-out on Jenkins pass repeatedly for me; I consider them spurious.

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch, 
> HDFS-8435.002.patch, HDFS-8435.003.patch, HDFS-8435.004.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-08-17 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8435:
--
Status: Open  (was: Patch Available)

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch, 
> HDFS-8435.002.patch, HDFS-8435.003.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-7822) Make webhdfs handling of URI standard compliant

2015-08-17 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-7822:
--
Summary: Make webhdfs handling of URI standard compliant  (was: Make 
webhdfs handling of URI stardard compliant)

> Make webhdfs handling of URI standard compliant
> ---
>
> Key: HDFS-7822
> URL: https://issues.apache.org/jira/browse/HDFS-7822
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Kihwal Lee
>Priority: Critical
>
> As seen in HDFS-7816, webhdfs client is not encoding URI properly. But since 
> webhdfs is often used as the compatibility layer, we cannot simply fix it and 
> break the compatibility. Instead, we should stage the fix so that breakages 
> caused by incompatibility can be minimized.





[jira] [Updated] (HDFS-7822) Make webhdfs handling of URI stardard compliant

2015-08-17 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-7822:
--
Component/s: webhdfs

> Make webhdfs handling of URI stardard compliant
> ---
>
> Key: HDFS-7822
> URL: https://issues.apache.org/jira/browse/HDFS-7822
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Reporter: Kihwal Lee
>Priority: Critical
>
> As seen in HDFS-7816, webhdfs client is not encoding URI properly. But since 
> webhdfs is often used as the compatibility layer, we cannot simply fix it and 
> break the compatibility. Instead, we should stage the fix so that breakages 
> caused by incompatibility can be minimized.





[jira] [Updated] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-08-13 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8435:
--
Status: Patch Available  (was: Open)

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch, 
> HDFS-8435.002.patch, HDFS-8435.003.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-08-13 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8435:
--
Attachment: HDFS-8435.003.patch

New patch that applies to both trunk and branch 2.  

The failed tests were because the createParent param in WebHDFS defaulted to 
false but was then ignored by the actual call and overridden to true in the 
create call on the dfsclient.  I've fixed this to honor the parameter and 
updated the spec to be correct.

Good catch on the throw.  Removed.

I had played around with that uber test a bit.  Using the annotation loses the 
explicit indication of what went wrong in each test.  I put as much into the 
helper method as looked reasonable (judgment call here); when I put more of the 
per-test logic into the helper (expected exception, subsequent message), it got 
really crowded and ugly.  
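The delegation pattern behind the patch can be sketched in isolation (hypothetical BaseFs/WebHdfsLikeFs stand-ins, not the actual Hadoop classes): the base class rejects createNonRecursive by default, and the subclass supports it by routing to create with createParent = false, the parameter that must actually be honored rather than overridden to true.

```java
// Hypothetical stand-in for a FileSystem base class that does not support
// createNonRecursive (names and return types simplified for illustration).
class BaseFs {
    String create(String path, boolean createParent) {
        return "created " + path + " (createParent=" + createParent + ")";
    }
    String createNonRecursive(String path) {
        // mirrors the "createNonRecursive unsupported for this filesystem"
        // failure HBase region servers hit over WebHDFS
        throw new UnsupportedOperationException(
            "createNonRecursive unsupported for this filesystem "
            + getClass().getName());
    }
}

// Subclass adding support by delegating with createParent = false.
class WebHdfsLikeFs extends BaseFs {
    @Override
    String createNonRecursive(String path) {
        // the key point from the patch discussion: honor createParent=false
        // instead of silently overriding it to true downstream
        return create(path, false);
    }
}
```

With the base class alone, createNonRecursive throws; with the subclass, it delegates and preserves the non-recursive semantics.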

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch, 
> HDFS-8435.002.patch, HDFS-8435.003.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-08-13 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8435:
--
Status: Open  (was: Patch Available)

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch, 
> HDFS-8435.002.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8866) Typo in docs: Rumtime -> Runtime

2015-08-07 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8866:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+rum.  I've committed this.  Thanks, Gabor!

> Typo in docs: Rumtime -> Runtime
> 
>
> Key: HDFS-8866
> URL: https://issues.apache.org/jira/browse/HDFS-8866
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, webhdfs
>Reporter: Jakob Homan
>Assignee: Gabor Liptak
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.8.0
>
> Attachments: HDFS-8866.1.patch
>
>
> From WebHDFS site doc:
> {noformat}### HTTP Response Codes
> | Exceptions | HTTP Response Codes |
> |: |: |
> | `IllegalArgumentException ` | `400 Bad Request ` |
> | `UnsupportedOperationException` | `400 Bad Request ` |
> | `SecurityException ` | `401 Unauthorized ` |
> | `IOException ` | `403 Forbidden ` |
> | `FileNotFoundException ` | `404 Not Found ` |
> | `RumtimeException ` | `500 Internal Server Error` |{noformat}
> Everyone knows there's no exception to rum time.  Rum time is mandatory, but 
> irrelevant to WebHDFS.  Let's make it Runtime...





[jira] [Updated] (HDFS-8866) Typo in docs: Rumtime -> Runtime

2015-08-06 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8866:
--
Labels: newbie  (was: )

> Typo in docs: Rumtime -> Runtime
> 
>
> Key: HDFS-8866
> URL: https://issues.apache.org/jira/browse/HDFS-8866
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: documentation, webhdfs
>Reporter: Jakob Homan
>  Labels: newbie
>
> From WebHDFS site doc:
> {noformat}### HTTP Response Codes
> | Exceptions | HTTP Response Codes |
> |: |: |
> | `IllegalArgumentException ` | `400 Bad Request ` |
> | `UnsupportedOperationException` | `400 Bad Request ` |
> | `SecurityException ` | `401 Unauthorized ` |
> | `IOException ` | `403 Forbidden ` |
> | `FileNotFoundException ` | `404 Not Found ` |
> | `RumtimeException ` | `500 Internal Server Error` |{noformat}
> Everyone knows there's no exception to rum time.  Rum time is mandatory, but 
> irrelevant to WebHDFS.  Let's make it Runtime...





[jira] [Created] (HDFS-8866) Typo in docs: Rumtime -> Runtime

2015-08-06 Thread Jakob Homan (JIRA)
Jakob Homan created HDFS-8866:
-

 Summary: Typo in docs: Rumtime -> Runtime
 Key: HDFS-8866
 URL: https://issues.apache.org/jira/browse/HDFS-8866
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation, webhdfs
Reporter: Jakob Homan


From WebHDFS site doc:
{noformat}### HTTP Response Codes

| Exceptions | HTTP Response Codes |
|: |: |
| `IllegalArgumentException ` | `400 Bad Request ` |
| `UnsupportedOperationException` | `400 Bad Request ` |
| `SecurityException ` | `401 Unauthorized ` |
| `IOException ` | `403 Forbidden ` |
| `FileNotFoundException ` | `404 Not Found ` |
| `RumtimeException ` | `500 Internal Server Error` |{noformat}
Everyone knows there's no exception to rum time.  Rum time is mandatory, but 
irrelevant to WebHDFS.  Let's make it Runtime...





[jira] [Updated] (HDFS-8155) Support OAuth2 in WebHDFS

2015-07-31 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8155:
--
Attachment: HDFS-8155-1.patch

First patch for review.  We've been testing a version of this code for a few 
months and it's working well.

Two types of OAuth code grants (client credentials and refresh/access tokens 
provided by the conf) are supported by default and other code grants are user 
implementable.  I had planned on using Apache Oltu for this, but that project 
doesn't seem very active and its main benefit - special-case support for 
oauth2 providers like github/twitter/fb, etc. - is of marginal benefit for 
WebHDFS and could easily be implemented by the user if necessary.

I didn't end up using the Authenticator client class because it's too closely 
tied to the spnego implementation, but after this goes in it will be a good 
idea to make that class more generic and use it for the oauth stuff as well.
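The pluggable grant pattern described above can be sketched with a hedged, standalone example (illustrative names, not the actual Hadoop API): a small token-provider interface with two conf-driven implementations, client credentials and a pre-issued refresh/access token pair, that a custom grant type could also implement.

```java
// Illustrative interface: one method returning the current OAuth2 access token.
interface AccessTokenProvider {
    String getAccessToken();
}

// Grant 1: client credentials. A real implementation would POST the id/secret
// to the provider's token endpoint and cache the returned token; stubbed here.
class CredentialBasedProvider implements AccessTokenProvider {
    private final String clientId;
    CredentialBasedProvider(String clientId, String clientSecret) {
        this.clientId = clientId;
        // clientSecret would be sent to the token endpoint, never logged
    }
    public String getAccessToken() { return "access-token-for-" + clientId; }
}

// Grant 2: access/refresh tokens supplied directly by the configuration.
class ConfTokenBasedProvider implements AccessTokenProvider {
    private final String accessToken;
    ConfTokenBasedProvider(String accessToken, String refreshToken) {
        this.accessToken = accessToken;
        // a real implementation would use refreshToken to renew accessToken
        // when it expires
    }
    public String getAccessToken() { return accessToken; }
}
```

A WebHDFS client would then attach the provider's token as a Bearer credential on each request; swapping in a github/twitter-specific grant is just another implementation of the interface.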

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
> Attachments: HDFS-8155-1.patch
>
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Assigned] (HDFS-8155) Support OAuth2 in WebHDFS

2015-07-31 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan reassigned HDFS-8155:
-

Assignee: Jakob Homan  (was: Kai Zheng)

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Updated] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-28 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8180:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

I've committed this.  Resolving.  Thanks, Santhosh!

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Fix For: 2.8.0
>
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch, 
> HDFS-8180-4.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.





[jira] [Commented] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-28 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645417#comment-14645417
 ] 

Jakob Homan commented on HDFS-8180:
---

+1

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch, 
> HDFS-8180-4.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.





[jira] [Updated] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-28 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8180:
--
Status: Patch Available  (was: Open)

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch, 
> HDFS-8180-4.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.





[jira] [Updated] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-28 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8180:
--
Status: Open  (was: Patch Available)

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch, 
> HDFS-8180-4.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.





[jira] [Commented] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-26 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14642270#comment-14642270
 ] 

Jakob Homan commented on HDFS-8180:
---

The failed unit tests pass for me; I'm assuming the failures were transient. 
[~snayak], can you fix the checkstyle errors Jenkins flagged, and we'll commit 
this?

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Jakob Homan
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.





[jira] [Updated] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8180:
--
Status: Patch Available  (was: In Progress)

Submitting patch to be picked up by Jenkins.  For some reason the submit-patch 
button wasn't showing up for me until I assigned the JIRA to myself.  Not sure 
why.

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Jakob Homan
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.





[jira] [Assigned] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan reassigned HDFS-8180:
-

Assignee: Jakob Homan  (was: Santhosh G Nayak)

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Jakob Homan
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.





[jira] [Updated] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8180:
--
Flags: Patch

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.





[jira] [Updated] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-07-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8180:
--
Issue Type: Improvement  (was: New Feature)

> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch, HDFS-8180-3.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.





[jira] [Commented] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-07-01 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14610744#comment-14610744
 ] 

Jakob Homan commented on HDFS-8435:
---

Haven't had a chance to look. Scheduled for Thursday, maybe Monday.

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch, 
> HDFS-8435.002.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-06-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8435:
--
Status: Patch Available  (was: Open)

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch, 
> HDFS-8435.002.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-06-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8435:
--
Attachment: HDFS-8435.002.patch

Thanks, Chris.  Fixed the nit and added overwrite to CreateFlag, as it's implied 
by the overwrite parameter.  I think I confused Jenkins with two patches last 
time, so I'm trying just one.  This patch applies to both trunk and branch-2.  
After Jenkins' OK and your renewed +1, I'll commit to both places.

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch, 
> HDFS-8435.002.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-06-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8435:
--
Status: Open  (was: Patch Available)

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8542) WebHDFS getHomeDirectory behavior does not match specification

2015-06-22 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8542:
--
   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Jenkins isn't running against minor versions.  I've committed this to trunk and 
branch-2.  Thanks, Kanaka.  Resolving.

> WebHDFS getHomeDirectory behavior does not match specification
> --
>
> Key: HDFS-8542
> URL: https://issues.apache.org/jira/browse/HDFS-8542
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Jakob Homan
>Assignee: kanaka kumar avvaru
> Fix For: 2.8.0
>
> Attachments: HDFS-8542-00.patch, HDFS-8542-01.patch, 
> HDFS-8542-02.patch, HDFS-8542-branch-2.7.002.patch
>
>
> Per the 
> [spec|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_Home_Directory],
>  WebHDFS provides a REST endpoint for getting the user's home directory:
> {noformat}Submit a HTTP GET request.
> curl -i "http://<HOST>:<PORT>/webhdfs/v1/?op=GETHOMEDIRECTORY"{noformat}
> However, WebHDFSFileSystem.java does not use this, instead building the home 
> [directory 
> locally|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L271]:
> {code}  /** @return the home directory. */
>   public static String getHomeDirectoryString(final UserGroupInformation ugi) 
> {
> return "/user/" + ugi.getShortUserName();
>   }
>   @Override
>   public Path getHomeDirectory() {
> return makeQualified(new Path(getHomeDirectoryString(ugi)));
>   }{code}
> The WebHDFSFileSystem client should call to the REST service to determine the 
> home directory.
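Per the WebHDFS documentation, GETHOMEDIRECTORY returns a small JSON object of the form {noformat}{"Path": "/user/<name>"}{noformat} so the client-side fix amounts to issuing the GET and reading that field instead of composing the path locally. A minimal sketch using a canned response is below; the naive regex extraction is purely illustrative (the real client would go through its own JSON utilities, e.g. JsonUtils), and the sample user name is hypothetical:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HomeDirSketch {
    // Extract the "Path" value from a GETHOMEDIRECTORY JSON response.
    // Illustrative only: a real client would use proper JSON parsing.
    static String parseHomeDirectory(String json) {
        Matcher m = Pattern.compile("\"Path\"\\s*:\\s*\"([^\"]+)\"").matcher(json);
        if (!m.find()) {
            throw new IllegalArgumentException("no Path in response: " + json);
        }
        return m.group(1);
    }

    public static void main(String[] args) {
        // Canned response matching the documented shape of the endpoint.
        String response = "{\"Path\": \"/user/jdoe\"}";
        System.out.println(parseHomeDirectory(response)); // /user/jdoe
    }
}
```

Asking the server this way also keeps the client honest when the backing system maps users to nonstandard home directories, which the hardcoded "/user/" + shortname scheme cannot.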





[jira] [Updated] (HDFS-8542) WebHDFS getHomeDirectory behavior does not match specification

2015-06-22 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8542:
--
Status: Patch Available  (was: Open)

> WebHDFS getHomeDirectory behavior does not match specification
> --
>
> Key: HDFS-8542
> URL: https://issues.apache.org/jira/browse/HDFS-8542
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Jakob Homan
>Assignee: kanaka kumar avvaru
> Attachments: HDFS-8542-00.patch, HDFS-8542-01.patch, 
> HDFS-8542-02.patch, HDFS-8542-branch-2.7.002.patch
>
>
> Per the 
> [spec|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_Home_Directory],
>  WebHDFS provides a REST endpoint for getting the user's home directory:
> {noformat}Submit a HTTP GET request.
> curl -i "http://<HOST>:<PORT>/webhdfs/v1/?op=GETHOMEDIRECTORY"{noformat}
> However, WebHDFSFileSystem.java does not use this, instead building the home 
> [directory 
> locally|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L271]:
> {code}  /** @return the home directory. */
>   public static String getHomeDirectoryString(final UserGroupInformation ugi) 
> {
> return "/user/" + ugi.getShortUserName();
>   }
>   @Override
>   public Path getHomeDirectory() {
> return makeQualified(new Path(getHomeDirectoryString(ugi)));
>   }{code}
> The WebHDFSFileSystem client should call to the REST service to determine the 
> home directory.





[jira] [Updated] (HDFS-8542) WebHDFS getHomeDirectory behavior does not match specification

2015-06-22 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8542:
--
Status: Open  (was: Patch Available)

> WebHDFS getHomeDirectory behavior does not match specification
> --
>
> Key: HDFS-8542
> URL: https://issues.apache.org/jira/browse/HDFS-8542
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Jakob Homan
>Assignee: kanaka kumar avvaru
> Attachments: HDFS-8542-00.patch, HDFS-8542-01.patch, 
> HDFS-8542-02.patch, HDFS-8542-branch-2.7.002.patch
>
>
> Per the 
> [spec|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_Home_Directory],
>  WebHDFS provides a REST endpoint for getting the user's home directory:
> {noformat}Submit a HTTP GET request.
> curl -i "http://<HOST>:<PORT>/webhdfs/v1/?op=GETHOMEDIRECTORY"{noformat}
> However, WebHDFSFileSystem.java does not use this, instead building the home 
> [directory 
> locally|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L271]:
> {code}  /** @return the home directory. */
>   public static String getHomeDirectoryString(final UserGroupInformation ugi) 
> {
> return "/user/" + ugi.getShortUserName();
>   }
>   @Override
>   public Path getHomeDirectory() {
> return makeQualified(new Path(getHomeDirectoryString(ugi)));
>   }{code}
> The WebHDFSFileSystem client should call to the REST service to determine the 
> home directory.





[jira] [Resolved] (HDFS-3620) WebHdfsFileSystem getHomeDirectory() should not resolve locally

2015-06-22 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan resolved HDFS-3620.
---
Resolution: Duplicate

This issue was duplicated and dealt with in HDFS-8542.

> WebHdfsFileSystem getHomeDirectory() should not resolve locally
> ---
>
> Key: HDFS-3620
> URL: https://issues.apache.org/jira/browse/HDFS-3620
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Alejandro Abdelnur
>Priority: Critical
>
> The WebHdfsFileSystem getHomeDirectory() method is hardcoded to return 
> '/user/' + UGI#shortname. Instead, it should make an HTTP REST call with 
> op=GETHOMEDIRECTORY.





[jira] [Updated] (HDFS-8542) WebHDFS getHomeDirectory behavior does not match specification

2015-06-22 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8542:
--
Attachment: HDFS-8542-branch-2.7.002.patch

I'm still not wild about caching the result since, again, (a) the value is never 
discarded, so it's not really a cache, and (b) backing systems could choose to 
change this value on a subsequent call.  However, both FileSystem and 
DistributedFileSystem are doing some questionable things with this API, so I'll 
worry about those issues later, if we run into them.

+1 on the current patch.  The failed tests are spurious.  Attaching a version 
for 2.7 (the same except for the location of JsonUtils).  Will commit both 
after Jenkins has a pass over the backport.

> WebHDFS getHomeDirectory behavior does not match specification
> --
>
> Key: HDFS-8542
> URL: https://issues.apache.org/jira/browse/HDFS-8542
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Jakob Homan
>Assignee: kanaka kumar avvaru
> Attachments: HDFS-8542-00.patch, HDFS-8542-01.patch, 
> HDFS-8542-02.patch, HDFS-8542-branch-2.7.002.patch
>
>
> Per the 
> [spec|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_Home_Directory],
>  WebHDFS provides a REST endpoint for getting the user's home directory:
> {noformat}Submit a HTTP GET request.
> curl -i "http://<HOST>:<PORT>/webhdfs/v1/?op=GETHOMEDIRECTORY"{noformat}
> However, WebHDFSFileSystem.java does not use this, instead building the home 
> [directory 
> locally|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L271]:
> {code}  /** @return the home directory. */
>   public static String getHomeDirectoryString(final UserGroupInformation ugi) 
> {
> return "/user/" + ugi.getShortUserName();
>   }
>   @Override
>   public Path getHomeDirectory() {
> return makeQualified(new Path(getHomeDirectoryString(ugi)));
>   }{code}
> The WebHDFSFileSystem client should call to the REST service to determine the 
> home directory.





[jira] [Updated] (HDFS-8542) WebHDFS getHomeDirectory behavior does not match specification

2015-06-19 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8542:
--
Status: Open  (was: Patch Available)

Canceling patch post-review.

> WebHDFS getHomeDirectory behavior does not match specification
> --
>
> Key: HDFS-8542
> URL: https://issues.apache.org/jira/browse/HDFS-8542
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Jakob Homan
>Assignee: kanaka kumar avvaru
> Attachments: HDFS-8542-00.patch, HDFS-8542-01.patch
>
>
> Per the 
> [spec|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_Home_Directory],
>  WebHDFS provides a REST endpoint for getting the user's home directory:
> {noformat}Submit a HTTP GET request.
> curl -i "http://<HOST>:<PORT>/webhdfs/v1/?op=GETHOMEDIRECTORY"{noformat}
> However, WebHDFSFileSystem.java does not use this, instead building the home 
> [directory 
> locally|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L271]:
> {code}  /** @return the home directory. */
>   public static String getHomeDirectoryString(final UserGroupInformation ugi) 
> {
> return "/user/" + ugi.getShortUserName();
>   }
>   @Override
>   public Path getHomeDirectory() {
> return makeQualified(new Path(getHomeDirectoryString(ugi)));
>   }{code}
> The WebHDFSFileSystem client should call to the REST service to determine the 
> home directory.





[jira] [Commented] (HDFS-8542) WebHDFS getHomeDirectory behavior does not match specification

2015-06-19 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14594161#comment-14594161
 ] 

Jakob Homan commented on HDFS-8542:
---

* The checkstyle ding is a bit convoluted, but probably correct since the 
matching brace is nested pretty heavily.  Let's make checkstyle happy and give 
it what it wants.
* Please remove the println from the unit test
* Not sure it's worth caching the response here.  Home directories *shouldn't* 
change, but there's no huge reason why they couldn't between calls.  Since 
there's no way to clear the cache and this call is pretty light, I'd rather not 
cache.  Not caching will simplify the code.
* The existing org/apache/hadoop/hdfs/web/TestWebHdfsFileSystemContract.java is 
incorrectly testing the getHomeDirectory method:
{code}{//test GETHOMEDIRECTORY
  final URL url = webhdfs.toUrl(GetOpParam.Op.GETHOMEDIRECTORY, root);
  final HttpURLConnection conn = (HttpURLConnection) url.openConnection();
  final Map m = WebHdfsTestUtil.connectAndGetJson(
  conn, HttpServletResponse.SC_OK);
  assertEquals(WebHdfsFileSystem.getHomeDirectoryString(ugi),
  m.get(Path.class.getSimpleName()));
  conn.disconnect();
}{code} 
since it's calling the static getHomeDirectoryString method rather than the 
instance getHomeDirectory method.  We should fix this and deprecate the static 
getHomeDirectoryString method since there's no use (or callers) for it.

Otherwise looks good.  Thanks.

> WebHDFS getHomeDirectory behavior does not match specification
> --
>
> Key: HDFS-8542
> URL: https://issues.apache.org/jira/browse/HDFS-8542
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Jakob Homan
>Assignee: kanaka kumar avvaru
> Attachments: HDFS-8542-00.patch, HDFS-8542-01.patch
>
>
> Per the 
> [spec|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_Home_Directory],
>  WebHDFS provides a REST endpoint for getting the user's home directory:
> {noformat}Submit a HTTP GET request.
> curl -i "http://<HOST>:<PORT>/webhdfs/v1/?op=GETHOMEDIRECTORY"{noformat}
> However, WebHDFSFileSystem.java does not use this, instead building the home 
> [directory 
> locally|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L271]:
> {code}  /** @return the home directory. */
>   public static String getHomeDirectoryString(final UserGroupInformation ugi) 
> {
> return "/user/" + ugi.getShortUserName();
>   }
>   @Override
>   public Path getHomeDirectory() {
> return makeQualified(new Path(getHomeDirectoryString(ugi)));
>   }{code}
> The WebHDFSFileSystem client should call to the REST service to determine the 
> home directory.





[jira] [Updated] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-06-19 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8435:
--
Affects Version/s: 2.6.0
   Status: Patch Available  (was: Open)

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-06-19 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8435:
--
Attachment: HDFS-8435-branch-2.7.001.patch
HDFS-8435.001.patch

Patches to add support for createNonRecursive in WebHDFS.

* Add the new method to WebHDFS itself
* Create the required CreateFlag param (with documentation)
* Refactor the TestFileCreate method that addresses createNonRecursive to be 
accessible from TestWebHDFS, and use it.
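The semantic the new param has to carry is parent-creation behavior: create() makes any missing parent directories, while createNonRecursive() must fail when the parent is absent, which is what HBase's WAL writer relies on. A local-filesystem sketch of the two behaviors (this is an analogy using java.nio, not the WebHDFS code itself):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class NonRecursiveSketch {
    // create(): make any missing parent directories, then the file.
    static void create(Path file) throws IOException {
        Files.createDirectories(file.getParent());
        Files.createFile(file);
    }

    // createNonRecursive(): refuse to create the file if the parent
    // directory is absent - the behavior HBase's WAL writer depends on.
    static void createNonRecursive(Path file) throws IOException {
        if (!Files.isDirectory(file.getParent())) {
            throw new IOException("Parent does not exist: " + file.getParent());
        }
        Files.createFile(file);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("sketch");
        create(dir.resolve("a/b/file1"));               // parents made on demand
        try {
            createNonRecursive(dir.resolve("c/file2")); // parent "c" is missing
        } catch (IOException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

In the REST mapping, the distinction would surface as an extra query parameter on the create operation, which is what the CreateFlag param above provides.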

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
> Attachments: HDFS-8435-branch-2.7.001.patch, HDFS-8435.001.patch
>
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8348) WebHDFS Concat - Support sources list passed a POST body

2015-06-15 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8348:
--
Assignee: vishwajeet dusane

> WebHDFS Concat - Support sources list passed a POST body
> 
>
> Key: HDFS-8348
> URL: https://issues.apache.org/jira/browse/HDFS-8348
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: vishwajeet dusane
>Assignee: vishwajeet dusane
>
> Problem - Current webhdfs behavior for concat support sources list to be 
> passed as query parameter. This approach limits on the number of sources list 
> send as query parameter. 
> Proposed Solution - Add support to send sources list as part of the request 
> body instead of query parameter.





[jira] [Created] (HDFS-8542) WebHDFS getHomeDirectory behavior does not match specification

2015-06-04 Thread Jakob Homan (JIRA)
Jakob Homan created HDFS-8542:
-

 Summary: WebHDFS getHomeDirectory behavior does not match 
specification
 Key: HDFS-8542
 URL: https://issues.apache.org/jira/browse/HDFS-8542
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Jakob Homan


Per the 
[spec|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Get_Home_Directory],
 WebHDFS provides a REST endpoint for getting the user's home directory:
{noformat}Submit a HTTP GET request.

curl -i "http://:/webhdfs/v1/?op=GETHOMEDIRECTORY"{noformat}

However, WebHDFSFileSystem.java does not use this, instead building the home 
[directory 
locally|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L271]:
{code}  /** @return the home directory. */
  public static String getHomeDirectoryString(final UserGroupInformation ugi) {
return "/user/" + ugi.getShortUserName();
  }

  @Override
  public Path getHomeDirectory() {
return makeQualified(new Path(getHomeDirectoryString(ugi)));
  }{code}

The WebHdfsFileSystem client should call the REST service to determine the 
home directory.
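As a minimal sketch of the proposed fix (not from the JIRA; the helper names and host/port are illustrative), a client could build the documented GETHOMEDIRECTORY request and parse its JSON response rather than constructing the path locally:

```python
import json
from urllib.parse import urlencode

# Illustrative only -- these helpers are not part of WebHdfsFileSystem.

def home_directory_url(host, port):
    """Build the documented GETHOMEDIRECTORY request URL."""
    query = urlencode({"op": "GETHOMEDIRECTORY"})
    return "http://%s:%d/webhdfs/v1/?%s" % (host, port, query)

def parse_home_directory(response_body):
    """Extract the path from the JSON response, e.g. {"Path": "/user/alice"}."""
    return json.loads(response_body)["Path"]
```

With this shape, the server remains the single source of truth for the home-directory convention instead of the client hardcoding "/user/" + username.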






[jira] [Created] (HDFS-8490) Typo in trace enabled log in WebHDFS exception handler

2015-05-27 Thread Jakob Homan (JIRA)
Jakob Homan created HDFS-8490:
-

 Summary: Typo in trace enabled log in WebHDFS exception handler
 Key: HDFS-8490
 URL: https://issues.apache.org/jira/browse/HDFS-8490
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: webhdfs
Reporter: Jakob Homan
Priority: Trivial


/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/ExceptionHandler.java:
{code}  static DefaultFullHttpResponse exceptionCaught(Throwable cause) {
Exception e = cause instanceof Exception ? (Exception) cause : new 
Exception(cause);

if (LOG.isTraceEnabled()) {
  LOG.trace("GOT EXCEPITION", e);
}{code}
EXCEPITION is a typo.





[jira] [Assigned] (HDFS-8435) createNonRecursive support needed in WebHdfsFileSystem to support HBase

2015-05-20 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan reassigned HDFS-8435:
-

Assignee: Jakob Homan

> createNonRecursive support needed in WebHdfsFileSystem to support HBase
> ---
>
> Key: HDFS-8435
> URL: https://issues.apache.org/jira/browse/HDFS-8435
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Vinoth Sathappan
>Assignee: Jakob Homan
>
> The WebHdfsFileSystem implementation doesn't support createNonRecursive. 
> HBase extensively depends on that for proper functioning. Currently, when the 
> region servers are started over WebHDFS, they crash with:
> createNonRecursive unsupported for this filesystem class 
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1137)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1112)
> at 
> org.apache.hadoop.fs.FileSystem.createNonRecursive(FileSystem.java:1088)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.init(ProtobufLogWriter.java:85)
> at 
> org.apache.hadoop.hbase.regionserver.wal.HLogFactory.createWriter(HLogFactory.java:198)





[jira] [Updated] (HDFS-8348) WebHDFS Concat - Support sources list passed a POST body

2015-05-20 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-8348:
--
Summary: WebHDFS Concat - Support sources list passed a POST body  (was: 
Concat - Support sources list passed a POST body)

> WebHDFS Concat - Support sources list passed a POST body
> 
>
> Key: HDFS-8348
> URL: https://issues.apache.org/jira/browse/HDFS-8348
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: vishwajeet dusane
>
> Problem - The current WebHDFS concat behavior requires the sources list to be 
> passed as a query parameter. This approach limits the number of sources that 
> can be sent. 
> Proposed Solution - Add support for sending the sources list as part of the 
> request body instead of as a query parameter.





[jira] [Commented] (HDFS-8180) AbstractFileSystem Implementation for WebHdfs

2015-05-14 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14544340#comment-14544340
 ] 

Jakob Homan commented on HDFS-8180:
---

Hey Santhosh-
   Thanks for the update.  A couple minor changes and I'll be ready to commit 
this:

* The two tests have quite a lot (say 70%) of duplicated code.  Can we have 
TestSWebHdfsFileContextMainOperations extend 
TestWebHdfsFileContextMainOperations?  This is particularly true since there 
are comments about the need to add more testing/verification on WebHDFS versus 
HDFS behavior in both tests.  Not sure if you'll run into problems with the 
JUnit {{@BeforeClass}} annotations.  If so, just go ahead and split out the 
common code to another, non-test class.
* The comments about WebHDFS/HDFS behavior are formatted incorrectly and run 
into their function definitions:
{noformat}  @Test
  /** Test FileContext APIs when symlinks are not supported
   * TODO: Open separate JIRA for full support of the Symlink in webhdfs
   * */ public void testUnsupportedSymlink() throws IOException {{noformat}
* Typo: clusterSetupAtBegining > clusterSetupAtBeginning



> AbstractFileSystem Implementation for WebHdfs
> -
>
> Key: HDFS-8180
> URL: https://issues.apache.org/jira/browse/HDFS-8180
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
>  Labels: hadoop
> Attachments: HDFS-8180-1.patch, HDFS-8180-2.patch
>
>
> Add AbstractFileSystem implementation for WebHdfs to support FileContext APIs.





[jira] [Commented] (HDFS-8348) Concat - Support sources list passed a POST body

2015-05-08 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534821#comment-14534821
 ] 

Jakob Homan commented on HDFS-8348:
---

So, we'd keep the current operation parameter and provide the ability to give, 
say, a JSON array of the paths as the body, i.e.:
{noformat}
{"paths" : [ "/a/b/c", "/a/b/d", "/a/b/e" ] }
{noformat}
Sounds reasonable to me. 
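A rough Python sketch of what that request could look like — the "paths" key comes from the comment above, but the helper name and path layout are illustrative, not from any patch:

```python
import json

# Illustrative sketch: keep op=CONCAT as a query parameter, but carry the
# sources list in a JSON POST body so it is not bounded by URL length limits.

def concat_request(target, sources):
    """Return the request path and the proposed JSON body for a concat call."""
    path = "/webhdfs/v1%s?op=CONCAT" % target
    body = json.dumps({"paths": sources})
    return path, body
```

The body can then grow to arbitrarily many sources without hitting the query-string limits that motivated this JIRA.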

> Concat - Support sources list passed a POST body
> 
>
> Key: HDFS-8348
> URL: https://issues.apache.org/jira/browse/HDFS-8348
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: vishwajeet dusane
>
> Problem - The current WebHDFS concat behavior requires the sources list to be 
> passed as a query parameter. This approach limits the number of sources that 
> can be sent. 
> Proposed Solution - Add support for sending the sources list as part of the 
> request body instead of as a query parameter.





[jira] [Commented] (HDFS-8290) WebHDFS calls before namesystem initialization can cause NullPointerException.

2015-04-29 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14520358#comment-14520358
 ] 

Jakob Homan commented on HDFS-8290:
---

+1.

> WebHDFS calls before namesystem initialization can cause NullPointerException.
> --
>
> Key: HDFS-8290
> URL: https://issues.apache.org/jira/browse/HDFS-8290
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 2.6.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Attachments: HDFS-8290.001.patch
>
>
> The NameNode has a brief window of time when the HTTP server has been 
> initialized, but the namesystem has not been initialized.  During this 
> window, a WebHDFS call can cause a {{NullPointerException}}.  We can catch 
> this condition and return a more meaningful error.





[jira] [Commented] (HDFS-8155) Support OAuth2 in WebHDFS

2015-04-17 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14500944#comment-14500944
 ] 

Jakob Homan commented on HDFS-8155:
---

bq. I agree when an initial authentication like Kerberos/SPNEGO is passed, a DT 
will need to be generated and passed to the server in all subsequent usages.
Why would it need to be? A backing store could accept a delegation token, or 
the datanodes could continue to accept the SPNEGO/Kerberos credentials or OAuth 
tokens.  DTs are one option, but I do not want to rule out 
datanodes/datanode-like-servers accepting standard credentials.

> Support OAuth2 in WebHDFS
> -
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Kai Zheng
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Commented] (HDFS-8155) Support OAuth2 authentication in WebHDFS

2015-04-16 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14499092#comment-14499092
 ] 

Jakob Homan commented on HDFS-8155:
---

bq. For the first use case – WebHDFS now recognizes the auth cookie of the UI 
therefore the UI works as long as any third-party filter behaves correctly 
w.r.t. the UI pages.
I agree.  I'm not considering UI right now.

bq. For the second use case – WebHDFS is designed to use DT as the 
authentication method.
WebHDFS supports [three distinct types of 
authentication|https://hadoop.apache.org/docs/r2.5.1/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Authentication]:
 SPNEGO, simple, and delegation token.  



Please consider this JIRA in light of the linked JIRA, HDFS-8154, which is going 
to extract WebHDFS as a separate interface that other backing stores will 
support.  Currently the only way for a backing store to gain access to the 
Hadoop ecosystem is to implement oah.FileSystem, which gives it access to 
JVM-based frameworks (Pig, Hive, Spark, etc.).  Additionally, such a store may 
wish to expose a REST interface to itself or provide easy access to non-JVM 
systems.  Such a system could define its own REST specification on top of 
oah.FileSystem, but that definition would look exactly (or pretty much) like 
what WebHDFS already defines.  Instead of such duplication, HDFS-8154 looks to 
make what we already have (WebHDFS) more general and useful.  As part of that, 
we need to add support for a more widely used authorization system, OAuth2.

An important point is that 
[WebHDFS|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java#L91]
 is misnamed:
{code:title=WebHDFSFileSystem.java}
public class WebHdfsFileSystem extends FileSystem
implements DelegationTokenRenewer.Renewable, 
TokenAspect.TokenManagementDelegator {
{code}
WebHDFS extends FileSystem, not DistributedFileSystem and so should properly be 
called WebFileSystem.  As such, the general purpose methods that it implements 
(and its REST endpoints expose) are suitable for implementation for lots of 
backing stores.  HDFS-8154 and this JIRA are about making that extensibility 
explicit and easy.

bq.  To authenticate, the third-party filter (OAuth2 filter included) should 
control when to issue a DT when getting the GETDELEGATIONTOKEN call. The DT 
needs to be presented to the server in all subsequent usages.
Not all file systems issue delegation tokens, so it should not be a requirement 
for WebHDFS-backed systems to either.  Instead, OAuth2 credentials (generic 
credentials per RFC spec section 4.3, explicit bearer/refresh tokens, or even 
maybe plaintext password/usernames) should be able to be provided and passed 
into whatever framework is actually handling the negotiation (ie, the filters).

bq. I don't think injecting any third-party payload (e.g., OAuth tokens) into 
WebHdfsFileSystem make sense.
SPNEGO is already a third-party payload.  This JIRA only adds OAuth as another 
option.

> Support OAuth2 authentication in WebHDFS
> 
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Kai Zheng
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Commented] (HDFS-8155) Support OAuth2 authentication in WebHDFS

2015-04-16 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14498281#comment-14498281
 ] 

Jakob Homan commented on HDFS-8155:
---

Hey Kai-
   This JIRA is part of the larger effort of 8154 to make the WebHDFS REST 
specification more general and accessible to other clients and back-end 
implementations.  It will likely build on your work to add OAuth2 throughout 
the system.  

Effectively, this JIRA is for two items: a) add OAuth2 as a possible 
[authentication 
method|https://hadoop.apache.org/docs/r2.5.1/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Authentication]
 (along with SPNEGO, simple and delegation tokens) and b) add support in the 
WebHDFSFileSystem for passing OAuth tokens (or obtaining those tokens via 
configuration-supplied credentials or user/name password) to the WebHDFS 
backend.  I'm interested in the client and non-Namenode WebHDFS backends, while 
you're focusing on the Namenode and other current components.  

I would like to get the change to the WebHDFS spec and support on the client in 
soon.  Happy to use your code, or to commit it if it's ready.

> Support OAuth2 authentication in WebHDFS
> 
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Kai Zheng
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Commented] (HDFS-8155) Support OAuth2 authentication in WebHDFS

2015-04-15 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497039#comment-14497039
 ] 

Jakob Homan commented on HDFS-8155:
---

After HDFS-8154, it will be much easier for backends other than Hadoop to offer 
access via the WebHDFS specification.  In this environment, it would be good to 
support more types of authentication, even if Hadoop itself does not 
immediately support it.  OAuth2 would be a good candidate.  We should amend the 
WebHDFS spec to support OAuth tokens, specifically by providing either 
bearer/refresh tokens in the config ([RFC 
4.1|https://tools.ietf.org/html/rfc6749#section-4.1], with the allowance that 
the tokens have already been obtained to obviate the need for user 
interaction), or via a credential that can be exchanged for those tokens ([RFC 
4.3|https://tools.ietf.org/html/rfc6749#section-4.3]).
This would allow a WebHDFS backend to support either OAuth2 or SPNEGO.  WebHDFS 
backends (including Hadoop) would only be expected to support one type of 
authentication per system and would be able to reject calls made using another 
type.
Under this proposal, post HDFS-8154, the WebHDFSFileSystem will need to be 
updated to support presenting OAuth credentials, but it is not necessary to 
modify the Namenode or Datanodes to accept them.  That can be done as part of 
HADOOP-11744.
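To make the two credential flavours concrete, here is a hedged Python sketch of the RFC 6749 token-request bodies referenced above. The field names come from the RFC itself; the helper names are illustrative and not part of any Hadoop API:

```python
from urllib.parse import urlencode

# Field names follow RFC 6749; helper names are illustrative only.

def password_grant_body(username, password):
    """RFC 6749 section 4.3: exchange resource-owner credentials for tokens."""
    return urlencode({"grant_type": "password",
                      "username": username,
                      "password": password})

def refresh_grant_body(refresh_token):
    """RFC 6749 section 6: obtain a new access token from a refresh token."""
    return urlencode({"grant_type": "refresh_token",
                      "refresh_token": refresh_token})
```

Either body would be POSTed to the authorization server's token endpoint; a client configured with pre-obtained bearer/refresh tokens would skip the password grant entirely.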

> Support OAuth2 authentication in WebHDFS
> 
>
> Key: HDFS-8155
> URL: https://issues.apache.org/jira/browse/HDFS-8155
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>
> WebHDFS should be able to accept OAuth2 credentials.





[jira] [Created] (HDFS-8155) Support OAuth2 authentication in WebHDFS

2015-04-15 Thread Jakob Homan (JIRA)
Jakob Homan created HDFS-8155:
-

 Summary: Support OAuth2 authentication in WebHDFS
 Key: HDFS-8155
 URL: https://issues.apache.org/jira/browse/HDFS-8155
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: webhdfs
Reporter: Jakob Homan


WebHDFS should be able to accept OAuth2 credentials.





[jira] [Commented] (HDFS-8154) Extract WebHDFS protocol out as a specification to allow easier clients and servers

2015-04-15 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497038#comment-14497038
 ] 

Jakob Homan commented on HDFS-8154:
---

Currently WebHDFS exists as a [human-readable 
specification|https://hadoop.apache.org/docs/r2.5.1/hadoop-project-dist/hadoop-hdfs/WebHDFS.html]
 and a single implementation, that provided by the NameNode and its DataNodes.  
We should extract WebHDFS out as a specification, which would allow easier 
implementation of both WebHDFS-backed servers and clients.  Additionally, this 
would make it easier to generate documentation and verify correctness.
The current human-readable spec makes it possible to implement WebHDFS clients 
(for example, 
[Perl|http://search.cpan.org/~afaris/Apache-Hadoop-WebHDFS-0.04/lib/Apache/Hadoop/WebHDFS.pm],
 [Python|https://pypi.python.org/pypi/pywebhdfs], 
[.NET|https://hadoopsdk.codeplex.com/wikipage?title=WebHDFS%20Client] , 
[non-Hadoop backed JVM|https://issues.apache.org/jira/browse/HADOOP-10741], 
etc.).  However, each client must be built by someone parsing out that spec and 
writing up their own client implementation from scratch.

There are frameworks, such as [Swagger|http://swagger.io/] and 
[RAML|http://raml.org/] that allow one to define a REST interface and then 
create documentation, generate client stubs and build tests against that 
framework.
In addition to clients, a more programmatic WebHDFS specification would allow 
other backend systems to more easily implement the WebHDFS interface.  Any 
Hadoop application would then be able to access that back end through the 
oah.WebHDFSFileSystem or through one of the non-JVM clients described above.

This JIRA will cover specifying the WebHDFS spec in a framework like Swagger 
or RAML, switching the WebHDFS documentation to be built from this so as to be 
authoritative, and verifying that the current implementation provided by the 
namenode and datanodes comports with this specification.
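For illustration only, a hypothetical Swagger 2.0 fragment for a single WebHDFS operation — this is not an official spec; it merely shows the kind of machine-readable definition the JIRA proposes:

```yaml
# Illustrative only -- not an official WebHDFS specification.
swagger: "2.0"
info:
  title: WebHDFS REST API
  version: "1.0"
basePath: /webhdfs/v1
paths:
  /:
    get:
      summary: Get the home directory for the authenticated user
      parameters:
        - name: op
          in: query
          required: true
          type: string
          enum: [GETHOMEDIRECTORY]
      responses:
        "200":
          description: JSON object containing the home directory path
```

From such a definition, the documentation, client stubs, and conformance tests described above could all be generated mechanically.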

> Extract WebHDFS protocol out as a specification to allow easier clients and 
> servers
> ---
>
> Key: HDFS-8154
> URL: https://issues.apache.org/jira/browse/HDFS-8154
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: webhdfs
>Reporter: Jakob Homan
>Assignee: Jakob Homan
>
> WebHDFS would be more useful if there were a programmatic description of its 
> interface, which would allow one to more easily create servers and clients.





[jira] [Created] (HDFS-8154) Extract WebHDFS protocol out as a specification to allow easier clients and servers

2015-04-15 Thread Jakob Homan (JIRA)
Jakob Homan created HDFS-8154:
-

 Summary: Extract WebHDFS protocol out as a specification to allow 
easier clients and servers
 Key: HDFS-8154
 URL: https://issues.apache.org/jira/browse/HDFS-8154
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: webhdfs
Reporter: Jakob Homan
Assignee: Jakob Homan


WebHDFS would be more useful if there were a programmatic description of its 
interface, which would allow one to more easily create servers and clients.





[jira] [Updated] (HDFS-7696) FsDatasetImpl.getTmpInputStreams(..) may leak file descriptors

2015-01-28 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-7696:
--
Summary: FsDatasetImpl.getTmpInputStreams(..) may leak file descriptors  
(was: FsDatasetImpl.getTmpInputStreams(..) may lead file descriptors)

> FsDatasetImpl.getTmpInputStreams(..) may leak file descriptors
> --
>
> Key: HDFS-7696
> URL: https://issues.apache.org/jira/browse/HDFS-7696
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Tsz Wo Nicholas Sze
>
> getTmpInputStreams(..) opens a block file and a meta file, and then returns 
> them as ReplicaInputStreams.  The caller is responsible for closing those 
> streams.  In case of errors, an exception is thrown without closing the files.





[jira] [Created] (HDFS-7590) Stabilize and document getBlockLocation API in WebHDFS

2015-01-07 Thread Jakob Homan (JIRA)
Jakob Homan created HDFS-7590:
-

 Summary: Stabilize and document getBlockLocation API in WebHDFS
 Key: HDFS-7590
 URL: https://issues.apache.org/jira/browse/HDFS-7590
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Jakob Homan


Currently the GET_BLOCK_LOCATIONS op is marked as private, unstable and is not 
documented in the WebHDFS web page.  The getBlockLocations is a public, stable 
API on FileSystem.  WebHDFS' GBL response is private-unstable because the API 
currently directly serializes out the LocatedBlocks instance and LocatedBlocks 
is private-unstable.  

A public-stable version of the response should be agreed upon and documented.





[jira] [Updated] (HDFS-327) DataNode should warn about unknown files in storage

2014-07-21 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-327:
-

Assignee: (was: Jakob Homan)

> DataNode should warn about unknown files in storage
> ---
>
> Key: HDFS-327
> URL: https://issues.apache.org/jira/browse/HDFS-327
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Raghu Angadi
>  Labels: newbie
>
> DataNode currently just ignores the files it does not know about. There could 
> be a lot of files left in DataNode's storage that never get noticed or 
> deleted. These files could be left because of bugs or by a misconfiguration. 
> E.g. while upgrading from 0.17, DN left a lot of metadata files that were not 
> named in the correct format for 0.18 (HADOOP-4663).
> The proposal here is simply to make DN print a warning for each of the 
> unknown files at the start up. This at least gives a way to list all the 
> unknown files and  (equally importantly) forces a notion of "known" and 
> "unknown" files in the storage.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6317) Add snapshot quota

2014-05-02 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987791#comment-13987791
 ] 

Jakob Homan commented on HDFS-6317:
---

First, as a matter of cluster policy, admins may wish to impose such limits.  
Second, to help bound the cost of recursive tools run against snapshotted 
directories, which form an (effectively) unbounded tree.

bq.  Note that we already has namespace quota which will as well limit the 
namespace usage used by snapshots.
Noted, but this is a different quota.

> Add snapshot quota
> --
>
> Key: HDFS-6317
> URL: https://issues.apache.org/jira/browse/HDFS-6317
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Alex Shafer
>
> Either allow the 65k snapshot limit to be set with a configuration option  or 
> add a per-directory snapshot quota settable with the `hdfs dfsadmin` CLI and 
> viewable by appending fields to `hdfs dfs -count -q` output.





[jira] [Commented] (HDFS-6317) Add snapshot quota

2014-05-02 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13987773#comment-13987773
 ] 

Jakob Homan commented on HDFS-6317:
---

To allow admins to limit the number of snapshots per directory to a number 
below the currently hardcoded value of 64k.

> Add snapshot quota
> --
>
> Key: HDFS-6317
> URL: https://issues.apache.org/jira/browse/HDFS-6317
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Alex Shafer
>
> Either allow the 65k snapshot limit to be set with a configuration option  or 
> add a per-directory snapshot quota settable with the `hdfs dfsadmin` CLI and 
> viewable by appending fields to `hdfs dfs -count -q` output.





[jira] [Commented] (HDFS-6241) Unable to reset password

2014-04-14 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13968464#comment-13968464
 ] 

Jakob Homan commented on HDFS-6241:
---

Did you mean to open this under infra?

> Unable to reset password
> 
>
> Key: HDFS-6241
> URL: https://issues.apache.org/jira/browse/HDFS-6241
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Larry McCay
>Priority: Blocker
>
> I tried emailing r...@apache.org - as indicated in INFRA-5241 - about these 
> difficulties but it seems to be bouncing. Here is the email that I sent:
> Greetings -
> I am trying to reset my password and have encountered the following problems:
> 1. it seems that the public key associated with my account is erroneously 
> that of the lead of our project (Knox). His must have set up my account in 
> the beginning and provided his key maybe? Anyway, this means that he has to 
> decrypt my email.
> 2. Once he does decrypt it and I follow the link to reset it - I get a No 
> Such Token error message and am unable to reset my password.
> The email below indicated that I should email root with problems.
> Please let me know if I should file an Infra jira. I did find a similar one 
> there that told them to email root. So, that is where I am starting.
> We are in the process of trying to get a release out - so I would greatly 
> appreciate the help here.
> thanks!
> --larry
> Hi Larry McCay,
> 96.235.186.40 has asked Apache ID 
> to initiate a password reset for your apache.org account 'lmccay'.
> If you requested this password reset, please use the following link to
> reset your Apache LDAP password:
> 
> If you did not request this password reset, please email r...@apache.org --- 
> but
> delete the above URL from the text of the reply email before sending it.
> This link will expire at 2014-04-14 14:31:25 +, and can only be used from 
> 96.235.186.40.
> --
> Best Regards,
> Apache Infrastructure





[jira] [Commented] (HDFS-2538) option to disable fsck dots

2014-02-27 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13915118#comment-13915118
 ] 

Jakob Homan commented on HDFS-2538:
---

Anyone have concerns over the plan to make this backwards incompatible, apply 
to trunk and not apply to branch?  That's my intention.  Speak now, yadda, 
yadda.

> option to disable fsck dots 
> 
>
> Key: HDFS-2538
> URL: https://issues.apache.org/jira/browse/HDFS-2538
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>Assignee: Mohammad Kamrul Islam
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-2538-branch-0.20-security-204.patch, 
> HDFS-2538-branch-0.20-security-204.patch, HDFS-2538-branch-1.0.patch, 
> HDFS-2538.1.patch, HDFS-2538.2.patch, HDFS-2538.3.patch
>
>
> this patch turns the dots during fsck off by default and provides an option 
> to turn them back on if you have a fetish for millions and millions of dots 
> on your terminal.  i haven't done any benchmarks, but i suspect fsck is now 
> 300% faster to boot.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HDFS-2538) option to disable fsck dots

2014-02-25 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13911897#comment-13911897
 ] 

Jakob Homan commented on HDFS-2538:
---

No, I don't want extra options and differing capabilities running around.  If I 
commit an incompatible change on trunk, I don't want to apply a different patch 
to the branch.

> option to disable fsck dots 
> 
>
> Key: HDFS-2538
> URL: https://issues.apache.org/jira/browse/HDFS-2538
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>Assignee: Mohammad Kamrul Islam
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-2538-branch-0.20-security-204.patch, 
> HDFS-2538-branch-0.20-security-204.patch, HDFS-2538-branch-1.0.patch, 
> HDFS-2538.1.patch, HDFS-2538.2.patch
>
>
> this patch turns the dots during fsck off by default and provides an option 
> to turn them back on if you have a fetish for millions and millions of dots 
> on your terminal.  i haven't done any benchmarks, but i suspect fsck is now 
> 300% faster to boot.





[jira] [Commented] (HDFS-2538) option to disable fsck dots

2014-02-25 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13911855#comment-13911855
 ] 

Jakob Homan commented on HDFS-2538:
---

I'm willing to do the silent default and mark as incompatible on trunk, but not 
back to a branch.  If the consensus is to do so, we can patch trunk and leave 
the branch patch for those who wish it.  

> option to disable fsck dots 
> 
>
> Key: HDFS-2538
> URL: https://issues.apache.org/jira/browse/HDFS-2538
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>Assignee: Mohammad Kamrul Islam
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-2538-branch-0.20-security-204.patch, 
> HDFS-2538-branch-0.20-security-204.patch, HDFS-2538-branch-1.0.patch, 
> HDFS-2538.1.patch, HDFS-2538.2.patch
>
>
> this patch turns the dots during fsck off by default and provides an option 
> to turn them back on if you have a fetish for millions and millions of dots 
> on your terminal.  i haven't done any benchmarks, but i suspect fsck is now 
> 300% faster to boot.





[jira] [Updated] (HDFS-2538) option to disable fsck dots

2014-02-24 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-2538:
--

Status: Open  (was: Patch Available)

> option to disable fsck dots 
> 
>
> Key: HDFS-2538
> URL: https://issues.apache.org/jira/browse/HDFS-2538
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>Assignee: Mohammad Kamrul Islam
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-2538-branch-0.20-security-204.patch, 
> HDFS-2538-branch-0.20-security-204.patch, HDFS-2538-branch-1.0.patch, 
> HDFS-2538.1.patch
>
>
> this patch turns the dots during fsck off by default and provides an option 
> to turn them back on if you have a fetish for millions and millions of dots 
> on your terminal.  i haven't done any benchmarks, but i suspect fsck is now 
> 300% faster to boot.





[jira] [Commented] (HDFS-2538) option to disable fsck dots

2014-02-24 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13910651#comment-13910651
 ] 

Jakob Homan commented on HDFS-2538:
---

Line 46: the option name needs a dash.

Also, this does need to have the dots on by default for compatibility.  Option 
should be -quiet or -noprogress to suppress them.

> option to disable fsck dots 
> 
>
> Key: HDFS-2538
> URL: https://issues.apache.org/jira/browse/HDFS-2538
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Allen Wittenauer
>Assignee: Mohammad Kamrul Islam
>Priority: Minor
>  Labels: newbie
> Attachments: HDFS-2538-branch-0.20-security-204.patch, 
> HDFS-2538-branch-0.20-security-204.patch, HDFS-2538-branch-1.0.patch, 
> HDFS-2538.1.patch
>
>
> this patch turns the dots during fsck off by default and provides an option 
> to turn them back on if you have a fetish for millions and millions of dots 
> on your terminal.  i haven't done any benchmarks, but i suspect fsck is now 
> 300% faster to boot.





[jira] [Assigned] (HDFS-5934) New Namenode UI back button doesn't work as expected

2014-02-11 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan reassigned HDFS-5934:
-

Assignee: Travis Thompson

> New Namenode UI back button doesn't work as expected
> 
>
> Key: HDFS-5934
> URL: https://issues.apache.org/jira/browse/HDFS-5934
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.3.0
>Reporter: Travis Thompson
>Assignee: Travis Thompson
>Priority: Minor
>
> When I navigate to the Namenode page, and I click on the "Datanodes" tab, it 
> will take me to the Datanodes page.  If I click my browser back button, it 
> does not take me back to the overview page as one would expect.  This is true 
> of choosing any tab.
> Another example of the back button acting weird is when browsing HDFS: if I 
> click back one page, I expect to land on either the previous directory I was 
> viewing or the page I was viewing before entering the FS browser. Instead I 
> am always taken back to the page I was viewing before entering the FS browser.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13626091#comment-13626091
 ] 

Jakob Homan commented on HDFS-4670:
---

Can you post some screenshots of what the new ui looks like?

> Style Hadoop HDFS web ui's with Twitter's bootstrap.
> 
>
> Key: HDFS-4670
> URL: https://issues.apache.org/jira/browse/HDFS-4670
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Minor
> Attachments: HDFS-4670-0.patch
>
>
> A user's first experience of Apache Hadoop is often looking at the web ui.  
> This should give the user confidence that the project is usable and 
> relatively current.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Assigned] (HDFS-4670) Style Hadoop HDFS web ui's with Twitter's bootstrap.

2013-04-08 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan reassigned HDFS-4670:
-

Assignee: Elliott Clark

> Style Hadoop HDFS web ui's with Twitter's bootstrap.
> 
>
> Key: HDFS-4670
> URL: https://issues.apache.org/jira/browse/HDFS-4670
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Minor
> Attachments: HDFS-4670-0.patch
>
>
> A user's first experience of Apache Hadoop is often looking at the web ui.  
> This should give the user confidence that the project is usable and 
> relatively current.



[jira] [Assigned] (HDFS-4549) WebHDFS hits a Jetty performance issue

2013-03-04 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan reassigned HDFS-4549:
-

Assignee: Mark Wagner

> WebHDFS hits a Jetty performance issue
> --
>
> Key: HDFS-4549
> URL: https://issues.apache.org/jira/browse/HDFS-4549
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 1.1.2
>Reporter: Mark Wagner
>Assignee: Mark Wagner
> Attachments: HDFS-4549.1.patch
>
>
> WebHDFS on branch-1 is hitting a Jetty issue for me when it does chunked 
> transfers. This is the same Jetty issue as MAPREDUCE-4399. I have not 
> observed this on trunk.



[jira] [Commented] (HDFS-3367) WebHDFS doesn't use the logged in user when opening connections

2013-02-12 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13577012#comment-13577012
 ] 

Jakob Homan commented on HDFS-3367:
---

Reviewed.  +1.

> WebHDFS doesn't use the logged in user when opening connections
> ---
>
> Key: HDFS-3367
> URL: https://issues.apache.org/jira/browse/HDFS-3367
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 0.23.0, 1.0.2, 2.0.0-alpha, 3.0.0
>Reporter: Jakob Homan
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-3367.branch-23.patch, HDFS-3367.patch
>
>
> Something along the lines of
> {noformat}
> UserGroupInformation.loginUserFromKeytab()
> Filesystem fs = FileSystem.get(new URI("webhdfs://blah"), conf)
> {noformat}
> doesn't work as webhdfs doesn't use the correct context and the user shows up 
> to the spnego filter without kerberos credentials:
> {noformat}Exception in thread "main" java.io.IOException: Authentication 
> failed, 
> url=http://:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&user.name=
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHttpUrlConnection(WebHdfsFileSystem.java:337)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.httpConnect(WebHdfsFileSystem.java:347)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:403)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:675)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.initDelegationToken(WebHdfsFileSystem.java:176)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.initialize(WebHdfsFileSystem.java:160)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
> ...
> Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:232)
>   at 
> org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:141)
>   at 
> org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:217)
>   at 
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHttpUrlConnection(WebHdfsFileSystem.java:332)
>   ... 16 more
> Caused by: GSSException: No valid credentials provided (Mechanism level: 
> Failed to find any Kerberos tgt)
>   at 
> sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:130)
> ...{noformat}
> Explicitly getting the current user's context via a doAs block works, but 
> this should be done by webhdfs. 
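The doAs workaround mentioned above can be sketched as follows. This is a minimal illustration, not the actual fix: the principal, keytab path, and NameNode host are placeholders, and the code assumes the Hadoop client libraries are on the classpath.

```java
import java.net.URI;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class WebHdfsDoAsSketch {
    public static void main(String[] args) throws Exception {
        // Log in from a keytab (principal and keytab path are placeholders).
        UserGroupInformation.loginUserFromKeytab(
                "user@EXAMPLE.COM", "/etc/security/keytabs/user.keytab");

        final Configuration conf = new Configuration();

        // The workaround: open the FileSystem inside the logged-in user's
        // context, so the SPNEGO filter sees the Kerberos credentials.
        FileSystem fs = UserGroupInformation.getLoginUser().doAs(
                new PrivilegedExceptionAction<FileSystem>() {
                    @Override
                    public FileSystem run() throws Exception {
                        return FileSystem.get(
                                new URI("webhdfs://namenode:50070"), conf);
                    }
                });
        System.out.println(fs.getUri());
    }
}
```

Once the bug is fixed, the explicit doAs wrapper should be unnecessary and the plain FileSystem.get() call in the description should work.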



[jira] [Updated] (HDFS-3768) Exception in TestJettyHelper is incorrect

2012-08-08 Thread Jakob Homan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Homan updated HDFS-3768:
--

   Resolution: Fixed
Fix Version/s: 3.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed. Resolving. Thanks, Eli.

> Exception in TestJettyHelper is incorrect
> -
>
> Key: HDFS-3768
> URL: https://issues.apache.org/jira/browse/HDFS-3768
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Jakob Homan
>Assignee: Eli Reisman
>Priority: Minor
>  Labels: newbie, patch
> Fix For: 3.0.0
>
> Attachments: HDFS-3768.patch
>
>
> hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/test/TestJettyHelper.java:80
> {noformat}
> throw new RuntimeException("Could not stop embedded servlet container, " + 
> ex.getMessage(), ex);
> {noformat}
> This is being thrown from createJettyServer and was copied and pasted from 
> stop.  Should say we can't start the servlet container.
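The fix is just to make the message match the phase. A minimal self-contained sketch (the helper and class names here are hypothetical, not the actual TestJettyHelper code):

```java
public class StartFailureMessageSketch {
    // Hypothetical helper mirroring the fix: createJettyServer should report
    // a *start* failure, not the stop-failure text copied from stop().
    static RuntimeException wrapStartFailure(Exception ex) {
        return new RuntimeException(
                "Could not start embedded servlet container, " + ex.getMessage(), ex);
    }

    public static void main(String[] args) {
        RuntimeException e =
                wrapStartFailure(new Exception("port 12345 already in use"));
        // Prints: Could not start embedded servlet container, port 12345 already in use
        System.out.println(e.getMessage());
    }
}
```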




