Hi Natasha -

We do have some work in place for compatibility with existing clients in
what is called the "default" topology - this is a logical topology that
internally dispatches requests to the actual topology configured as the
default. OOTB the default topology is "sandbox".
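
For reference, the default topology is selected in gateway-site.xml.
A minimal sketch, assuming the default.app.topology.name property and
the OOTB "sandbox" value (worth checking against the release you are
on, since the property name may differ):

  <property>
      <name>default.app.topology.name</name>
      <value>sandbox</value>
  </property>

If I recall the mapping correctly, requests that omit the
/gateway/{topology} part of the path, for example
https://{gateway-host}:8443/webhdfs/v1/tmp?op=LISTSTATUS, are then
dispatched to the sandbox topology as if they had been sent to
/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS.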

I believe that there is still an outstanding issue with it for redirected
requests.
For instance, when a webhdfs request is redirected from the namenode to a
datanode, the datanode expects the credentials to be provided again - and
the client does not send them.
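
A rough sketch of the failing flow (the host name, paths and redirect
target below are illustrative, not taken from a real deployment):

  # first hop: the client authenticates to the gateway and is redirected
  curl -iku guest:guest-password \
    'https://knox-host:8443/webhdfs/v1/tmp/example.txt?op=OPEN'
  # HTTP/1.1 307 Temporary Redirect
  # Location: https://knox-host:8443/webhdfs/data/v1/tmp/example.txt?op=OPEN

  # second hop: the client follows the redirect but does not send the
  # credentials again, so it gets an authentication failure instead of data
  curl -ik 'https://knox-host:8443/webhdfs/data/v1/tmp/example.txt?op=OPEN'
  # HTTP/1.1 401 Unauthorized

Existing clients such as the Hadoop webhdfs FileSystem perform those two
hops internally, which is why they run into this against the default
topology.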

We need to resolve that issue and provide tests and documentation before
it can be called an actual feature.

We need to ensure that there is a JIRA for resolving that issue and would
certainly welcome any contributions toward fixing it.

Until we have this resolved, there is little option but to use other
clients - unless you want to experiment with putting another reverse proxy
in front of Knox to rewrite the requests to the Knox URLs.
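
If you do experiment with that, the idea would be roughly the following -
an untested sketch using nginx, where the host names, the port and the
"sandbox" topology name are assumptions for illustration only:

  server {
      # listen where the client expects the namenode's webhdfs endpoint
      listen 50070;

      location /webhdfs/ {
          # map the native webhdfs URLs onto the Knox gateway URLs
          proxy_pass https://knox-host:8443/gateway/sandbox/webhdfs/;
      }
  }

Note this only changes the URLs; the credential handling on the
redirected request would still behave as described above.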

thanks,

--larry

On Sun, Aug 30, 2015 at 1:34 AM, Natasha d'silva <[email protected]>
wrote:

> Hi,
> I have an application that supports communication with arbitrary webhdfs
> URIs using the Apache Hadoop Java API for file operations. How does such
> an application add support for Knox authentication?
> I have seen the example code in the gateway.shell package, but this
> amounts to duplicating all the file operations that are already available
> via FileSystem.
> Is there a way for Knox to return a webhdfs URI after authentication that
> can be consumed by existing clients?
> Or is writing my own client the only way to do this in Java?
> Thanks!
>
