Dylan Hutchison wrote:
> You can configure HDFS to use the RawLocalFileSystem class for file://
> URIs, which is what is done for a majority of the integration tests. Beware
> that you configure the RawLocalFileSystem, as the ChecksumFileSystem
> (default for file://) will fail miserably around
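For reference, the reconfiguration Dylan describes would look roughly like the following core-site.xml fragment. This is a sketch, not taken from the thread: Hadoop resolves a FileSystem implementation for a URI scheme from the fs.&lt;scheme&gt;.impl property, so overriding fs.file.impl swaps out the checksumming default.

```xml
<!-- core-site.xml: map the file:// scheme to RawLocalFileSystem
     instead of the default LocalFileSystem (a ChecksumFileSystem
     wrapper), as suggested above. -->
<property>
  <name>fs.file.impl</name>
  <value>org.apache.hadoop.fs.RawLocalFileSystem</value>
</property>
```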
On Mon, Jan 16, 2017 at 1:56 PM, Josh Elser wrote:
> That's true, but HDFS supports multiple "implementations" based on the
> scheme of the URI being used.
>
> e.g. hdfs:// is mapped to DistributedFileSystem
>
> You can configure HDFS to use the RawLocalFileSystem class for file://
> URIs, which is what is done for a majority of the integration tests.
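The scheme-to-implementation dispatch Josh describes can be sketched like this; the classes here are hypothetical stand-ins for illustration, not Hadoop's real ones:

```python
from urllib.parse import urlparse

# Hypothetical stand-ins for Hadoop's FileSystem implementations,
# named after the real classes mentioned in the thread.
class DistributedFileSystem:
    pass

class RawLocalFileSystem:
    pass

# URI scheme -> implementation, analogous to Hadoop's
# fs.hdfs.impl / fs.file.impl configuration properties.
SCHEME_MAP = {
    "hdfs": DistributedFileSystem,
    "file": RawLocalFileSystem,
}

def filesystem_for(uri):
    """Pick a FileSystem implementation based on the URI's scheme."""
    scheme = urlparse(uri).scheme
    return SCHEME_MAP[scheme]()

print(type(filesystem_for("hdfs://namenode:8020/data")).__name__)
# -> DistributedFileSystem
```

The point of the real mechanism is the same: Accumulo just asks Hadoop for a FileSystem, and the URI scheme (plus configuration) decides which implementation answers.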
IIRC, Accumulo *only* uses the HDFS client, so it needs something on the other
side that can respond to that protocol. MiniAccumulo starts up MiniHDFS for
this. You could run some other type of service locally that is HDFS client
compatible (something like Quantcast QFS[1], setting up client
Hi folks,
A friend of mine asked about running Accumulo on a normal file system in
place of Hadoop, similar to the way MiniAccumulo runs. How possible is
this, or how much work would it take to do so?
I think my friend is just interested in running on a single node, but I am
curious about both.