You mean like this:
hadoop jar Rdg.jar my.hadoop.Rdg -libjars Rdg_lib/* tester rdg_output
where Rdg_lib is a folder containing all required classes/jars, stored on
HDFS.
We get this error, though. Are we doing something wrong?
12/08/10 08:16:24 ERROR security.UserGroupInformation: PriviledgedActionExcep
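For what it's worth: -libjars is only honored when the driver class runs its arguments through ToolRunner/GenericOptionsParser, and it expects a comma-separated list of jar paths rather than a shell glob (the shell expands Rdg_lib/* against the local working directory before Hadoop ever sees it). A minimal sketch of such a driver, assuming the rest of the job setup lives elsewhere:

package my.hadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class Rdg extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // getConf() already reflects the generic options (-libjars, -D, ...)
        // because ToolRunner parsed them before calling run().
        Job job = new Job(getConf(), "rdg");
        job.setJarByClass(Rdg.class);
        // set the mapper and input/output formats for the real job here
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips the generic options and hands the remaining
        // args (here: input and output paths) to run().
        System.exit(ToolRunner.run(new Configuration(), new Rdg(), args));
    }
}

With that in place, the invocation would look something like

hadoop jar Rdg.jar my.hadoop.Rdg -libjars /local/path/a.jar,/local/path/b.jar tester rdg_output

with the jar paths comma-separated (typically local files, which the framework then ships to the cluster for you).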
is a simple way of doing it.
>
> Regards
>
> Bertrand
>
> On Mon, Aug 13, 2012 at 7:59 PM, Pierre Antoine DuBoDeNa
> wrote:
>
> > Hello,
> >
> > We use Hadoop to distribute a task over our machines.
> >
> > This task requires only the mapper class
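A common way to get a mapper-only job is simply to set the number of reduce tasks to zero; the map output is then written straight to HDFS by the output format, with no shuffle or sort. A minimal sketch under that assumption (driver and mapper names made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapOnlyDriver {
    // Pass-through mapper; a real job would do its per-record work here.
    // The inherited map() is identity, so nothing needs overriding for the sketch.
    public static class PassThroughMapper
            extends Mapper<LongWritable, Text, LongWritable, Text> {
    }

    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "map-only-task");
        job.setJarByClass(MapOnlyDriver.class);
        job.setMapperClass(PassThroughMapper.class);
        job.setNumReduceTasks(0); // zero reducers: no reduce phase at all
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}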
as a workaround to mount it. This is
> what we do in our company, but we were experiencing some stability
> issues.
>
> Ruslan
>
> On Sat, Jun 16, 2012 at 12:22 AM, Pierre Antoine DuBoDeNa
> wrote:
> > Hello,
> >
> > I have installed hdfs to use it with h
Hello,
I have installed HDFS to use it with Hadoop and HBase. I am wondering if I
can use it as a normal file system too, one that just pools several HDDs?
For example, I can see the files I have stored through the web interface
(browse filesystem) or with the dfs commands. However, if I go to the ex
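On the question above: HDFS keeps its own namespace, so outside of a FUSE-style mount it is only visible through its own interfaces (the web UI, the hadoop fs / dfs commands, or the Java API), not as a directory on the local disk. A small sketch of listing and reading files through the FileSystem API (paths are made up):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsLs {
    public static void main(String[] args) throws Exception {
        // Picks up the default filesystem from core-site.xml on the
        // classpath, so this talks to HDFS rather than the local disk.
        FileSystem fs = FileSystem.get(new Configuration());

        // Roughly the equivalent of: hadoop fs -ls /user/demo
        for (FileStatus status : fs.listStatus(new Path("/user/demo"))) {
            System.out.println(status.getPath() + "\t" + status.getLen());
        }

        // Stream one file back, line by line.
        BufferedReader in = new BufferedReader(
                new InputStreamReader(fs.open(new Path("/user/demo/sample.txt"))));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);
        }
        in.close();
    }
}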
how up.
> >
> > /* Joey */
> > On Jun 9, 2012 2:52 PM, "Pierre Antoine DuBoDeNa"
> > wrote:
> >
> > > Hello everyone,
> > >
> > > I have a cluster of 5 VMs: 1 as master/slave, the rest are slaves. I run
> > > bin/start-all.sh
t. If interested, let me
> know, and I'll see what can be done. (Releasing the code is on our todo list,
> but if there is some demand, we can do it sooner.)
>
>
> stijn
>
>
>
> On 05/18/2012 05:07 PM, Pierre Antoine DuBoDeNa wrote:
>
>> I am also interested to learn ab
I am also interested to learn about myHadoop, as I use a shared storage
system and everything runs on VMs, not actual dedicated servers.
In an Amazon EC2-like environment, where you just have VMs and huge central
storage, is it at all helpful to use Hadoop to distribute jobs and maybe
parallelize algorithms?
Did you use HDFS too, or do you store everything on the SAN directly?
I don't have the numbers in GB/TB (it might be about 2 TB, so not really that
"huge"), but there are more than 100 million documents to be processed. On a
single machine we can currently process about 200,000 docs/day (several
parsing, indexing,
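For a rough sense of scale, assuming the work parallelizes close to linearly: 100 million documents at about 200,000 docs/day per machine is on the order of 500 machine-days, so for example a 25-node cluster would bring that down to roughly 20 days end to end.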