Correction (early here and no coffee):
If you set the webuser to the superuser you can see everything with the
webgui. That was not clear enough. The webuser, by default, is a user with
lesser permissions.
Sorry,
Alex
On Tue, Nov 29, 2011 at 8:49 AM, Mohammad Tariq donta...@gmail.com wrote:
Hey Alex,
Thank you so much..:)
Regards,
Mohammad Tariq
On Tue, Nov 29, 2011 at 1:54 PM, Alexander C.H. Lorenz
wget.n...@googlemail.com wrote:
Hey, thanks. I corrected the answer.
best,
Alex
btw skype is open ;)
On Tue, Nov 29, 2011 at 9:24 AM, Alexander C.H. Lorenz
Hello,
I would like to connect to HDFS as a user other than my unix login. How
can I do that?
I am most interested in how I can change the current user programmatically
at the DFSClient level; but if there are any command-line options I am
interested too. I will take a look at the code that
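[The answer is not shown in this excerpt, but as a hedged sketch, not from the thread itself: on a cluster with simple authentication, Hadoop's UserGroupInformation.createRemoteUser(...).doAs(...) runs filesystem calls as another user; the username "alice" and the NameNode URI below are placeholders.]

```java
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class AsAnotherUser {
    public static void main(String[] args) throws Exception {
        // "alice" and the NameNode address are placeholders for illustration.
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser("alice");
        ugi.doAs(new PrivilegedExceptionAction<Void>() {
            public Void run() throws Exception {
                Configuration conf = new Configuration();
                conf.set("fs.default.name", "hdfs://namenode:8020");
                // All calls inside run() are performed as "alice".
                FileSystem fs = FileSystem.get(conf);
                System.out.println(fs.exists(new Path("/")));
                return null;
            }
        });
    }
}
```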
Hi all,
I am interested in exploring how the block ID is generated in the Hadoop world
by the NameNode. Any pointers to the class/method that takes care of this
generation?
Thanks in advance,
~Kartheek.
Petru,
Take a look at this conversation that came up some time ago:
http://search-hadoop.com/m/BspSb2Wf38t
On 29-Nov-2011, at 5:24 PM, Petru Dimulescu wrote:
Hello,
I would like to connect to HDFS as a user other than my unix login. How can I
do that?
I am most interested in how I can
Kartheek,
(- hdfs-user (bcc'd))
It's simple enough to trace a FileSystem -> DFSClient -> NameNode call in
the code, for the operation of creating a file (and thereafter a block).
What you are looking for, specifically, is in FSNamesystem#allocateBlock(…).
On 29-Nov-2011, at 6:07 PM, kartheek
You can find the code directly in FSNamesystem#allocateBlock.
It is just a random long number, and the NN ensures that the block ID has not
already been created.
Regards,
Uma
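[A self-contained toy model of what Uma describes, not the actual Hadoop source: draw a random long and retry while the NameNode already knows that ID. The class and field names are invented for illustration.]

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

// Toy model of the ID-allocation loop in FSNamesystem#allocateBlock:
// pick a random long, and pick again if that ID is already in use.
public class BlockIdSketch {
    private final Random rand = new Random();
    private final Set<Long> allocated = new HashSet<Long>(); // stand-in for the NN's blocks map

    long nextBlockId() {
        long id = rand.nextLong();
        while (allocated.contains(id)) {  // collision: draw again
            id = rand.nextLong();
        }
        allocated.add(id);
        return id;
    }

    public static void main(String[] args) {
        BlockIdSketch s = new BlockIdSketch();
        Set<Long> seen = new HashSet<Long>();
        for (int i = 0; i < 1000; i++) {
            seen.add(s.nextBlockId());
        }
        System.out.println(seen.size());
    }
}
```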
From: kartheek muthyala [kartheek0...@gmail.com]
Sent: Tuesday, November 29, 2011 6:07 PM
Hey Stuti,
Fuse is probably the most commonly used solution. It has some
limitations because HDFS isn't POSIX compliant, but it works for a
lot of use cases. You can try out both the contrib driver and the
google code version. I'm not sure which will work better for your
Hadoop version. Newer
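[Not from the thread, but for reference, a typical invocation of the contrib fuse_dfs driver looks roughly like this; the wrapper script name varies by packaging, and the mount point and NameNode address are placeholders.]

```sh
# Mount HDFS at /mnt/hdfs via the contrib FUSE driver (paths/ports are examples).
fuse_dfs_wrapper.sh dfs://namenode:8020 /mnt/hdfs
ls /mnt/hdfs        # browse HDFS like a local filesystem
umount /mnt/hdfs    # unmount when done
```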
Uma, first of all, thanks for the detailed explanation with examples.
So to confirm, the primary use of having this generationTimeStamp is to
ensure consistency of the block? So, when the pipeline fails at DN3,
and the client invokes recovery, then the NN will choose DN1 to complete the
No, this will not provide symlink support to FsShell. The shell is not yet
using FileContext, although adding the support is planned.
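[For context, an illustration rather than something from the thread: FileContext does expose symlink creation directly; a minimal sketch, assuming a cluster where symlinks are supported, with placeholder paths.]

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;

public class SymlinkSketch {
    public static void main(String[] args) throws Exception {
        FileContext fc = FileContext.getFileContext(new Configuration());
        // Create /user/alice/link pointing at /user/alice/target
        // (paths are placeholders; createParent=false means parent dirs must exist).
        fc.createSymlink(new Path("/user/alice/target"),
                         new Path("/user/alice/link"),
                         false);
    }
}
```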
Daryn
On Nov 28, 2011, at 10:37 PM, Stuti Awasthi wrote:
Hi all,
Any thoughts on this ??
-Original Message-
From: Stuti Awasthi
Sent: Monday,
Hey Joey,
Thanks for update :). I will try both as you have suggested .
-Original Message-
From: Joey Echeverria [mailto:j...@cloudera.com]
Sent: Tuesday, November 29, 2011 7:25 PM
To: hdfs-user@hadoop.apache.org
Subject: Re: Best option for mounting HDFS
Hey Stuti,
Fuse is probably