Hi,
Can anyone explain why I'm getting the error below?
06:08:34,313 ERROR UserGroupInformation:1411 -
PriviledgedActionException as:root (auth:SIMPLE)
cause:org.apache.hadoop.security.AccessControlException: Permission
denied: user=root, access=WRITE,
Read the error:
denied: user=root, access=WRITE, inode=/user:hdfs:supergroup:drwxr-xr-x
You are executing the command as user root (remember, the Linux root user has
no root-level access to HDFS).
Your user (root) does not have write permission to /user.
Either create a directory as user hdfs and change its ownership so root can
write to it, or run the command as the hdfs user.
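For example (a sketch using the usual defaults; the hdfs superuser account
and the conventional /user/root home directory, adjust both for your setup):

    sudo -u hdfs hadoop fs -mkdir /user/root
    sudo -u hdfs hadoop fs -chown root /user/root

After that, root can write under /user/root.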
Hi all, I set up a NameNode HA Hadoop cluster and wrote some demo code:
import java.io.FileNotFoundException;
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import
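For reference, a minimal self-contained sketch of writing a file to an HA
cluster; the nameservice ID "mycluster" and the output path are placeholders,
and the HA settings (dfs.nameservices, the NameNode RPC addresses, the
failover proxy provider) are assumed to come from the hdfs-site.xml on the
classpath:

    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HaWriteDemo {
        public static void main(String[] args) throws IOException {
            // Picks up core-site.xml/hdfs-site.xml from the classpath,
            // which must define the HA nameservice.
            Configuration conf = new Configuration();
            // "mycluster" is a placeholder nameservice ID.
            FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf);
            FSDataOutputStream out = fs.create(new Path("/tmp/ha-demo.txt"));
            try {
                out.writeUTF("hello from an HA client");
            } finally {
                out.close();
            }
            fs.close();
        }
    }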
Hi all:
I have three datanodes and a block replication factor of 2. If one datanode
fails, the blocks it held are re-replicated to another live DN, so the target
replica count is maintained. If the failed DN later recovers, will those
blocks end up with an extra replica (one more than the configured factor)?
Yup, that's the responsibility of the namenode: it manages under- and
over-replicated blocks automatically. You can also run the balancer script
at any time.
Thanks
On Fri, Aug 16, 2013 at 2:09 PM, bharath vissapragada
bharathvissapragada1...@gmail.com wrote:
No, namenode deletes over-replicated blocks
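You can watch this happen with fsck (the path here is just an example):

    hadoop fsck / -blocks -locations

Over-replicated blocks show up in the fsck summary, and the namenode
schedules the extra replicas for deletion on its own. Note that the balancer
evens out disk usage across datanodes; it does not change replica counts.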
Thanks for all the suggestions. I will explore more and raise specific questions
if needed.
Regards,
Anand.C
From: Sandy Ryza [mailto:sandy.r...@cloudera.com]
Sent: Thursday, August 15, 2013 1:30 AM
To: user@hadoop.apache.org
Subject: Re: Calling a MATLAB library in map reduce program
To add to
Hello friends, I'm using WebHDFS to fetch a remote Hadoop file in my browser.
Is there any caching mechanism you know of that would load this file faster?
http://termin1:50070/webhdfs/v1/Name1Home/new_file_d561yht35-9a1a-4a7b-9n.jpg?op=OPEN
I think an in-memory Hadoop mechanism will fulfill your requirement.
Thanks
On Fri, Aug 16, 2013 at 2:21 PM, Visioner Sadak visioner.sa...@gmail.com wrote:
Hello friends, I'm using WebHDFS to fetch a remote Hadoop file in my
browser. Is there any caching mechanism you know of that would load this
You require the hadoop-hdfs dependency for the HDFS FileSystem to get initialized.
Your issue lies in how you're running the application, not in your code.
If you use Maven, include the hadoop-client dependency to pull in all the
required dependencies for a Hadoop client program. Otherwise, run your
program with hadoop jar, which sets up the client classpath for you.
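For example, in your pom.xml (the version is a placeholder; match it to the
release your cluster runs):

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <!-- placeholder: set to your cluster's Hadoop version -->
      <version>${hadoop.version}</version>
    </dependency>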
Hi Vimal,
Could you elaborate on this? What do you mean by running 6 processes on a
single node? How many nodes do you have in total, and are all of them used
for Hadoop and HBase as well?
Possibilities:
It might be that a DN is not listed in your slaves file but is configured as
a DN in your cluster.
chances are
Thanks Jeetu. Are there any configurations needed in order to implement it?
On Fri, Aug 16, 2013 at 2:26 PM, Jitendra Yadav
jeetuyadav200...@gmail.com wrote:
I think an in-memory Hadoop mechanism will fulfill your requirement.
Thanks
On Fri, Aug 16, 2013 at 2:21 PM, Visioner Sadak
Frankly, I am not using Hadoop as an in-memory cache, but you can integrate it
with other vendors' offerings.
The link below might help you.
http://www.gridgain.com/products/in-memory-hadoop-accelerator/
Thanks
On Fri, Aug 16, 2013 at 4:33 PM, Visioner Sadak visioner.sa...@gmail.com wrote:
thanks
I think it would make Hadoop installation easier if we released
standardized packages.
What if Ubuntu users could simply apt-get install hadoop the same way
they apt-get install apache2?
Similarly, could we release a Chocolatey (http://chocolatey.org/) package
for Windows users? The easier the
That sounds like what Bigtop is doing, at least covering the Linux
distros. http://bigtop.apache.org/
On Fri, Aug 16, 2013 at 11:23 AM, Andrew Pennebaker
apenneba...@42six.com wrote:
I think it would make Hadoop installation easier if we released standardized
packages.
What if Ubuntu users
Yup, patches are always welcome!
As for Windows support: the more the merrier! Although I doubt that many
people here have such experience (mine stops with a C++ compiler on Windows
3.11, and I haven't touched it since).
Considering the very low interest in a packaged stack from the Windows crowd,
I personally would
And in the case of Fedora, there's work underway to use true distro-standard
packages, so yum install hadoop will be handled by the Fedora infrastructure.
If you're interested, check out the Fedora Big Data SIG.
Best,
matt
On 08/16/2013 02:06 PM, Konstantin Boudnik wrote:
Yup, patches are
Hi,
I am wondering if there is any tutorial to look at.
What are the challenges of reading from and/or writing to a database?
Is there a common flavor across all the databases?
For example, the DBs start a server on some host:port, and you establish a
connection to that host:port.
Can it also work across a proxy?
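For the JDBC case, Hadoop ships DBInputFormat/DBOutputFormat. Below is a
minimal sketch of the input side; the driver class, JDBC URL, credentials,
table, and column names are all placeholders:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
    import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
    import org.apache.hadoop.mapreduce.lib.db.DBWritable;

    public class DbReadSketch {
        // One row of the hypothetical "users" table. DBInputFormat needs
        // DBWritable; using it as a map input value also needs Writable.
        public static class UserRecord implements Writable, DBWritable {
            long id;
            String name;
            public void readFields(ResultSet rs) throws SQLException {
                id = rs.getLong("id");
                name = rs.getString("name");
            }
            public void write(PreparedStatement ps) throws SQLException {
                ps.setLong(1, id);
                ps.setString(2, name);
            }
            public void readFields(DataInput in) throws IOException {
                id = in.readLong();
                name = in.readUTF();
            }
            public void write(DataOutput out) throws IOException {
                out.writeLong(id);
                out.writeUTF(name);
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholders: driver, host:port, database name, credentials.
            DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                "jdbc:mysql://dbhost:3306/mydb", "dbuser", "dbpass");
            Job job = Job.getInstance(conf);
            job.setJarByClass(DbReadSketch.class);
            job.setInputFormatClass(DBInputFormat.class);
            // Read columns id and name from "users"; split and order by id.
            DBInputFormat.setInput(job, UserRecord.class,
                "users", null, "id", "id", "name");
            // ... configure mapper/output as usual, then:
            // job.waitForCompletion(true);
        }
    }

As for the proxy question: each map task opens its own JDBC connection, so
every node that runs tasks needs network access to the database host:port
(or to whatever proxy sits in front of it).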
Hello,
Does anybody know an e-Science application to run on Hadoop?
Thanks.
Felipe
--
Felipe Oliveira Gutierrez
felipe.o.gutier...@gmail.com
https://sites.google.com/site/lipe82/Home/diaadia
There are literally hundreds. Here is a great review article on how
MapReduce is used in the bioinformatics algorithms space:
http://www.biomedcentral.com/1471-2105/11/S12/S1
On Fri, Aug 16, 2013 at 3:38 PM, Felipe Gutierrez
felipe.o.gutier...@gmail.com wrote:
Hello,
Does anybody know an
Friends, is there any open-source caching mechanism for Hadoop?
On Fri, Aug 16, 2013 at 4:56 PM, Jitendra Yadav
jeetuyadav200...@gmail.com wrote:
Frankly, I am not using Hadoop as an in-memory cache, but you can integrate
it with other vendors' offerings.
The link below might help you.
What do you want to do? View the .LZO file on HDFS?
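If it's just about viewing it, and assuming the hadoop-lzo codec is
installed and registered in io.compression.codecs, something like

    hadoop fs -text /path/to/file.lzo

should decompress it to stdout.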
From: Sandeep Nemuri nhsande...@gmail.com
Reply-To: user@hadoop.apache.org
Date: Tuesday, August 6, 2013 12:08 AM
To: