Hi Adam,
Because I have very important data, I want to use encryption to prevent
other users with physical access from reading it.
Michael, how can I use a coprocessor for encryption? Is there any function
for doing that?
Thanks for helping me
On Tue, Aug 7, 2012 at 10:23 PM, Adam Brown wrote:
> Hi Farrokh,
>
> wh
Jian,
From your NN, can you get us the output of "netstat -anp | grep 50070"?
On Fri, Aug 10, 2012 at 9:29 AM, Jian Fang
wrote:
> Thanks Harsh. But there is no firewall there, the two clusters are on the
> same networks. I cannot telnet to the port even on the same machine.
>
>
> On Thu, Aug 9, 20
Hi Anand,
It's clearly telling you that the NameNode is not able to access the lock
file inside the name dir:
/var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock (Permission denied)
Did you format the NameNode as one user and then start it as another
user?
Try formatting and starting as the same user.
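A quick way to check and fix (a sketch; this assumes the NameNode runs as
the hive user, as the log prompt later in this digest suggests):
ls -l /var/lib/hadoop-0.20/cache/hadoop/dfs/name/in_use.lock  # who owns the lock?
sudo -u hive hadoop namenode -format                          # reformat as that user
sudo -u hive hadoop namenode                                  # start as the same user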
yes Owen i did.
On Thu, Aug 9, 2012 at 6:23 PM, Owen Duan wrote:
> have you tried hadoop namenode -format?
>
> 2012/8/9 anand sharma
>
>> Yea Tariq!! It's a fresh installation; I'm doing it for the first time.
>> Hope someone will know the error code and the reason for the error.
>>
>>
>> On Thu, Au
It's false... Abhishek
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
<property>
  <name>dfs.name.dir</name>
  <value>/var/lib/hadoop-0.20/cache/hadoop/dfs/name</value>
</property>
On Thu, Aug 9, 2012 at 6:29 PM, Abhishek wrote:
> Hi Anand,
>
> What are the permissions, on dfs.name.dir directory in hdfs-site.xml
>
> Regards
> Abhishek
>
Thanks Harsh. But there is no firewall there, the two clusters are on the
same networks. I cannot telnet to the port even on the same machine.
On Thu, Aug 9, 2012 at 6:00 PM, Harsh J wrote:
> Hi Jian,
>
> HFTP is always-on by default. Can you check and make sure that the
> firewall isn't the cau
--
刘鎏
Also, VERSION appears in the operating system's filesystem, not on HDFS.
Check out the dfs.name.dir property in ${HADOOP_HOME}/conf/hdfs-site.xml
for the location. The file is current/VERSION.
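For example, on the NameNode host (a sketch, assuming the CDH3 default
path seen elsewhere in this digest):
cat /var/lib/hadoop-0.20/cache/hadoop/dfs/name/current/VERSION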
On Thu, Aug 9, 2012 at 4:41 PM, Harsh J wrote:
> Stephen,
>
> A NameNode can be considered 'formatted'
Stephen,
A NameNode can be considered 'formatted' if it's up (it does not start
otherwise).
If you mean to check whether the NameNode has been _freshly_ formatted,
then checking whether the count of "hadoop fs -ls /" is non-zero may help,
because by default a freshly formatted NameNode carries no files.
$ hadoop fs -ls / | wc -l
Hi,
VERSION does not appear to show up when running "hadoop dfs -lsr /";
it displays the files previously placed into HDFS, but nothing about
VERSION. I can see the VERSION file on the datanodes under the
HDFS-managed data directory:
find /data | xargs ls -lrta | grep VERSION
-rw-rw
Hello Stephen,
You can use the VERSION file to verify that.
Regards,
Mohammad Tariq
On Fri, Aug 10, 2012 at 4:34 AM, Stephen Boesch wrote:
>
> Hi, what's your take? I was thinking to check if a certain always-present
> file exists via a hadoop dfs -ls . Other suggestions welcom
Hi Jian,
HFTP is always on by default. Can you check and make sure that the
firewall isn't the cause of the connection refused on port 50070 on
the NN and port 50075 on the DNs here?
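Quick checks from the NN host itself, echoing the commands used elsewhere
in this thread:
netstat -anp | grep 50070   # is the NameNode web port listening?
telnet localhost 50070      # can you connect to it locally?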
On Fri, Aug 10, 2012 at 1:47 AM, Jian Fang
wrote:
> Hi,
>
> We have a hadoop cluster of version 0.20.2 in produc
Indeed - please let us know if you would like to unsubscribe ;-)
Sent from my iPhone
On 9 Aug 2012, at 21:21, Gavin Yue wrote:
> Thank you for letting us know your schedule...
>
>
> On Thu, Aug 9, 2012 at 4:07 PM, Yuan Jin wrote:
> I am out of the office until 08/13/2012.
>
> I am out of of
Thank you for letting us know your schedule...
On Thu, Aug 9, 2012 at 4:07 PM, Yuan Jin wrote:
> I am out of the office until 08/13/2012.
>
> I am out of office.
>
> For HAMSTER related things, you can contact Jason(Deng Peng Zhou/China/IBM)
> For CFM related things, you can contact Daniel(Lian
I am facing this issue on 0.20.2; is there a workaround for this that I can
employ? I can create alias scripts to return expected results, if that is an
option.
https://issues.apache.org/jira/browse/MAPREDUCE-3661
Hi,
We have a hadoop cluster of version 0.20.2 in production. Now we have
another new Hadoop cluster using Cloudera's CDH3u4. We would like to run
distcp to copy files between the two clusters. Since the Hadoop versions
are different, we have to use the hftp protocol to copy files, based on
the Hadoop docume
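For reference, the usual form is run from the newer (destination) cluster,
reading over HFTP from the source NameNode's web port (hostnames and paths
are illustrative):
hadoop distcp hftp://source-nn:50070/src/path hdfs://dest/path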
On 9 August 2012 10:31, Arjun Reddy wrote:
> I did go through the 6 steps listed at the link and telnet to port
> 45965 is failing. Netstat -a shows nothing is listing on port 45965.
>
That's a high number - I wonder if it's something transient that didn't
start.
I worry about the 127.0.1.1 n
I am out of the office until 08/13/2012.
I am out of office.
For HAMSTER related things, you can contact Jason(Deng Peng Zhou/China/IBM)
For CFM related things, you can contact Daniel(Liang SH Su/China/Contr/IBM)
For TMB related things, you can contact Flora(Jun Ying Li/China/IBM)
For TWB relat
Hello all,
I added the following setting:
hadoop.http.authentication.simple.anonymous.allowed. Now when I try to log
in to the Web UI, I get a 401 error unless I specify user.name=username.
This is exactly what I want, but I noticed that I can pass any user name
and it will work as long as it is not n
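For reference, the setting in question lives in core-site.xml; with the
behaviour described above it would look like this (the value shown is an
assumption matching that behaviour):
<property>
  <name>hadoop.http.authentication.simple.anonymous.allowed</name>
  <value>false</value>
</property>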
Unsubscribe
Thanks for the detailed explanation on this. I have been fighting with
this off and on in 0.20.203.
Cheers,
Andrew
On Aug 9, 2012 1:55 PM, "John Armstrong" wrote:
> On 08/09/2012 01:52 PM, Justin Woody wrote:
>
>> Just to close the thread, the problem was that the user and group were
>> not pre
On 08/09/2012 01:52 PM, Justin Woody wrote:
Just to close the thread, the problem was that the user and group were
not present on the name node which is why the ACE was thrown. Once he
added them to the NN, everything was good.
Yes, thanks again (you beat me to finding the thread to reply). Th
All,
Just to close the thread, the problem was that the user and group were
not present on the name node which is why the ACE was thrown. Once he
added them to the NN, everything was good.
Cheers,
Justin
On Thu, Aug 9, 2012 at 8:00 AM, Justin Woody wrote:
> John,
>
> Hadoop group permissions fo
I did go through the 6 steps listed at the link, and telnet to port 45965 is
failing. Netstat -a shows nothing is listening on port 45965.
We can run the hadoop 1.0.3 version without any issues on this cluster, and I
am new to the 2.0.0 version. I followed the document
http://hadoop.apache.org/common/docs/r
On 8 August 2012 14:39, Arjun Reddy wrote:
> I am trying to setup a small cluster using hadoop 2.0.0 and using PI
> example to validate the setup. When I have 1 master and 1 slave the
> example works fine. I am getting exceptions with the PI example when
> additional slave nodes are added to th
On 9 August 2012 03:46, Pankaj Misra wrote:
> Thanks Ioan for the help and sharing the link, appreciate it.
>
> The symlink as specified below already exists, and the the response of "$
> which ld" is
>
> [root@fedora-0 container-executor]# which ld
> /bin/ld
>
> Yes, I will surely raise a JIRA f
I would recommend looking at the Yahoo tutorial for more information.
Here is the part we are discussing:
http://developer.yahoo.com/hadoop/tutorial/module5.html#writable-comparator
Regards
Bertrand
On Thu, Aug 9, 2012 at 5:03 PM, Björn-Elmar Macek wrote:
> Hi Bertrand,
>
> i am us
OK, I found a tutorial for this myself. For everybody who ran into the
problem: here is a tutorial explaining WritableComparable types.
http://developer.yahoo.com/hadoop/tutorial/module5.html
On 09.08.2012 17:14, Björn-Elmar Macek wrote:
Ah ok, i got the idea: i can use the abstract class i
Ah OK, I got the idea: I can use the abstract class instead of the low-level
interface, though I am not sure how to use it. It would just be nice if more
complex mechanics like the sorting had an up-to-date tutorial with some
example code. If I find the time, I will make one, since I want
Hi Bertrand,
I am using RawComparator because it was used in the tutorial of
some famous (Hadoop) guy describing how to sort the input for the
reducer. Is there an easier alternative?
On 09.08.2012 16:57, Bertrand Dechoux wrote:
I am just curious but are you using Writable? If so the
I am just curious, but are you using Writable? If so, there is a
WritableComparator...
If you are going to interpret every byte (you create a String, so you do),
there is no clear reason for choosing such a low-level API.
Regards
Bertrand
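For example, a minimal sketch (assuming Text keys; the class name is
illustrative):
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
// Compares deserialized keys instead of hand-parsing raw bytes.
public class MyKeyComparator extends WritableComparator {
    protected MyKeyComparator() {
        super(Text.class, true); // true = instantiate keys so compare() gets real objects
    }
    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        return a.compareTo(b); // plug any custom ordering in here
    }
}
// old-API registration: conf.setOutputKeyComparatorClass(MyKeyComparator.class);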
On Thu, Aug 9, 2012 at 4:47 PM, Björn-Elmar Macek wrote:
> H
Hi Rahul,
Better to start a new thread than hijack others :) It helps to keep
the mailing list archives clean.
To learn Java, you need to get some Java books and start off.
If you just want to run the wordcount example, just follow the steps at the
URL below:
http://wiki.apache.org/hadoop/WordCount
Hi again,
this is a direct response to my previous posting with the title "Logs
cannot be created", where logs could not be created (Spill failed). I
got the hint that I should check privileges, but that was not the
problem, because I own the folders that were used for this.
I finally found
That's not the same, as you could put only special users (like
administrators or system users) in the group which is the supergroup.
Other users would not be in that group, so they would not have
rwxrwxrwx access to the whole of HDFS.
2012/8/9 John Armstrong :
> On 08/09/2012 09:48 AM, Wellington Chevr
This was a version dependency issue. The class is not in 0.20.203.0.
From: Artem Ervits [mailto:are9...@nyp.org]
Sent: Wednesday, August 08, 2012 2:34 PM
To: user@hadoop.apache.org
Subject: Setting up HTTP authentication
Hello all,
I followed the 1.0.3 docs to setup http simple authentication. I
Hi Tariq,
I'm not getting the right start...
On Thu, Aug 9, 2012 at 7:59 PM, Mohammad Tariq wrote:
> Hello Rahul,
>
>That's great. That's the best way to learn(I am doing the same :)
> ). Since the installation part is over, I would suggest to get
> yourself familiar with Hdfs and MapReduce f
Hi Tariq,
I am trying to start the wordcount MapReduce job, but I am not getting how
to start and where to start.
I am very new to Java.
Can you help me with how to work with this? Any help will be appreciated.
Hi All,
Please help me get started with Hadoop on CDH; I have installed it on my
local PC. Any help will be appreciated.
On
Sent from my iPhone
On Aug 9, 2012, at 12:46, Pankaj Misra
wrote:
Thanks Ioan for the help and sharing the link, appreciate it.
The symlink as specified below already exists, and the response
of "$ which ld" is
[root@fedora-0 container-executor]# which ld
/bin/ld
Yes, I will sur
On 08/09/2012 09:48 AM, Wellington Chevreuil wrote:
Have you tried setting this myhdfsgroup group in the
dfs.permissions.superusergroup property? If you set this property in
your hdfs-site.xml file, this group will be a super user group, and
should have full permissions anywhere in HDFS.
Unfortunatel
Hi John,
Have you tried setting this myhdfsgroup group in the
dfs.permissions.superusergroup property? If you set this property in
your hdfs-site.xml file, this group will be a super user group, and
should have full permissions anywhere in HDFS.
Ex:
<property>
  <name>dfs.permissions.superusergroup</name>
  <value>myhdfsgroup</value>
</property>
Hi,
If you're running this as a non-superuser (i.e. not the user that's
running the DataNodes), have you set your dfs.datanode.data.dir.perm
to 755 instead of the default 700?
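For reference, that would look like this in hdfs-site.xml (a sketch; pick
the value that matches your security needs):
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>755</value>
</property>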
On Thu, Aug 9, 2012 at 3:06 PM, hadoop wrote:
>
> Hi all
>
>
> I found some warning in the log, and it may the bug of
Hi all!
Can someone please briefly explain the difference? I do not see
deprecation warnings for fs.local.block.size when I run with it set, and
I see two copies of RawLocalFileSystem.java (the other is
local/RawLocalFs.java).
The things I really need to get answers to are:
1. Is the defaul
We should tell this whole mailing list story to the XKCD guy :)
*Fabio Pitzolu*
Consultant - BI & Infrastructure
Mob. +39 3356033776
Telefono 02 87157239
Fax. 02 93664786
*Gruppo Consulenza Innovazione - http://www.gr-ci.com*
2012/8/9
>
>
> LOL. This is the list to unsubscribe.
>
>
>
> P.S.
Hi Anand,
What are the permissions, on dfs.name.dir directory in hdfs-site.xml
Regards
Abhishek
Sent from my iPhone
On Aug 9, 2012, at 8:41 AM, anand sharma wrote:
> Yea Tariq!! It's a fresh installation; I'm doing it for the first time. Hope
> someone will know the error code and the reas
LOL. This is the list to unsubscribe.
P.S. Just kidding.
Original Message
Subject: Hadoop list?
From: Ryan Rosario
Date: Thu, August 09, 2012 1:03 am
To: user@hadoop.apache.org
Is this the correct list for Hadoop help?
Anand, if you are trying a single-node instance, I had written an ugly
script to set up single-node mode.
You can refer to it at https://github.com/nitinpawar/hadoop/
I did face these issues, but mostly they were permissions related.
On Thu, Aug 9, 2012 at 6:23 PM, Owen Duan wrote:
> have
Thanks Tariq,
let me start with that.
On Thu, Aug 9, 2012 at 7:59 PM, Mohammad Tariq wrote:
> Hello Rahul,
>
>That's great. That's the best way to learn(I am doing the same :)
> ). Since the installation part is over, I would suggest to get
> yourself familiar with Hdfs and MapReduce first.
have you tried hadoop namenode -format?
2012/8/9 anand sharma
> Yea Tariq!! It's a fresh installation; I'm doing it for the first time.
> Hope someone will know the error code and the reason for the error.
>
>
> On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq wrote:
>
>> Hi Anand,
>>
>> Have yo
Yea Tariq!! It's a fresh installation; I'm doing it for the first time.
Hope someone will know the error code and the reason for the error.
On Thu, Aug 9, 2012 at 5:35 PM, Mohammad Tariq wrote:
> Hi Anand,
>
> Have you tried any other Hadoop distribution or version also??In
> that case first re
Hi Anand,
Have you tried any other Hadoop distribution or version as well? In
that case, first remove the older one and start fresh.
Regards,
Mohammad Tariq
On Thu, Aug 9, 2012 at 5:29 PM, Mohammad Tariq wrote:
> Hello Rahul,
>
>That's great. That's the best way to learn(I am doing t
John,
Hadoop group permissions follow the BSD model, which is outlined in
the documentation. I think your settings are correct on that
directory, but you may want to check the higher level directories as
well (/path, /path/to, etc).
Hope that helps.
Justin
On Thu, Aug 9, 2012 at 7:55 AM, John Ar
Hello Rahul,
That's great. That's the best way to learn (I am doing the same :)).
Since the installation part is over, I would suggest getting
yourself familiar with HDFS and MapReduce first. Try to do basic
filesystem operations using the HDFS API and run the wordcount
program, if you haven't d
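To illustrate the kind of basic HDFS operations meant here, a minimal
sketch (paths and class name are illustrative):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
public class HdfsBasics {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // reads *-site.xml from the classpath
        FileSystem fs = FileSystem.get(conf);
        Path dir = new Path("/user/demo");
        fs.mkdirs(dir);                                  // create a directory
        fs.copyFromLocalFile(new Path("/tmp/input.txt"), // upload a local file
                             new Path(dir, "input.txt"));
        System.out.println(fs.exists(new Path(dir, "input.txt"))); // verify it is there
        fs.close();
    }
}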
I'm having some trouble with permissions on HDFS. I'm trying to create a file in a directory where
the user belongs to a group that has write permissions, but it doesn't seem to be working.
First, the directory:
myuser$ hadoop fs -ls /path/to/parent/
drwxrwxr-x - hdfs myhdfsgroup 0
Hi Tariq,
I am also new to Hadoop, trying to learn by myself. Can anyone help me with
the same?
I have installed CDH3.
On Thu, Aug 9, 2012 at 6:21 PM, Mohammad Tariq wrote:
> Hello Anand,
>
> Is there any specific reason behind not using ssh??
>
> Regards,
> Mohammad Tariq
>
>
> On Thu, Aug
Thanks all for the replies. Yes, the user has access to that directory, and I
have already formatted the namenode; just for simplicity I am not using ssh,
as I am doing things for the first time.
On Thu, Aug 9, 2012 at 3:58 PM, shashwat shriparv wrote:
> format the filesystem
>
> bin/hadoop namenode -for
Hi,
Thanks a lot everybody for your replies.
Your ideas about cold and hot data using different storage policies prove to
be very interesting.
Regards,
Sourygna Luangsay
Hello Pankaj,
Sorry, I pressed send too fast. The linker can't find some libraries.
Check to see if you have them installed (locate, find, rpm). If not,
install them and try again.
If you have them installed and ld can't find them, add the paths to
the proper environment variable (I think LD_LIBR
Thanks Ioan for the help and sharing the link, appreciate it.
The symlink as specified below already exists, and the response of "$ which
ld" is
[root@fedora-0 container-executor]# which ld
/bin/ld
Yes, I will surely raise a JIRA for this issue if it does not get resolved, and
once I am su
It seems that /bin/ld does not exist, so the compiler cannot perform
linking. Looking at the Fedora docs, it seems that ld is located at
/usr/bin/ld, so you may have to create a symlink to it:
$ ln -s /usr/bin/ld /bin/ld
First check that you have ld installed with: $ which ld
The scripts should also
Ok...
So under Apache Hadoop, how do you specify when and where a
directory will be created on HDFS?
As an example, if I want to create a /coldData directory in HDFS as a place
to store my older data sets, how does that get assigned specifically to a
RAIDed HDFS?
(Or even spe
format the filesystem
bin/hadoop namenode -format
then try to start namenode :)
On Thu, Aug 9, 2012 at 3:51 PM, Mohammad Tariq wrote:
> Hello Anand,
>
> Is there any specific reason behind not using ssh??
>
> Regards,
> Mohammad Tariq
>
>
> On Thu, Aug 9, 2012 at 3:46 PM, anand sharma
Hello Anand,
Is there any specific reason behind not using ssh??
Regards,
Mohammad Tariq
On Thu, Aug 9, 2012 at 3:46 PM, anand sharma wrote:
> Hi, i am just learning the Hadoop and i am setting the development
> environment with CDH3 pseudo distributed mode without any ssh cofiguration
Which user are you starting the namenode as?
If you are not root, does the user have access to the mentioned directory?
On Thu, Aug 9, 2012 at 3:46 PM, anand sharma wrote:
> Hi, i am just learning the Hadoop and i am setting the development
> environment with CDH3 pseudo distributed mode without any ssh co
Hi, I am just learning Hadoop and I am setting up
the development environment with CDH3 in pseudo-distributed mode, without
any ssh configuration, on CentOS 6.2. I can run the sample programs as
usual, but when I try to run the namenode this is the error it logs...
[hive@localhost ~]$ hadoop namenode
12/
Dear All,
I am building the hadoop 0.23.1 release from source with native support. I
have already built/installed the following prerequisites for native support:
1. gcc-c++ 4.7.1
2. protoc 2.4.1
3. autotools chain
4. JDK 1.6.0_33
5. zlib 1.2.5-6
6. lzo 2.06-2
I have also set the following variables
Thanks all. I had a typo in the core-site.xml path. It works now.
It would be better if conf.addResource threw an exception like
FileNotFoundException when it is not able to find the specified XML file.
I like the idea of setting HADOOP_CONF_DIR. I will try that too.
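Since addResource silently ignores a missing file, one workaround is to
check the path yourself before adding it. A minimal sketch (the path is
the one used earlier in this thread; needs java.io.File,
java.io.FileNotFoundException and org.apache.hadoop.fs.Path):
File confFile = new File("/usr/local/hadoop-1.0.2/conf/core-site.xml");
if (!confFile.exists()) {
    throw new FileNotFoundException(confFile.getPath()); // fail fast on a bad path
}
configuration.addResource(new Path(confFile.getAbsolutePath()));
// sanity check: prints the built-in default (file:///) if the file was not picked up
System.out.println(configuration.get("fs.default.name"));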
Regards,
Anand.C
-Original
Also, the Hadoop jars must be on the classpath when you run the app from
the command line or as a jar.
Regards,
Mohammad Tariq
On Thu, Aug 9, 2012 at 2:02 PM, Mohammad Tariq wrote:
> Try this and let me know if it works,
>
> Configuration conf = new Configuration();
>
Also - make sure the fs.default.name parameter is actually in
core-site.xml, not hdfs-site.xml
On 9 August 2012 09:32, Mohammad Tariq wrote:
> Try this and let me know if it works,
>
> Configuration conf = new Configuration();
> conf.addResource(new
> Path("YOUR_H
Try this and let me know if it works:
Configuration conf = new Configuration();
conf.addResource(new Path("YOUR_HADOOP_HOME/conf/core-site.xml"));
conf.addResource(new Path("YOUR_HADOOP_HOME/conf/hdfs-site.xml"));
FileSystem fs = FileSystem.get(conf);
Instead of setting the xml files programmatically, why not set the
HADOOP_CONF_DIR env variable to '/usr/local/hadoop-1.0.2/conf/'?
That way, you can just create a new Configuration() object and the files
will be loaded for you without any extra work.
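A minimal sketch of that approach (the conf path is the one from this
thread; note it is the hadoop launcher script that puts HADOOP_CONF_DIR on
the classpath):
// shell: export HADOOP_CONF_DIR=/usr/local/hadoop-1.0.2/conf
// Java: with the conf dir on the classpath, no addResource calls are needed
Configuration conf = new Configuration(); // core-site.xml and hdfs-site.xml load automatically
FileSystem fs = FileSystem.get(conf);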
On 9 August 2012 09:28, Chandra Mohan, Ananda
Hi,
I have added the other XML files too. But the issue is that the configuration
object is not getting updated with the XML contents.
When I add
System.out.println(configuration.get("fs.default.name"));
I am not getting my hdfs url which I have in core-site.xml. When I run my code
in eclipse,
unsubscribe
On 2012-8-9 at 1:07 PM, "Saniya Khalsa" wrote:
>
>
Hello there,
Add the "conf/core-site.xml" file as well.
Regards,
Mohammad Tariq
On Thu, Aug 9, 2012 at 12:37 PM, Chandra Mohan, Ananda Vel Murugan
wrote:
> Hi,
>
>
>
> I am trying to add a file to HDFS programmatically.
>
>
>
> In my code, I am adding hdfs-site.xml and other xml to
Hi,
I am trying to add a file to HDFS programmatically.
In my code, I am adding hdfs-site.xml and other XML files to the Hadoop
Configuration object as follows:
Configuration configuration = null;
configuration.addResource(new
URL("file:///usr/local/hadoop-1.0.2/conf/hdfs-site.xml"));
configuration.addRe