Hi,
I am not doing any activity on my single-node cluster, which is running
2.1.0-beta, and yet I observed that it went into safe mode by itself
after a while. I was looking at the NameNode log and see many entries of
this kind. Can anything be interpreted from them?
2013-07-12 09:06:11,256
hi,
from the log:
NameNode low on available disk space. Entering safe mode.
this is the root cause.
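For reference, the free-space threshold behind that check is
dfs.namenode.resource.du.reserved, which defaults to 104857600 bytes
(100 MB) per NN storage volume. If the volume is genuinely short on space,
freeing space is the real fix, but the threshold itself can be tuned in
hdfs-site.xml; a minimal sketch (the value shown is just the default):

<property>
  <name>dfs.namenode.resource.du.reserved</name>
  <!-- bytes of free space the NameNode requires on each storage volume -->
  <value>104857600</value>
</property>

Once enough space is available again, you can also leave safe mode
manually with 'hdfs dfsadmin -safemode leave'.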
On Jul 15, 2013 2:45 PM, Krishna Kishore Bonagiri
write2kish...@gmail.com wrote:
Hi,
I am not doing any activity on my single-node cluster, which is running
2.1.0-beta, and yet I observed that it
Hi,
please check the available space on the NN storage directory.
Thanks Regards
Venkat
On Mon, Jul 15, 2013 at 12:14 PM, Krishna Kishore Bonagiri
write2kish...@gmail.com wrote:
Hi,
I am not doing any activity on my single-node cluster, which is running
2.1.0-beta, and yet I observed that it went
I had enough space on the disk that is used, around 30 GB.
Thanks,
Kishore
On Mon, Jul 15, 2013 at 1:30 PM, Venkatarami Netla
venkatarami.ne...@cloudwick.com wrote:
Hi,
please check the available space on the NN storage directory.
Thanks Regards
Venkat
On Mon, Jul 15, 2013 at
Hi Krishna,
Can you please send screenshots of the NameNode web UI?
Thanks Aditya.
On Mon, Jul 15, 2013 at 1:54 PM, Krishna Kishore Bonagiri
write2kish...@gmail.com wrote:
I had enough space on the disk that is used, around 30 GB.
Thanks,
Kishore
On Mon, Jul 15, 2013 at 1:30
please check dfs.datanode.du.reserved in the hdfs-site.xml
On Jul 15, 2013 4:30 PM, Aditya exalter adityaexal...@gmail.com wrote:
Hi Krishna,
Can you please send screenshots of the NameNode web UI?
Thanks Aditya.
On Mon, Jul 15, 2013 at 1:54 PM, Krishna Kishore Bonagiri
Hi,
I have restarted my cluster after removing the data directory and
formatting the namenode. So, is this screenshot still useful for you or do
you want it only after I reproduce the issue?
Thanks,
Kishore
On Mon, Jul 15, 2013 at 1:59 PM, Venkatarami Netla
venkatarami.ne...@cloudwick.com
I don't have it in my hdfs-site.xml, so presumably the default
value is used.
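If it helps, you should be able to check the value actually in effect with
getconf (assuming 2.1.0-beta behaves like the other 2.x releases here):

hdfs getconf -confKey dfs.datanode.du.reserved
# prints the effective value; the default is 0, i.e. no space reserved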
On Mon, Jul 15, 2013 at 2:29 PM, Azuryy Yu azury...@gmail.com wrote:
please check dfs.datanode.du.reserved in the hdfs-site.xml
On Jul 15, 2013 4:30 PM, Aditya exalter adityaexal...@gmail.com wrote:
Hi
Thank you, Ram
I have configured core-site.xml as follows:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/vol/persistent-hdfs</value>
  </property>
</configuration>
Hi All,
SET hive.exec.compress.output=true;
SET io.seqfile.compression.type=BLOCK;
SET mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec;
set mapred.job.priority=VERY_HIGH;
set mapred.job.name=loading data from YYY to XX;
insert overwrite table X partition
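For anyone reading along, a complete statement of that shape might look
like the following; the table, partition, and column names here are made
up, not the original poster's:

INSERT OVERWRITE TABLE X PARTITION (ds='2013-07-12')  -- hypothetical partition spec
SELECT col1, col2
FROM YYY
WHERE ds='2013-07-12';

With the SET statements above, the partition's output files will be
LZO-compressed, block-level SequenceFiles.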
If you want to build a release tarball, you can use the ant target 'tar'. If you
want native libraries built, you need to set the 'compile.native' flag to true.
'forrest.home' needs to be set to the Apache Forrest location in order to build
the Java docs.
So you will have a command like the following (the Forrest location below is a
placeholder):
ant -Dcompile.native=true -Dforrest.home=/path/to/apache-forrest tar
Hi,
I have an NFS partition of 6 TB and I am using a (4+1)-node Hadoop cluster;
below is my current HDFS configuration.
My question: since I am creating sub-folders under the same partition, it
really does not make sense to take advantage of Hadoop's fail-safe behavior, as
ultimately all of them point to
I was looking at the HDFS block storage and noticed a couple of things:
(1) all block files are in a flat directory structure; (2) there is a meta
file for each block file. This leads me to ask:
-- Where can I find good reading that describes this level of HDFS internals?
-- Is the flat storage
Thanks! They are fine, I was just confused seeing them talked about in forums.
John
-Original Message-
From: Harsh J [mailto:ha...@cloudera.com]
Sent: Friday, July 05, 2013 8:01 PM
To: user@hadoop.apache.org
Subject: Re: Accessing HDFS
These APIs (ClientProtocol, DFSClient) are not
1) Right now, I would say jira and code.
2) It is not really a flat storage. It 'folds' itself when needed.
3) At least checksums. There are jiras about whether it should be somehow
in the block itself.
A beginning : https://issues.apache.org/jira/browse/HADOOP-1134
A (dead?) discussion :
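For what it's worth, a DataNode data directory typically ends up looking
something like this (the block IDs and genstamps here are made up); the
subdir* directories are the 'folding', created when a directory
accumulates too many blocks:

current/
  blk_-7351243132487491133             # the raw block data
  blk_-7351243132487491133_1042.meta   # checksums for that block
  subdir0/
    blk_5864431437159968522
    blk_5864431437159968522_1098.meta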
Hi,
When giving users access to a Hadoop cluster they need a few XML config
files (like hadoop-site.xml).
They put these somewhere on their PC and start running their jobs on the
cluster.
Now when you're changing the settings you want those users to use (for
example you changed some
Hi,
Last week we had a discussion at work regarding setting up our new Hadoop
cluster(s).
One of the things that has changed is that the importance of the Hadoop
stack is growing, so we want it to be more available.
One of the points we talked about was setting up the cluster in such a way
that the
Hi Niels,
it depends on the number of replicas and the Hadoop rack configuration
(level).
It's possible to have replicas in the two datacenters.
What rack configuration do you plan to use? You can implement your
own and define it using the topology.node.switch.mapping.impl
property.
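If you don't want to write a custom mapping class, the default
script-based mapping is usually enough; a sketch of the core-site.xml
side, with a placeholder script path:

<property>
  <name>topology.script.file.name</name>
  <!-- the script receives host names/IPs and prints one rack path per
       input, e.g. /dc1/rack1, so a datacenter level is easy to encode -->
  <value>/etc/hadoop/conf/topology.sh</value>
</property>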
According to your own analysis, you wouldn't actually be more available,
even though that was your aim.
Did you consider having two separate clusters? One per datacenter, with an
automatic copy of the data?
I understand that load balancing of work and data would not be easy but it
seems to me a simple strategy
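For the automatic copy part, DistCp is the usual tool between two live
clusters; a sketch, with placeholder NameNode addresses:

hadoop distcp -update hdfs://nn-dc1:8020/data hdfs://nn-dc2:8020/data

Run it periodically (e.g. from cron or Oozie) and only files that differ
get copied.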
Yes, it is useful.
On Jul 16, 2013 5:40 AM, Niels Basjes ni...@basjes.nl wrote:
Hi,
When giving users access to a Hadoop cluster they need a few XML config
files (like hadoop-site.xml).
They put these somewhere on their PC and start running their jobs on the
cluster.
Now when you're
I think you should recompile the program and then run it again
2013/7/13 Anit Alexander anitama...@gmail.com
Hello,
I am encountering a problem in a CDH4 environment.
I can successfully run the MapReduce job in the Hadoop cluster. But when
I migrated the same MapReduce job to my CDH4
2013-07-12 11:04:26,002 WARN
org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker: Space
available on volume 'null' is 0, which is below the configured reserved
amount 104857600
This is interesting. It's calling your volume 'null', which may be more
of a superficial bug.
What is
hi,
what is your dfs.namenode.name.dir set to?
Check the permissions on the dfs.namenode.name.dir directory and make sure
your user can access it.
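Concretely, something along these lines (the path is whatever getconf
prints on your machine):

hdfs getconf -confKey dfs.namenode.name.dir
ls -ld /path/from/getconf    # owner and permissions must allow the NN user
df -h /path/from/getconf     # and the volume needs free space, per the log above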
2013/7/16 Harsh J ha...@cloudera.com
2013-07-12 11:04:26,002 WARN
org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker: Space
available on volume 'null' is 0,
you need to recompile the hadoop-lzo jar and .so when going from 0.20.x to 1.x.
On 2013/7/15 18:55, pradeep T wrote:
After the upgrade I faced a lot of jar file issues and cleared each one with a
lot of searching on the internet.
Yes, I did recompile, but I seem to face the same problem. I am running the
MapReduce job with a custom InputFormat. I am not sure if there is some
change in the API needed to get the splits correct.
Regards
On Tue, Jul 16, 2013 at 6:24 AM, 闫昆 yankunhad...@gmail.com wrote:
I think you should recompile
Hi all,
What is the policy for choosing a node for a reducer in MapReduce (Hadoop
v1.2.0)?
For example:
if a cluster has 5 slaves, and each slave can serve 2 maps and 2 reduces,
and there is a job that occupies 5 mappers and 3 reducers, how does the
JobTracker assign reducers to these nodes (choosing