at 4:57 PM, Uma Maheswara Rao G
mahesw...@huawei.com wrote:
Hi Arinto,
Please disable this feature on smaller clusters:
dfs.client.block.write.replace-datanode-on-failure.policy
The reason for this exception is that you have replication set to 3, but from the logs
it looks like you have only 2 nodes in the cluster. When you first created the
pipeline we will
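For reference, a minimal client-side sketch of that change (the property name comes from the thread above; the NEVER value and the companion .enable switch are assumptions, so please verify them against your Hadoop version). The same values can of course be put in hdfs-site.xml on the client instead of being set programmatically.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class SmallClusterClientConf {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Do not try to replace a failed datanode in the write pipeline (small clusters).
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
    // Or switch the feature off entirely (assumed companion property):
    // conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", false);
    FileSystem fs = FileSystem.get(conf);
    System.out.println("Connected to " + fs.getUri());
    fs.close();
  }
}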
Hi Rob,
DFSInputStream: the InterfaceAudience for this class is private, so you should
not use this class directly. It mainly implements the actual core functionality
of read, and it is a DFS-specific implementation only.
HdfsDataInputStream: the InterfaceAudience for this class is public and
start-all.sh will not carry any arguments to pass to the nodes.
Start with start-dfs.sh,
or start the namenode directly with the upgrade option: ./hadoop namenode -upgrade
Regards,
Uma
From: yogesh dhari [yogeshdh...@live.com]
Sent: Thursday, November 22, 2012 12:23 PM
If you format the namenode, you need to clean up the storage directories of the DataNode as
well if they already contain data. The DN also saves a namespaceID and compares it
with the NN namespaceID. If you format the NN, the namespaceID will change while
the DN may still have the older namespaceID. So, just
Adding to Andy's points:
To clarify: I think 0.23 does not claim the HA feature.
Also, Hadoop-2 HA follows an Active-Standby model.
Regards,
Uma
From: Andy Isaacson [a...@cloudera.com]
Sent: Thursday, November 15, 2012 8:19 AM
To: user@hadoop.apache.org
Subject:
Which version of Hadoop are you using?
Do you have all DNs running? Can you check the UI report to see whether all DNs are alive?
Can you check whether the DN disks are good or not?
Can you grep the NN and DN logs for one of the corrupt block IDs from below?
Regards,
Uma
node machine and should show the previous content, or am I wrong?
Why is it happening, and why is it not coming out of safe mode by itself?
Please suggest.
Regards
Yogesh Kumar
From: Uma Maheswara Rao G [mahesw...@huawei.com]
Sent: Monday, October 29, 2012 5:10
Replication factor is a per-file option, so you may have to write a small
program that iterates over all files and sets the replication factor to the
desired value.
API: FileSystem#setReplication
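A rough sketch of such a program (the root path and the target factor of 2 below are placeholders, not values from this thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SetReplicationRecursively {
  // Walk the namespace from a root path and set the desired replication on every file.
  static void setReplication(FileSystem fs, Path p, short factor) throws Exception {
    for (FileStatus stat : fs.listStatus(p)) {
      if (stat.isDir()) {
        setReplication(fs, stat.getPath(), factor);
      } else {
        fs.setReplication(stat.getPath(), factor);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    setReplication(fs, new Path("/"), (short) 2);
    fs.close();
  }
}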
Regards,
Uma
On Wed, Sep 5, 2012 at 11:39 PM, Uddipan Mukherjee
uddipan_mukher...@infosys.com wrote:
Hi Stuti,
Yes, I remember we committed this only to trunk at that time.
Regards,
Uma
From: Stuti Awasthi [stutiawas...@hcl.com]
Sent: Friday, August 31, 2012 5:54 PM
To: user@hadoop.apache.org
Subject: HADOOP-7178 patch is not present in Hadoop-1.0.3
Hi Jan,
Don't confuse this with the backupnode/checkpoint nodes here.
The new HA architecture mainly targets building HA with NameNode states:
1) Active Namenode
2) Standby Namenode
When you start the NNs, both will start in standby mode by default.
Then you can switch one NN to the active state by
Hi Mingxi,
In your thread dump, did you check the DataStreamer thread? Is it running?
If the DataStreamer thread is not running, then this issue is most likely the same as
HDFS-2850.
Did you find any OOMEs in your clients?
Regards,
Uma
From: Mingxi Wu
Mark,
Did any other client delete the file while the write was in progress?
Could you please grep the namenode logs for this file
/user/mark/output33/_temporary/_attempt_201202090811_0005_m_000247_0/part-00247
to see whether there were any delete requests?
From: Mark question
It looks like you are not using any compression in your code. Hadoop has some native
libraries to load, mainly for compression codecs.
When you want to use those compression techniques, you need to compile with the
compile.native option enabled and also set the java library path. If you are
not using
What Java heap space have you configured
for the property mapred.child.java.opts?
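As a hedged illustration, the child-task heap is usually raised through that property, either in mapred-site.xml or on the job configuration; the -Xmx512m below is only an example value:

import org.apache.hadoop.mapred.JobConf;

public class ChildHeapExample {
  public static void main(String[] args) {
    JobConf job = new JobConf(ChildHeapExample.class);
    // Give each spawned map/reduce child task a 512 MB heap (example value).
    job.set("mapred.child.java.opts", "-Xmx512m");
    System.out.println(job.get("mapred.child.java.opts"));
  }
}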
From: Tim Broberg [tim.brob...@exar.com]
Sent: Tuesday, February 07, 2012 3:20 PM
To: common-user@hadoop.apache.org
Subject: out of memory running examples
I'm trying to
This looks like an HDFS-specific question; please send it to the correct mailing
list. CC'ed to mapreduce-user in case you have not registered for hdfs.
Please look at the previous discussion on the mailing list about your question.
Sorry, ignore my previous answer; I was just thinking about normal HDFS files :-).
In your question, they will be decided by your job itself. Sorry for the
confusion.
From: Uma Maheswara Rao G [mahesw...@huawei.com]
Sent: Saturday, February 04, 2012 5:46 PM
To: hdfs
Can you try deleting this directory manually?
Please check whether another process is already running with this directory configured.
Regards,
Uma
From: Vijayakumar Ramdoss [nellaivi...@gmail.com]
Sent: Thursday, February 02, 2012 1:27 AM
To:
Can you please check the heap usage in the UI? Then we can confirm whether
the Java heap is growing or not.
top will also count native memory usage, and NIO uses direct ByteBuffers
internally.
This is a good write-up from Jonathan
Hi Shesha,
Take a look at org.apache.hadoop.hdfs.server.datanode.BlockSender.java
Regards,
Uma
From: Sesha Kumar [sesha...@gmail.com]
Sent: Monday, January 16, 2012 7:50 PM
To: hdfs-user@hadoop.apache.org
Subject: Data processing in DFSClient
Hey guys,
Sorry
It looks like the namenode is not providing the nodes. Can you please check whether the
Datanode is running properly?
From: Apurv Verma [dapu...@gmail.com]
Sent: Wednesday, January 11, 2012 8:15 AM
To: hdfs-user@hadoop.apache.org
Subject: HDFS Problems After Programmatic Access
When I
, January 05, 2012 11:36 PM
To: hdfs-...@hadoop.apache.org
Subject: Re: Timeouts in Datanodes while block scanning
What version of HDFS? This question might be more appropriate for hdfs-user@.
--
Aaron T. Myers
Software Engineer, Cloudera
On Thu, Jan 5, 2012 at 8:59 AM, Uma Maheswara Rao G
://pastebin.com/5s0yhgnj
output http://paste.ubuntu.com/780807/
Hope you will understand and extend your helping hand towards us.
Have a nice day.
Regards
Humayun
On 23 December 2011 17:31, Uma Maheswara Rao G mahesw...@huawei.com
wrote:
Hi Humayun ,
Let's assume
Hey Praveenesh,
You can also start the secondary namenode by just giving the option ./hadoop
secondarynamenode
A DN cannot act as a secondary namenode. The basic work of the secondary namenode is to
do checkpointing and keep the edits in sync with the Namenode up to the last
checkpointing period. A DN is to
From: Humayun kabir [humayun0...@gmail.com]
Sent: Thursday, December 22, 2011 10:34 PM
To: common-user@hadoop.apache.org; Uma Maheswara Rao G
Subject: Re: Hadoop configuration
Hello Uma,
Thanks for your cordial and quick reply. It would be great if you explain what
At what load are you running the cluster?
It looks like the NN is not able to choose the targets for the block.
When choosing targets, the NN will check many conditions, such as the thread
count in the DNs, the space allocated for the node, etc.
If the NN finds any of the nodes in such a situation, it will not choose
Yes, you can use utility methods from IOUtils,
e.g.:
FileOutputStream fo = new FileOutputStream(file);
IOUtils.copyBytes(fs.open(fileName), fo, 1024, true);
here fs is the DFS FileSystem object.
The other option is to make use of the FileSystem APIs,
e.g.:
FileSystem fs = new DistributedFileSystem();
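To make the fragment above self-contained, here is a hedged end-to-end sketch that copies an HDFS file to the local filesystem; the paths are placeholders:

import java.io.FileOutputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class CopyToLocalExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();      // picks up core-site.xml/hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);          // the DFS FileSystem
    OutputStream out = new FileOutputStream("/tmp/part-00000.copy");
    // copyBytes closes both streams when the last argument is true.
    IOUtils.copyBytes(fs.open(new Path("/user/foo/part-00000")), out, 1024, true);
    // The FileSystem API can do the same in a single call:
    // fs.copyToLocalFile(new Path("/user/foo/part-00000"), new Path("/tmp/part-00000.copy2"));
  }
}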
A workaround is available in https://issues.apache.org/jira/browse/HADOOP-7489;
try adding those options in hadoop-env.sh.
Regards,
Uma
From: Idris Ali [psychid...@gmail.com]
Sent: Friday, December 16, 2011 8:16 PM
To: common-user@hadoop.apache.org;
The property should be set on the Namenode side.
Please check your classpath to confirm whether the conf directory where you
updated the property is really being picked up.
From: Andrea Valentini Albanelli [andrea.valent...@pg.infn.it]
Sent: Wednesday, December 14, 2011 8:57 PM
To:
AFAIK, the backup node was introduced from version 0.21 onwards.
From: praveenesh kumar [praveen...@gmail.com]
Sent: Wednesday, December 07, 2011 12:40 PM
To: common-user@hadoop.apache.org
Subject: HDFS Backup nodes
Does hadoop 0.20.205 support configuring HDFS
It looks like you are hitting HDFS-2553.
The cause might be that you cleared the data directories directly without a DN
restart. The workaround would be to restart the DNs.
Regards,
Uma
From: Stephen Boesch [java...@gmail.com]
Sent: Tuesday, November 29, 2011 8:53 PM
To:
You can find the code directly in FSNamesystem#allocateBlock.
It is just a random long number, and the NN will ensure that the block ID has not
already been created.
Regards,
Uma
From: kartheek muthyala [kartheek0...@gmail.com]
Sent: Tuesday, November 29, 2011 6:07 PM
Please recheck your cluster once to confirm whether all the nodes have the same version
of the jars.
It looks like the RPC client and servers are on different versions.
From: Nitin Khandelwal [nitin.khandel...@germinait.com]
Sent: Monday, November 28, 2011 5:32 PM
To:
I think you might not have completed even a single block write.
The length will be updated in the NN after completing the block. Currently, partial
block lengths are not included in the length calculation.
Regards,
Uma
From: Inder Pall [inder.p...@gmail.com]
From the Java API, FileSystem#getFileBlockLocations should give you the
block locations.
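A small hedged sketch of how that call is typically used (the file path is a placeholder):

import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationsExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/user/foo/data.txt");
    FileStatus stat = fs.getFileStatus(file);
    // One BlockLocation per block in the requested byte range.
    BlockLocation[] blocks = fs.getFileBlockLocations(stat, 0, stat.getLen());
    for (BlockLocation block : blocks) {
      System.out.println(block.getOffset() + " -> " + Arrays.toString(block.getHosts()));
    }
    fs.close();
  }
}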
Regards,
Uma
From: Praveen Sripati [praveensrip...@gmail.com]
Sent: Monday, November 28, 2011 10:01 PM
To: hdfs-user@hadoop.apache.org
Subject: Re: how to find data nodes on
Hey Shesha,
In Federated HDFS, the same DataNodes can work with multiple NameNodes,
whereas in your setup the complete cluster itself is different.
I would suggest you take a look at HDFS-2471, where Suresh has explained this very neatly
and briefly.
Regards,
Uma
AFAIK, there is no facility like this in HDFS through the command line.
One option is to write a small client program that collects the files from the root based
on your condition and invokes delete on them.
Regards,
Uma
From: Raimon Bosch [raimon.bo...@gmail.com]
Sent:
Hi,
The current volume-choosing policy is round-robin. Since the DN got a new
disk, the balancer will move some blocks to this node, but the volume choice
will be the same when placing a block. AFAIK, it won't do any special balancing
between disks on the same node. Please correct me if I
Also, I am surprised at how you are writing a mapreduce application here. Map and
reduce work with key-value pairs.
From: Uma Maheswara Rao G
Sent: Tuesday, November 22, 2011 8:33 AM
To: common-user@hadoop.apache.org; core-u...@hadoop.apache.org
Subject
http://svn.apache.org/repos/asf/hadoop/common/branches/
The code for all branches is under this.
You can choose the one you need.
Regards,
Uma
From: mohmmadanis moulavi [anis_moul...@yahoo.co.in]
Sent: Tuesday, November 15, 2011 6:00 PM
To:
Yes, you can follow that.
mvn eclipse:eclipse will generate the Eclipse-related files. After that, import them
directly into your Eclipse.
Note: the repository links need to be updated; hdfs and mapreduce have been moved inside
the common folder.
Regards,
Uma
From: Amir Sanjar
eclipse:eclipse, for versions 0.23+, correct?
From: Uma Maheswara Rao G [mahesw...@huawei.com]
Sent: Monday, November 14, 2011 10:11 AM
To: common-user@hadoop.apache.org
Subject: RE: setting up eclipse env for hadoop
Yes, you can follow that.
mvn eclipse:eclipse
- Original Message -
From: Arko Provo Mukherjee arkoprovomukher...@gmail.com
Date: Tuesday, November 8, 2011 1:26 pm
Subject: Issues with Distributed Caching
To: mapreduce-user@hadoop.apache.org
Hello,
I am having the following problem with Distributed Caching.
*In the driver
- Original Message -
From: donal0412 donal0...@gmail.com
Date: Tuesday, November 8, 2011 1:04 pm
Subject: dfs.write.packet.size set to 2G
To: hdfs-user@hadoop.apache.org
Hi,
I want to store lots of files in HDFS, the file size is = 2G.
I don't want the file to be split into blocks, because
You can look at BlockPoolSliceScanner#scan method. This is in trunk code.
You can find this logic in DataBlockScanner#run in earlier versions.
Regards,
Uma
- Original Message -
From: kartheek muthyala kartheek0...@gmail.com
Date: Monday, November 7, 2011 7:31 pm
Subject: Any daemon?
forwarding to mapreduce
---BeginMessage---
Am I being completely silly asking about this? Does anyone know?
On Wed, Nov 2, 2011 at 6:27 PM, Meng Mao meng...@gmail.com wrote:
Is there any mechanism in place to remove failed task attempt directories
from the TaskTracker's jobcache?
It
in this issue
and
tried different ways but no result.^^
BS.
Masoud
On 11/03/2011 08:34 PM, Uma Maheswara Rao G 72686 wrote:
It won't display anything on the console.
Only if you get an error while executing the command will it
display something on the console. In your case it might have executed
This problem may occur if you don't configure the host mappings properly.
Can you check whether your tasktrackers are pingable from each other with the
configured hostnames?
Regards,
Uma
- Original Message -
From: Russell Brown misterr...@gmail.com
Date: Friday, November 4, 2011 9:00 pm
- Original Message -
From: Russell Brown misterr...@gmail.com
Date: Friday, November 4, 2011 9:11 pm
Subject: Re: Never ending reduce jobs, error Error reading task
outputConnection refused
To: mapreduce-user@hadoop.apache.org
On 4 Nov 2011, at 15:35, Uma Maheswara Rao G 72686 wrote
- Original Message -
From: Russell Brown misterr...@gmail.com
Date: Friday, November 4, 2011 9:18 pm
Subject: Re: Never ending reduce jobs, error Error reading task
outputConnection refused
To: mapreduce-user@hadoop.apache.org
On 4 Nov 2011, at 15:44, Uma Maheswara Rao G 72686 wrote
It looks like the folder was deleted before the file was completed.
In HDFS, files can be deleted at any time; the application needs to take
care of file completeness depending on its usage.
Do you have any DFSClient-side logs in mapreduce showing when exactly the delete command
was issued?
- Original
- Original Message -
From: kartheek muthyala kartheek0...@gmail.com
Date: Thursday, November 3, 2011 11:23 am
Subject: Packets-Block
To: common-user@hadoop.apache.org
Hi all,
I need some info related to the code section which handles the
following operations.
Basically DataXceiver.c
.
On Thu, Nov 3, 2011 at 12:55 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
- Original Message -
From: kartheek muthyala kartheek0...@gmail.com
Date: Thursday, November 3, 2011 11:23 am
Subject: Packets-Block
To: common-user@hadoop.apache.org
Hi all,
I
this folder permission via cygwin, NO RESULT.
I'm really confused...
any idea please...?
Thanks,
B.S
On 11/01/2011 05:38 PM, Uma Maheswara Rao G 72686 wrote:
It looks like that is a permissions-related issue on the local dirs.
There is an issue filed in mapred related to this problem:
https
Can you please give some trace?
- Original Message -
From: Masoud mas...@agape.hanyang.ac.kr
Date: Tuesday, November 1, 2011 11:08 am
Subject: under cygwin JUST tasktracker run by cyg_server user, Permission
denied .
To: common-user@hadoop.apache.org
Hi
I have problem in running
.
On 11/01/2011 04:33 PM, Uma Maheswara Rao G 72686 wrote:
Can you please give some trace?
- Original Message -
From: Masoudmas...@agape.hanyang.ac.kr
Date: Tuesday, November 1, 2011 11:08 am
Subject: under cygwin JUST tasktracker run by cyg_server user,
Permission denied
If you want to trace one particular block associated with a file, you can first
check the file name and find the NameSystem.allocateBlock: entry in your NN logs.
There you can find the allocated block ID. After this, you can just grep for this
block ID in your huge logs. Take the timestamps for each
- Original Message -
From: Jay Vyas jayunit...@gmail.com
Date: Saturday, October 29, 2011 8:27 pm
Subject: can't format namenode
To: common-user@hadoop.apache.org
Hi guys: In order to fix some issues I'm having (recently posted),
I've decided to try to make sure my name node is
- Original Message -
From: Josu Lazkano josu.lazk...@barcelonamedia.org
Date: Thursday, October 27, 2011 9:38 pm
Subject: Permission denied for normal users
To: hdfs-user@hadoop.apache.org
Hello list, I am new to Hadoop. I configured a 3-slave and 1-master
Hadoop cluster.
The problem
Hi,
First of all, welcome to Hadoop.
- Original Message -
From: panamamike panamam...@hotmail.com
Date: Sunday, October 23, 2011 8:29 pm
Subject: Need help understanding Hadoop Architecture
To: core-u...@hadoop.apache.org
I'm new to Hadoop. I've read a few articles and presentations
- Original Message -
From: Ossi los...@gmail.com
Date: Friday, October 21, 2011 2:57 pm
Subject: lost data with 1 failed datanode and replication factor 3 in 6 node
cluster
To: common-user@hadoop.apache.org
hi,
We managed to lose data when 1 datanode broke down in a cluster of 6
- Original Message -
From: Mark question markq2...@gmail.com
Date: Saturday, October 22, 2011 5:57 am
Subject: Remote Blocked Transfer count
To: common-user common-user@hadoop.apache.org
Hello,
I wonder if there is a way to measure how many of the data blocks
have transferred over
use case can you guys
point to?
I am not sure what your exact question is here. Can you please clarify more on
this?
~Kartheek
On Mon, Oct 17, 2011 at 12:53 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
AFAIK, the append option is there in the 20Append branch. It mainly
supports
- Original Message -
From: bourne1900 bourne1...@yahoo.cn
Date: Tuesday, October 18, 2011 3:21 pm
Subject: could not complete file...
To: common-user common-user@hadoop.apache.org
Hi,
There are 20 threads which put files into HDFS ceaselessly; every
file is 2k.
When 1 million files
get your question clearly.
~Kartheek
On Tue, Oct 18, 2011 at 12:14 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
- Original Message -
From: kartheek muthyala kartheek0...@gmail.com
Date: Tuesday, October 18, 2011 11:54 am
Subject: Re: Does hadoop support append
- Original Message -
From: Oleg Ruchovets oruchov...@gmail.com
Date: Tuesday, October 18, 2011 4:11 pm
Subject: execute hadoop job from remote web application
To: common-user@hadoop.apache.org
Hi, what is the way to execute a hadoop job on a remote cluster? I
want to
execute my hadoop
access for this site.
Also, please go through these docs:
http://hadoop.apache.org/common/docs/r0.20.2/mapred_tutorial.html#Example%3A+WordCount+v2.0
Here is the WordCount example.
On Tue, Oct 18, 2011 at 1:13 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
- Original
AFAIK, the append option is there in the 20Append branch; it mainly supports sync, but
there are some issues with it.
The same has been merged to the 20.205 branch and will be released soon (rc2
available). Many bugs have also been fixed in this branch; as per our basic testing
it is pretty good as of now. Need to
So is there a client program to call this?
Can one write their own simple client to call this method from all
disks on the cluster?
How about a mapreduce job to collect from all disks on the cluster?
On 10/15/11 4:51 AM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
?
P.S. I'd moved this conversation to hdfs-user@ earlier on, but now I
see it being cross-posted into mr-user, common-user, and common-dev --
Why?
On Mon, Oct 17, 2011 at 9:25 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
We can write a simple program and you can call this API
- Original Message -
From: Mayuran Yogarajah mayuran.yogara...@casalemedia.com
Date: Tuesday, October 18, 2011 4:24 am
Subject: Hadoop node disk failure - reinstall question
To: common-user@hadoop.apache.org common-user@hadoop.apache.org
One of our nodes died today, it looks like the
Which version of Hadoop are you using?
Please check the recent discussion, which will help you with this problem:
http://search-hadoop.com/m/PPgvNPUoL2subj=Re+Starting+Datanode
Regards,
Uma
- Original Message -
From: Majid Azimi majid.merk...@gmail.com
Date: Sunday, October 16,
Are you able to ping the other node with the configured hostnames?
Make sure that you are able to ping the other machine with the
hostname configured in the /etc/hosts files.
Regards,
Uma
- Original Message -
From: praveenesh kumar praveen...@gmail.com
Date: Sunday, October 16, 2011
19:52, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
Are you able to ping the other node with the configured hostnames?
Make sure that you are able to ping the other machine
with the
hostname configured in the /etc/hosts files.
Regards,
Uma
- Original Message
/** Return the disk usage of the filesystem, including total capacity,
 *  used space, and remaining space */
public DiskStatus getDiskStatus() throws IOException {
  return dfs.getDiskStatus();
}
DistributedFileSystem has the above API on the Java API side.
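A hedged usage sketch of that API: the cast and the DiskStatus accessor names (and its assumed location inside DFSClient) are from memory of the older releases, so please double-check them against your version.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DFSClient.DiskStatus;   // assumed location of DiskStatus; verify
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class DiskStatusExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    DistributedFileSystem dfs = (DistributedFileSystem) fs;  // only valid against HDFS
    DiskStatus ds = dfs.getDiskStatus();
    System.out.println("capacity  = " + ds.getCapacity());
    System.out.println("dfs used  = " + ds.getDfsUsed());
    System.out.println("remaining = " + ds.getRemaining());
    fs.close();
  }
}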
Regards,
Uma
- Original
I think below can give you more info about it.
http://developer.yahoo.com/blogs/hadoop/posts/2009/08/the_anatomy_of_hadoop_io_pipel/
Nice explanation by Owen here.
Regards,
Uma
- Original Message -
From: Yang Xiaoliang yangxiaoliang2...@gmail.com
Date: Wednesday, October 5, 2011 4:27 pm
Yes, the FileStatus class would be the equivalent for list.
FileStatus has the APIs isDir and getPath; both of these APIs can satisfy
your further usage. :-)
I think one small difference is that FileStatus will ensure sorted order.
Regards,
Uma
- Original Message -
From: John Conwell
Hi,
It looks to me that the problem is with your NFS; it is not supporting locks. Which
version of NFS are you using?
Please check your NFS locking support by writing a simple program for file
locking.
I think NFSv4 supports locking (I have not tried it).
http://nfs.sourceforge.net/
A6. What are the
Distcp runs as a mapreduce job.
Here the tasktrackers require the hostname mappings to contact the other nodes.
Please configure the mapping correctly on both machines and try.
Regards,
Uma
- Original Message -
From: trang van anh anh...@vtc.vn
Date: Wednesday, October 5, 2011 1:41 pm
Hello Kiran,
Can you check whether that block is present in the DN, and check the generation timestamp
in the meta file (if you are aware of it)?
Can you grep for blk_-8354424441116992221 in your logs and paste it here?
We have seen this when recovery is in progress and a read happens in parallel (in 0.20x
versions). If
FileSystem objects are cached in the JVM.
When it tries to get the FS object using FileSystem.get(..) (a sequence file
will use it internally), it will return the same fs object if the scheme and authority
of the URI are the same.
The fs cache key's equals implementation is below:
static boolean
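A hedged sketch of the consequence of that cache (the NameNode URI below is a placeholder): two get() calls with the same scheme and authority hand back the same object, so closing one closes it for every caller sharing that key.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsCacheExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem a = FileSystem.get(URI.create("hdfs://nn-host:9000/"), conf);
    FileSystem b = FileSystem.get(URI.create("hdfs://nn-host:9000/some/other/path"), conf);
    // Same scheme + authority -> same cached instance.
    System.out.println(a == b);   // prints true
    a.close();                    // also closes b, since they are the same object
  }
}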
hi,
Here is some useful info:
A small file is one which is significantly smaller than the HDFS block size
(default 64MB). If you’re storing small files, then you probably have lots of
them (otherwise you wouldn’t turn to Hadoop), and the problem is that HDFS
can’t handle lots of files.
Every
Java 6 and Cygwin (Maven + TortoiseSVN are for building Hadoop) should be enough
for running standalone mode on Windows.
Regards,
Uma
- Original Message -
From: Mark Kerzner markkerz...@gmail.com
Date: Saturday, September 24, 2011 4:58 am
Subject: How to run Hadoop in standalone mode in
, 2011 at 10:03 AM, He Chen airb...@gmail.com wrote:
Hi
It is interesting that a guy from Huawei is also working on
Hadoop project.
:)
Chen
On Sun, Sep 25, 2011 at 11:29 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
Hi,
You can find the Code
Hello Abdelrahman,
Are you able to ping from one machine to the other with the configured hostname?
Configure both hostnames properly in the /etc/hosts file and try.
Regards,
Uma
- Original Message -
From: Abdelrahman Kamel abdouka...@gmail.com
Date: Monday, September 26, 2011 8:47 pm
Hello Joris,
It looks like you have configured mapred.map.child.java.opts to -Xmx512M;
to spawn a child process, that much memory is required.
Can you check what other processes are occupying memory on your machine? Because your
current task is not getting enough memory to initialize. Or try to reduce
Hi,
You can find the code in DFSOutputStream.java.
Here there is one DataStreamer thread. This thread picks the
packets from the data queue and writes them onto the sockets.
Before this, when actually writing the chunks, based on the block size
parameter passed from the client, it will
,
Arun
On Wed, Sep 21, 2011 at 2:03 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
Hello Arun,
If you want to apply MAPREDUCE-1253 on the 21 version,
applying the patch directly using commands may not work because of
codebase changes.
So, take the patch and apply
from storing metadata info, is there anything more the NN/JT
machines are doing?
So can I say I can survive with a poor NN if I am not dealing with
lots of
files in HDFS?
On Thu, Sep 22, 2011 at 11:08 AM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
By just changing the configs
Hello Arun,
On which code base are you trying to apply the patch?
The code should match in order to apply the patch.
Regards,
Uma
- Original Message -
From: ArunKumar arunk...@gmail.com
Date: Wednesday, September 21, 2011 11:33 am
Subject: Making Mumak work with capacity scheduler
To:
is not able to start after crashing without
enough HD space.
Wei
-Original Message-
From: Uma Maheswara Rao G 72686 [mailto:mahesw...@huawei.com]
Sent: Tuesday, September 20, 2011 9:30 PM
To: common-user@hadoop.apache.org
Subject: Re: RE: java.io.IOException: Incorrect data format
-
From: ArunKumar arunk...@gmail.com
Date: Wednesday, September 21, 2011 12:01 pm
Subject: Re: Making Mumak work with capacity scheduler
To: hadoop-u...@lucene.apache.org
Hi Uma!
I am applying the patch to mumak in the hadoop-0.21 version.
Arun
On Wed, Sep 21, 2011 at 11:55 AM, Uma Maheswara Rao
-
0.21 onwards.
Can you describe in detail what "You may need to merge them logically (
back-port
them)" means?
I don't get it.
Arun
On Wed, Sep 21, 2011 at 12:07 PM, Uma Maheswara Rao G [via Lucene]
ml-node+s472066n3354668...@n3.nabble.com wrote:
It looks like those patches are based on the 0.22 version. So
Hi,
You need not copy the files to the NameNode.
Hadoop provides client code as well to copy the files.
To copy the files from another node (non-DFS), you need to put the
hadoop**.jars into the classpath and use the below code snippet.
FileSystem fs = new DistributedFileSystem();
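To complete that snippet, a hedged sketch of a remote copy client (paths and the NameNode URI are placeholders; it assumes the cluster's config files, or at least fs.default.name, are visible on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RemoteCopyExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // If core-site.xml is not on the classpath, point at the NameNode explicitly:
    // conf.set("fs.default.name", "hdfs://namenode-host:9000");
    FileSystem fs = FileSystem.get(conf);
    // Copy a file from the local disk of this (non-DFS) node into HDFS.
    fs.copyFromLocalFile(new Path("/local/data/input.txt"), new Path("/user/foo/input.txt"));
    fs.close();
  }
}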
For a better understanding of the flows, I would recommend you go through the
docs below once:
http://hadoop.apache.org/common/docs/r0.16.4/hdfs_design.html#The+File+System+Namespace
Regards,
Uma
- Original Message -
From: Uma Maheswara Rao G 72686 mahesw...@huawei.com
Date: Wednesday, September
,
Praveenesh
On Wed, Sep 21, 2011 at 2:37 PM, Uma Maheswara Rao G 72686
mahesw...@huawei.com wrote:
For a better understanding of the flows, I would recommend you go
through the
docs below once:
http://hadoop.apache.org/common/docs/r0.16.4/hdfs_design.html#The+File+System+Namespace
Regards,
Uma