Hi,
On the download page of Hadoop (e.g.
http://apache.arvixe.com/hadoop/common/stable1/ ), I see lots of tars.
What's the difference between hadoop-1.2.1-bin.tar.gz
http://apache.arvixe.com/hadoop/common/stable1/hadoop-1.2.1-bin.tar.gz
and hadoop-1.2.1.tar.gz?
Thanks, Ozawa
Regards,
Chris MacKenzie
http://www.chrismackenziephotography.co.uk/
Expert in all aspects of photography
telephone: 0131 332 6967
email: stu...@chrismackenziephotography.co.uk
corporate: www.chrismackenziephotography.co.uk
Hi All,
I have just realised that my installation of hadoop-2.4.1 is pulling in
all the *-default.xml files.
I have three copies of each in different directories; obviously at least
one of those is on the classpath.
Anyway with all the effort to set up a site, it seems strange to me that I
Hi Guys,
Can anyone please provide some suggestions or solutions for this?
Thanks,
Smita
From: Smita Deshpande
Sent: Thursday, July 17, 2014 11:41 AM
To: 'user@hadoop.apache.org'
Subject: Progress indicator should not be negative.
Hi,
I am running the distributed shell example of YARN
Do you want to implement RAID on top of HDFS, or use HDFS on top of RAID? I
am not sure I understand either use case. HDFS handles replication and
error detection for you. Wouldn't fine-tuning the cluster be the
easier solution?
Bertrand Dechoux
On Mon, Jul 21, 2014 at 7:25 AM, Zesheng Wu
Smita,
Any chance you can print the values of containers waiting and total before
the exception?
From
We want to implement RAID on top of HDFS, something like what Facebook
implemented, as described in:
https://code.facebook.com/posts/536638663113101/saving-capacity-with-hdfs-raid/
2014-07-21 17:19 GMT+08:00 Bertrand Dechoux decho...@gmail.com:
You want to implement a RAID on top of HDFS or use
So you know that a block is corrupted thanks to an external process, which
in this case is checking the parity blocks. If a block is corrupted but
hasn't been detected by HDFS, you could delete the block from the local
filesystem (it's only a file), and then HDFS will replicate the good
remaining replica of this block.
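Concretely, a rough sketch of the commands involved (2.x syntax; the
on-disk path is illustrative and depends on dfs.datanode.data.dir):

  hdfs fsck /path/to/file -files -blocks -locations  # which datanodes hold each block
  hdfs fsck / -list-corruptfileblocks                # blocks HDFS itself has flagged
  # On the datanode, a replica is just a local file, e.g.
  # .../current/BP-<id>/current/finalized/subdir0/subdir0/blk_1073741825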
Thanks Bertrand; my reply comments are inline below.
So you know that a block is corrupted thanks to an external process which
in this case is checking the parity blocks. If a block is corrupted but
hasn't been detected by HDFS, you could delete the block from the local
filesystem (it's only a
If a block is corrupted but hasn't been detected by HDFS, you could delete
the block from the local filesystem (it's only a file), and then HDFS will
replicate the good remaining replica of this block.
We only have one replica of each block, so if a block is corrupted, HDFS
cannot re-replicate it.
The '-bin' file does not have the source code (bin is for binaries), while
the other does. You can see the major difference in the 'src' folder
under the top-level directory after unzipping/untarring.
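For example, a quick check without extracting (assuming both tarballs
unpack to a hadoop-1.2.1/ top-level directory):

  tar -tzf hadoop-1.2.1.tar.gz | grep '^hadoop-1.2.1/src/' | head
  tar -tzf hadoop-1.2.1-bin.tar.gz | grep '^hadoop-1.2.1/src/'  # expect no matches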
Regards,
Shahab
On Mon, Jul 21, 2014 at 3:54 AM, Vimal Jain vkj...@gmail.com wrote:
I wrote my answer thinking about the XOR implementation. With Reed-Solomon
and single replication, the cases that need to be considered are indeed
fewer and simpler.
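(For context: XOR keeps a single parity block per stripe, p = b1 xor b2
xor ... xor bn, so any one lost block can be rebuilt as bi = p xor all the
surviving blocks, whereas Reed-Solomon keeps k parity blocks per stripe
and tolerates up to k simultaneous losses.)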
It seems I was wrong about my last statement though. If the machine hosting
a single-replicated block is lost, it isn't likely that
I recommend against deleting or moving *-default.xml, because these files
may be supplying reasonable default values for configuration properties
that you haven't set in *-site.xml. We also put defaults into the code
itself in case a configuration property is found to be completely missing,
but
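As an illustrative check of the effective value once defaults and
*-site.xml overrides are merged (2.x command):

  hdfs getconf -confKey dfs.replication  # falls back to hdfs-default.xml's 3 if unset
  hdfs getconf -confKey dfs.blocksize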
Aren't the *-default.xml files supposed to be inside the jars rather than
loose files?
Cheers
Chris Mawata
On Jul 21, 2014 12:59 PM, Chris Nauroth cnaur...@hortonworks.com wrote:
I recommend against deleting or moving *-default.xml, because these files
may be supplying reasonable default values
That's a good point. I'm not sure how bare *-default.xml files would be
showing up on a deployment outside the jars.
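One quick way to verify where they are packaged (jar paths assume the
2.4.1 binary tarball layout):

  jar tf share/hadoop/common/hadoop-common-2.4.1.jar | grep default.xml
  # core-default.xml
  jar tf share/hadoop/hdfs/hadoop-hdfs-2.4.1.jar | grep default.xml
  # hdfs-default.xml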
Chris Nauroth
Hortonworks
http://hortonworks.com/
On Mon, Jul 21, 2014 at 11:12 AM, Chris Mawata chris.maw...@gmail.com
wrote:
Aren't the *-default.xml files supposed to be
What is the rule for determining how many nodes should be in your initial
cluster?
B.
I suspect someone wanted to read through them to know the defaults.
Chris
On Jul 21, 2014 2:16 PM, Chris Nauroth cnaur...@hortonworks.com wrote:
That's a good point. I'm not sure how bare *-default.xml files would be
showing up on a deployment outside the jars.
Chris Nauroth
Hortonworks
And there is actually quite a lot of information about it.
https://github.com/facebook/hadoop-20/blob/master/src/contrib/raid/src/java/org/apache/hadoop/hdfs/DistributedRaidFileSystem.java
http://wiki.apache.org/hadoop/HDFS-RAID
I read a bit of documentation on YARN memory tuning and found that
it is suggested to set mapreduce.map.java.opts = 0.8 *
mapreduce.map.memory.mb.
I am wondering: why 0.8, and not 0.9 or higher?
--
Chen Song
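A common explanation: mapreduce.map.memory.mb is the size of the whole
YARN container, while mapreduce.map.java.opts caps only the JVM heap. The
~20% gap leaves headroom for non-heap JVM memory (thread stacks,
permgen/metaspace, native buffers), so the JVM doesn't push the container
past the limit YARN enforces. Illustrative values:

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>        <!-- container limit enforced by YARN -->
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1638m</value>   <!-- roughly 0.8 * 2048; heap only -->
  </property>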
Hi,
For some reason, all PID files are missing in my cluster, so I had to
manually kill all Java processes on all machines. Then I restarted
HDFS, but it took a long time to apply the changes in the edit log file,
so my question is: how can I reduce the delay? My understanding is as
follows; could
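One standard way to reduce edit-log replay on restart is to checkpoint
first, so the NameNode starts from a fresh fsimage; a sketch with 2.x
commands:

  hdfs dfsadmin -safemode enter
  hdfs dfsadmin -saveNamespace   # folds the pending edits into a new fsimage
  hdfs dfsadmin -safemode leave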
Hi, mailing list:
I use sudo -u hdfs hadoop fs -rm -r -skipTrash
/user/hive/warehouse/adx.db/dsp_request/2014-03*/* in CDH4.4, but I find it
does not work in CDH5. Why?
# sudo -u hdfs hadoop fs -rm -r -skipTrash
/user/hive/warehouse/dsp.db/dsp_request/2014-01*/*
rm:
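One thing worth ruling out (an assumption, since the error message above
is cut off): the unquoted glob is expanded by the local shell before
hadoop ever sees it, and that expansion can differ between environments.
Quoting the pattern makes HDFS do the globbing instead:

  sudo -u hdfs hadoop fs -rm -r -skipTrash '/user/hive/warehouse/adx.db/dsp_request/2014-03*/*'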
Thanks Bertrand, I've checked this information earlier. There's only the XOR
implementation, and missing blocks are reconstructed by creating new files.
2014-07-22 3:47 GMT+08:00 Bertrand Dechoux decho...@gmail.com:
And there is actually quite a lot of information about it.
Hmm, it seems that the Facebook branch
https://github.com/facebook/hadoop-20/
https://github.com/facebook/hadoop-20/blob/master/src/contrib/raid/src/java/org/apache/hadoop/hdfs/DistributedRaidFileSystem.java
has implemented Reed-Solomon codes; what I was checking earlier were the
following two
Can you paste the fs -ls output for these?
On Jul 21, 2014, at 5:51 PM, ch huang justlo...@gmail.com wrote:
Hi, mailing list:
I use sudo -u hdfs hadoop fs -rm -r -skipTrash
/user/hive/warehouse/adx.db/dsp_request/2014-03*/* in CDH4.4, but I find it
does not work in CDH5. Why?
# sudo -u
Regards,
Yi Liu