I followed Michael Noll's tutorial for building the hadoop-0.20-append jars:
http://www.michael-noll.com/blog/2011/04/14/building-an-hadoop-0-20-x-version-for-hbase-0-90-2/
After following the article, we get five jar files, which we need to use to
replace the corresponding jars in the hadoop-0.20.2 installation.
There is no jar file
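For reference, the swap itself is just copying the rebuilt jars over the stock
ones. A minimal sketch, assuming the build output landed in ./build and Hadoop
is installed under $HADOOP_HOME (both paths are assumptions about your layout):

    # Back up the stock 0.20.2 jars, then drop in the rebuilt append versions.
    # Both paths here are assumptions.
    mkdir -p /tmp/hadoop-jar-backup
    cp $HADOOP_HOME/hadoop-*.jar /tmp/hadoop-jar-backup/
    cp build/hadoop-*.jar $HADOOP_HOME/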
Hi,
I am planning a small Hadoop cluster, but looking ahead: are there cheap
options for backing up the data? And if I later want to upgrade the
hardware, do I make a complete copy, or do I upgrade one node at a time?
Thank you,
Mark
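For the backup part of the question, the usual tool for bulk copies between
clusters is distcp. A minimal sketch, assuming a second cluster whose namenode
is reachable at backup-nn (hostnames, ports, and paths are all made up):

    # Copy /data from the live cluster to a backup cluster in parallel.
    hadoop distcp hdfs://live-nn:8020/data hdfs://backup-nn:8020/backup/data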
http://www.opscode.com/chef/
http://trac.mcs.anl.gov/projects/bcfg2
http://cfengine.com/
http://www.puppetlabs.com/
I use Chef personally, but the others are just as good, and all are tuned
towards different philosophies in configuration management.
Hi Drew,
I don't know if this is actually the issue or not, but the output below makes
me think you might be passing Cygwin paths into the java.exe launcher. If
that's the case, it won't work. java.exe is pure Windows and doesn't know
about '/cygdrive/c', for example (it also expects the
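If that is what's going on, converting the path with cygpath before handing it
to java.exe usually fixes it. A sketch (the jar path is an assumption):

    # cygpath -w turns a Cygwin path such as /cygdrive/c/... into the
    # C:\... form that java.exe understands.
    java -jar "$(cygpath -w /cygdrive/c/hadoop/hadoop-0.20.2-examples.jar)"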
Puppetize.
From: gokul gokraz...@gmail.com
To: common-user@hadoop.apache.org
Sent: Wed, 22 June, 2011 8:38:13 AM
Subject: Automatic Configuration of Hadoop Clusters
Dear all,
For benchmarking purposes we would like to adjust configurations as well as
flexibly
Looks like you missed the '#' at the beginning of the line.
Feel free to set HADOOP_LOG_DIR in that script or elsewhere
On 6/22/11 1:02 PM, Jack Craig jcr...@carrieriq.com wrote:
Hi Folks,
In the hadoop-env.sh, we find, ...
# Where log files are stored. $HADOOP_HOME/logs by default.
# export
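For reference, a minimal sketch of the change (the path is an assumption;
pick any filesystem with enough room):

    # In conf/hadoop-env.sh: uncomment the export and point it at a
    # dedicated location, for example:
    export HADOOP_LOG_DIR=/var/log/hadoop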
Jack,
I believe the location can definitely be set to any desired path.
Could you tell us the issues you face when you change it?
P.S. The env var is used to set the config property hadoop.log.dir
internally, so as long as you use the regular scripts (bin/ or init.d/
ones) to start daemons, it
Thanks to both respondents.
Note I've not tried this redirection, as I have only production grids available.
Our grids are growing, and with them, log volume.
Until now the log location has been in the same filesystem as the grid data,
so running out of space due to log bloat is a growing problem.
Hi,
Can I limit the log file retention period?
I want to keep files for only the last 15 days.
Regards,
Jagaran
From: Jack Craig jcr...@carrieriq.com
To: common-user@hadoop.apache.org common-user@hadoop.apache.org
Sent: Wed, 22 June, 2011 2:00:23 PM
Subject: Re: Any
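One simple way to enforce a 15-day window is a cron-driven cleanup. A sketch,
assuming HADOOP_LOG_DIR points at the log directory (tuning log4j's rolling
appenders in conf/log4j.properties is the other route):

    # Delete Hadoop log files older than 15 days; run daily from cron.
    find "$HADOOP_LOG_DIR" -type f -mtime +15 -delete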
I had the same issue. I installed the previous stable version of Hadoop
(0.20.2), and it worked fine. I hope this helps.
-Sal
Can anyone help me?
叶达峰 kobe082...@qq.com wrote:
Hi,
I am new to Hadoop. Today, I spent the whole night trying to set up a
development environment for Hadoop. I encountered several problems. The first
is that Eclipse can't load the plugin; I changed to another version, and this
problem
I've run into similar problems in my Hive jobs and will look at the
'mapred.child.ulimit' option. One thing we've found is that when loading
data with INSERT OVERWRITE into our Hive tables, we've needed to include a
'CLUSTER BY' or 'DISTRIBUTE BY' clause. Generally that's
fixed our memory
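A rough sketch of both knobs together, run through the Hive CLI (the table
and column names are made up, and the ulimit value is in kilobytes):

    # Raise the per-task virtual memory ulimit and spread the insert
    # across reducers with DISTRIBUTE BY -- all names are hypothetical.
    hive -e "
    SET mapred.child.ulimit=2097152;
    INSERT OVERWRITE TABLE target_tbl
    SELECT * FROM source_tbl
    DISTRIBUTE BY user_id;"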
Hi guys,
I suspected that the problem was due to overhead introduced by the
filesystem, so I tried to set the dfs.replication.max property to
different values.
First, I tried with 2, and I got a message saying that I was requesting a
value of 3, which was bigger than the limit. So I couldn't do
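The cap alone isn't enough: writes still request the dfs.replication default
of 3, so the requested value has to come down along with the maximum. A sketch
(the paths are assumptions):

    # Request 2 replicas for a single upload...
    hadoop fs -D dfs.replication=2 -put localfile /user/alberto/data
    # ...or lower replication on files that already exist:
    hadoop fs -setrep -w 2 /user/alberto/data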
Hi,
I am using Eclipse Helios Service Release 2.
I encountered a similar problem (map/reduce perspective failed to load) when
upgrading eclipse plugin from 0.20.2 to 0.20.3-append version.
I compared the source code of the Eclipse plugin and found only a few
differences. I tried to revert the
Do you use hadoop 0.20.203.0?
I also have a problem with this plugin.
Yaozhen Pan itzhak@gmail.com wrote:
Hi,
I am using Eclipse Helios Service Release 2.
I encountered a similar problem (map/reduce perspective failed to load) when
upgrading eclipse plugin from 0.20.2 to 0.20.3-append version.
Hi,
Our hadoop version was built on 0.20-append with a few patches.
However, I didn't see big differences in eclipse-plugin.
Yaozhen
On Thu, Jun 23, 2011 at 11:29 AM, 叶达峰 (Jack Ye) kobe082...@qq.com wrote:
Do you use hadoop 0.20.203.0?
I also have a problem with this plugin.
Yaozhen Pan
I used 0.20.203.0 and can't access the DFS Locations.
The following is the error:
failure to login
internal error: Map/Reduce location status updater
org/codehaus/jackson/map/JsonMappingException
Yaozhen Pan itzhak@gmail.com wrote:
Hi,
Our hadoop version was built on 0.20-append with a few
Alberto,
I can assure you that fiddling with the default replication factor isn't
the solution here. Most of us running clusters of 3+ nodes still use the
default replication factor of 3, and it hardly introduces a performance lag.
As long as your Hadoop cluster network is not shared with other network
applications, you