Hi,
I am in the middle of setting up a Hadoop 2 cluster. I am using the Hadoop
2.1-beta tarball.
My cluster has 1 master node running the HDFS namenode, the resourcemanager
and the job history server. Next to that I have 3 nodes acting as
datanodes and nodemanagers.
In order to test if everythi
ces as seen by the
> ResourceManager. Do you see "Active Nodes" on the RM web UI first page? If
> not, you'll have to check the NodeManager logs to see if they crashed for
> some reason.
>
> Thanks,
> +Vinod Kumar Vavilapalli
> Hortonworks Inc.
> http://hortonwork
Hi,
I am looking for the most obvious way to verify the downloads of
Hadoop tarballs. I saw that you provide a .mds file containing MD5,
SHA1 and other checksums, which is produced by gpg --print-mds. I
fail to find a reliable way to verify those. I came
up with this:
md5sum --
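The command above is cut off in the archive; independent of its exact flags, a minimal sketch of the digest comparison it was aiming for, using a stand-in file instead of a real tarball (with a real download, the expected value would come from the matching .mds file):

```shell
# Create a stand-in "download" and compare its computed MD5 against the
# expected digest. "demo.tar.gz" and the expected value are illustrative.
printf 'hello\n' > demo.tar.gz
computed=$(md5sum demo.tar.gz | awk '{print $1}')
expected=b1946ac92492d2347c6235b4d2611184   # MD5 of the stand-in file
[ "$computed" = "$expected" ] && echo "OK" || echo "MISMATCH"
rm -f demo.tar.gz
```

The same pattern works with sha1sum against the SHA1 line of the .mds file.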
something is wrong with your name-resolution. If you look at the error
message, it says you are trying to connect to 127.0.0.1 instead of the
remote host.
-André
On Tue, Sep 3, 2013 at 12:05 PM, Visioner Sadak
wrote:
> Hello Hadoopers,
>
> I am trying to configure httpf
This is usually a String.format() problem: the developer was
using an English locale and was not aware that
String.format is locale-dependent.
Try this:
export LANG=en_US.UTF-8
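You can see the same locale dependence from the shell: printf's %f honors the locale's decimal separator, just as Java's String.format uses the default locale (the German locale line is left commented out, since that locale may not be installed):

```shell
# In the C locale the decimal separator is a dot:
LC_ALL=C printf '%.2f\n' 3.5            # prints 3.50
# With e.g. a German locale installed (assumption), it becomes a comma:
# LC_ALL=de_DE.UTF-8 printf '%.2f\n' 3.5   # would print 3,50
```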
- André
On Tue, Sep 3, 2013 at 3:20 PM, Felipe Gutierrez
wrote:
> Hi,
>
> I am trying to run Contr
.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
> at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.sec
Hi,
while running some local tests, I had trouble with JUnit, and it turned
out that Hadoop itself (1.2.1 and 2.1.0-beta) ships with a junit jar.
I am wondering if that is a bug or a feature. If it is a feature,
upgrading to something newer than 4.5 would be nice for the stable
version of Hadoop.
It looks like an overflow somewhere: 9223372036854775807 ==
0x7fffffffffffffffL == java.lang.Long.MAX_VALUE.
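The identity is easy to check from a shell:

```shell
# 9223372036854775807 is 2^63 - 1, i.e. java.lang.Long.MAX_VALUE:
printf '%x\n' 9223372036854775807   # prints 7fffffffffffffff
```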
- André
On Fri, Sep 6, 2013 at 5:42 PM, Logan Hardy wrote:
> I'm seeing thousands of the following messages per day on my Datanodes. In
> every single message the NameNode recorded length
Also keep in mind that Java 6 no longer gets "public" updates from
Oracle: http://www.oracle.com/technetwork/java/eol-135779.html
- André
On Wed, Oct 9, 2013 at 11:48 PM, SF Hadoop wrote:
> I hadn't. Thank you!!! Very helpful.
>
> Andy
>
>
> On Wed, Oct 9, 2013 at 2:25 PM, Patai Sangbutsaraku
Have a look at our Vagrant Hadoop cluster, which does just that (using
Ubuntu, though):
https://github.com/Cascading/vagrant-cascading-hadoop-cluster
-- André
On Sat, Oct 12, 2013 at 12:33 AM, Raj Hadoop wrote:
> All,
>
> I have a CentOS VM image and want to replicate it four times on my Mac
> co
The best thing to do is to open a JIRA here:
https://issues.apache.org/jira/secure/Dashboard.jspa You might also
want to submit a patch, which is very easy.
- André
On Fri, Oct 18, 2013 at 11:28 AM, Siddharth Tiwari
wrote:
> The installation documentation for Hadoop yarn at this link
> http://ha
Now get a copy of the code, fix the mistake and attach the patch to the JIRA.
- André
On Fri, Oct 18, 2013 at 11:49 AM, Siddharth Tiwari
wrote:
> Opened a Jira https://issues.apache.org/jira/browse/YARN-1319
>
>
>
> **
> Cheers !!!
> Siddharth Tiwari
> Have a refreshing d
I reported the 32bit/64bit problem a few weeks ago. There hasn't been
much activity around it though:
https://issues.apache.org/jira/browse/HADOOP-9911
- André
On Mon, Nov 4, 2013 at 2:20 PM, Salman Toor wrote:
> Hi,
>
> OK, so 2.x is not a new version, it's another branch. Good to know! Actually
>
>
>
> On Nov 4, 2013, at 2:55 PM, Andre Kelpe wrote:
>
> I reported the 32bit/64bit problem a few weeks ago. There hasn't been
> much activity around it though:
> https://issues.apache.org/jira/browse/HADOOP-9911
>
> - André
>
> On Mon, Nov 4, 2013 at 2:20 PM,
Hi Daniel,
first of all, before posting to a mailing list, take a deep breath and
let your frustrations out. Then write the email. Using words like
"crappy", "toxicware" and "nightmare" is not going to help you get
useful responses.
While I agree that the docs can be confusing, we should try to
You have to start Eclipse from an environment that has the correct umask
set; otherwise it will not inherit the settings.
Open a terminal, run umask 022 && eclipse, and re-run the tests.
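A quick way to convince yourself that processes inherit the umask of the shell that starts them (stat -c is GNU coreutils, an assumption):

```shell
# Files created under umask 022 come out world-readable (mode 644):
umask 022
d=$(mktemp -d)
touch "$d/demo"
stat -c '%a' "$d/demo"   # prints 644
rm -rf "$d"
```

Eclipse, like any child process, picks up whatever umask was in effect when it was launched.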
- André
On Wed, Dec 18, 2013 at 12:35 AM, Karim Awara wrote:
>
> Yes. Nothing yet. I should mention I
You could also give cascading lingual a try:
http://www.cascading.org/lingual/ http://docs.cascading.org/lingual/1.0/
We have a connector for Oracle (
https://github.com/Cascading/cascading-jdbc#oracle), so you could read the
data from Oracle, do the processing on a Hadoop cluster, and write it back
This might get you further: https://github.com/mraad/Shapefile
- André
On Tue, Feb 25, 2014 at 11:29 AM, Sugandha Naolekar
wrote:
> Hello,
>
> I have a huge shapefile which has some 500 polygon geometries. Is there a
> way to store this shapefile in such a format in HDFS that each block will
>
May I recommend using Cascading instead of using MR directly? Cascading
supports Hadoop 1.x and Hadoop 2.x based distros and you don't have to
wrestle with these things all the time: http://www.cascading.org/ It's OSS,
ASL v2 licensed and all the good stuff.
- André
On Sun, May 11, 2014 at 1:52
We have a multi-vm or single-vm setup with apache hadoop, if you want to
give that a spin:
https://github.com/Cascading/vagrant-cascading-hadoop-cluster
- André
On Sun, Jul 6, 2014 at 9:05 AM, MrAsanjar . wrote:
> For my hadoop development and testing I use LXC (linux container) instead
> of V
Why don't you just use the Apache tarball? We even have that automated, if
Vagrant is your thing:
https://github.com/Cascading/vagrant-cascading-hadoop-cluster
- André
On Tue, Aug 12, 2014 at 10:12 PM, mani kandan wrote:
> Which distribution are you people using? Cloudera vs Hortonworks vs
> B
Could this be caused by the fact that hadoop no longer ships with 64bit
libs? https://issues.apache.org/jira/browse/HADOOP-9911
- André
On Tue, Aug 19, 2014 at 5:40 PM, arthur.hk.c...@gmail.com <
arthur.hk.c...@gmail.com> wrote:
> Hi,
>
> I am trying Snappy in Hadoop 2.4.1, here are my steps:
>
Hi,
I am trying to use the DistributedCache and I am running into problems in a
test when using the LocalFileSystem. FSDownload complains about
permissions like so (this is Hadoop 2.4.1 with JDK 6 on Linux):
Caused by: java.io.IOException: Resource file:/path/to/some/file is not
publicly accessa
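For context: the check behind that message requires the resource file to be world-readable and every parent directory to be world-executable. A sketch of that permission rule in plain shell (not Hadoop code; stat -c is GNU coreutils):

```shell
# A "public" local resource needs o+r on the file and o+x on all ancestors:
d=$(mktemp -d)
mkdir "$d/cache"
touch "$d/cache/resource"
chmod 711 "$d" "$d/cache"        # ancestors world-executable
chmod 644 "$d/cache/resource"    # file world-readable
stat -c '%a' "$d/cache/resource" # prints 644
rm -rf "$d"
```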
On Wed, Aug 20, 2014 at 11:54 PM, Ken Krugler
wrote:
>
>
> PS - And why, oh why is "target" hard-coded all over the place in the
> mini-cluster code as the directory (from CWD) for logs, data blocks, etc?
>
>
https://issues.apache.org/jira/browse/YARN-1442
- André
--
André Kelpe
an...@concurr
VirtualBox is known for causing instabilities in the host kernel (or at
least, it used to). You might be better off asking for support there:
https://www.virtualbox.org/wiki/Bugtracker
- André
On Wed, Sep 17, 2014 at 4:25 AM, Li Li wrote:
> hi all,
> I know it's not a problem related to had
Did you format your namenode before starting HDFS?
- André
On Sun, Nov 23, 2014 at 7:24 PM, Tim Dunphy wrote:
> Hey all,
>
> OK thanks for your advice on setting up a hadoop test environment to get
> started in learning how to use hadoop! I'm very excited to be able to start
> to take this plu
Try Cascading multitool: http://docs.cascading.org/multitool/2.6/
- André
On Fri, Dec 12, 2014 at 10:30 AM, unmesha sreeveni
wrote:
> I am trying to divide my HDFS file into 2 parts/files
> 80% and 20% for classification algorithm(80% for modelling and 20% for
> prediction)
> Please provide sug
Try our vagrant setup:
https://github.com/Cascading/vagrant-cascading-hadoop-cluster
- André
On Sat, Jan 17, 2015 at 10:07 PM, Krish Donald wrote:
> Hi,
>
> I am looking for working VM of Apache Hadoop.
> Not looking for cloudera or Horton works VMs.
> If anybody has it and if they can share th
See here:
https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/ClusterSetup.html
- André
On Wed, Feb 4, 2015 at 2:47 PM, Fernando Carvalho <
fernandocarvalhocoe...@gmail.com> wrote:
> Dear Hadoop users,
>
> I would like to know if some have an simple example of how to setup Ha
Hadoop has moved to git: https://wiki.apache.org/hadoop/GitAndHadoop
-- André
On Fri, Feb 6, 2015 at 9:13 AM, Azuryy Yu wrote:
> Hi,
>
> http://svn.apache.org/viewcvs.cgi/hadoop/common/trunk/
>
> I cannot open this URL. does that anybody can access it?
>
> another, I cannot "svn up" the new rel
Please just use a build tool like Maven or Gradle for your build.
There is no way to manage your classpath like this and stay sane.
Nobody does this, and you shouldn't either.
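For example, a single Maven dependency pulls in the Hadoop client API and its whole transitive classpath (the version number is illustrative; match it to your cluster):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.4.1</version>
</dependency>
```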
- André
On Tue, Apr 14, 2015 at 7:34 AM, Anand Murali wrote:
> Dear Naik:
>
> I have already set path both for Hadoop and
Are you serious?
"This release is *not* yet ready for production use. Critical issues
are being ironed out via testing and downstream adoption. Production
users
should wait for a *2.7.1/2.7.2* release."
- André
On Thu, Apr 23, 2015 at 3:13 PM, Ted Yu wrote:
> Can you use 2.7.0 ?
>
>
> ht
yarn logs -applicationId application_1434970319691_0135 should do the
trick. Note that you have to enable log aggregation in YARN to make that
work.
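Log aggregation is switched on in yarn-site.xml via the yarn.log-aggregation-enable property:

```xml
<!-- yarn-site.xml: enable log aggregation so `yarn logs` can fetch logs -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
```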
- André
On Thu, Oct 8, 2015 at 9:39 AM, Dhanashri Desai
wrote:
> Everytime when I run a job on hadoop cluster, I have to see failed job log
> fil