RE: Running map reduce programmatically is unusually slow

2013-11-04 Thread Chandra Mohan, Ananda Vel Murugan
Hi, this morning I noticed one more weird thing: when I run the MapReduce job using this utility, it does not show up in the JobTracker web UI. Does anyone have a clue? Please help. Thanks. Regards, Anand.C

Re: compareTo() in WritableComparable

2013-11-04 Thread unmesha sreeveni
i.e. key -> A*Atrans: after multiplication the result will be a 2D array declared as double (a matrix); let's say the result is Matrix "Ekey" (double[][] Ekey). value -> Atrans*D: after multiplication the result will be Matrix "Eval" (double[][] Eval). After that I ne

CFP NoSQL FOSDEM - Hadoop Community

2013-11-04 Thread laura.czajkow...@gmail.com
Hi all, We're pleased to announce the call for participation for the NoSQL devroom, returning after a great last year. NoSQL is an encompassing term that covers a multitude of different and interesting database solutions. As the interest in NoSQL continues to grow, we are looking for talks on an

Re: Hadoop 2.2.0: hdfs configuration

2013-11-04 Thread Ping Luo
I provided the config directory explicitly and now it works: hdfs --config etc/hadoop namenode -format From: Ping Luo To: "user@hadoop.apache.org" Sent: Monday, November 4, 2013 1:55 PM Subject: Hadoop 2.2.0: hdfs configuration

Hadoop 2.2.0: hdfs configuration

2013-11-04 Thread Ping Luo
I am trying to set up Hadoop 2.2.0 in cluster mode. I modified the hdfs-site.xml file in $HADOOP_HOME/etc/hadoop as below:
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hdfs/namenode/</value>
  </property>
  <property>
    <name>dfs.namenode.hosts</name>
    <value>localhost</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hdfs/datanode/</value>
  </property>

List jobhistory into a txt file

2013-11-04 Thread xeon
Hi, I want to put all the job details into a txt file. I am using YARN, and when I launch the "job -history" command, it says that the file doesn't exist, although the data is available in the web interface. But this is a little different from what I really want, which is to list a

Re: best solution for data ingestion

2013-11-04 Thread Chris Mattmann
Hi guys, depending on the *type* of ingestion you are trying to do into HDFS, the combination of Apache OODT (http://oodt.apache.org/) and Apache Tika (http://tika.apache.org/) may do the trick. Cheers, Chris

Re: sub

2013-11-04 Thread Ted Yu
Please send email to: user-subscr...@hadoop.apache.org On Mon, Nov 4, 2013 at 8:03 AM, Yang Zhang wrote:

sub

2013-11-04 Thread Yang Zhang

Re: Error while running Hadoop Source Code

2013-11-04 Thread Basu,Indrashish
Hi all, any update on the post below? I came across an old post about the same issue. It explains the solution as: "The *nopipe* example needs more documentation. It assumes that it is run with the InputFormat from src/test/org/apache/*hadoop*/mapred/*pipes*/ *WordCountInputFormat*.

Re: C++ example for hadoop-2.2.0

2013-11-04 Thread Andre Kelpe
No, because I was trying to set up a cluster automatically with the tarballs from apache.org. - André On Mon, Nov 4, 2013 at 3:05 PM, Salman Toor wrote: > Hi, > > Did you try to compile from source? > > /Salman.

create har before hdfs

2013-11-04 Thread 黄骞
Hi everyone, I have to use HAR files to deal with the small files on hand. However, it takes too much time to store millions of small files into HDFS before I can archive them. Could I archive them first outside HDFS with some program and then put the resulting file into HDFS? Thanks! Qian

Re: C++ example for hadoop-2.2.0

2013-11-04 Thread Salman Toor
Hi, did you try to compile from source? /Salman. Salman Toor, PhD salman.t...@it.uu.se On Nov 4, 2013, at 2:55 PM, Andre Kelpe wrote: > I reported the 32-bit/64-bit problem a few weeks ago. There hasn't been > much activity around it though: > https://issues.apache.org/jira/browse/HADOOP-

Running map reduce programmatically is unusually slow

2013-11-04 Thread Chandra Mohan, Ananda Vel Murugan
Hi, I have written a small utility to run a MapReduce job programmatically. My aim is to run my MapReduce job without using the Hadoop shell script. I am planning to call this utility from another application. Following is the code which runs the MapReduce job. I have bundled this Java class into

Re: C++ example for hadoop-2.2.0

2013-11-04 Thread Andre Kelpe
I reported the 32bit/64bit problem a few weeks ago. There hasn't been much activity around it though: https://issues.apache.org/jira/browse/HADOOP-9911 - André On Mon, Nov 4, 2013 at 2:20 PM, Salman Toor wrote: > Hi, > > Ok so 2.x is not a new version its another branch. Good to know! Actually >

Re: C++ example for hadoop-2.2.0

2013-11-04 Thread Salman Toor
Hi, yes, I will give it a try and let everyone know. /Salman. Salman Toor, PhD salman.t...@it.uu.se On Nov 4, 2013, at 2:19 PM, REYANE OUKPEDJO wrote: > I suggest getting the source and compiling it on your machine. If your > machine is 64-bit you will get the 64-bit native libraries.

Re: C++ example for hadoop-2.2.0

2013-11-04 Thread Salman Toor
Hi, OK, so 2.x is not a new version, it's another branch. Good to know! Actually, 32-bit will be difficult, as the code I got already has some dependencies on 64-bit. Otherwise I will continue with the 1.x version. Can you suggest a version in the 1.x series which is stable and works in a cluster env

Re: C++ example for hadoop-2.2.0

2013-11-04 Thread REYANE OUKPEDJO
I suggest getting the source and compiling it on your machine. If your machine is 64-bit you will get the 64-bit native libraries. That will solve the problem you have. Please get hadoop-2.2.0-src.tar.gz. Thanks, Reyane OUKPEDJO On Monday, November 4, 2013 7:56 AM, Amr Shahin wrote:

Re: C++ example for hadoop-2.2.0

2013-11-04 Thread Amr Shahin
Well, the 2 series isn't exactly the "next version"; it's a continuation of the 0.23 branch. Also, the error message from gcc indicates that the library you're trying to link to isn't compatible, which made me suspicious. Check the documentation to see if Hadoop ships 64-bit libraries, or otherwise compile

Re: C++ example for hadoop-2.2.0

2013-11-04 Thread Salman Toor
Hi, thanks for your answer! But are you sure about it? Actually, Hadoop version 1.2 has both 32-bit and 64-bit libraries, so I believe the next version should have both... But I am not sure, just a random guess :-( Regards, Salman. Salman Toor, PhD salman.t...@it.uu.se On Nov 4, 2013, at

Re: C++ example for hadoop-2.2.0

2013-11-04 Thread Amr Shahin
I believe Hadoop isn't compatible with 64-bit architectures. Try installing the 32-bit libraries and compiling against them. This error (skipping incompatible /home/sztoor/hadoop-2.2.0/lib/native/libhadooppipes.a when searching -lhadooppipes) indicates so. On Mon, Nov 4, 2013 at 2:44 PM, Salman Toor wrote:

Re: compareTo() in WritableComparable

2013-11-04 Thread Mirko Kämpf
You just have to implement the WritableComparable interface. This is straightforward for any simple data type. Have a look at: public int compareTo(MyWritableComparable o) { int thisValue = this.value; int thatValue = o.value; return (thisValue < thatValue ? -1 : (thisValue == thatValue ? 0 : 1)); }
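The snippet above, completed as a minimal runnable sketch. To keep it self-contained, the class here implements plain `Comparable` rather than Hadoop's `WritableComparable`; in a real job it would also provide `write(DataOutput)` and `readFields(DataInput)`, which are omitted here. The `value` field and class name follow the thread; everything else is illustrative:

```java
// Sketch of the compareTo() pattern from the thread. A real Hadoop key
// would implement org.apache.hadoop.io.WritableComparable and serialize
// itself in write()/readFields(); those are left out so this compiles
// without Hadoop on the classpath.
class MyWritableComparable implements Comparable<MyWritableComparable> {
    private final int value;

    MyWritableComparable(int value) {
        this.value = value;
    }

    @Override
    public int compareTo(MyWritableComparable o) {
        int thisValue = this.value;
        int thatValue = o.value;
        // Negative, zero, or positive result defines the key sort order
        // used during the shuffle/sort phase.
        return (thisValue < thatValue ? -1 : (thisValue == thatValue ? 0 : 1));
    }

    public static void main(String[] args) {
        MyWritableComparable a = new MyWritableComparable(1);
        MyWritableComparable b = new MyWritableComparable(2);
        System.out.println(a.compareTo(b)); // -1: a sorts before b
        System.out.println(b.compareTo(a)); // 1
        System.out.println(a.compareTo(new MyWritableComparable(1))); // 0
    }
}
```

The same three-way result contract applies to any key type: only the body of `compareTo()` changes.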

Re: compareTo() in WritableComparable

2013-11-04 Thread unmesha sreeveni
I am trying to emit a matrix (2D double array) as key and another matrix as value, i.e. context.write(new myclass(matrixa), new myclass(matrixb)); so how do I compare them? On Mon, Nov 4, 2013 at 4:16 PM, unmesha sreeveni wrote: > Thanks Dieter De Witte > > On Mon, Nov 4, 2013 at 4:11 PM, Dieter

Re: compareTo() in WritableComparable

2013-11-04 Thread unmesha sreeveni
Thanks, Dieter De Witte. On Mon, Nov 4, 2013 at 4:11 PM, Dieter De Witte wrote: > WritableComparable is an interface. A key in MapReduce needs to > implement it, since the keys need to be compared for sorting and grouping. So it has > nothing to do with key-value pairs as such. > > Regards, Dieter

Re: C++ example for hadoop-2.2.0

2013-11-04 Thread Salman Toor
Hi, can someone give a pointer? Thanks in advance. Regards, Salman. Salman Toor, PhD salman.t...@it.uu.se On Nov 3, 2013, at 11:31 PM, Salman Toor wrote: > Hi, > > I am quite new to the Hadoop world; previously I was running the hadoop-1.2.0 > stable version on my small cluster and encou

Re: compareTo() in WritableComparable

2013-11-04 Thread Dieter De Witte
WritableComparable is an interface. A key in MapReduce needs to implement it, since the keys need to be compared for sorting and grouping. So it has nothing to do with key-value pairs as such. Regards, Dieter 2013/11/4 unmesha sreeveni > > What is compareTo() in WritableComparable? > Is it comparing a

compareTo() in WritableComparable

2013-11-04 Thread unmesha sreeveni
What is compareTo() in WritableComparable? Does it compare a key with a value, or a key with a key and a value with a value? -- *Thanks & Regards* Unmesha Sreeveni U.B *Junior Developer* *Amrita Center For Cyber Security* *Amritapuri. www.amrita.edu/cyber/*

Re: best solution for data ingestion

2013-11-04 Thread Bing Jiang
Apache Pig is also a solution for data ingestion, giving more flexibility in functionality and more efficiency in development. Regards, Bing 2013/11/2 Marcel Mitsuto F. S. > I've done some testing with flume, but ended up using syslog-ng: more > flexible, reliable, and with a lower footprint. >

Re: trace class calls in hadoop hdfs

2013-11-04 Thread Bing Jiang
I think you can try jdb, which lives in $JAVA_HOME/bin. Regards, Bing 2013/11/4 Karim Awara > Hi, > > I want to trace the calls that both the DataNode and the NameNode make when > I execute a shell command on HDFS, such as (hadoop dfs -put {src} > {dst}). Any idea how to do that? > > >

trace class calls in hadoop hdfs

2013-11-04 Thread Karim Awara
Hi, I want to trace the calls that both the DataNode and the NameNode make when I execute a shell command on HDFS, such as hadoop dfs -put {src} {dst}. Any idea how to do that? Tracing should include the DataNode asking the NameNode for the blockID, and also the class the DataNode uses to determ