Re: Re: Re: Re: Help in Hadoop

2009-11-22 Thread aa225
I am still getting the same exception. This is the stack trace of it. java.io.IOException: Not a file: hdfs://zeus:18004/user/hadoop/output6/MatrixA-Row1 at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:195) at org.apache.hadoop.mapred.JobClient.submitJob

Re: Re: Re: Re: Help in Hadoop

2009-11-22 Thread Gang Luo
- From: "aa...@buffalo.edu" To: common-user@hadoop.apache.org; aa...@buffalo.edu; Jason Venner Sent: 2009/11/22 (Sun) 9:53:41 PM Subject: Re: Re: Re: Re: Help in Hadoop I am still getting the same exception. This is the stack trace of it. java.io.IOException: Not a file: hdfs://zeus:18004/user/ha

Re: Re: Re: Re: Doubt in Hadoop

2009-11-29 Thread aa225
Hi, Actually, I just made the change suggested by Aaron and my code worked. But I would still like to know why the setJarByClass() method has to be called when the Main class and the Map and Reduce classes are in the same package? Thank You Abhishek Agrawal SUNY- Buffalo (716-435-7122)

Re: Re: Re: Re: Doubt in Hadoop

2009-11-30 Thread Aaron Kimball
You need to send a jar to the cluster so it can run your code there. Hadoop doesn't magically know which jar is the one containing your main class, or that of your mapper/reducer -- so you need to tell it via that call so it knows which jar file to upload. - Aaron On Sun, Nov 29, 2009 at 7:54 AM,
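
Aaron's point in code: a minimal old-API driver sketch (the class name and argument handling are hypothetical; the identity map/reduce defaults are used so it runs as-is). The setJarByClass() call is what lets the client locate and ship the jar, same package or not:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class MyDriver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf();
        // Without this call the client has no way of knowing which jar
        // on the classpath holds the job's classes, so nothing is
        // uploaded and the tasks fail with ClassNotFoundException.
        conf.setJarByClass(MyDriver.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}
```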

Re: RE!

2009-07-26 Thread Ninad Raut
Check out Pro Hadoop ... very nice book. On Mon, Jul 27, 2009 at 11:21 AM, Sugandha Naolekar wrote: > Hello! > > Can you please suggest a few good books on Hadoop? > > I am looking for a book based more on programming than theory. > > As in, > -> Map-Reduce and Co

Re: Re: Re: Re: Re: map output not equal to reduce input

2009-12-14 Thread Amogh Vasekar
--- Original Message From: Amogh Vasekar To: "common-user@hadoop.apache.org" Sent: 2009/12/11 (Fri) 2:55:12 AM Subject: Re: Re: Re: Re: map output not equal to reduce input Hi, The counters are updated as the records are *consumed*, for both mapper and reducer. Can you confirm if all the values returned

Re: Re: Re: Help in Hadoop

2009-11-22 Thread aa225
Hi everybody, The 10 different map-reduce jobs store their respective outputs in 10 different files. This is the snapshot: had...@zeus:~/hadoop-0.19.1$ bin/hadoop dfs -ls output5 Found 2 items drwxr-xr-x - hadoop supergroup 0 2003-05-16 02:16 /user/hadoop/output5/MatrixA-Row1

Re: Re: Re: Help in Hadoop

2009-11-22 Thread Jason Venner
Set the number of reduce tasks to 1. 2009/11/22 > Hi everybody, > The 10 different map-reduce jobs store their respective outputs in > 10 > different files. This is the snapshot > > had...@zeus:~/hadoop-0.19.1$ bin/hadoop dfs -ls output5 > Found 2 items > drwxr-xr-x - hadoop supergro
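
Jason's suggestion as a driver fragment (a sketch against the old JobConf API, assuming a JobConf named conf):

```java
// A single reduce task yields a single output file (part-00000) in the
// job's output directory, instead of one file or subdirectory per
// reducer that a follow-up job would have to glob over or merge.
conf.setNumReduceTasks(1);
```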

Re: Re: Re: Doubt in Hadoop

2009-11-29 Thread aa225
Hi, I don't set job.setJarByClass(Map.class). But my main class, the map class and the reduce class are all in the same package. Does this make any difference at all, or do I still have to call it? Thank You Abhishek Agrawal SUNY- Buffalo (716-435-7122) On Fri 11/27/09 1:42 PM , Aaron Kimball aa.

Re: re-reading

2011-06-08 Thread Stefan Wienert
Try input.clone()... 2011/6/8 Mark question : > Hi, > >   I'm trying to read the inputSplit over and over using the following function > in MapperRunner: > > @Override >    public void run(RecordReader input, OutputCollector output, Reporter > reporter) throws IOException { > >   RecordReader copyInpu

Re: re-reading

2011-06-08 Thread Harsh J
Or if that does not work for any reason (haven't tried it really), try writing your own InputFormat wrapper wherein you have direct access to the InputSplit object to do what you want (open two record readers, and manage them separately). On Wed, Jun 8, 2011 at 1:48 PM, Stefan Wienert wro

Re: re-reading

2011-06-08 Thread Mark question
Thanks for the replies, but input doesn't have 'clone'; I don't know why ... so I'll have to write my custom InputFormat ... I was hoping for an easier way though. Thank you, Mark On Wed, Jun 8, 2011 at 1:58 AM, Harsh J wrote: > Or if that does not work for any reason (haven't tried it really),

Re: re-reading

2011-06-08 Thread Mark question
I have a question though for Harsh's case... I wrote my custom InputFormat which will create an array of RecordReaders and give them to the MapRunner. Will that mean multiple copies of the InputSplit are all in memory? Or will there be one copy pointed to by all of them .. as if they were pointers? T

Re: re-reading

2011-06-08 Thread Harsh J
Mark, The InputSplit is something of a meta class you ought to use to get path, offset and length information from. Your RecordReader implementation in the InputFormat would ideally be wrapping two instantiated RecordReaders made from the same InputSplit meta information. The InputSplit object doe
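
A sketch of what Harsh describes, against the old (o.a.h.mapred) API: because the split is only (path, offset, length) metadata, every reader opened from it starts at the beginning, so the same split can be read twice. TwoPassDemo and readTwice are hypothetical names:

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;

public class TwoPassDemo {
    /** Reads the same split twice via two independent record readers. */
    public static void readTwice(TextInputFormat fmt, FileSplit split, JobConf job)
            throws IOException {
        for (int pass = 0; pass < 2; pass++) {
            // Each call opens a fresh reader positioned at the start of
            // the split; the InputSplit itself is just metadata and is
            // never "consumed" by reading.
            RecordReader<LongWritable, Text> reader =
                fmt.getRecordReader(split, job, Reporter.NULL);
            LongWritable key = reader.createKey();
            Text value = reader.createValue();
            while (reader.next(key, value)) {
                // process (key, value) for this pass
            }
            reader.close();
        }
    }
}
```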

Re: re-reading

2011-06-08 Thread Mark question
I assumed before reading the split API that it is the actual split, my bad. Thanks a lot Harsh, it's working great! Mark

Re: Re: Re: Re: Re: map output not equal to reduce input

2009-12-16 Thread Gang Luo
- From: Amogh Vasekar To: "common-user@hadoop.apache.org" Sent: 2009/12/15 (Tue) 1:59:14 AM Subject: Re: Re: Re: Re: Re: map output not equal to reduce input >>how do you define 'consumed by reducer' Trivially, as long as you have your values iterator go to the end, you shoul

Re: RE: RE: java.io.IOException: Incorrect data format

2011-09-20 Thread Uma Maheswara Rao G 72686
I would suggest you clean up some space and try again. Regards, Uma - Original Message - From: "Peng, Wei" Date: Wednesday, September 21, 2011 10:03 am Subject: RE: RE: java.io.IOException: Incorrect data format To: common-user@hadoop.apache.org > Yes, I can. The datanode i

Re: Re: Re: Re: map output not equal to reduce input

2009-12-11 Thread Gang Luo
asekar To: "common-user@hadoop.apache.org" Sent: 2009/12/11 (Fri) 2:55:12 AM Subject: Re: Re: Re: Re: map output not equal to reduce input Hi, The counters are updated as the records are *consumed*, for both mapper and reducer. Can you confirm if all the values returned by your iterators are

Re: Re: distcp question

2012-10-12 Thread kojie.fu
kojie.fu From: Rita Date: 2012-10-13 03:19 To: common-user Subject: Re: distcp question Thanks for the advice. Before I push or pull, are there any tests I can run before I do the distcp? I am not 100% sure if I have my webhdfs set up properly. On Fri, Oct 12, 2012 at 1:01 PM, J

Re: Re: distcp question

2012-10-12 Thread Rita
Nevermind. Figured it out. On Fri, Oct 12, 2012 at 3:20 PM, kojie.fu wrote: > > > > > > kojie.fu > > From: Rita > Date: 2012-10-13 03:19 > To: common-user > Subject: Re: distcp question > Thanks for the advice. > > Before I push or pull, are there any t

Re: Code re-use?

2009-09-09 Thread Kevin Peterson
On Tue, Sep 8, 2009 at 1:16 PM, Mark Kerzner wrote: > Hi, > I have some code that's common between the main class, mapper, and reducer. > Can I put it only in the main class and use it from the mapper and reducer? > > A similar question about static variables in the main - are they available > from ma
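
Kevin's reply is cut off above; the usual answer of the era is that code shared through the job jar works fine, but statics set in main() never reach the task JVMs, since mappers and reducers run in separate processes on other machines. Shared values must travel through the job configuration instead. A sketch against the old API (the property name and class are hypothetical):

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class ThresholdMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, LongWritable> {

    private int threshold;

    @Override
    public void configure(JobConf job) {
        // Read the value the driver placed in the configuration with
        // conf.setInt("myapp.threshold", 10); a static assigned in
        // main() would still be unset in this JVM.
        threshold = job.getInt("myapp.threshold", 0);
    }

    public void map(LongWritable key, Text value,
            OutputCollector<Text, LongWritable> output, Reporter reporter)
            throws IOException {
        // Emit only lines longer than the configured threshold.
        if (value.getLength() > threshold) {
            output.collect(value, key);
        }
    }
}
```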

Re: Code re-use?

2009-09-09 Thread Mark Kerzner
Thank you, Kevin, for a detailed explanation. I went ahead and shared both. Since I test on my machine, it worked :) but obviously it was a fluke, and I need to change my code for running on the cluster. Sincerely, Mark On Wed, Sep 9, 2009 at 2:57 PM, Kevin Peterson wrote: > On Tue, Sep 8, 2009

Re: Re: block size

2011-09-20 Thread hao.wang
Hi, Joey: Thanks for your help! 2011-09-21 hao.wang From: Joey Echeverria Sent: 2011-09-21 10:10:54 To: common-user Cc: Subject: Re: block size HDFS blocks are stored as files in the underlying filesystem of your datanodes. Those files do not take a fixed amount of space, so if you

Re: Re: DN limit

2011-12-22 Thread bourne1900
der: Adrian Liu Date: Dec 23, 2011 (Fri) 10:47 AM To: common-user@hadoop.apache.org Subject: Re: DN limit In my understanding, the max number of files stored in HDFS should be /sizeof(inode struct). This max number of HDFS files should be no smaller than the max files a datanode can hold. Please feel

Re: Re: DN limit

2011-12-23 Thread Harsh J
10:47 AM > To: common-user@hadoop.apache.org > Subject: Re: DN limit > In my understanding, the max number of files stored in HDFS should be > /sizeof(inode struct). This max number of HDFS files > should be no smaller than the max files a datanode can hold. > > Please feel f

Re: Re: DN limit

2011-12-25 Thread bourne1900
Hi, The block replication factor is 1. There are 150 million blocks in the NN web UI. Bourne From: Harsh J Sent: Dec 24, 2011 (Sat) 2:09 PM To: common-user Subject: Re: Re: DN limit Bourne, You have 14 million files, each taking up a single block or are these files multi-blocked? What does the block count come up

Re: Re: Re: Re: map output not equal to reduce input

2009-12-10 Thread Amogh Vasekar
From: Todd Lipcon To: common-user@hadoop.apache.org Sent: 2009/12/10 (Thu) 4:43:52 PM Subject: Re: Re: Re: map output not equal to reduce input On Thu, Dec 10, 2009 at 1:15 PM, Gang Luo wrote: > Hi Todd, > I didn't change the partitioner, just use the default one. Will the d

Re: Re: Re: how to make hadoop balance automatically

2010-10-08 Thread shangan
2010-10-08 From: Taeho Kang Sent: 2010-10-08 13:49:55 To: common-user Cc: Subject: Re: Re: how to make hadoop balance automatically Have the dfs upload done by a server not running a datanode and your blocks will be randomly distributed among active datanodes. On Fri, Oct 8, 2010 at 2

Re: Re: Re: how to make hadoop balance automatically

2010-10-08 Thread Harsh J
choice? > > > > > 2010-10-08 > > > > > > > From: Taeho Kang > Sent: 2010-10-08 13:49:55 > To: common-user > Cc: > Subject: Re: Re: how to make hadoop balance automatically > > Have the dfs upload done by a server not running a datanode and your > b

Re: Re-using output directories

2009-08-18 Thread Enis Soztutar
Phil Hagelberg wrote: I'm trying to write a Hadoop job that will add documents to an existing Lucene index. My initial idea was to set the index as the output directory and create an IndexWriter based on FileOutputFormat.getOutputPath(context), but this requires that the output path not exist wh

Re: Re: Problem with Hadoop

2009-11-09 Thread aa225
Hi Jeff, I am sorry but I do not have the file mapred-site.xml. So I made the change in hadoop-site.xml. Also, do we have to make the change only on the master node, or also on the slave nodes? Thank You Abhishek Agrawal SUNY- Buffalo (716-435-7122) On Tue 11/10/09 12:01 AM , Jeff Zhang

Re: Re: Help in Hadoop

2009-11-22 Thread aa225
Hello, If I write the output of the 10 tasks to 10 different files, then how do I go about merging the output? Is there some built-in functionality or do I have to write some code for that? Thank You Abhishek Agrawal SUNY- Buffalo (716-435-7122) On Sun 11/22/09 5:40 PM , Gang Luo lgpu

Re: Re: Help in Hadoop

2009-11-22 Thread Gang Luo
.apache.org; aa...@buffalo.edu; Gang Luo Sent: 2009/11/22 (Sun) 5:48:36 PM Subject: Re: Re: Help in Hadoop Hello, If I write the output of the 10 tasks to 10 different files, then how do I go about merging the output? Is there some built-in functionality or do I have to write some code for that? Than

Re: Re: Doubt in Hadoop

2009-11-26 Thread aa225
Hi, I am running the job from command line. The job runs fine in the local mode but something happens when I try to run the job in the distributed mode. Abhishek Agrawal SUNY- Buffalo (716-435-7122) On Fri 11/27/09 2:31 AM , Jeff Zhang zjf...@gmail.com sent: > Do you run the map reduce job

Re: Re: Doubt in Hadoop

2009-11-27 Thread Aaron Kimball
When you set up the Job object, do you call job.setJarByClass(Map.class)? That will tell Hadoop which jar file to ship with the job and to use for classloading in your code. - Aaron On Thu, Nov 26, 2009 at 11:56 PM, wrote: > Hi, > I am running the job from command line. The job runs fine in

Re: Re: return in map

2009-12-06 Thread Edmund Kohlwey
> Similarly, if I catch an exception and I want to quit the current task, what > should I do? > > -Gang > > > - Original Message > From: Edmund Kohlwey > To: common-user@hadoop.apache.org > Sent: 2009/12/6 (Sun) 10:52:40 AM > Subject: Re: return in map > > Let me

Re: Re: return in map

2009-12-06 Thread Amogh Vasekar
, the file I want to read > doesn't exist)? If I use System.exit(), hadoop will try to run it again. > Similarly, if I catch an exception and I want to quit the current task, what > should I do? > > -Gang > > > - Original Message > From: Edmund Kohlwey > To:

RE: Re: return in map

2009-12-07 Thread Gang Luo
Thanks. It helps. -Gang - Original Message From: Amogh Vasekar To: "common-user@hadoop.apache.org" Sent: 2009/12/7 (Mon) 12:43:07 AM Subject: Re: Re: return in map Hi, If the file doesn't exist, java will error out. For partial skips, the o.a.h.mapreduce.Mapper class provides a method ru
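
The method Amogh refers to is run(); a sketch of overriding it for a partial skip (new o.a.h.mapreduce API; the skip-on-IOException policy is an assumption for illustration):

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SkippingMapper extends Mapper<LongWritable, Text, LongWritable, Text> {
    // Overriding run() gives control of the record loop: returning early
    // skips the rest of this task's input cleanly, unlike System.exit(),
    // which kills the JVM and makes the framework retry the task.
    @Override
    public void run(Context context) throws IOException, InterruptedException {
        setup(context);
        try {
            while (context.nextKeyValue()) {
                try {
                    map(context.getCurrentKey(), context.getCurrentValue(), context);
                } catch (IOException e) {
                    return; // partial skip: abandon remaining records, task still succeeds
                }
            }
        } finally {
            cleanup(context);
        }
    }
}
```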

Re: Need to re replicate

2010-01-27 Thread Raymond Jennings III
I would try running the rebalance utility. I would be curious to see what that will do and if that will fix it. --- On Wed, 1/27/10, Ananth T. Sarathy wrote: > From: Ananth T. Sarathy > Subject: Need to re replicate > To: common-user@hadoop.apache.org > Date: Wednesday, January

Re: Need to re replicate

2010-01-27 Thread Ananth T. Sarathy
: > > > From: Ananth T. Sarathy > > Subject: Need to re replicate > > To: common-user@hadoop.apache.org > > Date: Wednesday, January 27, 2010, 9:28 PM > > One of our datanodes went bye bye. We > > added a bunch more data nodes, but > > when I do

Re: Need to re replicate

2010-01-27 Thread Brian Bockelman
do and if that will fix it. >> >> --- On Wed, 1/27/10, Ananth T. Sarathy wrote: >> >>> From: Ananth T. Sarathy >>> Subject: Need to re replicate >>> To: common-user@hadoop.apache.org >>> Date: Wednesday, January 27, 2010, 9:28 PM >>> O

Re: Need to re replicate

2010-01-27 Thread Ananth T. Sarathy
be curious to see > what > >> that will do and if that will fix it. > >> > >> --- On Wed, 1/27/10, Ananth T. Sarathy > wrote: > >> > >>> From: Ananth T. Sarathy > >>> Subject: Need to re replicate > >>> To: common-user@h

Re: Need to re replicate

2010-01-27 Thread Brian Bockelman
ymondj...@yahoo.com >>>> wrote: >>> >>>> I would try running the rebalance utility. I would be curious to see >> what >>>> that will do and if that will fix it. >>>> >>>> --- On Wed, 1/27/10, Ananth T. Sarathy >> wr

Re: Re: Fair Scheduler Problem

2012-03-06 Thread hao.wang
ds, 2012-03-07 hao.wang From: Harsh J Sent: 2012-03-07 14:14:05 To: common-user Cc: Subject: Re: Fair Scheduler Problem Hello Hao, It's best to submit CDH user queries to https://groups.google.com/a/cloudera.org/group/cdh-user/topics (cdh-u...@cloudera.org) where the majority of CDH users

Re: Re: Fair Scheduler Problem

2012-03-06 Thread Harsh J
__ > hao.wang > ____ > From: Harsh J > Sent: 2012-03-07 14:14:05 > To: common-user > Cc: > Subject: Re: Fair Scheduler Problem > Hello Hao, > It's best to submit CDH user queries to > https://groups.google.com/a/cloudera.org/group/cdh-user

Re: Re: mapreduce attempts killed

2010-08-27 Thread shangan
Thank you. I get it, it is the speculative attempt. 2010-08-27 shangan From: Amareshwari Sri Ramadasu Sent: 2010-08-27 16:33:59 To: common-user@hadoop.apache.org Cc: Subject: Re: mapreduce attempts killed You should look at the task logs to figure out why the tasks failed. They are

Re: Re-generate datanode storageID?

2011-03-24 Thread Niels Basjes
Hi, To solve that, simply do the following on the problematic nodes: 1) Stop the datanode (probably not running) 2) Remove everything inside the .../cache/hdfs/ 3) Start the datanode again. Note: With Cloudera, always use the "service" way to stop/start hadoop software! service hadoop-0.20-datanode sto

Re: Re-generate datanode storageID?

2011-03-24 Thread Marc Leavitt
Worked perfectly. Thanks Niels! -mgl On Mar 24, 2011, at 12:48 PM, Niels Basjes wrote: > Hi, > > To solve that simply do the following on the problematic nodes: > 1) Stop the datanode (probably not running) > 2) Remove everything inside the .../cache/hdfs/ > 3) Start the datanode again. > >

Re: Map Tasks re-executing

2011-03-30 Thread maha
It's not the sorting, since the sorted files are produced in the output; it's the mapper not exiting well. So can anyone tell me if it's wrong to write the mapper's close() function like this? @Override public void close() throws IOException{ helper.CleanUp();
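
For reference, a defensive variant of the close() quoted above (a sketch: the poster's helper is modeled here as a java.io.Closeable, with the original CleanUp() playing the role of close()). Failures during cleanup then surface as task failures rather than a mapper that silently does not exit well:

```java
import java.io.Closeable;
import java.io.IOException;
import org.apache.hadoop.mapred.MapReduceBase;

public abstract class SafeCloseMapper extends MapReduceBase {
    // Stand-in for the poster's helper object.
    protected Closeable helper;

    @Override
    public void close() throws IOException {
        // Null-guard, clean up exactly once, and let any IOException
        // propagate so the task attempt reports it.
        if (helper != null) {
            try {
                helper.close();
            } finally {
                helper = null;
            }
        }
    }
}
```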

Re:

2012-07-29 Thread Harsh J
For a job to get submitted to a cluster, you will need proper client configurations. Have you configured your mapred-site.xml and yarn-site.xml properly inside /etc/hadoop/conf/mapred-site.xml and /etc/hadoop/conf/yarn-site.xml at the client node? On Mon, Jul 30, 2012 at 12:00 AM, abhiTowson cal

Re:

2012-07-29 Thread abhiTowson cal
Hi, Thanks for the reply, Harsh. These are my configuration properties: // mapred-site.xml mapreduce.framework.name yarn mapreduce.jobhistory.address hadoop-master-2:10020 mapreduce.jobhistory.webapp.address hadoop-master-2:19888 // yarn-site.xml Classpath for typical applications.
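
The archive flattens the quoted configuration; restored to its normal XML shape (values exactly as the poster gave them; the yarn-site.xml part is truncated and not reconstructed), the mapred-site.xml portion reads:

```xml
<!-- mapred-site.xml, reconstructed from the flattened quote above -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop-master-2:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop-master-2:19888</value>
  </property>
</configuration>
```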

Re:

2012-07-29 Thread anil gupta
Hi Abhishek, If, after making sure that everything Harsh said in the previous email is present in the cluster, the job still runs in Local Mode, then try running the job with the hadoop --config option. Refer to this discussion for more detail: https://groups.google.com/a/cloudera.org/forum/#!topic/

Re:

2012-07-29 Thread abhiTowson cal
Hi Anil, I have already tried this, but the issue could not be resolved. Regards Abhishek On Sun, Jul 29, 2012 at 3:05 PM, anil gupta wrote: > Hi Abhishek, > > Once you make sure that whatever Harsh said in the previous email is > present in the cluster and then also the job runs in Local Mode. Then

Re:

2012-07-29 Thread Anil Gupta
Are you using CDH4? In your cluster, are you using YARN or MR1? Check the classpath of Hadoop with the hadoop classpath command. Best Regards, Anil On Jul 29, 2012, at 12:12 PM, abhiTowson cal wrote: > Hi Anil, > > I have already tried this, but the issue could not be resolved. > > Regards > Abhishek > >

Re:

2012-07-29 Thread abhiTowson cal
Hi Anil, I am using CDH4 with YARN. On Sun, Jul 29, 2012 at 3:17 PM, Anil Gupta wrote: > Are you using CDH4? In your cluster, are you using YARN or MR1? > Check the classpath of Hadoop with the hadoop classpath command. > > Best Regards, > Anil > > On Jul 29, 2012, at 12:12 PM, abhiTowson cal

Re:

2012-07-29 Thread abhiTowson cal
Hi Anil, The Hadoop classpath is also working fine. Regards Abhishek On Sun, Jul 29, 2012 at 3:20 PM, abhiTowson cal wrote: > Hi Anil, > I am using CDH4 with YARN. > > On Sun, Jul 29, 2012 at 3:17 PM, Anil Gupta wrote: >> Are you using CDH4? In your cluster, are you using YARN

Re:

2012-07-29 Thread anil gupta
Hi Abhishek, I faced a similar problem with CDH4 a few days ago. The problem I found out was the classpath. The user from which I installed the CDH packages had the right classpath, but the users "yarn" and "hdfs" had an incorrect classpath. When I was trying to run the job as yarn or h

Re:

2012-07-29 Thread anil gupta
Hi Abhishek, I didn't mean to ask you whether it returns a result or not. I meant that you should check that the classpath is correct. It should have the directories where YARN is installed. ~Anil On Sun, Jul 29, 2012 at 12:23 PM, abhiTowson cal wrote: > Hi Anil, > > The Hadoop classpath is also work

Re:

2012-07-29 Thread abhiTowson cal
Hi Anil, Thanks for the reply. Same as in your case, my pi job is halted and there is no progress. Regards Abhishek On Sun, Jul 29, 2012 at 3:31 PM, anil gupta wrote: > Hi Abhishek, > > I didn't mean to ask you whether it returns a result or not. I meant that you > should check that the classpath is c

Re:

2012-07-29 Thread anil gupta
Seems like you are also stuck on the same problem as I am... I am going to work on changing my conf tomorrow to fix this. How much memory does your node have? Check the logs of the NodeManagers. At the bottom of the log file you will see that the NM is stopping some components (sorry, I can't recall the exact nam

RE!

2009-07-26 Thread Sugandha Naolekar
Hello! Can you please suggest a few good books on Hadoop? I am looking for a book based more on programming than theory. As in, -> Map-Reduce and Compression -> Hive and HBase -> Experimentation with Hadoop I also want to know in what way we can use Hadoop in big

Re::!

2009-08-02 Thread Sugandha Naolekar
I want to compress the data first and then place it in HDFS. Again, while retrieving the same, I want to uncompress it and place it at the desired destination. Is this possible? How do I get started? Also, I want to get started with the actual coding part of compression and MapReduce. Please suggest me

Re: :!

2009-08-02 Thread prashant ullegaddi
By "I want to compress the data first and then place it in HDFS", do you mean you want to compress the data locally and then copy to DFS? What's the size of your data? What's the capacity of HDFS? On Mon, Aug 3, 2009 at 10:45 AM, Sugandha Naolekar wrote: > I want to compress the data first and t

Re: :!

2009-08-02 Thread Sugandha Naolekar
Yes, you are right. Here are the related details: -> I have a Hadoop cluster of 7 nodes. Now there is this 8th machine, which is not a part of the hadoop cluster. -> I want to place the data of that machine into HDFS. Thus, before placing it in HDFS, I want to compress it, and then dump it in t

Re: :!

2009-08-02 Thread prashant ullegaddi
I don't think you will be able to compress some data unless it's on HDFS. What you can do is 1. Manually compress the data on the machine where the data resides. Then, copy the same to HDFS. or 2. Copy the data without compressing to HDFS, then run a job which just emits the data as it reads in k
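
Option 2 as a sketch (old API; the class name and the gzip/block-compression choices are assumptions): an identity, map-only job that rewrites its input as compressed sequence files:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;

public class CompressCopy {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(CompressCopy.class);
        conf.setJobName("compress-copy");
        // Identity map, no reduce: records pass straight through and
        // are rewritten as block-compressed sequence files.
        conf.setNumReduceTasks(0);
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);
        conf.setOutputFormat(SequenceFileOutputFormat.class);
        SequenceFileOutputFormat.setOutputCompressionType(
            conf, SequenceFile.CompressionType.BLOCK);
        FileOutputFormat.setCompressOutput(conf, true);
        FileOutputFormat.setOutputCompressorClass(conf, GzipCodec.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
    }
}
```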

Re: :!

2009-08-03 Thread Sugandha Naolekar
That's fine. But if I place the data in HDFS and then run map reduce code to provide compression, then the data will get compressed into sequence files, but the original data will still reside in storage, thereby causing a kind of data redundancy... Can you please suggest a way out? On

Re: :!

2009-08-03 Thread A BlueCoder
unsubscribe On Mon, Aug 3, 2009 at 12:01 AM, Sugandha Naolekar wrote: > dats fine. But, if I place the data in HDFS and then run map reduce code to > provide compression, then the data will get compressed in sequence files > but, even the original data will reside in the memory;thereby leading or

Re: :!

2009-08-03 Thread Sugandha Naolekar
This is ridiculous. What do you mean by unsubscribe?? I have a few queries and that's why I have logged in to the corresponding forum. On Mon, Aug 3, 2009 at 12:33 PM, A BlueCoder wrote: > unsubscribe > > On Mon, Aug 3, 2009 at 12:01 AM, Sugandha Naolekar > wrote: > > > That's fine. But, if I place the

Re: :!

2009-08-03 Thread prashant ullegaddi
How files are written can be controlled. Maybe you are using SequenceFileOutputFormat. You can setOutputFormat() to TextOutputFormat. I guess this should solve your problem! On Mon, Aug 3, 2009 at 12:31 PM, Sugandha Naolekar wrote: > That's fine. But if I place the data in HDFS and then run map re
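
prashant's suggestion as an old-API fragment (a sketch assuming a JobConf named conf):

```java
// Emit plain text instead of sequence files.
conf.setOutputFormat(org.apache.hadoop.mapred.TextOutputFormat.class);
```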

RE: :!

2009-08-03 Thread Amogh Vasekar
program will launch one map task per compressed file, so make sure you design your compression accordingly. Thanks, Amogh -Original Message- From: Sugandha Naolekar [mailto:sugandha@gmail.com] Sent: Monday, August 03, 2009 12:32 PM To: common-user@hadoop.apache.org Subject: Re: :!

Re: :!

2009-08-03 Thread Vibhooti Verma
ugust 03, 2009 12:32 PM > To: common-user@hadoop.apache.org > Subject: Re: :! > > dats fine. But, if I place the data in HDFS and then run map reduce code to > provide compression, then the data will get compressed in sequence files > but, even the original data will reside in the memo

Re: :!

2009-08-03 Thread Brian Bockelman
Hey Sugandha, It's a common mistake - I think he was trying to unsubscribe from the mailing list (which is done by sending a message to a specific email address with the command "unsubscribe"), not telling you to unsubscribe. Brian On Aug 3, 2009, at 2:09 AM, Sugandha Naolekar wrote: This i

Re: :!

2009-09-20 Thread Sugandha Naolekar
Hello! Can I use RMI to dump the files or data from a remote machine into the hadoop cluster, by executing that code from the local host? -- Regards! Sugandha

Re::!

2009-10-20 Thread Sugandha Naolekar
Hello! We have a cluster of 5 nodes and we are concentrating on the development of a DFS (Distributed File System) with the incorporation of Hadoop. Now, can I get some ideas on how to design package diagrams? -- Regards! Sugandha

Re::!

2009-10-21 Thread sudha sadhasivam
The cluster can be  set up in fully distributed mode. The slave and master conf files have to be changed appropriately G Sudha Sadasivam --- On Tue, 10/20/09, Sugandha Naolekar wrote: From: Sugandha Naolekar Subject: Re::! To: core-u...@hadoop.apache.org Date: Tuesday, October 20, 2009, 5:09

Re::!!

2010-06-17 Thread Sugandha Naolekar
I need to execute code through the hadoop prompt, i.e., bin/hadoop. So I built a jar of it using jar cfmv Jarfile_name Manifest_filename -C directory_name/ . (in which the jars and class files are added). After that, I simply execute the code through bin/hadoop Jarfilename. But I get an error o

Re: :!!

2010-06-17 Thread Raghava Mutharaju
Did you use the following? bin/hadoop jar Raghava. On Thu, Jun 17, 2010 at 9:21 PM, Sugandha Naolekar wrote: > I need to execute code through the hadoop prompt, i.e., bin/hadoop. > So I built a jar of it using jar cfmv Jarfile_name Manifest_filename -C > directory_name/ . (in which the ja

Re: :!!

2010-06-17 Thread Sugandha Naolekar
These are the things I did: 1) I went into the hadoop directory - the path is /home/hadoop/Softwares/hadoop0.19.0 2) Then I made a folder named Try under the above path. I added all the jars under the lib directory and the bin folder in which my code lies. This bin folder got created under Eclipse's w

Re: :!!

2010-06-17 Thread Sugandha Naolekar
Now the jar is getting built, but when I try to run it, it displays the following: >bin/hadoop jar Sample.jar RunJar jarFile [mainClass] args... Please suggest something... if possible, the procedure I have followed can be tried by someone!! Regards! Sugandha On Fri, Jun 18, 2010 at 8:13 AM, Suga

Re: :!!

2010-06-17 Thread Chandraprakash Bhagtani
The problem may be in your jar creation, or the path where you are copying the jar may be different from the one in the jar command you are running. Try building the jar from Eclipse itself and make sure you are giving the correct path of the jar file to the hadoop command. On Fri, Jun 18, 2010 at 11:01 AM, Sugandha Nao

Re: :!!

2010-06-18 Thread Sugandha Naolekar
The steps mentioned in my mail above are the ones that I followed! If someone could repeat the same, the problem could be better understood. How do I run a Hadoop jar file through the RunJar API??? See, simply, a built jar file can be passed as a parameter, along with where it is to be extracted, to the unjar static

Re: :!!

2010-06-18 Thread Raghava Mutharaju
I followed the jar construction step mentioned in the Usage section (link below) and also the step mentioned to run specific classes in the jar. http://hadoop.apache.org/common/docs/r0.20.1/mapred_tutorial.html#Usage I replaced the classes folder with bin because that's where Eclipse puts the c

RE:

2010-09-13 Thread 褚 鵬兵
Solved this question. I found that the automatically downloaded file was corrupted. Manually download hadoop-0.20.0.tar.gz, then put it into the ~/.ant/cache/hadoop/core/sources folder; then it is OK. From: chu_pengb...@hotmail.com To: common-user@hadoop.apache.org Subject: Date: Thu, 9 Sep 2010 18:03:

Re: Re: Re: map output not equal to reduce input

2009-12-10 Thread Todd Lipcon
On Thu, Dec 10, 2009 at 1:15 PM, Gang Luo wrote: > Hi Todd, > I didn't change the partitioner, just use the default one. Will the default > partitioner cause the loss of records? > > -Gang > Do the maps output data nondeterministically? Did you experience any task failures in the run of the
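
For reference, the default partitioner being discussed is HashPartitioner (the real class lives in org.apache.hadoop.mapred.lib); a sketch of its logic, which is deterministic and therefore routes each record to exactly one reducer rather than dropping any:

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class HashLikePartitioner<K, V> implements Partitioner<K, V> {
    @Override
    public void configure(JobConf job) {}

    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        // Mask off the sign bit so the modulus is non-negative; a fixed
        // key and reducer count always yield the same partition.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
```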

Re: Re: Re: map output not equal to reduce input

2009-12-10 Thread Gang Luo
Lipcon To: common-user@hadoop.apache.org Sent: 2009/12/10 (Thu) 4:43:52 PM Subject: Re: Re: Re: map output not equal to reduce input On Thu, Dec 10, 2009 at 1:15 PM, Gang Luo wrote: > Hi Todd, > I didn't change the partitioner, just use the default one. Will the default > partitioner cau

Re: Re: HDFS block physical location

2012-07-25 Thread Chen He
; but that just gives me the hostnames, or am I overlooking something? > I actually need the filename/harddisk on the node. > > JS > > Sent: Wednesday, 25 July 2012, 23:33 > From: "Chen He" > To: common-user@hadoop.apache.org > Subject: Re: HDFS block physic

Re: Re: HDFS block physical location

2012-07-25 Thread Todd Lipcon
actually need the filename/harddisk on the node. >> >> JS >> >> Sent: Wednesday, 25 July 2012, 23:33 >> From: "Chen He" >> To: common-user@hadoop.apache.org >> Subject: Re: HDFS block physical location >> >nohup hadoop

Re: Task re-scheduling in hadoop

2011-08-23 Thread Arun C Murthy
Moving to mapreduce-user@, bcc common-user@ On Aug 23, 2011, at 2:31 AM, Vaibhav Pol wrote: > Hi All, > I have a query regarding task re-scheduling. Is it possible > to make the JobTracker wait for some time before re-scheduling a failed > tracker's tasks. > W

Re: RE: java.io.IOException: Incorrect data format

2011-09-20 Thread Uma Maheswara Rao G 72686
Are you able to create the directory manually on the DataNode machine? #mkdirs /state/partition2/hadoop/dfs/tmp Regards, Uma - Original Message - From: "Peng, Wei" Date: Wednesday, September 21, 2011 9:44 am Subject: RE: java.io.IOException: Incorrect data format To: c

RE: RE: java.io.IOException: Incorrect data format

2011-09-20 Thread Peng, Wei
Yes, I can. The datanode is not able to start after crashing due to insufficient HD space. Wei -Original Message- From: Uma Maheswara Rao G 72686 [mailto:mahesw...@huawei.com] Sent: Tuesday, September 20, 2011 9:30 PM To: common-user@hadoop.apache.org Subject: Re: RE: java.io.IOException

RE: RE: java.io.IOException: Incorrect data format

2011-09-20 Thread Peng, Wei
, 2011 9:30 PM To: common-user@hadoop.apache.org Subject: Re: RE: java.io.IOException: Incorrect data format Are you able to create the directory manually in the DataNode Machine? #mkdirs /state/partition2/hadoop/dfs/tmp Regards, Uma - Original Message - From: "Peng, Wei" Date:

RE: RE: java.io.IOException: Incorrect data format

2011-09-20 Thread Peng, Wei
I just solved the problem by freeing more space on the related HD partitions. Thank you all for your help! Wei -Original Message- From: Peng, Wei [mailto:wei.p...@xerox.com] Sent: Tuesday, September 20, 2011 9:35 PM To: common-user@hadoop.apache.org Subject: RE: RE

Re: RE: risks of using Hadoop

2011-09-21 Thread Uma Maheswara Rao G 72686
: RE: risks of using Hadoop To: common-user@hadoop.apache.org > Amen to that. I haven't heard a good rant in a long time, I am > definitely amused and entertained. > > As a veteran of 3 years with Hadoop I will say that the SPOF issue > is whatever you want to make it. But it

Re: Re: could not complete file...

2011-10-18 Thread bourne1900
Sender: Uma Maheswara Rao G 72686 Date: Oct 18, 2011 (Tue) 6:00 PM To: common-user CC: common-user Subject: Re: could not complete file... - Original Message - From: bourne1900 Date: Tuesday, October 18, 2011 3:21 pm Subject: could not complete file... To: common-user > Hi, > >

Re: Re: about hadoop lzo compression

2012-05-23 Thread yingnan.ma
Hi, Thank you for the help! Best Regards Malone 2012-05-23 yingnan.ma From: Harsh J Sent: 2012-05-23 18:24:14 To: common-user Cc: Subject: Re: about hadoop lzo compression Malone, Right now it works despite the error because Pig hasn't had a need to read/write LZO data locally. Hence

Re: Re: how to query JobTracker

2010-06-17 Thread Some Body
Alan - original message Subject: Re: how to query JobTracker Sent: Thu, 17 Jun 2010 From: Sanel Zukan > AFAIK, there is no such method (to get a job name from the client side) :( > (at least I wasn't able to find it). The job name can be > extracted via JobProfile for a given id, but on

Re: Re: how to query JobTracker

2010-06-17 Thread Sanel Zukan
; submit the job, write the jobid to a lock file (hdfs://myapp/myjob.lock),   > and then >  a. remove the lock file when the job finishes, or >  b. if a new job is triggered before the first has finished, read the jobid from > the lock file, >     kill the previous job, and start a new one
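
Sanel's kill step as a sketch (old API; the lock-file I/O is left out, and jobIdText stands for the id read from hdfs://myapp/myjob.lock):

```java
import java.io.IOException;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class JobKiller {
    /** Looks up a job by the id read from the lock file and kills it if still running. */
    public static void killIfRunning(JobConf conf, String jobIdText) throws IOException {
        JobClient client = new JobClient(conf);
        RunningJob job = client.getJob(JobID.forName(jobIdText));
        if (job != null && !job.isComplete()) {
            job.killJob();
        }
    }
}
```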

Re: RE: please help in setting hadoop

2009-11-26 Thread aa225
iday, November 27, 2009 10:56 AM > > To: common-user@hadoop.apache.org > Subject: Re: please help in setting hadoop > > Hi, > Just a thought, but you do not need to set up the temp directory in > conf/hadoop-site.xml, especially if you are running basi

Re: RE: please help in setting hadoop

2009-11-27 Thread Aaron Kimball
> > relative to me. > > > > > > > > > > -Original Message- > > > > From: aa...@buffalo.edu [aa...@buffalo.edu] > > Sent: Friday, November 27, 2009 10:56 AM > > > > To: common-user@hadoop.apache.org > > Subje
