I am still getting the same exception. This is the stack trace of it.
java.io.IOException: Not a file:
hdfs://zeus:18004/user/hadoop/output6/MatrixA-Row1
at
org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:195)
at org.apache.hadoop.mapred.JobClient.submitJob
-
发件人: "aa...@buffalo.edu"
收件人: common-user@hadoop.apache.org; aa...@buffalo.edu; Jason Venner
发送日期: 2009/11/22 (周日) 9:53:41 下午
主 题: Re: Re: Re: Re: Help in Hadoop
I am still getting the same exception. This is the stack trace of it.
java.io.IOException: Not a file:
hdfs://zeus:18004/user/ha
Hi,
Actually, I just made the change suggested by Aaron and my code worked. But I
would still like to know why the setJarByClass() method has to be called
when the main class and the Map and Reduce classes are in the same package.
Thank You
Abhishek Agrawal
SUNY- Buffalo
(716-435-7122)
You need to send a jar to the cluster so it can run your code there. Hadoop
doesn't magically know which jar is the one containing your main class, or
that of your mapper/reducer -- so you need to tell it via that call so it
knows which jar file to upload.
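For illustration, a minimal driver sketch showing where that call goes (the class
name MyDriver and the argument handling are hypothetical, not from this thread;
new o.a.h.mapreduce API):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MyDriver {
  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "example");
    // Hadoop finds the jar that MyDriver was loaded from and ships that jar to
    // the cluster, so task JVMs can classload the Mapper/Reducer it contains.
    job.setJarByClass(MyDriver.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}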
- Aaron
On Sun, Nov 29, 2009 at 7:54 AM,
Check out Pro Hadoop ... a very nice book.
On Mon, Jul 27, 2009 at 11:21 AM, Sugandha Naolekar
wrote:
> Hello!
>
> Can you please suggest a few good books on Hadoop?
>
> I am looking for a book focused more on the programming rather than
> the theory.
>
> As in,
> -> Map-Reduce and Co
--- Original Message
From: Amogh Vasekar
To: "common-user@hadoop.apache.org"
Sent: 2009/12/11 (Friday) 2:55:12 AM
Subject: Re: Re: Re: Re: map output not equal to reduce input
Hi,
The counters are updated as the records are *consumed*, for both mapper and
reducer. Can you confirm if all the values returned
Hi everybody,
The 10 different map-reducers store their respective outputs in 10
different files. This is the snapshot
had...@zeus:~/hadoop-0.19.1$ bin/hadoop dfs -ls output5
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2003-05-16 02:16
/user/hadoop/output5/MatrixA-Row1
set the number of reduce tasks to 1.
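In code, that suggestion is a single call on the job configuration; a tiny hedged
sketch (the wrapper class name is hypothetical; old JobConf API, matching the
0.19.1 setup above):

import org.apache.hadoop.mapred.JobConf;

public class SingleReducerConf {
  // Force a single reduce task so all of the job's output is merged into one
  // part file (part-00000) instead of being spread over many output files.
  public static JobConf withOneReducer(JobConf conf) {
    conf.setNumReduceTasks(1);
    return conf;
  }
}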
2009/11/22
> Hi everybody,
> The 10 different map-reducers store their respective outputs in
> 10
> different files. This is the snap shot
>
> had...@zeus:~/hadoop-0.19.1$ bin/hadoop dfs -ls output5
> Found 2 items
> drwxr-xr-x - hadoop supergro
Hi,
I don't set job.setJarByClass(Map.class). But my main class, the map class, and
the reduce class are all in the same package. Does this make any difference at all,
or do I still have to call
Thank You
Abhishek Agrawal
SUNY- Buffalo
(716-435-7122)
On Fri 11/27/09 1:42 PM , Aaron Kimball aa.
Try input.clone()...
2011/6/8 Mark question :
> Hi,
>
> I'm trying to read the inputSplit over and over using following function
> in MapperRunner:
>
> @Override
> public void run(RecordReader input, OutputCollector output, Reporter
> reporter) throws IOException {
>
> RecordReader copyInpu
Or if that does not work for any reason (haven't tried it really), try
writing your own InputFormat wrapper where in you can have direct
access to the InputSplit object to do what you want to (open two
record readers, and manage them separately).
On Wed, Jun 8, 2011 at 1:48 PM, Stefan Wienert wro
Thanks for the replies, but input doesn't have 'clone' I don't know why ...
so I'll have to write my custom inputFormat ... I was hoping for an easier
way though.
Thank you,
Mark
On Wed, Jun 8, 2011 at 1:58 AM, Harsh J wrote:
> Or if that does not work for any reason (haven't tried it really),
I have a question though for Harsh case... I wrote my custom inputFormat
which will create an array of recordReaders and give them to the MapRunner.
Will that mean multiple copies of the inputSplit are all in memory? or will
there be one copy pointed by all of them .. as if they were pointers ?
T
Mark,
The InputSplit is something of a meta class you ought to use to get
path, offset and length information from. Your RecordReader
implementation in the InputFormat would ideally be wrapping two
instantiated RecordReaders made from the same InputSplit meta
information. The InputSplit object doe
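For what it's worth, a hedged sketch of that suggestion (old o.a.h.mapred API as
used in this thread; the class names are hypothetical): the wrapping RecordReader
below simply runs through the same split twice by switching to a second reader
once the first is exhausted.

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;

public class TwoPassInputFormat extends TextInputFormat {

  @Override
  public RecordReader<LongWritable, Text> getRecordReader(
      InputSplit split, JobConf job, Reporter reporter) throws IOException {
    // Two independent readers opened from the same InputSplit metadata
    // (path, offset, length); the split itself is not copied.
    return new TwoPassRecordReader(
        super.getRecordReader(split, job, reporter),
        super.getRecordReader(split, job, reporter));
  }

  static class TwoPassRecordReader implements RecordReader<LongWritable, Text> {
    private final RecordReader<LongWritable, Text> first;
    private final RecordReader<LongWritable, Text> second;
    private RecordReader<LongWritable, Text> current;

    TwoPassRecordReader(RecordReader<LongWritable, Text> first,
                        RecordReader<LongWritable, Text> second) {
      this.first = first;
      this.second = second;
      this.current = first;
    }

    public boolean next(LongWritable key, Text value) throws IOException {
      if (current.next(key, value)) {
        return true;
      }
      if (current == first) {       // first pass done: start reading the split again
        current = second;
        return current.next(key, value);
      }
      return false;                 // both passes done
    }

    public LongWritable createKey() { return current.createKey(); }
    public Text createValue() { return current.createValue(); }
    public long getPos() throws IOException { return current.getPos(); }
    public float getProgress() throws IOException { return current.getProgress(); }

    public void close() throws IOException {
      first.close();
      second.close();
    }
  }
}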
I assumed before reading the split API that it is the actual split, my bad.
Thanks a lot Harsh, it's working great!
Mark
-
From: Amogh Vasekar
To: "common-user@hadoop.apache.org"
Sent: 2009/12/15 (Tuesday) 1:59:14 AM
Subject: Re: Re: Re: Re: Re: map output not equal to reduce input
>>how do you define 'consumed by reducer'
Trivially, as long as you have your values iterator go to the end, you shoul
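As a concrete illustration of "going to the end" of the values iterator, a hedged
sketch of a reducer that drains every value (old o.a.h.mapred API; the class name
and the summing logic are hypothetical):

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class SumReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {
  public void reduce(Text key, Iterator<IntWritable> values,
      OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    int sum = 0;
    // Walking the iterator to the end consumes (and therefore counts) every
    // record, so "reduce input records" will line up with the map output.
    while (values.hasNext()) {
      sum += values.next().get();
    }
    output.collect(key, new IntWritable(sum));
  }
}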
I would suggest you clean up some space and try.
Regards,
Uma
- Original Message -
From: "Peng, Wei"
Date: Wednesday, September 21, 2011 10:03 am
Subject: RE: RE: java.io.IOException: Incorrect data format
To: common-user@hadoop.apache.org
> Yes, I can. The datanode i
From: Amogh Vasekar
To: "common-user@hadoop.apache.org"
Sent: 2009/12/11 (Friday) 2:55:12 AM
Subject: Re: Re: Re: Re: map output not equal to reduce input
Hi,
The counters are updated as the records are *consumed*, for both mapper and
reducer. Can you confirm if all the values returned by your iterators are
kojie.fu
From: Rita
Date: 2012-10-13 03:19
To: common-user
Subject: Re: distcp question
thanks for the advice.
Before I push or pull, are there any tests I can run before I do the
distcp? I am not 100% sure if I have my webhdfs set up properly.
On Fri, Oct 12, 2012 at 1:01 PM, J
Never mind. Figured it out.
On Fri, Oct 12, 2012 at 3:20 PM, kojie.fu wrote:
>
>
>
>
>
> kojie.fu
>
> From: Rita
> Date: 2012-10-13 03:19
> To: common-user
> Subject: Re: distcp question
> thanks for the advise.
>
> Before I push or pull. Are there any t
On Tue, Sep 8, 2009 at 1:16 PM, Mark Kerzner wrote:
> Hi,
> I have some code that's common between the main class, mapper, and reducer.
> Can I put it only in the main class and use it from mapper and reducer?
>
> A similar question about static variables in the main - are they available
> from ma
Thank you, Kevin, for a detailed explanation. I went ahead and shared both.
Since I test on my machine, it worked :) but obviously it was a fluke, and I
need to change my code for running on the cluster.
Sincerely,
Mark
On Wed, Sep 9, 2009 at 2:57 PM, Kevin Peterson wrote:
> On Tue, Sep 8, 2009
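Kevin's detailed reply is cut off above. For reference, one common pattern for the
static-variables question is to pass driver-computed values through the job
Configuration, since statics set in the driver JVM are not visible inside task JVMs
on a real cluster; a hedged sketch with hypothetical names:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SharedValueExample {

  public static class TagMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
    private String tag;

    @Override
    protected void setup(Context context) {
      // Read the value back inside the task JVM.
      tag = context.getConfiguration().get("myapp.tag", "untagged");
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      context.write(new Text(tag + "\t" + value), NullWritable.get());
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("myapp.tag", "computed-in-driver"); // value produced in the driver JVM
    Job job = new Job(conf, "shared value example");
    job.setJarByClass(SharedValueExample.class);
    job.setMapperClass(TagMapper.class);
    job.setNumReduceTasks(0);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(NullWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}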
Hi, Joey:
Thanks for your help!
2011-09-21
hao.wang
From: Joey Echeverria
Sent: 2011-09-21 10:10:54
To: common-user
Cc:
Subject: Re: block size
HDFS blocks are stored as files in the underlying filesystem of your
datanodes. Those files do not take a fixed amount of space, so if you
Sender: Adrian Liu
Date: 2011-12-23 (Friday) 10:47 AM
To: common-user@hadoop.apache.org
Subject: Re: DN limit
In my understanding, the max number of files stored in the HDFS should be (NameNode heap size)/sizeof(inode struct). This max number of HDFS files should be no
smaller than the max files a datanode can hold.
Please feel
10:47 AM
> To: common-user@hadoop.apache.org
> Subject: Re: DN limit
> In my understanding, the max number of files stored in the HDFS should be
> /sizeof(inode struct). This max number of HDFS files
> should be no smaller than max files a datanode can hold.
>
> Please feel f
Hi,
The replication factor of the block is 1.
There are 150 million blocks in the NN web UI.
Bourne
From: Harsh J
Sent: 2011-12-24 (Saturday) 2:09 PM
To: common-user
Subject: Re: Re: DN limit
Bourne,
You have 14 million files, each taking up a single block or are these
files multi-blocked? What does the block count come up
From: Todd Lipcon
To: common-user@hadoop.apache.org
Sent: 2009/12/10 (Thursday) 4:43:52 PM
Subject: Re: Re: Re: map output not equal to reduce input
On Thu, Dec 10, 2009 at 1:15 PM, Gang Luo wrote:
> Hi Todd,
> I didn't change the partitioner, just use the default one. Will the d
2010-10-08
From: Taeho Kang
Sent: 2010-10-08 13:49:55
To: common-user
Cc:
Subject: Re: Re: how to make hadoop balance automatically
Have the dfs upload done by a server not running a datanode and your
blocks will be randomly distributed among active datanodes.
On Fri, Oct 8, 2010 at 2
choice ?
>
>
>
>
> 2010-10-08
>
>
>
>
>
>
> From: Taeho Kang
> Sent: 2010-10-08 13:49:55
> To: common-user
> Cc:
> Subject: Re: Re: how to make hadoop balance automatically
>
> Have the dfs upload done by a server not running a datanode and your
> b
Phil Hagelberg wrote:
I'm trying to write a Hadoop job that will add documents to an existing
lucene index. My initial idea was to set the index as the output
directory and create an IndexWriter based on
FileOutputFormat.getOutputPath(context), but this requires that the
output path not exist wh
Hi Jeff,
I am sorry but I do not have the file mapred-site.xml. So I made the
change in hadoop-site.xml. Also, do we have to make the change only on the master
node, or on the slave nodes as well?
Thank You
Abhishek Agrawal
SUNY- Buffalo
(716-435-7122)
On Tue 11/10/09 12:01 AM , Jeff Zhang
Hello,
If I write the output of the 10 tasks to 10 different files, then how do I
go about merging the output? Is there some built-in functionality, or do I have
to write some code for that?
Thank You
Abhishek Agrawal
SUNY- Buffalo
(716-435-7122)
On Sun 11/22/09 5:40 PM , Gang Luo lgpu
.apache.org; aa...@buffalo.edu; Gang Luo
Sent: 2009/11/22 (Sunday) 5:48:36 PM
Subject: Re: Re: Help in Hadoop
Hello,
If I write the output of the 10 tasks to 10 different files, then how do I
go about merging the output? Is there some built-in functionality, or do I have
to write some code for that?
Than
Hi,
I am running the job from command line. The job runs fine in the local mode
but something happens when I try to run the job in the distributed mode.
Abhishek Agrawal
SUNY- Buffalo
(716-435-7122)
On Fri 11/27/09 2:31 AM , Jeff Zhang zjf...@gmail.com sent:
> Do you run the map reduce job
When you set up the Job object, do you call job.setJarByClass(Map.class)?
That will tell Hadoop which jar file to ship with the job and to use for
classloading in your code.
- Aaron
On Thu, Nov 26, 2009 at 11:56 PM, wrote:
> Hi,
> I am running the job from command line. The job runs fine in
> Similarly, if I catch an exception and I want to quit the current task, what
> should I do?
>
> -Gang
>
>
> - Original Message
> From: Edmund Kohlwey
> To: common-user@hadoop.apache.org
> Sent: 2009/12/6 (Sunday) 10:52:40 AM
> Subject: Re: return in map
>
> Let me
, the file I want to read
> doesn't exist)? if use System.exit(), hadoop will try to run it again.
> Similarly, if I catch an exception and I want to quit the current task, what
> should I do?
>
> -Gang
>
>
> - Original Message
> From: Edmund Kohlwey
> To:
Thanks. It helps.
-Gang
- Original Message
From: Amogh Vasekar
To: "common-user@hadoop.apache.org"
Sent: 2009/12/7 (Monday) 12:43:07 AM
Subject: Re: Re: return in map
Hi,
If the file doesn’t exist, java will error out.
For partial skips, o.a.h.mapreduce.Mapper class provides a method ru
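That method is run(); a hedged sketch of overriding it to skip the rest of a split
(the class name, the flag, and the skip condition are made up for illustration):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SkippingMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
  private boolean done = false;  // set when the mapper decides to stop early

  @Override
  public void run(Context context) throws IOException, InterruptedException {
    setup(context);
    // Stop pulling records once we are done; returning normally lets the task
    // succeed instead of being retried (unlike calling System.exit()).
    while (!done && context.nextKeyValue()) {
      map(context.getCurrentKey(), context.getCurrentValue(), context);
    }
    cleanup(context);
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    if (value.toString().isEmpty()) {
      done = true;               // example condition: quit at the first empty line
      return;
    }
    context.write(new Text(value), NullWritable.get());
  }
}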
I would try running the rebalance utility. I would be curious to see what that
will do and if that will fix it.
--- On Wed, 1/27/10, Ananth T. Sarathy wrote:
> From: Ananth T. Sarathy
> Subject: Need to re replicate
> To: common-user@hadoop.apache.org
> Date: Wednesday, January
:
>
> > From: Ananth T. Sarathy
> > Subject: Need to re replicate
> > To: common-user@hadoop.apache.org
> > Date: Wednesday, January 27, 2010, 9:28 PM
> > One of our datanodes went bye bye. We
> > added a bunch more data nodes, but
> > when I do
do and if that will fix it.
>>
>> --- On Wed, 1/27/10, Ananth T. Sarathy wrote:
>>
>>> From: Ananth T. Sarathy
>>> Subject: Need to re replicate
>>> To: common-user@hadoop.apache.org
>>> Date: Wednesday, January 27, 2010, 9:28 PM
>>> O
be curious to see
> what
> >> that will do and if that will fix it.
> >>
> >> --- On Wed, 1/27/10, Ananth T. Sarathy
> wrote:
> >>
> >>> From: Ananth T. Sarathy
> >>> Subject: Need to re replicate
> >>> To: common-user@h
ymondj...@yahoo.com
>>>> wrote:
>>>
>>>> I would try running the rebalance utility. I would be curious to see
>> what
>>>> that will do and if that will fix it.
>>>>
>>>> --- On Wed, 1/27/10, Ananth T. Sarathy
>> wr
ds,
2012-03-07
hao.wang
From: Harsh J
Sent: 2012-03-07 14:14:05
To: common-user
Cc:
Subject: Re: Fair Scheduler Problem
Hello Hao,
Its best to submit CDH user queries to
https://groups.google.com/a/cloudera.org/group/cdh-user/topics
(cdh-u...@cloudera.org) where the majority of CDH users
__
> hao.wang
> ____
> From: Harsh J
> Sent: 2012-03-07 14:14:05
> To: common-user
> Cc:
> Subject: Re: Fair Scheduler Problem
> Hello Hao,
> Its best to submit CDH user queries to
> https://groups.google.com/a/cloudera.org/group/cdh-user
Thank you. I get it, it is the speculative attempt.
2010-08-27
shangan
From: Amareshwari Sri Ramadasu
Sent: 2010-08-27 16:33:59
To: common-user@hadoop.apache.org
Cc:
Subject: Re: mapreduce attempts killed
You should look at the task logs to figure out why the tasks failed. They are
Hi,
To solve that simply do the following on the problematic nodes:
1) Stop the datanode (probably not running)
2) Remove everything inside the .../cache/hdfs/
3) Start the datanode again.
Note: With Cloudera, always use the "service" command to stop/start the Hadoop software!
service hadoop-0.20-datanode stop
Worked perfectly.
Thanks Niels!
-mgl
On Mar 24, 2011, at 12:48 PM, Niels Basjes wrote:
> Hi,
>
> To solve that simply do the following on the problematic nodes:
> 1) Stop the datanode (probably not running)
> 2) Remove everything inside the .../cache/hdfs/
> 3) Start the datanode again.
>
>
It's not the sorting, since the sorted files are produced in the output; it's the
mapper not exiting cleanly. So can anyone tell me if it's wrong to write the
mapper's close() function like this?
@Override
public void close() throws IOException{
helper.CleanUp();
For a job to get submitted to a cluster, you will need proper client
configurations. Have you configured your mapred-site.xml and
yarn-site.xml properly inside /etc/hadoop/conf/mapred-site.xml and
/etc/hadoop/conf/yarn-site.xml at the client node?
On Mon, Jul 30, 2012 at 12:00 AM, abhiTowson cal
Hi,
Thanks for the reply, Harsh. These are my configuration properties:
// mapred-site.xml
mapreduce.framework.name = yarn
mapreduce.jobhistory.address = hadoop-master-2:10020
mapreduce.jobhistory.webapp.address = hadoop-master-2:19888
// yarn-site.xml
Classpath for typical applications.
Hi Abhishek,
Once you make sure that everything Harsh said in the previous email is
present in the cluster and the job still runs in local mode, then try
running the job with the hadoop --config option.
Refer to this discussion for more detail:
https://groups.google.com/a/cloudera.org/forum/#!topic/
HI Anil,
I have already tried this, but the issue could not be resolved.
Regards
Abhishek
On Sun, Jul 29, 2012 at 3:05 PM, anil gupta wrote:
> Hi Abhishek,
>
> Once you make sure that whatever Harsh said in the previous email is
> present in the cluster and then also the job runs in Local Mode. Then
Are you using cdh4? In your cluster, are you using yarn or mr1?
Check the classpath of Hadoop with the 'hadoop classpath' command.
Best Regards,
Anil
On Jul 29, 2012, at 12:12 PM, abhiTowson cal wrote:
> HI Anil,
>
> I have already tried this,but issue could not be resolved.
>
> Regards
> Abhishek
>
>
Hi Anil,
I am using cdh4 with yarn.
On Sun, Jul 29, 2012 at 3:17 PM, Anil Gupta wrote:
> Are you using cdh4? In you cluster are you using yarn or mr1?
> Check the classpath of Hadoop by Hadoop classpath command.
>
> Best Regards,
> Anil
>
> On Jul 29, 2012, at 12:12 PM, abhiTowson cal
hi anil,
Hadoop class path is also working fine.
Regards
Abhishek
Thanks for
On Sun, Jul 29, 2012 at 3:20 PM, abhiTowson cal
wrote:
> Hi Anil,
> Iam using chd4 with yarn.
>
> On Sun, Jul 29, 2012 at 3:17 PM, Anil Gupta wrote:
>> Are you using cdh4? In you cluster are you using yarn
Hi Abhishek,
I faced a similar problem with cdh4 a few days ago. The problem I found out was
the classpath. The user from which I installed the cdh packages had
the right classpath, but the users "yarn" and "hdfs" had the
incorrect classpath. When I was trying to run the job as yarn or h
Hi Abhishek,
I didn't mean to ask whether it returns a result or not. I meant that you
should check that the classpath is correct. It should have the directories
where yarn is installed.
~Anil
On Sun, Jul 29, 2012 at 12:23 PM, abhiTowson cal
wrote:
> hi anil,
>
> Hadoop class path is also work
Hi anil,
Thanks for the reply. Same as in your case, my pi job is halted and there
is no progress.
Regards
Abhishek
On Sun, Jul 29, 2012 at 3:31 PM, anil gupta wrote:
> Hi Abhishek,
>
> I didnt mean to ask you whether it returns result or not. I meant that you
> should check that the classpath is c
Seems like you are also stuck in the same problem as I am... I am going to
work on changing my conf tomorrow to fix this. How much memory does your node
have?
Check the logs of the NodeManagers. At the bottom of the log file you will see
that the NM is stopping some components. (Sorry, I can't recall the exact nam
Hello!
Can you please suggest a few good books on Hadoop?
I am looking for a book focused more on the programming rather than
the theory.
As in,
-> Map-Reduce and Compression
-> HIVE and Hbase
-> Experimentations with hadoop
I also want to know, in what way can we use hadoop in big
I want to compress the data first and then place it in HDFS. Again, while
retrieving it, I want to uncompress it and place it at the desired
destination. Is this possible? How do I get started? Also, I want to get
started with the actual coding part of compression and map reduce. Please
suggest me
By "I want to compress the data first and then place it in HDFS", do you
mean you want to compress the data
locally and then copy to DFS?
What's the size of your data? What's the capacity of HDFS?
On Mon, Aug 3, 2009 at 10:45 AM, Sugandha Naolekar
wrote:
> I want to compress the data first and t
Yes, you are right. Here are the related details:
-> I have a Hadoop cluster of 7 nodes. Now there is this 8th machine, which
is not a part of the hadoop cluster.
-> I want to place the data of that machine into the HDFS. Thus, before
placing it in HDFS, I want to compress it, and then dump in t
I don't think you will be able to compress some data unless it's on HDFS.
What you can do is
1. Manually compress the data on the machine where the data resides. Then,
copy the same to
HDFS. or
2. Copy the data without compressing to HDFS, then run a job which just
emits the data as it reads
in k
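To make option 2 concrete, a hedged sketch of such a job (old o.a.h.mapred API;
the class name, the gzip codec choice, and the path arguments are assumptions, not
from this thread). It uses the default identity mapper with zero reduces, so
records pass straight through and are rewritten as block-compressed SequenceFiles:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;

public class CompressCopyJob {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(CompressCopyJob.class);
    conf.setJobName("compress-copy");
    conf.setNumReduceTasks(0);                         // map-only pass-through
    conf.setOutputFormat(SequenceFileOutputFormat.class);
    conf.setOutputKeyClass(LongWritable.class);        // keys/values from TextInputFormat
    conf.setOutputValueClass(Text.class);
    FileOutputFormat.setCompressOutput(conf, true);
    FileOutputFormat.setOutputCompressorClass(conf, GzipCodec.class);
    SequenceFileOutputFormat.setOutputCompressionType(conf,
        SequenceFile.CompressionType.BLOCK);
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));
    JobClient.runJob(conf);
  }
}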
That's fine. But if I place the data in HDFS and then run map reduce code to
provide compression, then the data will get compressed into sequence files,
but the original data will still reside there as well, thereby causing a kind
of redundancy of data...
Can you please suggest a way out?
On
unsubscribe
On Mon, Aug 3, 2009 at 12:01 AM, Sugandha Naolekar
wrote:
> dats fine. But, if I place the data in HDFS and then run map reduce code to
> provide compression, then the data will get compressed in sequence files
> but, even the original data will reside in the memory;thereby leading or
This is ridiculous. What do you mean by unsubscribe?? I have a few queries
and that's why I have logged in to the corresponding forum.
On Mon, Aug 3, 2009 at 12:33 PM, A BlueCoder wrote:
> unsubscribe
>
> On Mon, Aug 3, 2009 at 12:01 AM, Sugandha Naolekar
> wrote:
>
> > dats fine. But, if I place the
How files are written can be controlled. Maybe you are using
SequenceFileOutputFormat.
You can setOutputFormat() to TextOutputFormat.
I guess, this must solve your problem!
On Mon, Aug 3, 2009 at 12:31 PM, Sugandha Naolekar
wrote:
> dats fine. But, if I place the data in HDFS and then run map re
program will launch one map task per compressed file, so
make sure you design your compression accordingly.
Thanks,
Amogh
-Original Message-
From: Sugandha Naolekar [mailto:sugandha@gmail.com]
Sent: Monday, August 03, 2009 12:32 PM
To: common-user@hadoop.apache.org
Subject: Re: :!
ugust 03, 2009 12:32 PM
> To: common-user@hadoop.apache.org
> Subject: Re: :!
>
> dats fine. But, if I place the data in HDFS and then run map reduce code to
> provide compression, then the data will get compressed in sequence files
> but, even the original data will reside in the memo
Hey Sugandha,
It's a common mistake - I think he was trying to unsubscribe to the
mailing list (which is done by sending a message to a specific email
address with the command "unsubscribe"), not telling you to unsubscribe.
Brian
On Aug 3, 2009, at 2:09 AM, Sugandha Naolekar wrote:
This i
Hello!
Can I use RMI to dump the files or data from a remote machine into the
hadoop cluster, by executing that code from the local host?
--
Regards!
Sugandha
Hello!
We have a cluster of 5 nodes and we are concentrating on the development of
a DFS(Distributed File System). with the incorporation of Hadoop.
Now, Can I get some ideas on how can I design package diagrams.
--
Regards!
Sugandha
The cluster can be set up in fully distributed mode. The slave and master conf
files have to be changed appropriately
G Sudha Sadasivam
--- On Tue, 10/20/09, Sugandha Naolekar wrote:
From: Sugandha Naolekar
Subject: Re::!
To: core-u...@hadoop.apache.org
Date: Tuesday, October 20, 2009, 5:09
I need to execute code through the hadoop prompt, i.e. bin/hadoop>.
So, I built the jar using "jar cfmv Jarfile_name Manifest_filename -C
directory_name/ ." (in which the jars and class files are added).
After that, I simply execute the code through bin/hadoop Jarfilename.
But, I get an error o
did you use the following?
bin/hadoop jar
Raghava.
On Thu, Jun 17, 2010 at 9:21 PM, Sugandha Naolekar
wrote:
> I need to execute a code through the propmt of hadoop,i.e; bin/hadoop>.
> So, I built the jar of it using jar cfmv Jarfile_name Manifest_filename -C
> directory_name/ .(in which d ja
Following things I did:
1) I went into the hadoop directory - the path is
/home/hadoop/Softwares/hadoop0.19.0
2) Then I made a folder named Try under the above path. I added all the jars
under the lib directory and the bin folder in which my code lies. This bin
folder got created under the eclipse's w
Now the jar is getting built but, when I try to run it, it displays the
following:
>bin/hadoop jar Sample.jar
RunJar jarFile [mainClass] args...
Please suggest something... if possible, the procedure I have followed can be
tried by someone!
Regards!
Sugandha
On Fri, Jun 18, 2010 at 8:13 AM, Suga
The problem may be in your jar creation, or the path where you are copying
the jar may be different from the one in the jar command you are running.
Try building the jar from eclipse itself and make sure you are giving the correct
path of the jar file to the hadoop command.
On Fri, Jun 18, 2010 at 11:01 AM, Sugandha Nao
The steps mentioned in my mail above are the ones that I followed!
If someone could repeat the same, the problem could be better understood.
How do I run a Hadoop jar file through the RunJar API???
Simply, a built jar file can be passed as a parameter, along with where it is to be
extracted, to the unjar static
I followed the jar construction step mentioned in the Usage section (link
below) and also the step mentioned to run specific classes in the jar.
http://hadoop.apache.org/common/docs/r0.20.1/mapred_tutorial.html#Usage
I replaced the classes folder with bin because that's where Eclipse puts in
the c
Solved this question.
I found that the automatically downloaded file was corrupted. Manually download
hadoop-0.20.0.tar.gz, then put it into the ~/.ant/cache/hadoop/core/sources
folder. Then it is OK.
From: chu_pengb...@hotmail.com
To: common-user@hadoop.apache.org
Subject:
Date: Thu, 9 Sep 2010 18:03:
On Thu, Dec 10, 2009 at 1:15 PM, Gang Luo wrote:
> Hi Todd,
> I didn't change the partitioner, just use the default one. Will the default
> partitioner cause the lost of the records?
>
> -Gang
>
Do the maps output data nondeterministically? Did you experience any
task failures in the run of the
From: Todd Lipcon
To: common-user@hadoop.apache.org
Sent: 2009/12/10 (Thursday) 4:43:52 PM
Subject: Re: Re: Re: map output not equal to reduce input
On Thu, Dec 10, 2009 at 1:15 PM, Gang Luo wrote:
> Hi Todd,
> I didn't change the partitioner, just use the default one. Will the default
> partitioner cau
; but that just gives me the hostnames or am I overlooking something?
> I actually need the filename/harddisk on the node.
>
> JS
>
> Sent: Wednesday, 25 July 2012 at 23:33
> From: "Chen He"
> To: common-user@hadoop.apache.org
> Subject: Re: HDFS block physic
actually need the filename/harddisk on the node.
>>
>> JS
>>
>> Sent: Wednesday, 25 July 2012 at 23:33
>> From: "Chen He"
>> To: common-user@hadoop.apache.org
>> Subject: Re: HDFS block physical location
>> >nohup hadoop
Moving to mapreduce-user@, bcc common-user@
On Aug 23, 2011, at 2:31 AM, Vaibhav Pol wrote:
> Hi All,
> I have a query regarding task re-scheduling. Is it possible
> to make the JobTracker wait for some time before re-scheduling a failed
> tracker's tasks?
>
W
Are you able to create the directory manually in the DataNode Machine?
#mkdirs /state/partition2/hadoop/dfs/tmp
Regards,
Uma
- Original Message -
From: "Peng, Wei"
Date: Wednesday, September 21, 2011 9:44 am
Subject: RE: java.io.IOException: Incorrect data format
To: c
Yes, I can. The datanode is not able to start after crashing without
enough HD space.
Wei
-Original Message-
From: Uma Maheswara Rao G 72686 [mailto:mahesw...@huawei.com]
Sent: Tuesday, September 20, 2011 9:30 PM
To: common-user@hadoop.apache.org
Subject: Re: RE: java.io.IOException
, 2011 9:30 PM
To: common-user@hadoop.apache.org
Subject: Re: RE: java.io.IOException: Incorrect data format
Are you able to create the directory manually in the DataNode Machine?
#mkdirs /state/partition2/hadoop/dfs/tmp
Regards,
Uma
- Original Message -
From: "Peng, Wei"
Date:
I just solved the problem by releasing more space on the related HD
partitions.
Thank you all for your help !
Wei
-Original Message-
From: Peng, Wei [mailto:wei.p...@xerox.com]
Sent: Tuesday, September 20, 2011 9:35 PM
To: common-user@hadoop.apache.org
Subject: RE: RE
: RE: risks of using Hadoop
To: common-user@hadoop.apache.org
> Amen to that. I haven't heard a good rant in a long time, I am
> definitely amused and entertained.
>
> As a veteran of 3 years with Hadoop I will say that the SPOF issue
> is whatever you want to make it. But it
Sender: Uma Maheswara Rao G 72686
Date: 2011-10-18 (Tuesday) 6:00 PM
To: common-user
CC: common-user
Subject: Re: could not complete file...
- Original Message -
From: bourne1900
Date: Tuesday, October 18, 2011 3:21 pm
Subject: could not complete file...
To: common-user
> Hi,
>
>
Hi,
Thank you for help!
Best Regards
Malone
2012-05-23
yingnan.ma
From: Harsh J
Sent: 2012-05-23 18:24:14
To: common-user
Cc:
Subject: Re: about hadoop lzo compression
Malone,
Right now it works despite the error because Pig hasn't had a need to
read/write LZO data locally. Hence
Alan
- original message
Subject: Re: how to query JobTracker
Sent: Thu, 17 Jun 2010
From: Sanel Zukan
> AFAIK, there is no such method (to get a job name from client side) :(
> (at least I wasn't able to find it). Via JobProfile, the job name can be
> extracted for a given id, but on
> submit the job, write the jobid to a lock file (hdfs://myapp/myjob.lock),
> and then
> a. remove the lock file when the job finishes, or
> b. if a new job is triggered before the first finished, read the jobid from
> the lock file
> kill the previous job, and start a new one
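A hedged Java sketch of the "read the jobid from the lock file and kill the
previous job" branch of that idea (old o.a.h.mapred client API; the class name and
the lock-file handling are hypothetical):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class KillPreviousJob {
  public static void killIfRunning(Configuration conf, Path lockFile) throws Exception {
    FileSystem fs = FileSystem.get(conf);
    if (!fs.exists(lockFile)) {
      return;                                  // no previous job recorded
    }
    BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(lockFile)));
    String line = in.readLine();               // the lock file holds just the job id
    in.close();
    if (line == null) {
      return;
    }
    JobClient client = new JobClient(new JobConf(conf));
    RunningJob previous = client.getJob(JobID.forName(line.trim()));
    if (previous != null && !previous.isComplete()) {
      previous.killJob();                      // stop the stale job before resubmitting
    }
    fs.delete(lockFile, true);                 // clear the lock for the new submission
  }
}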
Friday, November 27, 2009 10:56 AM
>
> To: common-user@hadoop.apache.org
> Subject: Re: please help in setting hadoop
>
>
>
> Hi,
>
> Just a thought, but you do not need to setup the temp directory in
>
> conf/hadoop-site.xml especially if you are running basi
> > relative to me.
> >
> >
> >
> >
> > -Original Message-
> >
> > From: aa...@buffalo.edu [aa...@buffalo.edu]
> > Sent: Friday, November 27, 2009 10:56 AM
> >
> > To: common-user@hadoop.apache.org
> > Subje