Re: Configuration set up questions - Container killed on request. Exit code is 143

2014-07-18 Thread Chris MacKenzie
Hi Guys, Thanks very much for getting back to me. Thanks Chris - the idea of splitting the data is a great suggestion. Yes Wangda, I was restarting after changing the configs. I’ve been checking the relationship between what I thought was in my config files and what Hadoop thought was in them.
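One way to check what a running daemon actually loaded (a sketch, not from the thread; the host and port below are assumptions for a default ResourceManager) is the /conf servlet that Hadoop daemon web UIs expose:

    # Dump the effective configuration the ResourceManager is running with.
    # rm-host:8088 is an assumed default RM web address; substitute your own.
    curl -s http://rm-host:8088/conf | grep -A 1 'yarn.nodemanager'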

Re: MR JOB

2014-07-18 Thread Rich Haase
File copy operations do not run as MapReduce jobs. All hadoop fs commands run as operations against HDFS and do not use MapReduce. On Fri, Jul 18, 2014 at 10:49 AM, Ashish Dobhal dobhalashish...@gmail.com wrote: Do the normal operations of Hadoop, such as uploading and downloading a
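One way to convince yourself of this (a sketch; the file and path are made up) is to run a copy and then list MapReduce jobs:

    # Upload a file: this is a client-side write straight to HDFS, not a job
    hadoop fs -put ./data.txt /user/ashish/data.txt
    # The running-job list should show nothing new afterwards
    mapred job -list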

unsubscribe

2014-07-18 Thread Zilong Tan

MR JOB

2014-07-18 Thread Ashish Dobhal
Do the normal operations of Hadoop, such as uploading and downloading a file into HDFS, run as MR jobs? If so, why can't I see the job being run on my task tracker and job tracker? Thank you.

Re: MR JOB

2014-07-18 Thread Ashish Dobhal
Rich Haase, thanks. But if the copy ops do not occur as an MR job, then how does the splitting of a file into several blocks take place? On Fri, Jul 18, 2014 at 10:24 PM, Rich Haase rdha...@gmail.com wrote: File copy operations do not run as MapReduce jobs. All hadoop fs commands run as

Re: Configuration set up questions - Container killed on request. Exit code is 143

2014-07-18 Thread Tsuyoshi OZAWA
Hi Chris MacKenzie, How about trying the following to identify the cause of your problem?
1. Set both yarn.nodemanager.pmem-check-enabled and yarn.nodemanager.vmem-check-enabled to false.
2. Set yarn.nodemanager.pmem-check-enabled to true.
3. Set yarn.nodemanager.pmem-check-enabled to true and
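For reference, step 1 would look something like the following in yarn-site.xml (a sketch using the standard YARN property names; as noted earlier in the thread, restart the NodeManagers after changing it):

    <!-- yarn-site.xml: disable both memory checks (step 1 above) -->
    <property>
      <name>yarn.nodemanager.pmem-check-enabled</name>
      <value>false</value>
    </property>
    <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>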

Re: MR JOB

2014-07-18 Thread Rich Haase
HDFS handles the splitting of files into multiple blocks. It's a file system operation that is transparent to the user. On Fri, Jul 18, 2014 at 11:07 AM, Ashish Dobhal dobhalashish...@gmail.com wrote: Rich Haase, thanks. But if the copy ops do not occur as an MR job then how does the splitting
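The block layout is easy to inspect with fsck (a sketch; the path is made up):

    # Show how HDFS split the file into blocks and where each replica lives
    hdfs fsck /user/ashish/data.txt -files -blocks -locations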

Re: unsubscribe

2014-07-18 Thread Rich Haase
To unsubscribe from this list send an email to user-unsubscribe@hadoop.apache.org On Fri, Jul 18, 2014 at 10:54 AM, Zilong Tan z...@rocketfuelinc.com wrote: -- *Kernighan's Law* Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as

Re: MR JOB

2014-07-18 Thread Ashish Dobhal
Thanks. On Fri, Jul 18, 2014 at 10:41 PM, Rich Haase rdha...@gmail.com wrote: HDFS handles the splitting of files into multiple blocks. It's a file system operation that is transparent to the user. On Fri, Jul 18, 2014 at 11:07 AM, Ashish Dobhal dobhalashish...@gmail.com wrote: Rich

Re: Re: HDFS input/output error - fuse mount

2014-07-18 Thread andrew touchet
Thanks Chris! The issue was that even though I set jdk-7u21 as my default, it checked for /usr/java/jdk-1.6* first, even though it was compiled with 1.7. Is there any way to generate a proper hadoop-config.sh to reflect the minor version Hadoop was built with? So that in my case, it would check
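A common workaround (a sketch, not from the thread; the JDK path is an assumption) is to pin JAVA_HOME in hadoop-env.sh so hadoop-config.sh never falls back to its built-in search order:

    # etc/hadoop/hadoop-env.sh: pin the JDK so the jdk-1.6* probe is skipped
    # (adjust the path to wherever your jdk-7u21 actually lives)
    export JAVA_HOME=/usr/java/jdk1.7.0_21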

Re: Replace a block with a new one

2014-07-18 Thread Shumin Guo
That will break the consistency of the file system, but it doesn't hurt to try. On Jul 17, 2014 8:48 PM, Zesheng Wu wuzeshen...@gmail.com wrote: How about writing a new block with a new checksum file, and replacing both the old block file and checksum file? 2014-07-17 19:34 GMT+08:00 Wellington

Re: Replace a block with a new one

2014-07-18 Thread Arpit Agarwal
IMHO this is a spectacularly bad idea. Is it a one-off event? Why not just take the perf hit and recreate the file? If you need to do this regularly, you should consider a mutable file store like HBase. If you start modifying blocks from under HDFS, you open up all sorts of consistency issues.
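If recreating the file is acceptable, a write-then-rename keeps readers from ever seeing a half-written replacement (a sketch; the paths are made up):

    # Write the corrected copy alongside the original, then swap names
    hadoop fs -put corrected.dat /data/file.dat.new
    hadoop fs -rm /data/file.dat
    hadoop fs -mv /data/file.dat.new /data/file.dat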

Re: Re: HDFS input/output error - fuse mount

2014-07-18 Thread Chris Mawata
Great that you got it sorted out. I'm afraid I don't know if there is a configuration that would automatically check the versions -- maybe someone who knows might chime in. Cheers, Chris. On Jul 18, 2014 3:06 PM, andrew touchet adt...@latech.edu wrote: Thanks Chris! The issue was that even

what exactly does data in HDFS look like?

2014-07-18 Thread Adaryl Bob Wakefield, MBA
And by that I mean: is there an HDFS file type? I feel like I’m missing something. Let’s say I have a HUGE JSON file that I import into HDFS. Does it retain its JSON format in HDFS? What if it’s just random tweets I’m streaming? Is it kind of like a normal disk where there are all kinds of

Re: what exactly does data in HDFS look like?

2014-07-18 Thread Shahab Yunus
The data itself is eventually stored in the form of files. Each block of the file and its replicas are stored as files in directories on different nodes. The NameNode keeps and maintains the information about each file and where its blocks (and replica blocks) exist in the cluster. As for
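In short: HDFS stores bytes verbatim, so a JSON file reads back exactly as it was written. A quick way to see both halves of the answer (a sketch; the paths are made up):

    # The JSON comes back byte-for-byte identical
    hadoop fs -put tweets.json /user/bob/tweets.json
    hadoop fs -cat /user/bob/tweets.json | head
    # On a datanode, each block is an ordinary local file named blk_<id>
    # (plus a .meta checksum file) under the dfs.datanode.data.dir tree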