Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: BUG? dfs.namenode.replication.considerLoad and decommissioned nodes

2014-01-27 Thread rp
Dear Sir or Madam, Mr. Pappert is no longer with our company. Your e-mail will not be forwarded. Please send your e-mail to i...@luenebits.de You can reach us as follows: luenebits GmbH Uelzener Str. 14 21335 Lüneburg Tel. 04131/9980880


Re: BUG? dfs.namenode.replication.considerLoad and decommissioned nodes

2014-01-27 Thread Fahd Albinali
Can you remove me from this list? Thanks. Fahd On 01/27/2014 01:58 PM, r...@fwpsystems.com wrote: Dear Sir or Madam, Mr. Pappert is no longer with our company. Your e-mail will not be forwarded. Please send your e-mail to i...@luenebits.de You can

Re: BUG? dfs.namenode.replication.considerLoad and decommissioned nodes

2014-01-27 Thread Bo Li
Do the same thing for me, please! 2014-01-27 Fahd Albinali falbin...@everyfit.com Can you remove me from this list? Thanks. Fahd On 01/27/2014 01:58 PM, r...@fwpsystems.com wrote: Dear Sir or Madam, Mr. Pappert is not

Re: BUG? dfs.namenode.replication.considerLoad and decommissioned nodes

2014-01-27 Thread Thompson, John H. (GSFC-606.2)[Computer Sciences Corporation]
And me From: Bo Li libo.1024@gmail.com Reply-To: hdfs-user@hadoop.apache.org Date: Monday, January 27, 2014 3:40 PM To: hdfs-user@hadoop.apache.org Subject: Re: BUG? dfs.namenode.replication.considerLoad and decommissioned nodes do the same thing to me, please!

Re: No space left on device during merge.

2014-01-27 Thread Tim Potter
Thanks for your reply, Vinod. I've been thinking about partitioning the data to have multiple reducers, each one working on a contiguous part of the sort space. The problem is that the keys are a combination of URLs and RDF BNodes. I can't see a way, without previously analysing the data, of

Invalid URI in job start

2014-01-27 Thread Lukas Kairies
Hello, I am trying to use XtreemFS as an alternative file system for Hadoop 2.x. There is an existing FileSystem implementation for Hadoop 1.x that works fine. The first thing I did was implement a DelegateToFileSystem subclass to provide an AbstractFileSystem implementation for XtreemFS (just

Re: HDFS open file limit

2014-01-27 Thread sudhakara st
There is no open-file limitation in HDFS itself. The 'Too many open files' limit comes from the OS file system. Increase the system-wide maximum number of open files and the per-user/group/process file descriptor limits. On Mon, Jan 27, 2014 at 1:52 AM, Bertrand Dechoux decho...@gmail.com wrote: At least for each
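As the reply notes, the limit is an OS setting, not an HDFS one. A quick way to inspect it on Linux (the values and user name below are illustrative, not recommendations):

```shell
# Per-process soft limit on open file descriptors for the current shell
ulimit -n

# System-wide ceiling on open files (Linux)
cat /proc/sys/fs/file-max

# To raise the limits persistently for a daemon user, entries like these
# would go in /etc/security/limits.conf (user name and value are examples):
#   hdfs  soft  nofile  65536
#   hdfs  hard  nofile  65536
```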

Re: Performance in running jobs at the same time

2014-01-27 Thread sudhakara st
1 - I installed Hadoop MRv2 in virtual machines. When the jobs are running, I try to list them with hadoop job -list, but it takes a lot of time for the command to execute. This happens because of the performance of the VM. I just wonder how it works with big machines. Does anyone have an idea

Processing steps of NameNode and Secondary NameNode

2014-01-27 Thread Amit Mittal
Hi, I have a question about the processing steps of the NameNode. *Reference:* Hadoop: The Definitive Guide, 3rd Ed. by Tom White, page 340 (Ch 10: HDFS, the filesystem image and edit log). Text from the book: When a filesystem client performs a write operation (such as creating or moving a file), it

Do all reducers take input from all NodeManagers/TaskTrackers of map tasks

2014-01-27 Thread Amit Mittal
Hi, do all reducers take input from all NodeManagers/TaskTrackers of map tasks? *Reference:* Hadoop: The Definitive Guide, 3rd Ed. by Tom White, page 210 (Ch 6: How MapReduce Works, Shuffle and Sort, the reduce side). There is a note; here is the text from the book: How do reducers know which

Starting... -help needed

2014-01-27 Thread Thomas Bentsen
Hello everyone, I have recently decided to try out the Hadoop stack. According to the getting-started guide I am supposed to change the config in [hadoop]/conf/*, but there is no such conf directory. It looks a lot like I am supposed to copy the files from the tar file all over the OS into the

Commissioning Task tracker

2014-01-27 Thread Shekhar Sharma
Hello, I am using Apache hadoop-1.0.3 and I want to commission a task tracker. For this I have included the property mapred.hosts in mapred-site.xml, and for this property I have specified a file. In this file I have given the IP address of a machine. Then I ran the command hadoop mradmin

RE: BlockMissingException reading HDFS file, but the block exists and fsck shows OK

2014-01-27 Thread John Lilley
None of the datanode logs have error messages. From: Harsh J [mailto:ha...@cloudera.com] Sent: Monday, January 27, 2014 8:15 AM To: user@hadoop.apache.org Subject: Re: BlockMissingException reading HDFS file, but the block exists and fsck shows OK Can you check the log of the DN that is

Re: Commissioning Task tracker

2014-01-27 Thread Nitin Pawar
I think the file name is mapred.include On Mon, Jan 27, 2014 at 9:11 PM, Shekhar Sharma shekhar2...@gmail.com wrote: Hello, I am using apache hadoop-1.0.3 version and i want to commission a task tracker. For this i have included a property mapred.hosts in mapred-site.xml and for this

Re: Commissioning Task tracker

2014-01-27 Thread Nitin Pawar
Sorry for the incomplete reply. In hadoop 1.2/1.0, the following is the property: <property> <name>mapred.hosts</name> <value>${HADOOP_CONF_DIR}/mapred.include</value> <description>Names a file that contains the list of nodes that may connect to the jobtracker. If the value is empty, all hosts are
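For reference, a sketch of how that property looks laid out in mapred-site.xml (the include-file path follows the reply above; the tail of the description, which the digest truncates, is completed with the standard wording from mapred-default and is therefore an assumption):

```xml
<!-- mapred-site.xml (Hadoop 1.x) -->
<property>
  <name>mapred.hosts</name>
  <value>${HADOOP_CONF_DIR}/mapred.include</value>
  <description>Names a file that contains the list of nodes that may
  connect to the jobtracker. If the value is empty, all hosts are
  permitted.</description>
</property>
```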

Re: Commissioning Task tracker

2014-01-27 Thread Shekhar Sharma
You mean the property name? Regards, Som Shekhar Sharma +91-8197243810 On Mon, Jan 27, 2014 at 9:14 PM, Nitin Pawar nitinpawar...@gmail.com wrote: I think the file name is mapred.include On Mon, Jan 27, 2014 at 9:11 PM, Shekhar Sharma shekhar2...@gmail.com wrote: Hello, I am using apache

Re: Commissioning Task tracker

2014-01-27 Thread Shekhar Sharma
That's fine. I have provided the file name as include; I have tried this also, but I am getting the same error. My xml files are as follows: hdfs-site.xml: <configuration> <property> <name>dfs.replication</name> <value>2</value> </property> <property> <name>dfs.block.size</name>

Re: HDFS open file limit

2014-01-27 Thread Harsh J
Hi John, There is a concurrent connections limit on the DNs that's set to a default of 4k max parallel threaded connections for reading or writing blocks. This is also expandable via configuration but usually the default value suffices even for pretty large operations given the replicas help
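The DN-side connection limit Harsh describes is tunable in hdfs-site.xml; in Hadoop 2.x the property is dfs.datanode.max.transfer.threads (formerly dfs.datanode.max.xcievers), with the 4k default he mentions. A sketch of raising it, where the value 8192 is only an example:

```xml
<!-- hdfs-site.xml on the DataNodes -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>8192</value>
</property>
```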

Re: BlockMissingException reading HDFS file, but the block exists and fsck shows OK

2014-01-27 Thread Harsh J
Can you check the log of the DN that is holding the specific block for any errors? On Jan 27, 2014 8:37 PM, John Lilley john.lil...@redpoint.net wrote: I am getting this perplexing error. Our YARN application launches tasks that attempt to simultaneously open a large number of files for

BlockMissingException reading HDFS file, but the block exists and fsck shows OK

2014-01-27 Thread John Lilley
I am getting this perplexing error. Our YARN application launches tasks that attempt to simultaneously open a large number of files for merge. There seems to be a load threshold in terms of number of simultaneous tasks attempting to open a set of HDFS files on a four-node cluster. The

Re: Does all reducer take input from all NodeManager/Tasktrackers of Map tasks

2014-01-27 Thread Vinod Kumar Vavilapalli
On Jan 27, 2014, at 4:17 AM, Amit Mittal amitmitt...@gmail.com wrote: Question 1: I believe the TaskTracker and then JobTracker/AppMaster will receive the updates through call to Task.statusUpdate(TaskUmbilicalProtocol obj). By which the JobTracker/AM will know the location of the map's

Re: Starting... -help needed

2014-01-27 Thread Chris Mawata
Check if you have [hadoop]/etc/hadoop as the configuration directory is different in version 2.x On Jan 27, 2014 10:37 AM, Thomas Bentsen t...@bentzn.com wrote: Hello everyone I have recently decided to try out the Hadoop complex. According to the getting started I am supposed to change the

Re: BUG? dfs.namenode.replication.considerLoad and decommissioned nodes

2014-01-27 Thread Bryan Beaudreault
Thanks for the reply, Harsh. I've entered the JIRA: https://issues.apache.org/jira/browse/HDFS-5837 On Sun, Jan 26, 2014 at 12:56 AM, Harsh J ha...@cloudera.com wrote: Hi Bryan, It is a bug that we calculate the average including marked-for-decomm. DNs. Please do log a JIRA for this! On


Re: Invalid URI in job start

2014-01-27 Thread Vinod Kumar Vavilapalli
Need your help to debug this. It seems like the scheme is getting lost somewhere along the way. Clearly, as you say, if job.jar is on the file system, then JobClient is properly uploading it. There are multiple things that you'll need to check - Check the NodeManager logs for the URL. It does

RE: HDFS open file limit

2014-01-27 Thread John Lilley
What exception would I expect to get if this limit was exceeded? john From: Harsh J [mailto:ha...@cloudera.com] Sent: Monday, January 27, 2014 8:12 AM To: user@hadoop.apache.org Subject: Re: HDFS open file limit Hi John, There is a concurrent connections limit on the DNs that's set to a

RE: HDFS read stats

2014-01-27 Thread John Lilley
Ummm... so if I've called FileSystem.open() with an hdfs:// path, and it returns an FSDataInputStream, how do I get from there to the DFSInputStream that you say has the interface I want? Thanks John From: Ted Yu [mailto:yuzhih...@gmail.com] Sent: Sunday, January 26, 2014 6:16 PM To:

Re: Processing steps of NameNode and Secondary NameNode

2014-01-27 Thread Haohui Mai
Conceptually, you can think of the namenode as similar to a journaling file system. For each write, it updates the in-memory data structure, persists the operation on stable storage (i.e., calling sync to flush the buffer of the edit log), then responds to the client. Note that all writes are
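The sequence described here (update memory, sync the edit log to stable storage, only then acknowledge) is classic write-ahead logging. A minimal self-contained sketch of the pattern, with illustrative names only, not actual NameNode code:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

/** Minimal write-ahead-log sketch: persist each operation before acknowledging it. */
public class EditLogSketch {
    private final Map<String, String> namespace = new HashMap<>(); // in-memory state
    private final FileOutputStream editLog;

    public EditLogSketch(Path logPath) throws IOException {
        this.editLog = new FileOutputStream(logPath.toFile(), true); // append mode
    }

    /** Apply a write: update memory, sync the log to disk, then return to caller. */
    public void create(String path, String meta) throws IOException {
        namespace.put(path, meta);                                    // 1. in-memory update
        editLog.write(("CREATE " + path + " " + meta + "\n")
                .getBytes(StandardCharsets.UTF_8));
        editLog.getFD().sync();                                       // 2. flush edit log to stable storage
        // 3. only now would the client be acknowledged
    }

    public String lookup(String path) { return namespace.get(path); }

    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("editlog", ".log");
        EditLogSketch fs = new EditLogSketch(log);
        fs.create("/user/amit/file1", "replication=3");
        System.out.println(fs.lookup("/user/amit/file1"));
        System.out.println(Files.readAllLines(log).get(0));
    }
}
```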

Re: Starting... -help needed

2014-01-27 Thread Chris Mawata
Correct. It seems you are reading literature on version 1.x but your software is 2.x. (What are you using for directions?) There will be a few changes in the location of files. On 1/27/2014 12:46 PM, Thomas Bentsen wrote: I do have the [hadoop]/etc/hadoop dir and it looks like it has more or

Re: Starting... -help needed

2014-01-27 Thread Thomas Bentsen
I did try the 1.2 release and that worked right out of the box - a positive surprise. For 2.2.0 I followed: hadoop.apache.org, 'Getting Started', 'Learn about' - brings up a page about 'What's new in 2.2.0'; Single Node Setup (in the left menu) - tells me that I should do something

RE: BlockMissingException reading HDFS file, but the block exists and fsck shows OK

2014-01-27 Thread John Lilley
I've found that the error occurs right around a threshold where 20 tasks attempt to open 220 files each. This is ... slightly over 4k total files open. But that's the total number of open files across the 4-node cluster, and since the blocks are evenly distributed, that amounts to 1k

Re: HA Jobtracker failure

2014-01-27 Thread Karthik Kambatla
(Redirecting to cdh-user, moving user@hadoop to bcc). Hi Oren Can you attach slightly longer versions of the log files on both the JTs? Also, if this is something recurring, it would be nice to monitor the JT heap usage and GC timeouts using jstat -gcutil jt-pid. Thanks Karthik On Thu, Jan

Re: HA Jobtracker failure

2014-01-27 Thread Siddharth Tiwari
How have you implemented the failover? Also, can you attach the JT HA logs? If you have implemented it using ZKFC, it would be interesting to look in the zookeeper logs as well. Sent from my iPhone On Jan 27, 2014, at 3:00 PM, Karthik Kambatla ka...@cloudera.com wrote: (Redirecting to cdh-user,

Strange rpc exception in Yarn

2014-01-27 Thread Jay Vyas
Hi folks: At the **end** of a successful job, I'm getting some strange stack traces when using Pig; however, it doesn't seem to be Pig-specific from the stacktrace. Rather, it appears that the job client is attempting to do something funny. Has anyone ever seen this sort of exception in

Re: HDFS read stats

2014-01-27 Thread Ted Yu
FSDataInputStream has this javadoc: /** Utility that wraps a {@link FSInputStream} in a {@link DataInputStream} */ You can utilize this method: @InterfaceAudience.LimitedPrivate({"HDFS"}) public InputStream getWrappedStream() { return in; } And cast the return value to DFSInputStream
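Putting the two replies together, a sketch of unwrapping the stream to reach the read statistics. Note that DFSInputStream and getReadStatistics() are LimitedPrivate HDFS APIs, so this is version-dependent; the file path is made up, and running it requires a live HDFS:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSInputStream;

// Sketch: unwrap the DFSInputStream behind an FSDataInputStream to reach
// its read statistics. LimitedPrivate API -- may change between releases.
public class ReadStatsSketch {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        try (FSDataInputStream in = fs.open(new Path("hdfs:///tmp/example.txt"))) {
            in.read(new byte[4096]);
            DFSInputStream dfsIn = (DFSInputStream) in.getWrappedStream();
            System.out.println("bytes read: "
                + dfsIn.getReadStatistics().getTotalBytesRead());
        }
    }
}
```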

Which numa policy is the best for hadoop process?

2014-01-27 Thread ch huang
Hi, mailing list: NUMA-architecture CPUs have several memory policies; I wonder if anyone has tested them, and which one is best?

Re: Do all reducers take input from all NodeManagers/TaskTrackers of map tasks

2014-01-27 Thread Amit Mittal
Hi Vinod, Thank you for the clarifications. Now I have reread the note, and it explains "How do reducers know which **machines** to fetch map output from?". So it's about which nodes in the entire cluster have the map output ready for this reducer. Thanks, Amit On Mon, Jan 27, 2014 at 10:36 PM, Vinod

Re: question about hadoop dfs

2014-01-27 Thread Jeff Zhang
You can use the fsck command to find the block locations; here's one example: hadoop fsck /user/hadoop/graph_data.txt -blocks -locations -files On Sun, Jan 26, 2014 at 2:48 PM, EdwardKing zhan...@neusoft.com wrote: hdfs-site.xml is as follows: <configuration> <property> <name>dfs.name.dir</name>

Re: HDFS buffer sizes

2014-01-27 Thread Arpit Agarwal
Looks like DistributedFileSystem ignores it, though. On Sat, Jan 25, 2014 at 6:09 AM, John Lilley john.lil...@redpoint.net wrote: There is this in FileSystem.java, which would appear to use the default buffer size of 4096 in the create() call unless otherwise specified in

RE: how to learn hadoop 2.2.0?

2014-01-27 Thread Ganesh Hariharan
Hope I am not hijacking the thread. Please let me know the scope for administrators in the space of Hadoop, Cassandra, etc. Regards, G Date: Sun, 26 Jan 2014 10:16:50 +0100 Subject: Re: how to learn hadoop 2.2.0? From: decho...@gmail.com To: user@hadoop.apache.org I would point you to