Re: Local file system to access hdfs blocks

2014-08-28 Thread Stanley Shi
*BP-13-7914115-10.122.195.197-14909166276345 is the block pool information;* *blk_1073742025 is the block name;* *these names are "private" to the HDFS system and users should not use them, right?* *But if you really want to know this, you can check the fsck code to see whether they are

Namenode shutdown, epoch number mismatch

2014-08-28 Thread cho ju il
Hadoop version 2.4.1. The namenode shut down because of an epoch number mismatch. Why did the epoch numbers suddenly mismatch? Why did the namenode suddenly shut down? *** namenode log 2014-08-26 12:17:48,625 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 3

How to remove Decommissioned node??

2014-08-28 Thread cho ju il
Hadoop version 2.4.1. I removed the datanode process; the node went Decommissioning >> Decommissioned >> Dead. How do I remove the decommissioned dead node?

"No space left on device" Exception, OPENFORWRITE file

2014-08-28 Thread cho ju il
How can I rebalance the disks of a datanode? Why are OPENFORWRITE files generated, and how do I handle them? Hadoop 1.1.2 *** Datanode disk usage Use% Mounted on 100% /data01 100% /data02 100% /data03 100% /data04 100% /data05 100% /data06 20% /data07 *
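Hadoop 1.x has no built-in intra-datanode disk balancer; the balancer only evens out usage across datanodes. A minimal sketch of the cluster-level balancer and of finding OPENFORWRITE files, assuming a Hadoop 1.x install (the threshold value is only an example):

```shell
# Balance block distribution across datanodes (not across the disks of one
# node); -threshold is the allowed deviation from the average utilization, in percent.
hadoop balancer -threshold 10

# List files that are still open for write, which fsck reports as OPENFORWRITE
# (typically left behind by writers that died before closing the file).
hadoop fsck / -openforwrite -files -blocks
```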

How to remove dead node ?

2014-08-28 Thread cho ju il
Hadoop version 2.4.1. I removed the datanode process; the node went Decommissioning >> Decommissioned >> Dead. How do I remove the dead nodes?
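A decommissioned node that has gone Dead is normally dropped from the namenode's node list by removing it from the host files and refreshing. A sketch, assuming a standard dfs.hosts / dfs.hosts.exclude setup; the hostname and file paths are illustrative:

```shell
# Remove the host from the exclude file (and the include file, if one is used);
# use whatever paths dfs.hosts.exclude / dfs.hosts actually point to.
sed -i '/^deadnode.example.com$/d' /etc/hadoop/conf/dfs.exclude
sed -i '/^deadnode.example.com$/d' /etc/hadoop/conf/dfs.include

# Tell the namenode to re-read the host lists; the node should then
# disappear from the Dead list in the web UI.
hdfs dfsadmin -refreshNodes
```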

How to retrieve cached data in datanode? (Centralized cache management)

2014-08-28 Thread 남윤민
Hello, I have a question about using data cached in memory via centralized cache management. I cached the data I want to use through the CLI (hdfs cacheadmin -addDirective ...). Then, when I write my MapReduce application, how can I read the cached data in memory? Here is the source co
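Centralized cache management is transparent to readers: a MapReduce job reads a cached file through the normal FileSystem/InputFormat path, and datanodes serve cached blocks from memory. A sketch of the CLI side, with the pool and path names as placeholders:

```shell
# Create a cache pool and cache a directory into datanode memory.
hdfs cacheadmin -addPool examplePool
hdfs cacheadmin -addDirective -path /user/example/input -pool examplePool

# Verify what is actually cached (compare bytesCached against bytesNeeded).
hdfs cacheadmin -listDirectives -stats
```

No application-side change is needed to benefit from the cache; the job just opens the path as usual.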

Re: org.apache.hadoop.io.compress.SnappyCodec not found

2014-08-28 Thread Tsuyoshi OZAWA
Hi, It looks like a classpath problem on the Spark side. Thanks, - Tsuyoshi On Fri, Aug 29, 2014 at 8:49 AM, arthur.hk.c...@gmail.com wrote: > Hi, > > I use Hadoop 2.4.1, I got "org.apache.hadoop.io.compress.SnappyCodec not > found" error: > > hadoop checknative > 14/08/29 02:54:51 WARN bzip2.Bzip2F

Re: Local file system to access hdfs blocks

2014-08-28 Thread Demai Ni
Stanley and all, thanks. I will write a client application to explore this path. A quick question again. Using the fsck command, I can retrieve all the necessary info $ hadoop fsck /tmp/list2.txt -files -blocks -racks . *BP-13-7914115-10.122.195.197-14909166276345:blk_1073742025* len=8 repl=2
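The fsck output above already yields the block pool ID and block name; on a datanode the corresponding block is stored as a plain local file under the configured data directories. A sketch, with the data-directory path as an assumption (the exact layout varies by HDFS version):

```shell
# Ask the namenode which datanodes and racks hold each block of the file.
hadoop fsck /tmp/list2.txt -files -blocks -locations -racks

# On one of the reported datanodes, locate the block file under
# dfs.datanode.data.dir (path below is illustrative).
find /data/dfs/dn -name 'blk_1073742025*'
```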

org.apache.hadoop.io.compress.SnappyCodec not found

2014-08-28 Thread arthur.hk.c...@gmail.com
Hi, I use Hadoop 2.4.1 and I got an "org.apache.hadoop.io.compress.SnappyCodec not found" error: hadoop checknative 14/08/29 02:54:51 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version 14/08/29 02:54:51 INFO zlib.ZlibFactory: Successfully
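`hadoop checknative` shows which native codecs actually loaded; when snappy reports `false`, the usual fix is making libsnappy and libhadoop visible to the JVM. A sketch, with the library path as an assumption:

```shell
# Show the load status of every native library Hadoop knows about
# (hadoop, zlib, snappy, lz4, bzip2).
hadoop checknative -a

# Point the JVM at the native libraries (path is illustrative; it must
# contain libhadoop.so and libsnappy.so).
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native:$LD_LIBRARY_PATH
hadoop checknative -a   # snappy should now report true
```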

Job is reported as complete on history server while on console it shows as only half way thru

2014-08-28 Thread S.L
Hi all, I am running an MRv1 job on a Hadoop YARN 2.3.0 cluster. The problem is that when I submit this job, YARN creates multiple applications for it, and the last application running in YARN is marked as complete even though on the console it is reported as only 58% complete. I have conf

Re: ApplicationMaster link on cluster web page does not work

2014-08-28 Thread Margusja
I moved the resourcemanager to another server and it works. I guess I have some network misrouting there :) Best regards, Margus (Margusja) Roo +372 51 48 780 http://margus.roo.ee http://ee.linkedin.com/in/margusroo skype: margusja ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)" On

Re: ApplicationMaster link on cluster web page does not work

2014-08-28 Thread Margusja
More information: after I started the resourcemanager [root@vm38 ~]# /etc/init.d/hadoop-yarn-resourcemanager start Starting Hadoop resourcemanager: [ OK ] and opened the cluster web interface, there are some tcp connections to 8088: [root@vm38 ~]# netstat -np | grep 8088 tcp

ApplicationMaster link on cluster web page does not work

2014-08-28 Thread Margusja
Hi, my configuration: yarn.app.mapreduce.am.staging-dir = /user; yarn.nodemanager.aux-services = mapreduce_shuffle; yarn.nodemanager.aux-services.mapreduce_shuffle.class = org.apache.hadoop.mapred.ShuffleHandler; yarn.application.classpath = /etc/hadoop/conf,/usr/lib/hadoop/*,/usr/lib

Re: Need some tutorials for Mapreduce written in Python

2014-08-28 Thread Amar Singh
Thank you to everyone who responded to this thread. I got a couple of good pointers and some good online courses to explore to get a fundamental understanding of things. Thanks Amar On Thu, Aug 28, 2014 at 10:15 AM, Sriram Balachander < sriram.balachan...@gmail.com> wrote: > Hadoop T

Re: hadoop installation error: localhost: ssh: connect to host localhost port 22: connection refused

2014-08-28 Thread Ritesh Kumar Singh
try 'ssh localhost' and show the output On Thu, Aug 28, 2014 at 7:55 PM, Li Chen wrote: > Can anyone please help me with this installation error? > > After I type "start-yarn.sh" : > > starting yarn daemons > starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-xx.out > localhos

hadoop installation error: localhost: ssh: connect to host localhost port 22: connection refused

2014-08-28 Thread Li Chen
Can anyone please help me with this installation error? After I type "start-yarn.sh" : starting yarn daemons starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-xx.out localhost: ssh: connect to host localhost port 22: connection refused when I ran jps to check, only Jps and Re
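"connection refused" on port 22 means no SSH daemon is listening on localhost, which the start-up scripts need even on a single-node install. A sketch for a Debian/Ubuntu-style machine (package and service names vary by distribution):

```shell
# Make sure an SSH daemon is installed and running.
sudo apt-get install -y openssh-server
sudo service ssh start

# Set up passwordless SSH to localhost for the user running Hadoop.
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# This should now log in without being refused or prompting for a password.
ssh localhost
```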

Re: What happens when .....?

2014-08-28 Thread Mahesh Khandewal
unsubscribe On Thu, Aug 28, 2014 at 6:42 PM, Eric Payne wrote: > Or, maybe have a look at Apache Falcon: > Falcon - Apache Falcon - Data management and processing platform

Re: What happens when .....?

2014-08-28 Thread Eric Payne
Or, maybe have a look at Apache Falcon: Falcon - Apache Falcon - Data management and processing platform (falcon.incubator.apache.org)

RE: Hadoop on Windows 8 with Java 8

2014-08-28 Thread Liu, Yi A
Currently Hadoop doesn't officially support Java 8. Regards, Yi Liu From: Ruebenacker, Oliver A [mailto:oliver.ruebenac...@altisource.com] Sent: Thursday, August 28, 2014 8:46 PM To: user@hadoop.apache.org Subject: Hadoop on Windows 8 with Java 8 Hello, I can't find any information on how

Hadoop on Windows 8 with Java 8

2014-08-28 Thread Ruebenacker, Oliver A
Hello, I can't find any information on how possible or difficult it is to install Hadoop as a single node on Windows 8 running Oracle Java 8. The tutorial on Hadoop 2 on Windows mentions neither Windows 8 nor Java 8. Is there anything know

Error could only be replicated to 0 nodes instead of minReplication (=1)

2014-08-28 Thread Jakub Stransky
Hello, we are using Hadoop 2.2.0 (HDP 2.0) and Avro 1.7.4, running on CentOS 6.3. I am facing the following issue when using AvroMultipleOutputs with dynamic output files. My M/R job works fine for a smaller amount of data, or at least the error hasn't appeared there so far. With a bigger amount of data I
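"could only be replicated to 0 nodes" usually means the namenode could not find any datanode able to accept the block (full or faulty disks, dead datanodes, or resource limits hit by a job that opens many dynamic output files at once). A first diagnostic pass, as a sketch:

```shell
# Check how many datanodes are live and how full each one is.
hdfs dfsadmin -report

# Check for under-replicated/missing blocks and files left open for write,
# which often accompany failed write pipelines.
hdfs fsck / -openforwrite
```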

Re: libhdfs result in JVM crash issue, please help me

2014-08-28 Thread Vincent,Wei
#0 0x7f1e3872c425 in raise () from /lib/x86_64-linux-gnu/libc.so.6 (gdb) bt #0 0x7f1e3872c425 in raise () from /lib/x86_64-linux-gnu/libc.so.6 #1 0x7f1e3872fb8b in abort () from /lib/x86_64-linux-gnu/libc.so.6 #2 0x7f1e380a4405 in os::abort(bool) () from /usr/jdk1.7.0_51/jre/lib

RE: Appending to HDFS file

2014-08-28 Thread rab ra
Thank you all, It works now Regards rab On 28 Aug 2014 12:06, "Liu, Yi A" wrote: > Right, please use FileSystem#append > > > > *From:* Stanley Shi [mailto:s...@pivotal.io] > *Sent:* Thursday, August 28, 2014 2:18 PM > *To:* user@hadoop.apache.org > *Subject:* Re: Appending to HDFS file > > > >
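For completeness, the same append path discussed in this thread is also exposed on the command line; a sketch, with the file names as placeholders:

```shell
# Append the contents of a local file to an existing HDFS file
# (this goes through FileSystem#append under the hood; append support
# is enabled by default in Hadoop 2.x).
hdfs dfs -appendToFile local-part.txt /user/example/data.txt
```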

RE: replication factor in hdfs-site.xml

2014-08-28 Thread Liu, Yi A
For #1, since you still have 2 datanodes alive and the replication is 2, writes will succeed (reads will succeed too). For #2, you only have 1 datanode left while the replication is 2, so the initial write will succeed, but pipeline recovery will fail at some later point. Regards, Yi Liu -Origina

Re: Running job issues

2014-08-28 Thread Blanca Hernandez
Thanks, it fixed my problem! From: Arpit Agarwal [mailto:aagar...@hortonworks.com] Sent: Thursday, 28 August 2014 01:41 To: user@hadoop.apache.org Subject: Re: Running job issues Susheel is right. I've fixed the typo on the wiki page. On Wed, Aug 27, 2014 at 12:28 AM, Susheel Kumar Gadalay

replication factor in hdfs-site.xml

2014-08-28 Thread Satyam Singh
Hi users, I would like to know how a Hadoop cluster behaves for writes and reads in the following replication cases: 1. replication=2 and, in a cluster of 3 datanodes + namenode, one datanode goes down. 2. replication=2 and, in the same cluster, 2 datanodes go down. BR, Satyam
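Both scenarios can be observed directly on a test cluster; a sketch, with the file path as a placeholder:

```shell
# Set (and wait for) replication factor 2 on a test file.
hdfs dfs -setrep -w 2 /tmp/repl-test.txt

# After stopping one or two datanodes, watch how the namenode sees the
# file's blocks (replica count, under-replication, missing replicas).
hdfs fsck /tmp/repl-test.txt -files -blocks -locations
hdfs dfsadmin -report
```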