Re: org.mortbay.jetty.EofException

2014-01-02 Thread Azuryy
I also found this issue. Hoping for an answer here. Sent from my iPhone5s > On Jan 2, 2014, at 16:56, centerqi hu wrote: > > Hi all > > My hadoop version is 1.1.2 > > I found the following errors in the tasktracker logs > This seems like https://issues.apache.org/jira/browse/MAPREDUCE-5 > > But, but I

Re: Issues with Decommissioning Machine

2014-03-07 Thread Azuryy
You are right. Sent from my iPhone5s > On Mar 7, 2014, at 22:10, divye sheth wrote: > > figured it out. We were facing a scenario where some of the files had > a replication factor higher than the actual number of datanodes. > > https://issues.apache.org/jira/browse/MAPREDUCE-2845 > > That is why

Re: HA NN Failover question

2014-03-14 Thread Azuryy
Which Hadoop version are you using? Sent from my iPhone5s > On Mar 15, 2014, at 9:29, dlmarion wrote: > > Server 1: NN1 and ZKFC1 > Server 2: NN2 and ZKFC2 > Server 3: Journal1 and ZK1 > Server 4: Journal2 and ZK2 > Server 5: Journal3 and ZK3 > Server 6+: Datanode > > All in the same rack. I would

Re: HA NN Failover question

2014-03-14 Thread Azuryy
I suppose NN2 is standby; please check that ZKFC2 is alive before stopping the network on NN1. Sent from my iPhone5s > On Mar 15, 2014, at 10:53, dlmarion wrote: > > Apache Hadoop 2.3.0 > > > Sent via the Samsung GALAXY S®4, an AT&T 4G LTE smartphone > > > Original

Re: Is Hadoop's ToolRunner thread-safe?

2014-03-21 Thread Azuryy
Yes, this is the best way to go. Sent from my iPhone5s > On Mar 22, 2014, at 3:03, Something Something wrote: > > I will be happy to follow all these steps if someone confirms that this is > the best way to handle it. Seems harmless to me, but just wondering. Thanks. > > >> On Fri, Mar 21, 2

Re: YarnException: Unauthorized request to start container. This token is expired.

2014-03-23 Thread Azuryy
Hi, please send this to the CDH mailing list; you cannot get an answer here. Sent from my iPhone5s > On Mar 23, 2014, at 17:37, Fengyun RAO wrote: > > What does this exception mean? I googled a lot, all the results tell me it's > because the time is not synchronized between datanode and namenode. > Howeve

Re: Getting error message from AM container launch

2014-03-26 Thread Azuryy
Did you use 'nice' in your app? Sent from my iPhone5s > On Mar 27, 2014, at 6:55, John Lilley wrote: > > On further examination they appear to be 369 characters long. I’ve read > about similar issues showing when the environment exceeds 132KB, but we > aren’t putting anything significant in the

Re: Hadoop 2.2.0 Distributed Cache

2014-03-27 Thread Azuryy
-files is used by hive, not MR, so it cannot be recognized by your MR job. Sent from my iPhone5s > On Mar 28, 2014, at 2:31, Jonathan Poon wrote: > > Hi Serge, > > I'm using the -files option through the hadoop cli. > > The following lines of code work > > Path[] localPaths = context.getLoca

Re: Exception in createBlockOutputStream trying to start application

2014-04-04 Thread Azuryy
Hi Murillo, Generally an EOF error is caused by the network. You can ignore this exception; it just tells you one DN in the pipeline is bad, and the dfsclient will avoid locating it next time. Sent from my iPhone5s > On Apr 5, 2014, at 1:25, Murillo Flores wrote: > > Hello everybody, > > I'm running a ha

Re: Connect a node in virtual machine with other nodes in physical machine with port forwarding

2014-04-06 Thread Azuryy
Hi, Just do it; I did it successfully. Sent from my iPhone5s > On Apr 6, 2014, at 14:42, Chau Yee Cheung wrote: > > Hi everyone, > > I am trying to build a cluster with my classmates. 3 of us have linux > installed on the physical machine, 2 of us are using virtual machines. Since > the campu

Re: Connect a node in virtual machine with other nodes in physical machine with port forwarding

2014-04-06 Thread Azuryy
ss the VM guest on another machine without port forwarding? (just making > sure I'm not being paranoid here :) ) > > > 2014-04-06 18:29 GMT+08:00 Azuryy : >> Hi, >> Just do it, I did it successfully. >> >> >> Sent from my iPhone5s >> >

Re: heterogeneous storages in HDFS

2014-04-14 Thread Azuryy
Hadoop 2.5 should be released in mid-May. Sent from my iPhone5s > On Apr 14, 2014, at 17:47, lei liu wrote: > > When will hadoop be released? > > > > > 2014-04-14 17:04 GMT+08:00 Stanley Shi : >> Please find it in this page: https://wiki.apache.org/hadoop/Roadmap >> >> hadoop 2.3.0 only include

Re: For a new installation: use the BackupNode or the CheckPointNode?

2013-03-23 Thread Azuryy Yu
IMO, if you run HA, then SSN is not necessary. On Mar 24, 2013 12:40 PM, "Harsh J" wrote: > Yep, this is correct - you only need the SecondaryNameNode in 1.x. In > 2.x, if you run HA, the standby NameNode role also doubles up > automatically as the SNN so you don't need to run an extra. > > On Su

Re: For a new installation: use the BackupNode or the CheckPointNode?

2013-03-23 Thread Azuryy Yu
SNN (secondary name node), sorry for the typo. On Mar 24, 2013 12:59 PM, "Azuryy Yu" wrote: > IMO, if you run HA, then SSN is not necessary. > On Mar 24, 2013 12:40 PM, "Harsh J" wrote: > >> Yep, this is correct - you only need the SecondaryNameNode in 1.x. In

Re: 2 Reduce method in one Job

2013-03-24 Thread Azuryy Yu
There isn't such a method; you have to submit another MR job. On Mar 24, 2013 9:03 PM, "Fatih Haltas" wrote: > I want to get reduce output as key and value then I want to pass them to a > new reduce as input key and input value. > > So is there any Map-Reduce-Reduce kind of method? > > Thanks to all. >

Re: question for commetter

2013-03-24 Thread Azuryy Yu
Good question; I just want HA and don't want to change more configuration. On Mar 25, 2013 2:32 AM, "Balaji Narayanan (பாலாஜி நாராயணன்)" < li...@balajin.net> wrote: > is there a reason why you dont want to run MRv2 under yarn? > > > On 22 March 2013 22:49, Azuryy Yu w

Hadoop-2.x native libraries

2013-03-24 Thread Azuryy Yu
Hi, how can I get the hadoop-2.0.3-alpha native libraries? They are compiled under a 32-bit OS in the currently released package.

Re: Hadoop-2.x native libraries

2013-03-24 Thread Azuryy Yu
ive,docs -DskipTests -Dtar" and > then use that. > > Alternatively, if you're interested in packages, use the Apache > Bigtop's scripts from http://bigtop.apache.org/ project's repository > and generate the packages with native libs as well. > > On Mon, Mar 25

HDFS-HA customized callback support

2013-03-25 Thread Azuryy Yu
I just submitted the following patch; reviews are welcome. -- Forwarded message -- From: "Fengdong Yu (JIRA)" Date: Mar 25, 2013 6:07 PM Subject: [jira] [Created] (HDFS-4631) Support customized call back method during failover automatically. To: Cc: Fengdong Yu created HDFS-4631: -

Re: Hadoop-2.x native libraries

2013-03-25 Thread Azuryy Yu
rry, > Do you have detail steps what did you do to make MRV1 work with HDFS2? > Thanks, > Mounir > On Mon, 2013-03-25 at 13:39 +0800, Azuryy Yu wrote: > > Thanks Harsh! > > I used -Pnative got it. > > I am compile src code. I made MRv1 work with HDFSv2 successfully. >

Re: Any answer ? Candidate application for map reduce

2013-03-25 Thread Azuryy Yu
For your requirement, just write a customized MR InputFormat and OutputFormat based on FileInputFormat. On Mar 25, 2013 1:48 PM, "AMARNATH, Balachandar" < balachandar.amarn...@airbus.com> wrote: > Any answers from anyone of you :) > > Regards > > Bala >

RE: How to tell my Hadoop cluster to read data from an external server

2013-03-26 Thread Azuryy Yu
Can you addInputPath(hdfs://……)? Don't change fs.default.name; that cannot solve your problem. On Mar 26, 2013 7:03 PM, "Agarwal, Nikhil" wrote: > Hi, > > Thanks for your reply. I do not know about cascading. Should I google it > as “cascading in hadoop”? Also, what I was thinking is to implemen
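
A minimal sketch of that suggestion (host name and path are hypothetical; assumes Hadoop 2's Job.getInstance, on 1.x use new Job(conf) instead). The new-API FileInputFormat accepts fully qualified URIs, so a job can read from a remote HDFS without touching fs.default.name:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class RemoteInputDriver {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "read-remote");
        // A fully qualified URI pins this input to the remote HDFS,
        // regardless of what fs.default.name points to.
        FileInputFormat.addInputPath(job,
            new Path("hdfs://remote-nn:8020/data/input"));
      }
    }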

Re: Regarding NameNode Problem

2013-03-26 Thread Azuryy Yu
and your hadoop version. On Mar 26, 2013 1:28 PM, "Mohammad Tariq" wrote: > Hello Sagar, > > It would be helpful if you could share your logs with us. > > Warm Regards, > Tariq > https://mtariq.jux.com/ > cloudfront.blogspot.com > > > On Tue, Mar 26, 2013 at 10:47 AM, Sagar Thacker wrot

RE: For a new installation: use the BackupNode or the CheckPointNode?

2013-03-26 Thread Azuryy Yu
Yes, you got it. Hadoop 1.0.x cannot fail over automatically or manually; you have to copy the fsimage from the SNN to the primary NN. On Mar 27, 2013 11:29 AM, "David Parks" wrote: > Thanks for the update, I understand now that I'll be installing a > "secondary > name node" which performs checkpoints on the primary

Re: Why do some blocks refuse to replicate...?

2013-03-28 Thread Azuryy Yu
Which Hadoop version are you using? On Mar 29, 2013 5:24 AM, "Felix GV" wrote: > > Yes, I didn't specify how I was testing my changes, but basically, here's what I did: > > My hdfs-site.xml file was modified to include a reference to a file containing a list of all datanodes (via dfs.hosts) and a ref

Re: QJM failover callback patch

2013-03-28 Thread Azuryy Yu
Sorry! Todd has already reviewed it. On Fri, Mar 29, 2013 at 11:40 AM, Azuryy Yu wrote: > hi, > who can review this one: > https://issues.apache.org/jira/browse/HDFS-4631 > > thanks. >

Re: FileSystem Error

2013-03-29 Thread Azuryy Yu
Use hadoop jar instead of java -jar; the hadoop script sets a proper classpath for you. On Mar 29, 2013 11:55 PM, "Cyril Bogus" wrote: > Hi, > > I am running a small java program that basically write a small input data > to the Hadoop FileSystem, run a Mahout Canopy and Kmeans Clustering and

Re: Why big block size for HDFS.

2013-03-31 Thread Azuryy Yu
When you seek to a position within an HDFS file, you do not seek from the start of the first block and scan one by one. Actually, the DFSClient can skip blocks until it finds the block whose offset and length cover your seek position. On Mon, Apr 1, 2013 at 12:55 AM, Rahul Bhattacharjee

Re: are we able to decommission multi nodes at one time?

2013-04-01 Thread Azuryy Yu
I can translate it into native English: how many nodes do you want to decommission? On Tue, Apr 2, 2013 at 11:01 AM, Yanbo Liang wrote: > You want to decommission how many nodes? > > > 2013/4/2 Henry JunYoung KIM > >> 15 for datanodes and 3 for replication factor. >> >> 2013. 4. 1., 3:23 PM, varun

Re: Provide context to map function

2013-04-01 Thread Azuryy Yu
In your map function add the following: Path currentInput = ((FileSplit)context.getInputSplit()).getPath(); then: if (currentInput is first ){ } else{ .. } On Tue, Apr 2, 2013 at 11:55 AM, Abhinav M Kulkarni < abhinavkulka...@gmail.com> wrote: > Hi, > > I have

Re: Provide context to map function

2013-04-01 Thread Azuryy Yu
I assumed your input splits are FileSplits; if not, you need to: InputSplit split = context.getInputSplit(); if (split instanceof FileSplit){ Path path = ((FileSplit)split).getPath(); } On Tue, Apr 2, 2013 at 12:02 PM, Azuryy Yu wrote: > In your map function add the following: >
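
Putting the two replies together, a minimal mapper sketch (the "first-input" marker path is hypothetical; assumes the new org.apache.hadoop.mapreduce API):

    import java.io.IOException;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileSplit;

    public class PathAwareMapper extends Mapper<LongWritable, Text, Text, Text> {
      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        // Guard with instanceof: some wrappers hand the mapper a non-FileSplit.
        if (context.getInputSplit() instanceof FileSplit) {
          Path current = ((FileSplit) context.getInputSplit()).getPath();
          if (current.toString().contains("first-input")) {
            // handle records from the first input
          } else {
            // handle records from the other inputs
          }
        }
      }
    }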

Re: Eclipse plugin for hadoop 1.1.1

2013-04-01 Thread Azuryy Yu
It's unavailable in the hadoop1.x distribution, you can find it in the hadoop-0.20.x distribution. On Tue, Apr 2, 2013 at 1:05 PM, Varsha Raveendran < varsha.raveend...@gmail.com> wrote: > Hello! > > Is there an eclipse plugin for MapReduce on hadoop 1.1.1 available? > > I am finding it difficul

Re: are we able to decommission multi nodes at one time?

2013-04-02 Thread Azuryy Yu
; >>>> 2013/4/2 Henry Junyoung Kim > >>>>> > >>>>> :) > >>>>> > >>>>> currently, I have 15 data nodes. > >>>>> for some tests, I am trying to decommission until 8 nodes. > >>>>>

Re: are we able to decommission multi nodes at one time?

2013-04-02 Thread Azuryy Yu
bq. then namenode start to copy block replicates on DN-2 to another DN, supposed DN-2. Sorry for the typo. Corrected: then the namenode starts to copy the block replicas on DN-1 to another DN, say DN-2. On Wed, Apr 3, 2013 at 9:51 AM, Azuryy Yu wrote: > It's different. > If you j

Re: are we able to decommission multi nodes at one time?

2013-04-03 Thread Azuryy Yu
not at all. so don't worry about that. On Wed, Apr 3, 2013 at 2:04 PM, Yanbo Liang wrote: > It means that may be some replicas will be stay in under replica state? > > > 2013/4/3 Azuryy Yu > >> bq. then namenode start to copy block replicates on DN-2 to an

Re: MapReduce on Local files

2013-04-03 Thread Azuryy Yu
For FileInputFormat, files starting with "_" are hidden by default. You can write a custom PathFilter and pass it to the InputFormat. On Wed, Apr 3, 2013 at 5:58 PM, Harsh J wrote: > You've been misled by the GUI you use, I'm afraid. Many DEs (Desktop > Environments) consider ~-suffix files as hidd
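
For reference, the standard hook looks like this (example filter that skips ".tmp" files; note that stock FileInputFormat always applies its own hidden-file filter on top of a user filter, so re-including "_"-prefixed files may require a FileInputFormat subclass that overrides listStatus instead):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.PathFilter;

    // Example user filter: skip temporary files.
    public class SkipTmpFilter implements PathFilter {
      @Override
      public boolean accept(Path path) {
        return !path.getName().endsWith(".tmp");
      }
    }

    // Wire it up on the job (new API):
    // org.apache.hadoop.mapreduce.lib.input.FileInputFormat
    //     .setInputPathFilter(job, SkipTmpFilter.class);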

Re: NameNode failure and recovery!

2013-04-03 Thread Azuryy Yu
For Hadoop v2 there is HA, so the SNN is not necessary. On Apr 3, 2013 10:41 PM, "Rahul Bhattacharjee" wrote: > Hi all, > > I was reading about Hadoop and got to know that there are two ways to > protect against the name node failures. > > 1) To write to a nfs mount along with the usual local disk. >

Re: What do ns_quota and ds_quota mean in a namenode entry

2013-04-04 Thread Azuryy Yu
Namespace and disk space: ns limits the number of names (files and directories); ds limits the total file size. On Apr 4, 2013 3:12 PM, "Bert Yuan" wrote: > Below is the json format of a namenode entry: > { > inode:{ > inodepath:'/anotherDir/biggerfile', > replication:3, > modificationtime:'2
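
These quotas are the ones set through dfsadmin (a sketch; directory and values are hypothetical, and on 2.x the command is hdfs dfsadmin). Note the space quota counts replicated bytes:

    # Limit the number of names (files + directories) under a directory:
    hadoop dfsadmin -setQuota 10000 /user/alice
    # Limit total space consumed (replication counts against it):
    hadoop dfsadmin -setSpaceQuota 1t /user/alice
    # Clear them again:
    hadoop dfsadmin -clrQuota /user/alice
    hadoop dfsadmin -clrSpaceQuota /user/alice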

Re: Hadoop 1.0.4 with Eclipse

2013-04-06 Thread Azuryy Yu
Download hadoop-0.20.203; it includes the hadoop-eclipse plugin, which also supports hadoop-1.0.4. Send from my Sony mobile. On Apr 5, 2013 11:14 PM, "sahil soni" wrote: > Hi All, > > I have installed Hadoop 1.0.4 on Red Hat Linux 5. I want to > install Eclipse (any version) on Windows 7 and w

Re: MVN repository for hadoop trunk

2013-04-06 Thread Azuryy Yu
Hi, do you think trunk is as stable as the released stable version? --Send from my Sony mobile. On Apr 7, 2013 5:01 AM, "Harsh J" wrote: > I don't think we publish nightly or rolling jars anywhere on maven > central from trunk builds. > > On Sun, Apr 7, 2013 at 2:17 AM, Jay Vyas wrote: > > Hi guy

Re: backup node question

2013-04-07 Thread Azuryy Yu
Hi Harsh, do you mean the BackupNameNode is the Secondary NameNode in Hadoop 1.x? On Sun, Apr 7, 2013 at 4:05 PM, Harsh J wrote: > Yes, it need not keep an edits (transactions) stream locally cause > those are passed synchronously to the BackupNameNode, which persists > it on its behalf. > > On Sun, Apr

Re: backup node question

2013-04-07 Thread Azuryy Yu
out in 2.x today if > you wish to. > > On Sun, Apr 7, 2013 at 3:12 PM, Azuryy Yu wrote: > > Hi Harsh, > > Do you mean BackupNameNode is Secondary NameNode in Hadoop1.x? > > > > > > On Sun, Apr 7, 2013 at 4:05 PM, Harsh J wrote: > >> > >> Yes,

Re: backup node question

2013-04-07 Thread Azuryy Yu
SNN=secondary name node in my last mail. --Send from my Sony mobile. On Apr 7, 2013 10:01 PM, "Azuryy Yu" wrote: > I am confused. Hadoopv2 has NN SNN DN JN(journal node), so whats > Standby Namenode? > > --Send from my Sony mobile. > On Apr 7, 2013

Re: backup node question

2013-04-07 Thread Azuryy Yu
daemon (i.e. it just runs the NameNode service), > just a naming convention. > > On Sun, Apr 7, 2013 at 7:31 PM, Azuryy Yu wrote: > > I am confused. Hadoopv2 has NN SNN DN JN(journal node), so whats > > Standby Namenode? > > > > --Send from my

A question of QJM with HDFS federation

2013-04-07 Thread Azuryy Yu
Hi all, I deployed Hadoop v2 with HA enabled using QJM, so my question is: 1) if we also configure HDFS federation, such as: NN1 is active, NN2 is standby; NN3 is active, NN4 is standby; and they are configured as HDFS federation, then can these four NNs use the same Journal nodes and ZK

Re: A question of QJM with HDFS federation

2013-04-07 Thread Azuryy Yu
, and > won't collide with another NSs' ZKFCs. > > Do post back if there are still some more doubts. > > On Mon, Apr 8, 2013 at 10:53 AM, Azuryy Yu wrote: > > Hi all, > > I deployed Hadoop v2 with HA enabled using QJM, so my question is: > > >
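
The reply above suggests sharing is fine as long as each nameservice gets its own journal identifier. A hedged hdfs-site.xml sketch (hostnames, nameservice IDs, and the per-nameservice key suffixing are assumptions based on the federation docs):

    <!-- Nameservice ns1: three shared journal nodes, journal ID "ns1" -->
    <property>
      <name>dfs.namenode.shared.edits.dir.ns1</name>
      <value>qjournal://jn1:8485;jn2:8485;jn3:8485/ns1</value>
    </property>
    <!-- Nameservice ns2: the same quorum, distinct journal ID "ns2" -->
    <property>
      <name>dfs.namenode.shared.edits.dir.ns2</name>
      <value>qjournal://jn1:8485;jn2:8485;jn3:8485/ns2</value>
    </property>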

Re: Best format to use

2013-04-08 Thread Azuryy Yu
Impala can work with compressed files, but as compressed sequence files, not directly compressed text. On Tue, Apr 9, 2013 at 7:48 AM, Mark wrote: > Trying to determine the best format to use for storing daily logs. We > recently switched from snappy (.snappy) to gzip (.deflate) but I'm wondering > if ther

Re: Problem accessing HDFS from a remote machine

2013-04-08 Thread Azuryy Yu
Can you use the command "jps" on your localhost to see if there is a NameNode process running? On Tue, Apr 9, 2013 at 2:27 AM, Bjorn Jonsson wrote: > Yes, the namenode port is not open for your cluster. I had this problem > too. First, log into your namenode and do netstat -nap to see what ports are >

Re: The Job.xml file

2013-04-09 Thread Azuryy Yu
Yes, you can start a job directly from a job.xml. Try "hadoop job -submit JOB_FILE", replacing JOB_FILE with your job.xml. On Wed, Apr 10, 2013 at 12:25 AM, Jay Vyas wrote: > Hi guys: I can't find much info about the life cycle of the job.xml file > in hadoop. > > My thoughts are: > > 1) It is c

Re: Copy Vs DistCP

2013-04-10 Thread Azuryy Yu
The cp command is not parallel; it just calls the FileSystem API, even though DFSClient has multiple threads. DistCp can work well on the same cluster. On Thu, Apr 11, 2013 at 8:17 AM, KayVajj wrote: > The File System Copy utility copies files byte by byte if I'm not wrong. > Could it be possible that the cp co

Re: Copy Vs DistCP

2013-04-11 Thread Azuryy Yu
file partitioned on >> various data nodes? >> >> >> On Wed, Apr 10, 2013 at 6:30 PM, Azuryy Yu wrote: >> >>> CP command is not parallel, It's just call FileSystem, even if DFSClient >>> has multi threads. >>> >>> DistCp can wor

Re: Copy Vs DistCP

2013-04-11 Thread Azuryy Yu
re albeit not often. >> >> Sent from my iPhone >> >> On Apr 10, 2013, at 10:37 PM, Alexander Pivovarov >> wrote: >> >> if cluster is busy with other jobs distcp will wait for free map slots. >> Regular cp is more reliable and predictable. Especialy if you

Re: Mapper always hangs at the same spot

2013-04-13 Thread Azuryy Yu
Agreed. Just check your app, or paste your map code here. --Send from my Sony mobile. On Apr 14, 2013 4:08 AM, "Edward Capriolo" wrote: > Your application logic is likely stuck in a loop. > > > On Sat, Apr 13, 2013 at 12:47 PM, Chris Hokamp wrote: >> >When you say "never progresses", do you see the

Re: A question of QJM with HDFS federation

2013-04-14 Thread Azuryy Yu
use the same journal nodes and ZK nodes? Thanks. On Mon, Apr 8, 2013 at 2:57 PM, Azuryy Yu wrote: > Thank you very much, Harsh. > > not yet question now. > > --Send from my Sony mobile. > On Apr 8, 2013 2:51 PM, "Harsh J" wrote: > >> Hi Azurry, >> &

Re: jobtracker not starting - access control exception - folder not owned by me (it claims)

2013-04-15 Thread Azuryy Yu
I suppose you run start-mapred as user mapred. Then: hadoop fs -chown -R mapred:mapred /home/jbu/hadoop_local_install/hadoop-1.0.4/tmp/mapred/system This is caused by the fairscheduler; please see MAPREDUCE-4398. On Mon, Apr 15, 2013 at 6:43 PM, J

Re: Re: Region has been CLOSING for too long, this should eventually complete or the server will expire, send RPC again

2013-04-15 Thread Azuryy Yu
This is a zookeeper issue; please paste the zookeeper log here. Thanks. On Tue, Apr 16, 2013 at 9:58 AM, dylan wrote: > It is hbase-0.94.2-cdh4.2.0. > > *From:* Ted Yu [mailto:yuzhih...@gmail.com] > *Sent:* Apr 16, 2013 9:55 > *To:* u...@hbase.apache.org > *Subject:* Re: Region has been CLOSI

Re: Re: Re: Region has been CLOSING for too long, this should eventually complete or the server will expire, send RPC again

2013-04-15 Thread Azuryy Yu
16, 2013 at 10:37 AM, dylan wrote: > How do I check the zookeeper log? It is binary files; how do I transform it > to a normal log? I found "org.apache.zookeeper.server.LogFormatter"; how do I run it? > > *From:* Azuryy Yu [mailto:azury...@gmail.com] &g

Re: Re: Re: Region has been CLOSING for too long, this should eventually complete or the server will expire, send RPC again

2013-04-15 Thread Azuryy Yu
And paste the ZK configuration in zookeeper_home/conf/zoo.cfg On Tue, Apr 16, 2013 at 10:42 AM, Azuryy Yu wrote: > it is located under hbase-home/logs/ if your zookeeper is managed by hbase. > > but I noticed you configured QJM, so do your QJM and Hbase share the > same ZK cluster?

Re: Re: Re: Re: Region has been CLOSING for too long, this should eventually complete or the server will expire, send RPC again

2013-04-15 Thread Azuryy Yu
8:3888 > > server.2=Slave02:2888:3888 > > server.3=Slave03:2888:3888 > > *From:* Azuryy Yu [mailto:azury...@gmail.com] > *Sent:* Apr 16, 2013 10:45 > *To:* user@hadoop.apache.org > *Subject:* Re: Re: Re: Region has been CLOSING for too long, this shoul

Re: Re: Re: Re: Re: Region has been CLOSING for too long, this should eventually complete or the server will expire, send RPC again

2013-04-15 Thread Azuryy Yu
HoldException: Master is initializing > > *From:* Azuryy Yu [mailto:azury...@gmail.com] > *Sent:* Apr 16, 2013 10:59 > *To:* user@hadoop.apache.org > *Subject:* Re: Re: Re: Re: Region has been CLOSING for too long, this should > eventually complete or the server will expire,

Re: Re: Re: Re: Re: Re: Region has been CLOSING for too long, this should eventually complete or the server will expire, send RPC again

2013-04-15 Thread Azuryy Yu
de0003 > > 2013-04-16 11:03:44,000 [myid:1] - INFO > [SessionTracker:ZooKeeperServer@325] - Expiring session > 0x23e0dc5a333000b, timeout of 4ms exceeded > > 2013-04-16 11:03:44,001 [myid:1] - INFO [ProcessThread(sid:1 > cport:-1)::PrepRequestProcessor@476] - Proce

Re: Submitting mapreduce and nothing happens

2013-04-16 Thread Azuryy Yu
Do you have data on your input path? On Wed, Apr 17, 2013 at 1:18 AM, Amit Sela wrote: > Nothing on JT log, but as I mentioned I see this in the client log: > > [WARN ] org.apache.hadoop.mapred.JobClient » Use > GenericOptionsParser for parsing the arguments. Applications should >

Re: Reading and Writing Sequencefile using Hadoop 2.0 Apis

2013-04-17 Thread Azuryy Yu
You can use it even though it's deprecated. I can find in org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.java: @Override public void initialize(InputSplit split, TaskAttemptContext context ) throws IOException, InterruptedExcep

Re: Physically moving HDFS cluster to new

2013-04-17 Thread Azuryy Yu
Changed datanode names or IPs cannot cause data loss. As long as you keep the fsimage (under the namenode data dir) and all block data on the datanodes, everything can be recovered when you start the cluster. On Thu, Apr 18, 2013 at 1:20 AM, Tom Brown wrote: > We have a situation where we want t

Re: Cartesian product in hadoop

2013-04-18 Thread Azuryy Yu
This is not suitable for his large dataset. --Send from my Sony mobile. On Apr 18, 2013 5:58 PM, "Jagat Singh" wrote: > Hi, > > Can you have a look at > > http://pig.apache.org/docs/r0.11.1/basic.html#cross > > Thanks > > > On Thu, Apr 18, 2013 at 7:47 PM, zheyi rong wrote: > >> Dear all, >> >>

When will hadoop-2.0 have a stable release

2013-04-19 Thread Azuryy Yu
I don't think this is easy to answer; maybe it's not decided yet. If so, can you tell me what important features are still being developed, or any other reasons? Appreciated.

Re: Writing intermediate key,value pairs to file and read it again

2013-04-20 Thread Azuryy Yu
You should look at the ChainReducer javadoc, which meets your requirement. --Send from my Sony mobile. On Apr 20, 2013 11:43 PM, "Vikas Jadhav" wrote: > Hello, > Can anyone help me with the following issue: > writing intermediate key,value pairs to a file and reading them again > > let us say i have to write each

Re: Keep Kerberos credentials valid after logging out

2013-05-21 Thread Azuryy Yu
nohup ./your_bash 1>temp.log 2>&1 & --Send from my Sony mobile. On May 21, 2013 6:32 PM, "zheyi rong" wrote: > Hi all, > > I would like to run my hadoop job in a bash file for several times, e.g. > #!/usr/bin/env bash > for i in {1..10} > do > my-hadoop-job > done > > Since I d
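
A sketch of that pattern (script name and job command are hypothetical). Launching the loop with nohup detaches it from the terminal so logging out does not kill it:

    #!/usr/bin/env bash
    # run-jobs.sh: run the job ten times; survives logout when started with nohup.
    for i in {1..10}
    do
      my-hadoop-job   # placeholder for the real job command
    done

    # Launch it detached, with stdout/stderr captured:
    #   nohup ./run-jobs.sh 1>temp.log 2>&1 &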

Re: Hint on EOFException's on datanodes

2013-05-24 Thread Azuryy Yu
Maybe a network issue; the datanode received an incomplete packet. --Send from my Sony mobile. On May 24, 2013 1:39 PM, "Stephen Boesch" wrote: > > On a smallish (10 node) cluster with only 2 mappers per node after a few > minutes EOFExceptions are cropping up on the datanodes: an example is shown > b

Re:

2013-06-01 Thread Azuryy Yu
This should be fixed in the hadoop-1.1.2 stable release. If completedMapsInputSize is zero, then the job's map tasks MUST be zero, so the estimated output size is zero. Below is the code: long getEstimatedMapOutputSize() { long estimate = 0L; if (job.desiredMaps() > 0) { estim

Re:

2013-06-01 Thread Azuryy Yu
"estimate total map output will be " + estimate); } return estimate; } } On Sun, Jun 2, 2013 at 12:34 AM, Azuryy Yu wrote: > This should be fixed in the hadoop-1.1.2 stable release. > If completedMapsInputSize is zero, then the job's map tasks MUST >

Re:

2013-06-03 Thread Azuryy Yu
Can you upgrade to 1.1.2? It is also a stable release and fixes the bug you are facing now. --Send from my Sony mobile. On Jun 2, 2013 3:23 AM, "Shahab Yunus" wrote: > Thanks Harsh for the reply. I was confused too about why security is > causing this. > > Regards, > Shahab > > > On Sat, Jun 1, 2

Re:

2013-06-03 Thread Azuryy Yu
Yes. hadoop-1.1.2 was released on Jan. 31st; just download it. On Tue, Jun 4, 2013 at 6:33 AM, Lanati, Matteo wrote: > Hi Azuryy, > > thanks for the update. Sorry for the silly question, but where can I > download the patched version? > If I look into the closest mirr

Re:

2013-06-03 Thread Azuryy Yu
Hi Harsh, I need to take care of my eyes; I misread 1.2.0 as 1.0.2, so I said upgrade. Sorry. On Tue, Jun 4, 2013 at 9:46 AM, Harsh J wrote: > Azuryy, > > 1.1.2 < 1.2.0. Its not an upgrade you're suggesting there. If you feel > there's been a regression, ca

Re: how to locate the replicas of a file in HDFS?

2013-06-04 Thread Azuryy Yu
ClientProtocol namenode = DFSClient.createNamenode(conf); HdfsFileStatus hfs = namenode.getFileInfo("your_hdfs_file_name"); LocatedBlocks lbs = namenode.getBlockLocations("your_hdfs_file_name", 0, hfs.getLen()); for (LocatedBlock lb : lbs.getLocatedBlocks()) { DatanodeInfo[] info = lb.getLocati
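
The snippet above goes through DFSClient internals; a sketch of the same lookup via the public FileSystem API (the path is hypothetical):

    import java.util.Arrays;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListReplicas {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus st = fs.getFileStatus(new Path("/your_hdfs_file_name"));
        // One BlockLocation per block; getHosts() names the datanodes
        // holding that block's replicas.
        for (BlockLocation loc : fs.getFileBlockLocations(st, 0, st.getLen())) {
          System.out.println(loc.getOffset() + " -> "
              + Arrays.toString(loc.getHosts()));
        }
      }
    }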

HadoopV2 and HDFS-fuse

2013-06-08 Thread Azuryy Yu
Hi, can anybody tell me how to compile hdfs-fuse based on Hadoop-2.0-*? Thanks.

Re: HadoopV2 and HDFS-fuse

2013-06-09 Thread Azuryy Yu
Hi Harsh, yes, I've built the native libs with -Pnative successfully. I also used -Drequire.fuse=true, but I just found the contrib/fuse directory is empty, so I asked this question. Thanks Harsh. --Send from my Sony mobile. On Jun 9, 2013 9:09 PM, "Harsh J" wrote: > Hi Azuryy, > > Are

Re: hadoop 2.0 client configuration

2013-06-10 Thread Azuryy Yu
If you want to work with HA, yes, all these configurations are needed. --Send from my Sony mobile. On Jun 11, 2013 8:05 AM, "Praveen M" wrote: > Hello, > > I'm a hadoop n00b, and I had recently upgraded from hadoop 0.20.2 to > hadoop 2 (chd-4.2.1) > > For a client configuration to connect to the hadoop

Re: read lucene index in mapper

2013-06-11 Thread Azuryy Yu
You need to add the lucene index tar.gz to the distributed cache as an archive, then create an index reader in the mapper's setup. --Send from my Sony mobile. On Jun 12, 2013 12:50 AM, "parnab kumar" wrote: > Hi , > > I need to read an existing lucene index in a map. Can someone point > me to the r
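
A hedged sketch of that flow (the HDFS path is hypothetical; assumes Hadoop 2's Job.addCacheArchive and a Lucene 4.x-style API):

    import java.io.File;
    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.store.FSDirectory;

    public class IndexReadingMapper extends Mapper<LongWritable, Text, Text, Text> {
      private DirectoryReader reader;

      @Override
      protected void setup(Context context) throws IOException {
        // Driver ships the packed index with:
        //   job.addCacheArchive(new URI("hdfs:///indexes/index.tar.gz#index"));
        // The framework unpacks it and symlinks it as "index" in the task cwd.
        reader = DirectoryReader.open(FSDirectory.open(new File("index")));
      }

      @Override
      protected void cleanup(Context context) throws IOException {
        reader.close();
      }
    }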

Re: how to design the mapper and reducer for the below problem

2013-06-14 Thread Azuryy Yu
This is a graph problem; you want to find all connected subgraphs, so I don't think it's easy using Map/Reduce. But you can try YARN; it supports iteration easily, at least compared with M/R. On Fri, Jun 14, 2013 at 12:41 PM, parnab kumar wrote: > Consider a following input file of format : > inpu

Re: Error in command: bin/hadoop fs -put conf input

2013-06-15 Thread Azuryy Yu
From the log, there is no room on HDFS. --Send from my Sony mobile. On Jun 16, 2013 5:12 AM, "sumit piparsania" wrote: > Hi, > > I am getting the below error while executing the command. Kindly assist me > in resolving this issue. > > > $ bin/hadoop fs -put conf input > bin/hadoop: line 320:

Re: how to close hadoop when tmp files were cleared

2013-06-17 Thread Azuryy Yu
You must use the same account that started your cluster. On Mon, Jun 17, 2013 at 3:39 PM, wrote: > > Hi, > > My hadoop cluster has been running for a period of time. Now I want to > close it for some system changes. But the command "bin/stop-all.sh" shows > "no jobtracker to stop","no tasktrac

Re: how to close hadoop when tmp files were cleared

2013-06-17 Thread Azuryy Yu
Run ps aux | grep java to find the pid, then just 'kill -9' to stop the Hadoop process. On Mon, Jun 17, 2013 at 4:34 PM, Harsh J wrote: > Just send the processes a SIGTERM signal (regular kill). Its what the > script does anyway. Ensure to change the PID directory before the next > restart though

Re: how to close hadoop when tmp files were cleared

2013-06-17 Thread Azuryy Yu
Yes Harsh, It's my fault. On Mon, Jun 17, 2013 at 5:09 PM, Harsh J wrote: > Hey Azuryy, > > The -9 (SIGKILL) is unnecessary and isn't recommended unless its > unresponsive. The SIGTERM has an additional benefit of running any > necessary shutdown handling procedure

Re: How to fail the Name Node or how to crash the Name Node for testing Purpose.

2013-06-18 Thread Azuryy Yu
or "kill -9 namenode_pid" to simulate NN crashed. On Wed, Jun 19, 2013 at 2:42 PM, Azuryy Yu wrote: > $HADOOP_HOME/bin/hadoop-daemon.sh stop namenode > > > > On Wed, Jun 19, 2013 at 2:38 PM, Pavan Kumar Polineni < > smartsunny...@gmail.com> wrote: > >

Re: How to fail the Name Node or how to crash the Name Node for testing Purpose.

2013-06-18 Thread Azuryy Yu
$HADOOP_HOME/bin/hadoop-daemon.sh stop namenode On Wed, Jun 19, 2013 at 2:38 PM, Pavan Kumar Polineni < smartsunny...@gmail.com> wrote: > For Testing The Name Node Crashes and failures. For Single Point of Failure > > -- > Pavan Kumar Polineni >

Re: How to fail the Name Node or how to crash the Name Node for testing Purpose.

2013-06-18 Thread Azuryy Yu
Hey Pavan, Hadoop-2.* has HDFS HA; which hadoop version are you using? On Wed, Jun 19, 2013 at 2:46 PM, Pavan Kumar Polineni < smartsunny...@gmail.com> wrote: > I am checking for Cloudera only. But no HA? We just have a single Name node. > For testing purposes and preventing actions. Preparing e

Re: Setting up secure Hadoop using whosso

2013-06-19 Thread Azuryy Yu
Hey Tom, thanks for sharing. --Send from my Sony mobile. On Jun 19, 2013 11:31 PM, "Tom Tran" wrote: > Thanks for the reply, Harsh. > Anyway, I tested whosso on a small cluster and it worked so far. I > tested the Secondary namenode as well as webui login. > > I wrote a small blog on it. http:

Re: How Yarn execute MRv1 job?

2013-06-19 Thread Azuryy Yu
Hi Sam, please look at: http://hbase.apache.org/book.html#d2617e499 Generally, when we say YARN we mean Hadoop-2.x; you can download hadoop-2.0.4-alpha. And Hive-0.10 supports hadoop-2.x very well. On Thu, Jun 20, 2013 at 2:11 PM, sam liu wrote: > Thanks Arun! > > #1, Yes, I did tests and found that t

Re: How Yarn execute MRv1 job?

2013-06-20 Thread Azuryy Yu
> So, older versions of HBase and Hive, like HBase 0.94.0 and Hive 0.9.0, > do not support hadoop 2.x, right? > > Thanks! > > > 2013/6/20 Azuryy Yu > >> Hi Sam, >> please look at: http://hbase.apache.org/book.html#d2617e499 >> >> generally, we sa

HDFS upgrade under HA

2013-06-21 Thread Azuryy Yu
Hi, The layout version is -43 until 2.0.4-alpha, but HDFS-4908 changed the layout version to -45. So if my test cluster is running hadoop-2.0.4-alpha (-43), upgraded from hadoop-1.0.4, and I want to upgrade using trunk (-45), how do I do it? It cannot upgrade under HA, so I can use hadoop-1.0.4 c

Re: Inputformat

2013-06-22 Thread Azuryy Yu
You have to write a JSONInputFormat, or google first to find one. --Send from my Sony mobile. On Jun 23, 2013 7:06 AM, "jamal sasha" wrote: > Then how should I approach this issue? > > > On Fri, Jun 21, 2013 at 4:25 PM, Niels Basjes wrote: > >> If you try to hammer in a nail (json file) with a sc

Re: MapReduce job not running - i think i keep all correct configuration.

2013-06-23 Thread Azuryy Yu
Can you paste some error logs here? You can find them on the JT or TT. And tell us the hadoop version. On Sun, Jun 23, 2013 at 9:20 PM, Pavan Kumar Polineni < smartsunny...@gmail.com> wrote: > > Hi all, > > First I had a machine with all the daemons running on it. After that I > added two data

Re: which hadoop version i can choose in production env?

2013-06-24 Thread Azuryy Yu
I advise the community version of Hadoop-1.1.2, which is a stable release. Hadoop 2 has no stable release currently, even though all alpha releases were extensively tested. But personally, I think HDFS2 is stable now (no?), and MR1 is also stable, but Yarn still needs extensive tests (at least I think so), so our pro

Re: Re: Help about building a cluster on boxes which already have one?

2013-06-25 Thread Azuryy Yu
There is no MN; NM is node manager. --Send from my Sony mobile. On Jun 26, 2013 6:31 AM, "yuhe" wrote: > I plan to use CDH3u4, and what is MN? > > -- > Sent via Yuchs @2013-06-25 22:36 > http://www.yuchs.com > > > -- Original Message -- > user@hadoop.apache.org @Jun 25, 2013 15:12 > > What version of Hadoop are you p

Re: java.lang.UnsatisfiedLinkError - Unable to load libGfarmFSNative library

2013-06-26 Thread Azuryy Yu
From the log: libGfarmFSNative.so: libgfarm.so.1: cannot open shared object file: No such file or directory I don't think you put libgfarm.* under $HADOOP_HOME/lib/native/Linux-amd64-64 (Linux-i386-32 if running on a 32-bit OS) on all nodes. On Thu, Jun 27, 2013 at 10:44 AM, Harsh J wrote: >

Re: Could we use the same identity store for user groups mapping in MIT Kerberos + OpenLDAP setup

2013-06-28 Thread Azuryy Yu
You can try whosso, which is simpler than Kerberos. --Send from my Sony mobile. On Jun 29, 2013 7:29 AM, "Zheng, Kai" wrote: > Hi all, > > I have a setup using MIT Kerberos with OpenLDAP as the user database. It’s > desired to use the same user database that holds all the kinit prin

Re: data loss after cluster wide power loss

2013-07-01 Thread Azuryy Yu
how to enable "sync on block close" in HDFS? --Send from my Sony mobile. On Jul 2, 2013 6:47 AM, "Lars Hofhansl" wrote: > HBase is interesting here, because it rewrites old data into new files. So > a power outage by default would not just lose new data but potentially old > data as well. > You
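
For later readers: the switch usually meant by "sync on block close" is the datanode property dfs.datanode.synconclose (an assumption that your build includes HDFS-1539). A sketch for hdfs-site.xml:

    <!-- fsync block files and metadata when a block is closed.
         Trades some write latency for durability across power loss. -->
    <property>
      <name>dfs.datanode.synconclose</name>
      <value>true</value>
    </property>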

Re: Job level parameters

2013-07-01 Thread Azuryy Yu
They are all listed in mapred-default.xml, and there are detailed descriptions. On Tue, Jul 2, 2013 at 11:14 AM, Felix.徐 wrote: > Hi all, > > Is there a detailed list or document about the job-specific parameters of > mapreduce? > > Thanks!

Re: reply: a question about dfs.replication

2013-07-01 Thread Azuryy Yu
It's not an HDFS issue. dfs.replication is a client-side configuration, not server-side, so you need to set it to '2' on the client side (where your application runs), THEN execute a command such as hdfs dfs -put, or call the HDFS API in a java application. On Tue, Jul 2, 2013 at 12:25 PM, Francis.Hu
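
From the shell, "hadoop fs -D dfs.replication=2 -put localfile /dst" should set it per command, since FsShell parses generic options. In a Java client, a sketch (paths are hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PutWithReplication {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Client-side value, applied when the file is created;
        // it does not rewrite existing files' replication.
        conf.setInt("dfs.replication", 2);
        FileSystem fs = FileSystem.get(conf);
        fs.copyFromLocalFile(new Path("localfile"), new Path("/user/francis/"));
      }
    }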
