Re: CPU utilization keeps increasing when using HDFS

2014-09-01 Thread Stanley Shi
conclusion that the CPU utilization is related to HDFS. We want to know whether this issue is really related to HDFS, and whether there is any solution to fix it? Thanks a lot! BR/Shiyuan -- Regards, *Stanley Shi,*

Re: Replication factor affecting write performance

2014-09-01 Thread Stanley Shi
are scanned for all known viruses. Always scan attachments before opening them. -- Regards, *Stanley Shi,*

Re: Local file system to access hdfs blocks

2014-08-29 Thread Stanley Shi
:blk_1073742025 1073742025 *seems the BlockLocation doesn't provide the public info here. http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/BlockLocation.html is there another entry point? something fsck is using? thanks Demai On Wed, Aug 27, 2014 at 11:09 PM, Stanley Shi s

Re: Local file system to access hdfs blocks

2014-08-28 Thread Stanley Shi
. Maybe there is an existing Hadoop API to return such info already? Demai on the run On Aug 26, 2014, at 9:14 PM, Stanley Shi s...@pivotal.io wrote: I am not sure this is what you want, but you can try this shell command: find [DATANODE_DIR] -name [blockname] On Tue, Aug 26, 2014
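The `find [DATANODE_DIR] -name [blockname]` suggestion above can be mirrored in plain Java NIO when no shell or Hadoop client is at hand. A minimal sketch; the datanode data directory (whatever `dfs.datanode.data.dir` points to) and the block name are placeholder assumptions:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class FindBlockFile {
    // Walk a datanode data directory and collect files whose name starts
    // with the given block name (e.g. "blk_1073742025"); this matches both
    // the block file itself and its .meta companion.
    static List<Path> findBlock(Path dataDir, String blockName) throws IOException {
        List<Path> hits = new ArrayList<>();
        try (Stream<Path> files = Files.walk(dataDir)) {
            files.filter(Files::isRegularFile)
                 .filter(p -> p.getFileName().toString().startsWith(blockName))
                 .forEach(hits::add);
        }
        return hits;
    }

    public static void main(String[] args) throws IOException {
        // "/data/dfs/dn" is a made-up example path, not a Hadoop default.
        Path dataDir = Paths.get(args.length > 0 ? args[0] : "/data/dfs/dn");
        for (Path p : findBlock(dataDir, "blk_1073742025")) {
            System.out.println(p);
        }
    }
}
```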

Re: What happens when .....?

2014-08-28 Thread Stanley Shi
job, would it only work on the files present at that point ?? Regards, Nikhil Kandoi -- Regards, *Stanley Shi,*

Re: Appending to HDFS file

2014-08-28 Thread Stanley Shi
= org.apache.hadoop.fs.FileSystem.get(configuration); FSDataOutputStream fp = fs.create(pt, true) fp ${key} ${value}\n On 27 Aug 2014 09:46, Stanley Shi s...@pivotal.io wrote: would you please past the code in the loop? On Sat, Aug 23, 2014 at 2:47 PM, rab ra rab...@gmail.com wrote: Hi By default
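For context, the loop in question keeps calling fs.create(pt, true), and create with overwrite=true truncates the file on every call, which is why only the last write survives; FileSystem.append is the call that adds to an existing file. The same overwrite-versus-append distinction can be demonstrated with plain local Java I/O (a sketch of the semantics, not the HDFS API itself):

```java
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.*;
import java.util.List;

public class AppendVsOverwrite {
    // Write three lines, reopening the file each time, like the loop in the
    // question. overwrite=true mirrors fs.create(path, true): each reopen
    // truncates, so only the last line survives. overwrite=false mirrors
    // fs.append(path): each reopen adds to the end.
    static List<String> writeThreeLines(boolean overwrite) throws IOException {
        Path f = Files.createTempFile("demo", ".txt");
        for (int i = 0; i < 3; i++) {
            try (FileWriter w = new FileWriter(f.toFile(), /*append=*/!overwrite)) {
                w.write("line " + i + "\n");
            }
        }
        return Files.readAllLines(f);
    }

    public static void main(String[] args) throws IOException {
        System.out.println("overwrite kept: " + writeThreeLines(true));   // prints [line 2]
        System.out.println("append kept:    " + writeThreeLines(false));  // prints [line 0, line 1, line 2]
    }
}
```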

Re: Hadoop 2.5.0 - HDFS browser-based file view

2014-08-28 Thread Stanley Shi
view a file in the browser. Clicking a file gives a little popup with metadata and a download link. Can HDFS be configured to show plaintext file contents in the browser? Thanks, Brian -- Regards, *Stanley Shi,*

Re: Hadoop on Safe Mode because Resources are low on NameNode

2014-08-26 Thread Stanley Shi
about this issue? Thankfully, Vincent. -- Regards, *Stanley Shi,*

Re: Local file system to access hdfs blocks

2014-08-26 Thread Stanley Shi
...] With such info, is there a way to 1) login to hfds01, and read the block directly at local file system level? Thanks Demai on the run -- Regards, *Stanley Shi,*

Re: Appending to HDFS file

2014-08-26 Thread Stanley Shi
, appends some information. This is not working and I see only the last write. How do I accomplish append operation in hadoop? Can anyone share a pointer to me? regards Bala -- Regards, *Stanley Shi,*

Re: issue about distcp Source and target differ in block-size. Use -pb to preserve block-sizes during copy.

2014-07-25 Thread Stanley Shi
Your client side was running at 14/07/24 18:35:58 INFO mapreduce.Job: T***, But you are pasting NN log at 2014-07-24 17:39:34,255; By the way, which version of HDFS are you using? Regards, *Stanley Shi,* On Fri, Jul 25, 2014 at 10:36 AM, ch huang justlo...@gmail.com wrote: 2014-07-24 17:33

Re: issue about distcp Source and target differ in block-size. Use -pb to preserve block-sizes during copy.

2014-07-24 Thread Stanley Shi
Would you please also paste the corresponding namenode log? Regards, *Stanley Shi,* On Fri, Jul 25, 2014 at 9:15 AM, ch huang justlo...@gmail.com wrote: hi,maillist: i try to copy data from my old cluster to the new cluster, i get an error, how to handle this? 14/07/24 18:35:58 INFO

Re: issue about run MR job use system user

2014-07-24 Thread Stanley Shi
The user 'alex' should belong to the 'hadoop' group on the namenode; Regards, *Stanley Shi,* On Thu, Jul 24, 2014 at 10:11 PM, java8964 java8...@hotmail.com wrote: Are you sure user 'Alex' belongs to the 'hadoop' group? Why not run the command 'id alex' to prove it? And 'Alex' belonging to the 'hadoop' group can

Re: Decommissioning a data node and problems bringing it back online

2014-07-23 Thread Stanley Shi
which distribution are you using? Regards, *Stanley Shi,* On Thu, Jul 24, 2014 at 4:38 AM, andrew touchet adt...@latech.edu wrote: I should have added this in my first email but I do get an error in the data node's log file '2014-07-12 19:39:58,027 INFO

Re: how to reduce delay in HDFS restart

2014-07-23 Thread Stanley Shi
Do you have a secondary namenode running? The secondary NN is used for exactly this purpose. Also, if you have HDFS HA enabled, this problem will not occur. Regards, *Stanley Shi,* On Tue, Jul 22, 2014 at 7:24 AM, Anfernee Xu anfernee...@gmail.com wrote: Hi, For some reason, all PIDs file

Re: Huge text file for Hadoop Mapreduce

2014-07-09 Thread Stanley Shi
You can get the wikipedia data from its website; it's pretty big. Regards, *Stanley Shi,* On Tue, Jul 8, 2014 at 1:35 PM, Du Lam delim123...@gmail.com wrote: Configuration conf = getConf(); conf.setLong("mapreduce.input.fileinputformat.split.maxsize", 1000); // u can set this to some

Re: How do I recover the namenode?

2014-07-09 Thread Stanley Shi
will report that blocks are missing; 1. Since you're using HA, lots of the editlogs are stored in the journal node; the fsimage you have may not be exactly the one you want; Regards, *Stanley Shi,* On Tue, Jul 8, 2014 at 8:12 AM, cho ju il tjst...@kgrid.co.kr wrote: Thank you for answer

Re: Managed File Transfer

2014-07-09 Thread Stanley Shi
There's the DistCp utility for this kind of purpose; there's also Spring XD, but I am not sure you want to use it. Regards, *Stanley Shi,* On Mon, Jul 7, 2014 at 10:02 PM, Mohan Radhakrishnan radhakrishnan.mo...@gmail.com wrote: Hi, We used a commercial FT and scheduler

Re: How to recover reducer task data on a different data node?

2014-07-03 Thread Stanley Shi
It will start from scratch to copy all map outputs from all mapper nodes. Regards, *Stanley Shi,* On Thu, Jul 3, 2014 at 2:28 PM, James Teng tenglinx...@outlook.com wrote: First I would like to declare that although I am not new to hadoop, I am not an expert on it either. I would like

Re: How to upgrade Hadoop 2.2 to 2.4

2014-06-24 Thread Stanley Shi
Upgrading a running cluster is only supported from 2.4 onward; that is, upgrading from 2.4 to 2.4+ is supported, while upgrading from 2.2 to 2.4 is not supported. Regards, *Stanley Shi,* On Fri, Jun 20, 2014 at 5:50 PM, Jason Meng neu...@126.com wrote: Hi, I setup Hadoop 2.2 cluster with NameNode HA. How

Re: grouping similar items toegther

2014-06-24 Thread Stanley Shi
Similarity is not transitive; that means if a is similar to b, and b is similar to c, a may still not be similar to c. Then how do you do the grouping? Regards, *Stanley Shi,* On Sat, Jun 21, 2014 at 2:51 AM, parnab kumar parnab.2...@gmail.com wrote: Hi, I have a set of hashes. Each
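One common answer when similarity is not transitive is to group by connected components: a and c land in the same group as soon as a chain of pairwise-similar items links them, even if a and c are not directly similar. A small union-find sketch; the item count and the pair list are made up for illustration:

```java
import java.util.*;

public class SimilarityGroups {
    private final int[] parent;

    SimilarityGroups(int n) {
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
    }

    int find(int x) {                       // root lookup with path halving
        while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
        return x;
    }

    void union(int a, int b) { parent[find(a)] = find(b); }  // "a is similar to b"

    public static void main(String[] args) {
        SimilarityGroups g = new SimilarityGroups(5);
        g.union(0, 1);   // a ~ b
        g.union(1, 2);   // b ~ c  => a, b, c end up in one group
        // items 3 and 4 were never paired, so they stay in their own groups
        Map<Integer, List<Integer>> groups = new TreeMap<>();
        for (int i = 0; i < 5; i++)
            groups.computeIfAbsent(g.find(i), k -> new ArrayList<>()).add(i);
        System.out.println(groups.values());  // prints [[0, 1, 2], [3], [4]]
    }
}
```

Whether that grouping is acceptable depends on the application: connected components can chain very dissimilar items together, so some pipelines instead pick a canonical representative per group and re-check similarity against it.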

Re: should i just assign history server address on NN or i have to assign on each node?

2014-06-03 Thread Stanley Shi
You should set it on the RM node. Regards, *Stanley Shi,* On Wed, Jun 4, 2014 at 9:24 AM, ch huang justlo...@gmail.com wrote: hi,maillist: i installed my job history server on one of my NNs (i use NN HA), i want to ask if i need to set the history server address on each node?

Re: Problems Starting NameNode on hadoop-2.2.0

2014-06-02 Thread Stanley Shi
Another possible reason is that you are not using the correct conf file; Regards, *Stanley Shi,* On Tue, Jun 3, 2014 at 6:53 AM, Rajat Jain rajat...@gmail.com wrote: Have you tried setting fs.defaultFS with the same value? On Sat, May 31, 2014 at 11:22 AM, ishan patwa riddle4...@gmail.com

Re: Issue with conf.set and conf.get method

2014-05-21 Thread Stanley Shi
Are you trying to pass arguments from user input, reading input from stdin? I suggest you use some special character as a placeholder; for example, let args[2] == TAB, and in your program convert this TAB to the real delimiter you want to use (\t). Regards, *Stanley Shi,* On Wed, May 21
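The suggestion above — passing a printable placeholder like TAB on the command line and translating it inside the driver — can be sketched as follows; the placeholder names are an illustration, not a Hadoop convention:

```java
public class DelimiterArg {
    // Map a user-friendly placeholder from args[] to the real delimiter,
    // since a literal tab is awkward to type and "\t" typed in a shell
    // arrives as the two characters backslash and t.
    static String toDelimiter(String arg) {
        switch (arg) {
            case "TAB":   return "\t";
            case "COMMA": return ",";
            case "\\t":   return "\t";   // the two-character sequence backslash-t
            default:      return arg;    // pass anything else through unchanged
        }
    }

    public static void main(String[] args) {
        String delim = toDelimiter(args.length > 2 ? args[2] : "TAB");
        // In a real driver you would now store it, e.g. conf.set("my.delimiter", delim),
        // and read it back in the mapper; "my.delimiter" is a made-up key.
        System.out.println("delimiter length: " + delim.length());
    }
}
```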

Re: Issue with conf.set and conf.get method

2014-05-21 Thread Stanley Shi
Regards, *Stanley Shi,* On Thu, May 22, 2014 at 10:46 AM, Stanley Shi s...@gopivotal.com wrote: seems my guess is correct; I mean in your program, you can call: hadoop jar myjar.jar input output instead you use: hadoop jar myjar.jar input output TAB or hadoop jar

Re: Container request with short hostname vs full hostname issue

2014-05-19 Thread Stanley Shi
it will wait for some other hosts to register with the short name (which will not happen); Regards, *Stanley Shi,* On Mon, May 19, 2014 at 10:24 PM, REYANE OUKPEDJO r.oukpe...@yahoo.com wrote: There seems to be an issue with the container request when specifying the host name for the container. We

Re: hadoop 2.2.0 nodemanager graceful stop

2014-05-16 Thread Stanley Shi
yarn-daemon.sh stop nodemanager? Regards, *Stanley Shi,* On Wed, May 7, 2014 at 11:48 AM, Henry Hung ythu...@winbond.com wrote: Is there a way to gracefully stop a nodemanager? I want to make some yarn-site.xml and mapred-site.xml configuration changes, and need to restart

Re: Rack awareness and pipeline write

2014-05-12 Thread Stanley Shi
in some case you may not find the third node to place replica. Regards, *Stanley Shi,* On Sun, May 11, 2014 at 10:55 AM, jianan hu hujia...@gmail.com wrote: Hi everyone, See HDFS documents, It says For the common case, when the replication factor is three, HDFS’s placement policy

Re: writing multiple files on hdfs

2014-05-12 Thread Stanley Shi
Yes, why not? Regards, *Stanley Shi,* On Sun, May 11, 2014 at 9:57 PM, Karim Awara karim.aw...@kaust.edu.sa wrote: Hi, Can I open multiple files on hdfs and write data to them in parallel and then close them at the end? -- Best Regards, Karim Ahmed Awara

Re: No job can run in YARN (Hadoop-2.2)

2014-05-12 Thread Stanley Shi
The FileNotFoundException doesn't mean anything in the pi program. If you have some error and the program didn't run successfully, it will always throw this exception. What do you have in the opts? Regards, *Stanley Shi,* On Mon, May 12, 2014 at 2:09 PM, Tao Xiao xiaotao.cs@gmail.com wrote

Re: Installing both hadoop1 and haddop2 on single node

2014-05-05 Thread Stanley Shi
Please be sure to use a different HADOOP_CONF_DIR for each of the two versions; and also in the configuration, be sure to use different folders to store the HDFS-related files. Regards, *Stanley Shi,* On Tue, May 6, 2014 at 8:41 AM, Shengjun Xin s...@gopivotal.com wrote: According to your description, I

Re: Client usage with multiple clusters

2014-04-25 Thread Stanley Shi
... Regards, *Stanley Shi,* On Fri, Apr 18, 2014 at 7:42 AM, david marion dlmar...@hotmail.com wrote: I'm having an issue in client code where there are multiple clusters with HA namenodes involved. Example setup using Hadoop 2.3.0: Cluster A with the following properties defined in core

Re: 答复: hdfsConnect always success

2014-04-25 Thread Stanley Shi
NoRouteToHost: please check your network settings. Regards, *Stanley Shi,* On Fri, Apr 18, 2014 at 3:42 PM, td...@126.com wrote: Hi, No errors in hdfsConnect(). But if I call hdfsCreateDirectory() after hdfsConnect(), I got errors as follows: hdfsCreateDirectory(/tmp/root/0629

Re: Re: Hadoop NoClassDefFoundError

2014-04-15 Thread Stanley Shi
Can you do an unzip -l myjob.jar to see if your jar file has the correct hierarchy? Regards, *Stanley Shi,* On Tue, Apr 15, 2014 at 6:53 PM, laozh...@sina.cn laozh...@sina.cn wrote: Thank you for your advice. When I use your command, I get the below error info. $ hadoop jar myjob.jar

Re: Setting debug log level for individual daemons

2014-04-15 Thread Stanley Shi
Is this what you are looking for? http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/CommandsManual.html#daemonlog Regards, *Stanley Shi,* On Wed, Apr 16, 2014 at 2:06 AM, Ashwin Shankar ashwinshanka...@gmail.com wrote: Thanks Gordon and Stanley, but this would require us

Re: Resetting dead datanodes list

2014-04-14 Thread Stanley Shi
I believe there's some command to show the list of datanodes from the CLI; parsing HTML is not a good idea, since HTML pages are intended to be read by humans. I also don't know how to refresh the node list. Regards, *Stanley Shi,* On Sat, Apr 12, 2014 at 11:31 AM, Ashwin Shankar ashwinshanka

Re: heterogeneous storages in HDFS

2014-04-14 Thread Stanley Shi
Please find it in this page: https://wiki.apache.org/hadoop/Roadmap hadoop 2.3.0 only includes phase 1 of the heterogeneous storage; phase 2 will be included in 2.5.0. Regards, *Stanley Shi,* On Mon, Apr 14, 2014 at 4:38 PM, ascot.m...@gmail.com ascot.m...@gmail.com wrote: hi, From 2.3.0

Re: heterogeneous storages in HDFS

2014-04-14 Thread Stanley Shi
wrote: When is hadoop released? 2014-04-14 17:04 GMT+08:00 Stanley Shi s...@gopivotal.com: Please find it in this page: https://wiki.apache.org/hadoop/Roadmap hadoop 2.3.0 only includes phase 1 of the heterogeneous storage; phase 2

Re: Setting debug log level for individual daemons

2014-04-14 Thread Stanley Shi
, Ashwin -- Regards, *Stanley Shi,*

Re: Time taken to do a word count on 10 TB data.

2014-04-14 Thread Stanley Shi
be just to analyze the above size of data. Regards Shashidhar -- Regards, *Stanley Shi,*

Re: 2-node cluster

2014-04-14 Thread Stanley Shi
You can just follow any instruction on deploying distributed cluster, just put several different services on the same host; Regards, *Stanley Shi,* On Tue, Apr 15, 2014 at 12:02 PM, Mohan Radhakrishnan radhakrishnan.mo...@gmail.com wrote: Hi, I have 2 nodes, one is OSX

Re: can't copy between hdfs

2014-04-13 Thread Stanley Shi
Not sure if this helps, but copyFromLocal just writes data from the current client machine to hdfs, while distcp starts a mapreduce job to do the copy; that means the NodeManager/TaskTracker machines need to write data to the remote hdfs cluster. Regards, *Stanley Shi,* On Sun, Apr 13

Re: job getting hung

2014-04-13 Thread Stanley Shi
Do you have any node manager running? Regards, *Stanley Shi,* On Mon, Apr 14, 2014 at 11:37 AM, Rahul Singh smart.rahul.i...@gmail.com wrote: Hi, I am trying to run the wordcount example (the input file contains just a few words) but the job seems to be stuck. How do I debug what went wrong? [hduser

Re: how can i archive old data in HDFS?

2014-04-10 Thread Stanley Shi
AFAIK, no tools now. Regards, *Stanley Shi,* On Fri, Apr 11, 2014 at 9:09 AM, ch huang justlo...@gmail.com wrote: hi,maillist: how can i archive old data in HDFS ,i have lot of old data ,the data will not be use ,but it take lot of space to store it ,i want to archive and zip

Re: HDFS with JBOD configuration

2014-04-03 Thread Stanley Shi
AFAIK, the data node will be marked dead since it cannot handle a single disk failure (if it cannot write data to a disk, it will fail). But I'm not sure whether this situation has changed. Regards, *Stanley Shi,* On Thu, Apr 3, 2014 at 7:29 PM, Bharath Kumar bharath...@gmail.com wrote: Hi All, I have

Re: hadoop 2.2.0 document download

2014-04-02 Thread Stanley Shi
what do you mean by document? are you looking for this? http://hadoop.apache.org/docs/r2.2.0/api/index.html Regards, *Stanley Shi,* On Thu, Apr 3, 2014 at 10:53 AM, EdwardKing zhan...@neusoft.com wrote: I look through http://hadoop.apache.org/docs/r2.2.0/ ,but I don't find any url about

Re: how to solve reducer memory problem?

2014-04-02 Thread Stanley Shi
This doesn't seem related to the data size. How much memory do you use for the reducer? Regards, *Stanley Shi,* On Thu, Apr 3, 2014 at 8:04 AM, Li Li fancye...@gmail.com wrote: I have a map reduce program that does some matrix operations. in the reducer, it will average many large
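If the reducer is averaging many large vectors, one way to keep memory flat regardless of how many values arrive is an incremental (running) mean, which holds only one vector at a time. A sketch of the idea, assuming fixed-dimension vectors; the sample values are made up:

```java
import java.util.Arrays;

public class RunningMean {
    // Incremental mean: after the k-th vector v, mean += (v - mean) / k.
    // Memory use is a single vector, independent of how many vectors arrive,
    // unlike buffering everything and averaging at the end.
    static double[] meanOf(Iterable<double[]> vectors, int dim) {
        double[] mean = new double[dim];
        long k = 0;
        for (double[] v : vectors) {
            k++;
            for (int i = 0; i < dim; i++) mean[i] += (v[i] - mean[i]) / k;
        }
        return mean;
    }

    public static void main(String[] args) {
        Iterable<double[]> vs = Arrays.asList(
                new double[]{1, 2}, new double[]{3, 4}, new double[]{5, 6});
        System.out.println(Arrays.toString(meanOf(vs, 2)));  // prints [3.0, 4.0]
    }
}
```

In a reducer this maps naturally onto iterating the values of one key: update the running mean per value instead of collecting the values into a list first.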

Re: number of map tasks on yarn

2014-04-01 Thread Stanley Shi
The number of map tasks is not decided by the resources you request; it's decided by the number of input splits. Regards, *Stanley Shi,* On Wed, Apr 2, 2014 at 9:08 AM, Libo Yu yu_l...@hotmail.com wrote: Hi all, I pretty much use the default yarn setting to run a word count example on a 3 node cluster. Here

Re: How to get locations of blocks programmatically?‏

2014-03-31 Thread Stanley Shi
FileSystem.getFileBlockLocations(...) Regards, *Stanley Shi,* On Fri, Mar 28, 2014 at 10:03 AM, Libo Yu yu_l...@hotmail.com wrote: Hi all, hadoop path fsck -files -block -locations can list locations for all blocks in the path. Is it possible to list all blocks and the block locations

Re: Hadoop 2.2.0 Distributed Cache

2014-03-26 Thread Stanley Shi
where did you get the error? from the compiler or the runtime? Regards, *Stanley Shi,* On Thu, Mar 27, 2014 at 7:34 AM, Jonathan Poon jkp...@ucdavis.edu wrote: Hi Everyone, I'm submitting a MapReduce job using the -files option to copy a text file that contains properties I use

Re: how to read custom writable object from hdfs use java api?

2014-03-25 Thread Stanley Shi
org.apache.hadoop.mapreduce.lib.input.TextInputFormat to read each record from you output file. Regards, *Stanley Shi,* On Tue, Mar 25, 2014 at 9:07 PM, Li Li fancye...@gmail.com wrote: I have map-reduce job to output my custom writable objects, how can I read it using pure java api? I don't want to serialize it to string(Text

Re: I am about to lose all my data please help

2014-03-23 Thread Stanley Shi
Can you confirm that your namenode fsimage and edit logs are still there? If not, then your data IS lost. Regards, *Stanley Shi,* On Sun, Mar 23, 2014 at 6:24 PM, Fatih Haltas fatih.hal...@nyu.edu wrote: No, not of course, I blinded it. On Wed, Mar 19, 2014 at 5:09 PM, praveenesh kumar praveen

Re: DataNode failed to start due to /usr/lib/hadoop-hdfs/bin/hdfs not exist

2014-03-23 Thread Stanley Shi
Seems like a vendor-specific problem; ask this in the HDP forum. Regards, *Stanley Shi,* On Mon, Mar 24, 2014 at 9:50 AM, Anfernee Xu anfernee...@gmail.com wrote: Hi, All dataNodes in my cluster failed to start due to the below error notice: /Stage[2]/Hdp-hadoop::Datanode/Hdp-hadoop::Service

Re: Need FileName with Content

2014-03-20 Thread Stanley Shi
Just reviewed the code again: you are not really using map-reduce; you are reading all files in one map process, which is not how a normal map-reduce job works. Regards, *Stanley Shi,* On Thu, Mar 20, 2014 at 1:50 PM, Ranjini Rathinam ranjinibe...@gmail.com wrote: Hi, If we give the below code

Re: Need FileName with Content

2014-03-20 Thread Stanley Shi
()) { word.set(pp.getName() + + itr.nextToken()); context.write(word, one); } } } Note: add your filtering code here; and then when running the command, use you input path as param; Regards, *Stanley Shi,* On Fri, Mar 21, 2014 at 9:32 AM, Stanley Shi s...@gopivotal.com

Re: Need FileName with Content

2014-03-19 Thread Stanley Shi
You want to do a word count for each file, but the code gives you a word count for all the files, right? = word.set(tokenizer.nextToken()); output.collect(word, one); == change it to: word.set(filename++tokenizer.nextToken()); output.collect(word,one); Regards, *Stanley
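The suggested change — prefixing each token with the file name so counts are kept per file — can be sketched like this; the separator character and the way the file name is obtained are assumptions (in a real mapper you would typically read the file name from the input split):

```java
import java.util.*;

public class PerFileWordCount {
    // Build "filename|word" composite keys so the same word in different
    // files is counted separately. "|" is an arbitrary separator choice;
    // pick one that cannot appear in your file names.
    static List<String> keysFor(String filename, String line) {
        List<String> keys = new ArrayList<>();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            keys.add(filename + "|" + tokenizer.nextToken());
        }
        return keys;
    }

    public static void main(String[] args) {
        System.out.println(keysFor("a.txt", "hello world hello"));
        // prints [a.txt|hello, a.txt|world, a.txt|hello]
    }
}
```

With keys shaped this way, the unchanged sum-reducer produces per-file counts, and the file name can be split back off the key when formatting the output.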

Re: I am about to lose all my data please help

2014-03-18 Thread Stanley Shi
Ah yes, I overlooked this. Then please check whether the files are there or not: ls /home/hadoop/project/hadoop-data/dfs/name Regards, *Stanley Shi,* On Tue, Mar 18, 2014 at 2:06 PM, Azuryy Yu azury...@gmail.com wrote: I don't think this is the case, because there is; property

Re: Problem of installing HDFS-385 and the usage

2014-03-18 Thread Stanley Shi
/BlockPlacementPolicyWithNodeGroup.java --If I want to control the block placement then I have to write code rather than type shell commands? If you want to implement your own logic on block placement, you have to write code. Regards, *Stanley Shi,* On Wed, Mar 19, 2014 at 3:07 AM, Eric Chiu ericchiu0

Re: how to unzip a .tar.bz2 file in hadoop/hdfs

2014-03-17 Thread Stanley Shi
Download it, unzip it, and put it back? Regards, *Stanley Shi,* On Fri, Mar 14, 2014 at 5:44 PM, Sai Sai saigr...@yahoo.in wrote: Can someone please help: How to unzip a .tar.bz2 file which is in hadoop/hdfs Thanks Sai

Re: Problem of installing HDFS-385 and the usage

2014-03-17 Thread Stanley Shi
it, you need to write your own policy, please see this JIRA for example: https://issues.apache.org/jira/browse/HDFS-3601 Regards, *Stanley Shi,* On Mon, Mar 17, 2014 at 11:31 AM, Eric Chiu ericchiu0...@gmail.com wrote: HI all, Could anyone tell me How to install and use this hadoop plug

Re: I am about to lose all my data please help

2014-03-17 Thread Stanley Shi
/property* Regards, *Stanley Shi,* On Sun, Mar 16, 2014 at 5:29 PM, Mirko Kämpf mirko.kae...@gmail.com wrote: Hi, what is the location of the namenodes fsimage and editlogs? And how much memory has the NameNode. Did you work with a Secondary NameNode or a Standby NameNode for checkpointing

Re: Process of files in mapreduce

2014-03-12 Thread Stanley Shi
For reading PDF in java, you may refer to this link: http://stackoverflow.com/questions/4784825/how-to-read-pdf-files-using-java in mapreduce, you can use the same code; except that each map() function processes one file; Regards, *Stanley Shi,* On Wed, Mar 12, 2014 at 4:53 PM, Ranjini

Re: In hadoop All racks belongs to same subnet !

2014-03-11 Thread Stanley Shi
There's no requirement that all nodes be on the same subnet. Regards, *Stanley Shi,* On Wed, Mar 12, 2014 at 1:31 PM, navaz navaz@gmail.com wrote: Hi all Question regarding hadoop architecture. Generally in a hadoop cluster, nodes are placed in racks and all the nodes are connected to a top-of-rack switch

Re: Adding new dataNode to existing cluster for hadoop2.2

2014-03-10 Thread Stanley Shi
just start the new node with the same configuration as on the namenode; after some time, you will see the new node in the list. Regards, *Stanley Shi,* On Tue, Mar 11, 2014 at 9:07 AM, Parmeet delhi...@yahoo.com wrote: Hello, I am trying to add a new dataNode to existing hadoop cluster you would

Re: What's the best practice for managing Hadoop dependencie?

2014-03-09 Thread Stanley Shi
Waiting for others to give the best practice. I think you can use eclipse to manage the maven project and see the full dependency hierarchy; if some jar (for example, guava) exists in both the hadoop dependency chain and your own requirements, set your requirement's scope to provided. Regards, *Stanley Shi

Re:

2014-03-06 Thread Stanley Shi
Maybe your console and browser are using different settings, would you please try wget http://repo.maven.apache.org/maven2/org/apache/felix/maven-bundle-plugin/2.4.0/maven-bundle-plugin-2.4.0.pom ? Regards, *Stanley Shi,* On Wed, Mar 5, 2014 at 6:59 PM, Avinash Kujur avin...@gmail.com wrote

Re: MR2 Job over LZO data

2014-03-06 Thread Stanley Shi
Maybe you can try downloading the LZO source and rebuilding it against Hadoop 2.2.0; if the build succeeds, you should be good to go; if it fails, then maybe you need to wait for the LZO guys to update their code. Regards, *Stanley Shi,* On Thu, Mar 6, 2014 at 6:29 PM, KingDavies kingdav...@gmail.com wrote

Re: Fetching configuration values from cluster

2014-03-06 Thread Stanley Shi
You can read from http://resource-manager.host.ip:8088/conf This is an XML file in the standard Hadoop configuration format that you can use directly. Regards, *Stanley Shi,* On Fri, Mar 7, 2014 at 1:46 AM, John Lilley john.lil...@redpoint.net wrote: How would I go about fetching configuration values (e.g. yarn-site.xml) from
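Since the /conf endpoint serves standard Hadoop configuration XML (a configuration element containing property elements with name and value children), the response can be parsed with any XML parser. A sketch that parses such a document from a string; the HTTP fetch itself is omitted, and the sample property value is made up:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;

public class ConfParser {
    // Turn Hadoop-style <configuration> XML into a name -> value map.
    static Map<String, String> parse(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        Map<String, String> props = new LinkedHashMap<>();
        NodeList nodes = doc.getElementsByTagName("property");
        for (int i = 0; i < nodes.getLength(); i++) {
            Element p = (Element) nodes.item(i);
            props.put(p.getElementsByTagName("name").item(0).getTextContent(),
                      p.getElementsByTagName("value").item(0).getTextContent());
        }
        return props;
    }

    public static void main(String[] args) throws Exception {
        // Sample document shaped like the RM's /conf output; the value is made up.
        String xml = "<configuration>"
                + "<property><name>yarn.resourcemanager.address</name>"
                + "<value>rm-host:8032</value></property>"
                + "</configuration>";
        System.out.println(parse(xml));  // prints {yarn.resourcemanager.address=rm-host:8032}
    }
}
```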

Re: How to solve it ? java.io.IOException: Failed on local exception

2014-03-05 Thread Stanley Shi
Which version of hadoop are you using? This is something similar to your error log: http://stackoverflow.com/questions/19895969/can-access-hadoop-fs-through-shell-but-not-through-java-main Regards, *Stanley Shi,* On Wed, Mar 5, 2014 at 4:29 PM, 张超 chao.zh...@dianping.com wrote: Hi all

Re: [hadoop] AvroMultipleOutputs org.apache.avro.file.DataFileWriter$AppendWriteException

2014-03-04 Thread Stanley Shi
Which version of hadoop are you using? There's a possibility that the hadoop environment already has an avro*.jar in place, thus causing the jar conflict. Regards, *Stanley Shi,* On Tue, Mar 4, 2014 at 11:25 PM, John Pauley john.pau...@threattrack.com wrote: Outside hadoop: avro-1.7.6

Re: [hadoop] AvroMultipleOutputs org.apache.avro.file.DataFileWriter$AppendWriteException

2014-03-03 Thread Stanley Shi
which avro version are you using when running outside of hadoop? Regards, *Stanley Shi,* On Mon, Mar 3, 2014 at 11:49 PM, John Pauley john.pau...@threattrack.com wrote: This is cross posted to avro-user list ( http://mail-archives.apache.org/mod_mbox/avro-user/201402.mbox/%3ccf3612f6.94d2

Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

2014-03-03 Thread Stanley Shi
<dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-core</artifactId> <version>1.2.1</version> </dependency> Regards, *Stanley Shi,* On Tue, Mar 4, 2014 at 1:15 AM, Margusja mar...@roo.ee wrote: Hi, 2.2.0 and 2.3.0 gave me the same container log. A little bit more details. I'll try

Re: why jobtracker.jsp can't work?

2014-03-02 Thread Stanley Shi
In Hadoop 2.2, there's no actual jobtracker running; you may want to access the Resource Manager Web UI instead: http://172.11.12.6:8088/ Regards, *Stanley Shi,* On Mon, Mar 3, 2014 at 2:07 PM, EdwardKing zhan...@neusoft.com wrote: I use Hadoop 2.2 and I want to run the MapReduce web UI, so I visit