How can I unsubscribe from this list?

2013-12-28 Thread Alex Luya
Hello, I can't find an email address like hdfs-u...@hadoop.apach.org.

Re: How can I unsubscribe from this list?

2013-12-28 Thread Alex Luya
Sorry, I don't understand what you mean. On 12/28/2013 04:52 PM, r...@fwpsystems.com wrote: Dear Sir or Madam, Mr. Pappert is no longer with our company. Your email will not be forwarded. Please send your email to i...@luenebits.de. You can reach us as follows

Re: How can I unsubscribe from this list?

2013-12-28 Thread Alex Luya
OK, maybe you are saying that address is deprecated, but I keep receiving email from hdfs-user@hadoop.apache.org. On 12/28/2013 04:54 PM, r...@fwpsystems.com wrote: Dear Sir or Madam, Mr. Pappert is no longer with our company. Your email will not be forwarded. Please

How to unsubscribe from this list?

2013-12-18 Thread Alex Luya
Hello. Can anybody tell me how to unsubscribe from this list?
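Apache mailing lists conventionally pair each list address with a `-unsubscribe` companion address. Assuming `hdfs-user@hadoop.apache.org` follows that convention (a sketch of the convention, not verified against the current list setup), the address to mail can be derived like this:

```shell
# Apache lists typically pair LIST@host with LIST-unsubscribe@host.
# Assumption: hdfs-user@hadoop.apache.org follows this convention.
LIST="hdfs-user"
HOST="hadoop.apache.org"
UNSUB="${LIST}-unsubscribe@${HOST}"
# Sending any email to this address from the subscribed account
# should trigger the unsubscribe confirmation exchange.
echo "$UNSUB"
```

Send an empty message to that address from the same account that is subscribed; the list manager replies with a confirmation request.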

how to unsubscribe from this list?

2013-03-14 Thread Alex Luya
I can't find a way to do it.

how can I unsubscribe from this list?

2013-03-12 Thread Alex Luya
can't find a way to unsubscribe from this list.

how to unsubscribe from this list?

2013-02-19 Thread Alex Luya
I can't find it by googling; has this list been renamed?

hadoop branch-0.20-append Build error: build.xml:933: exec returned: 1

2011-04-11 Thread Alex Luya
BUILD FAILED .../branch-0.20-append/build.xml:927: The following error occurred while executing this line: .../branch-0.20-append/build.xml:933: exec returned: 1 Total time: 1 minute 17 seconds + RESULT=1 + '[' 1 '!=' 0 ']' + echo 'Build Failed: 64-bit build not run' Build Failed: 64-bit

Build error:Target package-libhdfs does not exist in the project Hadoop.

2011-03-19 Thread Alex Luya
Got this error; my configuration is: I have checked out the source from https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-append/ and this is my build script: #!/bin/bash VERSION=0.20.0-append

subscribe

2011-03-16 Thread Alex Luya

cloudera CDH3 error: namenode running, but: Error: JAVA_HOME is not set and Java could not be found

2011-03-16 Thread Alex Luya
I downloaded the Cloudera CDH3 beta (hadoop-0.20.2+228) and modified three files: hdfs.xml, core-site.xml, and hadoop-env.sh. I did set JAVA_HOME in hadoop-env.sh, and then tried to run start-dfs.sh and got this error, but the strange thing is that the namenode is running. I can't understand why. Any help is
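A common cause of this symptom is that the start scripts launch remote daemons over ssh in a non-interactive shell, which does not read your login profile, so JAVA_HOME must be exported in conf/hadoop-env.sh on every node. A minimal sketch (the JDK path below is an example, not taken from the original report):

```shell
# Sketch: JAVA_HOME must be exported in conf/hadoop-env.sh on EVERY node,
# because start-dfs.sh starts remote daemons over ssh in a non-interactive
# shell that does not source ~/.bashrc or /etc/profile.
# Example JDK path only -- adjust to your installation.
JAVA_HOME="/usr/lib/jvm/java-6-sun"
# This is the line to put in conf/hadoop-env.sh:
echo "export JAVA_HOME=${JAVA_HOME}"
```

If the namenode was started locally it may pick up your interactive environment and run fine, while datanodes started over ssh fail with "JAVA_HOME is not set", which matches the mixed behavior described above.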

lzo library exists in classpath, why can't it be referenced by the job?

2010-08-17 Thread Alex Luya
Hello, I followed this guide: http://mail-archives.apache.org/mod_mbox/hadoop-common-user/201006.mbox/AANLkTileo-q8useip8y3na9pdyhlyufippr0in0lk...@mail.gmail.com, then ran: hadoop jar hadoop-examples-0.20.2+320.jar grep input output 'dfs[a-z.]+' and got the error: Caused by:

Re: how to get the lzo library loaded? (error: Caused by: java.lang.ClassNotFoundException: com.hadoop.compression.lzo.LzoCodec)

2010-08-16 Thread Alex Luya
Hello, I use Java. I think the problem is that I can't get the lzo library loaded and I can't get the example to run successfully, so it is not a programming-language problem. On Monday, August 16, 2010 03:43:12 pm rosefinny111 wrote: hi friend, which language did you write the program in? Is it in Java

how to get lzo library loaded?

2010-08-15 Thread Alex Luya
Hi, at the very beginning I ran hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+' successfully, but when I ran: nutch crawl url -dir crawl -depth 3, I got errors:

how to get lzo loaded?

2010-08-08 Thread Alex Luya
Hi, at the very beginning I ran hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+' successfully, but when I ran: nutch crawl url -dir crawl -depth 3, I got errors:

Re: Enabling LZO compression of map outputs in Cloudera Hadoop 0.20.1

2010-08-07 Thread Alex Luya
Does it (hadoop-lzo) only work for Hadoop 0.20, and not for 0.21 or 0.22? On Friday, August 06, 2010 09:05:47 am Todd Lipcon wrote: On Thu, Aug 5, 2010 at 4:52 PM, Bobby Dennett bdenn...@gmail.com wrote: Hi Josh, No real pain points... just trying to investigate/research the best way to

How to get lzo compression library loaded?

2010-07-31 Thread Alex Luya
Hello: I have followed this link: http://code.google.com/p/hadoop-gpl-compression/wiki/FAQ to install the lzo compression library, copied hadoop-lzo-0.4.4.jar to $HADOOP_HOME/lib and all files under ../lib/native/Linux-amd64-64 to $HADOOP_HOME/lib/native/Linux-amd64-64, and ran the example, but got

error:Caused by: java.lang.ClassNotFoundException: com.hadoop.compression.lzo.LzopCodec

2010-07-28 Thread Alex Luya
Hello: I got the source code from http://github.com/kevinweil/hadoop-lzo and compiled it successfully, and then: 1. copied hadoop-lzo-0.4.4.jar to $HADOOP_HOME/lib on each master and slave; 2. copied all files under ../Linux-amd64-64/lib to the directory:
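Beyond placing the jar and the native libraries, the LZO codec classes also have to be registered in core-site.xml, or Hadoop will never try to load them. A typical registration as described in the hadoop-lzo README (property values assumed from that project, shown here as a sketch):

```xml
<!-- core-site.xml: register the LZO codecs (per the hadoop-lzo README) -->
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
</property>
<property>
  <name>io.compression.codec.lzo.class</name>
  <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
```

A ClassNotFoundException for LzopCodec even after this usually means the jar is missing from the daemon classpath on some node; the jar and native libraries must be present on every master and slave, and the daemons must be restarted after copying them.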

LZO Question

2010-07-25 Thread Alex Luya
Hello: I got the source code from http://github.com/kevinweil/hadoop-lzo and compiled it successfully, and then: 1. copied hadoop-lzo-0.4.4.jar to $HADOOP_HOME/lib on each master and slave; 2. copied all files under ../Linux-amd64-64 to the directory: $HADOOP_HOME/lib/native/Linux-amd64-64

Re: Lzo question

2010-07-25 Thread Alex Luya
picked up straight away, and that you have to restart job-trackers and task-trackers for them to be used in map-reduce jobs. Good luck! Thanks, Jamie On 24 July 2010 08:40, Alex Luya alexander.l...@gmail.com wrote: Hello: I got the source code from http://github.com/kevinweil/hadoop

Lzo question

2010-07-24 Thread Alex Luya
Hello: I got the source code from http://github.com/kevinweil/hadoop-lzo and compiled it successfully, and then: 1. copied hadoop-lzo-0.4.4.jar to $HADOOP_HOME/lib on each master and slave; 2. copied all files under ../Linux-amd64-64 to the directory: $HADOOP_HOME/lib/native/Linux-amd64-64

Question about LZO(Caused by: java.lang.ClassNotFoundException: com.hadoop.compression.lzo.LzopCodec)

2010-07-20 Thread Alex Luya
Hello: I got the source code from http://github.com/kevinweil/hadoop-lzo and compiled it successfully, and then: 1. copied hadoop-lzo-0.4.4.jar to $HADOOP_HOME/lib on each master and slave; 2. copied all files under ../Linux-amd64-64 to the directory:

error:Not able to place enough replicas, still in need of 3(same configuration doesn't work)

2010-05-28 Thread Alex Luya
Hello: when running hadoop dfs -put src des, I got an error: java.io.IOException: File /user/alex/hadoop-alex-namenode-AlexLuya.log could only be replicated to 0 nodes, instead of 1. I have checked the logs on the namenode, datanode, and secondary namenode; only this error appears in the namenode log:
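The "could only be replicated to 0 nodes" error usually means the namenode sees no live datanodes at write time (datanode down, blocked ports, or mismatched namespace IDs after reformatting the namenode). A sketch of the 0.20-era diagnostic step; the command is shown rather than executed here, since it needs a running cluster:

```shell
# "replicated to 0 nodes, instead of 1" generally indicates zero live
# datanodes from the namenode's point of view. The classic first check
# on 0.20-era Hadoop (requires a running cluster, so only echoed here):
CMD="hadoop dfsadmin -report"
echo "Check live vs dead datanodes with: $CMD"
```

If the report shows 0 live datanodes, inspect the datanode logs next; after a namenode reformat, a namespace-ID mismatch in the datanode's storage directory is a frequent cause.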

error:Not able to place enough replicas, still in need of 3

2010-05-28 Thread Alex Luya
Hello: when running hadoop dfs -put src des, I got an error: java.io.IOException: File /user/alex/hadoop-alex-namenode-AlexLuya.log could only be replicated to 0 nodes, instead of 1. I have checked the logs on the namenode, datanode, and secondary namenode; only this error appears in the namenode log:

Re: Does the error "could only be replicated to 0 nodes, instead of 1" mean no datanodes are available?

2010-05-27 Thread Alex Luya
Hello, here is the output of hadoop fsck /: Status: HEALTHY; Total size: 0 B; Total dirs: 2; Total files: 0 (Files currently being written: 1); Total blocks (validated): 0; Minimally replicated blocks: 0

Does the error "could only be replicated to 0 nodes, instead of 1" mean no datanodes are available?

2010-05-26 Thread Alex Luya
Hello: I got this error when putting files into HDFS. It seems to be an old issue, and I followed the solution in this link: