>
>On Mon, Mar 18, 2013 at 7:22 AM, springring wrote:
>> thanks
>> I modified the Java file to use the old "mapred" API, but there is still an error
>>
>> javac -classpath
>> /usr/lib/hadoop/hadoop-core-0.20.2-cdh3u3.jar:/usr/lib/hadoop
class, but your
>WholeFileInputFormat is using the new MR API
>(it extends org.apache.hadoop.mapreduce.lib.input.FileInputFormat). Using the older form
>will let you pass.
>
>This has nothing to do with your version/distribution of Hadoop.
>
>
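For reference, here is a rough sketch of what WholeFileInputFormat could look like when ported to the old org.apache.hadoop.mapred API that streaming expects. It is adapted from the book's new-API version rather than taken from the book: the package declaration is omitted and the exact class layout is an assumption.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;

// Old-API version: extends org.apache.hadoop.mapred.FileInputFormat, which is
// what streaming's -inputformat option requires.
public class WholeFileInputFormat
    extends FileInputFormat<NullWritable, BytesWritable> {

  @Override
  protected boolean isSplitable(FileSystem fs, Path filename) {
    return false; // one record per file, so never split
  }

  @Override
  public RecordReader<NullWritable, BytesWritable> getRecordReader(
      InputSplit split, JobConf job, Reporter reporter) throws IOException {
    return new WholeFileRecordReader((FileSplit) split, job);
  }
}

// Reads an entire file as a single (NullWritable, BytesWritable) record.
class WholeFileRecordReader implements RecordReader<NullWritable, BytesWritable> {

  private final FileSplit fileSplit;
  private final Configuration conf;
  private boolean processed = false;

  WholeFileRecordReader(FileSplit fileSplit, Configuration conf) {
    this.fileSplit = fileSplit;
    this.conf = conf;
  }

  public NullWritable createKey() { return NullWritable.get(); }
  public BytesWritable createValue() { return new BytesWritable(); }
  public long getPos() throws IOException { return processed ? fileSplit.getLength() : 0; }
  public float getProgress() throws IOException { return processed ? 1.0f : 0.0f; }
  public void close() throws IOException { }

  public boolean next(NullWritable key, BytesWritable value) throws IOException {
    if (processed) {
      return false;
    }
    byte[] contents = new byte[(int) fileSplit.getLength()];
    Path file = fileSplit.getPath();
    FileSystem fs = file.getFileSystem(conf);
    FSDataInputStream in = null;
    try {
      in = fs.open(file);
      IOUtils.readFully(in, contents, 0, contents.length);
      value.set(contents, 0, contents.length);
    } finally {
      IOUtils.closeStream(in);
    }
    processed = true;
    return true;
  }
}

Once compiled and placed on the job's classpath, the class can be named with -inputformat just as in the command quoted further down.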
>On Fri, Mar 15, 2013 at 4:28 PM, Steve Lo
Hi,
my Hadoop version is 0.20.2-cdh3u3 and I want to define the new
InputFormat from the Hadoop book, but there is an error:
"class org.apache.hadoop.streaming.WholeFileInputFormat not
org.apache.hadoop.mapred.InputFormat"
The Hadoop version is 0.20, but streaming still depends on the old "mapred" API.
Hi,
I want to use:
hadoop jar /hadoop-streaming-0.20.2-cdh3u3.jar -inputformat
org.apache.hadoop.streaming.WholeFileInputFormat
so I downloaded the code from:
https://github.com/tomwhite/hadoop-book/tree/master/ch07/src/main/java
WholeFileInputFormat.java
WholeFileRecordReader.java
an
Hi,
I put some files containing Chinese into HDFS.
Reading a file with "hadoop fs -cat /user/hive/warehouse/..." is OK; I
can see the Chinese.
But when I open the table in Hive, I can't read the Chinese (English is OK).
Why?
sorry
the error keeps on, even when I modify the code to
"offset,filename = line.strip().split("\t")"
At 2013-01-14 09:27:10,springring wrote:
>hi,
> I found the key point; it is not the hostname, that part is right.
>I just changed "offset,filename = line.split("
a valid host name
>
>maybe if you have a local hadoop then you can refer to it with
>hdfs://localhost:9100/ or hdfs://127.0.0.1:9100
>
>if it's on another machine then just try the IP address of that machine
>
>
>On Sat, Jan 12, 2013 at 12:55 AM, springring wrote:
>
issue. Just
>making a guess:
>something like hdfs://host:port/path
>
>On Sat, Jan 12, 2013 at 12:30 AM, springring wrote:
>
>> hdfs://user/hdfs/catalog3/
>
>
>
>
>
>--
>Nitin Pawar
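To illustrate the fully qualified form suggested above (hdfs://host:port/path), here is a small sketch using the Java FileSystem API; the host, the port 9100 and the file path are placeholders and have to match the fs.default.name of the actual cluster.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class CatHdfsFile {
  public static void main(String[] args) throws Exception {
    // Fully qualified URI: scheme, authority (namenode host:port), then an
    // absolute path. All three parts below are placeholders.
    String uri = "hdfs://localhost:9100/user/hdfs/catalog3/part-00000";

    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create(uri), conf);
    FSDataInputStream in = null;
    try {
      in = fs.open(new Path(uri));
      IOUtils.copyBytes(in, System.out, 4096, false);
    } finally {
      IOUtils.closeStream(in);
    }
  }
}

Note that in "hdfs://user/hdfs/catalog3/" the first component after the scheme, "user", is parsed as the namenode host, which is why a real host:port (or just an absolute path such as /user/hdfs/catalog3/ against the configured default filesystem) is needed.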
Hi,
When I run the code below as a streaming job, it errors with N/A and is killed. Running it
step by step, I find it fails at
"file_obj = open(file)". When I run the same code outside of Hadoop, everything
is OK.
#!/bin/env python

import sys

# Streaming feeds each input record on stdin as "key<TAB>value" plus a newline.
for line in sys.stdin:
    offset, file = line.split("\t")
    file_obj = open(file)
ed by the local machine's /etc/group file, or if you're
using NIS or LDAP, it's controlled there.
So you can run the unix shell command "groups" to find out which group(s) you
belong to, and then switch to one of those.
HTH
-Mike
-----Original Message-
From: springring [mailto:spri
Hi,
There are "chmod", "chown", "chgrp" in HDFS;
is there some command like "useradd -g" to add a
user to a group? Even more, is there a "hadoop
group", as opposed to a "linux group"?
Ring
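As far as I know there is no HDFS-level "useradd": HDFS only records owner and group strings on files, and group membership itself comes from the underlying system (/etc/group, NIS or LDAP), as Mike's reply above describes. A minimal sketch of the programmatic counterparts of fs -chown / -chgrp / -chmod, with made-up path, user and group names:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class ChangeOwnership {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Hypothetical path, user and group; substitute real ones.
    Path p = new Path("/user/springring/data");

    // Roughly: hadoop fs -chown springring:hadoopusers /user/springring/data
    // (changing the owner needs the HDFS superuser; changing the group needs
    // you to own the file and belong to the target group)
    fs.setOwner(p, "springring", "hadoopusers");

    // Roughly: hadoop fs -chmod 750 /user/springring/data
    fs.setPermission(p, new FsPermission((short) 0750));
  }
}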
Hi,
how do I create a user group in HDFS?
hadoop fs -?
Ring
Hi,
I installed CDH3 following the manual in the attached file,
but when I run the command
"su -s /bin/bash -hdfs -c 'hadoop namenode -format'"
on page 25, it shows "su: invalid option --h",
so I changed the command to
"su -s /bin/bash -hdfs -c'hadoop namenode -format'"
and the message is
"May n
hi all,
I want to make sure of one thing: is there a web page for HDFS to access
files?
I know that there are commands like "fs -put" and "fs -get", and we can even
download
a file from the web like "slave:50075". But is there a way to put a file into HDFS through
the web?
Additionally, is there functio
I have a question.
Now that Hadoop has a "mapper" and a "reducer", how about a solution like
Map and Reduce,
or directly Reduce, where a node pair can be looked at as a
branch...
- Original Message -
From: "Konstantin Shvachko"
To:
Sent: Tuesday, March 02, 2010 10:21 AM
Subject: Re: Namesp
> you might consider putting this paper on Google Docs and sharing it with
> everyone - that'd be easier than 'email sharing' :)
>
> On Fri, Feb 26, 2010 at 02:37 AM, springring wrote:
>> Hi All,
>> I have uploaded the English version of the paper to gmail whi
I hope the new
version will be helpful to our communication.
Any suggestions or questions are welcome, on either the subject
or the grammar.
Springring.Xu
Download from: mail.google.com
login ID: hadoopcn
password: mapreduce
> - Original Message -
> From: "springring"
I have uploaded the paper to gmail
mail.google.com
login ID: hadoopcn
password: mapreduce
- Original Message -
From: "springring"
To:
Sent: Sunday, February 21, 2010 5:31 PM
Subject: [help!] [paper]deconstruct Hadoop Distributed File System
>一 一||
>
-
From: "springring"
To:
Sent: Sunday, February 21, 2010 4:50 PM
Subject: Re: [paper]deconstruct Hadoop Distributed File System
> sorry~
>
> perhaps the *.pdf file was processed as spam? So, trying again with the attached .rar
> file.
>
>
> - Original Message --
sorry~
perhaps the *.pdf file was processed as spam? So, trying again with the attached .rar file.
- Original Message -
From: "springring"
To:
Sent: Sunday, February 21, 2010 12:52 PM
Subject: [paper]deconstruct Hadoop Distributed File System
> Hi All,
>The attached file i
Hi All,
The attached file is my paper on deconstructing the Hadoop Distributed File
System and the Cluster triplet-Space Model.
I look forward to your comments or suggestions.
Springring.Xu
Hi,
I've been puzzled by this:
hadoop-0.17.1\src\java\org\apache\hadoop\dfs\BlocksMap.java, line 291:
private Map<Block, BlockInfo> map = new HashMap<Block, BlockInfo>();
Why is it not
private Map<BlockInfo, Block> map = new HashMap<BlockInfo, Block>();
I think if it were Map<BlockInfo, Block>, the key would be BlockInfo, meaning
{INodeFile, Datanode},
and it could be reduced by INodeFile and/or Datanode.
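One plausible reading, stated as my assumption rather than anything from this thread: the namenode usually has to go from a bare Block (for example one carried in a datanode block report) to its metadata, so Block is the natural key. A toy sketch with simplified stand-in classes, not the real HDFS types:

import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified stand-ins for the real Block/BlockInfo classes,
// only meant to show the lookup direction.
class Block {
  final long blockId;
  Block(long blockId) { this.blockId = blockId; }
  @Override public boolean equals(Object o) {
    return o instanceof Block && ((Block) o).blockId == blockId;
  }
  @Override public int hashCode() { return (int) (blockId ^ (blockId >>> 32)); }
}

class BlockInfo {
  final String inodeFile;    // which file the block belongs to (simplified)
  final String[] datanodes;  // which datanodes hold replicas (simplified)
  BlockInfo(String inodeFile, String[] datanodes) {
    this.inodeFile = inodeFile;
    this.datanodes = datanodes;
  }
}

public class BlocksMapSketch {
  public static void main(String[] args) {
    Map<Block, BlockInfo> map = new HashMap<Block, BlockInfo>();
    map.put(new Block(42L), new BlockInfo("/user/hdfs/file1",
                                          new String[] { "dn1", "dn2" }));

    // A block report only carries the Block itself, so metadata is looked up
    // by Block; a map keyed by BlockInfo would require already having the
    // metadata in hand before the lookup.
    BlockInfo info = map.get(new Block(42L));
    System.out.println(info.inodeFile);
  }
}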
being sorted on the keys.
The output of the first map (why not "combine"?):
< Bye, 1>
< Hello, 1>
< World, 2>
The output of the second map (why not "combine"?):
< Goodbye, 1>
< Hadoop, 2>
- Original Message -
From: "springring"
To: ;
Sent: Sunday,
Hi,
regarding the word in red in the attached file, page 7: I think it should be
"combine" instead of "map",
or is it my misunderstanding?
br
Springring.Xu
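For context, a sketch along the lines of the tutorial's old-API WordCount showing where the combiner is wired in: with the reducer reused as a combiner, each map's raw pairs (e.g. <World, 1>, <World, 1>) are merged locally into <World, 2> before the shuffle, which is the step the quoted listing shows.

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class WordCount {

  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        output.collect(word, one); // raw per-word counts, e.g. <World, 1> twice
      }
    }
  }

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setMapperClass(Map.class);
    // The combiner runs on each map's sorted output, so repeated keys are
    // summed locally before anything is sent to the reducers.
    conf.setCombinerClass(Reduce.class);
    conf.setReducerClass(Reduce.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
  }
}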