I can translate it into native English: how many nodes do you want to
decommission?
On Tue, Apr 2, 2013 at 11:01 AM, Yanbo Liang yanboha...@gmail.com wrote:
You want to decommission how many nodes?
2013/4/2 Henry JunYoung KIM henry.jy...@gmail.com
15 datanodes and a replication factor of 3.
I assumed your input splits are FileSplits; if you are not sure, you need to check:
InputSplit split = context.getInputSplit();
if (split instanceof FileSplit) {
    Path path = ((FileSplit) split).getPath();
}
On Tue, Apr 2, 2013 at 12:02 PM, Azuryy Yu azury...@gmail.com wrote:
In your map function add
It's not available in the Hadoop 1.x distribution; you can find it in the
hadoop-0.20.x distribution.
On Tue, Apr 2, 2013 at 1:05 PM, Varsha Raveendran
varsha.raveend...@gmail.com wrote:
Hello!
Is there an Eclipse plugin for MapReduce on Hadoop 1.1.1 available?
I am finding it difficult to
When you seek to a position within an HDFS file, you do not seek from the
start of the first block and then read block by block.
Actually, the DFSClient can skip blocks until it finds the block whose offset
and length cover your seek position.
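For illustration, here is a minimal sketch (the file path and offset below are hypothetical) of seeking within an HDFS file; the client positions the stream directly inside the block that contains the requested offset:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SeekExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/data/big.log");     // hypothetical file
    FSDataInputStream in = fs.open(file);
    try {
      in.seek(200L * 1024 * 1024);             // jump ~200 MB into the file
      byte[] buf = new byte[4096];
      int n = in.read(buf);                    // read starts inside the matching block
      System.out.println("read " + n + " bytes at position " + in.getPos());
    } finally {
      in.close();
    }
  }
}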
On Mon, Apr 1, 2013 at 12:55 AM, Rahul
Use hadoop jar instead of java -jar.
The hadoop script sets up the proper classpath for you.
On Mar 29, 2013 11:55 PM, Cyril Bogus cyrilbo...@gmail.com wrote:
Hi,
I am running a small Java program that basically writes a small input data
set to the Hadoop FileSystem and runs Mahout Canopy and KMeans
Which Hadoop version did you use?
On Mar 29, 2013 5:24 AM, Felix GV fe...@mate1inc.com wrote:
Yes, I didn't specify how I was testing my changes, but basically, here's
what I did:
My hdfs-site.xml file was modified to include a reference to a file
containing a list of all datanodes (via
Sorry!
Todd has already reviewed it.
On Fri, Mar 29, 2013 at 11:40 AM, Azuryy Yu azury...@gmail.com wrote:
hi,
who can review this one:
https://issues.apache.org/jira/browse/HDFS-4631
thanks.
Can you use addInputPath(hdfs://……)? Don't change fs.default.name; that
cannot solve your problem.
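For example, as a minimal sketch (the namenode host and input path below are hypothetical), the idea is to hand the job a fully qualified hdfs:// input path instead of touching fs.default.name:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class AddHdfsInput {
  // Point the job at a remote HDFS path explicitly.
  public static void configure(Job job) throws Exception {
    FileInputFormat.addInputPath(job,
        new Path("hdfs://namenode.example.com:8020/user/nikhil/input"));
  }
}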
On Mar 26, 2013 7:03 PM, Agarwal, Nikhil nikhil.agar...@netapp.com
wrote:
Hi,
Thanks for your reply. I do not know about Cascading. Should I Google it
as “cascading in hadoop”? Also, what I was
And your Hadoop version.
On Mar 26, 2013 1:28 PM, Mohammad Tariq donta...@gmail.com wrote:
Hello Sagar,
It would be helpful if you could share your logs with us.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Mar 26, 2013 at 10:47 AM, Sagar
Yes, you got it. Hadoop 1.0.x cannot fail over automatically or manually; you
have to copy the fsimage from the SNN to the primary NN.
On Mar 27, 2013 11:29 AM, David Parks davidpark...@yahoo.com wrote:
Thanks for the update, I understand now that I'll be installing a
secondary
name node which performs checkpoints
I just submitted the following patch; reviews are welcome.
-- Forwarded message --
From: Fengdong Yu (JIRA) j...@apache.org
Date: Mar 25, 2013 6:07 PM
Subject: [jira] [Created] (HDFS-4631) Support customized call back method
during failover automatically.
To:
Azuryy,
Do you have detailed steps for what you did to make MRv1 work with HDFS2?
Thanks,
Mounir
On Mon, 2013-03-25 at 13:39 +0800, Azuryy Yu wrote:
Thanks Harsh!
I used -Pnative and got it.
I am compiling the source code. I made MRv1 work with HDFSv2 successfully.
On Mar 25, 2013 12:56 PM, Harsh J
For your requirement, just write a customized MR InputFormat and
OutputFormat based on FileInputFormat.
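As a rough sketch under assumed names (MyInputFormat, reusing the stock LineRecordReader), a custom InputFormat built on FileInputFormat can look like this; a custom OutputFormat would similarly extend FileOutputFormat:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

public class MyInputFormat extends FileInputFormat<LongWritable, Text> {
  @Override
  public RecordReader<LongWritable, Text> createRecordReader(
      InputSplit split, TaskAttemptContext context)
      throws IOException, InterruptedException {
    // Delegate to the stock line reader; swap in your own parsing logic here.
    return new LineRecordReader();
  }
}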
On Mar 25, 2013 1:48 PM, AMARNATH, Balachandar
balachandar.amarn...@airbus.com wrote:
Any answers from any of you? :)
Regards
Bala
*From:*
There isn't such a method; you have to submit another MR job.
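As a minimal sketch (the paths and job names are hypothetical, and the actual mapper/reducer classes are left out), the usual pattern is to run a second job whose input is the first job's output, since there is no map-reduce-reduce in one job:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class TwoPassDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path input = new Path("/in");            // hypothetical paths
    Path mid = new Path("/tmp/pass1");
    Path out = new Path("/out");

    // Pass 1: your map plus the first reduce; set your mapper/reducer and
    // key/value classes here. Sequence files keep the keys and values typed.
    Job first = new Job(conf, "pass1");
    first.setJarByClass(TwoPassDriver.class);
    first.setOutputFormatClass(SequenceFileOutputFormat.class);
    FileInputFormat.addInputPath(first, input);
    FileOutputFormat.setOutputPath(first, mid);
    if (!first.waitForCompletion(true)) {
      System.exit(1);
    }

    // Pass 2: read pass 1's key/value pairs and run the second reduce.
    Job second = new Job(conf, "pass2");
    second.setJarByClass(TwoPassDriver.class);
    second.setInputFormatClass(SequenceFileInputFormat.class);
    FileInputFormat.addInputPath(second, mid);
    FileOutputFormat.setOutputPath(second, out);
    System.exit(second.waitForCompletion(true) ? 0 : 1);
  }
}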
On Mar 24, 2013 9:03 PM, Fatih Haltas fatih.hal...@nyu.edu wrote:
I want to take the reduce output key and value and then pass them to a new
reduce as its input key and input value.
So is there any Map-Reduce-Reduce kind of method?
Good question. I just want HA; I don't want to change more configuration.
On Mar 25, 2013 2:32 AM, Balaji Narayanan (பாலாஜி நாராயணன்)
li...@balajin.net wrote:
Is there a reason why you don't want to run MRv2 under YARN?
On 22 March 2013 22:49, Azuryy Yu azury...@gmail.com wrote:
is there a way
Hi,
How do I get the hadoop-2.0.3-alpha native libraries? They are currently
compiled for a 32-bit OS in the released package.
-Dtar and
then use that.
Alternatively, if you're interested in packages, use the Apache
Bigtop scripts from the http://bigtop.apache.org/ project's repository
and generate the packages with native libs as well.
On Mon, Mar 25, 2013 at 9:27 AM, Azuryy Yu azury...@gmail.com wrote:
Hi,
How
IMO, if you run HA, then SSN is not necessary.
On Mar 24, 2013 12:40 PM, Harsh J ha...@cloudera.com wrote:
Yep, this is correct - you only need the SecondaryNameNode in 1.x. In
2.x, if you run HA, the standby NameNode role also doubles up
automatically as the SNN so you don't need to run an
SNN (secondary name node), sorry for the typo.
On Mar 24, 2013 12:59 PM, Azuryy Yu azury...@gmail.com wrote:
IMO, if you run HA, then SSN is not necessary.
On Mar 24, 2013 12:40 PM, Harsh J ha...@cloudera.com wrote:
Yep, this is correct - you only need the SecondaryNameNode in 1.x. In
2.x, if you
It has issues: the namenode saves blockid-to-node mappings using IP addresses
if your slaves config file uses IP addresses instead of hostnames.
On Mar 23, 2013 10:14 AM, Balaji Narayanan (பாலாஜி நாராயணன்)
li...@balajin.net wrote:
Assuming you are using hostnames and not IP addresses in your config
files What happens
, 2013 at 9:01 AM, Azuryy Yu azury...@gmail.com wrote:
It has issues: the namenode saves blockid-to-node mappings using IP addresses
if your slaves config file uses IP addresses instead of hostnames.
On Mar 23, 2013 10:14 AM, Balaji Narayanan (பாலாஜி நாராயணன்)
li...@balajin.net wrote:
Assuming you are using
Is there a way to separate HDFS2 from Hadoop 2? I want to use HDFS2 with
MapReduce 1.0.4 and exclude YARN, because I need HDFS HA.
Hadoop: The Definitive Guide should be helpful; there is a chapter for this,
but only for MRv1.
On Mar 23, 2013 1:50 PM, Sai Sai saigr...@yahoo.in wrote:
Just wondering if there is any step-by-step explanation/article of the MR
output we get when we run a job, either in Eclipse or Ubuntu.
Any
Not yet. HA only works on hadoop-2.0.x.
On Mar 14, 2013 1:42 AM, Shumin Guo gsmst...@gmail.com wrote:
Hi,
I have downloaded the latest stable hadoop release version 1.1.2. Can I
configure High Availability for the jobtracker and namenode with this
version?
Thanks,
Shumin
There is no good answer to your question, but for hadoop-2.x it's easy with
HDFS federation.
On Mar 14, 2013 12:38 PM, Shashank Agarwal shashankagarwal1...@gmail.com
wrote:
Hey Guys,
I have two different hadoop clusters in production. One cluster is used as
backing for HBase and the other for
Don't wait for the patch; it's a very simple fix. Just do it.
On Mar 13, 2013 5:04 PM, Amit Sela am...@infolinks.com wrote:
But the patch will work on 1.0.4, correct?
On Wed, Mar 13, 2013 at 4:57 AM, George Datskos
george.dats...@jp.fujitsu.com wrote:
Leo
That JIRA says fix version=1.0.4 but it
Do you want an n:n join or a 1:n join?
On Mar 13, 2013 10:51 AM, Roth Effy effyr...@gmail.com wrote:
I want to join data from two tables in the reducer, so I need to find the
start of each table.
Someone said DataJoinReducerBase can help me; is that right?
2013/3/13 Azuryy Yu azury...@gmail.com
you cannot
xcievers: 4096 is enough, and I don't think you pasted the full exception
stack trace.
The socket is ready for receiving, but the client closed abnormally, so you
generally get this error.
On Mon, Mar 11, 2013 at 2:33 AM, Pablo Musa pa...@psafe.com wrote:
This variable was already set:
<property>
It is used for HDFS federation: if you have more than one NN in your
cluster, the block pool ID is different for each NN.
On Mar 3, 2013 1:49 PM, Dhanasekaran Anbalagan bugcy...@gmail.com wrote:
Hi Guys,
On my namenode page I see
Started: Wed Feb 27 12:41:28 EST 2013, Version: 2.0.0-cdh4.0.1,
Yes, just ignore this log.
On Mar 2, 2013 7:27 AM, jamal sasha jamalsha...@gmail.com wrote:
It copies, but then it gives this error?
On Fri, Mar 1, 2013 at 3:21 PM, jamal sasha jamalsha...@gmail.com wrote:
When I try this, I get an error:
cat: Unable to write to output stream.
Are
Who can review this JIRA (https://issues.apache.org/jira/browse/HDFS-4533)?
It is very simple.
-- Forwarded message --
From: Hadoop QA (JIRA) j...@apache.org
Date: Wed, Feb 27, 2013 at 4:53 PM
Subject: [jira] [Commented] (HDFS-4533) start-dfs.sh ignored additional
parameters
The patch is available now; anybody can take a look. Thanks.
On Wed, Feb 27, 2013 at 10:46 AM, Azuryy Yu azury...@gmail.com wrote:
Hi Suresh,
Thanks for your reply. I filed a bug:
https://issues.apache.org/jira/browse/HDFS-4533
On Wed, Feb 27, 2013 at 9:30 AM, Suresh Srinivas
sur
Anybody here? Thanks!
On Tue, Feb 26, 2013 at 9:57 AM, Azuryy Yu azury...@gmail.com wrote:
Hi all,
I've been stuck on this question for several days. I want to upgrade my
cluster from hadoop-1.0.3 to hadoop-2.0.3-alpha, and I've configured QJM
successfully.
How do I customize the clusterID myself?
That's easy. In your example:
Map output key: FIELD-N; map output value: just the original value.
In the reducer: if there is a LOGTAGTAB in the value, then this is the
first log entry; if not, this is a split log entry, so just take a
substring and concatenate it with the first log entry.
Am I explain
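For what it's worth, here is a minimal reducer sketch of that idea (the LOGTAGTAB marker and the concatenation order are assumptions based on the description above):

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class JoinSplitLogReducer extends Reducer<Text, Text, Text, Text> {
  private static final String LOG_TAG = "LOGTAGTAB";   // assumed marker

  @Override
  protected void reduce(Text key, Iterable<Text> values, Context context)
      throws IOException, InterruptedException {
    StringBuilder first = new StringBuilder();
    StringBuilder rest = new StringBuilder();
    for (Text v : values) {
      String s = v.toString();
      if (s.contains(LOG_TAG)) {
        first.append(s);          // the first (tagged) part of the log entry
      } else {
        rest.append(s);           // a split continuation of the same entry
      }
    }
    // Emit the reassembled entry: the tagged part first, continuations appended.
    context.write(key, new Text(first.append(rest).toString()));
  }
}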
%3Dtrue%26mm_lat%3DUNKNOWN%26mm_long%3DUNKNOWN%26mm_hpx%3D1280%26mm_wpx%3D800%26mm_density%3D2.0%26mm_dpi%3DUNKNOWN%26mm_campaignid%3D45695%26autoExpand%3Dtruequery-string=ncid%3DWBNMMG9h4XmbJBUHbDrNWWWm
tr7y MLNL 1009 10034 3401 t4fx 10034 click
On Tue, Feb 26, 2013 at 9:39 PM, Azuryy Yu azury
I think you mixed up federation with HA, am I right?
If the other name node hasn't changed, then it doesn't do any edit log
rolling. Federated NNs don't keep concurrency (I think you meant to say keep
them in sync?).
On Sun, Feb 24, 2013 at 11:09 PM, YouPeng Yang yypvsxf19870...@gmail.comwrote:
Hi All
It indicates 'cannot find com.google.protobuf'.
On Feb 21, 2013 7:38 PM, Ted yuzhih...@gmail.com wrote:
What compilation errors did you get?
Thanks
On Feb 21, 2013, at 1:37 AM, Azuryy Yu azury...@gmail.com wrote:
Hi,
I just want to share some experience with compiling hadoop-2.x.
In mapred-site.xml:
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>YOUR_HOST:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>YOUR_HOST:19888</value>
</property>
<property>
  <name>mapreduce.jobhistory.intermediate-done-dir</name>
think should behave almost exactly like older
versions of Hadoop) and it had the exact same problem. Does H2.0MR1 use
journal nodes? I'll try to read up more on this later today. Thanks for
the tip.
On Feb 18, 2013, at 16:32 , Azuryy Yu wrote:
Because journal nodes are also formatted
--
*From:* Azuryy Yu [azury...@gmail.com]
*Sent:* February 18, 2013 15:56
*To:* user@hadoop.apache.org
*Subject:* Re: Reply: some ideas for QJM and NFS
All JNs are deployed on the same nodes as the DNs.
On Mon, Feb 18, 2013 at 3:35 PM, 谢良 xieli...@xiaomi.com wrote:
Hi
Because journal nodes are also formatted during NN format, you need to
start all JN daemons first.
On Feb 19, 2013 7:01 AM, Keith Wiley kwi...@keithwiley.com wrote:
This is Hadoop 2.0. Formatting the namenode produces no errors in the
shell, but the log shows this:
2013-02-18
Oh, yes, you are right, George. I'll probably do it in the next few days.
On Mon, Feb 18, 2013 at 2:47 PM, George Datskos
george.dats...@jp.fujitsu.com wrote:
Hi Azuryy,
So you have measurements for hadoop-1.0.4 and hadoop-2.0.3+QJM, but I
think you should also measure hadoop-2.0.3 _without_
builtin-java classes where applicable
2013-02-18_15:20:30
So the performance is a little bit better than hadoop-1.0.4.
On Mon, Feb 18, 2013 at 2:53 PM, Azuryy Yu azury...@gmail.com wrote:
Oh, yes, you are right, George. I'll probably do it in the next few days.
On Mon, Feb 18, 2013 at 2:47 PM
All JNs are deployed on the same nodes as the DNs.
On Mon, Feb 18, 2013 at 3:35 PM, 谢良 xieli...@xiaomi.com wrote:
Hi Azuryy, just want to confirm one thing: your JNs are not deployed on the
same machines as the DNs, right?
Regards,
Liang
--
*From:* Azuryy Yu [azury
This is a typical total sort using map/reduce; it is done with both map and
reduce.
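For reference, a rough sketch of such a total sort with the new-API TotalOrderPartitioner and InputSampler (the paths, sampler parameters, and reducer count are made up, and the identity mapper/reducer defaults are assumed):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.partition.InputSampler;
import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

public class TotalSortDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "total-sort");
    job.setJarByClass(TotalSortDriver.class);
    job.setInputFormatClass(KeyValueTextInputFormat.class);  // Text key, Text value
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    job.setNumReduceTasks(4);                                // several sorted partitions
    FileInputFormat.addInputPath(job, new Path("/in"));      // hypothetical input
    FileOutputFormat.setOutputPath(job, new Path("/out"));   // hypothetical output
    // Sample the input keys, then let the partitioner route keys so that the
    // concatenation of the reducer outputs is globally sorted.
    job.setPartitionerClass(TotalOrderPartitioner.class);
    TotalOrderPartitioner.setPartitionFile(job.getConfiguration(),
        new Path("/tmp/_partitions"));                       // hypothetical path
    InputSampler.writePartitionFile(job,
        new InputSampler.RandomSampler<Text, Text>(0.01, 1000));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}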
On Fri, Feb 15, 2013 at 10:39 PM, Arun Vasu arun...@gmail.com wrote:
Hi,
Is it possible to sort a huge text file lexicographically using a
mapreduce job which has only map tasks and zero reduce tasks?
</property>
On Sat, Feb 16, 2013 at 12:56 PM, Azuryy Yu azury...@gmail.com wrote:
Hi,
If I have four name nodes in my cluster: n1, n2, n3, and n4, then
n2 is n1's standby and n4 is n3's standby.
There is no dependency between n1 and n3, which is HDFS federation.
Can anybody kindly give me an example