t; control how to send tasks to which node, and how many per node.
>
> Some detailed examples are in the book "Professional Hadoop Solutions",
> Chapter 4.
>
> Yong
____
s (of CPU and Memory) of a node such that
> only one container runs on a host at any given time.
>
> On Wed, Jan 29, 2014 at 3:30 AM, Keith Wiley wrote:
>> I'm running a program which in the streaming layer automatically
>> multithreads and does so by automatically
Yeah, it isn't, not even remotely, but thanks.
On Jan 28, 2014, at 14:06 , Bryan Beaudreault wrote:
> If this cluster is being used exclusively for this goal, you could just set
> the mapred.tasktracker.map.tasks.maximum to 1.
>
>
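For reference, a minimal sketch of that setting in mapred-site.xml, using the old MRv1 property name from this thread (TaskTrackers must be restarted for it to take effect):

```xml
<!-- Sketch: cap each TaskTracker at one concurrent map task. -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>1</value>
</property>
```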
> On Tue, Jan 28, 2014 at 5:00 PM, Keith
e machine.
Can this be done?
Thanks.
________
Keith Wiley kwi...@keithwiley.com keithwiley.com music.keithwiley.com
"Yet mark his perfect self-contentment, and hence learn his lesson, that to be
self-contented is to be vile and ignorant, and that to aspire is better than
to be blindly and impotently happy."
-- Edwin A. Abbott, Flatland
Seems to work well. Thank you very much!
On Jan 21, 2014, at 12:42 , Keith Wiley wrote:
> I'll look it up. Thanks.
>
> On Jan 21, 2014, at 11:43 , java8964 wrote:
>
>> Can't you use Hadoop's "NLineInputFormat"?
>>
>> If you generate 100 lines of input, you will get 100 mappers running
>> concurrently.
>
> You want precise control over the number of mappers? NLineInputFormat is
> designed for exactly that purpose.
>
> Yong
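The splitting rule NLineInputFormat applies is easy to state; here is a minimal Python sketch (not Hadoop code, just an illustration of the N-lines-per-split idea):

```python
def n_line_splits(lines, n=1):
    """Group input lines into splits of n lines each, mimicking the
    behavior of Hadoop's NLineInputFormat: one split per mapper."""
    return [lines[i:i + n] for i in range(0, len(lines), n)]

# 100 input lines with n=1 -> 100 splits, hence 100 concurrent mappers.
splits = n_line_splits([f"line-{i}" for i in range(100)], n=1)
print(len(splits))  # prints 100
```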
________
s
exactly the right number. However, if the latter is correct, why is the task
distribution uneven (albeit NEARLY even) and what (if anything) can be done
about it?
Thanks.
____
return false;
}
};
}
}
________
and one that overrides
Configured and Tool.
On Jan 17, 2014, at 09:46 , Vinod Kumar Vavilapalli wrote:
> What is the version of Hadoop that you are using?
>
> +Vinod
>
> On Jan 16, 2014, at 2:41 PM, Keith Wiley wrote:
>
>> My driver is implemented around Tool a
aren't there.
What on Earth am I doing wrong here?
________
Keith Wiley kwi...@keithwiley.com keithwiley.commusic.keithwiley.com
"Luminous beings are we, not this crude matter."
-- Yoda
ot seeing anything. The status just says
"reduce > reduce", as always.
Thanks for any help on this.
____
is the best,
merely that I actually got it to work!
http://www.keithwiley.com/writing/HowToDeployHadoopYarnOnEC2.shtml
Cheers!
____
r 2043 Jan 27 01:07 slaves.sh*
4 -rwxr-xr-x 1 ec2-user ec2-user 1159 Jan 27 01:07 start-mapred.sh*
4 -rwxr-xr-x 1 ec2-user ec2-user 1068 Jan 27 01:07 stop-mapred.sh*
tion: webapps/datanode not found in CLASSPATH") so I
get two very similar errors at once, one on the namenode and one on the
datanode.
This webapps/ dir business makes no sense since the files (or directories) the
logs claim to be looking for inside webapps/ ("hdfs" an
s for the tip.
On Feb 18, 2013, at 16:32 , Azuryy Yu wrote:
> Because the journal nodes are also formatted during NN format, you need to
> start all JN daemons first.
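In command form, that ordering looks roughly like this (a sketch, assuming the stock Hadoop 2.x scripts are on the PATH; the first command must be run on every JournalNode host before formatting):

```shell
# On each JournalNode host, bring the JN daemon up first:
hadoop-daemon.sh start journalnode

# Only once all JNs are running, format the NameNode:
hdfs namenode -format
```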
>
> On Feb 19, 2013 7:01 AM, "Keith Wiley" wrote:
> This is Hadoop 2.0. Formatting the namenode p
esting a little difficult.
Any ideas?
____
roduct Manager for AWS
> EMR).
> Best wishes.
>
> From: "Keith Wiley"
> To: user@hadoop.apache.org
> Sent: Thursday, 14 February 2013 15:46:05
> Subject: Re: .deflate trouble
>
> Good call. We can't use the conventional web-based JT due to corporate
bled compression on its own.
________
hat produced this output also carry
> mapred.output.compress=false in it? The file should be viewable on the
> JT UI page for the job. Unless explicitly turned on, even 0.19
> wouldn't have enabled compression on its own.
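If there were any doubt, output compression can be pinned off explicitly in the job configuration; a sketch using the old MRv1 property name from this thread:

```xml
<!-- Force uncompressed job output (MRv1 property name). -->
<property>
  <name>mapred.output.compress</name>
  <value>false</value>
</property>
```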
>
> On Fri, Feb 15, 2013 at 3:50 AM, Keith Wiley wrote:
>> I
kind of stuck. The output shouldn't be compressed to begin
with, and all attempts to decompress it have failed.
Any ideas?
Thanks.
____
:
>>>> Hi Serge,
>>>>
>>>> Are you sure your datanodes have reported in?
>>>>
>>>>
>>>>
>>>> On Tue, Sep 4, 2012 at 10:10 AM, Serge Blazhiyevskyy
>>>> wrote:
>>>>> Can look in name node
reamer.run(DFSClient.java:2822)
________
t that to 0 if
> you do not want such an automatic preventive measure. It's not exactly
> a necessity, just a check to help avoid accidental data loss due to
> unmonitored disk space.
>
> On Tue, Sep 4, 2012 at 11:33 PM, Keith Wiley wrote:
>> I had moved the data directory to the
ed for now.
Thanks.
________
Number of Under-Replicated Blocks: 5
NameNode Storage:
Storage Directory                               Type             State
/var/lib/hadoop-0.20/cache/hadoop/dfs/name IMAGE_AND_EDITS Active
Cloudera's Distribution including Apache Hadoop, 2012.
_____
!
hdfs would suddenly start
exhibiting this error out of the blue?
Thanks.
________
out of disk space? THAT's the real question on the table here.
Any ideas?
____
HDFS cluster, mv your existing dfs.name.dir and dfs.data.dir
> dir contents onto the new storage mount. Reconfigure dfs.data.dir and
> dfs.name.dir to point to these new locations and start it back up. All
> should be well.
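A sketch of the resulting hdfs-site.xml after the move, with /mnt/bigdisk as a placeholder for the new storage mount (the property names are the pre-2.x ones used in this thread):

```xml
<!-- Sketch: point HDFS at the new mount after moving the old contents.
     /mnt/bigdisk is a hypothetical path; substitute your own. -->
<property>
  <name>dfs.name.dir</name>
  <value>/mnt/bigdisk/dfs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/mnt/bigdisk/dfs/data</value>
</property>
```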
>
> On Tue, Aug 28, 2012 at 12:15 AM, Keith Wiley wrote:
>>
ing
away the old cluster, making a new cluster from scratch, and reuploading the
data to hdfs?...or is that really the only feasible way to migrate a
pseudo-distributed cluster to a second larger storage?
Thanks.
___
Tom White's book, "Hadoop: The Definitive Guide" (O'Reilly).
Sent from my phone, please excuse my brevity.
Keith Wiley, kwi...@keithwiley.com, http://keithwiley.com
Pravin Sinha wrote:
Hi,
I am new to Hadoop. What would be the best way to learn Hadoop and its ecosystem?