Do you know what properties to tune?
Thanks,
Ram
On Thu, Aug 18, 2016 at 9:11 PM, tkg_cangkul wrote:
> I think that's because you don't have enough resources. You can tune your
> cluster config to make better use of your resources.
>
>
> On 19/08/16 11:03, rammohan ganapavarapu
I think that's because you don't have enough resources. You can tune your
cluster config to make better use of your resources.
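If resource shortage is the cause, these are the yarn-site.xml properties most often tuned when jobs hang waiting for containers. This is a minimal sketch; the values below are illustrative placeholders, not recommendations, and must match your hardware.

```xml
<!-- Illustrative yarn-site.xml fragment; values are placeholders. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value> <!-- memory each NodeManager offers to containers -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>4</value> <!-- vcores each NodeManager offers -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value> <!-- smallest container the RM will grant -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value> <!-- largest container the RM will grant -->
</property>
```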
On 19/08/16 11:03, rammohan ganapavarapu wrote:
I don't see anything odd except this; not sure if I have to worry about
it or not.
2016-08-19 03:29:26,621 INFO [main]
Thanks for your suggestion, Daniel. I was already using SequenceFile, but my
format was poor. I was storing file contents as Text in my SequenceFile,
so all my map jobs did repeated conversion from Text to double. I resolved
this by correcting the SequenceFile format. Now I store serialised Java objects
in the SequenceFile.
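The fix described above boils down to storing the doubles in binary form instead of parsing Text on every map call. A minimal sketch of one way to do that, assuming the matrix is packed into a byte array (which could then be wrapped in a Hadoop BytesWritable as the SequenceFile value); the class and method names are illustrative, not from the original mail:

```java
import java.io.*;

// Sketch: pack a double[][] into bytes (rows, cols header, then row-major
// values) so map tasks can decode it directly instead of re-parsing Text.
public class MatrixCodec {
    public static byte[] encode(double[][] m) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeInt(m.length);
            out.writeInt(m.length == 0 ? 0 : m[0].length);
            for (double[] row : m)
                for (double v : row)
                    out.writeDouble(v);
            out.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen on byte arrays
        }
    }

    public static double[][] decode(byte[] bytes) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
            int rows = in.readInt(), cols = in.readInt();
            double[][] m = new double[rows][cols];
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++)
                    m[r][c] = in.readDouble();
            return m;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        double[][] m = {{1.5, 2.5}, {3.5, 4.5}};
        double[][] back = decode(encode(m));
        System.out.println(back[1][0]); // prints 3.5
    }
}
```

The two-int header keeps the record self-describing, so rows of different matrices can coexist in one SequenceFile.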
Yes, it is possible to enable HA mode and Automatic Failover in
a federated namespace. Following are some quick references; I feel
it's worth reading these blogs to get more insight into this. I think you
can start prototyping a test cluster with this and post your queries to
this forum.
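For orientation, HA in a federated namespace is configured per nameservice. A minimal hdfs-site.xml sketch, assuming two federated nameservices with an HA NameNode pair on the first; all service and host names below are placeholders:

```xml
<!-- Illustrative hdfs-site.xml fragment; ns1/ns2 and hosts are placeholders. -->
<property>
  <name>dfs.nameservices</name>
  <value>ns1,ns2</value> <!-- federation: multiple nameservices -->
</property>
<property>
  <name>dfs.ha.namenodes.ns1</name>
  <value>nn1,nn2</value> <!-- HA pair within ns1 -->
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn1</name>
  <value>nn1-host:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn2</name>
  <value>nn2-host:8020</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled.ns1</name>
  <value>true</value> <!-- automatic failover via ZKFC for ns1 -->
</property>
```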
Maybe you can check the logs on port 8088 in your browser; that is the RM
UI. Just choose your job ID and then check the logs.
On 19/08/16 10:14, rammohan ganapavarapu wrote:
Sunil,
Thank you for your input. Below are my server metrics for the RM; also
attached is the RM UI for the capacity scheduler.
Hi
It could be because of many reasons. Also, I am not sure which
scheduler you are using; please share more details such as the RM log, etc.
I can point out a few reasons:
- Not enough resources in the cluster can cause this
- If using the Capacity Scheduler, the queue capacity may be maxed out
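On the second point, a queue that has hit its maximum capacity (or its AM resource limit) will leave new applications stuck in ACCEPTED. An illustrative capacity-scheduler.xml sketch; the queue name and values are placeholders for what you would check in your own config:

```xml
<!-- Illustrative capacity-scheduler.xml fragment; values are placeholders. -->
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>100</value> <!-- guaranteed share of the cluster -->
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
  <value>100</value> <!-- hard ceiling; jobs queue once this is reached -->
</property>
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.5</value> <!-- fraction of the queue usable by ApplicationMasters -->
</property>
```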
Turns out we made a stupid mistake - our system was managing to mix
configuration between an old cluster and a new cluster. So, things are working
now.
Thanks,
Ben
From: Benjamin Ross
Sent: Thursday, August 18, 2016 10:05 AM
To: Rohith Sharma K S; Gao, Yunlong
Hi,
When I submit an MR job, I am getting this from the AM UI but it never gets
finished. What am I missing?
Thanks,
Ram
What is the value of the jobTracker port: 8031 or 8032?
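For reference, in standard YARN defaults these two ports serve different roles, so clients must point at 8032. A sketch of the relevant yarn-site.xml entries; the RM hostname is a placeholder:

```xml
<!-- Standard YARN default ports for reference; rm-host is a placeholder. -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>rm-host:8032</value> <!-- client / job submission port -->
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>rm-host:8031</value> <!-- NodeManager heartbeat port -->
</property>
```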
Ram
Rohith,
Thanks - we're still having issues. Can you help out with this?
How do you specify the done directory for an MR job? The job history done dir
is mapreduce.jobhistory.done-dir. I specified the job one as
mapreduce.jobtracker.jobhistory.location as per the documentation here.
MR jobs and the JHS should have the same done-dir configuration, if it is
configured. Otherwise, the staging-dir should be the same for both. Make sure
both the job and the JHS have the same configuration values.
Usually what happens is that the MRApp writes the job file in one location and
the HistoryServer tries to read from another.
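The advice above amounts to keeping these history directories identical in the config seen by both the jobs and the JobHistoryServer. A minimal mapred-site.xml sketch; the paths are illustrative placeholders:

```xml
<!-- Illustrative mapred-site.xml fragment: jobs and the JHS must agree
     on these values. Paths are placeholders. -->
<property>
  <name>mapreduce.jobhistory.intermediate-done-dir</name>
  <value>/mr-history/tmp</value> <!-- where the MRApp writes job files -->
</property>
<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>/mr-history/done</value> <!-- where the JHS reads/moves them -->
</property>
<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <value>/user</value> <!-- staging dir, also shared -->
</property>
```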
Store them within a SequenceFile.
On Thursday, 18 August 2016, Madhav Sharan wrote:
> Hi, can someone please recommend a fast way in Hadoop to store and
> retrieve a matrix of double values?
>
> As of now we store values in text files and then read them in Java using HDFS
>