and how can I get these values using the job ID in Java?
On 13 June 2013 08:15, Devaraj k devara...@huawei.com wrote:
As per my understanding, start and end times are not currently available
through the shell commands. You can use the JobClient API to get them.
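For reference, a minimal sketch of that (the job ID string is a placeholder; note that JobStatus.getFinishTime() may not exist on older Hadoop releases):

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.JobStatus;

    public class JobTimes {
        public static void main(String[] args) throws Exception {
            String jobIdStr = args[0];  // e.g. "job_201306130815_0001" (placeholder)
            JobClient client = new JobClient(new JobConf());
            // getAllJobs() returns the status of every job the cluster knows about.
            for (JobStatus status : client.getAllJobs()) {
                if (status.getJobID().toString().equals(jobIdStr)) {
                    System.out.println("start:  " + status.getStartTime());
                    System.out.println("finish: " + status.getFinishTime());
                }
            }
            client.close();
        }
    }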
When I launch the command mapred queue -list, I get this output:
Scheduling Info : Capacity: 100.0, MaximumCapacity: 1.0, CurrentCapacity: 0.0
What is the difference between the Capacity and MaximumCapacity fields?
--
Best regards,
By default, the ResourceManager will try to give you a container on that node,
then on that rack, then anywhere (in that order).
We recently added the ability to whitelist or blacklist nodes to allow for more
control.
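Roughly, the request side looks like this (a sketch against the AMRMClient API, inside an ApplicationMaster whose client is already started; node name and sizes are placeholders):

    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
    import org.apache.hadoop.yarn.util.Records;

    public class LocalityRequests {
        // Asks for one container, preferring the named node.
        static void requestOnNode(AMRMClient<ContainerRequest> client, String node) {
            Resource capability = Records.newRecord(Resource.class);
            capability.setMemory(1024);      // example sizes
            capability.setVirtualCores(1);
            Priority priority = Records.newRecord(Priority.class);
            priority.setPriority(0);
            // relaxLocality=true: fall back to the node's rack, then anywhere.
            // Set it to false to pin the request to the listed nodes only.
            ContainerRequest req = new ContainerRequest(
                capability, new String[] {node}, null, priority, true);
            client.addContainerRequest(req);
        }
    }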
Arun
On Jun 12, 2013, at 8:03 AM, John Lilley wrote:
If I request a container on a node, and
I have set up a Hadoop cluster on two nodes, but the JobTracker UI's cluster
summary shows only one node.
The NameNode shows 2 live nodes, but data is always put on the same master node,
never on the slave node.
On the master node, jps shows
all processes are running.
On the slave node, jps shows
the TaskTracker and DataNode are running.
Can you check the JobTracker and TaskTracker log files to see whether there was
any problem starting the TaskTracker, or any problem connecting to the
JobTracker?
Thanks
Devaraj
From: Vikas Jadhav [mailto:vikascjadha...@gmail.com]
Sent: 13 June 2013 12:22
To: user@hadoop.apache.org
Subject:
The script is on the local file system; it's on a Linux box.
I totally agree we need version control for the source code. This is a good
example of the importance of version control.
Thank you Michael and Chris for your inputs anyway.
On Thu, Jun 13, 2013 at 10:35 AM, Chris Embree
But no, the script is not sent to the cluster, so you won't be able to
recover it from there, because what is sent is the 'interpretation' of the
script.
If that were the case, it would be a question for the Pig mailing list.
Regards
Bertrand
On Thu, Jun 13, 2013 at 11:33 AM, feng jiang
Hi,
If the Pig process has not exited, lsof may help.
The files a process holds open appear under
/proc/<pid>/fd/; use lsof to find the full path, then copy the still-open
(in-memory) file back to disk.
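In code terms, a minimal sketch (Linux only, and only while the process is still alive; the pid and fd number are placeholders you'd get from lsof):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class RecoverOpenFile {
        public static void main(String[] args) throws Exception {
            // /proc/<pid>/fd/<n> still points at the deleted file's data
            // as long as the process keeps the descriptor open.
            Path stillOpen = Paths.get("/proc/12345/fd/3");   // placeholders
            Path rescued = Paths.get("/tmp/recovered.pig");
            Files.copy(stillOpen, rescued, StandardCopyOption.REPLACE_EXISTING);
        }
    }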
God bless you.
From: feng jiang [mailto:jiangfut...@gmail.com]
Sent: Thursday, June 13, 2013 5:33 PM
To:
Well, if the script was sitting on the cluster... then it would be a Hadoop
question:
How do you recover a file that was deleted on HDFS?
Which is an interesting question...
But the OP said it wasn't on HDFS, and to your point... one can only say sorry
dude, bummer, rewrite it.
Sorry you're
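For that HDFS variant, a minimal sketch (assuming trash is enabled on the cluster, i.e. fs.trash.interval > 0, and using hypothetical user and file names) is just to rename the file back out of the user's .Trash directory:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RestoreFromTrash {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // Hypothetical locations; adjust the user and file names.
            Path trashed = new Path("/user/alice/.Trash/Current/user/alice/script.pig");
            Path restored = new Path("/user/alice/script.pig");
            if (fs.exists(trashed)) {
                fs.rename(trashed, restored);
            }
            fs.close();
        }
    }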
You can use BTIER (http://www.lessfs.com) on Linux (I don't know whether it's
stable or not). With FreeBSD you can use ZFS + L2ARC on an SSD.
On Thu, Jun 13, 2013 at 6:35 AM, Michael Segel michael_se...@hotmail.com wrote:
I could have sworn there was a thread on this already. (Maybe the HBase
list?)
The programming error was already mentioned: you are not actually overriding
the base class's method; rather, you have created a new one.
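Marking the method with @Override makes the compiler catch this. A minimal sketch (class name and body are made up):

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class MyMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        // Without @Override, a map() whose signature doesn't match the base
        // class compiles as a brand-new method, and the framework silently
        // runs Mapper's identity map() instead.
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(value, new LongWritable(1));
        }
    }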
Thanks,
Rahul
On Thu, Jun 13, 2013 at 11:12 AM, Omkar Joshi
omkar.jo...@lntinfotech.com wrote:
OK, but that link is broken; can you provide a working one?
Regards,
Hi Yuzhang,
Moving this question to the Hadoop user list.
Are you using MapReduce or writing your own YARN application? In
MapReduce, all maps must request the same amount of memory and all reduces
must request the same amount of memory. It would be trivial to do this in
your own YARN
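For the MapReduce case, the per-task sizes are just job configuration (a sketch with example values; the property names are the Hadoop 2 / MRv2 ones):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class MemorySettings {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // One size shared by every map task, another by every reduce task.
            conf.setInt("mapreduce.map.memory.mb", 1024);     // example values
            conf.setInt("mapreduce.reduce.memory.mb", 2048);
            Job job = Job.getInstance(conf, "memory-example");
            // ... set mapper/reducer/input/output here, then job.submit()
        }
    }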
Hi Mike,
Yes, I have also thought about HBase or Cassandra, but my data is pretty
much a snapshot; it does not require updates. Most of my aggregations will
also only need to be computed once and won't change over time, with the
exception of some aggregations based on the last N days of data.
When MR assigns data splits to map tasks, does it assign a set of
non-contiguous blocks to one map? The reason I ask is, thinking through the
problem, if I were the MR scheduler I would attempt to hand a map task a bunch
of blocks that all exist on the same datanode, and then schedule the map
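As far as I know, the default is the opposite: FileInputFormat makes one split per block, so a map task normally reads a single contiguous block, and the scheduler then tries to run that task on a node holding the block. The split-size computation is essentially (a paraphrase of FileInputFormat's logic, not the literal source):

    public final class SplitSizeSketch {
        // Mirrors FileInputFormat's computeSplitSize(): with the default
        // min/max settings the split size equals the block size, i.e. one
        // split (and hence one map task) per block.
        static long computeSplitSize(long blockSize, long minSize, long maxSize) {
            return Math.max(minSize, Math.min(maxSize, blockSize));
        }

        public static void main(String[] args) {
            long blockSize = 128L << 20;   // a 128 MB block
            // With defaults (minSize=1, maxSize=Long.MAX_VALUE): one 128 MB split.
            System.out.println(computeSplitSize(blockSize, 1L, Long.MAX_VALUE));
        }
    }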
I am running a Cloudera Hadoop cluster and have noticed that some of my
services are showing a status of Unknown Health. I have checked the
individual UIs (e.g. HBase, TaskTracker, DataNode, etc.) and they all appear to
be healthy and running smoothly.
However, for example when I look at the
Hi -
I wanted to know about TeaLeaf weblog files / databases. Is the data from
TeaLeaf proprietary, or is it in a format readable by other tools? Can anyone
with experience with this product advise?
Thanks,
Raj
Hi Raj - Tealeaf supports export via a variety of methods. It isn't a
proprietary or closed format. We pull Tealeaf data into our BigInsights
Hadoop-based offering all the time.
---
Sent from my Blackberry so please excuse typing and spelling errors.
Consider an input file of the following format:
Input file:
1 2
2 3
3 4
6 7
7 9
10 11
The output should be as follows:
1 2 3 4
6 7 9
10 11
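One way to read this is as merging pairs that share a number (connected components). A minimal single-machine sketch that reproduces the sample output (plain Java union-find, not a MapReduce job):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.TreeMap;
    import java.util.TreeSet;

    public class ChainGroups {
        static Map<Integer, Integer> parent = new HashMap<>();

        static int find(int x) {
            parent.putIfAbsent(x, x);
            int p = parent.get(x);
            if (p != x) {
                p = find(p);
                parent.put(x, p);  // path compression
            }
            return p;
        }

        static void union(int a, int b) {
            parent.put(find(a), find(b));
        }

        public static void main(String[] args) {
            int[][] pairs = {{1, 2}, {2, 3}, {3, 4}, {6, 7}, {7, 9}, {10, 11}};
            for (int[] p : pairs) union(p[0], p[1]);
            // Group members by their root; TreeMap/TreeSet keep things sorted
            // (for this sample, that matches the expected output order).
            Map<Integer, TreeSet<Integer>> groups = new TreeMap<>();
            for (int x : parent.keySet())
                groups.computeIfAbsent(find(x), k -> new TreeSet<>()).add(x);
            for (TreeSet<Integer> g : groups.values()) {
                StringBuilder line = new StringBuilder();
                for (int n : g) line.append(n).append(' ');
                System.out.println(line.toString().trim());
            }
        }
    }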