Hi,
I'm testing out native lib support on our amd64 test cluster
running 0.20.205. Running the following
./bin/hadoop jar hadoop-test-0.20.205.0.jar testsequencefile -seed 0
-count 1000 -compressType RECORD xxx -codec
org.apache.hadoop.io.compress.GzipCodec -check 2
it fails with
Oh dear, I feel such a fool. However, in the spirit of knowledge-sharing I
thought I’d pass back my results (I hate it
when I find a thread where somebody has exactly the same problem I’m having and
they then just close it by saying
they’ve fixed it, without saying *how*).
It seems that my
On 14/11/11 20:46, Raj V wrote:
Hi Stephen
This is probably happening during jobtracker start. Can you provide any
relevant logs from the tasktracker log file?
You are correct, there is even a helpful message
2011-11-16 15:05:58,076 WARN org.apache.hadoop.mapred.JobTracker:
Incorrect
Solved it. IIUC that is because, by default, the conf/ subdirectory is
not part of the classpath in 0.23. You need to specify it using the
--config switch:
$ hdfs --config ~/hadoop/conf/ namenode
whereas before you'd have typed
$ hadoop namenode
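An alternative that may save retyping: the launcher scripts also consult the HADOOP_CONF_DIR environment variable, so exporting it once should be equivalent to passing --config on every invocation (the path below is illustrative):

```shell
# Point the launcher scripts at the config directory once, instead of
# repeating --config each time (illustrative path):
export HADOOP_CONF_DIR="$HOME/hadoop/conf"
# hdfs namenode    # now equivalent to: hdfs --config ~/hadoop/conf/ namenode
```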
On Wed, Nov 16, 2011 at 1:33 PM, Petru Dimulescu
Hello Stephen,
This is surely a bug. Could you file a new JIRA for this?
But I feel it's pretty strange that the native libs do not come packed
inside their dedicated native folder and are instead mixed around with
.jars under lib/ itself. Right now these files appear to be duplicated
as well,
We will be adding more memory into our master node in the near future.
We generally don't mind if our map/reduce jobs are unable to run for a
short period but we are more concerned about the impact this may have on
our HBase cluster. Will HBase continue to work while Hadoop's name-node
and/or
Hi all,
I'm wondering if there is a way to get output messages that are printed
from the main class of a Hadoop job.
Usually > out.log 2>&1 would work, but in this case it only saves the
output messages printed in the main class before starting the job.
What I want is the output messages that
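For the part that is captured locally, redirecting both streams of the submitting JVM is the usual idiom (jar and class names below are hypothetical; the stand-in function exists only so the redirection can be shown end to end). Messages printed inside the map/reduce tasks themselves end up in the per-task logs on the tasktrackers, not in this file:

```shell
# Real invocation would look like (hypothetical jar/class names):
#   hadoop jar myjob.jar MyMainClass input output > out.log 2>&1
# Stand-in for the job client so the idiom is demonstrable here:
run_job() { echo "client-side output"; echo "client-side diagnostics" 1>&2; }
run_job > out.log 2>&1   # stdout goes to the file, stderr is folded in too
grep -c "client-side" out.log   # both lines were captured
rm -f out.log
```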
Hi guys: In a shared cluster environment, what's the best way to reduce the
number of mappers per job ? Should you do it with inputSplits ? Or simply
toggle the values in the JobConf (i.e. increase the number of bytes in an
input split) ?
--
Jay Vyas
MMSB/UCHC
The HBase-Writer team is happy to announce that HBase-Writer 0.90.3 is
available for download:
http://code.google.com/p/hbase-writer/downloads/list
HBase-Writer 0.90.3 is a maintenance release that fixes library
compatibility with older
versions of Heritrix and HBase. More details may be
After a month's hiatus for Hadoop World, we're back! The December Hadoop
meetup will be held Wednesday, December 14, from 6pm to 8pm. This meetup
will be hosted by Splunk at their office on Brannan St.
As usual, we will use the discussion-based unconference format. At the
beginning of the meetup
Just set the block size to 128M or 256M; it may reduce the number of mappers per job.
2011/11/17 Jay Vyas jayunit...@gmail.com
Hi guys : In a shared cluster environment, whats the best way to reduce the
number of mappers per job ? Should you do it with inputSplits ? Or simply
toggle the values in the
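To make the arithmetic concrete: with the stock FileInputFormat the split size works out to max(minSplitSize, min(maxSplitSize, blockSize)), and the number of map tasks is roughly ceil(totalBytes / splitSize). A quick sanity check of that estimate (pure shell arithmetic; the input size is made up):

```shell
# Estimate map-task count for a 10 GB input at two split sizes.
total=$((10 * 1024))                        # input size in MB
for split in 128 256; do
  maps=$(( (total + split - 1) / split ))   # ceil(total / split)
  echo "split=${split}MB -> ~${maps} mappers"
done
```

Doubling the split size roughly halves the mapper count, which is why raising the block size (or mapred.min.split.size) is the usual lever.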
Hi guys !
Q I see that createCache() method of JobInProgress is involved in
assignment of input splits across nodes in Hadoop.
Which classes are involved in assignment of input splits of jobs to nodes ?
I am interested in modifying this assignment policy. How can I do it?
Q How can i access
Yes, you're right, but:
1) Waste of disk space: this is not right; this will not waste the disk
space of the datanode. If you don't believe it, you can see the code.
2) Difficulty to balance HDFS: this may be true.
3) Low Map stage data locality: why?
2011/11/17 He Chen airb...@gmail.com
Hi Jay Vyas
Ke
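On point 3: if the split size is raised above the HDFS block size, each map task spans several blocks, and in the worst case only one of them sits on the task's local datanode. A rough illustration of that pessimistic bound (shell arithmetic; actual placement can be better than this):

```shell
# With 256 MB splits over 128 MB blocks, each map reads 2 blocks;
# in the worst case only 1 of those is local to the map task.
block=128; split=256
blocks_per_map=$(( split / block ))
remote_pct=$(( 100 * (blocks_per_map - 1) / blocks_per_map ))
echo "up to ${remote_pct}% of each map's input read over the network"
```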
On 16/11/11 14:52, stephen mulcahy wrote:
So, digging further - hadoop seems to want to create a file
${mapred.system.dir}/<job id>/jobToken
for each job I submit.
I assume this file is related to the new security stuff. Can I disable
this activity until I require the security functionality or