on the test cluster because I want a hot upgrade on the prod
cluster.
On 2013-11-21 7:23 PM, "Joshi, Rekha" <rekha_jo...@intuit.com> wrote:
Hi Azurry,
This error occurs when FSImage finds a previous fs state, and as the log states
you would need to either finalizeUpgrade or rollback to proceed. Below -
bin/hadoop dfsadmin -finalizeUpgrade
hadoop dfsadmin -rollback
On a side note, for a small test cluster on which one might suspect you are th
Hi,
Storing is not much of an issue; it is the processing that needs some thought.
At a basic level you should be able to use SequenceFileInputFormat and
ByteArrayInputStream (and the corresponding output classes) for the binary files.
There are some experiments on audio, video here -
Audio -
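A minimal sketch of that route (not from the thread; the job wiring, class name,
and key/value types are my assumptions - adjust to how the binary data was written):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Sketch: assumes the binary payloads were stored as BytesWritable values
// in a SequenceFile, keyed by Text; wire it up with
// job.setInputFormatClass(SequenceFileInputFormat.class);
public class BinaryPayloadMapper extends Mapper<Text, BytesWritable, Text, Text> {
  @Override
  protected void map(Text key, BytesWritable value, Context context)
      throws IOException, InterruptedException {
    // getBytes() may return a padded buffer, so honor getLength()
    ByteArrayInputStream in =
        new ByteArrayInputStream(value.getBytes(), 0, value.getLength());
    // ... parse the binary record from 'in' here ...
    context.write(key, new Text("bytes=" + value.getLength()));
  }
}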
I have almost never silenced the logs on the terminal, only tuned config for the
log path/retention period, so just off the top of my mind: -S/--silent for no
logs and -V/--verbose for maximum logging usually work on executables; --help
will confirm whether it is possible.
If it doesn't work, well, it should :-)
Thanks
Rekha
Refer to the hadoop fs put/get syntax for placing input files on HDFS (you can
automate this with a script), and to Pig's DUMP and STORE after the MapReduce
step to produce your output directory -
http://pig.apache.org/docs/r0.9.2/start.html#Pig+Tutorial+Files
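For example (a sketch; paths and the Pig alias are placeholders):

hadoop fs -put localfile.txt /user/me/input/
hadoop fs -get /user/me/output/part-* .

-- in Pig, after your processing:
DUMP results;
STORE results INTO '/user/me/output';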
Thanks
Rekha
From: A Geek <dw...@live.com>
Reply-To: user@hadoop.apache.org
Hi Yongzhi,
Well, I don't know if this will help, but I looked into the source code and can
see all the token and authentication related features discussed in the design
under o.a.h.hdfs.security.*, o.a.h.mapreduce.security.*, o.a.h.security.*,
and o.a.h.security.authentication.*
And HADOOP-4487 is marked fixed
Regards
Bertrand
On Wed, Sep 12, 2012 at 12:09 PM, Joshi, Rekha <rekha_jo...@intuit.com> wrote:
Hi Piter,
JobControl just means there are multiple complex jobs, but you will still see
the information for each job on your hadoop web interface/webhdfs, wouldn't you?
Or if that does not work, you might need to use Reporters/Counters to get the
log info data in a custom format as needed.
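A small sketch of the counter route (the enum and its meaning are made up;
assumes the new mapreduce API):

// Declared in your Mapper or Reducer class:
public enum QualityCounters { MALFORMED_RECORDS }

// Inside map()/reduce():
context.getCounter(QualityCounters.MALFORMED_RECORDS).increment(1);
// Totals appear per job in the web UI and job history.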
Thanks
Rekha
But I do not see that in the API, so even if I extend InputFormat/RecordReader,
I will not be able to have a setEncoding() feature on my file format. Having
that would be a good solution.
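One workaround sketch (not in the API; the config key my.input.encoding and the
rawBytes/length variables are hypothetical): read the charset from the job
Configuration inside a custom RecordReader -

import java.nio.charset.Charset;

// In RecordReader.initialize(), pick up the charset from the job config:
String enc = context.getConfiguration().get("my.input.encoding", "UTF-8");
Charset charset = Charset.forName(enc);
// Then decode each record's raw bytes with it instead of assuming UTF-8:
String line = new String(rawBytes, 0, length, charset);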
Thanks
Rekha
On 11/09/12 12:37 PM, "Joshi, Rekha" wrote:
Hi Ajay,
Try SequenceFileAsBinaryInputFormat ?
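For instance (a sketch, assuming the new mapreduce API; both key and value then
arrive as BytesWritable):

job.setInputFormatClass(SequenceFileAsBinaryInputFormat.class);
// mapper signature: map(BytesWritable key, BytesWritable value, Context context)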
Thanks
Rekha
On 11/09/12 11:24 AM, "Ajay Srivastava" wrote:
>Hi,
>
>I am using the default inputFormat class for reading input from text files
>but the input file has some non-UTF-8 characters.
>I guess that TextInputFormat class is the default inputForm
Hi Hemanth,
I am still getting my hands dirty with yarn, so this is preliminary: the hdfs
path in AggregatedLogsBlock points to /tmp/logs, and since you say the service
is unable to read it, possibly check permissions or change the configuration in
yarn-site.xml and try?
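For reference, the property controlling that location in yarn-site.xml (shown
with its usual default; verify against your version):

<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/tmp/logs</value>
</property>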
Thanks
Rekha
From: Hemanth Yami
Hi Andy,
If you are referring to HADOOP_CLASSPATH, that is an env variable on your
cluster, or set via config xml. But if you need your own environment variables
for streaming, you may use -cmdenv PATH= on your streaming command. Or if you
have specific jars for the streaming process, -libjars on
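A sketch with both options on a streaming invocation (the jar name, scripts,
and variable are placeholders):

hadoop jar hadoop-streaming.jar \
  -libjars extra.jar \
  -cmdenv MY_PATH=/opt/tools/bin \
  -input in -output out \
  -mapper mapper.py -reducer reducer.py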
Hi May,
Not sure which hadoop deployment you use (Cloudera/HortonWorks/Apache?), but the
hadoop wiki - http://wiki.apache.org/hadoop/ - can help.
For tracing IO logs, check your map/reduce flow -
OutputCollector/Reporter/Context. To get an understanding of filesystem IO,
refer to the source under o.a.h.fs.* and
Hi Abhay,
Ideally the error line - "Caused by:
org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid
local directory for output/map_128.out" - suggests you either do not have
permissions for the output folder or the disk is full.
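Quick checks worth running on the affected node (the path is a placeholder for
your mapred.local.dir):

df -h /data/mapred/local    # disk full?
ls -ld /data/mapred/local   # write permission for the task user?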
Also, 5 is not a big number for thread spawning, (