Guys,
I am a bit confused about the package availability and compatibility of the
upgraded Hadoop components - HDFS Federation and YARN (MRv2).
My questions are:
- Which download contains HDFS Federation?
- Any good documentation to get the daemons running on HDFS Federation?
- Is YARN compatible with older
Using Federated Namespaces is optional.
YARN's services use HDFS minimally on their own. It is your MR job or
your YARN application that really uses the filesystem. So you can have
YARN host itself on any namespace, and it wouldn't matter to your
actual jobs/apps, which will submit with their own
It should be visible from every NameNode machine. Have you tried this command:
bin/hdfs dfs -ls /yourdirectoryname/
On Wed, Sep 18, 2013 at 9:23 AM, Sandeep L sandeepvre...@outlook.com wrote:
Hi,
I resolved the issue.
There is some problem with /etc/hosts file.
One more question I would
Hi,
I have implemented a custom Writable that needs special metadata (an Apache
UIMA type system) to decode the input. This is much more complex metadata
than a simple schema, so I suppose I can't use HCat or similar things. I
would like to store this metadata only once per input file, e.g.
What is your config set to for mapred local dirs? And what are the permissions
on those directories?
All users need execute permissions on all the paths up to the local dir so
that they can create their own directories in there. For example, if one of the
mapred local dirs is /a/b/c/mapred, then
Hi,
How do I implement (say, in wordcount) combiner functionality if I am
using Python Hadoop streaming?
Thanks
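For what it's worth, Hadoop streaming accepts a `-combiner` command alongside `-mapper` and `-reducer`, and a reducer-style script usually works as the combiner. A minimal sketch (assuming the standard streaming contract: tab-separated `word\t1` lines, sorted by key on the combiner's stdin; the file name `combiner.py` is my own):

```python
#!/usr/bin/env python
# Streaming combiner for wordcount: sums counts for consecutive
# identical keys and re-emits "word\tcount" lines.
# Invoked e.g. as: hadoop jar hadoop-streaming.jar \
#   -mapper mapper.py -combiner combiner.py -reducer reducer.py ...
import sys

def combine(lines):
    """Yield 'word\tcount' with counts summed over runs of equal keys."""
    current_word, current_count = None, 0
    for line in lines:
        word, _, count = line.rstrip("\n").partition("\t")
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                yield "%s\t%d" % (current_word, current_count)
            current_word, current_count = word, int(count)
    if current_word is not None:
        yield "%s\t%d" % (current_word, current_count)

if __name__ == "__main__":
    for out in combine(sys.stdin):
        print(out)
```

Since the combiner may run zero or more times, it must emit the same key/value types the reducer expects, which this does.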
Hi,
I'm working on upgrading my cluster from CDH3u5 to CDH4. Trying to do the
upgrade in place rather than creating a new cluster and migrating over.
Doing this on a test cluster right now, but ran into an issue -
First I uninstalled the CDH3 packages and installed the CDH4 ones, then
upgraded
LMGTFY:
http://pydoop.sourceforge.net/docs/pydoop_script.html#pydoop-script-guide
On Wed, Sep 18, 2013 at 6:01 PM, jamal sasha jamalsha...@gmail.com wrote:
Hi,
How do I implement (say, in wordcount) combiner functionality if I am
using Python Hadoop streaming?
Thanks
Folks,
Has anyone run into this issue before:
java.io.IOException: Max block location exceeded for split: Paths:
/foo/bar
InputFormatClass: org.apache.hadoop.mapred.TextInputFormat
splitsize: 15 maxsize: 10
at
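The `splitsize: 15 maxsize: 10` pair suggests the split carried more block locations than the job-level cap (default 10). One possible workaround, assuming the `mapreduce.job.max.split.locations` property in MRv2 (verify the name against your Hadoop version), is to raise the cap in the job configuration:

```xml
<!-- Sketch: raise the per-split block-location cap above the
     observed splitsize (15 in the error above). -->
<property>
  <name>mapreduce.job.max.split.locations</name>
  <value>30</value>
</property>
```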
Hi Omkar,
It is my own custom AM that I am using, not the MR AM. But I still find it
hard to believe that a negative value could come from the getProgress() call,
which is always calculated by dividing positive numbers; it might be a
floating-point computation problem, as you are saying.
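Whatever the source of the bad value, a custom AM can guard against it by clamping the ratio before reporting it. A hedged sketch (names are mine, not from the thread): a 0/0 division yields NaN rather than a negative number, but clamping covers both cases.

```java
// Defensive progress calculation for a custom AM: guard against
// NaN (0/0), negative inputs, and values above 1 before reporting.
public final class ProgressUtil {
    private ProgressUtil() {}

    public static float safeProgress(long done, long total) {
        if (total <= 0) {
            return 0f; // avoids 0/0 -> NaN and nonsensical totals
        }
        float p = (float) done / (float) total;
        if (Float.isNaN(p) || p < 0f) {
            return 0f;
        }
        return Math.min(p, 1f);
    }
}
```

Clamping at the reporting boundary keeps a transient accounting glitch from propagating an out-of-range progress value to the RM.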