Daniel,
It looks like you can only communicate with the NameNode to do metadata-only
operations (e.g. listing, creating a directory or an empty file)...
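One quick way to confirm that split is to compare a metadata-only operation with an actual data write; a rough sketch (the paths here are just placeholders):

```shell
# Metadata-only operations go through the NameNode alone and may still succeed:
hdfs dfs -mkdir /tmp/probe          # create a directory
hdfs dfs -touchz /tmp/probe/empty   # create a zero-length file
hdfs dfs -ls /tmp/probe             # list it

# Writing real data requires healthy DataNodes; this is where it would fail:
hdfs dfs -put some-local-file.txt /tmp/probe/
```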
Did you format the NameNode correctly?
A very similar issue is described here:
http://www.manning-sandbox.com/thread.jspa?messageID=126741. The last reply
I did that more than once; I just retried it from the beginning. I zapped the
directories, recreated them with hdfs namenode -format, restarted
HDFS, and I am still getting the very same error.
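For reference, a typical clean re-format looks roughly like this (the directory paths are placeholders; they should match whatever dfs.namenode.name.dir and dfs.datanode.data.dir point to in hdfs-site.xml):

```shell
stop-dfs.sh                                    # stop the NameNode and DataNodes
rm -rf /var/hadoop/name/* /var/hadoop/data/*   # zap the metadata and block dirs
hdfs namenode -format                          # write a fresh namespace
start-dfs.sh                                   # bring HDFS back up
hdfs dfsadmin -report                          # check that DataNodes registered
```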
I previously posted the report. Is there anything in this report that
indicates I am not having
Adam,
That's not the issue; I substituted the name in the first report. The
actual hostname is feynman.cids.ca.
-
Daniel Savard
2013/12/3 Adam Kawa kawa.a...@gmail.com
Daniel,
I see that in the previous hdfs report you had hosta.subdom1.tld1, but now
you have
Adam,
here is the link:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
Then, since it didn't work, I tried a number of things, but my configuration
files are really skinny and there isn't much in them.
-
Daniel Savard
2013/12/3
Adam and others,
I solved my problem by increasing the filesystem holding the data by 3 GB. I
didn't try increasing it in smaller steps, so I don't know exactly at
which point I had enough space for HDFS to work properly. Is there anywhere
in the documentation a place we can have a list of
FYI,
I recreated a new filesystem from scratch to hold the HDFS data and increased
its size until the put operation succeeded. It took a filesystem of at least
650 MB to be able to copy a 100 KB file. I increased the space in
10 MB chunks each time to narrow down the value.
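In case it helps anyone hitting the same wall: part of what the DataNode counts against a volume is the space held back by dfs.datanode.du.reserved (bytes per volume reserved for non-HDFS use), which may explain why a very small filesystem rejects writes. A hedged sketch for checking the effective setting against what the cluster reports:

```shell
# Print the effective reserved-space setting (bytes per volume)
hdfs getconf -confKey dfs.datanode.du.reserved

# Compare against what the DataNodes actually report
hdfs dfsadmin -report | grep -E 'Configured Capacity|DFS Remaining'
```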
Here is the output of
Hi Daniel,
first of all, before posting to a mailing list, take a deep breath and
let your frustrations out. Then write the email. Using words like
crappy, toxicware, and nightmare is not going to help you get
useful responses.
While I agree that the docs can be confusing, we should try to stay
Hi Daniel,
I agree with you that the 2.2 documents are very unfriendly.
In many documents, the only change from 1.x to 2.2 is the format.
There are still many documents to be converted (e.g. Hadoop Streaming).
Furthermore, there are a lot of dead links in the documents.
I've been trying to fix dead links,
Daniel,
Apologies if you had a bad experience. If you can point the problems out to
us, we'd be more than happy to fix them; alternatively, we'd *love* it if you
could help us improve the docs too.
Now, for the problem at hand:
http://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo is one place to look.
Thanks Arun,
I already read and did everything recommended at the referenced URL. There
isn't any error message in the logfiles. The only error message appears
when I try to put a non-empty file on HDFS, as posted above. Besides that,
absolutely nothing in the logs tells me something is wrong
I am trying to configure Hadoop 2.2.0 from source code and I found the
instructions really crappy and incomplete. It is as if they were written so
that no one could do the job themselves and would have to contract someone
else to do it or buy a packaged version.
I have been struggling for about three days with