Hi,
I'm a bit confused about the local disk and the HDFS disk.
Through my admin page, I see information like the following.
Configured Capacity : 254.05 GB
DFS Used : 207.09 GB
Non DFS Used : 16.87 GB
DFS Remaining : 30.09 GB
DFS Used% : 81.52 %
DFS Remaining% : 11.84 %
I only made
Hi,
Inline explanation:
On Tue, Nov 6, 2012 at 1:41 PM, Elaine Gan elaine-...@gmo.jp wrote:
Hi,
I'm a bit confused about the local disk and the HDFS disk.
Through my admin page, I see information like the following.
Configured Capacity : 254.05 GB
Configured Capacity = Sum(Configured DN capacities)
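For anyone reading along, the admin-page numbers are internally consistent with that formula. Non DFS Used is simply whatever the configured capacity is neither holding as HDFS blocks nor offering as free DFS space:

Non DFS Used = Configured Capacity - DFS Used - DFS Remaining
             = 254.05 - 207.09 - 30.09
             = 16.87 GB

and DFS Used% = 207.09 / 254.05 ≈ 81.52%, DFS Remaining% = 30.09 / 254.05 ≈ 11.84%, matching the report above.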
Dear Harsh,
Thank you for the very clear explanation.
Very well noted now :)
Hi,
Inline explanation:
On Tue, Nov 6, 2012 at 1:41 PM, Elaine Gan elaine-...@gmo.jp wrote:
Hi,
I'm a bit confused about the local disk and the HDFS disk.
Through my admin page, I see information like the
Thanks for all your answers so far! There's still one open question that I
can't seem to find an answer to in the source code or documentation. When
I specify the two source directories of my two datasets to be joined
through CompositeInputFormat and say dataset A comes first and B second,
will
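For reference, the wiring usually looks roughly like the sketch below (old mapred API; the class name JoinSetup and the paths /data/A and /data/B are placeholders, not from the original mail). As far as I know, the order of the paths in compose() is the order in which each source's value appears in the TupleWritable handed to the mapper:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.KeyValueTextInputFormat;
import org.apache.hadoop.mapred.join.CompositeInputFormat;

public class JoinSetup {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(JoinSetup.class);
    // Listing A before B puts A's record at position 0 of the
    // TupleWritable the mapper receives, and B's at position 1.
    String expr = CompositeInputFormat.compose("inner",
        KeyValueTextInputFormat.class,
        new Path("/data/A"), new Path("/data/B"));
    conf.set("mapred.join.expr", expr);
    conf.setInputFormat(CompositeInputFormat.class);
    // ... set mapper, output format, etc., then:
    JobClient.runJob(conf);
  }
}

The usual caveat applies: both inputs must be sorted on the join key and identically partitioned for a map-side join to work at all.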
Hi,
I'm setting my combiner and reducer to the same Java class. Is there any
API that could tell me the context in which the class is running after
the Hadoop job is submitted to the cluster, i.e. whether the class is running
as a combiner or as a reducer? I need this information to change the
Hi Prasad,
My reply inline.
On Tue, Nov 6, 2012 at 4:15 PM, Prasad GS gsp200...@gmail.com wrote:
Hi,
I'm setting my combiner and reducer to the same Java class. Is there any API
that could tell me the context in which the class is running after the
Hadoop job is submitted to the
Hi Bertrand,
I believe the framework does give a few combiner statistics of its own
(like in/out records and such). If your combiner class is separate,
then instantiating counters in it with apt naming should address the
need, since the class itself will be separately instantiated.
Even if we
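A minimal sketch of what Harsh describes (new mapreduce API; the names SumCombiner, MyApp and CombinerInvocations are made up for illustration): keep the combiner a separate class and let it register its own counters, so the job's counter output tells you exactly what ran on the combine side:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Used only via job.setCombinerClass(SumCombiner.class), so any
// counters it bumps are unambiguously combiner-side activity.
public class SumCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {
  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable v : values) {
      sum += v.get();
    }
    context.getCounter("MyApp", "CombinerInvocations").increment(1);
    context.write(key, new IntWritable(sum));
  }
}

The reducer proper stays its own class, and neither one has to guess which role it is playing.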
Hey Andy,
We also have a FAQ wiki page at http://wiki.apache.org/hadoop/FAQ you
can send these kinds of things to. Let me know your Apache Hadoop Wiki
user-ID (you can sign up there), and we'll grant you edit rights.
On Tue, Nov 6, 2012 at 8:06 PM, Kartashov, Andy andy.kartas...@mpac.ca wrote:
Hadoopers,
Last month I asked question 2, and this is what I have recently learned.
If you decide to override Hadoop's default directories (recommended), this is
what to keep in mind:
You create directories:
A. On Local Linux FS, using $ mkdir:
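(A note from my own reading, in case it helps: the local-FS directories, e.g. those behind dfs.name.dir, dfs.data.dir and mapred.local.dir, are the ones you create with plain mkdir and chown to the daemon users, while directories such as mapred.system.dir live inside HDFS and must be created with 'hadoop fs -mkdir' instead; mixing the two up is a common source of startup errors.)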
***
1. What:
If you exceed the amount of physical memory available, memory pages will be
written to a temp (swap) space on disk. The act of moving memory pages from
memory to disk and back again is known as 'swapping'.
HBase is highly sensitive to the latency of swapping memory in and out of
physical memory
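(The common mitigation, as far as I know: lower the kernel's tendency to swap by setting vm.swappiness=0 in /etc/sysctl.conf, apply it with 'sysctl -p', and size the Java heaps so all daemons fit comfortably in physical RAM.)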
Thanks for all the responses. This is very useful information.
On Mon, Nov 5, 2012 at 11:40 PM, Serge Blazhiyevskyy
serge.blazhiyevs...@nice.com wrote:
I second this proposed solution. DistCp works very well for backing up
data on a separate cluster.
From: Bharath Mundlapudi
Hadoopers,
How does one start daemons remotely when the scripts normally require the root
user to start them? Do you modify the scripts?
Thanks,
Andy, I noticed you are splitting up your /etc/init.d startup -- we're
new-ish to Hadoop and found that the supplied init.d script doesn't
necessarily stop services properly, so I end up doing it manually. If
you could share your info that would be great... I can see where they
can be
Forrest,
I must admit, I am very new myself. Two months ago I didn't know what $sudo
was. :) So, do not take my comments too seriously. I could be wrong about many
issues.
There are 5 daemon scripts in /etc/init.d/. I am using MRv1, so mine are:
hadoop-hdfs-datanode
hadoop-hdfs-namenode
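(On Forrest's stopping problem: each of those scripts can be driven individually, e.g. 'sudo service hadoop-hdfs-datanode stop' or '/etc/init.d/hadoop-hdfs-datanode stop', and the 'status' action will tell you whether the daemon actually went down, which is handy when a combined stop doesn't work cleanly.)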
Hi,
I am very new to Hadoop. I have run the tutorials for map and reduce.
I would like more information on Hadoop's web interface and details.
Thanks,
Bharati
Have you installed the snappy library (which should have libsnappy.so in
/usr/lib or /usr/local/lib) first?
On Wed, Nov 7, 2012 at 5:30 AM, alx...@aim.com wrote:
When I used hadoop-1.0.4 without the installation of hadoop-snappy I saw
in the logs
WARN snappy.LoadSnappy: Snappy native library
Yes, I have installed snappy-1.0.5.
Here is the output of
ls -l /usr/local/lib
total 1468
lrwxrwxrwx. 1 root root     52 Nov  5 16:00 libjvm.so -> /usr/java/jdk1.6.0_37/jre/lib/amd64/server/libjvm.so
-rw-r--r--. 1 root root 946358 Nov  3 17:44 liblzo2.a
-rwxr-xr-x. 1 root root    866 Nov  3
Alex,
You need to make sure libsnappy.so is available in Hadoop's Java library path.
The way Hadoop sets the Java library path is that, based on the JDK, it will add
either the 32-bit or the 64-bit libraries.
They need to be present in $HADOOP_HOME/lib/native/Linux-amd64-64 for a 64-bit JDK
or
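(In practice that usually means copying or symlinking the library into place, e.g. 'ln -s /usr/local/lib/libsnappy.so $HADOOP_HOME/lib/native/Linux-amd64-64/libsnappy.so', then restarting the daemons; the log line should then change from the 'not loaded' warning to 'Snappy native library loaded'.)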
Cool, thank you Harsh.
From: Harsh J [ha...@cloudera.com]
Sent: Wednesday, November 07, 2012 9:05 AM
To: user@hadoop.apache.org
Subject: Re: ooozie.
Hi Siva,
You can do it by taking care of which jars the application (Oozie)
starts with, but I wouldn't