Hi
That is really helpful. Thank you very much.
Regards
2013/2/25 Harsh J
> Hi,
>
> A few points:
>
> - The property "fs.default.name" has been deprecated in favor of the
> newer name "fs.defaultFS".
> - When using nameservice IDs, you shouldn't rely on the RPC address.
> That is, your fs.defaultFS should instead simply be "hdfs://ns1"
I am using Ganglia.
Note that I have short-circuit reads enabled (I think; I never verified it was
working, but I do get errors if I run jobs as another user).
Also, if Ganglia's network use included the local socket, then I would see
network utilization in all cases. I see no utilization when using HB
Just an aside (I've not tried to look at the original issue yet), but
Scribe has not been maintained (nor has it seen a release) in over a year
now, judging by the commit history. The same goes for both Facebook's and
Twitter's forks.
On Mon, Feb 25, 2013 at 7:16 AM, Lucas Bernardi wrote:
> Yeah I looke
Given your other thread post, I'd say you may have an inconsistency in
the JDK deployed on the cluster. We generally recommend using the
Oracle JDK6 (for 1.0.4). More details on which version to pick can be
found at http://wiki.apache.org/hadoop/HadoopJavaVersions. In any
case, the JDK installation
Hi Robert,
How are you measuring the network usage? Note that unless short-circuit
reading is on, data reads are done over a local socket as well, and may
show up in network traffic observation tools too (but that does not mean
they went over the network).
On Mon, Feb 25, 2013 at 2:35 AM, Robert Dyer wrote
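For reference, a minimal sketch, assuming the branch-1 short-circuit read
settings from HDFS-2246 (the user name mentioned in the comment is
hypothetical):

    import org.apache.hadoop.conf.Configuration;

    public class ShortCircuitConf {
      public static Configuration clientConf() {
        Configuration conf = new Configuration();
        // Client side: read blocks directly from local disk, bypassing the
        // DataNode's TCP socket, when the data is local.
        conf.setBoolean("dfs.client.read.shortcircuit", true);
        // DataNode side (hdfs-site.xml): the job user must also be listed in
        // dfs.block.local-path-access.user, e.g. "robert" (hypothetical).
        return conf;
      }
    }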
Hi,
A few points:
- The property "fs.default.name" has been deprecated in favor of the
newer name "fs.defaultFS".
- When using nameservice IDs, you shouldn't rely on the RPC address.
That is, your fs.defaultFS should instead simply be "hdfs://ns1", and
the config will load the right RPC that is b
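To make that concrete, here is a minimal client-side sketch; "ns1" is the
nameservice from the example above, and everything else is assumed:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class NameserviceClient {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point at the logical nameservice, not a host:port; the client picks
        // the right NameNode RPC address from the dfs.ha.* settings for ns1.
        conf.set("fs.defaultFS", "hdfs://ns1");
        FileSystem fs = FileSystem.get(conf);
        System.out.println(fs.exists(new Path("/")));
      }
    }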
Could you send us the log messages (with timestamps) that you think
are behind the behavior you see? The new edit log format is different
from the one in 1.x, and uses several smaller edit log files. This new
format is described in PDFs attached at
https://issues.apache.org/jira/browse/HDFS-1073, a
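For reference, a hypothetical listing of a post-HDFS-1073 name directory
(file names illustrative), where each edits_* file covers a range of
transaction IDs and edits_inprogress_* is the segment currently being written:

    current/
      edits_0000000000000000001-0000000000000000007
      edits_0000000000000000008-0000000000000000023
      edits_inprogress_0000000000000000024
      fsimage_0000000000000000023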
Hi,
Sorry to bother you again.
I need to close this question myself.
Here is my core-site.xml:
<property>
  <name>fs.default.name</name>
  <value>hdfs://Hadoop01:8020</value>
</property>
Hi Azuryy, Harsh,
It is strange that the NameNode rolls edit logs all the time, even when
I do nothing on the NameNode via the client.
Any further help will be appreciated.
Regards.
2013/2/25 YouPeng Yang
> Hi Azuryy
>
> Yes, that is what it is.
> Thank you for your reply.
Hi Azuryy
Yes, that is what it is.
Thank you for your reply. I am making an effort to understand
federation and HA clearly.
Regards.
2013/2/25 Azuryy Yu
> I think you mixed up federation with HA, am I right?
>
> If the other namenode has no changes, then it doesn't do any edit log
> rolling. Fede
I think you mixed up federation with HA, am I right?
If the other namenode has no changes, then it doesn't do any edit log
rolling. Federated NNs don't maintain concurrency (I think you meant to
say they don't stay in sync?).
On Sun, Feb 24, 2013 at 11:09 PM, YouPeng Yang wrote:
> Hi All
>
> I'm testing the HDFS Fe
Hi Fatih,
Have you looked in the log files? Anything there?
JM
2013/2/24 Fatih Haltas
> I am always getting the Child Error. I googled, but I could not solve the
> problem. Has anyone encountered the same problem before?
>
>
> [hadoop@ADUAE042-LAP-V conf]$ hadoop jar
> /home/hadoop/project/hado
Yeah, I looked at Scribe; it looks good but sounds like too much for my
problem. I'd rather make it work the simple way. Could you please post your
code? Maybe I'm doing something wrong on the sync side. Maybe a buffer
size, block size, or some other parameter is different...
Thanks!
Lucas
On Sun, Fe
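On the sync side, a minimal writer sketch under the Hadoop 1.x API (the path
and key/value types are assumed), showing an explicit sync so a concurrent
reader does not see an empty file:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SyncedWriter {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/synced.seq"); // hypothetical path
        SequenceFile.Writer writer = SequenceFile.createWriter(
            fs, conf, path, LongWritable.class, Text.class);
        for (long i = 0; i < 100; i++) {
          writer.append(new LongWritable(i), new Text("record-" + i));
        }
        // Push buffered data to the DataNodes; without this (or close()),
        // readers may see a zero-length file until a block completes.
        writer.syncFs();
        writer.close();
      }
    }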
I am using the same version of Hadoop as you.
Can you look at something like Scribe, which AFAIK fits the use case you
describe?
Thanks
Hemanth
On Sun, Feb 24, 2013 at 3:33 AM, Lucas Bernardi wrote:
> That is exactly what I did, but in my case, it is as if the file were
> empty; the job cou
Hi Mike, Harsh,
Thank you for your reply.
I need some time to digest the above stuff.
Thanks
Regards
2013/2/25 Harsh J
> Mike,
>
> I don't see how SPOF comes into the picture when HA is already present
> in the releases that also carry Federation and each NN (federated or
> non) can b
The MapR videos on programming and map-reduce are all general videos.
The videos that cover capabilities like NFS, snapshots, and mirrors are all
MapR-specific, since ordinary Hadoop distributions like Cloudera,
Hortonworks, and Apache can't support those capabilities.
The videos that cover MapR adm
I have a small 6 node dev cluster. I use a 1GB SequenceFile as input to a
MapReduce job, using a custom split size of 10MB (to increase the number of
maps). Each map call will read random entries out of a shared MapFile
(that is around 50GB).
I set replication to 6 on both of these files, so all
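A hedged sketch of that setup using the new (org.apache.hadoop.mapreduce)
API; the paths and job name are made up:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;

    public class SmallSplitJob {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "small-split-job");
        job.setInputFormatClass(SequenceFileInputFormat.class);
        FileInputFormat.addInputPath(job, new Path("/data/input.seq"));
        // Cap splits at 10 MB to get many more map tasks than HDFS blocks.
        FileInputFormat.setMaxInputSplitSize(job, 10L * 1024 * 1024);
        FileSystem fs = FileSystem.get(conf);
        // A MapFile is a directory; replication applies to its data/index files.
        fs.setReplication(new Path("/data/shared.map/data"), (short) 6);
        fs.setReplication(new Path("/data/shared.map/index"), (short) 6);
      }
    }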
Mike,
I don't see how SPOF comes into the picture when HA is already present
in the releases that also carry Federation and each NN (federated or
non) can be assigned further Standby-NN roles. At this point we can
stop using the word "SPOF" completely for HDFS. It would do great good
for avoiding fur
I think part of the confusion stems from the fact that federation of name nodes
only splits a very large cluster into smaller portions of the same cluster.
If you lose a federated name node, you only lose a portion of the cluster, not
the whole thing. So now instead of one SPOF, you have two S
Hi,
Federated namenodes are independent of one another (except that they
both get reports from all the/common DNs in the cluster). It is
natural to see one roll its edit logs based on its own rate of
metadata growth, as compared to the other. Their edits, image, etc.
everything is independent - th
Ok sir. I have looked at it superficially, but now I will go through it
thoroughly.
--
Cheers,
Mayur
On Sun, Feb 24, 2013 at 8:21 PM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Hi Mayur,
>
> Have you looked at the link I sent you?
>
>
> http://www.michael-noll.com/tutorials/running-hadoop-on
Hi Mayur,
Have you looked at the link I sent you?
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
It will show you where to set JAVA_HOME and others.
JM
2013/2/24 Mayur Patil
> Hello there,
>
> Success!! I have installed the Hadoop-1.0.4 .deb file on
Hello there,
Success!! I have installed the Hadoop-1.0.4 .deb file on Ubuntu 12.04 LTS!!
In the /usr/bin/hadoop file, where should I set JAVA_HOME and
HADOOP_INSTALL?
I have OpenJDK 7 and 6 installed.
Thanks!!
--
Cheers,
Mayur.
Hi Mayur,
>
> How are you installing the
I am always getting the Child Error. I googled, but I could not solve the
problem. Has anyone encountered the same problem before?
[hadoop@ADUAE042-LAP-V conf]$ hadoop jar
/home/hadoop/project/hadoop-1.0.4/hadoop-examples-1.0.4.jar
aggregatewordcount /home/hadoop/project/hadoop-data/NetFlow test16
Sai,
Just use 127.0.0.1 in all the URIs you have. It's less complicated and
easily replaceable.
On Sun, Feb 24, 2013 at 5:37 PM, sudhakara st wrote:
> Hi,
>
> Execute ifconfig to find the IP of the system,
> and add this line to /etc/hosts:
> (your ip) ubuntu
>
> Then use the URI string: public static String fsURI = "hdfs:
Hi,
Execute ifconfig to find the IP of the system,
and add this line to /etc/hosts:
(your ip) ubuntu
Then use the URI string: public static String fsURI = "hdfs://ubuntu:9000";
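A minimal sketch (the class name is hypothetical) of that URI string in use:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class FsUriExample {
      public static String fsURI = "hdfs://ubuntu:9000";
      public static void main(String[] args) throws Exception {
        // "ubuntu" must resolve (via /etc/hosts) to the NameNode's IP.
        FileSystem fs = FileSystem.get(URI.create(fsURI), new Configuration());
        System.out.println(fs.getUri());
      }
    }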
On Sun, Feb 24, 2013 at 5:23 PM, Sai Sai wrote:
> Many thanks, Nitin, for your quick reply.
>
> Here's what I have in my hosts file and I am ru
Many thanks, Nitin, for your quick reply.
Here's what I have in my hosts file, and I am running in a VM; I'm assuming
it is pseudo-distributed mode:
127.0.0.1 localhost.localdomain localhost
#::1 ubuntu localhost6.localdomain6 localhost6
#127.0.1.1 ubuntu
127.0.0.1 ubu
If you want to use "master" as your hostname, then add such an entry to your
/etc/hosts file (see the hypothetical example below), or change hdfs://master
to hdfs://localhost.
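For illustration only (the IP address is made up), such an /etc/hosts entry
could look like:

    192.168.1.10    master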
On Sun, Feb 24, 2013 at 5:10 PM, Sai Sai wrote:
>
> Greetings,
>
> Below is the program I am trying to run, and I'm getting this exception:
Greetings,
Below is the program I am trying to run, and I'm getting this exception:
Test Start.
java.net.UnknownHostException: unknown host: master
at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:214)
at org.apache.hadoop.ipc.Client.getConn
Thanks, Mahesh, for your help.
Wondering if you can provide some insight into the below compare method using
byte[] in the SecondarySort example:
public static class Comparator extends WritableComparator {
    public Comparator() {
        super(URICountKey.class);
    }
    // Hypothetical completion of the truncated method: compare the
    // serialized keys byte-by-byte via WritableComparator's helper.
    public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
        return compareBytes(b1, s1, l1, b2, s2, l2);
    }
}
Thank you very much, but
no, this is the file in HDFS, and it is the exact path of the NetFlow data in
HDFS. hadoop-data is the HDFS home directory; before downgrading my JDK,
this command worked well.
On Sunday, February 24, 2013, sudhakara st wrote:
> Hi,
> You're specifying the
Hi,
You're specifying the input directory on the local file system, not in HDFS.
Copy some text file to the HDFS user home directory using '-put' or
'-copyFromLocal', then try to execute wordcount with that home directory as
the input directory, as in the sketch below.
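For illustration, with hypothetical file and directory names:

    hadoop fs -put /home/hadoop/netflow.txt /user/hadoop/netflow.txt
    hadoop jar hadoop-examples-1.0.4.jar wordcount /user/hadoop /user/hadoop/out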
On Sun, Feb 24, 2013 at 3:29 PM, Fatih Haltas wrote:
>
>
> Hi
Hi Hemanth,
Thanks for your great help,
I am really much obliged to you.
I solved this problem by changing my Java compiler version, but now, though I
changed every node's configuration, I am getting this error even when I try to
run the wordcount example without making any changes.
What may be the reason