Dear All:
I am unable to start Hadoop even after setting HADOOP_INSTALL, JAVA_HOME and
JAVA_PATH. Please find the error message below:
anand_vihar@Latitude-E5540:~/hadoop-2.6.0$ start-dfs.sh --config
/home/anand_vihar/hadoop-2.6.0/conf
Starting namenodes on [localhost]
localhost: Error: JAVA_HOME is not set and could not be found.
Try to export JAVA_HOME in hadoop-env.sh
Best Regard,
Jeff Zhang
From: Anand Murali anand_vi...@yahoo.com
Reply-To: user@hadoop.apache.org
To: user@hadoop.apache.org, Anand Murali
Eating up the IOException in the mapper looks suspicious to me. That can
silently consume the input without producing any output. Also check the map
task's stdout logs for your console print output.
As an aside, since you are not doing anything in the reduce, try setting the
number of reducers to 0. That will skip the shuffle and reduce phase entirely.
Why are you reading the files with a BufferedReader in the map function?
The problem with your code might be due to the following:
The files in /data/path/to/file/d_20150330-1650 will be locally stored
and will not be accessible to the mappers running on different nodes, and
as in your
Tried export in hadoop-env.sh. Does not work either
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)
On Wednesday, April 1, 2015 1:03 PM, Jianfeng (Jeff) Zhang
jzh...@hortonworks.com wrote:
Try to export
I continue to get the same error. I
export JAVA_HOME=/home/anand_vihar/jdk1.0.7_u75 (in hadoop-env.sh)
When I echo $JAVA_HOME it shows me the above path, but when I run java -version it
gives me the OpenJDK version.
start-dfs.sh ... errors out saying JAVA_HOME is not set, but echo shows
JAVA_HOME.
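The symptom above (echo shows one path, java -version reports another JDK) means the shell and the JVM are resolving different things. A minimal, Hadoop-independent check of what a JVM process itself sees (the class name here is my own, not from the thread):

```java
// EnvCheck.java - print what the JVM itself resolves, independent of the shell.
public class EnvCheck {
    public static void main(String[] args) {
        // The JDK the running JVM was launched from, i.e. whatever `java` on PATH points to.
        System.out.println("java.home     = " + System.getProperty("java.home"));
        // The environment variable the Hadoop scripts read; may be unset or point elsewhere.
        System.out.println("JAVA_HOME env = " + System.getenv("JAVA_HOME"));
        System.out.println("java.version  = " + System.getProperty("java.version"));
    }
}
```

If java.home prints an OpenJDK path while JAVA_HOME prints the Oracle one, PATH is still resolving to the old binary, which would match the symptoms described.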
Anand,
Try Oracle JDK instead of Open JDK.
Regards,
Ramkumar Bashyam
On Wed, Apr 1, 2015 at 1:25 PM, Anand Murali anand_vi...@yahoo.com wrote:
Tried export in hadoop-env.sh. Does not work either
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)-
Very interesting, BTW. So you are trying to launch the app-master with a YARN
container but your own node manager without a YARN container, am I right?
Drake 민영근 Ph.D
kt NexR
On Wed, Apr 1, 2015 at 3:38 PM, Dongwon Kim eastcirc...@postech.ac.kr
wrote:
Thanks for your input but I need to launch my own node manager
Ok thanks. Shall do
Sent from my iPhone
On 01-Apr-2015, at 2:19 pm, Ram Kumar ramkumar.bash...@gmail.com wrote:
Anand,
Try Oracle JDK instead of Open JDK.
Regards,
Ramkumar Bashyam
On Wed, Apr 1, 2015 at 1:25 PM, Anand Murali anand_vi...@yahoo.com wrote:
Tried export in
Hi,
If you are using Ubuntu then add these lines to /etc/environment
JAVA_HOME=*actual path to jdk*
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:$JAVA_HOME/bin
Please put the actual path to JDK in the first line.
Regards,
Ravindra
On Wed, Apr 1, 2015 at 5:50
Thanks for your input but I need to launch my own node manager
(different from the Yarn NM) running on each node.
(which is not explained in the original question)
If I were to launch just a single master with a well-known address,
ZooKeeper would be a great solution!
Thanks.
Dongwon Kim
Anand,
Sorry about that, I was assuming Redhat/Centos.
For Ubuntu, try sudo update-alternatives --config java.
Sent from my Verizon Wireless 4G LTE smartphone
Original message
From: Anand Murali anand_vi...@yahoo.com
Date: 04/01/2015 7:22 AM (GMT-05:00)
To:
Dear Mr. Roland:
The alternatives command errors out. I have the extracted version of the Oracle
JDK 7. However, I am ignorant regarding its installation on Ubuntu. Can you point
me to installation material so that I can look it up and try?
Thanks
Regards,
Anand Murali 11/7, 'Anand Vihar', Kandasamy
Anand,
My guess is that your alternatives setup isn’t complete.
At a prompt, as su, run the command 'alternatives --config java'. Make sure
that the Oracle version is listed and is marked as the active one.
If it is not, go through the steps to make sure it is.
- rd
I'm doing precisely the opposite.
My own node manager (MY_NM) is an AM in YARN and, therefore, each
MY_NM is expected to run inside a YARN container.
What I am trying to do is execute the AM (MY_NM) on each slave.
For this reason, I need to launch an AM on a specific node, but
Hadoop-2.6.0 ignores
Hello,
I need your advice to start using Hadoop!
I created an AWS account and set up Elastic MapReduce to test the Amazon
solution.
But I need to know the best way to start using Hadoop.
thanks,
Adam
I meant /etc/environment. It should be present if you are using Ubuntu.
Regards,
Ravindra
On Wed, Apr 1, 2015 at 6:39 PM, Anand Murali anand_vi...@yahoo.com wrote:
Mr. Ravindra
I don't find any /etc/environment. Can you be more specific, please? I have
done whatever you are saying in a user
Mr. Ravindra:
I am using Ubuntu 14. Can you please provide the full path? I am logged in as
root and it is not found in /etc. In any case, I have tried what you suggested
by creating a batch file, and it does not work in my installation.
Thanks
Anand Murali 11/7, 'Anand Vihar', Kandasamy St,
Mr. Ravindra
I don't find any /etc/environment. Can you be more specific, please? I have done
whatever you are saying in a user-created batch program and run it, followed by
running hadoop-env.sh, and it still does not work.
Thanks
Anand Murali 11/7, 'Anand Vihar', Kandasamy St, Mylapore, Chennai
Mr. Roland:
This is what I get. How do I now get Oracle JDK to be identified
anand_vihar@Latitude-E5540:~$ sudo update-alternatives --config java
[sudo] password for anand_vihar:
There is only one alternative in link group java (providing /usr/bin/java):
Mr. Ravindra:
This is visible; however, I am unable to modify it, even though I have admin
privileges. I am new to the Linux environment. I shall be glad if you would advise.
However, as I told you earlier, I have created a batch program which contains
the JAVA_HOME setting, the HADOOP_INSTALL setting, and
Hi,
I have created a Mapper class [3] that filters out key-value pairs that
go to a specific partition. When I set the partitioner class in my code
[1], I get the error in [2] and I don't understand why this is
happening. Any help to fix this?
[1]
Configuration conf = cj.getConfiguration();
The error message is very clear: a class which extends Partitioner is
expected.
Maybe you meant to specify MyHashPartitioner ?
Cheers
On Wed, Apr 1, 2015 at 7:54 AM, xeonmailinglist-gmail
xeonmailingl...@gmail.com wrote:
Hi,
I have created a Mapper class[3] that filters out key values
See this for more details in how to write your own Custom Paritioner (even
if a bit outdated, they still give you the basic idea of what you need to
do).
http://hadooptutorial.wikispaces.com/Custom+partitioner
https://developer.yahoo.com/hadoop/tutorial/module5.html#partitioning
Regards,
Shahab
As the error tells you, you cannot use a class as a Partitioner if it does
not satisfy the interface requirements of the partitioning mechanism. You
need to set as the Partitioner a class which extends or implements the
Partitioner contract.
Regards,
Shahab
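A sketch of the contract described above. Hadoop's org.apache.hadoop.mapreduce.Partitioner is stubbed locally here so the snippet compiles on its own; in a real job you would extend the Hadoop class and register it with job.setPartitionerClass(MyHashPartitioner.class), MyHashPartitioner being the class name suggested in the thread:

```java
// Local stub of Hadoop's contract (real class: org.apache.hadoop.mapreduce.Partitioner).
abstract class Partitioner<KEY, VALUE> {
    // Must return a partition index in [0, numPartitions).
    public abstract int getPartition(KEY key, VALUE value, int numPartitions);
}

// A hash partitioner in the style the thread suggests: the framework expects
// a subclass of Partitioner, not an arbitrary class.
class MyHashPartitioner<K, V> extends Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numPartitions) {
        // Mask the sign bit so the modulo result is never negative.
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
```

The error in [2] is what appears when the class handed to the job does not extend this contract.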
On Wed, Apr 1, 2015 at 10:54 AM,
Sometimes my job will get the following error. What may be the reason for
this ? And is there any property that I can use to prevent this ?
Looks like someone got the same error.
http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-user/201502.mbox/%3c54e64f97.7070...@ulul.org%3E
2015-04-01
Hello All,
Did anyone get this error before? I am working on a database migration
task from PostgreSQL to MySQL.
Here is what I did:
I took the dumps using pg_dump from PostgreSQL and converted them to
MySQL using a PHP script.
I don't see any error in creating the tables in the MySQL db. I created
Please unsubscribe me from this list.
Regards,
Ravi Prasad Pentakota
India Software Lab, IBM Software Group
Phone: +9180-43328520 Mobile: 919620959477
e-mail:rapen...@in.ibm.com
From: Kumar Jayapal kjayapa...@gmail.com
To: user@hadoop.apache.org
Cc: Anand Murali
Hi there,
If log files are deleted without restarting the service, it seems that the logs
are lost for later operations, for example on the namenode and datanode.
Why can't log files be re-created when they are deleted, by mistake or on
purpose, while the cluster is running?
Thanks,
Jared
Dear Team,
I am trying to append content to a reducer output file using multiple
outputs.
My requirement is to write the reducer output to multiple folders, and the
data must be appended to the existing content.
For now, I have used a custom output format extending the text output
format.
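The append requirement itself can be illustrated without Hadoop: the key point is opening the existing file in append mode instead of truncating it, which is the same idea a custom TextOutputFormat subclass would apply to its output stream. The class, file name and helper below are illustrative only, not the poster's code:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

public class AppendDemo {
    // Append one line to a file, creating it if absent; existing content is preserved.
    static void appendLine(String path, String line) throws IOException {
        // FileWriter's second argument selects append mode instead of truncation.
        try (PrintWriter out = new PrintWriter(new FileWriter(path, true))) {
            out.println(line);
        }
    }

    public static void main(String[] args) throws IOException {
        String path = "part-r-00000.txt"; // illustrative file name
        appendLine(path, "first run");
        appendLine(path, "second run"); // does not overwrite the first line
    }
}
```

Note that plain HDFS output streams historically did not support reopening for append in all versions, which is why questions like this one come up on the list.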
Ok. Many thanks shall try
Sent from my iPhone
On 02-Apr-2015, at 7:48 am, Kumar Jayapal kjayapa...@gmail.com wrote:
$which java
make sure the paths are valid for your installation (change if using the
32-bit version):
/usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java
Hi,
Creating a batch program will not have the same effect. If you put the
variables in /etc/environment then they will be available to all users on the
operating system. HDFS doesn't run with root privileges.
You need to open the application with *sudo* or with root privileges to
modify it.
e.g. If