Chris and Sato,
Thanks a bunch! I've been so swamped by these and other issues we've been
having while scrambling to upgrade our cluster that I forgot to file a bug. I
certainly complained aloud that the docs were insufficient, but I didn't do
anything to help the community, so thanks a bunch for
Hello Billy,
I think your experience indicates that our documentation is insufficient for
discussing how to configure and use the alternative file systems. I filed
issue HADOOP-11863 to track a documentation enhancement.
https://issues.apache.org/jira/browse/HADOOP-11863
Please feel free to
Hi,
The first warning indicates an out-of-memory error in the JVM.
Did you give the DataNode daemons enough max heap memory?
DN daemons use a max heap size of 1 GB by default, so if your DN requires more
than that, it will be in trouble.
You can check the memory consumption of your DN daemons (e.g., top
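The DN heap can be raised in hadoop-env.sh; a minimal sketch (the 4g figure and the install path are illustrative, size the heap to your block count):

```shell
# In $HADOOP_HOME/etc/hadoop/hadoop-env.sh (value illustrative):
export HADOOP_DATANODE_OPTS="-Xmx4g $HADOOP_DATANODE_OPTS"
```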
http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html
Rolling Upgrade, 2.4.1 to 2.6.0
Downgrade without downtime, 2.6.0 to 2.4.1
Is downgrade without downtime not possible?
org.apache.hadoop.hdfs.server.datanode.DataNode: Reported NameNode version
I agree with Sato's statement that the service loader mechanism should be able
to find the S3N file system classes via the service loader metadata embedded in
hadoop-aws.jar. I expect setting fs.s3n.impl wouldn't be required. Billy, if
you find otherwise in your testing, please let us know.
Sudo what, my friend? There are so many options to sudo.
Sent from my iPhone
On 23-Apr-2015, at 8:20 am, sandeep vura sandeepv...@gmail.com wrote:
Anand,
Try sudo it will work
On Wed, Apr 22, 2015 at 5:58 PM, Shahab Yunus shahab.yu...@gmail.com wrote:
Can you try sudo?
Hi,
I am getting this error when I run the job in Sqoop2 from Hue. I
see lots of people talking about this error but no proper resolution.
Was anyone able to resolve this issue? Any help is appreciated.
2015-04-22 21:36:07,281 ERROR
This has been introduced as a 2.7.0 feature, see MAPREDUCE-5583.
On Tue, Apr 21, 2015 at 4:32 AM, Zhe Li allenlee...@gmail.com wrote:
Hi, after upgrading to Hadoop 2 (YARN), I found that
'mapred.jobtracker.taskScheduler.maxRunningTasksPerJob' no longer works,
right?
One workaround is to use
Run this command in the terminal from the root directory:
$ sudo nano /etc/hosts (it will prompt for the root password)
Then comment out that line in the hosts file: #127.0.1.1
Add this line: 127.0.0.1 localhost
Save the hosts file and exit.
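The same edit can be scripted; a sketch, shown here against a scratch copy rather than the real /etc/hosts (on the live file you would run the sed under sudo):

```shell
# Demonstrate the edit on a scratch copy of the hosts file.
printf '127.0.1.1 myhost\n' > hosts.demo
# Comment out the 127.0.1.1 line.
sed -i 's/^127\.0\.1\.1/#127.0.1.1/' hosts.demo
# Ensure localhost maps to 127.0.0.1.
grep -q '^127\.0\.0\.1' hosts.demo || echo '127.0.0.1 localhost' >> hosts.demo
cat hosts.demo
```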
On Thu, Apr 23, 2015 at 8:39 AM, Anand Murali
Many thanks my friend. Shall try it right away.
Anand Murali, 11/7, 'Anand Vihar', Kandasamy St, Mylapore, Chennai - 600 004,
India. Ph: (044)-28474593 / 43526162 (voicemail)
On Thursday, April 23, 2015 10:51 AM, sandeep vura sandeepv...@gmail.com
wrote:
run this command in the
Hi Billy, Chris,
Let me share a couple of my findings.
I believe this was introduced by HADOOP-10893,
which shipped in 2.6.0 (HDP 2.2).
1. fs.s3n.impl
We added a property to the core-site.xml file:
You don't need to set this explicitly. It was never needed in
previous versions either.
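A minimal core-site.xml for S3N along those lines needs only the credentials (property names as documented for Hadoop 2.6; the key values are placeholders), with hadoop-aws.jar and its AWS SDK dependency on the classpath:

```xml
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>YOUR_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>YOUR_SECRET_KEY</value>
</property>
<!-- No fs.s3n.impl entry: the FileSystem is discovered via the service loader. -->
```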
Hi All,
I am currently trying to build Hadoop 2.6 for Windows from the source
code, but I encountered a problem in the libwinutils.c file. The problem is
with the following line of code:
const WCHAR* wsceConfigRelativePath = WIDEN_STRING(STRINGIFY(WSCE_CONFIG_DIR)) L\\
Thanks Naga for your reply.
Does the community have a plan to support the per-job limit in the future?
Thanks.
On Tue, Apr 21, 2015 at 3:49 PM, Naganarasimha G R (Naga)
garlanaganarasi...@huawei.com wrote:
Hi Sanjeev,
YARN already supports mapping the deprecated configuration names to the new
Hi Anand,
You should search the /etc directory at the filesystem root, not the Hadoop directory.
On Wed, Apr 22, 2015 at 2:57 PM, Anand Murali anand_vi...@yahoo.com wrote:
Dear All:
I don't see /etc/hosts. Find below.
anand_vihar@Latitude-E5540:~$ cd hadoop-2.6.0
anand_vihar@Latitude-E5540:~/hadoop-2.6.0$ ls -al
Hi Zhe,
AFAIK there is no explicit requirement to support MR clients limiting the
number of containers/tasks for a given job at any given point in time.
In fact, as explained earlier, an admin can control this via queue capacity, max
capacity, and user-specific capacity configurations.
Is there
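For reference, the knobs Naga mentions live in capacity-scheduler.xml; a sketch for the default queue, with illustrative values:

```xml
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>50</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
  <value>80</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
  <value>1</value>
</property>
```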
I don't seem to have /etc/hosts.
Sent from my iPhone
On 22-Apr-2015, at 2:30 pm, sandeep vura sandeepv...@gmail.com wrote:
Hi Anand,
Comment out the IP address 127.0.1.1 in /etc/hosts.
Add the following entry in /etc/hosts: 127.0.0.1 localhost.
Restart your Hadoop cluster after
The hosts file will be available in the /etc directory; please check once.
On Wed, Apr 22, 2015 at 2:36 PM, Anand Murali anand_vi...@yahoo.com wrote:
I don't seem to have /etc/hosts.
Sent from my iPhone
On 22-Apr-2015, at 2:30 pm, sandeep vura sandeepv...@gmail.com wrote:
Hi Anand,
comment the ip
Dear All:
I don't see /etc/hosts. Find below.
anand_vihar@Latitude-E5540:~$ cd hadoop-2.6.0
anand_vihar@Latitude-E5540:~/hadoop-2.6.0$ ls -al
total 76
drwxr-xr-x 12 anand_vihar anand_vihar 4096 Apr 21 13:23 .
drwxrwxr-x 26 anand_vihar anand_vihar 4096 Apr 22 14:05 ..
drwxr-xr-x 2 anand_vihar
Dear All:
Has anyone encountered this error, and if so, how have you fixed it other than
re-installing Hadoop or re-running start-dfs.sh when you have already started
it after boot? Find below.
anand_vihar@Latitude-E5540:~$ ssh localhost
Welcome to Ubuntu 14.10 (GNU/Linux 3.16.0-34-generic x86_64)
Hi Anand,
Comment out the IP address 127.0.1.1 in /etc/hosts.
Add the following entry in /etc/hosts: 127.0.0.1 localhost.
Restart your Hadoop cluster after making the changes in /etc/hosts.
Regards,
Sandeep.v
On Wed, Apr 22, 2015 at 2:16 PM, Anand Murali anand_vi...@yahoo.com wrote:
Dear All:
Ok thanks will do
Sent from my iPhone
On 22-Apr-2015, at 2:39 pm, sandeep vura sandeepv...@gmail.com wrote:
The hosts file will be available in the /etc directory; please check once.
On Wed, Apr 22, 2015 at 2:36 PM, Anand Murali anand_vi...@yahoo.com wrote:
I don't seem to have /etc/hosts
Sent
I allocated 5G.
I think OOM is essentially not the cause.
-----Original Message-----
From: Han-Cheol Cho <hancheol@nhn-playart.com>
To: <user@hadoop.apache.org>
Cc:
Sent: 2015-04-22 (Wed) 15:32:35
Subject: RE: rolling upgrade (2.4.1 to 2.6.0) problem
Hi,
The first warning
Hi Yves,
For 64-bit compilation, it should work out of the box.
What command are you using to build?
The build command below works for me:
$ mvn install -Pnative-win -DskipTests
Ensure you have cmake installed as well.
Regards,
Kiran
Dear Sandeep:
many thanks. I did find hosts, but I do not have write privileges, even though I
am administrator. This is strange. Can you please advise?
Thanks
Anand Murali, 11/7, 'Anand Vihar', Kandasamy St, Mylapore, Chennai - 600 004,
India. Ph: (044)-28474593 / 43526162 (voicemail)
On
Can you try sudo?
https://www.linux.com/learn/tutorials/306766:linux-101-introduction-to-sudo
Regards,
Shahab
On Wed, Apr 22, 2015 at 8:26 AM, Anand Murali anand_vi...@yahoo.com wrote:
Dear Sandeep:
many thanks. I did find hosts, but I do not have write privileges,
even though I am