Re: ResourceLocalizationService: Localizer failed when running pi example

2015-04-19 Thread Fernando O.
nobody had this issue? :(

On Sat, Apr 18, 2015 at 1:24 PM, Fernando O. fot...@gmail.com wrote:

 Hey All,
 It's me again with another noob question: I deployed a cluster (HA
 mode); everything looked good, but when I tried to run the pi example:

  bin/hadoop jar
 ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 16 100

 The same error occurs if I try to generate data with teragen 1
 /test/data


 2015-04-18 15:49:04,090 INFO
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
 Localizer failed
 java.lang.NullPointerException
 at
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:268)
 at
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
 at
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
 at
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
 at
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
 at
 org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:420)
 at
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1075)


 I'm guessing it's a configuration issue, but I don't know what I'm missing
 :S



Re: Again incompatibility, locating example jars

2015-04-19 Thread Mahmood Naderan
Thanks, that fixed the error. However, it still cannot be run the way it was in the previous version.
The command is:

time ${HADOOP_HOME}/bin/hadoop jar 
${HADOOP_HOME}/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar grep 
${WORK_DIR}/data-MicroBenchmarks/in ${WORK_DIR}/data-MicroBenchmarks/out/grep 
a*xyz


where ${WORK_DIR}=`pwd`
Here is the output:
[mahmood@tiger MicroBenchmarks]$ pwd
/home/mahmood/bigdatabench/BigDataBench_V3.1_Hadoop_Hive/MicroBenchmarks

[mahmood@tiger MicroBenchmarks]$ hadoop fs -ls 
/home/mahmood/bigdatabench/BigDataBench_V3.1_Hadoop_Hive/MicroBenchmarks/data-MicroBenchmarks/in/in
15/04/19 11:56:26 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r--   1 mahmood supergroup  524149543 2015-04-19 10:19 
/home/mahmood/bigdatabench/BigDataBench_V3.1_Hadoop_Hive/MicroBenchmarks/data-MicroBenchmarks/in/in/lda_wiki1w_1
-rw-r--r--   1 mahmood supergroup  526316345 2015-04-19 10:19 
/home/mahmood/bigdatabench/BigDataBench_V3.1_Hadoop_Hive/MicroBenchmarks/data-MicroBenchmarks/in/in/lda_wiki1w_2



15/04/19 11:57:06 INFO mapred.MapTask: Starting flush of map output
15/04/19 11:57:06 INFO mapred.LocalJobRunner: map task executor complete.
15/04/19 11:57:06 WARN mapred.LocalJobRunner: job_local962861772_0001
java.lang.Exception: java.io.FileNotFoundException: Path is not a file: 
/home/mahmood/bigdatabench/BigDataBench_V3.1_Hadoop_Hive/MicroBenchmarks/data-MicroBenchmarks/in/in
    at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:70)
    at 
org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
    at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1891)
...
 


As you can see, it says the path is not a file, but the files are actually there.
Regards,
Mahmood 


 On Sunday, April 19, 2015 10:47 AM, Chris Nauroth 
cnaur...@hortonworks.com wrote:
   

 Hello Mahmood,
You want the hadoop-mapreduce-examples-2.6.0.jar file.  The grep job (as well 
as other example jobs, like wordcount) resides in this jar file for the 2.x line 
of the codebase.

Chris Nauroth
Hortonworks
http://hortonworks.com/

From: Mahmood Naderan nt_mahm...@yahoo.com
Reply-To: User user@hadoop.apache.org, Mahmood Naderan nt_mahm...@yahoo.com
Date: Saturday, April 18, 2015 at 10:59 PM
To: User user@hadoop.apache.org
Subject: Again incompatibility, locating example jars

Hi,
There is another incompatibility between 1.2.0 and 2.6.0. I would appreciate it 
if someone could help figure it out. This command works on 1.2.0:

time ${HADOOP_HOME}/bin/hadoop jar ${HADOOP_HOME}/hadoop-examples-*.jar grep 
${WORK_DIR}/data-MicroBenchmarks/in ${WORK_DIR}/data-MicroBenchmarks/out/grep 
a*xyz 
But on 2.6.0, I receive this error:
Not a valid JAR: 
/home/mahmood/bigdatabench/apache/hadoop-2.6.0/hadoop-examples-*.jar

Indeed, there is no such file in that folder, so I guess it has been moved to 
another folder. However, there are three jar files in 2.6.0:
/home/mahmood/bigdatabench/apache/hadoop-2.6.0/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.6.0-sources.jar
/home/mahmood/bigdatabench/apache/hadoop-2.6.0/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.6.0-test-sources.jar
/home/mahmood/bigdatabench/apache/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar


Which one should I use?

Regards,
Mahmood

  

Re: ipc.Client: Retrying connect to server

2015-04-19 Thread MrAsanjar .
Try adding '127.0.0.1 localhost' to the /etc/hosts file.

On Sun, Apr 19, 2015 at 12:09 PM, Mich Talebzadeh m...@peridale.co.uk
wrote:

 Hi,



 In addition, perhaps an easier approach would be to telnet to the port, to
 quickly test whether the server has started on it.



 Mine runs on port 9000, so if the server is up the telnet connection will be
 successful.



 netstat -plten|grep java



 tcp0  0 0.0.0.0:10020   0.0.0.0:*
 LISTEN  1009   19471  6898/java

 tcp0  0 0.0.0.0:50020   0.0.0.0:*
 LISTEN  1009   15758  6213/java

 tcp0  0 50.140.197.217:9000 0.0.0.0:*
 LISTEN  1009   15711  6113/java

 tcp0  0 0.0.0.0:50090   0.0.0.0:*
 LISTEN  1009   15972  6405/java

 tcp0  0 0.0.0.0:19888   0.0.0.0:*
 LISTEN  1009   18565  6898/java

 tcp0  0 0.0.0.0:10033   0.0.0.0:*
 LISTEN  1009   18343  6898/java

 tcp0  0 0.0.0.0:50070   0.0.0.0:*
 LISTEN  1009   15502  6113/java

 tcp0  0 0.0.0.0:10010   0.0.0.0:*
 LISTEN  1009   21433  7335/java

 tcp0  0 0.0.0.0:50010   0.0.0.0:*
 LISTEN  1009   15745  6213/java

 tcp0  0 0.0.0.0:9083    0.0.0.0:*
 LISTEN  1009   19481  7110/java

 tcp0  0 0.0.0.0:50075   0.0.0.0:*
 LISTEN  1009   15750  6213/java



 hduser@rhes564::/home/hduser/dba/bin telnet rhes564 9000

 Trying 50.140.197.217...

 Connected to rhes564.

 Escape character is '^]'.
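
 The same check can also be scripted; here is a minimal sketch (the host and
 port below are just the examples from this thread, substitute your own):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the host and attempts a TCP connect
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers connection refused, timeouts, and resolution failures
        return False

# e.g. port_open("rhes564", 9000) to check the NameNode RPC port
```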



 HTH



 Mich Talebzadeh



 http://talebzadehmich.wordpress.com



 Author of the books A Practitioner’s Guide to Upgrading to Sybase ASE
 15, ISBN 978-0-9563693-0-7.

 Co-author of Sybase Transact SQL Guidelines Best Practices, ISBN
 978-0-9759693-0-4

 Publications due shortly:

 Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and
 Coherence Cache

 Oracle and Sybase, Concepts and Contrasts, ISBN: 978-0-9563693-1-4, volume
 one out shortly



 NOTE: The information in this email is proprietary and confidential. This
 message is for the designated recipient only, if you are not the intended
 recipient, you should destroy it immediately. Any information in this
 message shall not be understood as given or endorsed by Peridale Ltd, its
 subsidiaries or their employees, unless expressly so stated. It is the
 responsibility of the recipient to ensure that this email is virus free,
 therefore neither Peridale Ltd, its subsidiaries nor their employees accept
 any responsibility.



 From: Brahma Reddy Battula [mailto:brahmareddy.batt...@hotmail.com]
 Sent: 19 April 2015 17:53
 To: user@hadoop.apache.org
 Subject: RE: ipc.Client: Retrying connect to server



 Hello Mahmood Naderan,



 When the client tries to connect to the server on the configured port (and
 address) and the server has not been started on that port, the client will
 retry (and you will get the following error).



 From the jps report you posted, I can see that the NameNode is not running.
 Please check the NameNode logs (location:
 /home/mahmood/bigdatabench/apache/hadoop-1.0.2/libexec/../logs/hadoop-mahmood-namenode-tiger.out/log)
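

 The retry behavior described above can be pictured with a simplified loop
 like the following; this is only an illustration of the pattern, not the
 actual org.apache.hadoop.ipc.Client implementation:

```python
import time

def call_with_retries(fn, max_retries=10, delay_s=1.0, log=print):
    """Call fn(); on ConnectionError, log a message similar to ipc.Client's
    and retry up to max_retries times before giving up."""
    last_err = None
    for attempt in range(max_retries):
        try:
            return fn()
        except ConnectionError as err:
            last_err = err
            # mirrors the "Already tried N time(s)" wording from the log
            log(f"Retrying connect to server. Already tried {attempt} time(s).")
            time.sleep(delay_s)
    raise ConnectionError(f"gave up after {max_retries} attempts") from last_err
```

 The point is that the log line itself is harmless; it only becomes a problem
 when the server side (here, the NameNode) never comes up.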


 --

 Date: Fri, 17 Apr 2015 08:22:12 +
 From: nt_mahm...@yahoo.com
 To: user@hadoop.apache.org
 Subject: ipc.Client: Retrying connect to server

 Hello,

 I have done all the steps (as far as I know) to bring up Hadoop. However,
 I get this error:



 15/04/17 12:45:31 INFO ipc.Client: Retrying connect to server: localhost/
 127.0.0.1:54310. Already tried 0 time(s).



 There are a lot of threads and posts regarding this error and I tried
 them. However still stuck at this error :(



 Can someone help me? What did I do wrong?









 Here are the configurations:



 1) Hadoop configurations

 [mahmood@tiger hadoop-1.0.2]$ cat conf/mapred-site.xml
 <?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 <!-- Put site-specific property overrides in this file. -->
 <configuration>
   <property>
     <name>mapred.job.tracker</name>
     <value>localhost:54311</value>
   </property>
   <property>
     <name>mapred.child.java.opts</name>
     <value>-Xmx512m</value>
   </property>
 </configuration>



 [mahmood@tiger hadoop-1.0.2]$ cat conf/core-site.xml
 <?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 <!-- Put site-specific property overrides in this file. -->
 <configuration>
   <property>
     <name>fs.default.name</name>
     <value>hdfs://localhost:54310</value>
   </property>
 </configuration>



 [mahmood@tiger hadoop-1.0.2]$ cat conf/hdfs-site.xml
 <?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 <!-- Put site-specific property overrides in this file. -->
 <configuration>
   <property>
     <name>dfs.replication</name>
     <value>1</value>
   </property>
   <property>
     <name>hadoop.tmp.dir</name>

RE: ipc.Client: Retrying connect to server

2015-04-19 Thread Mich Talebzadeh
Yes

 

cat /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

50.140.197.217  rhes564 rhes564

127.0.0.1   rhes564 localhost.localdomain   localhost

 

 


 


Re: ResourceLocalizationService: Localizer failed when running pi example

2015-04-19 Thread Fernando O.
yeah... there's not much there:

-bash-4.1$ cd nm-local-dir/
-bash-4.1$ ll *
filecache:
total 0

nmPrivate:
total 0

usercache:
total 0

I'm using OpenJDK; would that be a problem?

More log:

STARTUP_MSG:   java = 1.7.0_75
/
2015-04-19 14:38:58,168 INFO
org.apache.hadoop.yarn.server.nodemanager.NodeManager: registered UNIX
signal handlers for [TERM, HUP, INT]
2015-04-19 14:38:58,562 WARN org.apache.hadoop.util.NativeCodeLoader:
Unable to load native-hadoop library for your platform... using
builtin-java classes where applicable
2015-04-19 14:38:59,018 INFO org.apache.hadoop.yarn.event.AsyncDispatcher:
Registering class
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEventType
for class
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher
2015-04-19 14:38:59,020 INFO org.apache.hadoop.yarn.event.AsyncDispatcher:
Registering class
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationEventType
for class
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher
2015-04-19 14:38:59,021 INFO org.apache.hadoop.yarn.event.AsyncDispatcher:
Registering class
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.LocalizationEventType
for class
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService
2015-04-19 14:38:59,021 INFO org.apache.hadoop.yarn.event.AsyncDispatcher:
Registering class
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEventType
for class
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices
2015-04-19 14:38:59,022 INFO org.apache.hadoop.yarn.event.AsyncDispatcher:
Registering class
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorEventType
for class
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
2015-04-19 14:38:59,023 INFO org.apache.hadoop.yarn.event.AsyncDispatcher:
Registering class
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEventType
for class
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher
2015-04-19 14:38:59,054 INFO org.apache.hadoop.yarn.event.AsyncDispatcher:
Registering class
org.apache.hadoop.yarn.server.nodemanager.ContainerManagerEventType for
class
org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl
2015-04-19 14:38:59,054 INFO org.apache.hadoop.yarn.event.AsyncDispatcher:
Registering class
org.apache.hadoop.yarn.server.nodemanager.NodeManagerEventType for class
org.apache.hadoop.yarn.server.nodemanager.NodeManager
2015-04-19 14:38:59,109 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
loaded properties from hadoop-metrics2.properties
2015-04-19 14:38:59,197 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2015-04-19 14:38:59,197 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NodeManager metrics
system started
2015-04-19 14:38:59,217 INFO org.apache.hadoop.yarn.event.AsyncDispatcher:
Registering class
org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.event.LogHandlerEventType
for class
org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler
2015-04-19 14:38:59,217 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
per directory file limit = 8192
2015-04-19 14:38:59,227 INFO org.apache.hadoop.yarn.event.AsyncDispatcher:
Registering class
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.LocalizerEventType
for class
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker
2015-04-19 14:38:59,248 WARN
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: The
Auxilurary Service named 'mapreduce_shuffle' in the configuration is for
class class org.apache.hadoop.mapred.ShuffleHandler which has a name of
'httpshuffle'. Because these are not the same tools trying to send
ServiceData and read Service Meta Data may have issues unless the refer to
the name in the config.
2015-04-19 14:38:59,248 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices:
Adding auxiliary service httpshuffle, mapreduce_shuffle
2015-04-19 14:38:59,281 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
 Using ResourceCalculatorPlugin :
org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin@7fc514a7
2015-04-19 14:38:59,281 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
 Using ResourceCalculatorProcessTree : null
2015-04-19 14:38:59,281 INFO
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:

RE: Trusted-realm vs default-realm kerberos issue

2015-04-19 Thread John Lilley
Michael and Alex, thanks for the replies.

The setup is indeed what Michael suggested: the cluster KDC trusts the 
enterprise AD (which also serves as a KDC).
We did a lot more digging around and testing, and found that the problem was 
largely due to various flaws in our cluster krb5.conf files not matching 
exactly.  Unfortunately we made so many attempts that I can’t now recall 
exactly what we did to bring it all into line.

john

From: Alexander Alten-Lorenz [mailto:wget.n...@gmail.com]
Sent: Wednesday, March 25, 2015 3:28 AM
To: user@hadoop.apache.org
Subject: Re: Trusted-realm vs default-realm kerberos issue

Do you have mapping rules that tell Hadoop that the trusted realm is allowed 
to log in?
http://mapredit.blogspot.de/2015/02/hadoop-and-trusted-mitv5-kerberos-with.html

BR,
 Alex


On 24 Mar 2015, at 18:21, Michael Segel 
michael_se...@hotmail.com wrote:

So…

If I understand, you’re saying you have a one-way trust set up so that the 
cluster’s AD trusts the Enterprise AD?

And by AD you really mean KDC?

On Mar 17, 2015, at 2:22 PM, John Lilley 
john.lil...@redpoint.net wrote:

AD

The opinions expressed here are mine, while they may reflect a cognitive 
thought, that is purely accidental.
Use at your own risk.
Michael Segel
michael_segel (AT) hotmail.com








Re: ipc.Client: Retrying connect to server

2015-04-19 Thread Mahmood Naderan
Hi guys, thanks for bringing up this thread. I forgot to post here that I 
finally solved the problem. Here is the tip, which may be useful for someone 
else in the future.
You know what? When you run the format command and it asks for confirmation, 
you have to press 'Y' (capital letter). If you (meaning me!) press 'y', it 
doesn't format, and it leaves an error message that causes some further 
problems, but you may ignore it (because you think you confirmed the format 
command and everything should be fine!).

After correctly pressing 'Y' at the format prompt and then running 
start-all.sh, you will see that the "ipc.Client: Retrying connect to server" 
message is gone...
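
For what it's worth, the behavior described comes from a case-sensitive 
confirmation check; a rough sketch of the logic (not the actual NameNode 
code):

```python
def confirm_format(answer: str) -> bool:
    """Mimic an old-style 'Re-format filesystem ... (Y or N)' prompt:
    only an exact uppercase 'Y' counts as yes."""
    return answer.strip() == "Y"
```

Anything other than a capital 'Y' is treated as "no", so a lowercase 'y' 
silently skips the format.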
 Regards,
Mahmood 


  

  

Re: ResourceLocalizationService: Localizer failed when running pi example

2015-04-19 Thread Drake민영근
Hi,

I guess the yarn.nodemanager.local-dirs property is the problem. Can you
provide that part of your yarn-site.xml?
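
For reference, that property would look something like this in yarn-site.xml 
(the path below is only an example; the directories must exist and be 
writable by the NodeManager user):

```xml
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data/hadoop/yarn/nm-local-dir</value>
</property>
```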

Thanks.

Drake 민영근 Ph.D
kt NexR


RE: how to delete logs automatically from hadoop yarn

2015-04-19 Thread Rohith Sharma K S
That’s an interesting use case!

> let’s say I want to delete container logs which are older than a week or so. 
> So is there any configuration to do that?
I don’t think such a configuration exists in YARN currently. I think it 
should be possible to handle it from log4j properties.

But by enabling log aggregation, the disk-filling issue can be overcome. I 
think handling long-running services on YARN is being done for Hadoop 2.6 or 
later (yet to be released) in JIRA https://issues.apache.org/jira/i#browse/YARN-2443 .

> Because of these continuous logs, we are running out of Linux file limit 
> and thereafter containers are not launched because of exception while 
> creating log directory inside application ID directory
I could not see how continuous logs cause the Linux resource limit to be 
exceeded. How many containers are running in the cluster and per machine? As 
far as I understand, each container holds one resource (an open file) for 
logging.
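
Since YARN has no built-in setting for this, a periodic cleanup job (run from 
cron, for example) is a common workaround. Here is a minimal sketch, assuming 
a <log-root>/<application-id>/<container-id>/ directory layout and a 
seven-day cutoff (both the layout and the cutoff are assumptions to adapt):

```python
import shutil
import time
from pathlib import Path

def delete_old_container_logs(log_root: str, max_age_days: float = 7.0) -> list:
    """Delete per-container log directories older than max_age_days.
    Assumed layout: <log_root>/<application_id>/<container_id>/."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for app_dir in Path(log_root).iterdir():
        if not app_dir.is_dir():
            continue
        for container_dir in app_dir.iterdir():
            # delete only directories whose last modification is past the cutoff
            if container_dir.is_dir() and container_dir.stat().st_mtime < cutoff:
                shutil.rmtree(container_dir)
                removed.append(str(container_dir))
    return removed
```

Only run this against directories whose containers have finished writing; 
deleting a directory a live container still logs into will make it fail.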


Thanks & Regards
Rohith Sharma K S

From: Smita Deshpande [mailto:smita.deshpa...@cumulus-systems.com]
Sent: 20 April 2015 10:23
To: user@hadoop.apache.org
Subject: RE: how to delete logs automatically from hadoop yarn

Hi Rohith,
Thanks for your solution. The actual problem we are looking at is this: we have 
a long-running application, so configurations by which logs are deleted right 
after the application finishes will not help us.
Because of these continuous logs, we are running out of the Linux file limit, 
and thereafter containers are not launched because of an exception while 
creating the log directory inside the application ID directory.
During job execution itself, let’s say I want to delete container logs which 
are older than a week or so. Is there any configuration to do that?

Thanks,
Smita


From: Rohith Sharma K S [mailto:rohithsharm...@huawei.com]
Sent: Monday, April 20, 2015 10:09 AM
To: user@hadoop.apache.orgmailto:user@hadoop.apache.org
Subject: RE: how to delete logs automatically from hadoop yarn

Hi,

With the configuration below, log deletion should be triggered. You can see 
from the NodeManager log that deletion has been scheduled, in a line like the 
following (check the NM logs for it):
“INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler:
 Scheduling Log Deletion for application: application_1428298081702_0008, with 
delay of 10800 seconds”

But there is another configuration which affects the deletion task: 
“yarn.nodemanager.delete.debug-delay-sec”. Its default value is zero, which 
means deletion is triggered immediately. Can you check whether this is 
configured?
  <property>
    <description>
      Number of seconds after an application finishes before the nodemanager's
      DeletionService will delete the application's localized file directory
      and log directory.

      To diagnose Yarn application problems, set this property's value large
      enough (for example, to 600 = 10 minutes) to permit examination of these
      directories. After changing the property's value, you must restart the
      nodemanager in order for it to have an effect.

      The roots of Yarn applications' work directories is configurable with
      the yarn.nodemanager.local-dirs property (see below), and the roots
      of the Yarn applications' log directories is configurable with the
      yarn.nodemanager.log-dirs property (see also below).
    </description>
    <name>yarn.nodemanager.delete.debug-delay-sec</name>
    <value>0</value>
  </property>


Thanks & Regards
Rohith Sharma K S
From: Sunil Garg [mailto:sunil.g...@cumulus-systems.com]
Sent: 20 April 2015 09:52
To: user@hadoop.apache.orgmailto:user@hadoop.apache.org
Subject: how to delete logs automatically from hadoop yarn


How do I delete logs from Hadoop YARN automatically? I have tried the 
following settings, but it is not working.
Is there any other way we can do this, or am I doing something wrong?

<property>
  <name>yarn.log-aggregation-enable</name>
  <value>false</value>
</property>

<property>
  <name>yarn.nodemanager.log.retain-seconds</name>
  <value>3600</value>
</property>

Thanks
Sunil Garg


RE: how to delete logs automatically from hadoop yarn

2015-04-19 Thread Smita Deshpande
Hi Rohith,
In our application, around 362,738 containers ran successfully before we 
encountered this issue. So under userLogs/applicationId/ we had 362,738 
directories, each containing the container’s stdout and stderr files. We are 
not expecting to rotate these stdout and stderr files, as mentioned in JIRA 
YARN-2443. These logs are of no use after a certain time; we may need them for 
about a week, in case we need to troubleshoot why a container failed.

Thanks,
Smita

From: Rohith Sharma K S [mailto:rohithsharm...@huawei.com]
Sent: Monday, April 20, 2015 11:02 AM
To: user@hadoop.apache.org
Subject: RE: how to delete logs automatically from hadoop yarn

That’s  interesting use-case!!

 let’s say I want to delete container logs which are older than week or so. 
 So is there any configuration to do that?
I don’t think there is such configuration exist in the YARN currently. I think 
it should be able to handle from log4j properties.

But enabling log-aggregation, disk filling issue can be overcome. I think in 
the Hadoop-2.6 or later(yet to release)handling long running services on yarn 
is done in JIRA https://issues.apache.org/jira/i#browse/YARN-2443 .

 Because of these continuous logs, we are running out of Linux file limit 
 and thereafter containers are not launched because of exception while 
 creating log directory inside application ID directory
I could not get how the continuous logs cause the Linux resource limit to be 
exceeded. How many containers are running in the cluster and per machine? I 
think each container holds one file resource for logging.


Thanks & Regards
Rohith Sharma K S



RE: how to delete logs automatically from hadoop yarn

2015-04-19 Thread Mich Talebzadeh
I don’t think there is such a configuration in YARN. You can, however, trigger 
automatic removal of files older than, say, 7 days through cron or Control-M:

 

 

# Clean up directory - get rid of old files
for i in $LOGDIR:14 $TMPDIR:4 $ETCDIR:7
do
  THE_DIR=`echo $i|awk -F: '{print $1}'`
  NO_DAYS=`echo $i|awk -F: '{print $2}'`
  find $THE_DIR -mtime +${NO_DAYS} -exec rm -f {} \;
done
#

 

HTH

 

 

Mich Talebzadeh

 

http://talebzadehmich.wordpress.com

 

Author of the book "A Practitioner’s Guide to Upgrading to Sybase ASE 15", 
ISBN 978-0-9563693-0-7. 

co-author "Sybase Transact SQL Guidelines & Best Practices", ISBN 
978-0-9759693-0-4

Publications due shortly:

Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and 
Coherence Cache

Oracle and Sybase, Concepts and Contrasts, ISBN: 978-0-9563693-1-4, volume one 
out shortly

 

NOTE: The information in this email is proprietary and confidential. This 
message is for the designated recipient only, if you are not the intended 
recipient, you should destroy it immediately. Any information in this message 
shall not be understood as given or endorsed by Peridale Ltd, its subsidiaries 
or their employees, unless expressly so stated. It is the responsibility of the 
recipient to ensure that this email is virus free, therefore neither Peridale 
Ltd, its subsidiaries nor their employees accept any responsibility.
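The find loop above deletes old files but leaves the emptied per-container directories behind, and the directory count is what exhausts the file limit in the case described here. A sketch along the same lines that also removes the emptied directories; LOG_ROOT and the 7-day retention are assumptions, so point LOG_ROOT at the directory configured in yarn.nodemanager.log-dirs:

```shell
# Sketch only: prune per-container YARN log directories older than N days.
# LOG_ROOT and the 7-day retention are assumptions, not values from this
# thread -- adjust to match yarn.nodemanager.log-dirs.
prune_container_logs() {
    root=$1; days=$2
    [ -d "$root" ] || return 0
    # First delete the old stdout/stderr files...
    find "$root" -mindepth 1 -type f -mtime +"$days" -delete
    # ...then remove the container/application directories they emptied,
    # which is what actually frees the file/inode budget.
    find "$root" -mindepth 1 -type d -empty -delete
}

prune_container_logs "${LOG_ROOT:-/tmp/yarn-userlogs}" 7
```

Run it from cron (or Control-M) as the NodeManager user; directories still holding recent files are left alone, because only empty directories are removed.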

 




how to delete logs automatically from hadoop yarn

2015-04-19 Thread Sunil Garg

How can I delete logs from Hadoop YARN automatically? I have tried the following 
settings but they are not working.
Is there any other way to do this, or am I doing something wrong?

<property>
  <name>yarn.log-aggregation-enable</name>
  <value>false</value>
</property>

<property>
  <name>yarn.nodemanager.log.retain-seconds</name>
  <value>3600</value>
</property>

Thanks
Sunil Garg


RE: how to delete logs automatically from hadoop yarn

2015-04-19 Thread Rohith Sharma K S
Hi

With the below configuration, log deletion should be triggered. You can see 
from the log that the deletion delay has been set in the NM, like the line 
below; maybe you can check the NM logs for this line, which gives debug 
information.
“INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1428298081702_0008, with delay of 10800 seconds”

But there is another configuration which affects the deletion task: 
“yarn.nodemanager.delete.debug-delay-sec”. Its default value is zero, which 
means deletion is triggered immediately. Check whether this is configured:
  <property>
    <description>
      Number of seconds after an application finishes before the nodemanager's
      DeletionService will delete the application's localized file directory
      and log directory.

      To diagnose Yarn application problems, set this property's value large
      enough (for example, to 600 = 10 minutes) to permit examination of these
      directories. After changing the property's value, you must restart the
      nodemanager in order for it to have an effect.

      The roots of Yarn applications' work directories is configurable with
      the yarn.nodemanager.local-dirs property (see below), and the roots
      of the Yarn applications' log directories is configurable with the
      yarn.nodemanager.log-dirs property (see also below).
    </description>
    <name>yarn.nodemanager.delete.debug-delay-sec</name>
    <value>0</value>
  </property>


Thanks & Regards
Rohith Sharma K S


RE: how to delete logs automatically from hadoop yarn

2015-04-19 Thread Smita Deshpande
Hi Rohith,
Thanks for your solution. The actual problem we are looking at is: we have a 
long-running application, so configurations by which logs are deleted right 
after the application finishes will not help us.
Because of these continuous logs, we are running out of the Linux file limit, 
and thereafter containers are not launched because of an exception while 
creating the log directory inside the application-ID directory.
During the job execution itself, let’s say I want to delete container logs 
which are older than a week or so. So is there any configuration to do that?

Thanks,
Smita




Again incompatibility, locating example jars

2015-04-19 Thread Mahmood Naderan
Hi, there is another incompatibility between 1.2.0 and 2.6.0. I would 
appreciate it if someone could help me figure it out. This command works on 1.2.0:
time ${HADOOP_HOME}/bin/hadoop jar ${HADOOP_HOME}/hadoop-examples-*.jar grep 
${WORK_DIR}/data-MicroBenchmarks/in ${WORK_DIR}/data-MicroBenchmarks/out/grep 
a*xyz 
But on 2.6.0, I receive this error:
Not a valid JAR: 
/home/mahmood/bigdatabench/apache/hadoop-2.6.0/hadoop-examples-*.jar

Indeed there is no such file in that folder. So I guess it has been moved to 
another folder. However, there are three jar files in 2.6.0
/home/mahmood/bigdatabench/apache/hadoop-2.6.0/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.6.0-sources.jar
/home/mahmood/bigdatabench/apache/hadoop-2.6.0/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.6.0-test-sources.jar
/home/mahmood/bigdatabench/apache/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar


Which one should I use?

Regards,
Mahmood

RE: Unable to load native-hadoop library

2015-04-19 Thread Brahma Reddy Battula
Hello Mich Talebzadeh
Please mention which release you are using and how you compiled it (if you 
compiled it yourself, or where you took the release from).
bq: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your 
platform... using builtin-java classes where applicable
This warning comes when the client does not find the Hadoop natives 
(libhadoop.so, etc.). Please check whether you have the Hadoop natives on the 
classpath or not.
Normally, Hadoop will load natives from ${HADOOP_HOME}/lib/native or 
java.library.path.
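As a quick sanity check, one can list that native directory and put it on java.library.path explicitly. A sketch; the HADOOP_HOME default below is an assumed path, not one from this thread:

```shell
# Hypothetical layout: adjust HADOOP_HOME to wherever your release lives.
export HADOOP_HOME="${HADOOP_HOME:-/usr/local/hadoop}"

# Put the native directory on java.library.path for hadoop commands.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"

# See which natives are actually shipped with this build (libhadoop.so etc.);
# an empty or missing directory means the warning is expected.
ls "$HADOOP_HOME/lib/native" 2>/dev/null || echo "no natives under $HADOOP_HOME/lib/native"
```

On 2.x builds, running `hadoop checknative -a` afterwards reports which native libraries were actually loaded.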


Re: ResourceLocalizationService: Localizer failed when running pi example

2015-04-19 Thread Alexander Alten-Lorenz
As you said, that looks like a config issue. I would look first at the NM's 
local scratch dirs (yarn.nodemanager.local-dirs). 

But without a complete stack trace, it's a blind call.

BR,
 AL

--
mapredit.blogspot.com
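A minimal yarn-site.xml sketch of the property mentioned above (yarn.nodemanager.local-dirs); the directories below are placeholders, not values from this thread — each must exist and be writable by the NodeManager user:

```xml
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <!-- placeholder paths; comma-separate one directory per disk -->
  <value>/data/1/yarn/local,/data/2/yarn/local</value>
</property>
```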

 On Apr 18, 2015, at 6:24 PM, Fernando O. fot...@gmail.com wrote:
 
 Hey All,
 It's me again with another noob question: I deployed a cluster (HA mode) 
 everything looked good but when I tried to run the pi example:
 
  bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar 
 pi 16 100
 
 the same error occurs if I try to generate data with teragen 1 
 /test/data
 
 
 2015-04-18 15:49:04,090 INFO 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
  Localizer failed
 java.lang.NullPointerException
   at 
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:268)
   at 
 org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
   at 
 org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
   at 
 org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLocalPathForWrite(LocalDirsHandlerService.java:420)
   at 
 org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerRunner.run(ResourceLocalizationService.java:1075)
 
 
 I'm guessing it's a configuration issue but I don't know what am I missing :S



Re: Unable to load native-hadoop library

2015-04-19 Thread MrAsanjar .
are you using ubuntu? if yes look at JIRA HADOOP-10988

On Sat, Apr 18, 2015 at 2:00 PM, Mich Talebzadeh m...@peridale.co.uk
wrote:

 No I believe it was something to do with compilation. It is only a warning



 However, you should consider using *hdfs dfs* as opposed to *hadoop fs*



 I guess someone realised that “hadoop” is the name of eco system (HDFS +
 MapReduce)  and hdfs is the actual file J which is more appropriate for a
 command syntax



 Mich Talebzadeh



 http://talebzadehmich.wordpress.com






 *From:* Mahmood Naderan [mailto:nt_mahm...@yahoo.com]
 *Sent:* 18 April 2015 19:54
 *To:* User
 *Subject:* Unable to load native-hadoop library



 Hi,

 Regarding this warning



 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
 platform... using builtin-java classes where applicable



 It seems that the prebuild 32-bit binary is not compatible on the host's
 64-bit architecture. Just want to know does it make sense? Is there any
 concern about the functionality?



 Regards,
 Mahmood



RE: ipc.Client: Retrying connect to server

2015-04-19 Thread Brahma Reddy Battula
Hello Mahmood Naderan,
When the client tries to connect to the server on the configured port (and 
address) and the server is not started on that port, the client will retry (and 
you will get the following error).
From the JPS report you posted, I can trace that the NameNode is not running. 
Please check the NameNode logs (location: 
/home/mahmood/bigdatabench/apache/hadoop-1.0.2/libexec/../logs/hadoop-mahmood-namenode-tiger.out/log)
Date: Fri, 17 Apr 2015 08:22:12 +
From: nt_mahm...@yahoo.com
To: user@hadoop.apache.org
Subject: ipc.Client: Retrying connect to server

Hello, I have done all the steps (as far as I know) to bring up Hadoop. However, 
I get this error:

15/04/17 12:45:31 INFO ipc.Client: Retrying connect to server: 
localhost/127.0.0.1:54310. Already tried 0 time(s).

There are a lot of threads and posts regarding this error and I tried them. 
However, I am still stuck at this error :(
Can someone help me? What did I do wrong?




Here are the configurations:

1) Hadoop configurations

[mahmood@tiger hadoop-1.0.2]$ cat conf/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
</configuration>

[mahmood@tiger hadoop-1.0.2]$ cat conf/core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
</configuration>

[mahmood@tiger hadoop-1.0.2]$ cat conf/hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/mahmood/bigdatabench/apache/hadoop-1.0.2/folders/tmp</value>
</property>
<property>
  <name>dfs.name.dir</name>
  <value>/home/mahmood/bigdatabench/apache/hadoop-1.0.2/folders/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/home/mahmood/bigdatabench/apache/hadoop-1.0.2/folders/data</value>
</property>
</configuration>

2) Network configuration

[root@tiger hadoop-1.0.2]# cat /etc/sysconfig/iptables
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5901 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 54310 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 54311 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

[root@tiger hadoop-1.0.2]# /etc/init.d/iptables restart
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
iptables: Applying firewall rules:                         [  OK  ]

[mahmood@tiger hadoop-1.0.2]$ netstat -an | grep 54310
[mahmood@tiger hadoop-1.0.2]$ netstat -an | grep 54311
tcp        0      0 :::127.0.0.1:54311    :::*                  LISTEN
tcp      426      0 :::127.0.0.1:54311    :::127.0.0.1:49639    ESTABLISHED
tcp        0      0 :::127.0.0.1:49639    :::127.0.0.1:54311    ESTABLISHED

3) Starting Hadoop

[mahmood@tiger hadoop-1.0.2]$ stop-all.sh
Warning: $HADOOP_HOME is deprecated.
stopping jobtracker
localhost: Warning: $HADOOP_HOME is deprecated.
localhost: stopping tasktracker
no namenode to stop
localhost: Warning: $HADOOP_HOME is deprecated.
localhost: no datanode to stop
localhost: Warning: $HADOOP_HOME is deprecated.
localhost: stopping secondarynamenode

[mahmood@tiger hadoop-1.0.2]$ start-all.sh
Warning: $HADOOP_HOME is deprecated.
starting namenode, logging to 
/home/mahmood/bigdatabench/apache/hadoop-1.0.2/libexec/../logs/hadoop-mahmood-namenode-tiger.out
localhost: Warning: $HADOOP_HOME is deprecated.
localhost: starting datanode, logging to 
/home/mahmood/bigdatabench/apache/hadoop-1.0.2/libexec/../logs/hadoop-mahmood-datanode-tiger.out
localhost: Warning: $HADOOP_HOME is deprecated.
localhost: starting secondarynamenode, logging to 
/home/mahmood/bigdatabench/apache/hadoop-1.0.2/libexec/../logs/hadoop-mahmood-secondarynamenode-tiger.out
starting jobtracker, logging to 
RE: ipc.Client: Retrying connect to server

2015-04-19 Thread Mich Talebzadeh
Hi,

 

In addition, perhaps an easier approach would be to telnet to the port, to test
quickly whether the server is started on it.

Mine runs on port 9000, so if the server is up the telnet connection will succeed.

 

netstat -plten|grep java

 

tcp        0      0 0.0.0.0:10020        0.0.0.0:*    LISTEN    1009    19471    6898/java
tcp        0      0 0.0.0.0:50020        0.0.0.0:*    LISTEN    1009    15758    6213/java
tcp        0      0 50.140.197.217:9000  0.0.0.0:*    LISTEN    1009    15711    6113/java
tcp        0      0 0.0.0.0:50090        0.0.0.0:*    LISTEN    1009    15972    6405/java
tcp        0      0 0.0.0.0:19888        0.0.0.0:*    LISTEN    1009    18565    6898/java
tcp        0      0 0.0.0.0:10033        0.0.0.0:*    LISTEN    1009    18343    6898/java
tcp        0      0 0.0.0.0:50070        0.0.0.0:*    LISTEN    1009    15502    6113/java
tcp        0      0 0.0.0.0:10010        0.0.0.0:*    LISTEN    1009    21433    7335/java
tcp        0      0 0.0.0.0:50010        0.0.0.0:*    LISTEN    1009    15745    6213/java
tcp        0      0 0.0.0.0:9083         0.0.0.0:*    LISTEN    1009    19481    7110/java
tcp        0      0 0.0.0.0:50075        0.0.0.0:*    LISTEN    1009    15750    6213/java

 

hduser@rhes564::/home/hduser/dba/bin telnet rhes564 9000

Trying 50.140.197.217...

Connected to rhes564.

Escape character is '^]'.
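The same quick check can be scripted without telnet, using bash's /dev/tcp pseudo-device; the host and port below are placeholders, so substitute your NameNode's:

```shell
# Sketch: succeed (exit 0) if something is listening on host:port.
# Host and port are placeholders, not values confirmed by this thread.
port_open() {
    host=$1; port=$2
    # The subshell's exec fails (non-zero) if the connection is refused.
    (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null
}

if port_open localhost 9000; then
    echo "port 9000 open"
else
    echo "port 9000 closed"
fi
```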

 

HTH

 

Mich Talebzadeh

 

http://talebzadehmich.wordpress.com

 


 


RE: ResourceLocalizationService: Localizer failed when running pi example

2015-04-19 Thread Brahma Reddy Battula
As Alexander Alten-Lorenz pointed out, it is mostly a config issue 
(yarn.nodemanager.local-dirs or mapred.local.dir).
Can you provide the full logs?
By the way, the NPE is handled in trunk; please check HADOOP-8436 for more details.
