Why does an empty file need to occupy one split?

2013-12-24 Thread ch huang
hi, maillist:
   I was reading the code of FileInputFormat and found that it creates a
split for an empty file. I see no point in doing this, and it causes the MR
framework to create an extra map task with nothing to do. Can anyone explain?
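
If the extra map tasks are the practical concern, one workaround is to
filter zero-length files out before splits are computed, by overriding
listStatus() in an input format. A minimal sketch against the
org.apache.hadoop.mapreduce API (the class name is made up for
illustration; whether the framework should skip empty files by default is
a separate question):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    // Hypothetical input format that drops empty files so no splits
    // (and hence no map tasks) are created for them.
    public class NonEmptyTextInputFormat extends TextInputFormat {
        @Override
        protected List<FileStatus> listStatus(JobContext job) throws IOException {
            List<FileStatus> all = super.listStatus(job);
            List<FileStatus> nonEmpty = new ArrayList<FileStatus>();
            for (FileStatus f : all) {
                if (f.getLen() > 0) {  // keep only files with content
                    nonEmpty.add(f);
                }
            }
            return nonEmpty;
        }
    }

Set it on the job with job.setInputFormatClass(NonEmptyTextInputFormat.class).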


Re: Getting error unrecognized option -jvm on starting nodemanager

2013-12-24 Thread Sitaraman Vilayannur
When I run the namenode with the upgrade option, I get the following error
and the namenode doesn't start...
2013-12-24 14:48:38,595 INFO org.apache.hadoop.hdfs.StateChange:
STATE* Network topology has 0 racks and 0 datanodes
2013-12-24 14:48:38,595 INFO org.apache.hadoop.hdfs.StateChange:
STATE* UnderReplicatedBlocks has 0 blocks
2013-12-24 14:48:38,631 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2013-12-24 14:48:38,632 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 9000: starting
2013-12-24 14:48:38,633 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at:
192.168.1.2/192.168.1.2:9000
2013-12-24 14:48:38,633 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services
required for active state
2013-12-24 14:50:50,060 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15:
SIGTERM
2013-12-24 14:50:50,062 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
/


On 12/24/13, Sitaraman Vilayannur  wrote:
> Found it,
>  I get the following error on starting namenode in 2.2
> 10/contrib/capacity-scheduler/*.jar:/usr/local/Software/hadoop-0.23.10/contrib/capacity-scheduler/*.jar:/usr/local/Software/hadoop-0.23.10/contrib/capacity-scheduler/*.jar
> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common
> -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
> STARTUP_MSG:   java = 1.7.0_45
> /
> 2013-12-24 13:25:48,876 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX
> signal handlers for [TERM, HUP, INT]
> 2013-12-24 13:25:49,042 INFO
> org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 2013-12-24 13:25:49,102 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> period at 10 second(s).
> 2013-12-24 13:25:49,102 INFO
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
> system started
> 2013-12-24 13:25:49,232 WARN org.apache.hadoop.util.NativeCodeLoader:
> Unable to load native-hadoop library for your platform... using
> builtin-java classes where applicable
> 2013-12-24 13:25:49,375 INFO org.mortbay.log: Logging to
> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
> org.mortbay.log.Slf4jLog
> 2013-12-24 13:25:49,410 INFO org.apache.hadoop.http.HttpServer: Added
> global filter 'safety'
> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
> 2013-12-24 13:25:49,412 INFO org.apache.hadoop.http.HttpServer: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
> to context hdfs
> 2013-12-24 13:25:49,412 INFO org.apache.hadoop.http.HttpServer: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
> to context static
> 2013-12-24 13:25:49,412 INFO org.apache.hadoop.http.HttpServer: Added
> filter static_user_filter
> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
> to context logs
> 2013-12-24 13:25:49,422 INFO org.apache.hadoop.http.HttpServer:
> dfs.webhdfs.enabled = false
> 2013-12-24 13:25:49,432 INFO org.apache.hadoop.http.HttpServer: Jetty
> bound to port 50070
> 2013-12-24 13:25:49,432 INFO org.mortbay.log: jetty-6.1.26
> 2013-12-24 13:25:49,459 WARN org.mortbay.log: Can't reuse
> /tmp/Jetty_0_0_0_0_50070_hdfsw2cu08, using
> /tmp/Jetty_0_0_0_0_50070_hdfsw2cu08_2787234685293301311
> 2013-12-24 13:25:49,610 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50070
> 2013-12-24 13:25:49,611 INFO
> org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
> 0.0.0.0:50070
> 2013-12-24 13:25:49,628 WARN
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image
> storage directory (dfs.namenode.name.dir) configured. Beware of
> dataloss due to lack of redundant storage directories!
> 2013-12-24 13:25:49,628 WARN
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one
> namespace edits storage directory (dfs.namenode.edits.dir) configured.
> Beware of dataloss due to lack of redundant storage directories!
> 2013-12-24 13:25:49,668 INFO
> org.apache.hadoop.hdfs.server.namenode.HostFileManager: read includes:
> HostSet(
> )
> 2013-12-24 13:25:49,669 INFO
> org.apache.hadoop.hdfs.server.namenode.HostFileManager: read excludes:
> HostSet(
> )
> 2013-12-24 13:25:49,670 INFO
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager:
> dfs.block.invalidate.limit=1000
> 2013-12-24 13:25:49,672 INFO org.apache.hadoop.util.GSet: Computing
> capacity for map BlocksMap
> 2013-12-24 13:25:49,672 INFO org.apache.hadoop.util.GSet: VM type   =
> 64-bit
> 2013-12-24 13:25:49,673 INFO org.apache.hadoop.util.GSet: 2.0% max
> memory = 889 MB
> 2013-12-24 13:25:49

Re: Getting error unrecognized option -jvm on starting nodemanager

2013-12-24 Thread Nitin Pawar
The issue here is that you tried one version of Hadoop and then changed to a
different version.

You cannot do that directly with Hadoop; you need to follow an upgrade
process when moving between Hadoop versions.

For now, since you are just starting out with Hadoop, I would recommend just
running a DFS format and starting HDFS again.
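
A sketch of that reset, assuming a fresh single-node install whose HDFS
contents can be discarded (script locations follow the 2.x sbin layout;
adjust paths for your install):

    sbin/stop-dfs.sh       # stop any running HDFS daemons first
    hdfs namenode -format  # answer Y at the prompt; this wipes the namespace
    sbin/start-dfs.sh      # bring HDFS back up on the new version

Note that formatting destroys the existing filesystem metadata, so this is
only appropriate on a cluster with no data worth keeping.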



Building Hadoop 1.2.1 in IntelliJ IDEA

2013-12-24 Thread Frank Scholten
Hi all,

I might have found a small bug in the CLI minicluster code on Hadoop 1.2.1
so I wanted to write a patch and test my code inside IntelliJ.

I followed the instructions on http://wiki.apache.org/hadoop/HadoopUnderIDEA.
I added libraries and source folders but I cannot build the test code of
the project.

The problem seems to be that the package structure under src/test varies.
There are org.apache.hadoop.* packages directly underneath it, as well as a
few subfolders which themselves contain packages. Adding src/test as a
source folder causes compilation errors for the packages under the
subfolders because their package names do not match the source folder path.

How can I configure the project in IntelliJ so I can develop and run unit
tests?

Cheers,

Frank


RE: Building Hadoop 1.2.1 in IntelliJ IDEA

2013-12-24 Thread java8964
The best way, I am thinking, is to try the following:
1) Use the ant command line to generate the Eclipse project files from the
Hadoop 1.2.1 source folder with "ant eclipse".
2) After that, you can use "Import Project" in IntelliJ and choose the
"Eclipse" project type, which will set up all the paths correctly in
IntelliJ for you.

Yong
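
For reference, that sequence (assuming ant is on the PATH and the command is
run from the 1.2.1 source root) is just:

    ant eclipse

followed by File -> Import Project in IntelliJ, pointing it at the source
root and choosing the Eclipse project model.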


Re: Getting error unrecognized option -jvm on starting nodemanager

2013-12-24 Thread Sitaraman Vilayannur
Hi Nitin,
 Even after formatting with hdfs namenode -format, I keep seeing "NameNode
is not formatted" in the logs when I try to start the namenode:
12/24 20:33:26 INFO namenode.FSNamesystem: supergroup=supergroup
13/12/24 20:33:26 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/12/24 20:33:26 INFO namenode.NameNode: Caching file names occuring more
than 10 times
13/12/24 20:33:26 INFO namenode.NNStorage: Storage directory
/usr/local/Software/hadoop-0.23.10/data/hdfs/namenode has been successfully
formatted.
13/12/24 20:33:26 INFO namenode.FSImage: Saving image file
/usr/local/Software/hadoop-0.23.10/data/hdfs/namenode/current/fsimage.ckpt_000
using no compression
13/12/24 20:33:26 INFO namenode.FSImage: Image file of size 124 saved in 0
seconds.
13/12/24 20:33:26 INFO namenode.NNStorageRetentionManager: Going to retain
1 images with txid >= 0
13/12/24 20:33:26 INFO util.ExitUtil: Exiting with status 0
13/12/24 20:33:26 INFO namenode.NameNode: SHUTDOWN_MSG:
/
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
/


2013-12-24 20:33:46,337 INFO org.apache.hadoop.hdfs.server.common.Storage:
Lock on /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock
acquired by nodename 7518@localhost.localdomain
2013-12-24 20:33:46,339 INFO org.mortbay.log: Stopped
SelectChannelConnector@0.0.0.0:50070
2013-12-24 20:33:46,340 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode
metrics system...
2013-12-24 20:33:46,340 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
stopped.
2013-12-24 20:33:46,340 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
shutdown complete.
2013-12-24 20:33:46,340 FATAL
org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: NameNode is not formatted.
at
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:210)
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:787)
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:568)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:443)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:491)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:684)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:669)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
2013-12-24 20:33:46,342 INFO org.apache.hadoop.util.ExitUtil: Exiting with
status 1
2013-12-24 20:33:46,343 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
/





Re: Getting error unrecognized option -jvm on starting nodemanager

2013-12-24 Thread Nitin Pawar
See the error: it says "not formatted".
Did you press Y or y?
Try again :)



Re: Getting error unrecognized option -jvm on starting nodemanager

2013-12-24 Thread Sitaraman Vilayannur
I did press Y; I have tried it several times, and once more just now.
Sitaraman



Re: Getting error unrecognized option -jvm on starting nodemanager

2013-12-24 Thread Manoj Babu
Lock on /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock
acquired by nodename 7518@localhost.localdomain

Stop all running instances and then do the steps again.

Cheers!
Manoj.



Re: Building Hadoop 1.2.1 in IntelliJ IDEA

2013-12-24 Thread Frank Scholten
Hi Yong,

Thanks for the tip. Unfortunately this still gives the same issues in
IntelliJ.

Package name 'org.apache.hadoop.mapreduce.test.system' does not correspond
to the file path 'system.java.org.apache.hadoop.mapreduce.test.system'
Detects package statements that do not correspond to the project directory
structure.

What is your setup for developing Hadoop? Eclipse? Curious what other
people are using which allows them to easily run tests, debug and so on.

Cheers,

Frank





client authentication when kerberos enabled

2013-12-24 Thread wzc
Hi all,

To access a Kerberos-protected cluster, our Hadoop clients need to get a
Kerberos ticket (kinit user@realm) before submitting jobs. We want our
clients to be able to do without a Kerberos password, so we would like to
use keytabs for authentication. We export principals of the form
'username/host@realm' and deploy them to our clients' hosts.

In addition, we want to make sure the host in the keytab matches the host
from which a client submits jobs. Currently there is no host check on
client principal authentication.

I have found some JIRAs which may be helpful:
https://issues.apache.org/jira/browse/HDFS-1003
https://issues.apache.org/jira/browse/HADOOP-7215

I have no idea how to achieve this, and I also wonder whether such a check
is reasonable. Can anyone give me some hints?
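
For the keytab side of this, a minimal sketch of a programmatic login
(principal, realm, and paths below are placeholders; this uses the stock
UserGroupInformation API, which performs no host check):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.security.UserGroupInformation;

    public class KeytabLogin {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            // Assumes core-site.xml already enables Kerberos; set it
            // explicitly here so the example is self-contained.
            conf.set("hadoop.security.authentication", "kerberos");
            UserGroupInformation.setConfiguration(conf);
            // Principal of the form username/host@REALM, matching the
            // deployed keytab.
            UserGroupInformation.loginUserFromKeytab(
                "username/client-host.example.com@EXAMPLE.COM",
                "/etc/security/keytabs/username.keytab");
            System.out.println("Logged in as "
                + UserGroupInformation.getCurrentUser().getUserName());
        }
    }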


Re: Getting error unrecognized option -jvm on starting nodemanager

2013-12-24 Thread Sitaraman Vilayannur
Hi Manoj,
 JPS says no instances are running:
[sitaraman@localhost sbin]$ jps
8934 Jps
You have new mail in /var/spool/mail/root
[sitaraman@localhost sbin]$


building hadoop 2.x from source

2013-12-24 Thread Karim Awara
Hi,

I managed to build Hadoop 2.2 from source using Maven, imported Hadoop into
Eclipse, and ran some unit tests on it. Now I want to use my build to
actually execute shell commands against HDFS. How do I do that? No resource
I have found talks about setting up any conf files.
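
A sketch of the usual setup for a single-node test, assuming the 2.x layout
(hostname, port, and paths are placeholders): add a core-site.xml under the
build's etc/hadoop conf directory with fs.defaultFS set, format, start HDFS,
and the shell then works against it:

    <!-- etc/hadoop/core-site.xml -->
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>

    bin/hdfs namenode -format
    sbin/start-dfs.sh
    bin/hdfs dfs -ls /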

--
Best Regards,
Karim Ahmed Awara



Re: Getting error unrecognized option -jvm on starting nodemanager

2013-12-24 Thread Sitaraman Vilayannur
Hi Manoj,
The directory is empty:
[root@localhost logs]# cd /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/
[root@localhost namenode]# pwd
/usr/local/Software/hadoop-2.2.0/data/hdfs/namenode
[root@localhost namenode]# ls
[root@localhost namenode]#

But I still get the "acquired" statement in the logs.
2013-12-25 09:44:42,415 INFO
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager:
dfs.block.invalidate.limit=1000
2013-12-25 09:44:42,417 INFO org.apache.hadoop.util.GSet: Computing
capacity for map BlocksMap
2013-12-25 09:44:42,417 INFO org.apache.hadoop.util.GSet: VM type   = 64-bit
2013-12-25 09:44:42,417 INFO org.apache.hadoop.util.GSet: 2.0% max
memory = 889 MB
2013-12-25 09:44:42,417 INFO org.apache.hadoop.util.GSet: capacity
 = 2^21 = 2097152 entries
2013-12-25 09:44:42,421 INFO
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
dfs.block.access.token.enable=false
2013-12-25 09:44:42,422 INFO
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
defaultReplication = 1
2013-12-25 09:44:42,422 INFO
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
maxReplication = 512
2013-12-25 09:44:42,422 INFO
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
minReplication = 1
2013-12-25 09:44:42,422 INFO
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
maxReplicationStreams  = 2
2013-12-25 09:44:42,422 INFO
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
shouldCheckForEnoughRacks  = false
2013-12-25 09:44:42,422 INFO
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
replicationRecheckInterval = 3000
2013-12-25 09:44:42,422 INFO
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
encryptDataTransfer= false
2013-12-25 09:44:42,426 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner
  = sitaraman (auth:SIMPLE)
2013-12-25 09:44:42,426 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup
  = supergroup
2013-12-25 09:44:42,426 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isPermissionEnabled = true
2013-12-25 09:44:42,426 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2013-12-25 09:44:42,427 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled:
true
2013-12-25 09:44:42,547 INFO org.apache.hadoop.util.GSet: Computing
capacity for map INodeMap
2013-12-25 09:44:42,547 INFO org.apache.hadoop.util.GSet: VM type   = 64-bit
2013-12-25 09:44:42,547 INFO org.apache.hadoop.util.GSet: 1.0% max
memory = 889 MB
2013-12-25 09:44:42,547 INFO org.apache.hadoop.util.GSet: capacity
 = 2^20 = 1048576 entries
2013-12-25 09:44:42,548 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
occuring more than 10 times
2013-12-25 09:44:42,550 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
dfs.namenode.safemode.threshold-pct = 0.999128746033
2013-12-25 09:44:42,550 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
dfs.namenode.safemode.min.datanodes = 0
2013-12-25 09:44:42,550 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
dfs.namenode.safemode.extension = 3
2013-12-25 09:44:42,551 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on
namenode is enabled
2013-12-25 09:44:42,551 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will
use 0.03 of total heap and retry cache entry expiry time is 60
millis
2013-12-25 09:44:42,553 INFO org.apache.hadoop.util.GSet: Computing
capacity for map Namenode Retry Cache
2013-12-25 09:44:42,553 INFO org.apache.hadoop.util.GSet: VM type   = 64-bit
2013-12-25 09:44:42,553 INFO org.apache.hadoop.util.GSet:
0.02999329447746% max memory = 889 MB
2013-12-25 09:44:42,553 INFO org.apache.hadoop.util.GSet: capacity
 = 2^15 = 32768 entries
2013-12-25 09:44:42,562 INFO
org.apache.hadoop.hdfs.server.common.Storage: Lock on
/usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock
acquired by nodename 9626@localhost.localdomain
2013-12-25 09:44:42,564 INFO org.mortbay.log: Stopped
SelectChannelConnector@0.0.0.0:50070
2013-12-25 09:44:42,564 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode
metrics system...
2013-12-25 09:44:42,565 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
system stopped.
2013-12-25 09:44:42,565 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
system shutdown complete.
2013-12-25 09:44:42,565 FATAL
org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode
join
java.io.IOException: NameNode is not formatted.
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:210)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:787)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:568)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.
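
One detail worth noting in these logs: the successful format above reports
writing to /usr/local/Software/hadoop-0.23.10/data/hdfs/namenode, while the
NameNode that fails to start is locking
/usr/local/Software/hadoop-2.2.0/data/hdfs/namenode. That suggests the
format command and the namenode are reading different configurations. A
sketch of the hdfs-site.xml entry that would pin both to the same
directory, assuming the 2.2.0 install is the intended one:

    <property>
      <name>dfs.namenode.name.dir</name>
      <value>/usr/local/Software/hadoop-2.2.0/data/hdfs/namenode</value>
    </property>

Running hdfs namenode -format from that same install (and conf directory)
should then format the directory the NameNode actually reads at startup.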

Re: Mounting HDFS on client

2013-12-24 Thread Brandon Li
Hi Kurt,

So sorry for the late reply.

This blog is a step by step walk through for using the NFS gateway on
Sandbox1.0:
http://hortonworks.com/blog/how-to-enable-nfs-access-to-hdfs-in-hortonworks-sandbox/

For Sandbox 2.0, it should be similar. If you don't want to change any
default NFS gateway configurations for the tests, you can skip the steps of
configuration changes and the steps to restart HDFS/MapReduce.
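
For a quick test from a Linux client, the mount step itself is typically (a
sketch: the gateway host and mount point below are placeholders; the
options reflect that the gateway speaks NFSv3 over TCP):

    mount -t nfs -o vers=3,proto=tcp,nolock <nfs-gateway-host>:/ /mnt/hdfs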

Thanks,
Brandon


On Sat, Dec 21, 2013 at 9:08 AM, Kurt Moesky  wrote:

> Hi Brandon,
>
> Thanks for your detailed response, I will give this a try.
>
> Also, I see you are with Horton Works, is there a way to use the 2.0
> Sandbox to map any of the vg_sandbox to another Linux box or the Windows
> Server I am running the Virtualbox that hosts the sandbox?
>
> Kurt
>
>
> On Fri, Dec 20, 2013 at 5:15 PM, Brandon Li wrote:
>
>> Mount on Windows client is a bit different. Here are the steps to mount
>> export on Windows:
>>
>> 1. Enable NFS Client on the Windows client system.
>>
>> Step 1. Enable File Services Role. Go to Server Management -> Add Roles
>> -> File Services
>>
>> Step 2. Install Services for Network File System. Go to File Services ->
>> Add Role Services
>>
>> 2. Mount the export; you can use either the mount command or the "net
>> use" command:
>> c:> mount * 192.168.111.158! <== note, "*" can also be a drive letter
>> here
>> c:> net use Z: 192.168.111.158!
>>
>> (There is a space between Z: and !.
>>
>> Another important thing is the ! mark; this does the trick. Without !,
>> the Windows NFS client doesn't work with the root export "/".)
>>
>> There is a user mapping issue: map Windows users to Linux users.
>> Since NFS gateway doesn't recognize Windows users, it maps them to the
>> Linux user "nobody".
>>
>> To do any data I/O test, you can create a directory on HDFS and give
>> everyone access permission.
>>
>> In reality, most administrators use Windows AD server to manage the user
>> mapping between Windows and Linux systems so the user mapping is not an
>> issue in those environments.
>>  Thanks,
>> Brandon
>>
>>
>> On Fri, Dec 20, 2013 at 4:06 PM, Brandon Li wrote:
>>
>>>
>>> Hi Kurt,
>>>
>>> You can use the HDFS NFS gateway to mount HDFS. The limitation is that
>>> random write is not supported.
>>>
>>> JIRA HDFS-5347 added a user guider.
>>> https://issues.apache.org/jira/browse/HDFS-5347
>>> There is the html attachment to this JIRA:
>>> HdfsNfsGateway.new.html
>>> 
>>>
>>> If you are using branch 2.3, 2.4 or trunk, "mvn site" will generate the
>>> user guide too.
>>>
>>> Thanks,
>>> Brandon
>>>
>>>
>>> On Fri, Dec 20, 2013 at 2:27 PM, Kurt Moesky wrote:
>>>
 I am a new to HDFS and reviewing some of the features for in-house use.

 Is there a way to mount HDFS directly on a Linux and Windows client? I
 believe I read something about there being some limitations, but that there
 is possibly a FUSE solution. Any information on this (with links to a
 how-to) would be greatly appreciated.

 Thx.

>>>
>>>
>>
>>
>
>



Re: Getting error unrecognized option -jvm on starting nodemanager

2013-12-24 Thread Manoj Babu
Try removing the file
/usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock

Cheers!
Manoj.
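
A sketch of that cleanup, with all daemons stopped first (the path is the
one from your logs):

    rm /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock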

