Error importing hbase table on new system

2015-09-27 Thread Håvard Wahl Kongsgård
Hi, I am trying to import an old backup to a new, smaller system (just a
single node, to get the data out).

when I use

sudo -u hbase hbase -Dhbase.import.version=0.94
org.apache.hadoop.hbase.mapreduce.Import crawler
/crawler_hbase/crawler

I get this error in the tasks. Is this a permission problem?


2015-09-26 23:56:32,995 ERROR
org.apache.hadoop.security.UserGroupInformation:
PriviledgedActionException as:mapred (auth:SIMPLE)
cause:java.io.IOException: keyvalues=NONE read 4096 bytes, should read
14279
2015-09-26 23:56:32,996 WARN org.apache.hadoop.mapred.Child: Error running child
java.io.IOException: keyvalues=NONE read 4096 bytes, should read 14279
at 
org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2221)
at 
org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:74)
at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:483)
at 
org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:76)
at 
org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:85)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:139)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:672)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapred.Child.main(Child.java:262)
2015-09-26 23:56:33,002 INFO org.apache.hadoop.mapred.Task: Runnning
cleanup for the task



-- 
Håvard Wahl Kongsgård
Data Scientist


Re: windows installation error

2015-09-27 Thread Onder SEZGIN
Hi,

I am writing this email as a note for other users who run into the same issue.
I found the solution after trying to build numerous Hadoop distributions.
The problem is obviously not due to the Hadoop distribution or version;
it comes from building winutils in the hadoop-common project.

While building winutils, cvtres.exe has to be called to build the Windows
solution dependency (winutils.sln).

If the .NET Framework 4.5 is installed, cvtres.exe picks up the wrong
version of a DLL dependency.

What I did was uninstall .NET Framework 4.5 and install .NET Framework 4.0,
which uses the right version of the DLL; that let the maven package ...
command get over the issue and build Hadoop successfully.
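
For reference, the exact build invocation (from BUILDING.txt, quoted later in
this thread) that succeeded once .NET Framework 4.0 was in place:

mvn -e package -Pdist,native-win -DskipTests -Dtar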

Other possible solutions to this cvtres.exe problem are explained on Stack
Overflow, in this blog post, and on the Windows Update reference site:

http://stackoverflow.com/questions/10888391/error-link-fatal-error-lnk1123-failure-during-conversion-to-coff-file-inval

https://aimslife.wordpress.com/2014/01/22/incremental-linking-lnk1123-error-in-vs2010-while-building-project/

http://blogs.msdn.com/b/heaths/archive/2011/04/01/visual-c-2010-sp1-compiler-update-for-the-windows-sdk-7-1.aspx

Thanks in advance...





On Sat, Sep 26, 2015 at 11:59 PM, Onder SEZGIN 
wrote:

> Yes, the command i am running is
>
> >mvn -e package -Pdist,native-win -DskipTests -Dtar
>
> and i follow BUILDING.txt
>
> Önder
>
> On Sat, Sep 26, 2015 at 11:31 PM, Ted Yu  wrote:
>
>> Have you specified the following profile ?
>>
>> -Pnative-win
>>
>> Please see 'Building on Windows' section in BUILDING.txt
>>
>> Details can be found in this commit:
>>
>> 638801cce16fc1dc3259c541dc30a599faaddda1
>>
>> Cheers
>>
>> On Sat, Sep 26, 2015 at 6:51 AM, Onder SEZGIN 
>> wrote:
>>
>>> It was the latest release.
>>> -X gives the same output because i had already supplied -e option while
>>> running mvn package command.
>>>
>>> On Saturday, September 26, 2015, Ted Yu  wrote:
>>>
 Which release of hadoop are you installing ?

 Have you tried supplying -X switch to see debug logs ?

 Cheers

 On Sat, Sep 26, 2015 at 5:46 AM, Onder SEZGIN 
 wrote:

> Hi,
>
> I am trying to build hadoop on windows.
> and i am getting the error below.
>
> Is there anyone who could get over the issue?
>
> Cheers
>
> [ERROR] Failed to execute goal
> org.codehaus.mojo:exec-maven-plugin:1.3.1:exec (c
> ompile-ms-winutils) on project hadoop-common: Command execution
> failed. Process
> exited with an error: 1 (Exit value: 1) -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to
> execute goal o
> rg.codehaus.mojo:exec-maven-plugin:1.3.1:exec (compile-ms-winutils) on
> project h
> adoop-common: Command execution failed.
> at
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
> .java:216)
> at
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
> .java:153)
> at
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
> .java:145)
> at
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProje
> ct(LifecycleModuleBuilder.java:84)
> at
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProje
> ct(LifecycleModuleBuilder.java:59)
> at
> org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBu
> ild(LifecycleStarter.java:183)
> at
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(Lifecycl
> eStarter.java:161)
> at
> org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
> java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
> sorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Laun
> cher.java:289)
> at
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.jav
> a:229)
> at
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(La
> uncher.java:415)
> at
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:
> 356)
> Caused by: org.apache.maven.plugin.MojoExecutionException: Command
> execution fai
> led.

Re: Build error on windows

2015-09-27 Thread Onder SEZGIN
Hi,

I am writing this email as a note for other users who run into the same issue.
I found the solution after trying to build numerous Hadoop distributions.
The problem is obviously not due to the Hadoop distribution or version;
it comes from building winutils in the hadoop-common project.

While building winutils, cvtres.exe has to be called to build the Windows
solution dependency (winutils.sln).

If the .NET Framework 4.5 is installed, cvtres.exe picks up the wrong
version of a DLL dependency.

What I did was uninstall .NET Framework 4.5 and install .NET Framework 4.0,
which uses the right version of the DLL; that let the maven package ...
command get over the issue and build Hadoop successfully.

Other possible solutions to this cvtres.exe problem are explained on Stack
Overflow, in this blog post, and on the Windows Update reference site:

http://stackoverflow.com/questions/10888391/error-link-fatal-error-lnk1123-failure-during-conversion-to-coff-file-inval

https://aimslife.wordpress.com/2014/01/22/incremental-linking-lnk1123-error-in-vs2010-while-building-project/

http://blogs.msdn.com/b/heaths/archive/2011/04/01/visual-c-2010-sp1-compiler-update-for-the-windows-sdk-7-1.aspx

Thanks in advance...

On Sat, Sep 26, 2015 at 5:49 PM, Onder SEZGIN  wrote:

> Hi,
>
> I am trying to build hadoop on windows.
> and i am getting the error below.
>
> Is there anyone who could get over the issue?
>
> Cheers
>
> [ERROR] Failed to execute goal
> org.codehaus.mojo:exec-maven-plugin:1.3.1:exec (c
> ompile-ms-winutils) on project hadoop-common: Command execution failed.
> Process
> exited with an error: 1 (Exit value: 1) -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute
> goal o
> rg.codehaus.mojo:exec-maven-plugin:1.3.1:exec (compile-ms-winutils) on
> project h
> adoop-common: Command execution failed.
> at
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
> .java:216)
> at
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
> .java:153)
> at
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
> .java:145)
> at
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProje
> ct(LifecycleModuleBuilder.java:84)
> at
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProje
> ct(LifecycleModuleBuilder.java:59)
> at
> org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBu
> ild(LifecycleStarter.java:183)
> at
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(Lifecycl
> eStarter.java:161)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.
> java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
> sorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Laun
> cher.java:289)
> at
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.jav
> a:229)
> at
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(La
> uncher.java:415)
> at
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:
> 356)
> Caused by: org.apache.maven.plugin.MojoExecutionException: Command
> execution fai
> led.
> at org.codehaus.mojo.exec.ExecMojo.execute(ExecMojo.java:303)
> at
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(Default
> BuildPluginManager.java:106)
> at
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor
> .java:208)
> ... 19 more
> Caused by: org.apache.commons.exec.ExecuteException: Process exited with
> an erro
> r: 1 (Exit value: 1)
> at
> org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecut
> or.java:402)
> at
> org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:
> 164)
> at
> org.codehaus.mojo.exec.ExecMojo.executeCommandLine(ExecMojo.java:750)
>
> at org.codehaus.mojo.exec.ExecMojo.execute(ExecMojo.java:292)
> ... 21 more
> [ERROR]
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please rea
> d the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionE
> xception
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the

Re: Error importing hbase table on new system

2015-09-27 Thread Ted Yu
Is the single node system secure ?
Have you checked hdfs healthiness ?
To which release of hbase were you importing ?

Thanks
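
One minimal way to check HDFS health, for reference (the dfsadmin report is in
fact posted later in this thread; fsck is an additional, generic check):

sudo -u hdfs hdfs dfsadmin -report   # capacity, live/dead datanodes, missing blocks
sudo -u hdfs hdfs fsck /             # walks the namespace and reports corrupt or missing blocks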

> On Sep 27, 2015, at 3:06 AM, Håvard Wahl Kongsgård 
>  wrote:
> 
> Hi, Iam trying to import a old backup to a new smaller system (just
> single node, to get the data out)
> 
> when I use
> 
> sudo -u hbase hbase -Dhbase.import.version=0.94
> org.apache.hadoop.hbase.mapreduce.Import crawler
> /crawler_hbase/crawler
> 
> I get this error in the tasks . Is this a permission problem?
> 
> 
> 2015-09-26 23:56:32,995 ERROR
> org.apache.hadoop.security.UserGroupInformation:
> PriviledgedActionException as:mapred (auth:SIMPLE)
> cause:java.io.IOException: keyvalues=NONE read 4096 bytes, should read
> 14279
> 2015-09-26 23:56:32,996 WARN org.apache.hadoop.mapred.Child: Error running 
> child
> java.io.IOException: keyvalues=NONE read 4096 bytes, should read 14279
> at 
> org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2221)
> at 
> org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:74)
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:483)
> at 
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:76)
> at 
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:85)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:139)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:672)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
> at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> at org.apache.hadoop.mapred.Child.main(Child.java:262)
> 2015-09-26 23:56:33,002 INFO org.apache.hadoop.mapred.Task: Runnning
> cleanup for the task
> 
> 
> 
> -- 
> Håvard Wahl Kongsgård
> Data Scientist


Re: Error importing hbase table on new system

2015-09-27 Thread Håvard Wahl Kongsgård
>Is the single node system secure ?

No, I have not enabled security, just the defaults.

Here is the mapred conf:

<configuration>

  <property>
    <name>mapred.job.tracker</name>
    <value>rack3:8021</value>
  </property>

  <property>
    <name>mapred.jobtracker.plugins</name>
    <value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value>
    <description>Comma-separated list of jobtracker plug-ins to be activated.</description>
  </property>

  <property>
    <name>jobtracker.thrift.address</name>
    <value>0.0.0.0:9290</value>
  </property>

</configuration>

>>Have you checked hdfs healthiness ?


sudo -u hdfs hdfs dfsadmin -report

Configured Capacity: 2876708585472 (2.62 TB)

Present Capacity: 1991514849280 (1.81 TB)

DFS Remaining: 1648230617088 (1.50 TB)

DFS Used: 343284232192 (319.71 GB)

DFS Used%: 17.24%

Under replicated blocks: 52

Blocks with corrupt replicas: 0

Missing blocks: 0


-

Datanodes available: 1 (1 total, 0 dead)


Live datanodes:

Name: 127.0.0.1:50010 (localhost)

Hostname: rack3

Decommission Status : Normal

Configured Capacity: 2876708585472 (2.62 TB)

DFS Used: 343284232192 (319.71 GB)

Non DFS Used: 885193736192 (824.40 GB)

DFS Remaining: 1648230617088 (1.50 TB)

DFS Used%: 11.93%

DFS Remaining%: 57.30%

Last contact: Sun Sep 27 13:44:45 CEST 2015


>>To which release of hbase were you importing ?

HBase 0.94 (CDH 4)

the new one is CDH 5.4

On Sun, Sep 27, 2015 at 1:32 PM, Ted Yu  wrote:
> Is the single node system secure ?
> Have you checked hdfs healthiness ?
> To which release of hbase were you importing ?
>
> Thanks
>
>> On Sep 27, 2015, at 3:06 AM, Håvard Wahl Kongsgård 
>>  wrote:
>>
>> Hi, Iam trying to import a old backup to a new smaller system (just
>> single node, to get the data out)
>>
>> when I use
>>
>> sudo -u hbase hbase -Dhbase.import.version=0.94
>> org.apache.hadoop.hbase.mapreduce.Import crawler
>> /crawler_hbase/crawler
>>
>> I get this error in the tasks . Is this a permission problem?
>>
>>
>> 2015-09-26 23:56:32,995 ERROR
>> org.apache.hadoop.security.UserGroupInformation:
>> PriviledgedActionException as:mapred (auth:SIMPLE)
>> cause:java.io.IOException: keyvalues=NONE read 4096 bytes, should read
>> 14279
>> 2015-09-26 23:56:32,996 WARN org.apache.hadoop.mapred.Child: Error running 
>> child
>> java.io.IOException: keyvalues=NONE read 4096 bytes, should read 14279
>> at 
>> org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2221)
>> at 
>> org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:74)
>> at 
>> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:483)
>> at 
>> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:76)
>> at 
>> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:85)
>> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:139)
>> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:672)
>> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
>> at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:415)
>> at 
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>> at org.apache.hadoop.mapred.Child.main(Child.java:262)
>> 2015-09-26 23:56:33,002 INFO org.apache.hadoop.mapred.Task: Runnning
>> cleanup for the task
>>
>>
>>
>> --
>> Håvard Wahl Kongsgård
>> Data Scientist



-- 
Håvard Wahl Kongsgård
Data Scientist


Re: Error importing hbase table on new system

2015-09-27 Thread Ted Yu
Have you verified that the files to be imported are in HFilev2 format ?

http://hbase.apache.org/book.html#_hfile_tool

Cheers
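
As a sketch of how the HFile tool from that link can be run (the -f path is a
placeholder; point it at an actual store file in HDFS):

# -m prints the file metadata, which includes the HFile version; -v is verbose
sudo -u hbase hbase org.apache.hadoop.hbase.io.hfile.HFile -m -v -f <hdfs-path-to-a-store-file>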

On Sun, Sep 27, 2015 at 4:47 AM, Håvard Wahl Kongsgård <
haavard.kongsga...@gmail.com> wrote:

> >Is the single node system secure ?
>
> No have not activated, just defaults
>
> the mapred conf.
>
> 
>
> 
>
>
> 
>
>   
>
> mapred.job.tracker
>
> rack3:8021
>
>   
>
>
>   
>
>   
>
> mapred.jobtracker.plugins
>
> org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin
>
> Comma-separated list of jobtracker plug-ins to be
> activated.
>
> 
>
>   
>
>   
>
> jobtracker.thrift.address
>
> 0.0.0.0:9290
>
>   
>
> 
>
>
> >>Have you checked hdfs healthiness ?
>
>
> sudo -u hdfs hdfs dfsadmin -report
>
> Configured Capacity: 2876708585472 (2.62 TB)
>
> Present Capacity: 1991514849280 (1.81 TB)
>
> DFS Remaining: 1648230617088 (1.50 TB)
>
> DFS Used: 343284232192 (319.71 GB)
>
> DFS Used%: 17.24%
>
> Under replicated blocks: 52
>
> Blocks with corrupt replicas: 0
>
> Missing blocks: 0
>
>
> -
>
> Datanodes available: 1 (1 total, 0 dead)
>
>
> Live datanodes:
>
> Name: 127.0.0.1:50010 (localhost)
>
> Hostname: rack3
>
> Decommission Status : Normal
>
> Configured Capacity: 2876708585472 (2.62 TB)
>
> DFS Used: 343284232192 (319.71 GB)
>
> Non DFS Used: 885193736192 (824.40 GB)
>
> DFS Remaining: 1648230617088 (1.50 TB)
>
> DFS Used%: 11.93%
>
> DFS Remaining%: 57.30%
>
> Last contact: Sun Sep 27 13:44:45 CEST 2015
>
>
> >>To which release of hbase were you importing ?
>
> Hbase 0.94 (CHD 4)
>
> the new one is CHD 5.4
>
> On Sun, Sep 27, 2015 at 1:32 PM, Ted Yu  wrote:
> > Is the single node system secure ?
> > Have you checked hdfs healthiness ?
> > To which release of hbase were you importing ?
> >
> > Thanks
> >
> >> On Sep 27, 2015, at 3:06 AM, Håvard Wahl Kongsgård <
> haavard.kongsga...@gmail.com> wrote:
> >>
> >> Hi, Iam trying to import a old backup to a new smaller system (just
> >> single node, to get the data out)
> >>
> >> when I use
> >>
> >> sudo -u hbase hbase -Dhbase.import.version=0.94
> >> org.apache.hadoop.hbase.mapreduce.Import crawler
> >> /crawler_hbase/crawler
> >>
> >> I get this error in the tasks . Is this a permission problem?
> >>
> >>
> >> 2015-09-26 23:56:32,995 ERROR
> >> org.apache.hadoop.security.UserGroupInformation:
> >> PriviledgedActionException as:mapred (auth:SIMPLE)
> >> cause:java.io.IOException: keyvalues=NONE read 4096 bytes, should read
> >> 14279
> >> 2015-09-26 23:56:32,996 WARN org.apache.hadoop.mapred.Child: Error
> running child
> >> java.io.IOException: keyvalues=NONE read 4096 bytes, should read 14279
> >> at
> org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2221)
> >> at
> org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:74)
> >> at
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:483)
> >> at
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:76)
> >> at
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:85)
> >> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:139)
> >> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:672)
> >> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
> >> at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
> >> at java.security.AccessController.doPrivileged(Native Method)
> >> at javax.security.auth.Subject.doAs(Subject.java:415)
> >> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
> >> at org.apache.hadoop.mapred.Child.main(Child.java:262)
> >> 2015-09-26 23:56:33,002 INFO org.apache.hadoop.mapred.Task: Runnning
> >> cleanup for the task
> >>
> >>
> >>
> >> --
> >> Håvard Wahl Kongsgård
> >> Data Scientist
>
>
>
> --
> Håvard Wahl Kongsgård
> Data Scientist
>


RE: Problem running example (wrong IP address)

2015-09-27 Thread Brahma Reddy Battula
Thanks for sharing the logs.
The problem is interesting. Can you please post the namenode logs and the dual-IP
configuration? (Thinking there may be a problem with the gateway while sending
requests from the 52.1 segment to the 51.1 segment.)

Thanks and regards,
Brahma Reddy Battula
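
A minimal sketch of how the requested information could be gathered on each node
(the log path is an assumption for a tarball install; packaged installs typically
log under /var/log/hadoop*):

ip addr show                                           # interface / dual-IP configuration
cat /etc/hosts
tail -n 200 $HADOOP_HOME/logs/hadoop-*-namenode-*.log  # namenode log, default log directory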

Date: Fri, 25 Sep 2015 12:19:00 -0500
Subject: Re: Problem running example (wrong IP address)
From: dwmaill...@gmail.com
To: user@hadoop.apache.org

hadoop-master http://pastebin.com/yVF8vCYS
hadoop-data1  http://pastebin.com/xMEdf01e
hadoop-data2  http://pastebin.com/prqd02eZ


On Fri, Sep 25, 2015 at 11:53 AM, Brahma Reddy Battula 
 wrote:



Sorry, I am not able to access the logs. Could you please post to pastebin, or
attach the 192.168.51.6 DN logs (as your query is why the IP is different) and
the namenode logs here?



Thanks and regards,
Brahma Reddy Battula

Date: Fri, 25 Sep 2015 11:16:55 -0500
Subject: Re: Problem running example (wrong IP address)
From: dwmaill...@gmail.com
To: user@hadoop.apache.org

Brahma,
Thanks for the reply. I'll keep this conversation here in the user list. The 
/etc/hosts file is identical on all three nodes
hadoop@hadoop-data1:~$ cat /etc/hosts
127.0.0.1 localhost
192.168.51.4 hadoop-master
192.168.52.4 hadoop-data1
192.168.52.6 hadoop-data2

hadoop@hadoop-data2:~$ cat /etc/hosts
127.0.0.1 localhost
192.168.51.4 hadoop-master
192.168.52.4 hadoop-data1
192.168.52.6 hadoop-data2

hadoop@hadoop-master:~$ cat /etc/hosts
127.0.0.1 localhost
192.168.51.4 hadoop-master
192.168.52.4 hadoop-data1
192.168.52.6 hadoop-data2

Here are the startup logs for all three nodes:
https://gist.github.com/dwatrous/7241bb804a9be8f9303f
https://gist.github.com/dwatrous/bcd85cda23d6eca3a68b
https://gist.github.com/dwatrous/922c4f773aded0137fa3

Thanks for your help.

On Fri, Sep 25, 2015 at 10:33 AM, Brahma Reddy Battula 
 wrote:







Seems the DN was started on three machines but failed on hadoop-data1 (192.168.52.4).

192.168.51.6: it is reporting its IP as 192.168.51.1. Can you please check the
/etc/hosts file of 192.168.51.6 (192.168.51.1 might be configured in /etc/hosts)?

192.168.52.4: datanode startup might have failed (you can check this node's logs).

192.168.51.4: datanode startup succeeded; this is the master node.

Thanks & Regards

 Brahma Reddy Battula
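
Following the /etc/hosts suggestion above, a small sketch of checks that could be
run on each node to see where the 192.168.51.1 address comes from (both properties
are standard HDFS keys; the output will depend on the setup):

getent hosts hadoop-master hadoop-data1 hadoop-data2   # what each hostname resolves to locally
hdfs getconf -confKey dfs.datanode.dns.interface       # interface the datanode uses to pick its address
hdfs getconf -confKey dfs.datanode.dns.nameserver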

 










From: Daniel Watrous [dwmaill...@gmail.com]

Sent: Friday, September 25, 2015 8:41 PM

To: user@hadoop.apache.org

Subject: Re: Problem running example (wrong IP address)






I'm still stuck on this and posted it to stackoverflow:
http://stackoverflow.com/questions/32785256/hadoop-datanode-binds-wrong-ip-address





Thanks,
Daniel



On Fri, Sep 25, 2015 at 8:28 AM, Daniel Watrous 
 wrote:


I could really use some help here. As you can see from the output below, the
two attached datanodes are identified with a non-existent IP address. Can
someone tell me how that gets selected, or how to explicitly set it? Also, why
are both datanodes shown under the same name/IP?





hadoop@hadoop-master:~$ hdfs dfsadmin -report
Configured Capacity: 84482326528 (78.68 GB)
Present Capacity: 75745546240 (70.54 GB)
DFS Remaining: 75744862208 (70.54 GB)
DFS Used: 684032 (668 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0



-
Live datanodes (2):



Name: 192.168.51.1:50010 (192.168.51.1)
Hostname: hadoop-data1
Decommission Status : Normal
Configured Capacity: 42241163264 (39.34 GB)
DFS Used: 303104 (296 KB)
Non DFS Used: 4302479360 (4.01 GB)
DFS Remaining: 37938380800 (35.33 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.81%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Sep 25 13:25:37 UTC 2015






Name: 192.168.51.4:50010 (hadoop-master)
Hostname: hadoop-master
Decommission Status : Normal
Configured Capacity: 42241163264 (39.34 GB)
DFS Used: 380928 (372 KB)
Non DFS Used: 4434300928 (4.13 GB)
DFS Remaining: 37806481408 (35.21 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.50%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Sep 25 13:25:38 UTC 2015













On Thu, Sep 24, 2015 at 5:05 PM, Daniel Watrous 
 wrote:


The IP address is clearly wrong, but I'm not sure how it gets set. Can someone 
tell me how to configure it to choose a valid IP address?




On Thu, Sep 24, 2015 at 3:26 PM, Daniel Watrous 
 wrote:


I just noticed that both datanodes appear to have chosen that IP address and 
bound that port for HDFS communication.



http://screencast.com/t/OQNbrWFF





Any idea why this would be? Is there some way to specify which IP/hostname 
should be used for that?





On Thu, Sep 24, 2015 at 3:11 PM, Daniel Watrous 
 wrote:



When I try to run a map reduce example, I get the following error:




hadoop@hadoop-master:~$ hadoop jar 
/usr/local/hadoop/share/hadoop/

Re: Error importing hbase table on new system

2015-09-27 Thread Håvard Wahl Kongsgård
Yes, I have tried to read them on another system as well; it worked there.
But I don't know if they are in HFilev1 or HFilev2 format (is there any way
to check?).

These are the first lines from one of the files:

SEQ1org.apache.hadoop.hbase.io.ImmutableBytesWritable%org.apache.hadoop.hbase.client.Result*org.apache.hadoop.io.compress.DefaultCodec���N��
$��a&t��wb%!10107712083-10152358443612846x���P]�6:�w|pw
��� �$��K 0� Npw�$x�@����s�;�Ɦ���V�^��ݽW����
�қ��<�/��0/�?'/�
�/�����7{kG[(������w��OY^I���}9��l��;�TJ�����J�‹pu���V�ӡm�\E@��V6�oe45U���,�3���Ͻ�w��O���zڼ�/��歇�KȦ/?��Y;�/�������}���룫-�'_�k���q�$��˨�����^
���i���tH$/��e.J��{S�\��S>Gd���1~p#��o����M
�!٠��;c��IkQ
�A)|d�i�(Z�f��oPb�j{��x����`�b���cbb`�"�}�HCG��&�JG�%��',*!!��
������&�_Q��R�2�1��_��~>:�b����w@�B�~Y�H�(�h/FR_+��nX`#�
|D����j��܏��f��ƨT��k/颚h��4`+Q#�ⵕ�,Z�80�V:�
 )Y)4Lq��[�z#���T
-Håvard

On Sun, Sep 27, 2015 at 4:06 PM, Ted Yu  wrote:
> Have you verified that the files to be imported are in HFilev2 format ?
>
> http://hbase.apache.org/book.html#_hfile_tool
>
> Cheers
>
> On Sun, Sep 27, 2015 at 4:47 AM, Håvard Wahl Kongsgård
>  wrote:
>>
>> >Is the single node system secure ?
>>
>> No have not activated, just defaults
>>
>> the mapred conf.
>>
>> 
>>
>> 
>>
>>
>> 
>>
>>   
>>
>> mapred.job.tracker
>>
>> rack3:8021
>>
>>   
>>
>>
>>   
>>
>>   
>>
>> mapred.jobtracker.plugins
>>
>> org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin
>>
>> Comma-separated list of jobtracker plug-ins to be
>> activated.
>>
>> 
>>
>>   
>>
>>   
>>
>> jobtracker.thrift.address
>>
>> 0.0.0.0:9290
>>
>>   
>>
>> 
>>
>>
>> >>Have you checked hdfs healthiness ?
>>
>>
>> sudo -u hdfs hdfs dfsadmin -report
>>
>> Configured Capacity: 2876708585472 (2.62 TB)
>>
>> Present Capacity: 1991514849280 (1.81 TB)
>>
>> DFS Remaining: 1648230617088 (1.50 TB)
>>
>> DFS Used: 343284232192 (319.71 GB)
>>
>> DFS Used%: 17.24%
>>
>> Under replicated blocks: 52
>>
>> Blocks with corrupt replicas: 0
>>
>> Missing blocks: 0
>>
>>
>> -
>>
>> Datanodes available: 1 (1 total, 0 dead)
>>
>>
>> Live datanodes:
>>
>> Name: 127.0.0.1:50010 (localhost)
>>
>> Hostname: rack3
>>
>> Decommission Status : Normal
>>
>> Configured Capacity: 2876708585472 (2.62 TB)
>>
>> DFS Used: 343284232192 (319.71 GB)
>>
>> Non DFS Used: 885193736192 (824.40 GB)
>>
>> DFS Remaining: 1648230617088 (1.50 TB)
>>
>> DFS Used%: 11.93%
>>
>> DFS Remaining%: 57.30%
>>
>> Last contact: Sun Sep 27 13:44:45 CEST 2015
>>
>>
>> >>To which release of hbase were you importing ?
>>
>> Hbase 0.94 (CHD 4)
>>
>> the new one is CHD 5.4
>>
>> On Sun, Sep 27, 2015 at 1:32 PM, Ted Yu  wrote:
>> > Is the single node system secure ?
>> > Have you checked hdfs healthiness ?
>> > To which release of hbase were you importing ?
>> >
>> > Thanks
>> >
>> >> On Sep 27, 2015, at 3:06 AM, Håvard Wahl Kongsgård
>> >>  wrote:
>> >>
>> >> Hi, Iam trying to import a old backup to a new smaller system (just
>> >> single node, to get the data out)
>> >>
>> >> when I use
>> >>
>> >> sudo -u hbase hbase -Dhbase.import.version=0.94
>> >> org.apache.hadoop.hbase.mapreduce.Import crawler
>> >> /crawler_hbase/crawler
>> >>
>> >> I get this error in the tasks . Is this a permission problem?
>> >>
>> >>
>> >> 2015-09-26 23:56:32,995 ERROR
>> >> org.apache.hadoop.security.UserGroupInformation:
>> >> PriviledgedActionException as:mapred (auth:SIMPLE)
>> >> cause:java.io.IOException: keyvalues=NONE read 4096 bytes, should read
>> >> 14279
>> >> 2015-09-26 23:56:32,996 WARN org.apache.hadoop.mapred.Child: Error
>> >> running child
>> >> java.io.IOException: keyvalues=NONE read 4096 bytes, should read 14279
>> >> at
>> >> org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2221)
>> >> at
>> >> org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:74)
>> >> at
>> >> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:483)
>> >> at
>> >> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:76)
>> >> at
>> >> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:85)
>> >> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:139)
>> >> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:672)
>> >> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
>> >> at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
>> >> at java.security.AccessController.doPrivileged(Native Method)
>> >> at javax.security.auth.Subject.doAs(Subject.java:415)
>> >> at
>> >> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>> >> at org.apache.hadoop.mapred.Child.main(Child.java:262)
>> >> 2015-09-26 23:56:33,002 INFO org.apache.hadoop.mapred.Task: Runnning
>> >> cleanup for the task
>> >>
>> >>
>> >>

Re: Error importing hbase table on new system

2015-09-27 Thread Ted Yu
Have you used HFile tool ?

The tool would print out that information as part of metadata.

Cheers

On Sun, Sep 27, 2015 at 9:19 AM, Håvard Wahl Kongsgård <
haavard.kongsga...@gmail.com> wrote:

> Yes, I have tried to read them on another system as well. It worked
> there. But I don't know if they are HFilev1 or HFilev2 format(any way
> to check ?? )
>
> This is the first lines from one of the files
>
> SEQ
> 1org.apache.hadoop.hbase.io.ImmutableBytesWritable%org.apache.hadoop.hbase.client.Result
> *org.apache.hadoop.io.compress.DefaultCodec���N��
> $��a&t ��wb%!10107712083-10152358443612846x��� P ]�6:� w |pw
>  ���   �$��K 0�   Npw�$x�@� ���s�;�Ɦ���V�^��ݽW� �� �
> �қ ��<� /��0/ � ?'/ �
>  � /��� �� 7{kG [(���  ���w�� OY^I ���}9 � �l��;�TJ�� �� �J� ‹  
>  pu���V�   ӡm�\E @ ��V6�oe45U ���,�3 ���Ͻ�w��O���zڼ�/��歇�KȦ/ ?��
> Y;� / ��� �� �� }� ��룫-�'_�k� ��q� $ ��˨� � ���^
> ��� i��� tH$/��e.J��{S �\��S >G d���1~ p#��  o �� ��M
> �!٠��;c��I kQ
> �A)|d�i�(Z�f��o Pb �j {�  �x��� � `�b���cbb`�"�} �
> HCG��&�JG�%��',*!!��
> �� �  � ��& �_Q��R�2�1��_��~>:� b  � ���w @�B�  ~Y�H�(�h/FR
> _+��nX `#�
> |D��� �j��܏�� f ��ƨT��k/ 颚h ��4` +Q#�ⵕ�,Z�80�V:�
>  )Y)4Lq��[�   z#���T
> -Håvard
>
> On Sun, Sep 27, 2015 at 4:06 PM, Ted Yu  wrote:
> > Have you verified that the files to be imported are in HFilev2 format ?
> >
> > http://hbase.apache.org/book.html#_hfile_tool
> >
> > Cheers
> >
> > On Sun, Sep 27, 2015 at 4:47 AM, Håvard Wahl Kongsgård
> >  wrote:
> >>
> >> >Is the single node system secure ?
> >>
> >> No have not activated, just defaults
> >>
> >> the mapred conf.
> >>
> >> 
> >>
> >> 
> >>
> >>
> >> 
> >>
> >>   
> >>
> >> mapred.job.tracker
> >>
> >> rack3:8021
> >>
> >>   
> >>
> >>
> >>   
> >>
> >>   
> >>
> >> mapred.jobtracker.plugins
> >>
> >> org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin
> >>
> >> Comma-separated list of jobtracker plug-ins to be
> >> activated.
> >>
> >> 
> >>
> >>   
> >>
> >>   
> >>
> >> jobtracker.thrift.address
> >>
> >> 0.0.0.0:9290
> >>
> >>   
> >>
> >> 
> >>
> >>
> >> >>Have you checked hdfs healthiness ?
> >>
> >>
> >> sudo -u hdfs hdfs dfsadmin -report
> >>
> >> Configured Capacity: 2876708585472 (2.62 TB)
> >>
> >> Present Capacity: 1991514849280 (1.81 TB)
> >>
> >> DFS Remaining: 1648230617088 (1.50 TB)
> >>
> >> DFS Used: 343284232192 (319.71 GB)
> >>
> >> DFS Used%: 17.24%
> >>
> >> Under replicated blocks: 52
> >>
> >> Blocks with corrupt replicas: 0
> >>
> >> Missing blocks: 0
> >>
> >>
> >> -
> >>
> >> Datanodes available: 1 (1 total, 0 dead)
> >>
> >>
> >> Live datanodes:
> >>
> >> Name: 127.0.0.1:50010 (localhost)
> >>
> >> Hostname: rack3
> >>
> >> Decommission Status : Normal
> >>
> >> Configured Capacity: 2876708585472 (2.62 TB)
> >>
> >> DFS Used: 343284232192 (319.71 GB)
> >>
> >> Non DFS Used: 885193736192 (824.40 GB)
> >>
> >> DFS Remaining: 1648230617088 (1.50 TB)
> >>
> >> DFS Used%: 11.93%
> >>
> >> DFS Remaining%: 57.30%
> >>
> >> Last contact: Sun Sep 27 13:44:45 CEST 2015
> >>
> >>
> >> >>To which release of hbase were you importing ?
> >>
> >> Hbase 0.94 (CHD 4)
> >>
> >> the new one is CHD 5.4
> >>
> >> On Sun, Sep 27, 2015 at 1:32 PM, Ted Yu  wrote:
> >> > Is the single node system secure ?
> >> > Have you checked hdfs healthiness ?
> >> > To which release of hbase were you importing ?
> >> >
> >> > Thanks
> >> >
> >> >> On Sep 27, 2015, at 3:06 AM, Håvard Wahl Kongsgård
> >> >>  wrote:
> >> >>
> >> >> Hi, Iam trying to import a old backup to a new smaller system (just
> >> >> single node, to get the data out)
> >> >>
> >> >> when I use
> >> >>
> >> >> sudo -u hbase hbase -Dhbase.import.version=0.94
> >> >> org.apache.hadoop.hbase.mapreduce.Import crawler
> >> >> /crawler_hbase/crawler
> >> >>
> >> >> I get this error in the tasks . Is this a permission problem?
> >> >>
> >> >>
> >> >> 2015-09-26 23:56:32,995 ERROR
> >> >> org.apache.hadoop.security.UserGroupInformation:
> >> >> PriviledgedActionException as:mapred (auth:SIMPLE)
> >> >> cause:java.io.IOException: keyvalues=NONE read 4096 bytes, should
> read
> >> >> 14279
> >> >> 2015-09-26 23:56:32,996 WARN org.apache.hadoop.mapred.Child: Error
> >> >> running child
> >> >> java.io.IOException: keyvalues=NONE read 4096 bytes, should read
> 14279
> >> >> at
> >> >>
> org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2221)
> >> >> at
> >> >>
> org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:74)
> >> >> at
> >> >>
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:483)
> >> >> at
> >> >>
> org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:76)
> >> >> at
> >> >>
> org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:85)
> >> >> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:

Re: Error importing hbase table on new system

2015-09-27 Thread Håvard Wahl Kongsgård
But isn't that tool for reading regions, not export dumps?

-Håvard
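
For what it's worth, a quick way to confirm what kind of file the export dump is,
using the directory from the original Import command (the part file name below is
an assumption; list the directory to get the real names):

hadoop fs -ls /crawler_hbase/crawler
# an Export dump is a SequenceFile: the first bytes are "SEQ" followed by a version byte
hadoop fs -cat /crawler_hbase/crawler/part-m-00000 | head -c 32 | od -c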

On Sun, Sep 27, 2015 at 6:23 PM, Ted Yu  wrote:
> Have you used HFile tool ?
>
> The tool would print out that information as part of metadata.
>
> Cheers
>
> On Sun, Sep 27, 2015 at 9:19 AM, Håvard Wahl Kongsgård
>  wrote:
>>
>> Yes, I have tried to read them on another system as well. It worked
>> there. But I don't know if they are HFilev1 or HFilev2 format(any way
>> to check ?? )
>>
>> This is the first lines from one of the files
>>
>> SEQ
>> 1org.apache.hadoop.hbase.io.ImmutableBytesWritable%org.apache.hadoop.hbase.client.Result
>> *org.apache.hadoop.io.compress.DefaultCodec���N��
>> $��a&t ��wb%!10107712083-10152358443612846x��� P ]�6:� w |pw
>>  ���   �$��K 0�   Npw�$x�@� ���s�;�Ɦ���V�^��ݽW� �� �
>> �қ ��<� /��0/ � ?'/ �
>>  � /��� �� 7{kG [(���  ���w�� OY^I ���}9 � �l��;�TJ�� �� �J� ‹  
>> pu���V�   ӡm�\E @ ��V6�oe45U ���,�3 ���Ͻ�w��O���zڼ�/��歇�KȦ/ ?��  Y;�
>> / ��� �� �� }� ��룫-�'_�k� ��q� $ ��˨� � ���^
>> ��� i��� tH$/��e.J��{S �\��S >G d���1~ p#��  o �� ��M
>> �!٠��;c��I kQ
>> �A)|d�i�(Z�f��o Pb �j {�  �x��� � `�b���cbb`�"�} �
>> HCG��&�JG�%��',*!!��
>> �� �  � ��& �_Q��R�2�1��_��~>:� b  � ���w @�B�  ~Y�H�(�h/FR
>> _+��nX `#�
>> |D��� �j��܏�� f ��ƨT��k/ 颚h ��4` +Q#�ⵕ�,Z�80�V:�
>>  )Y)4Lq��[�   z#���T>
>> -Håvard
>>
>> On Sun, Sep 27, 2015 at 4:06 PM, Ted Yu  wrote:
>> > Have you verified that the files to be imported are in HFilev2 format ?
>> >
>> > http://hbase.apache.org/book.html#_hfile_tool
>> >
>> > Cheers
>> >
>> > On Sun, Sep 27, 2015 at 4:47 AM, Håvard Wahl Kongsgård
>> >  wrote:
>> >>
>> >> >Is the single node system secure ?
>> >>
>> >> No have not activated, just defaults
>> >>
>> >> the mapred conf.
>> >>
>> >> 
>> >>
>> >> 
>> >>
>> >>
>> >> 
>> >>
>> >>   
>> >>
>> >> mapred.job.tracker
>> >>
>> >> rack3:8021
>> >>
>> >>   
>> >>
>> >>
>> >>   
>> >>
>> >>   
>> >>
>> >> mapred.jobtracker.plugins
>> >>
>> >> org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin
>> >>
>> >> Comma-separated list of jobtracker plug-ins to be
>> >> activated.
>> >>
>> >> 
>> >>
>> >>   
>> >>
>> >>   
>> >>
>> >> jobtracker.thrift.address
>> >>
>> >> 0.0.0.0:9290
>> >>
>> >>   
>> >>
>> >> 
>> >>
>> >>
>> >> >>Have you checked hdfs healthiness ?
>> >>
>> >>
>> >> sudo -u hdfs hdfs dfsadmin -report
>> >>
>> >> Configured Capacity: 2876708585472 (2.62 TB)
>> >>
>> >> Present Capacity: 1991514849280 (1.81 TB)
>> >>
>> >> DFS Remaining: 1648230617088 (1.50 TB)
>> >>
>> >> DFS Used: 343284232192 (319.71 GB)
>> >>
>> >> DFS Used%: 17.24%
>> >>
>> >> Under replicated blocks: 52
>> >>
>> >> Blocks with corrupt replicas: 0
>> >>
>> >> Missing blocks: 0
>> >>
>> >>
>> >> -
>> >>
>> >> Datanodes available: 1 (1 total, 0 dead)
>> >>
>> >>
>> >> Live datanodes:
>> >>
>> >> Name: 127.0.0.1:50010 (localhost)
>> >>
>> >> Hostname: rack3
>> >>
>> >> Decommission Status : Normal
>> >>
>> >> Configured Capacity: 2876708585472 (2.62 TB)
>> >>
>> >> DFS Used: 343284232192 (319.71 GB)
>> >>
>> >> Non DFS Used: 885193736192 (824.40 GB)
>> >>
>> >> DFS Remaining: 1648230617088 (1.50 TB)
>> >>
>> >> DFS Used%: 11.93%
>> >>
>> >> DFS Remaining%: 57.30%
>> >>
>> >> Last contact: Sun Sep 27 13:44:45 CEST 2015
>> >>
>> >>
>> >> >>To which release of hbase were you importing ?
>> >>
>> >> Hbase 0.94 (CHD 4)
>> >>
>> >> the new one is CHD 5.4
>> >>
>> >> On Sun, Sep 27, 2015 at 1:32 PM, Ted Yu  wrote:
>> >> > Is the single node system secure ?
>> >> > Have you checked hdfs healthiness ?
>> >> > To which release of hbase were you importing ?
>> >> >
>> >> > Thanks
>> >> >
>> >> >> On Sep 27, 2015, at 3:06 AM, Håvard Wahl Kongsgård
>> >> >>  wrote:
>> >> >>
>> >> >> Hi, Iam trying to import a old backup to a new smaller system (just
>> >> >> single node, to get the data out)
>> >> >>
>> >> >> when I use
>> >> >>
>> >> >> sudo -u hbase hbase -Dhbase.import.version=0.94
>> >> >> org.apache.hadoop.hbase.mapreduce.Import crawler
>> >> >> /crawler_hbase/crawler
>> >> >>
>> >> >> I get this error in the tasks . Is this a permission problem?
>> >> >>
>> >> >>
>> >> >> 2015-09-26 23:56:32,995 ERROR
>> >> >> org.apache.hadoop.security.UserGroupInformation:
>> >> >> PriviledgedActionException as:mapred (auth:SIMPLE)
>> >> >> cause:java.io.IOException: keyvalues=NONE read 4096 bytes, should
>> >> >> read
>> >> >> 14279
>> >> >> 2015-09-26 23:56:32,996 WARN org.apache.hadoop.mapred.Child: Error
>> >> >> running child
>> >> >> java.io.IOException: keyvalues=NONE read 4096 bytes, should read
>> >> >> 14279
>> >> >> at
>> >> >>
>> >> >> org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2221)
>> >> >> at
>> >> >>
>> >> >> org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader.nextKeyValue(SequenceFileRecordReader.java:74)
>> >> >> at
>> >> >>
>> >> >> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReade

RE: Why would ApplicationManager request RAM more that defaut 1GB?

2015-09-27 Thread Naganarasimha G R (Naga)
Hi Ilya,
I think that property is of less significance; it only confirms the behaviour with
respect to virtual memory. The important one is: can we get a snapshot of the heap
(using the command shared earlier)? From it we can roughly determine which object is
hogging the memory.

+ Naga
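
The heap-snapshot command referred to above is not quoted in this excerpt; as a
sketch, one common way to capture a heap dump of the AM container is jmap against
the MRAppMaster java PID shown in the process-tree dump later in this thread:

# run as the user that owns the container process; 13994 is the MRAppMaster PID from the dump below
jmap -dump:live,format=b,file=/tmp/appmaster.hprof 13994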



From: Ilya Karpov [i.kar...@cleverdata.ru]
Sent: Friday, September 25, 2015 14:34
To: user@hadoop.apache.org
Subject: Re: Why would ApplicationManager request RAM more that defaut 1GB?

Hi Manoj & Naga,
I’m surprised, but there is no such property in the CDH conf files (I grepped all
*.xml on the hosts where YARN lives!). I think this property is set by Cloudera:
http://image.slidesharecdn.com/yarnsaboutyarn-kathleenting112114-141125155911-conversion-gate01/95/yarns-about-yarn-28-638.jpg?cb=1416931543
(we use CDH 5.4.5)

On 25 Sep 2015, at 10:19, Naganarasimha Garla <naganarasimha...@gmail.com> wrote:

Hi Manoj & Ilya,

From the logs:
2015-09-21 22:50:34,018 WARN 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
 Container [pid=13982,containerID=container_1442402147223_0165_01_01] is 
running beyond physical memory limits.

This indicates that the container was killed for exceeding the physical memory limit,
not the virtual limit; the probability that the Container Monitor would wait until the
vmem reached 3.4 GB (when the limit is 2.1 GB) before killing it is low.

Vmem covers the overall memory usage, including open files etc., but it seems the
virtual memory check in your setup is disabled. Please check the configuration
mentioned by Manoj ("yarn.nodemanager.vmem-check-enabled") to cross-verify.


On Fri, Sep 25, 2015 at 12:15 PM, Ilya Karpov <i.kar...@cleverdata.ru> wrote:
Hello, Manoj
the actual question is why this happens

On 24 Sep 2015, at 20:39, manoj <manojm@gmail.com> wrote:

Hello IIya,

Looks like the Vmem usage is going above 2.1 times the Pmem; that's why the container
is getting killed:

1.0 GB of 1 GB physical memory used; 3.4 GB of 2.1 GB virtual memory used

By default the Vmem limit is set to 2.1 times the Pmem.
Looks like your job is taking 3.4 GB!

You can change the ratio by setting the following in yarn-site.xml:
yarn.nodemanager.vmem-pmem-ratio

You can optionally disable this check by setting the following to false:

yarn.nodemanager.vmem-check-enabled


Thanks,
-Manoj

On Wed, Sep 23, 2015 at 12:36 AM, Ilya Karpov <i.kar...@cleverdata.ru> wrote:
Great thanks for your reply!

>1. Which version of Hadoop/ YARN ?
Hadoop(command: hadoop version):
Hadoop 2.6.0-cdh5.4.5
Subversion http://github.com/cloudera/hadoop -r 
ab14c89fe25e9fb3f9de4fb852c21365b7c5608b
Compiled by jenkins on 2015-08-12T21:11Z
Compiled with protoc 2.5.0
From source with checksum d31cb7e46b8602edaf68d335b785ab
This command was run using 
/opt/cloudera/parcels/CDH-5.4.5-1.cdh5.4.5.p0.7/jars/hadoop-common-2.6.0-cdh5.4.5.jar
Yarn (command: yarn version) prints exactly the same.

>2. From the logs is it getting killed due to over usage of Vmem or Physical 
>memory ?
Because of over usage of Physical memory. Last seconds of life:
2015-09-21 22:50:34,017 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
 Memory usage of ProcessTree 13982 for container-id 
container_1442402147223_0165_01_01: 1.0 GB of 1 GB physical memory used; 
3.4 GB of 2.1 GB virtual memory used
2015-09-21 22:50:34,017 WARN 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
 Process tree for container: container_1442402147223_0165_01_01 has 
processes older than 1 iteration running over the configured limit. 
Limit=1073741824, current usage = 1074352128
2015-09-21 22:50:34,018 WARN 
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
 Container [pid=13982,containerID=container_1442402147223_0165_01_01] is 
running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical 
memory used; 3.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1442402147223_0165_01_01 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 13994 13982 13982 13982 (java) 4285 714 3602911232 261607 
/opt/jdk1.8.0_60/bin/java -Dlog4j.configuration=container-log4j.properties 
-Dyarn.app.container.log.dir=/var/log/hadoop-yarn/contai
ner/application_1442402147223_0165/container_1442402147223_0165_01_01 
-Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
-Djava.net.preferIPv4Stack=true -Xmx825955249 org.apache.had
oop.mapreduce.v2.app.MRAppMaster
|- 13982 13980 13982 13982 (bash) 0 0 14020608 686 /bin/bash -c 
/opt/jdk1.8.0_60/bin/java -Dlog4j.configuration=container-log4j.properties 
-Dyarn.app.container.log.dir=/var/log/hadoop-yarn/container/application_1442402147223_0165/container_