Do I need to tune the mapred.child.java.opts option if I use YARN and MRv2?

2014-05-22 Thread ch huang
Hi, mailing list:
 I want to know whether this option still imposes a limitation in YARN?
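
For reference, in MRv2 the old MRv1 property mapred.child.java.opts is normally replaced by per-task settings, and the container size is what YARN actually enforces. A minimal mapred-site.xml sketch (the values here are illustrative placeholders only, not recommendations):

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx800m</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx1600m</value>
  </property>

The -Xmx in the java.opts should stay below the corresponding memory.mb, since the NodeManager can kill a container whose process tree exceeds its allocation.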


Re: question about NM heapsize

2014-05-22 Thread Tsuyoshi OZAWA
Thank you for the point, Vinod. You're right.

Thanks, Tsuyoshi
 On May 22, 2014 9:26 PM, "Vinod Kumar Vavilapalli" 
wrote:

> Not "in addition to that". You should only use the memory-mb
> configuration. Giving 15GB to the NodeManager itself will eat into the total
> memory available for containers.
>
> Vinod
>
> On May 22, 2014, at 8:25 PM, Tsuyoshi OZAWA 
> wrote:
>
> hi,
>
> In addition to that, you need to change the property yarn.nodemanager.resource.memory-mb
> in yarn-site.xml to make the NM recognize the available memory.
> On May 22, 2014 7:50 PM, "ch huang"  wrote:
>
>> hi, mailing list:
>>
>> I set YARN_NODEMANAGER_HEAPSIZE=15000, so the NM runs in a 15 GB JVM, but
>> the YARN web UI's Active Nodes -> Mem Avail shows only 8 GB. Why?
>>
>
>


Re: hadoop job stuck in accepted state

2014-05-22 Thread Rahul Singh
Thanks Sebastian. The job was stuck due to memory issues. I found the
link below very useful in configuring YARN.

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_installing_manually_book/content/rpm-chap1-11.html
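
For anyone hitting the same issue, the settings usually involved are the YARN container limits; a minimal sketch under the assumption of a small single-node box (all values are placeholders to be derived from that guide, not recommendations):

yarn-site.xml:
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
  </property>

mapred-site.xml:
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>1024</value>
  </property>

A job can sit in the ACCEPTED state forever when the containers it requests (starting with the MR ApplicationMaster) are larger than the scheduler is allowed to allocate, so the AM and task sizes must fit within yarn.scheduler.maximum-allocation-mb and the node's memory-mb.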

Regards,
-Rahul Singh


On Fri, May 23, 2014 at 9:59 AM, Rahul Singh wrote:

> Thanks. The job was stuck because of memory issues.
>
>
> On Wed, May 21, 2014 at 11:10 PM, Sebastian Gäde 
> wrote:
>
>> Hi,
>>
>> I remember having a similar issue. My job was demanding more memory
>> than was available in the cluster; that's why it was waiting forever.
>>
>> Could you check the resource-/nodemanager logs? Also the "Scheduler" page
>> in the web app might give a hint whether the job is not starting because of
>> insufficient resources.
>>
>> Cheers
>> Seb.
>>
>> Am 21.05.2014 um 16:19 schrieb Rahul Singh :
>>
>> Hi,
>>   I am trying to run the wordcount example on a single-node cluster but my
>> job is stuck in the accepted state. Various details are mentioned below (screenshot
>> attached):
>>
>> hadoop version
>> Hadoop 2.3.0
>> Subversion http://svn.apache.org/repos/asf/hadoop/common -r 1567123
>> Compiled by jenkins on 2014-02-11T13:40Z
>> Compiled with protoc 2.5.0
>> From source with checksum dfe46336fbc6a044bc124392ec06b85
>>
>>
>> machine details:
>> Linux L-user-Tech 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC
>> 2014 x86_64 x86_64 x86_64 GNU/Linux
>>
>>
>>
>>  Command line:
>> hduser@L-user-Tech:~$ hadoop jar
>> /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar
>> wordcount /user/hduser/input /user/hduser/output
>> 14/05/21 19:13:34 WARN util.NativeCodeLoader: Unable to load
>> native-hadoop library for your platform... using builtin-java classes where
>> applicable
>> 14/05/21 19:13:35 INFO client.RMProxy: Connecting to ResourceManager at /
>> 0.0.0.0:8032
>> 14/05/21 19:13:35 INFO input.FileInputFormat: Total input paths to
>> process : 1
>> 14/05/21 19:13:35 INFO mapreduce.JobSubmitter: number of splits:1
>> 14/05/21 19:13:35 INFO mapreduce.JobSubmitter: Submitting tokens for job:
>> job_1400678752370_0003
>> 14/05/21 19:13:36 INFO impl.YarnClientImpl: Submitted application
>> application_1400678752370_0003
>> 14/05/21 19:13:36 INFO mapreduce.Job: The url to track the job:
>> http://L-user-Tech:8088/proxy/application_1400678752370_0003/
>> 14/05/21 19:13:36 INFO mapreduce.Job: Running job: job_1400678752370_0003
>>
>>
>>  There are no logs generated.
>>
>>  Let me know if any resolutions are available.
>>
>> Thanks and Regards,
>> -Rahul Singh
>> 
>>
>>
>>
>


Re: hadoop job stuck in accepted state

2014-05-22 Thread Rahul Singh
Thanks. The job was stuck because of memory issues.


On Wed, May 21, 2014 at 11:10 PM, Sebastian Gäde wrote:

> Hi,
>
> I remember having a similar issue. My job was demanding more memory
> than was available in the cluster; that's why it was waiting forever.
>
> Could you check the resource-/nodemanager logs? Also the "Scheduler" page
> in the web app might give a hint whether the job is not starting because of
> insufficient resources.
>
> Cheers
> Seb.
>
> Am 21.05.2014 um 16:19 schrieb Rahul Singh :
>
> Hi,
>   I am trying to run the wordcount example on a single-node cluster but my
> job is stuck in the accepted state. Various details are mentioned below (screenshot
> attached):
>
> hadoop version
> Hadoop 2.3.0
> Subversion http://svn.apache.org/repos/asf/hadoop/common -r 1567123
> Compiled by jenkins on 2014-02-11T13:40Z
> Compiled with protoc 2.5.0
> From source with checksum dfe46336fbc6a044bc124392ec06b85
>
>
> machine details:
> Linux L-user-Tech 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC
> 2014 x86_64 x86_64 x86_64 GNU/Linux
>
>
>
> Command line:
> hduser@L-user-Tech:~$ hadoop jar
> /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar
> wordcount /user/hduser/input /user/hduser/output
> 14/05/21 19:13:34 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> 14/05/21 19:13:35 INFO client.RMProxy: Connecting to ResourceManager at /
> 0.0.0.0:8032
> 14/05/21 19:13:35 INFO input.FileInputFormat: Total input paths to process
> : 1
> 14/05/21 19:13:35 INFO mapreduce.JobSubmitter: number of splits:1
> 14/05/21 19:13:35 INFO mapreduce.JobSubmitter: Submitting tokens for job:
> job_1400678752370_0003
> 14/05/21 19:13:36 INFO impl.YarnClientImpl: Submitted application
> application_1400678752370_0003
> 14/05/21 19:13:36 INFO mapreduce.Job: The url to track the job:
> http://L-user-Tech:8088/proxy/application_1400678752370_0003/
> 14/05/21 19:13:36 INFO mapreduce.Job: Running job: job_1400678752370_0003
>
>
>  There are no logs generated.
>
>  Let me know if any resolutions are available.
>
> Thanks and Regards,
> -Rahul Singh
> 
>
>
>


Re: question about NM heapsize

2014-05-22 Thread Vinod Kumar Vavilapalli
Not "in addition to that". You should only use the memory-mb configuration. 
Giving 15GB to the NodeManager itself will eat into the total memory available for 
containers.

Vinod

On May 22, 2014, at 8:25 PM, Tsuyoshi OZAWA  wrote:

> hi,
> 
> In addition to that, you need to change the property yarn.nodemanager.resource.memory-mb
> in yarn-site.xml to make the NM recognize the available memory.
> 
> On May 22, 2014 7:50 PM, "ch huang"  wrote:
> hi, mailing list:
>
> I set YARN_NODEMANAGER_HEAPSIZE=15000, so the NM runs in a 15 GB JVM, but
> the YARN web UI's Active Nodes -> Mem Avail shows only 8 GB. Why?
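
To make the distinction concrete: YARN_NODEMANAGER_HEAPSIZE (yarn-env.sh) only sizes the NodeManager daemon's own JVM, while the memory advertised for containers comes from yarn-site.xml. A minimal sketch, with 15360 used purely as an example value:

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>15360</value>
  </property>

The 8 GB shown under Active Nodes -> Mem Avail is just the default of this property (8192 MB), which is why raising the NM heap alone does not change it.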




Re: MapReduce scalability study

2014-05-22 Thread Sylvain Gault
On Thu, May 22, 2014 at 04:47:28PM -0400, Marcos Ortiz wrote:
> On Thursday, May 22, 2014 10:17:42 PM Sylvain Gault wrote:
> > Hello,
> >
> > I'm new to this mailing list, so forgive me if I don't do everything
> > right.
> >
> > I didn't know whether I should ask on this mailing list or on
> > mapreduce-dev or on yarn-dev. So I'll just start there. ^^
> >
> > Short story: I'm looking for some paper(s) studying the scalability
> > of Hadoop MapReduce. And I found this extremely difficult to find on
> > google scholar. Do you have something worth citing in a PhD thesis?
> >
> > Long story: I'm writing my PhD thesis about MapReduce and when I talk
> > about Hadoop I'd like to say "how much it scales". I heard two years
> > ago some people say that "Yahoo! got it to scale up to 4000 nodes and plan
> > to try on 6000 nodes" or something like that. I also heard that
> > YARN/MRv2 should scale better, but I don't plan to talk much about
> > YARN/MRv2. So I'd take anything I could cite as a reference in my
> > manuscript. :)
> 
> Hello, Sylvain.
> 
> One of the reasons the Hadoop dev team began to work on YARN was precisely
> to look for a more scalable and resourceful Hadoop system, so if you actually
> want to talk about Hadoop scalability, you should talk about YARN and MRv2.
> 
>  
> 
> The paper is here:
> 
> https://developer.yahoo.com/blogs/hadoop/
> next-generation-apache-hadoop-mapreduce-3061.html
> 

This was very interesting reading.
Maybe not very academic, but if that's all we have, I'll take it.

I also found these:
https://developer.yahoo.com/blogs/hadoop/scaling-hadoop-4000-nodes-yahoo-410.html
https://developer.yahoo.com/blogs/hadoop/hadoop-sorts-petabyte-16-25-hours-terabyte-62-422.html

Somehow I was expecting that someone had done a real scalability study
comparing MRv2 and MRv1: comparing the total time of several benchmarks
for 1000, 2000, ... 6000 nodes, and plotting some curves. :)
But that's just how I would have done it. :)


> You should talk with Arun C Murthy, Chief Architect at Hortonworks about all
> these topics. He could help you much more than I could.

I'm convinced it would be very very interesting. But I do not have much
time to spend on understanding Hadoop and I still have several chapters
to write. :)

I almost have everything I needed to know about Hadoop. But when I'm
done, I may also ask people here to proof-read what I wrote about it. :)



Sylvain


Re: Failed to run 'mvn package' on hadoop-2.2 using Cygwin

2014-05-22 Thread sam liu
I installed the JDK in Cygwin. After replacing '\\' with '/', it still failed.

Even after I reinstalled protobuf in Cygwin, I still failed and hit the same
exception...

I am confused about why I do not encounter this exception when running 'protoc
--version' directly in the shell, but always encounter the following exception when
compiling the hadoop project. It's a strange issue...








[WARNING] [/usr/local/lib/bin/protoc, --version] failed:
java.io.IOException: Cannot run program "/usr/local/lib/bin/protoc":
CreateProcess error=2, The system cannot find the file specified.
[ERROR] stdout: []
... ...
Caused by: org.apache.maven.plugin.MojoExecutionException:
org.apache.maven.plugin.MojoExecutionException: 'protoc --version' did not return a version
        at org.apache.hadoop.maven.plugin.protoc.ProtocMojo.execute(ProtocMojo.java:107)
        at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
        at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
        ... 23 more



2014-05-21 23:05 GMT+08:00 bo yang :

> By the way, how did you install your JDK? I installed JDK under
> windows, and then in Cygwin, I point to that JDK. If you build and install
> JDK under Cygwin, the file path with "\\" might not work since Cygwin
> (Linux) uses "/" as separator.
>
>
> On Wed, May 21, 2014 at 1:52 AM, Krishna Chaitanya  > wrote:
>
>> Try installing protocol buffers again: make clean, make, and make install.
>> On May 21, 2014 1:49 PM, "sam liu"  wrote:
>>
>>> Failed again...
>>>
>>> I modified ProtocMojo.java as:
>>>
>>>
>>>
>>> if (protocCommand == null || protocCommand.trim().isEmpty()) {
>>>     protocCommand = "D:\\software\\Linux\\cygwin64\\bin\\protoc.exe";
>>>     // protocCommand = "protoc";
>>> }
>>>
>>> [INFO] BUILD FAILURE
>>> [INFO] ------------------------------------------------------------------------
>>> [INFO] Total time: 24.266s
>>> [INFO] Finished at: Wed May 21 16:14:58 CST 2014
>>> [INFO] Final Memory: 50M/512M
>>> [INFO] ------------------------------------------------------------------------
>>> [ERROR] Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.2.0:protoc
>>> (compile-protoc) on project hadoop-common:
>>> org.apache.maven.plugin.MojoExecutionException: 'protoc --version' did not
>>> return a version -> [Help 1]
>>> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute
>>> goal org.apache.hadoop:hadoop-maven-plugins:2.2.0:protoc (compile-protoc)
>>> on project hadoop-common: org.apache.maven.plugin.MojoExecutionException:
>>> 'protoc --version' did not return a version at
>>> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
>>> at
>>> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>>> at
>>> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>>> at
>>> org.apache.maven.lifecycle.internal.MojoExecutor.executeForkedExecutions(MojoExecutor.java:365)
>>> at
>>> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:199)
>>> at
>>> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>>> at
>>> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>>> at
>>> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
>>> at
>>> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
>>> at
>>> org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
>>> at
>>> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
>>> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
>>> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)at
>>> org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)at
>>> org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)at
>>> org.apache.maven.cli.MavenCli.main(MavenCli.java:141) at
>>> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48)
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> at java.lang.reflect.Method.invoke(Method.java:600)at
>>> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
>>> at
>>> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
>>> at
>>> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
>>> at
>>> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)Caused

Re: question about NM heapsize

2014-05-22 Thread Tsuyoshi OZAWA
hi,

In addition to that, you need to change the property yarn.nodemanager.resource.memory-mb
in yarn-site.xml to make the NM recognize the available memory.
On May 22, 2014 7:50 PM, "ch huang"  wrote:

> hi, mailing list:
>
> I set YARN_NODEMANAGER_HEAPSIZE=15000, so the NM runs in a 15 GB JVM, but
> the YARN web UI's Active Nodes -> Mem Avail shows only 8 GB. Why?
>


question about NM heapsize

2014-05-22 Thread ch huang
hi, mailing list:

I set YARN_NODEMANAGER_HEAPSIZE=15000, so the NM runs in a 15 GB JVM, but
the YARN web UI's Active Nodes -> Mem Avail shows only 8 GB. Why?


Re: MapReduce scalability study

2014-05-22 Thread Sylvain Gault
I only talk about Hadoop because it is the de facto implementation of
MapReduce. But for the remainder of my thesis, I took a more general
approach and implemented my algorithms in a custom MapReduce
implementation.

I learned yesterday about the existence of YARN. :D And I definitely
cannot avoid talking about it, since it's the future and 1.x will be abandoned.
But I mostly know about MRv1, so I decided to only briefly talk about
MRv2 where the differences are relevant, i.e. for scalability and global
architecture I guess.

Sylvain

On Thu, May 22, 2014 at 05:39:43PM -0300, Marco Shaw wrote:
> I would consider the timeframe that you are looking for to determine if you 
> should focus on Hadoop 2.x (with YARN) or older. 2.x should scale much better 
> than 1.x. 
> 
> Keep in mind that 2.x was only "officially" released late last year. 
> 
> Marco
> 
> > On May 22, 2014, at 5:17 PM, Sylvain Gault  wrote:
> > 
> > Hello,
> > 
> > I'm new to this mailing list, so forgive me if I don't do everything
> > right.
> > 
> > I didn't know whether I should ask on this mailing list or on
> > mapreduce-dev or on yarn-dev. So I'll just start there. ^^
> > 
> > Short story: I'm looking for some paper(s) studying the scalability
> > of Hadoop MapReduce. And I found this extremely difficult to find on
> > google scholar. Do you have something worth citing in a PhD thesis?
> > 
> > Long story: I'm writing my PhD thesis about MapReduce and when I talk
> > about Hadoop I'd like to say "how much it scales". I heard two years
> > ago some people say that "Yahoo! got it to scale up to 4000 nodes and plan
> > to try on 6000 nodes" or something like that. I also heard that
> > YARN/MRv2 should scale better, but I don't plan to talk much about
> > YARN/MRv2. So I'd take anything I could cite as a reference in my
> > manuscript. :)
> > 
> > 
> > Best regards,
> > Sylvain Gault


Re: copy files from ftp to hdfs in parallel, distcp failed

2014-05-22 Thread Shlash
Hi,
Can you help me solve this problem please, if you have solved it?
Best regards

Shlash



Re: MapReduce scalability study

2014-05-22 Thread Marcos Ortiz

On Thursday, May 22, 2014 10:17:42 PM Sylvain Gault wrote:
> Hello,
> 
> I'm new to this mailing list, so forgive me if I don't do everything
> right.
> 
> I didn't know whether I should ask on this mailing list or on
> mapreduce-dev or on yarn-dev. So I'll just start there. ^^
> 
> Short story: I'm looking for some paper(s) studying the scalability
> of Hadoop MapReduce. And I found this extremely difficult to find on
> google scholar. Do you have something worth citing in a PhD thesis?
> 
> Long story: I'm writing my PhD thesis about MapReduce and when I talk
> about Hadoop I'd like to say "how much it scales". I heard two years
> ago some people say that "Yahoo! got it to scale up to 4000 nodes and plan
> to try on 6000 nodes" or something like that. I also heard that
> YARN/MRv2 should scale better, but I don't plan to talk much about
> YARN/MRv2. So I'd take anything I could cite as a reference in my
> manuscript. :)
Hello, Sylvain.
One of the reasons the Hadoop dev team began to work on YARN was precisely
to look for a more scalable and resourceful Hadoop system, so if you actually
want to talk about Hadoop scalability, you should talk about YARN and MRv2.

The paper is here:
https://developer.yahoo.com/blogs/hadoop/next-generation-apache-hadoop-mapreduce-3061.html

and the related JIRA issues here:
https://issues.apache.org/jira/browse/MAPREDUCE-278
https://issues.apache.org/jira/browse/MAPREDUCE-279

You should talk with Arun C Murthy, Chief Architect at Hortonworks, about all
these topics. He could help you much more than I could.

-- 
Marcos Ortiz[1] (@marcosluis2186[2])
http://about.me/marcosortiz[3] 
> 
> 
> Best regards,
> Sylvain Gault


[1] http://www.linkedin.com/in/mlortiz
[2] http://twitter.com/marcosluis2186
[3] http://about.me/marcosortiz


Re: MapReduce scalability study

2014-05-22 Thread Marco Shaw
I would consider the timeframe that you are looking for to determine if you 
should focus on Hadoop 2.x (with YARN) or older. 2.x should scale much better 
than 1.x. 

Keep in mind that 2.x was only "officially" released late last year. 

Marco

> On May 22, 2014, at 5:17 PM, Sylvain Gault  wrote:
> 
> Hello,
> 
> I'm new to this mailing list, so forgive me if I don't do everything
> right.
> 
> I didn't know whether I should ask on this mailing list or on
> mapreduce-dev or on yarn-dev. So I'll just start there. ^^
> 
> Short story: I'm looking for some paper(s) studying the scalability
> of Hadoop MapReduce. And I found this extremely difficult to find on
> google scholar. Do you have something worth citing in a PhD thesis?
> 
> Long story: I'm writing my PhD thesis about MapReduce and when I talk
> about Hadoop I'd like to say "how much it scales". I heard two years
> ago some people say that "Yahoo! got it to scale up to 4000 nodes and plan
> to try on 6000 nodes" or something like that. I also heard that
> YARN/MRv2 should scale better, but I don't plan to talk much about
> YARN/MRv2. So I'd take anything I could cite as a reference in my
> manuscript. :)
> 
> 
> Best regards,
> Sylvain Gault


MapReduce scalability study

2014-05-22 Thread Sylvain Gault
Hello,

I'm new to this mailing list, so forgive me if I don't do everything
right.

I didn't know whether I should ask on this mailing list or on
mapreduce-dev or on yarn-dev. So I'll just start there. ^^

Short story: I'm looking for some paper(s) studying the scalability
of Hadoop MapReduce. And I found this extremely difficult to find on
google scholar. Do you have something worth citing in a PhD thesis?

Long story: I'm writing my PhD thesis about MapReduce and when I talk
about Hadoop I'd like to say "how much it scales". I heard two years
ago some people say that "Yahoo! got it to scale up to 4000 nodes and plan
to try on 6000 nodes" or something like that. I also heard that
YARN/MRv2 should scale better, but I don't plan to talk much about
YARN/MRv2. So I'd take anything I could cite as a reference in my
manuscript. :)


Best regards,
Sylvain Gault


Re: Job Tracker Stops as Task Tracker starts

2014-05-22 Thread Raj K Singh
The problem seems to be with Java 7; install Java 6 and retry.


Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile  Tel: +91 (0)9899821370


On Wed, May 21, 2014 at 6:34 PM, Faisal Rabbani <
faisalrabb...@platalytics.com> wrote:

> My jobtracker/tasktracker are working fine, but I am unable to
> access the jobtracker web homepage. Similarly, in HBase I am getting this
> exception on every command run, although HBase itself is working fine.
>
> hbase(main):003:0> list
> TABLE
>
> t1
>
> Java::JavaLang::NoSuchMethodError:
> sun.misc.FloatingDecimal.digitsRoundedUp()Z
>
>
>
> On Wed, May 21, 2014 at 12:43 PM, Faisal Rabbani <
> faisalrabb...@platalytics.com> wrote:
>
>>
>> Hadoop 2.0.0-cdh4.6.0 and java version "1.7.0_55"
>>
>>
>>
>> On Tue, May 20, 2014 at 10:01 PM, Marcos Ortiz  wrote:
>>
>>>  What version of JDK are you using in your servers?
>>>
>>> What version of Hadoop are you using?
>>>
>>>
>>>
>>> --
>>>
>>> Marcos Ortiz  
>>> (@marcosluis2186
>>> )
>>>
>>> http://about.me/marcosortiz
>>>
>>> On Tuesday, May 20, 2014 09:01:07 PM Faisal Rabbani wrote:
>>>
>>> > Hi,
>>> >
>>> > I just installed jobtracker and task trackers, but as soon as I start any of
>>> > my tasktrackers the Job tracker's homepage gives the following error:
>>> >
>>> > java.lang.NoSuchMethodError: sun.misc.FloatingDecimal.digitsRoundedUp()Z
>>> > at java.text.DigitList.set(DigitList.java:292)
>>> > at java.text.DecimalFormat.format(DecimalFormat.java:599)
>>> > at java.text.DecimalFormat.format(DecimalFormat.java:522)
>>> > at java.text.NumberFormat.format(NumberFormat.java:271)
>>> > at org.apache.hadoop.mapred.jobtracker_jsp.generateSummaryTable(jobtracker_jsp.java:26)
>>> > at org.apache.hadoop.mapred.jobtracker_jsp._jspService(jobtracker_jsp.java:146)
>>> > at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:98)
>>> > at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>>> > at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>>> > at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>>> > at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
>>> > at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>>> > at org.apache.hadoop.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1069)
>>> > at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>>> > at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>>> > at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>>> > at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>>> > at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>>> > at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
>>> > at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>>> > at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>>> > at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>>> > at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>>> > at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>>> > at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>>> > at org.mortbay.jetty.Server.handle(Server.java:326)
>>> > at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>>> > at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
>>> > at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>>> > at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>>> > at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>>> > at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
>>> > at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
>>> >
>>> > whereas in jobtracker::50030/machines.jsp?type=active all tasktrackers
>>> > are shown in the running state:
>>> >
>>> > hmaster01 Hadoop Machine List
>>> > Active Task Trackers
>>> > Task Trackers: Name | Host | # running tasks | Max Map Tasks | Max
>>> > Reduce Tasks | Task Failures | Dire

Re: HDFS Quota Error

2014-05-22 Thread Nitin Pawar
Your table's file format and definition, together with the kind of query you
are running on that data, will decide how many files need to be created.
These files are created as temporary output from the maps until the
reducers consume them.

You can control how many files Hive's job should create at run time by
setting:
set hive.merge.mapfiles=true;
set hive.exec.max.dynamic.partitions.pernode=1;
set hive.exec.max.dynamic.partitions=2; # check what number you want to
set this to based on your machine configs

set hive.exec.max.created.files=20;

Also, if your table has lots of small files, then change the input file
format by setting:
set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;


But this will also depend on what size disk you have and what your base
filesystem type is.
Also, do not forget to set the ulimit to unlimited.
If you have reset the ulimit, you will need to restart your Hadoop cluster.
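
If these limits should apply to every job rather than per session, the same properties can also be set in hive-site.xml; a minimal sketch, with placeholder values to be tuned for your cluster:

  <property>
    <name>hive.merge.mapfiles</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.exec.max.created.files</name>
    <value>100000</value>
  </property>
  <property>
    <name>hive.input.format</name>
    <value>org.apache.hadoop.hive.ql.io.CombineHiveInputFormat</value>
  </property>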

Wait for some experts from the dev forum to give more insights on this.


On Thu, May 22, 2014 at 3:53 PM, Natarajan, Prabakaran 1. (NSN -
IN/Bangalore)  wrote:

>  Hi,
>
>
>
> Thanks.
>
>
>
> Inode usage is 100% on the disk mounted at the directory
> /var/local/hadoop (it's not temp, but Hadoop's working or cache directory).
> This happens when we run an aggregation query in Hive.  It looks like the Hive query
> (map-reduce) creates many small files.
>
>
>
> How to control this? What are those files?
>
>
>
> *Thanks and Regards*
>
> Prabakaran.N  aka NP
>
> nsn, Bangalore
>
> *When "I" is replaced by "We" - even Illness becomes "Wellness"*
>
>
>
>
>
> *From:* ext Nitin Pawar [mailto:nitinpawar...@gmail.com]
> *Sent:* Thursday, May 22, 2014 3:07 PM
>
> *To:* user@hadoop.apache.org
> *Subject:* Re: HDFS Quota Error
>
>
>
> That means there are some or a process which are creating tons of small
> files and leaving it there when the work completed.
>
>
>
> To free up inode space you will need to delete the files.
>
> I do not think there is any other way.
>
>
>
> Check in your /tmp folder, how many files are there and if any process is
> leaving tmp files behind.
>
>
>
> On Thu, May 22, 2014 at 2:54 PM, Natarajan, Prabakaran 1. (NSN -
> IN/Bangalore)  wrote:
>
> Just noted that inode is 100%.  Any better solutions to solve this?
>
>
>
> *Thanks and Regards*
>
> Prabakaran.N  aka NP
>
> nsn, Bangalore
>
> *When "I" is replaced by "We" - even Illness becomes "Wellness"*
>
>
>
>
>
> *From:* ext Natarajan, Prabakaran 1. (NSN - IN/Bangalore) [mailto:
> prabakaran.1.natara...@nsn.com]
> *Sent:* Thursday, May 22, 2014 2:37 PM
> *To:* user@hadoop.apache.org
> *Subject:* RE: HDFS Quota Error
>
>
>
> Thanks for your reply.  But all the datanode disks have more than 50% space
> empty.
>
>
>
> *Thanks and Regards*
>
> Prabakaran.N  aka NP
>
> nsn, Bangalore
>
> *When "I" is replaced by "We" - even Illness becomes "Wellness"*
>
>
>
>
>
> *From:* ext Nitin Pawar 
> [mailto:nitinpawar...@gmail.com]
>
> *Sent:* Thursday, May 22, 2014 12:56 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: HDFS Quota Error
>
>
>
> no space left on device can also mean that one of your datanode disk is
> full.
>
>
>
> Can you check disk used by each datanode.
>
>
>
> May be you will need to rebalance your replication so that some space is
> made free on this datanode.
>
>
>
> On Thu, May 22, 2014 at 12:34 PM, Natarajan, Prabakaran 1. (NSN -
> IN/Bangalore)  wrote:
>
> Hi
>
>
>
> When I run a query in Hive, I get below exception.  I noticed the error
> “No space left on device”.
>
>
>
> Then I did “hadoop fs -count -q /var/local/hadoop” – which gave below
> output
>
>
>
> none infnone inf   69
> 275  288034318 hdfs://nnode:54310/var/local/hadoop
>
>
>
> Why I am getting none and inf for space and remaining space quota?  Is
> this meaning is unlimited space or is there is any space left?
>
>
>
>
>
> I tried “hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop”  --  Not
> sure 100G is correct or not?  How much I need to set and how to calculate
> this?
>
>
>
> After setting 100G , I get the below output  for “hadoop fs -count -q
> /var/local/hadoop”
>
>
>
> none inf107374182400104408308039   73
> 286  29777 hdfs://nnode:54310/var/local/hadoop
>
>
>
>
>
> I have to wait to see whether 100G is going to give me an exception or
> not….
>
>
>
>
>
> --
>
>
>
>
>
> 2014-05-22 10:48:43,585 ERROR
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hadoop cause:java.io.FileNotFoundException:
> /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
> (No space left on device)
>
> 2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: Error
> initializing attempt_201405211712_0625_r_01_2:
>
> java.io.FileNotFoundException:
> /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snap

RE: HDFS Quota Error

2014-05-22 Thread Natarajan, Prabakaran 1. (NSN - IN/Bangalore)
Hi,

Thanks.

Inode usage is 100% on the disk mounted at the directory /var/local/hadoop
(it's not temp, but Hadoop's working or cache directory).  This happens when we
run an aggregation query in Hive.  It looks like the Hive query (map-reduce) creates many
small files.

How to control this? What are those files?

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Nitin Pawar [mailto:nitinpawar...@gmail.com]
Sent: Thursday, May 22, 2014 3:07 PM
To: user@hadoop.apache.org
Subject: Re: HDFS Quota Error

That means there are some or a process which are creating tons of small files 
and leaving it there when the work completed.

To free up inode space you will need to delete the files.
I do not think there is any other way.

Check in your /tmp folder, how many files are there and if any process is 
leaving tmp files behind.

On Thu, May 22, 2014 at 2:54 PM, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) 
mailto:prabakaran.1.natara...@nsn.com>> wrote:
Just noted that inode is 100%.  Any better solutions to solve this?

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Natarajan, Prabakaran 1. (NSN - IN/Bangalore) 
[mailto:prabakaran.1.natara...@nsn.com]
Sent: Thursday, May 22, 2014 2:37 PM
To: user@hadoop.apache.org
Subject: RE: HDFS Quota Error

Thanks for your reply.  But all the datanode disks have more than 50% space empty.

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Nitin Pawar [mailto:nitinpawar...@gmail.com]
Sent: Thursday, May 22, 2014 12:56 PM
To: user@hadoop.apache.org
Subject: Re: HDFS Quota Error

no space left on device can also mean that one of your datanode disk is full.

Can you check disk used by each datanode.

May be you will need to rebalance your replication so that some space is made 
free on this datanode.

On Thu, May 22, 2014 at 12:34 PM, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) 
mailto:prabakaran.1.natara...@nsn.com>> wrote:
Hi

When I run a query in Hive, I get below exception.  I noticed the error “No 
space left on device”.

Then I did “hadoop fs -count -q /var/local/hadoop” – which gave below output

none infnone inf   69  275  
288034318 hdfs://nnode:54310/var/local/hadoop

Why I am getting none and inf for space and remaining space quota?  Is this 
meaning is unlimited space or is there is any space left?


I tried “hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop”  --  Not sure 
100G is correct or not?  How much I need to set and how to calculate this?

After setting 100G , I get the below output  for “hadoop fs -count -q 
/var/local/hadoop”

none inf107374182400104408308039   73  286  
29777 hdfs://nnode:54310/var/local/hadoop


I have to wait to see whether 100G is going to give me an exception or not….


--


2014-05-22 10:48:43,585 ERROR org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException: 
/var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
 (No space left on device)
2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: Error 
initializing attempt_201405211712_0625_r_01_2:
java.io.FileNotFoundException: 
/var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
 (No space left on device)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.(FileOutputStream.java:221)
at java.io.FileOutputStream.(FileOutputStream.java:171)
at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
at 
org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
at 
org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
at 
org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
at 
org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at 
org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
at 
org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
at java.lang.Thread.run(Thread.java

Re: HDFS Quota Error

2014-05-22 Thread Nitin Pawar
That means there is some process (or several) creating tons of small
files and leaving them there when the work is completed.

To free up inode space you will need to delete the files.
I do not think there is any other way.

Check in your /tmp folder, how many files are there and if any process is
leaving tmp files behind.


On Thu, May 22, 2014 at 2:54 PM, Natarajan, Prabakaran 1. (NSN -
IN/Bangalore)  wrote:

>  Just noted that inode is 100%.  Any better solutions to solve this?
>
>
>
> *Thanks and Regards*
>
> Prabakaran.N  aka NP
>
> nsn, Bangalore
>
> *When "I" is replaced by "We" - even Illness becomes "Wellness"*
>
>
>
>
>
> *From:* ext Natarajan, Prabakaran 1. (NSN - IN/Bangalore) [mailto:
> prabakaran.1.natara...@nsn.com]
> *Sent:* Thursday, May 22, 2014 2:37 PM
> *To:* user@hadoop.apache.org
> *Subject:* RE: HDFS Quota Error
>
>
>
> Thanks for your reply.  But all the datanode disks have more than 50% space
> empty.
>
>
>
> *Thanks and Regards*
>
> Prabakaran.N  aka NP
>
> nsn, Bangalore
>
> *When "I" is replaced by "We" - even Illness becomes "Wellness"*
>
>
>
>
>
> *From:* ext Nitin Pawar 
> [mailto:nitinpawar...@gmail.com]
>
> *Sent:* Thursday, May 22, 2014 12:56 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: HDFS Quota Error
>
>
>
> no space left on device can also mean that one of your datanode disk is
> full.
>
>
>
> Can you check disk used by each datanode.
>
>
>
> May be you will need to rebalance your replication so that some space is
> made free on this datanode.
>
>
>
> On Thu, May 22, 2014 at 12:34 PM, Natarajan, Prabakaran 1. (NSN -
> IN/Bangalore)  wrote:
>
> Hi
>
>
>
> When I run a query in Hive, I get below exception.  I noticed the error
> “No space left on device”.
>
>
>
> Then I did “hadoop fs -count -q /var/local/hadoop” – which gave below
> output
>
>
>
> none infnone inf   69
> 275  288034318 hdfs://nnode:54310/var/local/hadoop
>
>
>
> Why I am getting none and inf for space and remaining space quota?  Is
> this meaning is unlimited space or is there is any space left?
>
>
>
>
>
> I tried “hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop”  --  Not
> sure 100G is correct or not?  How much I need to set and how to calculate
> this?
>
>
>
> After setting 100G , I get the below output  for “hadoop fs -count -q
> /var/local/hadoop”
>
>
>
> none inf107374182400104408308039   73
> 286  29777 hdfs://nnode:54310/var/local/hadoop
>
>
>
>
>
> I have to wait to see whether 100G is going to give me an exception or
> not….
>
>
>
>
>
> --
>
>
>
>
>
> 2014-05-22 10:48:43,585 ERROR
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hadoop cause:java.io.FileNotFoundException:
> /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
> (No space left on device)
>
> 2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: Error
> initializing attempt_201405211712_0625_r_01_2:
>
> java.io.FileNotFoundException:
> /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
> *(No space left on device)*
>
> at java.io.FileOutputStream.open(Native Method)
>
> at java.io.FileOutputStream.(FileOutputStream.java:221)
>
> at java.io.FileOutputStream.(FileOutputStream.java:171)
>
> at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
>
> at
> org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
>
> at
> org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
>
> at
> org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
>
> at
> org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
>
> at
> org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
>
> at java.security.AccessController.doPrivileged(Native Method)
>
> at javax.security.auth.Subject.doAs(Subject.java:415)
>
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
>
> at
> org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
>
> at
> org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
>
> at
> org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
>
> at java.lang.Thread.run(Thread.java:744)
>
>
>
>
>
> *Thanks and Regards*
>
> Prabakaran.N  aka NP
>
> nsn, Bangalore
>
> *When "I" is replaced by "We" - even Illness becomes "Wellness"*
>
>
>
>
>
>
>
>
>
>
>
>
>
> --
> Nitin Pawar
>



-- 
Nitin Pawar


RE: HDFS Quota Error

2014-05-22 Thread Natarajan, Prabakaran 1. (NSN - IN/Bangalore)
Just noted that inode usage is 100%.  Any better solutions to solve this?

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Natarajan, Prabakaran 1. (NSN - IN/Bangalore) 
[mailto:prabakaran.1.natara...@nsn.com]
Sent: Thursday, May 22, 2014 2:37 PM
To: user@hadoop.apache.org
Subject: RE: HDFS Quota Error

Thanks for your reply.  But all the datanode disks have more than 50% space empty.

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Nitin Pawar [mailto:nitinpawar...@gmail.com]
Sent: Thursday, May 22, 2014 12:56 PM
To: user@hadoop.apache.org
Subject: Re: HDFS Quota Error

no space left on device can also mean that one of your datanode disk is full.

Can you check disk used by each datanode.

May be you will need to rebalance your replication so that some space is made 
free on this datanode.

On Thu, May 22, 2014 at 12:34 PM, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) 
mailto:prabakaran.1.natara...@nsn.com>> wrote:
Hi

When I run a query in Hive, I get below exception.  I noticed the error “No 
space left on device”.

Then I did “hadoop fs -count -q /var/local/hadoop” – which gave below output

none infnone inf   69  275  
288034318 hdfs://nnode:54310/var/local/hadoop

Why am I getting none and inf for the space and remaining space quota?  Does this
mean unlimited space, or is there any space left?


I tried “hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop”  --  Not sure 
100G is correct or not?  How much I need to set and how to calculate this?

After setting 100G , I get the below output  for “hadoop fs -count -q 
/var/local/hadoop”

none inf107374182400104408308039   73  286  
29777 hdfs://nnode:54310/var/local/hadoop


I have to wait to see whether 100G is going to give me an exception or not….


--


2014-05-22 10:48:43,585 ERROR org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException: 
/var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
 (No space left on device)
2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: Error 
initializing attempt_201405211712_0625_r_01_2:
java.io.FileNotFoundException: 
/var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
 (No space left on device)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.(FileOutputStream.java:221)
at java.io.FileOutputStream.(FileOutputStream.java:171)
at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
at 
org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
at 
org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
at 
org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
at 
org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at 
org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
at 
org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
at java.lang.Thread.run(Thread.java:744)


Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"







--
Nitin Pawar


RE: HDFS Quota Error

2014-05-22 Thread Natarajan, Prabakaran 1. (NSN - IN/Bangalore)
Thanks for your reply.  But all the datanode disks have more than 50% space empty.

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Nitin Pawar [mailto:nitinpawar...@gmail.com]
Sent: Thursday, May 22, 2014 12:56 PM
To: user@hadoop.apache.org
Subject: Re: HDFS Quota Error

no space left on device can also mean that one of your datanode disk is full.

Can you check disk used by each datanode.

May be you will need to rebalance your replication so that some space is made 
free on this datanode.

On Thu, May 22, 2014 at 12:34 PM, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) 
mailto:prabakaran.1.natara...@nsn.com>> wrote:
Hi

When I run a query in Hive, I get below exception.  I noticed the error “No 
space left on device”.

Then I did “hadoop fs -count -q /var/local/hadoop” – which gave below output

none infnone inf   69  275  
288034318 hdfs://nnode:54310/var/local/hadoop

Why am I getting none and inf for the space and remaining space quota?  Does this
mean unlimited space, or is there any space left?


I tried “hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop”  --  Not sure 
100G is correct or not?  How much I need to set and how to calculate this?

After setting 100G , I get the below output  for “hadoop fs -count -q 
/var/local/hadoop”

none inf107374182400104408308039   73  286  
29777 hdfs://nnode:54310/var/local/hadoop


I have to wait to see whether 100G is going to give me an exception or not….


--


2014-05-22 10:48:43,585 ERROR org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException: 
/var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
 (No space left on device)
2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: Error 
initializing attempt_201405211712_0625_r_01_2:
java.io.FileNotFoundException: 
/var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
 (No space left on device)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.(FileOutputStream.java:221)
at java.io.FileOutputStream.(FileOutputStream.java:171)
at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
at 
org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
at 
org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
at 
org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
at 
org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at 
org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
at 
org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
at java.lang.Thread.run(Thread.java:744)


Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"







--
Nitin Pawar


RE: HDFS Quota Error

2014-05-22 Thread Natarajan, Prabakaran 1. (NSN - IN/Bangalore)
Thanks for your reply.  We have more than 50% disk space.

Just FYI, this is not a physical machine. It's a VMware virtual machine.

Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"


From: ext Aitor Perez Cedres [mailto:ape...@pragsis.com]
Sent: Thursday, May 22, 2014 1:04 PM
To: user@hadoop.apache.org
Subject: Re: HDFS Quota Error


Maybe you are out of space in a local disk? That location[1] looks like the 
local dir where MR places some intermediate files. Can you check the output of 
df -h on a shell?


[1] /var/local/hadoop/cache/mapred/local
On 22/05/14 09:04, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) wrote:
Hi

When I run a query in Hive, I get below exception.  I noticed the error "No 
space left on device".

Then I did "hadoop fs -count -q /var/local/hadoop" - which gave below output

none infnone inf   69  275  
288034318 hdfs://nnode:54310/var/local/hadoop

Why am I getting none and inf for the space and remaining space quota?  Does this
mean unlimited space, or is there any space left?


I tried "hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop"  --  Not sure 
100G is correct or not?  How much I need to set and how to calculate this?

After setting 100G , I get the below output  for "hadoop fs -count -q 
/var/local/hadoop"

none inf107374182400104408308039   73  286  
29777 hdfs://nnode:54310/var/local/hadoop


I have to wait to see whether 100G is going to give me an exception or not


--


2014-05-22 10:48:43,585 ERROR org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException: 
/var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
 (No space left on device)
2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: Error 
initializing attempt_201405211712_0625_r_01_2:
java.io.FileNotFoundException: 
/var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
 (No space left on device)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.(FileOutputStream.java:221)
at java.io.FileOutputStream.(FileOutputStream.java:171)
at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
at 
org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
at 
org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
at 
org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
at 
org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at 
org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
at 
org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
at java.lang.Thread.run(Thread.java:744)


Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"





--
Aitor Pérez
Big Data System Engineer

Telf.: +34 917 680 490
Fax: +34 913 833 301
C/Manuel Tovar, 49-53 - 28034 Madrid - Spain

http://www.bidoop.es


Re: HDFS Quota Error

2014-05-22 Thread Aitor Perez Cedres


Maybe you are out of space in a local disk? That location[1] looks like 
the local dir where MR places some intermediate files. Can you check the 
output of df -h on a shell?



[1] /var/local/hadoop/cache/mapred/local

On 22/05/14 09:04, Natarajan, Prabakaran 1. (NSN - IN/Bangalore) wrote:

Hi
When I run a query in Hive, I get below exception.  I noticed the 
error "No space left on device".
Then I did "hadoop fs -count -q /var/local/hadoop" -- which gave below 
output
none infnone inf   69  
275  288034318 hdfs://nnode:54310/var/local/hadoop
Why am I getting none and inf for the space and remaining space quota?  Does
this mean unlimited space, or is there any space left?
I tried "hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop"  --  
Not sure 100G is correct or not? How much I need to set and how to 
calculate this?
After setting 100G , I get the below output  for "hadoop fs -count -q 
/var/local/hadoop"
none inf107374182400 104408308039   
73  286  29777 hdfs://nnode:54310/var/local/hadoop
I have to wait to see whether 100G is going to give me an exception or 
not

--
2014-05-22 10:48:43,585 ERROR 
org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:hadoop 
cause:java.io.FileNotFoundException: 
/var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class 
(No space left on device)
2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: 
Error initializing attempt_201405211712_0625_r_01_2:
java.io.FileNotFoundException: 
/var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class 
*(No space left on device)*

at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.(FileOutputStream.java:221)
at java.io.FileOutputStream.(FileOutputStream.java:171)
at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
at 
org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
at 
org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
at 
org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
at 
org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
at 
org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)

at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at 
org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
at 
org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
at 
org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)

at java.lang.Thread.run(Thread.java:744)
*Thanks and Regards*
Prabakaran.N  aka NP
nsn, Bangalore
*/When "I" is replaced by "We" - even Illness becomes "Wellness"/*


--
*Aitor Pérez*
/Big Data System Engineer/

Telf.: +34 917 680 490
Fax: +34 913 833 301
C/Manuel Tovar, 49-53 - 28034 Madrid - Spain

_http://www.bidoop.es_



Re: HDFS Quota Error

2014-05-22 Thread Nitin Pawar
"No space left on device" can also mean that one of your datanode disks is
full.

Can you check disk used by each datanode.

Maybe you will need to rebalance your replication so that some space is
freed on this datanode.


On Thu, May 22, 2014 at 12:34 PM, Natarajan, Prabakaran 1. (NSN -
IN/Bangalore)  wrote:

>  Hi
>
> When I run a query in Hive, I get below exception.  I noticed the error
> “No space left on device”.
>
> Then I did “hadoop fs -count -q /var/local/hadoop” – which gave below
> output
>
> none infnone inf   69
> 275  288034318 hdfs://nnode:54310/var/local/hadoop
>
> Why I am getting none and inf for space and remaining space quota?  Is
> this meaning is unlimited space or is there is any space left?
>
>
> I tried “hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop”  --  Not
> sure 100G is correct or not?  How much I need to set and how to calculate
> this?
>
> After setting 100G , I get the below output  for “hadoop fs -count -q
> /var/local/hadoop”
>
> none inf107374182400104408308039   73
> 286  29777 hdfs://nnode:54310/var/local/hadoop
>
>
> I have to wait to see whether 100G is going to give me an exception or
> not….
>
>
> --
>
>
> 2014-05-22 10:48:43,585 ERROR
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
> as:hadoop cause:java.io.FileNotFoundException:
> /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
> (No space left on device)
> 2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: Error
> initializing attempt_201405211712_0625_r_01_2:
> java.io.FileNotFoundException:
> /var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
> (No space left on device)
> at java.io.FileOutputStream.open(Native Method)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
> at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
> at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
> at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
> at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
> at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
> at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
> at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
> at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
> at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
> at java.lang.Thread.run(Thread.java:744)
>
>
> Thanks and Regards
> Prabakaran.N  aka NP
> nsn, Bangalore
> When "I" is replaced by "We" - even Illness becomes "Wellness"
>
>
>
>
>



-- 
Nitin Pawar


Re: Unable to connect Hive using JDBC program

2014-05-22 Thread sunitha penakalapati
Hi,
 
Please try this out:
 
 
To start Hive on a particular port ->
[training@localhost hive]$ hive --service hiveserver 
Starting Hive Thrift Server
Hive history 
file=/tmp/training/hive_job_log_training_201405212357_1347630673.txt
OK
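
If the Thrift server does need to sit on a specific port, the usual ways to pin it for HiveServer1 look roughly like this (a sketch only; the exact option depends on the Hive release, and 10000 is just the conventional default port):

# pass the port on the command line ...
hive --service hiveserver -p 10000

# ... or set it through the environment before starting the service
export HIVE_PORT=10000
hive --service hiveserver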

 
Sample Java Code to connect to hive --->
 
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveConn {
 // JDBC driver class for HiveServer1 (the Thrift "hiveserver" service)
 private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";

 public static void test() {
  try {
   Class.forName(driverName);
   // the port was dropped in the mail archive; use jdbc:hive://<host>:<port>/default
   // with the port your hiveserver is listening on
   Connection con =
     DriverManager.getConnection("jdbc:hive://localhost:/default", "", "");

   Statement stmt = con.createStatement();

   // show tables
   String sql = "show tables";
   System.out.println("Running: " + sql);
   ResultSet res = stmt.executeQuery(sql);
   while (res.next()) {
    System.out.println("Table Name: " + res.getString(1));
   }
  } catch (Exception e) {
   System.out.println(e.getMessage());
  }
 }

 public static void main(String[] args) {
  test();
 }
}
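
Compiling the class needs only the JDK; the Hive and Hadoop client jars are needed on the classpath at run time. The layout below is only illustrative (jar locations differ between releases and distributions such as CDH):

javac HiveConn.java
# quoting keeps the shell from expanding the wildcards; the JVM expands them itself
java -cp ".:$HIVE_HOME/lib/*:$HADOOP_HOME/*:$HADOOP_HOME/lib/*" HiveConn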
 
Regards,
Sunitha.
 
 
 

From: Sanjeevv Sriram 
To: user@hadoop.apache.org 
Sent: Wednesday, May 21, 2014 11:44 PM
Subject: Re: Unable to connect Hive using JDBC program





When I try to start the hive server without any port, I am getting the below exception

[cloudera@localhost lib]$ hive --service hiveserver
Starting Hive Thrift Server
14/05/21 21:20:26 INFO Configuration.deprecation: mapred.input.dir.recursive is 
deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/05/21 21:20:26 INFO Configuration.deprecation: mapred.max.split.size is 
deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/05/21 21:20:26 INFO Configuration.deprecation: mapred.min.split.size is 
deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/05/21 21:20:26 INFO Configuration.deprecation: 
mapred.min.split.size.per.rack is deprecated. Instead, use 
mapreduce.input.fileinputformat.split.minsize.per.rack
14/05/21 21:20:26 INFO Configuration.deprecation: 
mapred.min.split.size.per.node is deprecated. Instead, use 
mapreduce.input.fileinputformat.split.minsize.per.node
14/05/21 21:20:26 INFO Configuration.deprecation: mapred.reduce.tasks is 
deprecated. Instead, use mapreduce.job.reduces
14/05/21 21:20:26 INFO Configuration.deprecation: 
mapred.reduce.tasks.speculative.execution is deprecated. Instead, use 
mapreduce.reduce.speculative
14/05/21 21:20:26 WARN conf.HiveConf: DEPRECATED: Configuration property 
hive.metastore.local no longer has any effect. Make sure to provide a valid 
value for hive.metastore.uris if you are connecting to a remote metastore.
org.apache.thrift.transport.TTransportException: Could not create ServerSocket
on address 0.0.0.0/0.0.0.0:1.
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
    at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:75)
    at org.apache.hadoop.hive.metastore.TServerSocketKeepAlive.<init>(TServerSocketKeepAlive.java:34)
    at org.apache.hadoop.hive.service.HiveServer.main(HiveServer.java:674)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

I am attaching hive-site.xml. 

I am using CDH 5 version.

Thanks,
Sanjeevv 
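
A common reason for "Could not create ServerSocket" is that the port is already bound, for example by a packaged hive-server service that the distribution starts automatically. A quick check on the node (a sketch, assuming the default HiveServer port 10000 and a Linux box):

# is something already listening on the HiveServer port?
sudo netstat -tlnp | grep 10000

# is another HiveServer instance already running?
# ([h]iveserver keeps grep from matching its own process line)
ps aux | grep -i [h]iveserver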



On Wed, May 21, 2014 at 7:40 AM, harish tangella  
wrote:

Hi,
>
>  Close the hive terminal and start a new one without giving a port number. The
>command is
>
>hive --service hiveserver  -- don't give any port number; hope it will work
>
>
>On Mon, May 19, 2014 at 11:27 PM, Sanjeevv Sriram  wrote:
>
>I tried with different ports... still I am getting the same issue
>>
>>
>>
>>On Mon, May 19, 2014 at 8:02 AM, harish tangella  
>>wrote:
>>
>>Hi,
>>>
>>>
>>>Start Hive server on a different port number,and try to connect using JDBC 
>>>connection
>>>
>>>
>>>On Mon, May 19, 2014 at 11:06 AM, Shengjun Xin  wrote:
>>>
>>>Can you use command line to connect hive?




On Mon, May 19, 2014 at 4:59 AM, Sanjeevv Sriram  
wrote:

Hi,
>
>Please help me I am unable to connect Hive using JDBC program.
>
>I am getting below exception:
>
>Exception in thread "main" java.sql.SQLException:
>org.apache.thrift.transport.TTransportException: java.net.SocketException:
>Connection reset
>    at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:196)
>    at org.apache.hadoop.hive.jdbc.HiveStatement.execute(HiveStatement.java:132)
>    at org.apache.hadoop.hive.jdbc.HiveConnection.configureConnection(HiveConnection.java:132)
>    at org.apache.hadoop.hive.jdbc.HiveConnection.<init>(Hiv

HDFS Quota Error

2014-05-22 Thread Natarajan, Prabakaran 1. (NSN - IN/Bangalore)
Hi

When I run a query in Hive, I get the below exception.  I noticed the error "No
space left on device".

Then I did "hadoop fs -count -q /var/local/hadoop" - which gave below output

none  inf  none  inf  69  275  288034318  hdfs://nnode:54310/var/local/hadoop

Why am I getting none and inf for the space quota and remaining space quota? Does this
mean unlimited space, or is there any space left?


I tried "hadoop dfsadmin -setSpaceQuota 100G /var/local/hadoop"  --  Not sure 
100G is correct or not?  How much I need to set and how to calculate this?

After setting 100G , I get the below output  for "hadoop fs -count -q 
/var/local/hadoop"

none  inf  107374182400  104408308039  73  286  29777  hdfs://nnode:54310/var/local/hadoop


I have to wait to see whether 100G is going to give me an exception or not
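
For reference: "none" and "inf" in the first two columns just mean that no name quota is set, and HDFS space quotas are charged in raw bytes, so every replica counts against them. A rough sizing sketch (assuming the default replication factor of 3; the 100 GB target is only an example):

# hadoop fs -count -q column order:
#   QUOTA  REMAINING_QUOTA  SPACE_QUOTA  REMAINING_SPACE_QUOTA  DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME

# to hold ~100 GB of data at replication factor 3, the quota needs ~300 GB of raw space
hadoop dfsadmin -setSpaceQuota 300g /var/local/hadoop
hadoop fs -count -q /var/local/hadoop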


--


2014-05-22 10:48:43,585 ERROR org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:hadoop cause:java.io.FileNotFoundException: 
/var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
 (No space left on device)
2014-05-22 10:48:43,585 WARN org.apache.hadoop.mapred.TaskTracker: Error 
initializing attempt_201405211712_0625_r_01_2:
java.io.FileNotFoundException: 
/var/local/hadoop/cache/mapred/local/taskTracker/hadoop/jobcache/job_201405211712_0625/jars/org/iq80/snappy/SnappyCompressor.class
 (No space left on device)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
at java.io.FileOutputStream.<init>(FileOutputStream.java:171)
at org.apache.hadoop.util.RunJar.unJar(RunJar.java:51)
at org.apache.hadoop.mapred.JobLocalizer.localizeJobJarFile(JobLocalizer.java:277)
at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:377)
at org.apache.hadoop.mapred.JobLocalizer.localizeJobFiles(JobLocalizer.java:367)
at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:202)
at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1228)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1203)
at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1118)
at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2430)
at java.lang.Thread.run(Thread.java:744)


Thanks and Regards
Prabakaran.N  aka NP
nsn, Bangalore
When "I" is replaced by "We" - even Illness becomes "Wellness"