Re: how to build .jar file for fair scheduler in hadoop 1.2.1

2014-05-02 Thread Ted Yu
Go to the src/contrib/fairscheduler folder and run:

ant jar

You will find the jar file under build/contrib/fairscheduler/ of your
workspace.
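
For example, a minimal session, assuming the source tree sits at
/usr/local/hadoop as in the question below (per the Hadoop 1.x fair
scheduler docs the result is typically named hadoop-<version>-fairscheduler.jar):

cd /usr/local/hadoop/src/contrib/fairscheduler
ant jar
ls ../../../build/contrib/fairscheduler/hadoop-*-fairscheduler.jar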

Cheers


On Fri, May 2, 2014 at 8:43 PM, Mahesh Khandewal wrote:

>
> I have Hadoop 1.2.1 installed on my single-node system. The path to Hadoop
> is /usr/local/hadoop. How do I create a .jar file of the fair scheduler? From
> which directory do I need to build the .jar file, and how do I call the ant
> command? I am using ant version 1.7.
>


how to build .jar file for fair scheduler in hadoop 1.2.1

2014-05-02 Thread Mahesh Khandewal

I have Hadoop 1.2.1 installed on my single-node system. The path to Hadoop
is /usr/local/hadoop. How do I create a .jar file of the fair scheduler? From
which directory do I need to build the .jar file, and how do I call the ant
command? I am using ant version 1.7.


Re: Random Exception

2014-05-02 Thread S.L
On Fri, May 2, 2014 at 12:20 PM, S.L  wrote:

> I am using Hadoop 2.3. The problem is that my disk runs out of space
> (80GB) and then I reboot my machine, which causes my /tmp data to be
> deleted and frees up space. I then resubmit the job, assuming that since
> the namenode and datanode data are not stored in /tmp, everything should be
> OK. (I have set the namenode and datanode storage to a different location
> than /tmp.)
>
> By the way, this is a single-node (pseudo-distributed) cluster setup.
>
>
> On Fri, May 2, 2014 at 9:02 AM, Marcos Ortiz  wrote:
>
>> It seems that your Hadoop data directory is broken or your disk has
>> problems.
>> Which version of Hadoop are you using?
>>
>> On Friday, May 02, 2014 08:43:44 AM S.L wrote:
>> > Hi All,
>> >
>> > I get this exception after I resubmit my failed MapReduce job. Can
>> > someone please let me know what this exception means?
>> >
>> > 14/05/02 01:28:25 INFO mapreduce.Job: Task Id : attempt_1398989569957_0021_m_00_0, Status : FAILED
>> > Error: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for attempt_1398989569957_0021_m_00_0/intermediate.26
>> > at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:402)
>> > at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
>> > at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
>> > at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:711)
>> > at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:579)
>> > at org.apache.hadoop.mapred.Merger.merge(Merger.java:150)
>> > at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:1870)
>> > at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1482)
>> > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:437)
>> > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
>> > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>> > at java.security.AccessController.doPrivileged(Native Method)
>> > at javax.security.auth.Subject.doAs(Unknown Source)
>> > at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
>>
>>
>
>


Re: Wordcount file cannot be located

2014-05-02 Thread Hardik Pandya
Please add the line below to your config. For some reason the hadoop-common
jar is being overwritten. Please share your feedback. Thanks.

config.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());



On Fri, May 2, 2014 at 12:08 AM, Alex Lee  wrote:

> I tried to add the code, but it still doesn't seem to work.
> http://postimg.org/image/6c1dat3jx/
>
> 2014-05-02 11:56:06,780 WARN  [main] util.NativeCodeLoader
> (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library
> for your platform... using builtin-java classes where applicable
> java.io.IOException: No FileSystem for scheme: hdfs
>
> Also, the Eclipse DFS location can reach /tmp/ but cannot enter /user/.
>
> Any suggestion, thanks.
>
> alex
>
> --
> From: unmeshab...@gmail.com
> Date: Fri, 2 May 2014 08:43:26 +0530
> Subject: Re: Wordcount file cannot be located
> To: user@hadoop.apache.org
>
>
> Try this along with your MapReduce source code
>
> Configuration config = new Configuration();
> config.set("fs.defaultFS", "hdfs://IP:port/");
> FileSystem dfs = FileSystem.get(config);
> Path path = new Path("/tmp/in");
>
> Let me know your thoughts.
>
>
> --
> *Thanks & Regards *
>
>
> *Unmesha Sreeveni U.B*
> *Hadoop, Bigdata Developer*
> *Center for Cyber Security | Amrita Vishwa Vidyapeetham*
> http://www.unmeshasreeveni.blogspot.in/
>
>
>


Re: Which database should be used

2014-05-02 Thread Marcos Ortiz

On Friday, May 02, 2014 04:21:58 PM Alex Lee wrote:
> There are many databases, such as HBase, Hive, MongoDB, etc. I need to
> choose one to store big-volume stream data from sensors.
>
> Will HBase be good? Thanks.
HBase could be a good ally for this case. You should check out the OpenTSDB
project, which tackles a problem similar to yours:
http://opentsdb.net/

You should also check the HBaseCon presentations and videos to see what you
could use for your case.
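
To make the HBase option concrete, here is a minimal sketch of writing one
sensor reading with the 0.9x-era HBase client (the table name "sensor", the
column family "d", and the row-key scheme are illustrative assumptions, not
from the original mail):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class SensorWriter {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Assumes a pre-created table "sensor" with column family "d".
        HTable table = new HTable(conf, "sensor");
        // Row key = sensor id + timestamp, so one sensor's readings
        // cluster together and sort by time (OpenTSDB uses a similar idea).
        long ts = System.currentTimeMillis();
        byte[] rowKey = Bytes.add(Bytes.toBytes("sensor-42:"), Bytes.toBytes(ts));
        Put put = new Put(rowKey);
        put.add(Bytes.toBytes("d"), Bytes.toBytes("value"), Bytes.toBytes(23.5d));
        table.put(put);
        table.close();
    }
}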
 




Re: Random Exception

2014-05-02 Thread Marcos Ortiz
It seems that your Hadoop data directory is broken or your disk has problems.
Which version of Hadoop are you using?

On Friday, May 02, 2014 08:43:44 AM S.L wrote:
> Hi All,
> 
> I get this exception after I resubmit my failed MapReduce job. Can someone
> please let me know what this exception means?
> 
> 14/05/02 01:28:25 INFO mapreduce.Job: Task Id : attempt_1398989569957_0021_m_00_0, Status : FAILED
> Error: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for attempt_1398989569957_0021_m_00_0/intermediate.26
> at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:402)
> at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
> at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
> at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:711)
> at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:579)
> at org.apache.hadoop.mapred.Merger.merge(Merger.java:150)
> at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:1870)
> at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1482)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:437)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Unknown Source)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)



Random Exception

2014-05-02 Thread S.L
Hi All,

I get this exception after I resubmit my failed MapReduce job. Can someone
please let me know what this exception means?

14/05/02 01:28:25 INFO mapreduce.Job: Task Id : attempt_1398989569957_0021_m_00_0, Status : FAILED
Error: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for attempt_1398989569957_0021_m_00_0/intermediate.26
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:402)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:711)
at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:579)
at org.apache.hadoop.mapred.Merger.merge(Merger.java:150)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:1870)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1482)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:437)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Unknown Source)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
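
For what it's worth, this DiskErrorException is raised when LocalDirAllocator
cannot find any configured local directory with enough free space for the map
task's intermediate spill files. In Hadoop 2.x these directories default to
${hadoop.tmp.dir}/nm-local-dir under /tmp, so one common fix is to point them
at a larger disk in yarn-site.xml (the path below is only an example, not
from the original thread):

<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data/yarn/local</value>
</property>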


Re: Which database should be used

2014-05-02 Thread unmesha sreeveni
On Fri, May 2, 2014 at 1:51 PM, Alex Lee  wrote:

> hive


HBase is better.



-- 
*Thanks & Regards *


*Unmesha Sreeveni U.B*
*Hadoop, Bigdata Developer*
*Center for Cyber Security | Amrita Vishwa Vidyapeetham*
http://www.unmeshasreeveni.blogspot.in/


Which database should be used

2014-05-02 Thread Alex Lee
There are many databases, such as HBase, Hive, MongoDB, etc. I need to choose
one to store big-volume stream data from sensors.

Will HBase be good? Thanks.