Removing queues

2013-08-25 Thread Siddharth Tiwari
What's the easiest way to remove queues from Hadoop without restarting services? Why can't we just refreshQueues? Sent from my iPhone
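For reference, a sketch of how a queue refresh is typically done without a restart (Hadoop 1.x; in YARN the equivalent is `yarn rmadmin -refreshQueues` — command names here reflect that era and should be checked against your version):

```shell
# After editing the queue definitions (mapred-queues.xml in Hadoop 1.x),
# reload them on the JobTracker without restarting services:
hadoop mradmin -refreshQueues
```

Note that removing a queue that still has jobs queued or running may have caveats beyond a simple refresh.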

Re: running map tasks in remote node

2013-08-25 Thread rab ra
Dear Yong, Thanks for your elaborate answer. Your answer really makes sense and I am ending up with something close to it, except for the shared storage. In my use case, I am not allowed to use any shared storage system. The reason being that the slave nodes may not be safe for hosting sensitive data. (Because, they

Re: running map tasks in remote node

2013-08-25 Thread Harsh J
In multi-node mode, MR requires a distributed filesystem (such as HDFS) to be able to run. On Sun, Aug 25, 2013 at 7:59 PM, rab ra wrote: > Dear Yong, > > Thanks for your elaborate answer. Your answer really makes sense and I am > ending up with something close to it, except for the shared storage. > > In my use
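A minimal sketch of the configuration Harsh is referring to — pointing the cluster's default filesystem at HDFS (the property is `fs.default.name` in Hadoop 1.x, `fs.defaultFS` in 2.x; the hostname below is illustrative):

```xml
<!-- core-site.xml: every node resolves the default filesystem to HDFS,
     so map tasks on any slave can read their input splits -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>
```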

Provide step-by-step details for configuring the MapReduce plugin (hadoop.job.ugi not found)

2013-08-25 Thread Bhaskar T
*SAIBABA - SAIBABA* http://bhaskar22.ucoz.com/ Dear sir, this is Bhaskar T, working as *Asst Prof in the computer engg dept of Sanjivani engg college near Shirdi*. Presently I am teaching this lab to ME (PG) computer students. I followed through your Hadoop tutorial, but I have one pr

Fwd: Provide step-by-step details for configuring the MapReduce plugin (hadoop.job.ugi not found)

2013-08-25 Thread Bhaskar T
*SAIBABA - SAIBABA* http://bhaskar22.ucoz.com/ Dear sir, this is Bhaskar T, working as *Asst Prof in the computer engg dept of Sanjivani engg college near Shirdi*. Presently I am teaching this lab to ME (PG) computer students. I followed through the Yahoo Hadoop tutorial, but I have one probl

Re: Writable readFields question

2013-08-25 Thread Ken Sullivan
That could be a possibility, but ideally we wouldn't have to change how the data is being inserted. The data is originally going into Accumulo tables from an existing C++ system with a JNI wrapper to insert a language-independent serialized blob; the code for that is tested and running, and best case

Re: Writable readFields question

2013-08-25 Thread Abhijit Sarkar
Ken, What about supplemental characters, the major reason for which Hadoop's Writable implementations store the length? On Sun, Aug 25, 2013 at 4:09 PM, Ken Sullivan wrote: > That could be a possibility, but ideally we wouldn't have to change how the > data is being inserted. The data is original
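To illustrate Abhijit's point: a supplemental character occupies two Java chars but four UTF-8 bytes, so a reader cannot infer the byte count from the character count. Storing the byte length up front (as Hadoop's `Text` does) lets `readFields` know exactly how many bytes to consume. A minimal self-contained sketch, not Hadoop's actual implementation:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Sketch of length-prefixed string serialization: the reader recovers the
// exact byte span even when the string contains supplemental characters.
public class LengthPrefixed {
    static void write(DataOutput out, String s) throws IOException {
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        out.writeInt(utf8.length);   // store the BYTE length, not the char count
        out.write(utf8);
    }

    static String read(DataInput in) throws IOException {
        int len = in.readInt();      // how many bytes belong to this field
        byte[] buf = new byte[len];
        in.readFully(buf);
        return new String(buf, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        // Ends with U+1D54A (a supplemental character): 2 Java chars, 4 UTF-8 bytes.
        String s = "pi\uD835\uDD4A";
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        write(new DataOutputStream(bos), s);
        String back = read(new DataInputStream(
                new ByteArrayInputStream(bos.toByteArray())));
        System.out.println(back.equals(s)); // round-trip succeeds: true
    }
}
```

A reader that assumed one byte (or even two bytes) per character would mis-read the stream here, which is why the length prefix matters.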

Re: DICOM Image Processing using Hadoop

2013-08-25 Thread Jens Scheidtmann
Hi Shalish, let me google that for you using "dicom java api". http://www.dcm4che.org/ HTH, Jens

Mapper and Reducer take longer than usual for an HBase table aggregation task

2013-08-25 Thread Pavan Sudheendra
Hi all, My mapper function is processing and aggregating 3 HBase tables' data and writing it to the reducer for further operations. However, all 3 tables have a small number of rows, not in the order of millions. Still my map task completes in 16:07:29,632 INFO JobClient:1435 - Running job

Re: Mapper and Reducer take longer than usual for an HBase table aggregation task

2013-08-25 Thread Jens Scheidtmann
Hi Pavan, > 2. ) If my table is in the order of millions, the number of mappers is > increased to 5.. How does Hadoop know how many mappers to run for a > specific job? > > The number of input splits determines the number of mappers. Usually (in the default case) your source is split into hdfs bl
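Jens's point can be sketched numerically. In the default FileInputFormat-style arithmetic, the split size is clamped between the configured min/max and the block size, and a file yields roughly one mapper per split (the real code has extra slack handling, so treat this as an approximation):

```java
// Sketch of the default split arithmetic:
// splitSize = max(minSize, min(maxSize, blockSize)); mappers ≈ fileSize / splitSize.
public class SplitMath {
    static long splitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    static long numSplits(long fileSize, long splitSize) {
        return (fileSize + splitSize - 1) / splitSize; // ceiling division
    }

    public static void main(String[] args) {
        long block = 64L << 20;                               // 64 MB HDFS block
        long size = splitSize(block, 1L, Long.MAX_VALUE);     // defaults: split = block
        System.out.println(numSplits(300L << 20, size));      // a 300 MB file -> 5 mappers
    }
}
```

This is why a table with few rows (small input) gets few mappers, while a table in the order of millions of rows spans more splits and therefore more map tasks.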

Re: Mapper and Reducer take longer than usual for an HBase table aggregation task

2013-08-25 Thread Ted Yu
Pavan: Did you use TableInputFormat or its variant ? If so, take a look at TableSplit and how it is used in TableInputFormatBase#getSplits(). Cheers On Sun, Aug 25, 2013 at 2:36 PM, Jens Scheidtmann < jens.scheidtm...@gmail.com> wrote: > Hi Pavan, > > >> 2. ) If my table is in the order of mill

Re: Mapper and Reducer take longer than usual for an HBase table aggregation task

2013-08-25 Thread 李洪忠
You need to release your map code here so we can analyze the question. Generally, when you map/reduce over HBase, a scanner with filter(s) is used, so the mapper count is the HBase region count of your table. As for the reason why your reduce is so slow, I guess you have a disastrous join on the three tables, which c
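The one-mapper-per-region behavior described above can be illustrated with a self-contained sketch (this is not the real `TableInputFormatBase`; region start keys and lookup logic are simplified for illustration):

```java
import java.util.Arrays;
import java.util.List;

// Illustration: with one split per region, the mapper count equals the
// region count, so a small table held in a single region gets exactly one
// mapper regardless of how many rows it contains.
public class RegionSplits {
    // startKeys must be sorted; the first region starts at the empty key.
    static int regionFor(String rowKey, List<String> startKeys) {
        int idx = 0;
        for (int i = 0; i < startKeys.size(); i++) {
            if (rowKey.compareTo(startKeys.get(i)) >= 0) {
                idx = i; // last region whose start key is <= rowKey
            }
        }
        return idx;
    }

    public static void main(String[] args) {
        List<String> startKeys = Arrays.asList("", "m"); // 2 regions
        System.out.println(startKeys.size());            // -> 2 splits, 2 map tasks
        System.out.println(regionFor("apple", startKeys)); // -> 0
        System.out.println(regionFor("zebra", startKeys)); // -> 1
    }
}
```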

Re: DICOM Image Processing using Hadoop

2013-08-25 Thread JAGANADH G
Hi Shalish, Refer the report http://liu.diva-portal.org/smash/get/diva2:564782/FULLTEXT01.pdf . Some hints are available there . Best regards Jagan On Fri, Aug 23, 2013 at 3:01 PM, Shalish VJ wrote: > Hi, > > Is it possible to process DICOM images using hadoop. > Please help me with an examp

Re: Mapper and Reducer take longer than usual for an HBase table aggregation task

2013-08-25 Thread anil gupta
Hi Pavan, Standalone cluster? How many RS are you running? What are you trying to achieve in MR? Have you tried increasing scanner caching? "Slow" is very theoretical unless we know some more details of your setup. ~Anil On Sun, Aug 25, 2013 at 5:52 PM, 李洪忠 wrote: > You need to release your map co
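The scanner-caching knob Anil mentions controls how many rows each scan RPC fetches; the default in this era was very low (1), which makes full-table map jobs RPC-bound. A sketch of the site-wide setting (the value 500 is illustrative, not a recommendation; it can also be set per scan in code):

```xml
<!-- hbase-site.xml: number of rows fetched per scanner RPC.
     Larger values cut round trips at the cost of client memory. -->
<property>
  <name>hbase.client.scanner.caching</name>
  <value>500</value>
</property>
```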

Re: Mapper and Reducer take longer than usual for an HBase table aggregation task

2013-08-25 Thread Pavan Sudheendra
Jens, can I set a smaller value in my application? Is this valid? conf.setInt("mapred.max.split.size", 50); This is our mapred-site.xml: mapred.job.tracker = ip-10-10-100170.eu-east-1.compute.internal:8021, mapred.job.tracker.http.address = 0.0.0.0:50030, mapreduce.

Re: Mapper and Reducer take longer than usual for an HBase table aggregation task

2013-08-25 Thread Pavan Sudheendra
Ted and lhztop, here is a gist of my code: http://pastebin.com/mxY4AqBA Can you suggest a few ways of optimizing it? I know I am re-initializing the conf object in the map function every time it's called; I'll change that. Anil Gupta, 6 Node Cluster, so 6 Region Servers. I am basically trying to do a
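The fix Pavan mentions — building the configuration once per task instead of once per record — is what the Mapper lifecycle's `setup()` hook is for. A self-contained sketch of the call sequence (class and method names are illustrative, not Pavan's actual job or the Hadoop API):

```java
// Sketch: the framework calls setup() once per task, then map() once per
// record. Expensive objects built in setup() are reused across all records.
public class SetupOnceMapper {
    private Object conf;      // stands in for a Hadoop/HBase Configuration
    private int initCount = 0;

    void setup() {            // analogous to Mapper#setup: runs once per task
        conf = new Object();  // expensive construction happens exactly once
        initCount++;
    }

    void map(String row) {
        // per-record work uses the already-built conf; no re-initialization
    }

    int runTask(String[] rows) { // mimics the framework's call sequence
        setup();
        for (String r : rows) {
            map(r);
        }
        return initCount;        // 1, regardless of how many rows were mapped
    }

    public static void main(String[] args) {
        System.out.println(new SetupOnceMapper()
                .runTask(new String[]{"a", "b", "c"})); // -> 1
    }
}
```

Re-creating a `Configuration` (and worse, an HTable connection) inside `map()` multiplies that setup cost by the number of input rows, which alone can explain a job running far longer than its data size warrants.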