What's the easiest way to remove queues from Hadoop without restarting services?
Why can't we just run refreshQueues?
Sent from my iPhone
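For context: with the CapacityScheduler, changes to capacity-scheduler.xml can be picked up at runtime with `yarn rmadmin -refreshQueues` (or `hadoop mradmin -refreshQueues` on MR1), but historically queues could not be outright deleted that way, only stopped. A minimal sketch, assuming the CapacityScheduler and a hypothetical queue named `myqueue`:

```
<!-- capacity-scheduler.xml: stop the queue instead of deleting it -->
<property>
  <name>yarn.scheduler.capacity.root.myqueue.state</name>
  <value>STOPPED</value>
</property>
```

Running `yarn rmadmin -refreshQueues` afterwards applies this without a restart: the stopped queue accepts no new applications and drains the running ones.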
Dear Yong,
Thanks for your elaborate answer. Your answer really makes sense, and I am
ending up with something close to it, except the shared storage.
In my use case, I am not allowed to use any shared storage system. The
reason being that the slave nodes may not be safe for hosting sensitive
data. (Because, they
In multi-node mode, MR requires a distributed filesystem (such as
HDFS) to be able to run.
On Sun, Aug 25, 2013 at 7:59 PM, rab ra wrote:
> Dear Yong,
>
> Thanks for your elaborate answer. Your answer really makes sense, and I am
> ending up with something close to it, except the shared storage.
>
> In my use
--
*SAIBABA - SAIBABA*
http://bhaskar22.ucoz.com/
Dear Sir,
This is Bhaskar T., working as an *Asst. Prof. in the Computer Engineering
Dept. of Sanjivani Engineering College near Shirdi*. Presently I am teaching
this lab to ME (PG) computer students. I followed your Hadoop tutorial, but
I have one pr
That could be possible, but ideally we wouldn't have to change how the
data is being inserted. The data originally goes into Accumulo tables
from an existing C++ system with a JNI wrapper that inserts a language-
independent serialized blob; the code for that is tested and running, and
best case
Ken,
What about supplementary characters, the major reason for which Hadoop's
Writable implementations store the length?
On Sun, Aug 25, 2013 at 4:09 PM, Ken Sullivan wrote:
> That could be possible, but ideally we wouldn't have to change how the
> data is being inserted. The data is original
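A quick way to see why a char count is not a code-point or byte count: supplementary characters (code points above U+FFFF) take two UTF-16 code units in a Java String but four bytes in UTF-8, which is why a serialized string needs its encoded byte length recorded up front. A minimal standalone illustration (plain JDK, no Hadoop):

```java
import java.nio.charset.StandardCharsets;

public class SupplementaryDemo {
    public static void main(String[] args) {
        // U+1F600 GRINNING FACE: one code point, but a surrogate pair in UTF-16.
        String s = "\uD83D\uDE00";
        System.out.println(s.length());                        // 2 UTF-16 code units
        System.out.println(s.codePointCount(0, s.length()));   // 1 code point
        System.out.println(s.getBytes(StandardCharsets.UTF_8).length); // 4 UTF-8 bytes
    }
}
```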
Hi Shalish,
Let me google that for you using "dicom java api":
http://www.dcm4che.org/
HTH,
Jens
Hi all,
My mapper function processes and aggregates data from 3 HBase tables and
writes it to the reducer for further operations.
However, all 3 tables have a small number of rows, not in the order of
millions. Still, my map task completes in
16:07:29,632 INFO JobClient:1435 - Running job
Hi Pavan,
> 2.) If my table is in the order of millions, the number of mappers is
> increased to 5. How does Hadoop know how many mappers to run for a
> specific job?
>
The number of input splits determines the number of mappers. Usually (in
the default case) your source is split into HDFS bl
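As a rough sketch of the default behavior described above (the exact formula varies across Hadoop versions; this mirrors the classic FileInputFormat rule max(minSize, min(maxSize, blockSize))):

```java
public class SplitSizeDemo {
    // Mirrors the classic FileInputFormat rule: max(minSize, min(maxSize, blockSize)).
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024;            // a 64 MB HDFS block
        long splitSize = computeSplitSize(blockSize, 1L, Long.MAX_VALUE);
        System.out.println(splitSize);                 // 67108864: one split per block

        long fileLen = 200L * 1024 * 1024;             // a 200 MB input file
        System.out.println((fileLen + splitSize - 1) / splitSize); // 4 splits -> 4 mappers
    }
}
```

With default min/max sizes, the split size equals the block size, so a 200 MB file on 64 MB blocks yields 4 mappers.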
Pavan:
Did you use TableInputFormat or one of its variants?
If so, take a look at TableSplit and how it is used in
TableInputFormatBase#getSplits().
Cheers
On Sun, Aug 25, 2013 at 2:36 PM, Jens Scheidtmann <
jens.scheidtm...@gmail.com> wrote:
> Hi Pavan,
>
>
>> 2. ) If my table is in the order of mill
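For reference, a typical way such a table-scanning job is wired up, so that TableInputFormatBase#getSplits() produces one split (and hence one mapper) per region. This is a sketch only, assuming an HBase 0.94-era API on the classpath; `MyMapper` and the table name are hypothetical:

```
// Sketch: requires HBase jars on the classpath; MyMapper is a hypothetical TableMapper.
Scan scan = new Scan();
scan.setCaching(500);        // fetch 500 rows per RPC instead of the small default
scan.setCacheBlocks(false);  // don't pollute the block cache during a full scan
Job job = new Job(conf, "table-scan");
TableMapReduceUtil.initTableMapperJob(
    "mytable",               // hypothetical table name
    scan,
    MyMapper.class,
    Text.class,              // mapper output key class
    Result.class,            // mapper output value class
    job);
```

The scanner-caching settings on the Scan are also the usual first knob to turn when a TableInputFormat job scans slowly.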
You need to release your map code here to analyze the question. Generally,
when you map/reduce over HBase, a scanner with filter(s) is used, so the
mapper count is the HBase region count of your HBase table.
As for the reason why your reduce is so slow, I guess you have a disastrous
join on the three tables, which c
Hi Shalish,
Refer the report
http://liu.diva-portal.org/smash/get/diva2:564782/FULLTEXT01.pdf .
Some hints are available there .
Best regards
Jagan
On Fri, Aug 23, 2013 at 3:01 PM, Shalish VJ wrote:
> Hi,
>
> Is it possible to process DICOM images using Hadoop?
> Please help me with an examp
Hi Pavan,
Standalone cluster? How many RS are you running? What are you trying to
achieve in MR? Have you tried increasing scanner caching?
"Slow" is very theoretical unless we know some more details of your setup.
~Anil
On Sun, Aug 25, 2013 at 5:52 PM, 李洪忠 wrote:
> You need release your map co
Jens, can I set a smaller value in my application?
Is this valid?
conf.setInt("mapred.max.split.size", 50);
This is our mapred-site.xml:
  <property>
    <name>mapred.job.tracker</name>
    <value>ip-10-10-100170.eu-east-1.compute.internal:8021</value>
  </property>
  <property>
    <name>mapred.job.tracker.http.address</name>
    <value>0.0.0.0:50030</value>
  </property>
  mapreduce.
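One caveat before trying that: mapred.max.split.size is specified in bytes, so 50 would cap each split at 50 bytes. Under the usual max(minSize, min(maxSize, blockSize)) rule, a single 64 MB block would then shatter into over a million splits, one mapper each:

```java
public class MaxSplitSizeDemo {
    public static void main(String[] args) {
        long blockSize = 64L * 1024 * 1024;  // one 64 MB HDFS block
        long minSize = 1L;
        long maxSize = 50L;                  // mapred.max.split.size is in BYTES
        long splitSize = Math.max(minSize, Math.min(maxSize, blockSize));
        System.out.println(splitSize);       // 50
        // Number of splits carved out of a single block:
        System.out.println((blockSize + splitSize - 1) / splitSize); // 1342178
    }
}
```

A value in the tens of megabytes is the more plausible intent if the goal is merely more mappers than one per block.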
Ted and lhztop, here is a gist of my code: http://pastebin.com/mxY4AqBA
Can you suggest a few ways of optimizing it? I know I am re-initializing the
conf object in the map function every time it's called; I'll change that.
Anil Gupta: 6-node cluster, so 6 Region Servers. I am basically trying to
do a