Hi Colin,

I think this would work as a lab setup for testing how Hadoop handles
hardware failures, but I was thinking of the opposite, on a much larger
scale.

It is thought by many IT people that it's smart to take one high-end
system and split it into multiple virtual machines, but few think of
taking multiple machines, integrating them into one, and then
virtualising them apart again. This has been done, though the back-end
data storage has almost always resided on a SAN, which has a cost
implication. I know Hadoop's Distributed File System is geared towards
batch processing of larger data sets, with higher latency, but has
anyone ever tried integrating it with a virtual server technology like
KVM? Something along the lines of the sketch below, perhaps.
The other possibility is that I'm nuts and have completely lost the plot.
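
Just to make the "one physical box = one rack" idea from Colin's mail
concrete, here is a rough, untested sketch of a rack-awareness script
(the addresses and rack names are invented for illustration). As far as
I know, Hadoop runs whatever executable the topology.script.file.name
property points at, passing a batch of IPs/hostnames as arguments and
reading the rack paths back from stdout:

#!/usr/bin/env python
# Hypothetical rack-awareness script: map each guest VM's address back
# to a "rack" named after the physical box hosting it, so HDFS spreads
# block replicas across physical machines rather than just across VMs.

import sys

# Invented layout for illustration: two physical boxes, four KVM guests each.
VM_TO_RACK = {
    "192.168.1.11": "/box1", "192.168.1.12": "/box1",
    "192.168.1.13": "/box1", "192.168.1.14": "/box1",
    "192.168.1.21": "/box2", "192.168.1.22": "/box2",
    "192.168.1.23": "/box2", "192.168.1.24": "/box2",
}

# Emit one rack path per argument, whitespace-separated; unknown hosts
# fall back to the default rack.
print(" ".join(VM_TO_RACK.get(a, "/default-rack") for a in sys.argv[1:]))

Each VM would then report the rack of its physical host, so HDFS should
place replicas on different physical boxes. Colin's idea of limiting the
cores each job uses would just be the usual
mapred.tasktracker.map.tasks.maximum and
mapred.tasktracker.reduce.tasks.maximum settings on each VM's
TaskTracker, if I remember the property names right.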

Kind Regards

Brad

On Fri, Jun 6, 2008 at 5:03 PM, Colin Freas <[EMAIL PROTECTED]> wrote:
> I've wondered about this using single or dual quad-core machines with one
> spindle per core, and partitioning them out into 2, 4, 8, whatever virtual
> machines, possibly marking each physical box as a "rack".
>
> There would be some initial and ongoing sysadmin costs.  But could this
> increase throughput on a small cluster of 2 or 3 boxes with 16 or 24 cores,
> running many jobs, by limiting the number of cores each job runs on to, say, 8?
> Has anyone tried such a setup?
>
>
> On Fri, Jun 6, 2008 at 10:30 AM, Brad C <[EMAIL PROTECTED]> wrote:
>
>> Hello Everyone,
>>
>> I've been brainstorming recently and it's always been in the back of my
>> mind: Hadoop offers the functionality of clustering commodity systems
>> together, but how would one go about virtualising them apart again?
>>
>> Kind Regards
>>
>> Brad :)
>>
>
