Your mention of hosting VMs below ("to store VM-images which are used by KVM")
is interesting. You'd probably benefit from some sort of de-duplication,
which Lustre doesn't do, and the workload doesn't seem to play to
Lustre's key strengths.
Have you considered using something like ZFS?
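To make the de-duplication point concrete, here is a rough way to estimate how much block-level dedup could save on an image file, using only standard tools. The 4 KiB block size and the synthetic image are my illustrative assumptions, not anything from the thread:

```shell
# Build a synthetic "VM image": four identical 4 KiB blocks plus one unique one.
img=$(mktemp); blkdir=$(mktemp -d)
for i in 1 2 3 4; do head -c 4096 /dev/zero >> "$img"; done
head -c 4096 /dev/urandom >> "$img"

# Split into fixed-size blocks, hash each, and count unique vs. total blocks.
split -b 4096 "$img" "$blkdir/blk."
total=$(ls "$blkdir" | wc -l)
unique=$(sha256sum "$blkdir"/blk.* | awk '{print $1}' | sort -u | wc -l)
echo "$unique of $total blocks unique"
```

On cloned KVM guests the unique fraction can be well below 1, which is exactly the saving a filesystem without dedup leaves on the table.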
At most a day. Most of that day will be inserting the DVDs with CentOS into
the machines and installing :)
The mkfs and mount of Lustre will take a total of about an hour.
--
Dr Stuart Midgley
sdm...@gmail.com
On 28/04/2010, at 21:03 , Janne Aho wrote:
> Thought I would say thanks for all the input you have given
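For anyone wondering what that hour of mkfs and mount involves, the per-node commands look roughly like this. The filesystem name, device paths, and the MGS node address are placeholders; check the manual for your Lustre version:

```shell
# Metadata server: combined MGS + MDT on one device
mkfs.lustre --fsname=testfs --mgs --mdt /dev/sdb
mkdir -p /mnt/mdt && mount -t lustre /dev/sdb /mnt/mdt

# Each OSS: one OST per data device, pointed at the MGS
mkfs.lustre --fsname=testfs --ost --mgsnode=mds01@tcp0 /dev/sdc
mkdir -p /mnt/ost0 && mount -t lustre /dev/sdc /mnt/ost0

# Each client
mkdir -p /mnt/testfs && mount -t lustre mds01@tcp0:/testfs /mnt/testfs
```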
Thought I would say thanks for all the input you have given, and I'm
sorry for the misspelling of LustreFS.
As we are completely green when it comes to any form of cluster file
system, we have to consider all the facts before we definitely know
what we should do and how much time we have to calc
On 2010-04-26, at 05:29, Mag Gam wrote:
> Speaking of the future. Is there any more news about SNS? I think
> that's the only thing Lustre is missing to make it "production" ready
> and not just for research labs.
I agree, and this is one of the features that I will be advocating for our next
round
Speaking of the future. Is there any more news about SNS? I think
that's the only thing Lustre is missing to make it "production" ready
and not just for research labs.
On Fri, Apr 23, 2010 at 12:07 PM, Stuart Midgley wrote:
> Yes, we suffer hardware failures. All the time. That is sort of the
Yes, we suffer hardware failures. All the time. That is sort of the point of
Lustre and a clustered file system :)
We have had double-disk failures with RAID-5 (recovered everything except ~1MB
of data), server failures, MDS failures, etc. We successfully recovered from
them all. Sure, it can
Our success is based on simplicity. Software RAID on direct-attached disks
with no add-on cards (i.e. ensure the motherboards have Intel Pro/1000 NICs,
at least six SATA ports, reliable CPUs, etc.).
Our first-generation gear consisted of a Supermicro motherboard, 2GB of memory,
a single dual-core Intel CPU, and 6x750GB
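A software-RAID layout like the one described (six direct-attached SATA disks, no add-on cards) would typically be assembled with mdadm along these lines; device and filesystem names here are placeholders of mine:

```shell
# Assemble six SATA disks into a RAID-5 md device
mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]

# Watch the initial sync, and any later rebuild after a disk swap
cat /proc/mdstat

# The md device then becomes the OST backing store
mkfs.lustre --fsname=testfs --ost --mgsnode=mds01@tcp0 /dev/md0
```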
Taking a break from my current non-computer related work..
My guess based on your success is that your gear is not so much cheap as
*cost-effective, high-MTBF commodity parts*.
If you go for the absolute bargain basement stuff, you'll have problems
as individual components flake out.
If you spend
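The "components flake out" point is easy to quantify. Assuming independent failures and an illustrative 3% annual failure rate per disk (both assumptions mine, not from the thread), the chance of at least one disk failure per year across 40 OSSes with 6 disks each is essentially certain:

```shell
awk 'BEGIN {
    afr = 0.03            # assumed annual failure rate per disk
    disks = 40 * 6        # 40 OSSes x 6 SATA disks (figures from the thread)
    printf "%.3f\n", 1 - (1 - afr)^disks
}'
```

This is why the design has to tolerate failures rather than try to avoid them.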
On 23/04/10 11:42, Stu Midgley wrote:
>> Would lustre have issues if using cheap off the shelf components or
>> would people here think you need to have high end machines with built in
>> redundancy for everything?
>
> We run lustre on cheap off the shelf gear. We have 4 generations of
> cheapish
We run lustre on cheap off the shelf gear. We have 4 generations of
cheapish gear in a single 300TB lustre config (40 OSSes).
It has been running very, very well for about 3.5 years now.
> Would lustre have issues if using cheap off the shelf components or
> would people here think you need to have high end machines with built in
> redundancy for everything?
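For scale, the figures Stuart gives (300 TB over 40 OSSes here, and 6x750GB disks per first-generation node earlier in the thread) imply roughly the following per-node numbers. The RAID-5 line assumes all six disks of a first-generation node form one array, which is my assumption:

```shell
# 300 TB of Lustre spread over 40 OSSes
awk 'BEGIN { printf "%.1f TB per OSS on average\n", 300 / 40 }'

# A first-gen node: 6 x 750 GB disks in RAID-5 leaves 5 disks' worth of data
awk 'BEGIN { printf "%.2f TB usable per first-gen node\n", (6 - 1) * 0.75 }'
```

So the later generations must be carrying considerably more capacity than the first-generation boxes.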
On 22/04/10 17:38, Lundgren, Andrew wrote:
(somehow managed to send this as private mail, while it was meant to be
sent to the list)
Sorry for being old-fashioned and answering inline, but it feels less risky.
> I think the lustre 2.0 release notes indicated that lustre will continue but
> may onl
> Sent: Thursday, April 22, 2010 12:53 AM
> To: Janne Aho
> Cc: lusterfs
> Subject: Re: [Lustre-discuss] Future of LusterFS?
>
> On 2010-04-22, at 00:33, Janne Aho wrote:
>
>> Today we have a storage system based on NFS, but we are really concerned
>> about redundancy and
> Make sure you read and understand the Lustre 2.0 release notes before you
> buy. There seemed to be some specifics in there about using Oracle hardware.
In all fairness ... that only matters if you pay Oracle for support. If
you aren't paying Oracle for support (or have no plans to), then it doesn't matter.
From: Andreas Dilger
Sent: Thursday, April 22, 2010 12:53 AM
To: Janne Aho
Cc: lusterfs
Subject: Re: [Lustre-discuss] Future of LusterFS?
On 2010-04-22, at 00:33, Janne Aho wrote:
> Today we have a storage system based on NFS, but we are really concerned
> about redundancy and are at the brink to take the step to a cluster file system
On 22/04/10 08:56, Michael Schwartzkopff wrote:
On 22/04/10 08:53, Andreas Dilger wrote:
> On Thursday, 22 April 2010 08:33:14, Janne Aho wrote:
>> Hi,
>>
>> Today we have a storage system based on NFS, but we are really concerned
>> about redundancy and are at the brink to take the step to a cluster file system
On Thursday, 22 April 2010 08:33:14, Janne Aho wrote:
> Hi,
>
> Today we have a storage system based on NFS, but we are really concerned
> about redundancy and are at the brink to take the step to a cluster file
> system as glusterfs, but we have got suggestions on that lusterfs would
> have been the best option for us
On 2010-04-22, at 00:33, Janne Aho wrote:
> Today we have a storage system based on NFS, but we are really concerned
> about redundancy and are at the brink to take the step to a cluster file
> system as glusterfs, but we have got suggestions on that lusterfs would
> have been the best option for us, but at the same time those who
Hi,
Today we have a storage system based on NFS, but we are really concerned
about redundancy and are on the brink of taking the step to a cluster file
system such as glusterfs, but we have got suggestions that lusterfs would
be the best option for us, but at the same time those who
"recom