When I was doing presales, the vast majority of our small to mid-size
procurements were for a three-year duration.
Sometimes the maintenance was extended for one year, but the cluster was
generally replaced after three years.
I can understand the fastest clusters might last longer (5 years, for
example) because of the huge initial investment, especially if it takes a
year or so for the system to be fully ready for production.
Past 5 years, though, and imho a cluster made of commodity components does a
better job as a heater than at number crunching.

Cheers,

Gilles

On Tuesday, March 22, 2016, Jeff Hammond <jeff.scie...@gmail.com> wrote:

>
>
> On Mon, Mar 21, 2016 at 6:06 AM, Gilles Gouaillardet <
> gilles.gouaillar...@gmail.com> wrote:
>
>> Durga,
>>
>> currently, the average life expectancy of a cluster is 3 years.
>>
>
> By average life expectancy, do you mean the average time to upgrade?  DOE
> supercomputers usually run for 5-6 years, and some HPC systems run for 6-9
> years.  I know that some folks upgrade their clusters more often than every
> 3 years, but I've never heard of an HPC system that was used for less than
> 3 years.
>
>
>> So if you have to architect a cluster out of off-the-shelf components, I
>> would recommend you take the "best" components available today or to be
>> released in the very near future.
>> So many things can happen in 10 years that I can only suggest you do not
>> lock yourself in with a given vendor.
>>
>>
>>
> Indeed, just write code that relies upon open standards and then buy the
> hardware that supports those open standards best at any given point in time.
>
> MPI is, of course, the best open standard to which you should be writing
> applications, and I am absolutely certain that MPI will be supported by
> interconnect products in 10 years, for the same reason that Fortran 77 is
> still widely supported nearly 40 years after its introduction :-)
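>
> As a purely illustrative sketch of what "writing to the standard" means
> (the ring-exchange program and the name ring.c are hypothetical, not from
> this thread), something like the following uses only standard MPI calls,
> so any compliant implementation on any supported fabric will run it
> unchanged:
>
>     /* ring.c: pass a token around all ranks; nothing here refers to a
>      * particular interconnect, only to the MPI standard. */
>     #include <mpi.h>
>     #include <stdio.h>
>
>     int main(int argc, char **argv)
>     {
>         int rank, size, token = 42;
>
>         MPI_Init(&argc, &argv);
>         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>         MPI_Comm_size(MPI_COMM_WORLD, &size);
>
>         if (size > 1) {
>             if (rank == 0) {
>                 /* start the token, then wait for it to come back around */
>                 MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
>                 MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
>                          MPI_STATUS_IGNORE);
>                 printf("token made it around %d ranks\n", size);
>             } else {
>                 /* receive from the left neighbour, forward to the right */
>                 MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
>                          MPI_STATUS_IGNORE);
>                 MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0,
>                          MPI_COMM_WORLD);
>             }
>         }
>
>         MPI_Finalize();
>         return 0;
>     }
>
> Built with mpicc and launched with, for example, "mpirun -np 4 ./ring",
> the same source runs over whichever interconnect the MPI library
> underneath was built for.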
>
> "best" should be understood as "best match with your needs and your
>> budget".
>>
>>
> And relying upon MPI rather than some hardware technology is the best way
> to optimize your budget, because MPI is supported by a huge range of
> networks, both commodity and custom.
>
>
>> As a general thought, the market is both rational and irrational, so the
>> best-engineered technology might not always prevail, and I do not know the
>> magic recipe to guarantee success.
>>
>>
> For example, VHS vs Betamax :-)
>
> Jeff
>
>
>> Cheers,
>>
>> Gilles
>>
>> On Monday, March 21, 2016, dpchoudh . <dpcho...@gmail.com> wrote:
>>
>>> Hello all
>>>
>>> I don't mean this to be a political conversation, but more of a
>>> research-type question.
>>>
>>> From what I have been observing, some of the interconnects that had very
>>> good technological features as well as popularity in the past have
>>> basically gone down in the history books, while others with comparable
>>> feature sets have gained ground (I won't name any names here; none of
>>> these is commodity gigabit Ethernet).
>>>
>>> Any comments on what drives these factors? Put another way, if I were to
>>> architect a system consisting of commodity nodes today, how could I
>>> reasonably be sure that the interconnect will still be a good choice, in
>>> every sense of the word 'good', say, 10 years down the road?
>>>
>>> Thanks
>>> Durga
>>>
>>> We learn from history that we never learn from history.
>>>
>>
>> _______________________________________________
>> users mailing list
>> us...@open-mpi.org
>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> http://www.open-mpi.org/community/lists/users/2016/03/28769.php
>>
>
>
>
> --
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
>
