Re: [lustre-discuss] Designing a new Lustre system

2017-12-21 Thread Carlson, Timothy S
Last I looked at Isilon, it serializes parallel writes to a single file. Ultimately, the data is striped across multiple data servers, but it all channels through a single data server. If you only have file-per-process workloads, and …

Re: [lustre-discuss] Designing a new Lustre system

2017-12-21 Thread John Bent
Last I looked at Isilon, it serializes parallel writes to a single file. Ultimately, the data is striped across multiple data servers, but it all channels through a single data server. If you only have file-per-process workloads, and you have a lot of money, then Isilon is considered a solid enterprise …
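The serialization point above is where Lustre differs: a single shared file can be striped across many OSTs, so parallel writers reach different servers directly rather than funneling through one. A minimal sketch using the standard `lfs` tool; the paths, stripe counts, and stripe size here are illustrative, not from the thread:

```shell
# Stripe one shared output file across all available OSTs
# (-c -1 means "use every OST"); path is illustrative.
lfs setstripe -c -1 /lustre/scratch/shared_output.dat

# Or set a default layout on a directory so new files inherit
# a 4-OST stripe with a 4 MiB stripe size.
lfs setstripe -c 4 -S 4M /lustre/scratch/results/

# Inspect the resulting layout.
lfs getstripe /lustre/scratch/shared_output.dat
```

These commands must run on a Lustre client against a mounted Lustre filesystem, so treat this as a configuration fragment rather than something to run locally.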

Re: [lustre-discuss] Designing a new Lustre system

2017-12-21 Thread Glenn Lockwood
On Wed, Dec 20, 2017 at 8:21 AM, E.S. Rosenberg wrote: > 4. One of my colleagues likes Isilon very much; I have not been able to find any literature on if/how Lustre compares. Any pointers/knowledge on the subject is very welcome. I haven't looked at Isilon in a while, but my recollection …

Re: [lustre-discuss] Designing a new Lustre system

2017-12-21 Thread E.S. Rosenberg
Thanks for all the great answers! Still looking for more info for #4. Thanks again, Eli. On Thu, Dec 21, 2017 at 12:26 AM, Mohr Jr, Richard Frank (Rick Mohr) <rm...@utk.edu> wrote: > My $0.02 below. > On Dec 20, 2017, at 11:21 AM, E.S. Rosenberg wrote: > 1. After my recent experience …

Re: [lustre-discuss] Designing a new Lustre system

2017-12-20 Thread Mohr Jr, Richard Frank (Rick Mohr)
My $0.02 below. > On Dec 20, 2017, at 11:21 AM, E.S. Rosenberg wrote: > 1. After my recent experience with failover I wondered: is there any reason not to set all machines that are within reasonable cable range as potential failover nodes, so that in the very unlikely event of both machines …
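On the failover question being discussed here: Lustre declares which servers may serve a target at format time via service NIDs, and every NID listed can take over the target. A hedged sketch with `mkfs.lustre`; the filesystem name, NIDs, and device are purely illustrative:

```shell
# Format an OST that can be served by either oss1 or oss2:
# both NIDs are registered as service nodes for this target.
mkfs.lustre --fsname=testfs --ost --index=0 \
    --mgsnode=mgs@tcp0 \
    --servicenode=oss1@tcp0 \
    --servicenode=oss2@tcp0 \
    /dev/sdb

# Clients mount through the MGS as usual; the client tries the
# registered service nodes if the current server is unreachable.
mount -t lustre mgs@tcp0:/testfs /mnt/testfs
```

Listing many service nodes is legal, but each candidate must be able to reach the target's shared storage, which in practice tends to be what limits "all machines within cable range." These commands require a real Lustre server and block device, so treat this as a configuration fragment.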

Re: [lustre-discuss] Designing a new Lustre system

2017-12-20 Thread Ben Evans
lustre-discuss-boun...@lists.lustre.org on behalf of "E.S. Rosenberg" <esr+lus...@mail.hebrew.edu>, Date: Wednesday, December 20, 2017 at 11:21 AM, To: lustre-discuss@lists.lustre.org …

Re: [lustre-discuss] Designing a new Lustre system

2017-12-20 Thread Patrick Farrell
"E.S. Rosenberg" <esr+lus...@mail.hebrew.edu>, Date: Wednesday, December 20, 2017 at 10:21 AM, To: lustre-discuss@lists.lustre.org, Subject: [lustre-discuss] Designing a new Lustre system

[lustre-discuss] Designing a new Lustre system

2017-12-20 Thread E.S. Rosenberg
Hi everyone, We are currently looking into upgrading/replacing our Lustre system with a newer system. I had several ideas I'd like to run by you and also some questions: 1. After my recent experience with failover I wondered: is there any reason not to set all machines that are within reasonable cable range as potential failover nodes …