Re: [lustre-discuss] Running IBM Power boxes as OSSs?

2017-09-01 Thread Andrew Holway
my 0.02¢

This question is quite interesting considering that Big Blue offers the
competing GPFS filesystem on POWER.  I have it on fairly good authority
that Intel bought Whamcloud in order to compete with IBM for future very
large (exascale) supercomputer installations. The POWER architecture is
seemingly quite formidable in the supercomputing space, so having a combined
filesystem and processor offering is very important for Intel if they
want to compete in the HPC space.

I doubt that Intel, as the current guardian of Lustre, would allow any
serious work on supporting a POWER port. I guess this would be a bit
of an own goal!



On 1 September 2017 at 12:24, Daniel Kidger 
wrote:

> Hi.
>
> This is my first posting to the list.
> I have worked off and on with Lustre since helping set up a demo at SC02
> in Baltimore.
> A long time has passed and I now find myself at IBM.
> The question I have today is:
>
> Are any sites running with IBM POWER hardware for their Lustre servers
> i.e. MDS and OSSs?
> The only references I find are very old, certainly long before the
> availability of little-endian RedHat.
>
> And if not, what are likely to be the pain points and hurdles in building
> and running Lustre on non-x86 platforms like POWER?
>
> Daniel
>
> Daniel Kidger
> IBM Systems, UK
> daniel.kid...@uk.ibm.com
> +44 (0)7818 522266
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>
>
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Backup software for Lustre

2017-02-07 Thread Andrew Holway
Would it be difficult to suspend IO and snapshot all the nodes (assuming
ZFS)? Could you be sure that your MDS and OSSs are synchronised?
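A rough sketch of what that coordination might look like with plain ZFS tools. Host, pool, and dataset names here are made up, and the quiesce step is the hard part; note that Lustre 2.10 later added `lctl snapshot_create`, which automates exactly this barrier-plus-snapshot sequence:

```shell
# Hypothetical sketch: quiesce, then snapshot every target with one tag.
TAG="backup-$(date +%Y%m%d)"

# Stop new modifications first (e.g. unmount clients, or use the write
# barrier that newer Lustre versions provide via "lctl barrier_freeze").
ssh mds "zfs snapshot mdtpool/mdt0@$TAG"
for oss in oss1 oss2 oss3; do
    ssh "$oss" "zfs snapshot ostpool/ost0@$TAG"
done
```

Without such a barrier the MDT and OST snapshots are only crash-consistent with each other, which is the synchronisation concern raised above.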

On 7 February 2017 at 19:52, Mike Selway  wrote:

> Hello Brett,
>
>Actually, looking for someone who uses a commercialized
> approach (that retains user metadata and Lustre extended metadata) and not
> specifically the manual approaches of Chapter 17.
>
>
>
> Thanks!
> Mike
>
>
>
> *Mike Selway* *|** Sr. Tiered Storage Architect | Cray Inc.*
>
> Work +1-301-332-4116 | msel...@cray.com
>
> 146 Castlemaine Ct,   Castle Rock,  CO  80104 | www.cray.com
>
>
>
>
>
>
>
> *From:* Brett Lee [mailto:brettlee.lus...@gmail.com]
> *Sent:* Monday, February 06, 2017 11:45 AM
> *To:* Mike Selway 
> *Cc:* lustre-discuss@lists.lustre.org
> *Subject:* Re: [lustre-discuss] Backup software for Lustre
>
>
>
> Hey Mike,
>
>
>
> "Chapter 17" and
>
>
>
> http://www.intel.com/content/www/us/en/lustre/backup-and-restore-training.html
>
>
>
> both contain methods to backup & restore the entire Lustre file system.
>
>
>
> Are you looking for a solution that backs up only the (user) data files
> and their associated metadata (e.g. xattrs)?
>
>
> Brett
>
> --
>
> Protect Yourself From Cybercrime
>
> PDS Software Solutions LLC
>
> https://www.TrustPDS.com 
>
>
>
> On Mon, Feb 6, 2017 at 11:12 AM, Mike Selway  wrote:
>
> Hello,
>
> Anyone aware of and/or using a backup software package to
> protect their LFS environment (not referring to the tools/scripts suggested
> in Chapter 17)?
>
>
>
> Regards,
>
> Mike
>
>
>
> *Mike Selway* *|** Sr. Tiered Storage Architect | Cray Inc.*
>
> Work +1-301-332-4116 | msel...@cray.com
>
> 146 Castlemaine Ct,   Castle Rock,  CO  80104 | www.cray.com
>
>
>
>
>
>
>
>
>
>
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Lustre Test Install

2016-04-12 Thread Andrew Holway
Hi Vivek:

https://aws.amazon.com/marketplace/pp/B01344V0C0/ref=ads_9935bf16-1513-1460476463

have fun!

Cheers,

Andrew

On 12 April 2016 at 17:49, Vivek Arora  wrote:

> Thank you Andrew.
>
> Would you happen to know the AMI number/id?
>
>
>
> *From:* Andrew Holway [mailto:andrew.hol...@gmail.com]
> *Sent:* Tuesday, April 12, 2016 7:40 PM
> *To:* Vivek Arora 
> *Cc:* lustre-discuss@lists.lustre.org
> *Subject:* Re: [lustre-discuss] Lustre Test Install
>
>
>
> Hi Vivek,
>
>
>
> Intel has released an Amazon AMI for Lustre. I think it's quite expensive,
> however.
>
>
>
> Cheers,
>
>
>
> Andrew
>
>
>
>
>
> On 12 April 2016 at 06:06, Vivek Arora 
> wrote:
>
> Hi All,
>
>
>
> Very new to the lustre world.
>
>
>
> I am trying to do a test install of lustre on Amazon Web Services Linux
> Instances.
>
> Just for a test install, what configuration of Linux servers should I choose
> and how many would I need?
>
> Is there a quick checklist of all things needed for the install and a
> short how-to that I can follow.
>
> I am going through the documentation, however, any help I can get on this
> would be great.
>
>
>
> Regards,
>
> *Vivek*
>
>
>
>
> --
>
> *CONFIDENTIALITY NOTICE: This Electronic Mail (e-mail) contains
> confidential and privileged information intended only for the use of the
> individual or entity to which it is sent. If the reader of this message is
> not the intended recipient, or the employee or agent responsible for
> delivery to the intended rec​ipient, you are hereby notified that any
> dissemination, distribution, or copying of this communication is STRICTLY
> PROHIBITED. If you have received this communication in error, please
> immediately notify the sender by replying through e-mail or telephone.*
>
>
> --
>
>
>
>
>
>
>
>
>
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] Lustre Test Install

2016-04-12 Thread Andrew Holway
Hi Vivek,

Intel has released an Amazon AMI for Lustre. I think it's quite expensive,
however.

Cheers,

Andrew


On 12 April 2016 at 06:06, Vivek Arora  wrote:

> Hi All,
>
>
>
> Very new to the lustre world.
>
>
>
> I am trying to do a test install of lustre on Amazon Web Services Linux
> Instances.
>
> Just for a test install, what configuration of Linux servers should I choose
> and how many would I need?
>
> Is there a quick checklist of all things needed for the install and a
> short how-to that I can follow.
>
> I am going through the documentation, however, any help I can get on this
> would be great.
>
>
>
> Regards,
>
> *Vivek*
>
>
>
>
> --
>
> *CONFIDENTIALITY NOTICE: This Electronic Mail (e-mail) contains
> confidential and privileged information intended only for the use of the
> individual or entity to which it is sent. If the reader of this message is
> not the intended recipient, or the employee or agent responsible for
> delivery to the intended rec​ipient, you are hereby notified that any
> dissemination, distribution, or copying of this communication is STRICTLY
> PROHIBITED. If you have received this communication in error, please
> immediately notify the sender by replying through e-mail or telephone.*
>
>
> --
>
>
>
>
>
>
>
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [lustre-discuss] zfs -- mds/mdt -- ssd model / type recommendation

2015-05-04 Thread Andrew Holway
ZFS should not be slower for very long. I understand that, now that ZFS on
Linux is stable, many significant performance problems have been identified
and are being worked on.
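For reference, the four-SSD RAID10 layout discussed in this thread is simply two mirrored vdevs in one pool; a sketch with made-up pool and device names:

```shell
# Illustrative only: RAID10 = ZFS striping across two mirror vdevs.
zpool create mdt0pool \
    mirror /dev/disk/by-id/ssd-a /dev/disk/by-id/ssd-b \
    mirror /dev/disk/by-id/ssd-c /dev/disk/by-id/ssd-d
```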

On 5 May 2015 at 04:20, Andrew Wagner  wrote:

> I can offer some guidance on our experiences with ZFS Lustre MDTs. Patrick
> and Charlie are right - you will get less performance per $ out of ZFS MDTs
> vs. LDISKFS MDTs. That said, our RAID10 with 4x Dell Mixed Use Enterprise
> SSDs achieves similar performance to most of our LDISKFS MDTs. Our MDS was
> a Dell server and we wanted complete support coverage.
>
> One of the most important things for good performance with our ZFS MDS was
> RAM. We doubled the amount of RAM in the system after experiencing
> performance issues that were clearly memory pressure related. If you expect
> to have tens of millions of files, I wouldn't run the MDS without at least
> 128GB of RAM. I would be prepared to increase that number if you run into
> RAM bottlenecks - we ended up going to 256GB in the end.
>
> For a single OSS, you may not need 4x SSDs to deal with the load. We use
> the 4 disk RAID10 setup with a 1PB filesystem and 1.8PB filesystem. Our use
> case was more for archive purposes, so we wanted to go with a complete ZFS
> solution.
>
>
>
> On 5/4/2015 1:18 PM, Kevin Abbey wrote:
>
>> Hi,
>>
>>  For a single node OSS I'm planning to use a combined MGS/MDS. Can anyone
>> recommend an enterprise ssd designed for this workload?  I'd like to create
>> a raid10  with 4x ssd using zfs as the backing fs.
>>
>> Are there any published/documented systems using zfs in raid 10 using ssd?
>>
>> Thanks,
>> Kevin
>>
>>
>>
>
___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: [Lustre-discuss] Burst Buffer

2014-08-28 Thread Andrew Holway
> I would like to check as we are planning to deploy a burst buffer: can
> the Lustre file system handle this functionality? What we have is two types
> of storage, SSD and SATA. We would like to check if Lustre can handle the
> movement of the data.

This is a kind of "how long is a piece of string" question. Lustre scales
past TB/s of aggregate performance. What kind of performance do you want?

Thanks,

Andrew

>
> thank you and best regards
> Bassel
>
>
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] Lustre and ZFS notes available

2014-08-14 Thread Andrew Holway
Hi Scott,

Great job! Would you consider merging with the standard Lustre docs?

https://wiki.hpdd.intel.com/display/PUB/Documentation

Thanks,

Andrew


On 12 August 2014 18:58, Scott Nolin  wrote:

> Hello,
>
> At UW SSEC my group has been using Lustre for a few years, and recently
> Lustre with ZFS as the back end file system. We have found the Lustre
> community very open and helpful in sharing information. Specifically
> information from various LUG and LAD meetings and the mailing lists has
> been very helpful.
>
> With this in mind we would like to share some of our internal
> documentation and notes that may be useful to others. These are working
> notes, so not a complete guide.
>
> I want to be clear that the official Lustre documentation should be
> considered the correct reference material in general. But this information
> may be helpful for some -
>
> http://www.ssec.wisc.edu/~scottn/
>
> Topics that I think of particular interest may be lustre zfs install notes
> and JBOD monitoring.
>
> Scott Nolin
> UW SSEC
>
>
>
>
>
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] Lustre Client building problem on Centos 6.5

2014-07-21 Thread Andrew Holway
On 21 July 2014 10:25, Sean Brisbane  wrote:

>  Hi Andrew,
>
> To get l2.5 to build on el6.5 I needed to disable building the server
> modules.
>
> rpmbuild --rebuild --define "lustre_name lustre-client" --define
> "configure_args --disable-server --enable-client" --without servers
> lustre-client-2.5.2-2.6.32_431.17.1.el6.x86_64.src.rpm
>

Hi Sean,

Yeah, I have tried various permutations of the command, all to no avail. I
am trying the following:

rpmbuild --rebuild --without servers --define 'kversion $(uname -r)'
--define 'kdir /usr/src/kernels/2.6.32-431.20.3.el6.x86_64/' --rebuild
'lustre-client-2.5.1-2.6.32_431.5.1.el6.x86_64.src.rpm'

And see the same error:

+ cp /root/rpmbuild/BUILD/lustre-2.5.1/lustre/obdclass/llog_test.ko
/root/rpmbuild/BUILDROOT/lustre-client-2.5.1-2.6.32_431.5.1.el6.x86_64.x86_64/lib/modules/2.6.32-431.el6.x86_64/extra/kernel/fs/lustre

cp: cannot create regular file
`/root/rpmbuild/BUILDROOT/lustre-client-2.5.1-2.6.32_431.5.1.el6.x86_64.x86_64/lib/modules/2.6.32-431.el6.x86_64/extra/kernel/fs/lustre':
No such file or directory

error: Bad exit status from /var/tmp/rpm-tmp.rJHw4x (%install)
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] Lustre Client building problem on Centos 6.5

2014-07-21 Thread Andrew Holway
On a different system I am getting as follows:

+ cp /root/rpmbuild/BUILD/lustre-2.5.1/lustre/obdclass/llog_test.ko
/root/rpmbuild/BUILDROOT/lustre-client-2.5.1-2.6.32_431.5.1.el6.x86_64.x86_64/lib/modules/2.6.32-431.el6.x86_64/extra/kernel/fs/lustre

cp: cannot create regular file
`/root/rpmbuild/BUILDROOT/lustre-client-2.5.1-2.6.32_431.5.1.el6.x86_64.x86_64/lib/modules/2.6.32-431.el6.x86_64/extra/kernel/fs/lustre':
No such file or directory

error: Bad exit status from /var/tmp/rpm-tmp.yp1duv (%install)

RPM build errors:

user jenkins does not exist - using root

group jenkins does not exist - using root

user jenkins does not exist - using root

group jenkins does not exist - using root

Bad exit status from /var/tmp/rpm-tmp.yp1duv (%install)


On 21 July 2014 08:16, Andrew Holway  wrote:

> Hi Lustre folks,
>
> I'm getting this error when I try to compile the Lustre client.
>
> test -n "" \
> || find "lustre-source/lustre-2.5.2" -type d ! -perm -755 \
>  -exec chmod u+rwx,go+rx {} \; -o \
>   ! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \
>! -type d ! -perm -400 -exec chmod a+r {} \; -o \
>   ! -type d ! -perm -444 -exec /bin/sh
> /usr/src/redhat/BUILD/lustre-2.5.2/config/install-sh -c -m a+r {} {} \; \
>  || chmod -R a+r "lustre-source/lustre-2.5.2"
> + chmod -R go-w lustre-source/lustre-2.5.2
> + xargs chmod +x
> + find
> /usr/src/redhat/BUILDROOT/lustre-client-2.5.2-2.6.32_431.17.1.el6.x86_64.x86_64
> -name '*.so'
> + cat
> + ln -s Lustre.ha_v2
> /usr/src/redhat/BUILDROOT/lustre-client-2.5.2-2.6.32_431.17.1.el6.x86_64.x86_64/etc/ha.d/resource.d/Lustre
> ln: creating symbolic link
> `/usr/src/redhat/BUILDROOT/lustre-client-2.5.2-2.6.32_431.17.1.el6.x86_64.x86_64/etc/ha.d/resource.d/Lustre':
> No such file or directory
> error: Bad exit status from /var/tmp/rpm-tmp.QBYgh0 (%install)
>
>
> RPM build errors:
> user jenkins does not exist - using root
> group jenkins does not exist - using root
> user jenkins does not exist - using root
> group jenkins does not exist - using root
> Bad exit status from /var/tmp/rpm-tmp.QBYgh0 (%install)
>
> I'm building with:
>
> rpmbuild --define 'kversion $(uname -r)' --define 'kdir
> /usr/src/kernels/$(uname -r)' --rebuild
> lustre-client-2.5.2-2.6.32_431.17.1.el6.x86_64.src.rpm
>
> Cheers,
>
> Andrew
>
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Lustre Client building problem on Centos 6.5

2014-07-20 Thread Andrew Holway
Hi Lustre folks,

I'm getting this error when I try to compile the Lustre client.

test -n "" \
|| find "lustre-source/lustre-2.5.2" -type d ! -perm -755 \
-exec chmod u+rwx,go+rx {} \; -o \
  ! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \
  ! -type d ! -perm -400 -exec chmod a+r {} \; -o \
  ! -type d ! -perm -444 -exec /bin/sh
/usr/src/redhat/BUILD/lustre-2.5.2/config/install-sh -c -m a+r {} {} \; \
|| chmod -R a+r "lustre-source/lustre-2.5.2"
+ chmod -R go-w lustre-source/lustre-2.5.2
+ xargs chmod +x
+ find
/usr/src/redhat/BUILDROOT/lustre-client-2.5.2-2.6.32_431.17.1.el6.x86_64.x86_64
-name '*.so'
+ cat
+ ln -s Lustre.ha_v2
/usr/src/redhat/BUILDROOT/lustre-client-2.5.2-2.6.32_431.17.1.el6.x86_64.x86_64/etc/ha.d/resource.d/Lustre
ln: creating symbolic link
`/usr/src/redhat/BUILDROOT/lustre-client-2.5.2-2.6.32_431.17.1.el6.x86_64.x86_64/etc/ha.d/resource.d/Lustre':
No such file or directory
error: Bad exit status from /var/tmp/rpm-tmp.QBYgh0 (%install)


RPM build errors:
user jenkins does not exist - using root
group jenkins does not exist - using root
user jenkins does not exist - using root
group jenkins does not exist - using root
Bad exit status from /var/tmp/rpm-tmp.QBYgh0 (%install)

I'm building with:

rpmbuild --define 'kversion $(uname -r)' --define 'kdir
/usr/src/kernels/$(uname -r)' --rebuild
lustre-client-2.5.2-2.6.32_431.17.1.el6.x86_64.src.rpm
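One thing worth checking in the command above: inside single quotes the shell does not perform command substitution, so rpmbuild receives the literal string `$(uname -r)` rather than the running kernel version (rpm itself will not expand it either). A quick demonstration, plus a double-quoted variant of the invocation (the variant is an untested sketch, not a confirmed fix for the %install error):

```shell
#!/bin/sh
# With single quotes the text passes through literally; with double
# quotes the shell substitutes the kernel version before rpmbuild runs.
literal='kversion $(uname -r)'
expanded="kversion $(uname -r)"
echo "$literal"     # prints: kversion $(uname -r)
echo "$expanded"    # prints e.g.: kversion 2.6.32-431.17.1.el6.x86_64

# Untested sketch of the same rpmbuild call with expansion working:
# rpmbuild --rebuild --without servers \
#   --define "kversion $(uname -r)" \
#   --define "kdir /usr/src/kernels/$(uname -r)" \
#   lustre-client-2.5.2-2.6.32_431.17.1.el6.x86_64.src.rpm
```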

Cheers,

Andrew
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Lustre and Sync IO

2014-06-12 Thread Andrew Holway
Hi,

Can someone give me the story on Lustre and sync IO?

Thanks,

Andrew
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] Virtual machines on Lustre

2014-06-12 Thread Andrew Holway
On 12 June 2014 08:52, Oliver Mangold  wrote:

> On 11.06.2014 20:29, Christopher J. Morrone wrote:
> > Odds are that you would be able to do it.  How well it would perform,
> > and how easy it would be to admin are other questions. :)
> >
> We did it with KVM and PCI device passthrough for the IB and SAS
> adapters. Works fine and with little performance loss.
>

Passthrough for the SAS adapters? Our idea is to put qcow2 disk images on
our Lustre filesystem.

Thanks,

Andrew


>
>
> --
> Dr. Oliver Mangold
> System Analyst
> NEC Deutschland GmbH
> HPC Division
> Hessbrühlstraße 21b
> 70565 Stuttgart
> Germany
> Phone: +49 711 78055 13
> Mail: oliver.mang...@emea.nec.com
>
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] Virtual machines on Lustre

2014-06-10 Thread Andrew Holway
Hello,

Has anyone had any experience running VMs on Lustre?

Thanks,

Andrew
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] Vdev Configuration on Lustre

2014-06-09 Thread Andrew Holway
Hi Indivar,

Yes.

Thanks,

Andrew


On 9 June 2014 21:21, Indivar Nair  wrote:

> Hi All,
>
> In Lustre, is it possible to create a zpool with multiple vdevs -
>
> E.g.:
>
> # zpool create tank *mirror* sde sdf *mirror* sdg sdh
> # zpool status
>   pool: tank
>  state: ONLINE
>  scan: none requested
> config:
>
>   NAMESTATE READ WRITE CKSUM
>   tankONLINE   0 0 0
> mirror-0  ONLINE   0 0 0
>   sde ONLINE   0 0 0
>   sdf ONLINE   0 0 0
> mirror-1  ONLINE   0 0 0
>   sdg ONLINE   0 0 0
>   sdh ONLINE   0 0 0
>
>
>
> This will allow us to have a single OST per OSS, with ZFS managing the
> striping across vdevs.
>
> Regards,
>
>
> Indivar Nair
>
>
>
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] Same performance Infiniband and Ethernet

2014-05-19 Thread Andrew Holway
dd if=/dev/zero of=test.dat bs=1M count=1000 oflag=direct

oflag=direct forces direct I/O, which bypasses the page cache so each write
completes at the device before dd moves on.


On 19 May 2014 14:41, Pardo Diaz, Alfonso  wrote:

> thank for your ideas,
>
>
> I have measure the OST RAID performance, and there isn’t a bottleneck in
> the RAID disk. If I write directly in the RAID I got:
>
> dd if=/dev/zero of=test.dat bs=1M count=1000
> 1000+0 records in
> 1000+0 records out
> 1048576000 bytes (1,0 GB) copied, 1,34852 s, 778 MB/s
>
> And If i use /dev/urandom as input file I get the same performance again
> for infiniband and ethernet connection.
>
> How can I write directly forgoing cache?
>
>
> Thanks again!
>
>
>
>
> El 19/05/2014, a las 13:24, Hammitt, Charles Allen 
> escribió:
>
> > Two things:
> >
> > 1)  Linux write cache is likely getting in the way; you'd be better off
> trying to write directly forgoing cache
> > 2)  you need to write a much bigger file than 1GB; try 50GB
> >
> >
> > Then as the previous poster said, maybe your disks aren't up to snuff or
> are misconfigured.
> > Also, very interesting, and impossible, to get 154MB/s out of a single
> > GbE link [128MB/s].  It should be more like 100-115.  Unless this is
> > 10/40GbE... if so... again, start at #1 and #2.
> >
> >
> >
> >
> > Regards,
> >
> > Charles
> >
> >
> >
> >
> > --
> > ===
> > Charles Hammitt
> > Storage Systems Specialist
> > ITS Research Computing @
> > The University of North Carolina-CH
> > 211 Manning Drive
> > Campus Box # 3420, ITS Manning, Room 2504
> > Chapel Hill, NC 27599
> > ===
> >
> >
> >
> >
> > -Original Message-
> > From: lustre-discuss-boun...@lists.lustre.org [mailto:
> lustre-discuss-boun...@lists.lustre.org] On Behalf Of Vsevolod Nikonorov
> > Sent: Monday, May 19, 2014 6:54 AM
> > To: lustre-discuss@lists.lustre.org
> > Subject: Re: [Lustre-discuss] Same performance Infiniband and Ethernet
> >
> > What disks do your OSTs have? Maybe you have reached your disk
> performance limit, so Infiniband gives some speedup, but very small. Did
> you try to enable striping on your Lustre filesystem? For instance, you can
> type something like this: "lfs setstripe -c 
> /mnt/lustre/somefolder" and than copy a file into that folder.
> >
> > Also, there's an opinion that a sequence of zeros is not a good way to
> > test performance, so maybe you should try using /dev/urandom (which is
> > rather slow, so it's better to have a pre-generated "urandom" file in /ram,
> > or /dev/shm, or wherever your memory space is mounted, and copy that file
> > to the Lustre filesystem as a test).
> >
> >
> >
> > Pardo Diaz, Alfonso писал 2014-05-19 14:33:
> >> Hi,
> >>
> >> I have migrated my Lustre 2.2 to 2.5.1 and I have equipped my OSS/MDS
> >> and clients with Infiniband QDR interfaces.
> >> I have compile lustre with OFED 3.2 and I have configured lnet module
> >> with:
> >>
> >> options lnet networks="o2ib(ib0),tcp(eth0)"
> >>
> >>
> >> But when I try to compare the lustre performance across Infiniband
> >> (o2ib), I get the same performance than across ethernet (tcp):
> >>
> >> INFINIBAND TEST:
> >> dd if=/dev/zero of=test.dat bs=1M count=1000
> >> 1000+0 records in
> >> 1000+0 records out
> >> 1048576000 bytes (1,0 GB) copied, 5,88433 s, 178 MB/s
> >>
> >> ETHERNET TEST:
> >> dd if=/dev/zero of=test.dat bs=1M count=1000
> >> 1000+0 records in
> >> 1000+0 records out
> >> 1048576000 bytes (1,0 GB) copied, 5,97423 s, 154 MB/s
> >>
> >>
> >> And this is my scenario:
> >>
> >> - 1 MDs with SSD RAID10 MDT
> >> - 10 OSS with 2 OST per OSS
> >> - Infiniband interface in connected mode
> >> - Centos 6.5
> >> - Lustre 2.5.1
> >> - Striped filesystem “lfs setstripe -s 1M -c 10"
> >>
> >>
> >> I know my infiniband running correctly, because if I use IPERF3
> >> between client and servers I got 40Gb/s by infiniband and 1Gb/s by
> >> ethernet connections.
> >>
> >>
> >>
> >> Could you help me?
> >>
> >
> >>
> >> Regards,
> >>
> >>
> >>
> >>
> >>
> >> Alfonso Pardo Diaz
> >> System Administrator / Researcher
> >> c/ Sola nº 1; 10200 Trujillo, ESPAÑA
> >> Tel: +34 927 65 93 17 Fax: +34 927 32 32 37
> >>
> >>
> >>
> >>
> >> 
> >> Confidencialidad:
> >> Este mensaje y sus ficheros adjuntos se dirige exclusivamente a su
> >> destinatario y puede contener información privilegiada o confidencial.
> >> Si no es vd. el destinatario indicado, queda notificado de que la
> >> utilización, divulgación y/o copia sin autorización está prohibida en
> >> virtud de la legislación vigente. Si ha recibido este mensaje por
> >> error, le rogamos que nos lo comunique inmediatamente respondiendo al
> >> mensaje y proceda a su destrucción.
> >>
> >> Disclaimer:
> >> This message and its attached files is intended exclusively for its
> >> recipients and may contain confidential information. If you received
> >> this e-mail in error you are hereby notified that any dissemination,
> >> copy or disclosure of this communication is strictly

[Lustre-discuss] European ZFS meetup 20th May in Paris

2014-05-08 Thread Andrew Holway
Hello,

I know this is slightly short notice, but we are noticing a lack of
Lustre and HPC storage types at the OpenZFS Europe meetup on the 20th
May.

Please come to Paris on the 20th May!

http://www.meetup.com/OpenZFS-Europe/events/177038202/

Thanks,

Andrew
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] NFS and Lustre 2.4.2

2014-03-13 Thread Andrew Holway
On 13 March 2014 18:57, Roger Sersted  wrote:
>
> As some of you know, there is a bug in the Lustre client that causes kernel
> crashes and other problems when exported via NFS.

Which bug is this? Do you have a link?

Thanks,

Andrew


> Unfortunately, this bug is causing me a lot of problems due to the way our
> environment is structured.
>
> Will there be a 2.4.x release addressing this issue?
> If yes, when?
> If no, is there a patch I can apply to the source to fix this?
>
> Was this addressed in the 2.5.0 client?
> If yes, can the 2.5.0 client connect to 2.4.2 servers?
> If no, is there a patch I can apply to the source to fix this?
>
> Any other suggestions?
>
> Thanks,
>
> Roger S.
>
> Argonne National Laboratory
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] [zfs-discuss] Problems getting Lustre started with ZFS

2013-10-24 Thread Andrew Holway
> You need to use unique index numbers for each OST, i.e. OST0000,
> OST0001, etc.

I cannot see how to control this. I am creating new OSTs, but they are
all getting the same index number.

Could this be a problem with the mgs?

Thanks,

Andrew
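In case it helps later readers: the index is normally chosen at format time. A hedged sketch below (fsname, MGS NID, and pool/dataset names are loosely copied from this thread and may not match a real setup; check the exact flags against mkfs.lustre(8)):

```shell
# Each OST must be formatted with its own --index; reusing an index is
# what produces identical OST labels on every node.
# On lustre2:
mkfs.lustre --ost --backfstype=zfs --index=0 \
    --fsname=lustre --mgsnode=lustre1@tcp lustre-ost0/ost0
# On lustre3:
mkfs.lustre --ost --backfstype=zfs --index=1 \
    --fsname=lustre --mgsnode=lustre1@tcp lustre-ost0/ost0
```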

>
> Ned
>
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to zfs-discuss+unsubscr...@zfsonlinux.org.
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] [zfs-discuss] Problems getting Lustre started with ZFS

2013-10-23 Thread Andrew Holway
Thanks Ned! :)

On 23 October 2013 18:00, Ned Bass  wrote:
> On Wed, Oct 23, 2013 at 05:46:41PM +0100, Andrew Holway wrote:
>> Hello,
>>
>> I have hit a wall trying to get lustre started. I have followed this
>> to some extent:
>>
>> http://zfsonlinux.org/lustre-configure-single.html
>>
>> If someone could give me some guidance how to get these services
>> started it would be much appreciated.
>>
>> I'm running on CentOS 6.4 and am getting my packages from:
>> http://archive.zfsonlinux.org/epel/6/SRPMS/
>>
>> Thanks,
>>
>> Andrew
>>
>>
>> [root@lustre1 ~]# zfs get lustre:svname
>> NAME  PROPERTY   VALUE   SOURCE
>> lustre-mdt0   lustre:svname  -   -
>> lustre-mdt0/mdt0  lustre:svname  lustre:MDT  local
>> lustre-mgslustre:svname  -   -
>> lustre-mgs/mgslustre:svname  MGS local
>> [root@lustre1 ~]# zfs get lustre:svname
>> NAME  PROPERTY   VALUE   SOURCE
>> lustre-mdt0   lustre:svname  -   -
>> lustre-mdt0/mdt0  lustre:svname  lustre:MDT  local
>> lustre-mgslustre:svname  -   -
>> lustre-mgs/mgslustre:svname  MGS local
>> [root@lustre1 ~]# /etc/init.d/lustre
>> anaconda-ks.cfg   .bash_profile .cshrc
>> install.log.syslog.ssh/ .viminfo
>> .bash_logout  .bashrc   install.log
>> ks-post-anaconda.log  .tcshrc
>> [root@lustre1 ~]# /etc/init.d/lustre start
>> [root@lustre1 ~]# /etc/init.d/lustre start lustre-MDT
>> lustre-MDT is not a valid lustre label on this node
>> [root@lustre1 ~]# /etc/init.d/lustre start MGS
>> MGS is not a valid lustre label on this node
>
> You need to configure an /etc/ldev.conf file.  See man ldev.conf(5).
> Make sure the first field matches `uname -n`.
>
>>
>> I have configured three OSS's with a single OST:
>>
>> Andrews-MacBook-Air:~ andrew$ for i in {201..204}; do ssh
>> root@192.168.0.$i "hostname; zfs get lustre:svname"; done
>> lustre1.calthrop.com
>> NAME  PROPERTY   VALUE   SOURCE
>> lustre-mdt0   lustre:svname  -   -
>> lustre-mdt0/mdt0  lustre:svname  lustre:MDT  local
>> lustre-mgslustre:svname  -   -
>> lustre-mgs/mgslustre:svname  MGS local
>> lustre2.calthrop.com
>> NAME  PROPERTY   VALUE   SOURCE
>> lustre-ost0   lustre:svname  -   -
>> lustre-ost0/ost0  lustre:svname  lustre:OST  local
>> lustre3.calthrop.com
>> NAME  PROPERTY   VALUE   SOURCE
>> lustre-ost0   lustre:svname  -   -
>> lustre-ost0/ost0  lustre:svname  lustre:OST  local
>
> You need to use unique index numbers for each OST, i.e. OST0000,
> OST0001, etc.
>
> Ned
>
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss
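To illustrate the /etc/ldev.conf suggestion from this thread, a hypothetical layout for the first node (hostnames, labels, and dataset names here are illustrative only; per ldev.conf(5) the first field must match `uname -n`):

```
# /etc/ldev.conf   (local_host  foreign_host  label  device)
# "-" means no failover partner for that target.
lustre1  -  MGS              zfs:lustre-mgs/mgs
lustre1  -  lustre-MDT0000   zfs:lustre-mdt0/mdt0
```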


[Lustre-discuss] Problems getting Lustre started with ZFS

2013-10-23 Thread Andrew Holway
Hello,

I have hit a wall trying to get lustre started. I have followed this
to some extent:

http://zfsonlinux.org/lustre-configure-single.html

If someone could give me some guidance how to get these services
started it would be much appreciated.

I'm running on CentOS 6.4 and am getting my packages from:
http://archive.zfsonlinux.org/epel/6/SRPMS/

Thanks,

Andrew


[root@lustre1 ~]# zfs get lustre:svname
NAME  PROPERTY   VALUE   SOURCE
lustre-mdt0   lustre:svname  -   -
lustre-mdt0/mdt0  lustre:svname  lustre:MDT  local
lustre-mgslustre:svname  -   -
lustre-mgs/mgslustre:svname  MGS local
[root@lustre1 ~]# zfs get lustre:svname
NAME  PROPERTY   VALUE   SOURCE
lustre-mdt0   lustre:svname  -   -
lustre-mdt0/mdt0  lustre:svname  lustre:MDT  local
lustre-mgslustre:svname  -   -
lustre-mgs/mgslustre:svname  MGS local
[root@lustre1 ~]# /etc/init.d/lustre
anaconda-ks.cfg   .bash_profile .cshrc
install.log.syslog.ssh/ .viminfo
.bash_logout  .bashrc   install.log
ks-post-anaconda.log  .tcshrc
[root@lustre1 ~]# /etc/init.d/lustre start
[root@lustre1 ~]# /etc/init.d/lustre start lustre-MDT
lustre-MDT is not a valid lustre label on this node
[root@lustre1 ~]# /etc/init.d/lustre start MGS
MGS is not a valid lustre label on this node

I have configured three OSS's with a single OST:

Andrews-MacBook-Air:~ andrew$ for i in {201..204}; do ssh
root@192.168.0.$i "hostname; zfs get lustre:svname"; done
lustre1.calthrop.com
NAME  PROPERTY   VALUE   SOURCE
lustre-mdt0   lustre:svname  -   -
lustre-mdt0/mdt0  lustre:svname  lustre:MDT  local
lustre-mgslustre:svname  -   -
lustre-mgs/mgslustre:svname  MGS local
lustre2.calthrop.com
NAME  PROPERTY   VALUE   SOURCE
lustre-ost0   lustre:svname  -   -
lustre-ost0/ost0  lustre:svname  lustre:OST  local
lustre3.calthrop.com
NAME  PROPERTY   VALUE   SOURCE
lustre-ost0   lustre:svname  -   -
lustre-ost0/ost0  lustre:svname  lustre:OST  local
lustre4.calthrop.com
NAME  PROPERTY   VALUE   SOURCE
lustre-ost0   lustre:svname  -   -
lustre-ost0/ost0  lustre:svname  lustre:OST  local
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


Re: [Lustre-discuss] [zfs-discuss] ZFS/Lustre echo 0 >> max_cached_mb chewing 100% cpu

2013-10-22 Thread Andrew Holway
On 22 October 2013 16:21, Prakash Surya  wrote:
> This probably belongs on the Lustre mailing list.

I cross posted :)

> Regardless, I don't
> think you want to do that (do you?). It'll prevent any client side
> caching, and more importantly, I don't think it's a case that's been
> tested/optimized. What're you trying to achieve?

Sorry, I was not clear: I didn't action this, and I can't kill the
process. It seemed to start directly after running:

"FSTYPE=zfs /usr/lib64/lustre/tests/llmount.sh"

I have tried to kill it, first with -2 and up to -9, but the process will not budge.

Here is the top lines from perf top

37.39%  [osc]  [k] osc_set_info_async
 27.14%  [lov]  [k] lov_set_info_async
  4.13%  [kernel]   [k] kfree
  3.57%  [ptlrpc]   [k] ptlrpc_set_destroy
  3.14%  [kernel]   [k] mutex_unlock
  3.10%  [lustre]   [k] ll_wr_max_cached_mb
  3.00%  [kernel]   [k] mutex_lock
  2.82%  [ptlrpc]   [k] ptlrpc_prep_set
  2.52%  [kernel]   [k] __kmalloc

Thanks,

Andrew

>
> Also, just curious, where's the CPU time being spent? What process and/or
> kernel thread? What are the top entries listed when you run "perf top"?
>
> --
> Cheers, Prakash
>
> On Tue, Oct 22, 2013 at 12:53:44PM +0100, Andrew Holway wrote:
>> Hello,
>>
>> I have just setup a "toy" lustre setup using this guide here:
>> http://zfsonlinux.org/lustre and have this process chewing 100% cpu.
>>
>> sh -c echo 0 >> /proc/fs/lustre/llite/lustre-88006b0c7c00/max_cached_mb
>>
>> Until I get something more beasty I am using my desktop machine with
>> KVM. Using standard Centos 6.4 with latest kernel. (2.6.32-358.23.2).
>> my machine has 2GB ram
>>
>> Any ideas?
>>
>> Thanks,
>>
>> Andrew
>>
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss


[Lustre-discuss] ZFS/Lustre echo 0 >> max_cached_mb chewing 100% cpu

2013-10-22 Thread Andrew Holway
Hello,

I have just set up a "toy" Lustre system using this guide:
http://zfsonlinux.org/lustre and have this process chewing 100% CPU.

sh -c echo 0 >> /proc/fs/lustre/llite/lustre-88006b0c7c00/max_cached_mb

Until I get something more beasty I am using my desktop machine with
KVM. Using standard Centos 6.4 with latest kernel. (2.6.32-358.23.2).
my machine has 2GB ram

Any ideas?

Thanks,

Andrew
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss