My $0.02:
This question is quite interesting considering Big Blue offers the
competing GPFS filesystem on Power. I have it on fairly good authority
that Intel bought Whamcloud in order to compete with IBM for future very
large (exascale) supercomputer installations. Power architecture is
seemingly
Would it be difficult to suspend I/O and snapshot all the nodes (assuming
ZFS)? Could you be sure that your MDS and OSS were synchronised?
On 7 February 2017 at 19:52, Mike Selway wrote:
> Hello Brett,
>
>Actually, looking for someone who uses a commercialized
> approach (that reta
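For the ZFS case, a consistent backup would roughly mean quiescing client I/O and then snapshotting every target together. A minimal sketch, assuming ZFS-backed targets with made-up pool and dataset names (mdtpool/mdt0 on the MDS, ostpool/ost0 on each OSS):

```shell
# Snapshot all Lustre target datasets with a common tag. A real
# deployment would first quiesce client I/O so that the MDT and OST
# snapshots describe the same point in time; without that barrier the
# set of snapshots is only crash-consistent.
TAG=backup-$(date +%Y%m%d-%H%M)
zfs snapshot mdtpool/mdt0@$TAG   # on the MDS
zfs snapshot ostpool/ost0@$TAG   # on each OSS, once per OST dataset
```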
Hi Vivek:
https://aws.amazon.com/marketplace/pp/B01344V0C0/ref=ads_9935bf16-1513-1460476463
have fun!
Cheers,
Andrew
On 12 April 2016 at 17:49, Vivek Arora wrote:
> Thank you Andrew.
>
> Would you happen to know the AMI number/id?
>
>
>
> *From:* Andrew Holw
Hi Vivek,
Intel have released an Amazon AMI for Lustre. I think it's quite expensive,
however.
Cheers,
Andrew
On 12 April 2016 at 06:06, Vivek Arora wrote:
> Hi All,
>
> Very new to the Lustre world.
>
> I am trying to do a test install of Lustre on an Amazon Web Services Linux
> instance
ZFS should not be slower for very long. I understand that, now that ZFS on
Linux is stable, many significant performance problems have been identified
and are being worked on.
On 5 May 2015 at 04:20, Andrew Wagner wrote:
> I can offer some guidance on our experiences with ZFS Lustre MDTs. Patrick
> a
We are planning to deploy a burst buffer; can the Lustre file system handle
this functionality? What we have is two types of storage, SSD and SATA, and
we would like to check whether Lustre can handle the movement of the data.
This is a bit of a "how long is a piece of string?" question.
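Lustre itself doesn't migrate data between tiers automatically, but OST pools at least let you place files on one tier or the other. A sketch with made-up pool, filesystem, and OST names:

```shell
# On the MGS: group the SSD and SATA OSTs into named pools.
lctl pool_new lustrefs.ssd
lctl pool_add lustrefs.ssd lustrefs-OST0000 lustrefs-OST0001
lctl pool_new lustrefs.sata
lctl pool_add lustrefs.sata lustrefs-OST0002 lustrefs-OST0003

# On a client: direct new files in a directory to the SSD pool.
lfs setstripe -p ssd /mnt/lustrefs/scratch
```

Moving a file that already exists onto the other tier means copying it, e.g. with lfs_migrate; there is no automatic policy engine in Lustre itself.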
Hi Scott,
Great job! Would you consider merging with the standard Lustre docs?
https://wiki.hpdd.intel.com/display/PUB/Documentation
Thanks,
Andrew
On 12 August 2014 18:58, Scott Nolin wrote:
> Hello,
>
> At UW SSEC my group has been using Lustre for a few years, and recently
> Lustre with
On 21 July 2014 10:25, Sean Brisbane wrote:
> Hi Andrew,
>
> To get Lustre 2.5 to build on EL6.5 I needed to disable building the
> server modules.
>
> rpmbuild --rebuild --define "lustre_name lustre-client" --define
> "configure_args --disable-server --enable-client" --without servers
> lustre-client
user jenkins does not exist - using root
group jenkins does not exist - using root
Bad exit status from /var/tmp/rpm-tmp.yp1duv (%install)
On 21 July 2014 08:16, Andrew Holway wrote:
> Hi Lustre folks,
>
> I'm getting this error when I
Hi Lustre folks,
I'm getting this error when I try to compile the Lustre client.
test -n "" \
|| find "lustre-source/lustre-2.5.2" -type d ! -perm -755 \
-exec chmod u+rwx,go+rx {} \; -o \
! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \
! -type d ! -perm -400 -exec chmod a+r {} \;
Hi,
Can someone give me the story on Lustre and sync IO?
Thanks,
Andrew
___
Lustre-discuss mailing list
Lustre-discuss@lists.lustre.org
http://lists.lustre.org/mailman/listinfo/lustre-discuss
On 12 June 2014 08:52, Oliver Mangold wrote:
> On 11.06.2014 20:29, Christopher J. Morrone wrote:
> > Odds are that you would be able to do it. How well it would perform,
> > and how easy it would be to admin are other questions. :)
> >
> We did it with KVM and PCI device passthrough for the IB
Hello,
Has anyone had any experience running VMs on Lustre?
Thanks,
Andrew
Hi Indivar,
Yes.
Thanks,
Andrew
On 9 June 2014 21:21, Indivar Nair wrote:
> Hi All,
>
> In Lustre, is it possible to create a zpool with multiple vdevs -
>
> E.g.:
>
> # zpool create tank mirror sde sdf mirror sdg sdh
> # zpool status
> pool: tank
> state: ONLINE
> scan: none requested
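To expand on the "Yes": the whole multi-vdev pool simply becomes the backing store for one target. mkfs.lustre can even create the pool itself, vdev layout included. A sketch with made-up device, filesystem, and node names:

```shell
# Format one ZFS-backed OST whose pool is built from two mirror vdevs;
# the vdev specification after the pool/dataset name is passed through
# to zpool create.
mkfs.lustre --ost --backfstype=zfs --fsname=lustrefs --index=0 \
    --mgsnode=mgs@tcp tank/ost0 mirror sde sdf mirror sdg sdh
```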
dd if=/dev/zero of=test.dat bs=1M count=1000 oflag=direct
oflag=direct forces direct I/O, which bypasses the client page cache and is
effectively synchronous.
On 19 May 2014 14:41, Pardo Diaz, Alfonso wrote:
> thank for your ideas,
>
>
> I have measured the OST RAID performance, and there isn't a bottleneck in
> the RAID disks. If I write direc
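For comparison, a buffered write against the same kind of target shows how much of a "fast" number is just page cache. A sketch; paths and sizes are arbitrary, and oflag=direct can fail on filesystems without O_DIRECT support (e.g. tmpfs), hence the guard:

```shell
# Buffered write: data lands in the page cache; conv=fsync forces a
# flush at the end, so the elapsed time includes reaching stable storage.
dd if=/dev/zero of=/tmp/dd_buffered.dat bs=1M count=16 conv=fsync

# Direct write: bypasses the page cache, so each block is sent before
# dd continues; guarded because not every filesystem supports O_DIRECT.
dd if=/dev/zero of=/tmp/dd_direct.dat bs=1M count=16 oflag=direct \
    || echo "O_DIRECT not supported on this filesystem"
```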
Hello,
I know this is slightly short notice, but we are noticing a lack of
Lustre and HPC storage types at the OpenZFS Europe Meetup on the 20th
of May.
Please come to Paris on the 20th of May!
http://www.meetup.com/OpenZFS-Europe/events/177038202/
Thanks,
Andrew
On 13 March 2014 18:57, Roger Sersted wrote:
>
> As some of you know, there is a bug in the Lustre client that causes kernel
> crashes and other problems when exported via NFS.
Which bug is this? Do you have a link?
Thanks,
Andrew
> Unfortunately, this bug is causing me a lot of problems due
> You need to use unique index numbers for each OST, i.e. OST0,
> OST1, etc.
I cannot see how to control this. I am creating new OSTs, but they are
all getting the same index number.
Could this be a problem with the mgs?
Thanks,
Andrew
>
> Ned
>
> To unsubscribe from this group and stop
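If it helps anyone hitting the same wall: the index is chosen explicitly at format time, so each new OST needs its own value. A sketch with made-up device, filesystem, and node names:

```shell
# Each OST gets a distinct --index; reusing a value is what produces
# duplicate-index errors when the target registers with the MGS.
mkfs.lustre --ost --fsname=lustrefs --index=0 --mgsnode=mgs@tcp /dev/sdb
mkfs.lustre --ost --fsname=lustrefs --index=1 --mgsnode=mgs@tcp /dev/sdc
```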
Thanks Ned! :)
On 23 October 2013 18:00, Ned Bass wrote:
> On Wed, Oct 23, 2013 at 05:46:41PM +0100, Andrew Holway wrote:
>> Hello,
>>
>> I have hit a wall trying to get lustre started. I have followed this
>> to some extent:
>>
>> http://zfsonlinux.or
Hello,
I have hit a wall trying to get Lustre started. I have followed this
to some extent:
http://zfsonlinux.org/lustre-configure-single.html
If someone could give me some guidance how to get these services
started it would be much appreciated.
I am running on CentOS 6.4 and am getting my package
ss and/or
> kernel thread? What are the top entries listed when you run "perf top"?
>
> --
> Cheers, Prakash
>
> On Tue, Oct 22, 2013 at 12:53:44PM +0100, Andrew Holway wrote:
>> Hello,
>>
>> I have just setup a "toy" lustre setup using this guide h
Hello,
I have just set up a "toy" Lustre system using this guide here:
http://zfsonlinux.org/lustre and have this process chewing 100% CPU:
sh -c echo 0 >> /proc/fs/lustre/llite/lustre-88006b0c7c00/max_cached_mb
Until I get something more beasty I am using my desktop machine with
KVM. Using st
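As an aside, the tunable that process keeps writing can be set through lctl rather than raw echoes into /proc; the instance name under llite varies per mount, hence the wildcard:

```shell
# Set and read back the client data cache limit on a mounted client.
lctl set_param llite.*.max_cached_mb=0
lctl get_param llite.*.max_cached_mb
```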