We run one OST per OSS and each OST is ~580TB. Lustre 2.8 or 2.10, ZFS 0.7.
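For anyone curious, a ZFS-backed OST of that sort gets formatted along these lines; the pool layout, fsname, index, and MGS NID below are placeholders, not our actual config:

  zpool create ost0pool raidz2 /dev/disk/by-vdev/L0 /dev/disk/by-vdev/L1 /dev/disk/by-vdev/L2
  mkfs.lustre --ost --backfstype=zfs --fsname=lfs1 --index=0 \
      --mgsnode=10.0.0.1@o2ib ost0pool/ost0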
On 10/8/19 10:50 AM, Carlson, Timothy S wrote:
I’ve been running 100->200TB OSTs making up small petabyte file systems for the
last 4 or 5 years with no pain. Lustre 2.5.x through current generation.
Plenty of ZFS rebuilds.
>
> That's about 9M years, so it should probably be long enough? It might
> make sense to map "-1" internally to "(1 << 48) - 1" to make this easier.
>
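A quick sanity check on that figure, assuming the value is a count of seconds:

  $ echo $(( (1 << 48) / (3600 * 24 * 365) ))
  8925512     # i.e. roughly 8.9 million years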
> On May 8, 2019, at 17:18, Harr, Cameron wrote:
>> I had tested first and couldn
I had tested first and couldn't find a way to do so, so I was curious if
there was some undocumented way. I'm proceeding with, "No, there's not a
way."
On 5/6/19 12:52 PM, Andreas Dilger wrote:
> On Apr 11, 2019, at 11:02, Harr, Cameron wrote:
>> We'
We use a simple multipath config and then have our vdev_id.conf set up like the
following:
multipath yes
# Intent of channel names:
# First letter {L,U} indicates lower or upper enclosure
#   PCI_ID    HBA PORT   CHANNEL NAME
channel 05:00.0    1      L
channel 05:00.0
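After editing vdev_id.conf, the aliases should show up under /dev/disk/by-vdev once udev re-runs the rules, e.g.:

  udevadm trigger
  ls -l /dev/disk/by-vdev/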
There was a thread a couple weeks back about users no longer being able
to run 'lfs check *' in 2.10 clients, but there was no resolution to it.
(http://lists.lustre.org/pipermail/lustre-discuss-lustre.org/2019-April/016386.html)
This is becoming an issue at our site as well now. Is this "featur
We're exploring an idea where we keep soft quotas enabled so that users
will be notified they're nearing their hard quotas (via in-house
scripts), but users don't like that the soft quota becomes a hard block
after the grace period. I can understand their rationale as well that
they should be a
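For illustration only (user name and limits are made up), the soft/hard split is set with something like:

  # -b/-B are the soft/hard block limits in KB, -i/-I the soft/hard inode limits
  lfs setquota -u someuser -b 900000000 -B 1000000000 -i 900000 -I 1000000 /p/lustre1
  # the grace period itself is per-filesystem, via 'lfs setquota -t'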
Paul,
We still largely use static routing as we migrate from 2.5 and 2.8 to 2.10. We
basically cross mount all our production file systems across the various
compute clusters and have routing clusters to route Lustre traffic from IB or
OPA to Ethernet between buildings. Each building has its ow
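As a rough sketch (NIDs and interface names invented), the static routing boils down to module options like these in /etc/modprobe.d/lustre.conf:

  # on IB compute clients: reach the tcp fabric through the routers
  options lnet networks="o2ib0(ib0)" routes="tcp0 10.1.0.[1-2]@o2ib0"
  # on the router nodes: sit on both fabrics and forward
  options lnet networks="o2ib0(ib0),tcp0(eth0)" forwarding="enabled"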
We have multiple compute clusters that mount each of our Lustre file
systems and we do OS/kernel updates on them without regards to each
other. Sometimes a client cluster may be updated at the same time as one
of the Lustre clusters, but often it's not. This approach generally
works fine and jo
When you're over the soft limit, you *should* see an '*' in the listing, as
well as the time left in the grace period. We've had mixed success with that
actually working, however.
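For example (user and mount point are placeholders):

  lfs quota -u someuser /p/lustre1
  # the kbytes/files columns get a '*' once the soft limit is exceeded,
  # and the grace column shows the time remaining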
Cameron
On 1/9/19 5:21 AM, Moreno Diego (ID SIS) wrote:
Hi ANS,
About the soft limits and not receiving any warning or n
Russell,
Your symptoms are a little different from what I see when the MDS node's
passwd file is incomplete, but did you verify the affected_user has a
proper /etc/passwd entry on the MDS node(s)?
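A quick check on each MDS (user name is a placeholder):

  getent passwd affected_user
  # should return an entry on the MDS itself, not just on the clients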
On 1/10/19 12:14 PM, Russell Dekema wrote:
> We've got a Lustre system running lustre-2.5.42.28.dd
In my brief attempts to use lfs migrate, I found performance pretty slow
(it's serial). I also got some ASSERTs which should be fixed in 2.10 per
LU-8807; note that I was using 2.8. On a more trivial level, I found the
-v|--verbose option to the command doesn't work.
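For reference, the usual serial pattern looks roughly like this; the OST name and size filter are only illustrative:

  lfs find /mnt/lustre -obd lfs1-OST0004 -size +4G | lfs_migrate -y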
On 01/07/2019 12:26 PM, Mo
I use ltop heavily:
https://github.com/LLNL/lmt
On 12/20/18 9:15 AM, Alexander I Kulyavtsev wrote:
1) cerebro + ltop still work.
2) telegraf + influxdb (collector, time-series DB). Telegraf has input plugins
for lustre ("lustre2"), zfs, and many others. Grafana to plot live data from
the DB.
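A minimal telegraf stanza for that plugin looks roughly like this (the procfile overrides are optional and only a sketch):

  [[inputs.lustre2]]
    # ost_procfiles = ["/proc/fs/lustre/obdfilter/*/stats"]
    # mds_procfiles = ["/proc/fs/lustre/mdt/*/md_stats"]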
You may already know this, but you'll probably want to use the -R option
as well, to replicate the Lustre attributes to the new dataset.
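Something along these lines (pool/dataset names are placeholders):

  zfs snapshot -r oldpool/ost0@move
  zfs send -R oldpool/ost0@move | zfs recv newpool/ost0
  # -R carries the dataset properties, including the lustre:* ones, over with the data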
On 10/29/2018 08:33 AM, Mohr Jr, Richard Frank (Rick Mohr) wrote:
>> On Oct 29, 2018, at 1:12 AM, Riccardo Veraldi
>> wrote:
>>
>> it is time for me to move m
I'd second what Daniel said. Each of our MDS nodes has one zpool with
one mdt, except the first MDS node also has an mgs dataset on the pool.
The nodes are set up in failover pairs where each can see each other's
zpool and import them if necessary (with MMP protection turned on).
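With ZFS 0.7 that protection is the pool's multihost property, e.g. (pool name invented):

  zpool set multihost=on mdt0pool
  # needs a unique /etc/hostid on each node; the failover partner then just
  # does a normal 'zpool import mdt0pool' when it takes over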
On 10/23/2018