Konstantin Ryabitsev <konstan...@linuxfoundation.org> wrote:
> On Wed, Aug 05, 2020 at 03:11:27AM +0000, Eric Wong wrote:
> > I've been mostly using ext4 on SSDs since I started public-inbox
> > and it works well.
> 
> As you know, I hope to move lore.kernel.org to a system with a hybrid 
> lvm-cache setup, specifically:
> 
> 12 x 1.8TB rotational drives set up in a lvm raid-6 array
> 2  x 450GB SSD drives as an lvm-cache volume
> 
> This gives us 18TB capacity with a 900GB cache layer, and the FS on top 
> of that is XFS.
> 
> This is what is currently serving mirrors.edge.kernel.org (4 nodes 
> around the world).

Do you have any numbers on read IOPS or seek latency for the
RAID-6 array?  Also, how much RAM for the page cache?

Xapian is going to be tricky(*), and it looks like group search
will require a separate index :<  The upside is it may be able
to gradually replace the existing per-inbox indices for WWW and
deduplicate much of the data for cross-posted messages.

IMAP/JMAP is a different story...

Removing or relocating inboxes isn't going to be fun, either.

(*) Xapian's built-in sharding works well when the shard count
    matches the CPU core count, but trying to use Xapian's
    MultiDatabase (via ->add_database) with the current mirror of
    lore (almost 400 shards) doesn't work well, at all.
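    For reference, the MultiDatabase approach above is roughly the
    following (a sketch using the Search::Xapian Perl bindings; the
    glob pattern for shard directories is hypothetical, not lore's
    actual layout).  Every query term has to consult every shard's
    posting list, which is where ~400 shards falls over:

	use strict;
	use warnings;
	use Search::Xapian;

	# Start with an empty logical database, then attach each
	# per-inbox shard to it.  Queries against $db fan out to
	# all attached shards.
	my $db = Search::Xapian::Database->new;
	for my $shard (glob '/path/to/lore/*/xapian*/*') { # hypothetical path
		$db->add_database(Search::Xapian::Database->new($shard));
	}
	my $enq = Search::Xapian::Enquire->new($db);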
--
unsubscribe: one-click, see List-Unsubscribe header
archive: https://public-inbox.org/meta/