> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jesus Cea
> 
> 1. Is the L2ARC data stored on the SSD checksummed? If so, can I
> expect ZFS to go directly to disk if the checksum is wrong?

Yup.  L2ARC blocks are checksummed, and if a checksum fails the read falls
back to the primary pool.


> 2. Can I import a pool if one or both L2ARC devices are unavailable?

Yup.  Cache devices are purely auxiliary; the pool imports fine without them.


> 3. What happens if an L2ARC device suddenly "disappears"?

No problem.  You lose the acceleration that it was giving you, and reads
revert to the primary storage instead.
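For reference, cache devices can be added and removed freely; a sketch with
hypothetical pool and device names:

```shell
# Add two SSDs as L2ARC cache devices (pool/device names hypothetical)
zpool add tank cache c2t0d0 c2t1d0

# A cache device can be removed at any time without risk to the pool
zpool remove tank c2t0d0

# Check cache device health
zpool status tank
```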


> 4. Any idea if L2ARC content will be persistent across system
> reboots "eventually"?

I hope so...


> 5. Can I import a pool if one or both ZIL devices are unavailable? My
> pool is v22. I know that I can remove ZIL devices since v19, but I
> don't know if I can remove them AFTER they are physically unavailable,
> or before importing the pool (after a reboot).

No problem.  However, you can only import without the log devices by force:
if a pool is offline and you're trying to import it, the system has no way
to know whether there were uncommitted transactions in the log devices
unless it can read them.  It will let you discard the ZIL if it's
unavailable, but it warns you harshly and you do so at your own risk.  There
are not many situations where that matters, though.
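If the log devices really are gone at import time, the force path looks
roughly like this (assuming a build with missing-log import support; pool
name hypothetical):

```shell
# -m imports the pool even though its log device is missing,
# discarding any uncommitted ZIL transactions (at your own risk)
zpool import -m tank
```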


> 6. Can I remove a ZIL device after ZFS considers it "faulty"?

Yup.  Since pool v19 a log device can be removed even when it is faulted.
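A sketch of removing a bad log device, assuming pool v19+ and hypothetical
device names (check "zpool status" for the real names):

```shell
# Remove the log device outright
zpool remove tank c3t0d0

# Or, for a mirrored log, detach just the faulted side
zpool detach tank c3t0d0
```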


> 7. What if a ZIL device suddenly "disappears"? I know that I could
> lose "committed" transactions in-flight, but will the machine crash?
> Will it fall back to ZIL on the hard disks?

No problem.  It reverts to the pool.  The only risk you have is this:
suppose some write was already committed to the log device, the log device
then disappears, and the system hard-crashes, all before the TXG is actually
flushed to the pool.  That could cost you one TXG, but having all of your
SSDs and your system hard-crash within seconds of each other is unlikely.  I
calculate that risk to be on the same order as a meteor strike.  ;-)


> 8. Since my ZIL will be mirrored, I assume that the OS will actually
> look for transactions to be replayed on both devices

Correct.
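For completeness, a mirrored log is set up like this (hypothetical pool and
device names):

```shell
# Add a mirrored pair of SSDs as the pool's log device;
# ZIL replay after a crash can then read from either side
zpool add tank log mirror c3t0d0 c3t1d0
```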


> 9. If one side of a mirrored ZIL goes offline and comes back online,
> will it resilver from the other side, or will it simply get new
> transactions, since old transactions are irrelevant after 30(?) seconds?

I don't know, but it doesn't matter, does it?  Worst case, there's a few
seconds of degraded performance while resilvering.  So stop your
pre-schooler from yanking disks out of your server and reinserting them if
you want to prevent this.  ;-)


> 10. What happens if my 1 GB of ZIL is too optimistic? Will ZFS use the
> disks, or will it stall writers until the ZIL is flushed to the HDs?

Good question.  I don't know.


> Anything else I should consider?
> 
> As you can see, my concerns concentrate on what happens if the SSDs go
> bad or "somebody" unplugs them "live".

Why are you more concerned about your SSDs going offline than about your
HDDs?

In all but the most extreme cases, IMHO the best solution nowadays is either
to use an unmirrored log device (since losing your log device no longer
means pool destruction, and write performance might be better if you don't
have to wait for writes to two devices)...  Or disable the ZIL.  If you
disable the ZIL, you get maximum performance at minimum cost, and depending
on how you use your system, that may be acceptable.
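On builds that have the per-dataset sync property (an assumption; older
builds only had a system-wide tunable), disabling the ZIL looks like:

```shell
# Per-dataset, on builds that support the "sync" property
zfs set sync=disabled tank/myfs

# Older OpenSolaris builds: system-wide tunable in /etc/system
#   set zfs:zil_disable = 1
```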


> I have backups of (most of) my data, but rebuilding a 12TB pool from
> backups, on a production machine, in a remote hosting facility, is
> something I'd rather avoid :-p.
> 
> I know that hybrid HD+SSD pools were a bit flaky in the past (you
> lost the ZIL device, you kissed your zpool goodbye, in the pre-v19
> days), and I want to know what terrain I am getting into.

Those days are over.  It's solid and stable now...  Since I guess a year
ago, maybe two years.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
