Re: [zfs-discuss] all in one server

2012-09-19 Thread Ian Collins

On 09/19/12 02:38 AM, Sašo Kiselkov wrote:

On 09/18/2012 04:31 PM, Eugen Leitl wrote:


Can I actually have a year's worth of snapshots in
zfs without too much performance degradation?

Each additional dataset (not sure about snapshots, though) increases
boot times slightly; however, I've seen pools with several hundred
datasets without any serious issues, so yes, it is possible. Be
prepared, though, for the data volumes to be substantial (depending
on how much data turns over between snapshots).
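As a rough way to gauge those numbers on an existing pool, the dataset and snapshot counts can be listed with `zfs list` (the pool name `tank` here is just a placeholder):

```shell
# Count filesystems and snapshots on a pool (replace "tank" with your pool)
zfs list -H -o name -t filesystem -r tank | wc -l
zfs list -H -o name -t snapshot  -r tank | wc -l
```

The `-H` flag suppresses the header line so the counts come out clean.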


The boot overhead for many (in my case 1200) filesystems isn't as bad 
as it was.  On our original Thumper I had to amalgamate all our user 
home directories into one filesystem due to slow boot.  Now I have split 
them again to send over a slow WAN...


Large numbers of snapshots (tens of thousands) don't appear to impact 
boot times.


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] scripting incremental replication data streams

2012-09-19 Thread Karl Wagner

Hi Edward,

My own personal view on this is that the simplest option is the best.

In your script, create a new snapshot using one of two names; let's call 
them SNAPSEND_A and SNAPSEND_B. You can decide which one to use by checking 
which of them currently exists.


As a manual setup step, on the first run, create SNAPSEND_A and send it to 
your target. This can obviously be done incrementally from your last 
replication/last common snapshot.
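That manual seed step might look like this (the dataset names `tank/data` and `backup/data` and the host `backuphost` are placeholders, not anything from the thread):

```shell
# One-time manual setup: create the first marker snapshot and seed the target
zfs snapshot tank/data@SNAPSEND_A
zfs send tank/data@SNAPSEND_A | ssh backuphost zfs receive backup/data
```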


Now in your script, you would:
* Check your source dataset for the existence of SNAPSEND_A and 
SNAPSEND_B. Let's assume this is the first run after manual setup, so 
SNAPSEND_A will exist.

* Create SNAPSEND_B. Replicate this over to your receiving dataset.
* Remove SNAPSEND_A on both sides. This will leave all intermediate 
snapshots.


Next run, it will create SNAPSEND_A again, and remove B when finished.
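The whole alternating scheme can be sketched as a small script. This is a minimal illustration, not a tested tool: the dataset names, the ssh target, and all error handling are assumptions left to the reader.

```shell
#!/bin/sh
# Alternate between two marker snapshots so there is always one common
# snapshot on both sides to serve as the incremental base.
SRC=tank/data                 # placeholder source dataset
DST=backup/data               # placeholder dataset on the receiver
RHOST=backuphost              # placeholder receiving host

# Whichever marker exists is the old base; the other becomes the new one.
if zfs list -H -o name "$SRC@SNAPSEND_A" >/dev/null 2>&1; then
    OLD=SNAPSEND_A; NEW=SNAPSEND_B
else
    OLD=SNAPSEND_B; NEW=SNAPSEND_A
fi

zfs snapshot "$SRC@$NEW"

# -I sends everything between the old and new snapshots, so intermediate
# snapshots are preserved on the receiving side as well.
zfs send -I "$SRC@$OLD" "$SRC@$NEW" | ssh "$RHOST" zfs receive "$DST" || exit 1

# Drop the old marker on both sides; all other snapshots remain.
zfs destroy "$SRC@$OLD"
ssh "$RHOST" zfs destroy "$DST@$OLD"
```

Running it repeatedly just flips between A and B, which matches the behaviour described above.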

Hope this helps.
Karl



[zfs-discuss] OT: does the LSI 9211-8i fit into the HP N40L?

2012-09-19 Thread Eugen Leitl

Hi again,

thanks for all the replies on the all-in-one-with-ESXi thread; it was 
most illuminating. I will use that setup at my day job.

Now for a slight variation on a theme: N40L with ESXi with raw drive 
passthrough, with OpenIndiana/napp-it NFS or iSCSI export of underlying
devices. This particular setup is for a home VMWare lab, using
spare hardware parts I have around.

I'm trying to do something like 

http://forums.servethehome.com/showthread.php?464-HP-MicroServer-HBA-controller-recommendation

(using 4x SSD in http://www.sharkoon.com/?q=en/node/1824 and
4x SATA in the internal drive cage) and from that thread alone
can't quite tell whether it would fit, physically.

Anyone running an LSI 9211-8i in that little box without a 90 degree
Mini-SAS cable? If you do indeed need a 90 degree Mini-SAS, do you have a 
part number for me, perchance?

Would you use the 9211-8i for all 8 SATA devices internally,
disregarding the onboard mini-SAS to SATA chipset? Or use the 4x onboard
SATA to avoid saturating the port? I might or might not
use 2 TByte SAS instead of SATA, assuming I can get those
cheaply. It's 2-3 TByte SATA disks otherwise.

In the case of the 4x SSDs (Intel 1st or 2nd gen, ~80 GByte), would you go for
a 2x mirror for L2ARC and a 2x mirror for ZIL? Some other configuration?

Thanks!

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE