Re: [zfs-discuss] Sun X4200 Question...

2013-03-15 Thread Tiernan OToole
Thanks for the info. I am planning the install this weekend, between
Formula One and other hardware upgrades... fingers crossed it works!
On 14 Mar 2013 09:19, Heiko L. h.lehm...@hs-lausitz.de wrote:


  support for VT, but nothing for AMD... The Opterons dont have VT, so i
 wont
  be using XEN, but the Zones may be useful...

 We have used XEN/PV on X4200 for many years without problems.
 dom0: X4200+openindiana+xvm
 guests(PV): openindiana,linux/fedora,linux/debian
 (vmlinuz-2.6.32.28-xenU-32,vmlinuz-2.6.18-xenU64)


 regards Heiko

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



[zfs-discuss] Sun X4200 Question...

2013-03-11 Thread Tiernan OToole
I know this might be the wrong place to ask, but hopefully someone can
point me in the right direction...

I got my hands on a Sun x4200. Its the original one, not the M2, and has 2
single core Opterons, 4Gb RAM and 4 73Gb SAS Disks... But, I dont know what
to install on it... I was thinking of SmartOS, but the site mentions Intel
support for VT, but nothing for AMD... The Opterons dont have VT, so i wont
be using XEN, but the Zones may be useful...

Any advice?

Thanks!

-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.geekphotographer.com
www.tiernanotoole.ie


Re: [zfs-discuss] Sun X4200 Question...

2013-03-11 Thread Tiernan OToole
to tell you the truth, i dont really need the virtualization stuff... Zones
sounds interesting, since it seems to be lighter weight than Xen or anything
like that...


On Mon, Mar 11, 2013 at 8:50 PM, Bob Friesenhahn 
bfrie...@simple.dallas.tx.us wrote:

 On Mon, 11 Mar 2013, Tiernan OToole wrote:

  I know this might be the wrong place to ask, but hopefully someone can
 point me in the right direction...
 I got my hands on a Sun x4200. Its the original one, not the M2, and has
 2 single core Opterons, 4Gb RAM and 4 73Gb SAS Disks...
 But, I dont know what to install on it... I was thinking of SmartOS, but
 the site mentions Intel support for VT, but nothing for
 AMD... The Opterons dont have VT, so i wont be using XEN, but the Zones
 may be useful...


 OpenIndiana or OmniOS seem like the most likely candidates.

 You can run VirtualBox on OpenIndiana and it should be able to work
 without VT extensions.

 Bob
 --
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/






Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Tiernan OToole
Thanks all! I will check out FreeNAS and see what it can do... I will also
check my RAID Card and see if it can work with JBOD... fingers crossed...
The machine has a couple internal SATA ports (think there are 2, could be
4) so i was thinking of using those for boot disks and SSDs later...

As a follow up question: Data Deduplication: The machine, to start, will
have about 5GB RAM. I read somewhere that 20TB storage would require about
8GB RAM, depending on block size... Since i dont know block sizes, yet (i
store a mix of VMs, TV Shows, Movies and backups on the NAS) I am not sure
how much memory i will need (my estimate is 10TB RAW (8TB usable?) in a
RAIDZ1 pool, and then 3TB RAW in a striped pool). If i dont have enough
memory now, can i enable DeDupe at a later stage when i add memory? Also,
if i pick FreeBSD now, and want to move to, say, Nexenta, is that possible?
Assuming the drives are just JBOD drives (to be confirmed) could they just
get imported?
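For a rough sense of the memory question: a commonly cited rule of thumb is that each unique block costs on the order of 320 bytes of dedup-table (DDT) core memory. The sketch below is a back-of-envelope estimate only; the 10 TiB figure and 128 KiB record size are assumptions, and real usage depends on actual block sizes and dedup ratio.

```shell
# Back-of-envelope DDT RAM estimate. ~320 bytes per unique block is a
# commonly quoted rule of thumb; treat the result as an order of magnitude.
POOL_BYTES=$((10 * 1024 * 1024 * 1024 * 1024))  # assumed 10 TiB of data
RECORD_SIZE=$((128 * 1024))                     # assumed 128 KiB records
DDT_ENTRY_BYTES=320                             # approx. per-entry cost
BLOCKS=$((POOL_BYTES / RECORD_SIZE))
RAM_GIB=$((BLOCKS * DDT_ENTRY_BYTES / 1024 / 1024 / 1024))
echo "~${RAM_GIB} GiB RAM to keep the dedup table in core"
```

Large media files tend to use full-size records, while VM images full of small blocks push the estimate up sharply. "zdb -S poolname" can simulate dedup on an existing pool before you commit. And yes, dedup can be enabled later, but it only applies to blocks written after it is turned on.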

Thanks.


On Mon, Feb 25, 2013 at 6:11 PM, Tim Cook t...@cook.ms wrote:




 On Mon, Feb 25, 2013 at 7:57 AM, Volker A. Brandt v...@bb-c.de wrote:

 Tim Cook writes:
   I need something that will allow me to share files over SMB (3 if
   possible), NFS, AFP (for Time Machine) and iSCSI. Ideally, i would
   like something i can manage easily and something that works with
   the Dell...
 
  All of them should provide the basic functionality you're looking
  for.
   None of them will provide SMB3 (at all) or AFP (without a third
  party package).

 FreeNAS has AFP built-in, including a Time Machine discovery method.

 The latest FreeNAS is still based on Samba 3.x, but they are aware
 of 4.x and will probably integrate it at some point in the future.
 Then you should have SMB3.  I don't know how far along they are...


 Best regards -- Volker



 FreeNAS comes with a package pre-installed to add AFP support.  There is
 no native AFP support in FreeBSD and by association FreeNAS.

 --Tim







Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Tiernan OToole
Thanks again lads. I will take all that info into advice, and will join
that new group also!

Thanks again!

--Tiernan


On Tue, Feb 26, 2013 at 8:44 AM, Tim Cook t...@cook.ms wrote:



 On Mon, Feb 25, 2013 at 10:33 PM, Tiernan OToole lsmart...@gmail.com wrote:

 Thanks all! I will check out FreeNAS and see what it can do... I will
 also check my RAID Card and see if it can work with JBOD... fingers
 crossed... The machine has a couple internal SATA ports (think there are 2,
 could be 4) so i was thinking of using those for boot disks and SSDs
 later...

 As a follow up question: Data Deduplication: The machine, to start, will
 have about 5Gb  RAM. I read somewhere that 20TB storage would require about
 8GB RAM, depending on block size... Since i dont know block sizes, yet (i
 store a mix of VMs, TV Shows, Movies and backups on the NAS) I am not sure
 how much memory i will need (my estimate is 10TB RAW (8TB usable?) in a
 ZRAID1 pool, and then 3TB RAW in a striped pool). If i dont have enough
 memory now, can i enable DeDupe at a later stage when i add memory? Also,
 if i pick FreeBSD now, and want to move to, say, Nexenta, is that possible?
 Assuming the drives are just JBOD drives (to be confirmed) could they just
 get imported?

 Thanks.




 Yes, you can move between FreeBSD and Illumos based distros as long as you
 are at a compatible zpool version (which they currently are).  I'd avoid
 deduplication unless you absolutely need it... it's still a bit of a
 kludge.  Stick to compression and your world will be a much happier place.
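The mechanics of such a move are just an export on the old host and an import on the new one, provided both platforms support the pool's version. A hedged sketch; the pool name "tank" is a placeholder:

```shell
# On the FreeBSD box: cleanly export the pool so all state is on disk.
zpool export tank

# Move the disks, then on the illumos/Nexenta box:
zpool import            # lists importable pools found on attached disks
zpool import tank       # import by name (add -f if it wasn't exported)

# Check version compatibility on each platform before and after:
zpool upgrade -v        # shows what this platform supports
zpool get version tank
```

The safe habit is to avoid running "zpool upgrade" on the pool until you are sure you will not need to move it back to the older platform.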

 --Tim







[zfs-discuss] ZFS Distro Advice

2013-02-25 Thread Tiernan OToole
Good morning all.

My home NAS died over the weekend, and it leaves me with a lot of spare
drives (5 2Tb and 3 1Tb disks). I have a Dell Poweredge 2900 Server sitting
in the house, which has not been doing much over the last while (bought it
a few years back with the intent of using it as a storage box, since it has
8 Hot Swap drive bays) and i am now looking at building the NAS using ZFS...

But, now i am confused as to what OS to use... OpenIndiana? Nexenta?
FreeNAS/FreeBSD?

I need something that will allow me to share files over SMB (3 if
possible), NFS, AFP (for Time Machine) and iSCSI. Ideally, i would like
something i can manage easily and something that works with the Dell...

Any recommendations? Any comparisons to each?

Thanks.



[zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Tiernan OToole
Morning all...

I have a Dedicated server in a data center in Germany, and it has 2 3TB
drives, but only software RAID. I have got them to install VMWare ESXi and
so far everything is going ok... I have the 2 drives as standard data
stores...

But i am paranoid... So, i installed Nexenta as a VM, gave it a small disk
to boot off and 2 1Tb disks on separate physical drives... I have created a
mirror pool and shared it with VMWare over NFS and copied my ISOs to this
share...
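The mirror-plus-NFS arrangement described above looks roughly like this; device paths and names are illustrative only and differ per platform:

```shell
# Create a two-way mirror from the two virtual disks handed to the VM.
zpool create tank mirror c1t1d0 c1t2d0

# Dataset for the VMware datastore, shared over NFS.
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore     # Solaris-derived sharing syntax
zfs set compression=on tank/vmstore  # cheap win for ISO storage

# Then point ESXi at <nexenta-vm-ip>:/tank/vmstore as an NFS datastore.
```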

So, 2 questions:

1: If you where given the same hardware, what would you do? (RAID card is
an extra EUR30 or so a month, which i don't really want to spend, but
could, if needs be...)
2: should i mirror the boot drive for the VM?

Thanks.


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-07 Thread Tiernan OToole
Thanks Eugen.

yea, i am with Hetzner, but no hardware passthough... As for ESXi, i am
happy with it, but its not booting from USB... its using the disk to boot
from... I am thinking of using a USB key to boot from though... just need
to figure out how to remotely do this and if i should...

Thanks again!

--Tiernan


On Wed, Nov 7, 2012 at 12:16 PM, Eugen Leitl eu...@leitl.org wrote:

 On Wed, Nov 07, 2012 at 12:58:04PM +0100, Sašo Kiselkov wrote:
  On 11/07/2012 12:39 PM, Tiernan OToole wrote:
   Morning all...
  
   I have a Dedicated server in a data center in Germany, and it has 2 3TB
   drives, but only software RAID. I have got them to install VMWare ESXi
 and
   so far everything is going ok... I have the 2 drives as standard data
   stores...
  
   But i am paranoid... So, i installed Nexenta as a VM, gave it a small
 disk
   to boot off and 2 1Tb disks on separate physical drives... I have
 created a
   mirror pool and shared it with VMWare over NFS and copied my ISOs to
 this
   share...
  
   So, 2 questions:
  
   1: If you where given the same hardware, what would you do? (RAID card
 is
   an extra EUR30 or so a month, which i don't really want to spend, but
   could, if needs be...)

 A RAID card will only hurt you with an all-in-one. Do you have hardware passthrough
 with Hetzner (I presume you're with them, from the sound of it) on ESXi?

   2: should i mirror the boot drive for the VM?
 
  If it were my money, I'd throw ESXi out the window and use Illumos for
  the hypervisor as well. You can use KVM for full virtualization and
  zones for light-weight. Plus, you'll be able to set up a ZFS mirror on

 I'm very interested, as I'm currently working on an all-in-one with
 ESXi (using N40L for prototype and zfs send target, and a Supermicro
 ESXi box for production with guests, all booted from USB internally
 and zfs snapshot/send source).

 Why would you advise against the free ESXi, booted from USB, assuming
 your hardware has disk pass-through? The UI is quite friendly, and it's
 easy to deploy guests across the network.

  the data pair and set copies=2 on the rpool if you don't have another
  disk to complete the rpool with it. Another possibility, though somewhat
  convoluted, is to slice up the disks into two parts: a small OS part and
  a large datastore part (e.g. 100GB for the OS, 900GB for the datastore).
  Then simply put the OS part in a three-way mirror rpool and the
  datastore part in a raidz (plus do a grubinstall on all disks). That
  way, you'll be able to sustain a single-disk failure of any one of the
  three disks.






Re: [zfs-discuss] Building an On-Site and Off-Size ZFS server, replication question

2012-10-08 Thread Tiernan OToole
Ok, so, after reading a bit more of this discussion and after playing
around at the weekend, i have a couple of questions to ask...

1: Do my pools need to be the same? For example, the pool in the datacenter
is 2 1TB drives in a mirror; in house i have 5 200GB virtual drives in
RAIDZ1, giving 800GB usable. If i am backing up stuff to the home server,
can i still do a ZFS Send, even though the underlying system is different?
2: If i give out a partition as an iSCSI LUN, can this be ZFS Sended as
normal, or is there any difference?

Thanks.

--Tiernan

On Mon, Oct 8, 2012 at 3:51 AM, Richard Elling richard.ell...@gmail.com wrote:

 On Oct 7, 2012, at 3:50 PM, Johannes Totz johan...@jo-t.de wrote:

  On 05/10/2012 15:01, Edward Ned Harvey
  (opensolarisisdeadlongliveopensolaris) wrote:
  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
  boun...@opensolaris.org] On Behalf Of Tiernan OToole
 
  I am in the process of planning a system which will have 2 ZFS
  servers, one on site, one off site. The on site server will be
  used by workstations and servers in house, and most of that will
  stay in house. There will, however, be data i want backed up
  somewhere else, which is where the offsite server comes in... This
  server will be sitting in a Data Center and will have some storage
  available to it (the whole server currently has 2 3Tb drives,
  though they are not dedicated to the ZFS box, they are on VMware
  ESXi). There is then some storage (currently 100Gb, but more can
  be requested) of SFTP enabled backup which i plan to use for some
  snapshots, but more on that later.
 
  Anyway, i want to confirm my plan and make sure i am not missing
  anything here...
 
  * build server in house with storage, pools, etc... * have a
  server in data center with enough storage for its reason, plus the
  extra for offsite backup * have one pool set as my offsite
  pool... anything in here should be backed up off site also... *
  possibly have another set as very offsite which will also be
  pushed to the SFTP server, but not sure... * give these pools out
  via SMB/NFS/iSCSI * every 6 or so hours take a snapshot of the 2
  offsite pools. * do a ZFS send to the data center box * nightly,
  on the very offsite pool, do a ZFS send to the SFTP server * if
  anything goes wrong (my server dies, DC server dies, etc), Panic,
  download, pray... the usual... :)
 
  Anyway, I want to make sure i am doing this correctly... Is there
  anything on that list that sounds stupid or am i doing anything
  wrong? am i missing anything?
 
  Also, as a follow up question, but slightly unrelated, when it
  comes to the ZFS Send, i could use SSH to do the send, directly to
  the machine... Or i could upload the compressed, and possibly
  encrypted dump to the server... Which, for resume-ability and
  speed, would be suggested? And if i where to go with an upload
  option, any suggestions on what i should use?
 
  It is recommended, whenever possible, you should pipe the zfs send
  directly into a zfs receive on the receiving system.  For two
  solid reasons:
 
  If a single bit is corrupted, the whole stream checksum is wrong and
  therefore the whole stream is rejected.  So if this occurs, you want
  to detect it (in the form of one incremental failed) and then
  correct it (in the form of the next incremental succeeding).
  Whereas, if you store your streams on storage, it will go undetected,
  and everything after that point will be broken.
 
  If you need to do a restore, from a stream stored on storage, then
  your only choice is to restore the whole stream.  You cannot look
  inside and just get one file.  But if you had been doing send |
  receive, then you obviously can look inside the receiving filesystem
  and extract some individual specifics.
 
  If the recipient system doesn't support zfs receive, [...]
 
  On that note, is there a minimal user-mode zfs thing that would allow
  receiving a stream into an image file? No need for file/directory access
  etc.

 cat :-)

  I was thinking maybe the zfs-fuse-on-linux project may have suitable
 bits?

 I'm sure most Linux distros have cat
  -- richard







Re: [zfs-discuss] Building an On-Site and Off-Size ZFS server, replication question

2012-10-08 Thread Tiernan OToole
Cool beans lads. Thanks!

On Mon, Oct 8, 2012 at 8:17 AM, Ian Collins i...@ianshome.com wrote:

 On 10/08/12 20:08, Tiernan OToole wrote:

 Ok, so, after reading a bit more of this discussion and after playing
 around at the weekend, i have a couple of questions to ask...

 1: Do my pools need to be the same? for example, the pool in the
 datacenter is 2 1Tb drives in Mirror. in house i have 5 200Gb virtual
 drives in RAIDZ1, giving 800Gb usable. If i am backing up stuff to the home
 server, can i still do a ZFS Send, even though underlying system is
 different?


 Yes you can, just make sure you have enough space!


  2: If i give out a partition as an iSCSI LUN, can this be ZFS Sended as
 normal, or is there any difference?


 It can be sent as normal.

 --
 Ian.







[zfs-discuss] Building an On-Site and Off-Size ZFS server, replication question

2012-10-05 Thread Tiernan OToole
Good morning.

I am in the process of planning a system which will have 2 ZFS servers, one
on site, one off site. The on site server will be used by workstations and
servers in house, and most of that will stay in house. There will, however,
be data i want backed up somewhere else, which is where the offsite server
comes in... This server will be sitting in a Data Center and will have some
storage available to it (the whole server currently has 2 3Tb drives,
though they are not dedicated to the ZFS box, they are on VMware ESXi).
There is then some storage (currently 100Gb, but more can be requested) of
SFTP enabled backup which i plan to use for some snapshots, but more on
that later.

Anyway, i want to confirm my plan and make sure i am not missing anything
here...

* build server in house with storage, pools, etc...
* have a server in data center with enough storage for its reason, plus the
extra for offsite backup
* have one pool set as my offsite pool... anything in here should be
backed up off site also...
* possibly have another set as very offsite which will also be pushed to
the SFTP server, but not sure...
* give these pools out via SMB/NFS/iSCSI
* every 6 or so hours take a snapshot of the 2 offsite pools.
* do a ZFS send to the data center box
* nightly, on the very offsite pool, do a ZFS send to the SFTP server
* if anything goes wrong (my server dies, DC server dies, etc), Panic,
download, pray... the usual... :)

Anyway, I want to make sure i am doing this correctly... Is there anything
on that list that sounds stupid or am i doing anything wrong? am i missing
anything?

Also, as a follow up question, but slightly unrelated, when it comes to the
ZFS Send, i could use SSH to do the send, directly to the machine... Or i
could upload the compressed, and possibly encrypted dump to the server...
Which, for resume-ability and speed, would be suggested? And if i were to
go with an upload option, any suggestions on what i should use?
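For reference, the direct-to-machine variant is a single pipeline. Its weak point is resumability: a dropped connection means resending that whole increment. A sketch with placeholder host, pool and snapshot names:

```shell
# Initial full replication of the dataset to the offsite box.
zfs snapshot tank/data@base
zfs send tank/data@base | ssh backup-host zfs receive -F backup/data

# Thereafter, send only the delta between the last two snapshots.
zfs snapshot tank/data@2012-10-05
zfs send -i @base tank/data@2012-10-05 | \
    ssh backup-host zfs receive backup/data
```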

Thanks.


Re: [zfs-discuss] Building an On-Site and Off-Size ZFS server, replication question

2012-10-05 Thread Tiernan OToole
Thanks for that Jim!

Sounds like a plan there... One question about the storing ZFS dumps in a
file... So, the idea of storing the data in a SFTP server which has an
unknown underlying file system... Is that defiantly off limits, or can it
be done? and should i be doing a full dump or just an incremental one?
maybe incremental daily, and then a full dump weekly?

Thanks!

--Tiernan

On Fri, Oct 5, 2012 at 9:36 AM, Jim Klimov jimkli...@cos.ru wrote:

 2012-10-05 11:17, Tiernan OToole wrote:

 Also, as a follow up question, but slightly unrelated, when it comes to
 the ZFS Send, i could use SSH to do the send, directly to the machine...
 Or i could upload the compressed, and possibly encrypted dump to the
 server... Which, for resume-ability and speed, would be suggested? And
 if i where to go with an upload option, any suggestions on what i should
 use?


 As for this, the answer depends on network bandwidth, reliability,
 and snapshot file size - ultimately, on the probability and retry
 cost of an error during transmission.

 Many posters on the list strongly object to using files as storage
 for snapshot streams, because in reliability this is (may be) worse
 than a single-disk pool and bitrot on it - a single-bit error in
 a snapshot file can render it and all newer snapshots invalid and
 un-importable.

 Still, given enough scratch space on the sending and receiving sides
 and a bad (slow, glitchy) network in-between, I did go with compressed
 files of zfs-send streams (perhaps making recursion myself and using
 smaller files of one snapshot each - YMMV). For compression on multiCPU
 senders I can strongly suggest pigz --fast $filename (I did have
 problems in pigz-1.7.1 compressing several files with one command,
 maybe that's fixed now). If you're tight on space/transfer size more
 than on CPU, you can try other parallel algos - pbzip2, p7zip, etc.
 Likewise, you can also pass the file into an encryptor of your choice.

 Then I can rsync these files to the receiving server, using rsync -c
 and/or md5sum, sha256sum, sha1sum or whatever tool(s) of your liking
 to validate that the files received match those sent - better safe
 than sorry. I'm usually using rsync -cavPHK for any work, which
 gives you retryable transfers in case network goes down, status bar,
 directory recursion and hardlink support among other things.

 NFS is also retryable if so configured (even if the receiver gets
 rebooted in the process), and if you, for example, already have
 VPN between two sites, you might find it faster than rsync which
 involves extra encryption - maybe redundant in VPN case.

 When the scratch area on the receiver has got and validated the
 compressed snapshot stream, I can gzcat it and pipe into zfs recv.
 This ultimately validates that the zfs-send stream arrived intact
 and is fully receivable, and only then I can delete the temporary
 files involved - or retry the send from different steps (it is
 possible that the initial file was corrupted in RAM, etc.)

 Note that such approach via files essentially disables zfs-send
 deduplication which may be available in protocol between two
 active zfs commands, but AFAIK this does not preclude you from
 receiving data into deduped datasets - local dedup happens upon
 block writes anyway, like compression, encryption and stuff like
 that.

 HTH,
 //Jim Klimov







Re: [zfs-discuss] Building an On-Site and Off-Size ZFS server, replication question

2012-10-05 Thread Tiernan OToole
Thanks again Jim. Very handy info. This is now my weekend project, so
hopefully things go well!

--Tiernan

On Fri, Oct 5, 2012 at 10:40 AM, Jim Klimov jimkli...@cos.ru wrote:

 2012-10-05 13:13, Tiernan OToole wrote:

 Thanks for that Jim!

 Sounds like a plan there... One question about the storing ZFS dumps in
 a file... So, the idea of storing the data in a SFTP server which has an
 unknown underlying file system... Is that definitely off limits, or can
 it be done?


 Mileages do vary. Maybe you should pack the stream files into
 archives with error-correction codes (or at least verification
 CRCs) like ZIP, RAR, likely p7zip, maybe others; and also keep
 checksum files. At least this can help detect or even fix small
 nasty surprises.

 The general concern is that zfs send streams have no built-in
 redundancy, I'm not sure about error-checking - likely it is
 there. And it is widely assumed that this being a stream, a small
 error can redirect the flow widely differently from expectations
 and cause the whole dataset state to be invalid (likely this
 snapshot receiving will be aborted, and then you can't receive
 any newer ones over it).

 That said, some people do keep the streams on tape; the NDMP
 tools and protocol from Sun IIRC do the same for backups.
 So it's not off-limits, but precautions may be due (keep 2+
 copies, do CRC/ECC and so on).


  and should i be doing a full dump or just an incremental
  one? maybe incremental daily, and then a full dump weekly?

 A full dump of a large filesystem can be unbearably large for
 storage and transfers. Still, the idea of storing occasional
 full snapshots and a full history of incrementals (so that you
 can try to recover starting from any of the full snapshots you
 have) sounds sane. This way you have a sort of second copy by
 virtue of a full snapshot incorporating some state of the dataset
 and if the most recent one is broken - you can try to recover
 with the one(s) before it and applying more incremental snapshots.
 Likewise, errors in very old snapshots become irrelevant when a
 newer full snapshot is intact. But sometimes two or more things
 can break - or be detected to break - at once ;)
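The occasional-full-plus-incrementals scheme amounts to a chain like the following; losing one incremental only costs history back to the nearest intact full stream. Snapshot and file names are illustrative:

```shell
# Weekly: a full stream anchors the chain.
zfs send tank/data@sun | pigz --fast > full-sun.zfs.gz

# Daily: small deltas against the previous day's snapshot.
zfs send -i @sun tank/data@mon | pigz --fast > inc-mon.zfs.gz
zfs send -i @mon tank/data@tue | pigz --fast > inc-tue.zfs.gz

# Recovery drill: replay the full stream, then each incremental in order.
gzcat full-sun.zfs.gz | zfs receive -F restore/data
gzcat inc-mon.zfs.gz  | zfs receive restore/data
gzcat inc-tue.zfs.gz  | zfs receive restore/data
```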

 In particular, regular drills should be done (and provisioned
 for) to test that you can in fact recover from your backups,
 and that they do contain all the data you need. Older configs
 can become obsolete as live systems evolve...

 HTH,
 //Jim








Re: [zfs-discuss] Building an On-Site and Off-Size ZFS server, replication question

2012-10-05 Thread Tiernan OToole
Thanks Ian. That sounds like an option also. The plan was to break up the
file systems anyway, since some i will want replicated remotely, and
others not as much.

--Tiernan

On Fri, Oct 5, 2012 at 11:17 AM, Ian Collins i...@ianshome.com wrote:

 On 10/05/12 21:36, Jim Klimov wrote:

 2012-10-05 11:17, Tiernan OToole wrote:

 Also, as a follow up question, but slightly unrelated, when it comes to
 the ZFS Send, i could use SSH to do the send, directly to the machine...
 Or i could upload the compressed, and possibly encrypted dump to the
 server... Which, for resume-ability and speed, would be suggested? And
 if i where to go with an upload option, any suggestions on what i should
 use?

 As for this, the answer depends on network bandwidth, reliability,
 and snapshot file size - ultimately, on the probability and retry
 cost of an error during transmission.

 Many posters on the list strongly object to using files as storage
 for snapshot streams, because in reliability this is (may be) worse
 than a single-disk pool and bitrot on it - a single-bit error in
 a snapshot file can render it and all newer snapshots invalid and
 un-importable.

 Still, given enough scratch space on the sending and receiving sides
 and a bad (slow, glitchy) network in-between, I did go with compressed
 files of zfs-send streams (perhaps making recursion myself and using
 smaller files of one snapshot each - YMMV). For compression on multiCPU
 senders I can strongly suggest pigz --fast $filename (I did have
 problems in pigz-1.7.1 compressing several files with one command,
 maybe that's fixed now). If you're tight on space/transfer size more
 than on CPU, you can try other parallel algos - pbzip2, p7zip, etc.
 Likewise, you can also pass the file into an encryptor of your choice.


 I do have to suffer a slow, glitchy WAN to a remote server and rather than
 send stream files, I broke the data on the remote server into a more fine
 grained set of filesystems than I would do normally.  In this case, I made
 the directories under what would have been the leaf filesystems filesystems
 themselves.

 By spreading the data over more filesystems, the individual incremental
 sends are smaller, so there is less data to resend if the link burps during
 a transfer.

 --
 Ian.







Re: [zfs-discuss] Large scale performance query

2011-07-25 Thread Tiernan OToole
They dont go into too much detail on their setup, and they are not running
Solaris, but they do mention how their SATA cards see different drives
based on where they are placed. They also have a second revision at
http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/
which talks about building their system with 135TB in a single 45-bay 4U box...

I am also interested in this kind of scale... Looking at the BackBlaze box,
i am thinking of building something like this, but not in one go... so,
anything you do find out in your build, keep us informed! :)

--Tiernan

On Mon, Jul 25, 2011 at 4:25 PM, Roberto Waltman li...@rwaltman.com wrote:


 Phil Harrison wrote:
  Hi All,
 
  Hoping to gain some insight from some people who have done large scale
  systems before? I'm hoping to get some performance estimates, suggestions
  and/or general discussion/feedback.

 No personal experience, but you may find this useful:
 Petabytes on a budget


 http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/

 --

 Roberto Waltman







[zfs-discuss] Migrating from ZFS-Fuse to ZFS Proper...

2011-07-22 Thread Tiernan OToole
Good morning all.

I built a test system using Linux (Ubuntu) and ZFS Fuse, just for testing...
I formatted 2 drives in the pool as ZFS, and have been putting data on the
system to see how performance worked, etc... Is this pool compatible with
ZFS proper? Eg, Solaris Express? Open Indiana?, or Nexenta? Its not a major
problem if its not, as i was eventually going to kill the pool and create,
what i would call, production pool, but just wondering...

Thanks.

-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.geekphotographer.com
www.tiernanotoole.ie


Re: [zfs-discuss] Migrating from ZFS-Fuse to ZFS Proper...

2011-07-22 Thread Tiernan OToole
never mind... found an answer here:
http://groups.google.com/a/zfsonlinux.org/group/zfs-discuss/browse_thread/thread/3183dab146d5f1af/d9e9d59b19aa4401?lnk=raot

On Fri, Jul 22, 2011 at 4:59 PM, Tiernan OToole lsmart...@gmail.com wrote:

 Good morning all.

 I built a test system using Linux (Ubuntu) and ZFS Fuse, just for
 testing... I formatted 2 drives in the pool as ZFS, and have been putting
 data on the system to see how performance worked, etc... Is this pool
 compatible with ZFS proper? Eg, Solaris Express? Open Indiana?, or Nexenta?
 Its not a major problem if its not, as i was eventually going to kill the
 pool and create, what i would call, production pool, but just wondering...

 Thanks.

 --
 Tiernan O'Toole
 blog.lotas-smartman.net
 www.geekphotographer.com
 www.tiernanotoole.ie




-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.geekphotographer.com
www.tiernanotoole.ie


Re: [zfs-discuss] Zil on multiple usb keys

2011-07-18 Thread Tiernan OToole
OK, so taking 2 300GB disks and 2 500GB disks and creating an 800GB
mirrored-stripe thing is sounding like a bad idea... What about just
creating a pool of all disks, without using mirrors? I've seen something called
copies, which, if I am reading correctly, will make sure a number of copies
of a file exist... Am I reading that correctly? If this does work the way I
think it works, then taking all 4 disks, making one large 1.6TB pool, and
setting copies to 2 should, in theory, create a poor man's pool with
striping, right?
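For what it's worth, the setup being described might look like the sketch below (pool and device names are assumptions). One caveat worth knowing: copies=2 duplicates blocks but does not guarantee the copies land on different disks, so it protects against bad sectors rather than a whole-drive failure.

```shell
# Hypothetical sketch: one plain striped pool across all four disks,
# then tell ZFS to keep two copies of every block written afterwards.
zpool create tank c1t0d0 c1t1d0 c2t0d0 c2t1d0   # device names assumed
zfs set copies=2 tank                           # applies only to newly written data
zfs get copies tank                             # verify the setting
```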

--Tiernan

On Mon, Jul 18, 2011 at 7:20 AM, Brandon High bh...@freaks.com wrote:

 On Sun, Jul 17, 2011 at 12:13 PM, Edward Ned Harvey
 opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
  Actually, you can't do that.  You can't make a vdev from other vdev's,
 and when it comes to striping and mirroring your only choice is to do it the
 right way.
 
  If you were REALLY trying to go out of your way to do it wrong somehow, I
 suppose you could probably make a zvol from a stripe, and then export it to
 yourself via iscsi, repeat with another zvol, and then mirror the two iscsi
 targets.   ;-)  You might even be able to do the same crazy thing with
 simply zvol's and no iscsi...  But either way you'd really be going out of
 your way to create a problem.   ;-)

 The right way to do it, um, incorrectly is to create a striped device
 using SVM, and use that as a vdev for your pool.

 So yes, you could create two 800GB stripes, and use them to create a
 ZFS mirror. But it would be a really bad idea.

 -B

 --
 Brandon High : bh...@freaks.com




-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.tiernanotoolephotography.com
www.the-hairy-one.com


Re: [zfs-discuss] Zil on multiple usb keys

2011-07-16 Thread Tiernan OToole
Well, not knowing a lot about these, but if the flash stick is SSD-based,
then it might work well; if it's just a standard USB key rebundled as an
eSATA disk, maybe not...

On Fri, Jul 15, 2011 at 5:54 PM, Eugen Leitl eu...@leitl.org wrote:

 On Fri, Jul 15, 2011 at 04:21:13PM +, Tiernan OToole wrote:
  This might be a stupid question, but here goes... Would adding, say, 4 4
 or 8gb usb keys as a zil make enough of a difference for writes on an iscsi
 shared vol?
 
  I am finding reads are not too bad (40is mb/s over gige on 2 500gb drives
 stripped) but writes top out at about 10 and drop a lot lower... If I where
 to add a couple usb keys for zil, would it make a difference?

 Speaking of which, is there a point in using an eSATA flash stick?
 If yes, which?

 --
 Eugen* Leitl a href=http://leitl.org;leitl/a http://leitl.org
 __
 ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
 8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE




-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.tiernanotoolephotography.com
www.the-hairy-one.com


Re: [zfs-discuss] Zil on multiple usb keys

2011-07-16 Thread Tiernan OToole
Thanks for the info. I need to rebuild my machine and ZFS pool anyway; I'm kind of new
to this and realized I built it as a stripe, not a mirror... I also want to
add extra disks...

As a follow-up question:

I have 2 500GB internal drives and 2 300GB USB drives. If I were to create
two pools, with a 300GB and a 500GB in each, and then mirror over them, would
that work? Is it even possible? Or what would you recommend for that setup?

Thanks.

--Tiernan

On Fri, Jul 15, 2011 at 5:39 PM, Edward Ned Harvey 
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
  boun...@opensolaris.org] On Behalf Of Tiernan OToole
 
  This might be a stupid question, but here goes... Would adding, say, 4 4
 or
  8gb usb keys as a zil make enough of a difference for writes on an iscsi
 shared
  vol?
 
  I am finding reads are not too bad (40is mb/s over gige on 2 500gb drives
  stripped) but writes top out at about 10 and drop a lot lower... If I
 where to
  add a couple usb keys for zil, would it make a difference?

 Unfortunately, usb keys, even the fastest ones, are slower than physical
 hard drives.  I even went out of my way to buy a super expensive super fast
 USB3 16G fob...  And it's still slower than a super-cheap USB2 sata hard
 drive.

 There is a way you can evaluate the effect of adding a fast slog device
 without buying one.  (It would have to be a fast device, certainly no USB
 fobs.)  Just temporarily disable your ZIL.  That's the fastest you can
 possibly go.  If it makes a big difference, then getting a fast slog device
 will help you approach that theoretical limit.  If it doesn't make a huge
 difference, then adding slog will not do you any good.

 To disable ZIL, if your pool is sufficiently recent, use the zfs set sync=
 command.  It takes effect immediately.  If you have an older system, you'll
 have to use a different command, and you'll probably have to remount your
 filesystem in order for the change to take effect.
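Sketched as commands (the pool name "tank" is an assumption), the test described above would be roughly:

```shell
# On a sufficiently recent pool, toggle synchronous write behaviour.
zfs get sync tank            # check the current setting
zfs set sync=disabled tank   # run the write benchmark with the ZIL bypassed
zfs set sync=standard tank   # restore normal behaviour afterwards
```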




-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.tiernanotoolephotography.com
www.the-hairy-one.com


Re: [zfs-discuss] Zil on multiple usb keys

2011-07-16 Thread Tiernan OToole
So, I like the sound of that, but the box is a real Frankenbox... it
has 2 SATA ports, one used for the boot drive, one for one of the 500s...
the second 500GB is IDE. The 2 USB drives both internally are SATA, so
pulling one and plugging it in internally won't work that well... but thanks
for the info.

On Sat, Jul 16, 2011 at 3:09 PM, Edward Ned Harvey 
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

  From: Tiernan OToole [mailto:lsmart...@gmail.com]
  Sent: Saturday, July 16, 2011 7:46 AM
 
  I have 2 500Gb internal drives and 2 300Gb USB drives. If i where to
 create a 2
  pools, a 300Gb and a 500Gb in each, and then mirror over them, would that
  work? is it even posible? or what would you recomend for that setup?

 I think the risk of accidental disconnection is higher on the USB drive. So
 I would recommend swapping the disks inside the enclosures... One 500
 inside, one 500 outside, one 300 inside, one 300 outside. Mirror the 500G
 drives to each other, mirror the 300G drives to each other. That way, if
 you accidentally disconnect one or both of the external drives, you just
 plug it back in and everything moves forward without any problem.




-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.tiernanotoolephotography.com
www.the-hairy-one.com


Re: [zfs-discuss] Zil on multiple usb keys

2011-07-16 Thread Tiernan OToole
That's not a typo... I was thinking 2 pools, 800GB each, and mirrored... I
think I should mess around with this setup a bit more and see what I can get
working... it might work better if I just move them into a new enclosure... we'll
see what happens...

Thanks for the info on the USB drives... if the ZIL drive falls over, does
ZFS not recover well? Do I need to reboot fully?

Thanks.

--Tienan

On Sat, Jul 16, 2011 at 3:23 PM, Jim Klimov jimkli...@cos.ru wrote:

  2011-07-16 15:46, Tiernan OToole wrote:

 Thanks for the info. need to rebuild my machine and ZFS pool kind of
 new to this and realized i built it as a stripe, not a mirror... also, want
 to add extra disks...

  As a follow up question:

  I have 2 500Gb internal drives and 2 300Gb USB drives. If i where to
 create a 2 pools, a 300Gb and a 500Gb in each, and then mirror over them,
 would that work? is it even posible? or what would you recomend for that
 setup?

  Is there a typo? It would rather be a 2*300Gb mirror and a 2*500Gb
 mirror,
 with a stripe above them as much as writes can get balanced.

 That would work (with forcing on the command line). It is possible, but only
 moderately recommended, because unbalanced setups can have more issues than
 usual (hence you must use the force flag to enable such a setup).

 And just in case, this pool can not be a bootable rpool.

 You might make a 2*200Gb slice mirror for an rpool and a more balanced
 4*300Gb pool of any layout (raid10, raidz123)...

 As for using USB sticks, I started my unlucky setup with some sticks used
 as L2ARC, and about once a week the device got lost (possibly because
 a stick could slide a bit from its contact bay on the chassis - BIOS also
 did
 not see the stick until it was re-plugged). Loss of a device would also
 hang
 my pool for quite a long while...

  Thanks.

  --Tiernan

 On Fri, Jul 15, 2011 at 5:39 PM, Edward Ned Harvey 
 opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
  boun...@opensolaris.org] On Behalf Of Tiernan OToole
  
  This might be a stupid question, but here goes... Would adding, say, 4 4
 or
  8gb usb keys as a zil make enough of a difference for writes on an iscsi
 shared
  vol?
 
  I am finding reads are not too bad (40is mb/s over gige on 2 500gb
 drives
  stripped) but writes top out at about 10 and drop a lot lower... If I
 where to
  add a couple usb keys for zil, would it make a difference?

  Unfortunately, usb keys, even the fastest ones, are slower than physical
 hard drives.  I even went out of my way to buy a super expensive super
 fast
 USB3 16G fob...  And it's still slower than a super-cheap USB2 sata hard
 drive.

 There is a way you can evaluate the effect of adding a fast slog device
 without buying one.  (It would have to be a fast device, certainly no USB
 fobs.)  Just temporarily disable your ZIL.  That's the fastest you can
 possibly go.  If it makes a big difference, then getting a fast slog
 device
 will help you approach that theoretical limit.  If it doesn't make a huge
 difference, then adding slog will not do you any good.

 To disable ZIL, if your pool is sufficiently recent, use the zfs set sync=
 command.  It takes effect immediately.  If you have an older system,
 you'll
 have to use a different command, and you'll probably have to remount your
 filesystem in order for the change to take effect.




 --
 Tiernan O'Toole
 blog.lotas-smartman.net
 www.tiernanotoolephotography.com
 www.the-hairy-one.com





 --


 ++
 ||
 | Климов Евгений, Jim Klimov |
 | технический директор   CTO |
 | ЗАО ЦОС и ВТ  JSC COSHT |
 ||
 | +7-903-7705859 (cellular)  mailto:jimkli...@cos.ru 
 jimkli...@cos.ru |
 |  CC:ad...@cos.ru,jimkli...@mail.ru |
 ++
 | ()  ascii ribbon campaign - against html mail  |
 | /\- against microsoft attachments  |
 ++








-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.tiernanotoolephotography.com
www.the-hairy-one.com


[zfs-discuss] Zil on multiple usb keys

2011-07-15 Thread Tiernan OToole
This might be a stupid question, but here goes... Would adding, say, 4 4GB or 8GB
USB keys as a ZIL make enough of a difference for writes on an iSCSI shared
vol?

I am finding reads are not too bad (40ish MB/s over GigE on 2 500GB drives
striped) but writes top out at about 10 and drop a lot lower... If I were to
add a couple of USB keys for ZIL, would it make a difference?

Thanks.
Sent from a fruity device


[zfs-discuss] Recomendations for Storage Pool Config

2010-06-25 Thread Tiernan OToole
Good morning all.



This question has probably poped up before, but maybe not in this exact way…



I am planning on building a SAN for my home media centre, and have some of
the RAID cards I need for the build. I will be ordering the case soon, and
then the drives. The cards I have are two 8-port PCI-Express cards (a Dell
PERC 5 and an Adaptec card…). The case will have 20 hot-swap SAS/SATA drives,
and I will be adding a third RAID controller to allow the full 20 drives.



I have read something about trying to set up redundancy with the RAID
controllers, i.e. having zpools span multiple controllers. Given I won't
be using the on-board RAID features of the cards, I am wondering how this
should be set up…



I was thinking of vdevs of 2+2+1 drives (two from each of the two 8-port cards
plus one from the third controller) × 4, in RAIDZ2. This way, I could lose a
controller and not lose any data from the pool… But is this theory correct?
If I were to use 2TB drives, each vdev would be 10TB raw and 6TB usable…
giving me a total of 40TB raw and 24TB usable…
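As a rough sketch of the layout described above (device names are assumptions: c1/c2 for the two 8-port cards, c3 for the third controller), the pool might be created as:

```shell
# Four 5-disk RAIDZ2 vdevs, each taking 2+2+1 drives across the three
# controllers, so losing one controller costs at most 2 disks per vdev.
zpool create tank \
  raidz2 c1t0d0 c1t1d0 c2t0d0 c2t1d0 c3t0d0 \
  raidz2 c1t2d0 c1t3d0 c2t2d0 c2t3d0 c3t1d0 \
  raidz2 c1t4d0 c1t5d0 c2t4d0 c2t5d0 c3t2d0 \
  raidz2 c1t6d0 c1t7d0 c2t6d0 c2t7d0 c3t3d0
```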



Is this overkill? Should I be worrying about losing a controller?



Thanks in advance.



--Tiernan


Re: [zfs-discuss] Plan for upgrading a ZFS based SAN

2010-02-17 Thread Tiernan OToole
At the moment it's just one pool, with a plan to add the 500GB drives... What
would you recommend?

-Original Message-
From: Brandon High bh...@freaks.com
Sent: 17 February 2010 01:00
To: Tiernan OToole lsmart...@gmail.com
Cc: Robert Milkowski mi...@task.gda.pl; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Plan for upgrading a ZFS based SAN

On Tue, Feb 16, 2010 at 3:13 PM, Tiernan OToole lsmart...@gmail.com wrote:
 Cool... Thanks for the advice! But why would it be a good idea to change the
 layout on bigger disks?

On top of the reasons Bob gave, your current layout will be very
unbalanced after adding devices. You can't currently add more devices
to a raidz vdev or remove a top level vdev from a pool, so you'll be
stuck with 8 drives in a raidz2, 3 drives in a raidz, and any future
additions in additional vdevs.

When you say you have 2 pools, do you mean two vdevs in one pool, or
actually two pools?

-B

-- 
Brandon High : bh...@freaks.com
Indecision is the key to flexibility.



Re: [zfs-discuss] Plan for upgrading a ZFS based SAN

2010-02-16 Thread Tiernan OToole
So, does that work with RAIDZ1 and 2 pools?

On Tue, Feb 16, 2010 at 1:47 PM, Robert Milkowski mi...@task.gda.pl wrote:



 On Mon, 15 Feb 2010, Tiernan OToole wrote:

  Good morning all.

 I am in the process of building my V1 SAN for media storage in house, and
 i
 am already thinkg ov the V2 build...

 Currently, there are 8 250Gb hdds and 3 500Gb disks. the 8 250s are in a
 RAIDZ2 array, and the 3 500s will be in RAIDZ1...

 At the moment, the current case is quite full. i am looking at a 20 drive
 hotswap case, which i plan to order soon. when the time comes, and i start
 upgrading the drives to larger drives, say 1Tb drives, would it be easy to
 migrate the contents of the RAIDZ2 array to the new Array? I see mentions
 of
 ZFS Send and ZFS recieve, but i have no idea if they would do the job...



 if you can expose both disk arrays to the host then you can replace (zpool
 replace) a disk (vdev) one-by-one. Once you replaced all disks with larger
 ones zfs will automatically enlarge your pool.
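A sketch of that replace cycle (pool and device names are assumptions):

```shell
# Replace each disk with a larger one, one at a time, letting the
# resilver finish before moving on to the next disk.
zpool replace tank c2t1d0 c3t1d0   # old disk -> new larger disk
zpool status tank                  # wait until the resilver completes
# On builds that have the autoexpand property, enable it so the pool
# grows once the last disk is swapped; older builds may need an
# export/import to pick up the new size.
zpool set autoexpand=on tank
```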

 --
 Robert Milkowski
 http://milek.blogspot.com




-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.tiernanotoolephotography.com
www.the-hairy-one.com


Re: [zfs-discuss] Plan for upgrading a ZFS based SAN

2010-02-16 Thread Tiernan OToole
Cool... Thanks for the advice! But why would it be a good idea to change the layout
on bigger disks?

-Original Message-
From: Brandon High bh...@freaks.com
Sent: 16 February 2010 18:26
To: Tiernan OToole lsmart...@gmail.com
Cc: Robert Milkowski mi...@task.gda.pl; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Plan for upgrading a ZFS based SAN

On Tue, Feb 16, 2010 at 8:25 AM, Tiernan OToole lsmart...@gmail.com wrote:
 So, does that work with RAIDZ1 and 2 pools?

Yes. Replace all the disks in one vdev, and that vdev will become
larger. Your disk layout won't change though - You'll still have a
raidz vdev, a raidz2 vdev. It might be a good idea to revise the
layout a bit with larger disks.

If you do change the layout, then a send/receive is the easiest way to
move your data. It can be used to copy everything on the pool
(including snapshots, etc) to your new system.
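A sketch of that migration with send/receive (pool names are assumptions); on builds with recursive send support, -R carries datasets, snapshots and properties along:

```shell
zfs snapshot -r oldtank@migrate   # recursive snapshot of the whole pool
# Replicate everything into the new pool; -F rolls back the target,
# -d preserves dataset names, -u leaves the new datasets unmounted.
zfs send -R oldtank@migrate | zfs receive -Fdu newtank
```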

-B

-- 
Brandon High : bh...@freaks.com
Suspicion Breeds Confidence



[zfs-discuss] Plan for upgrading a ZFS based SAN

2010-02-15 Thread Tiernan OToole
Good morning all.

I am in the process of building my V1 SAN for media storage in-house, and I
am already thinking of the V2 build...

Currently, there are 8 250GB HDDs and 3 500GB disks. The 8 250s are in a
RAIDZ2 array, and the 3 500s will be in RAIDZ1...

At the moment, the current case is quite full. I am looking at a 20-drive
hot-swap case, which I plan to order soon. When the time comes, and I start
upgrading the drives to larger drives, say 1TB drives, would it be easy to
migrate the contents of the RAIDZ2 array to the new array? I see mentions of
ZFS send and ZFS receive, but I have no idea if they would do the job...

Thanks in advance.

-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.tiernanotoolephotography.com
www.the-hairy-one.com


[zfs-discuss] 3ware 9650 SE

2010-02-01 Thread Tiernan OToole




Good morning.

I'm looking at the 3ware 9650 SE RAID controller for a new build... has anyone
had any luck with this card? Their site says they support OpenSolaris...
has anyone used one?

Thanks.



Tiernan OToole
Software Developer
Chat Google Talk: lsmart...@gmail.com Skype: tiernanotoole MSN: lotas...@hotmail.com





Re: [zfs-discuss] 3ware 9650 SE

2010-02-01 Thread Tiernan OToole
Thanks for the feedback, lads... I don't really need the boot drives to be
on the array... I was going to use the onboard controller for that... I've got
an Adaptec card already, so might look at those again...


--Tiernan

On 01/02/2010 15:59, TheJay wrote:

I use the Beta 9.5.3 ISO Opensolaris package with OSOL DEV131 build - I had the 
3ware support team help me. It works like a charm on my 9650se-24m8 with 20 
drives


On Feb 1, 2010, at 3:02 AM, Kjetil Torgrim Homme wrote:

   

Tiernan O'Toole lsmart...@gmail.com writes:

 

looking at the 3ware 9650 SE raid controller for a new build... anyone
have any luck with this card? their site says they support
OpenSolaris... anyone used one?
   

didn't work too well for me.  it's fast and nice for a couple of days,
then the driver gets slower and slower, and eventually it gets stuck and
all I/O freezes.  preventive reboots were needed.  I used the newest
driver from 3ware/AMCC with 2008.11 and 2009.05.

--
Kjetil T. Homme
Redpill Linpro AS - Changing the game

 

   




Re: [zfs-discuss] Media server build

2010-01-31 Thread Tiernan OToole








Thanks for the info.

I will take the napp-it question offline with Günther and see if I can
fix that.

An Intel NIC sounds like a plan... I was thinking of sticking 2 in anyway...
Just looking at the cards, though: they are GigaNIX 2032T, and searching
for that online returns nothing when I include Solaris or
OpenSolaris... Pity...

Finally, I will use the 750 somewhere else, and use 3 500s I have
here... should be enough to start with...



Tiernan OToole
Software Developer
Chat Google Talk: lsmart...@gmail.com Skype: tiernanotoole MSN: lotas...@hotmail.com

On 31/01/2010 18:43, Günther wrote:

  hello

napp-it is just a simple cgi-script

common 3 reasons for error 500:

- file permissions: just set all files in /napp-it/.. to 777 recursively
  (rwx for all; napp-it will set them correctly on first call)

- files must be copied in ASCII mode
  (have you used WinSCP? otherwise you must take care of this)

- the cgi-bin folder must be allowed to hold executables
  (set in the Apache config file /etc/apache2/sites-enabled/000-default;
  should be OK by default)

- missing modules (not a problem in Nexenta)

please look at apache error log-file at
/var/log/apache2/error.log

there you will find the reason for this error
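The permission fix and log check above, sketched as commands (paths taken from the defaults named in the message):

```shell
chmod -R 777 /napp-it                  # fix the file permissions recursively
tail -20 /var/log/apache2/error.log    # the actual cause of the 500 is logged here
```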

your NICs:
I suppose your other NICs are not supported by default.
You can try to find and install drivers, or (better):
forget them and buy an Intel NIC (about 50 euro, no problem)

your HD:
if you build a RAID 1 or RAID-Z, the capacity depends on
the smallest drive, so your 750 HD is used as 500 gig
gea
  






Re: [zfs-discuss] Media server build

2010-01-30 Thread Tiernan OToole








Right... so, the machine booted with NexentaCore, but I have a couple of
questions... probably stupid ones...

Firstly, trying to install napp-it fails with 500 error messages... I've tried
all the tips; any recommendations?

Secondly, I have 8 250GB HDDs (on a RAID controller, but listed as
8 drives, no RAID), a 120GB (boot), a 500GB and a 750GB... the 8 drives
are in RAIDZ2 at the moment, but any recommendations on how I should set up the
other 2? I can swap the 750 for another 500, which I have 2 of... and I could
get all 3 500s in... Should I go RAIDZ1 with the 3 500s or do something
else?

Finally, I have 3 network cards in the box (not ZFS-specific, but I will
ask here anyway...) and only the onboard one has been configured, but it's
only 100Mb/s... How do I figure out what the others are and add them?

Again, stupid newbie questions here...



Tiernan OToole
Software Developer
Chat Google Talk: lsmart...@gmail.com Skype: tiernanotoole MSN: lotas...@hotmail.com

On 30/01/2010 09:33, Günther wrote:

  hello 

may i suggest my free napp-it zfs-server
it is based on free nexenta3 (core) or opensolaris/eon, 

(no hd limit, deduplication, zfs3, all the new stuff)

-with user editable webgui, 
-easy setup instructions (copy and run)
 and a hardware reference design. 

howto see
http://www.napp-it.org/napp-it.pdf

more
http://www.napp-it.org

gea
  






Re: [zfs-discuss] Media server build

2010-01-29 Thread Tiernan OToole
Thanks.

I have looked at NexentaStor, but I have a lot more drives than 2TB... I
know their NexentaCore could be better suited... I think it's also based on
OpenSolaris, correct?

On Fri, Jan 29, 2010 at 2:00 AM, Thomas Burgess wonsl...@gmail.com wrote:



 On Thu, Jan 28, 2010 at 7:58 PM, Tiernan OToole lsmart...@gmail.comwrote:

 Good morning. This is more than likley a stupid question on this alias
 but I will ask anyway. I am building a media server in the house and
 am trying to figure out what os to install. I know it must have zfs
 support but can't figure if I should use Freenas or open solaris.

 Free nas has the advantage of out of the box setup but is there
 anything similar for opensolaris? Also, ability to boot and install
 from USB key would be handy.

 Thanks.

 --Tiernan

 You should def. go with opensolaris or something based on opensolaris if
 possible BUT there is one major caveat

 OpenSolaris has a much smaller HCL than FreeBSD.  Finding hardware for Osol
 isn't hard if you design your system around it, but it CAN be an issue when
 you are using found hardware or old stuff you just happen to have.  If you
 are designing a system from scratch and don't mind doing the research, it's
 a nice option.

 The main reason i PERSONALLY say to go with Osol is that it has the newest
 ZFS features and i found CIFS performance to be great, not to mention easy
 to set up. (you download 2 packages and then simply use zfs set sharesmb=on,
 what could be easier)
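A sketch of that CIFS setup (the dataset name is an assumption; the two packages referred to are, as far as I recall, SUNWsmbs and SUNWsmbskr):

```shell
pkg install SUNWsmbs SUNWsmbskr   # the two CIFS server packages
svcadm enable -r smb/server       # start the kernel CIFS service
zfs set sharesmb=on tank/media    # share the dataset over SMB
```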

 FreeBSD is great too (and FreeNAS is based on FreeBSD) but for PURE
 fileserver/nas I think opensolaris is a better choice.






  --

 Tiernan O'Toole
 blog.lotas-smartman.net
 www.tiernanotoolephotography.com
 www.the-hairy-one.com





-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.tiernanotoolephotography.com
www.the-hairy-one.com


Re: [zfs-discuss] Media server build

2010-01-29 Thread Tiernan OToole
Cool, lads! Thanks for the links. I'm checking out Simon's posts now. Going to
try to get my hands on an external DVD drive and install something tomorrow...
NexentaCore might be the way to go if it has both OpenSolaris and Debian, or
StormOS, which sounds interesting too...

I agree, 4TB for a home server is not a lot... currently, raw, there is more
than that in the case at the moment, and there is another 4TB raw still to go
in (need a bigger case...).
Thanks again.
On Fri, Jan 29, 2010 at 10:34 PM, Simon Breden sbre...@gmail.com wrote:

 Yep, you're right, the topic was media server build :)

 Cheers,
 Simon
 --
 This message posted from opensolaris.org




-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.tiernanotoolephotography.com
www.the-hairy-one.com


[zfs-discuss] Media server build

2010-01-28 Thread Tiernan OToole
Good morning. This is more than likely a stupid question on this alias,
but I will ask anyway. I am building a media server in the house and
am trying to figure out what OS to install. I know it must have ZFS
support, but I can't figure out if I should use FreeNAS or OpenSolaris.

FreeNAS has the advantage of out-of-the-box setup, but is there
anything similar for OpenSolaris? Also, the ability to boot and install
from a USB key would be handy.

Thanks.

--Tiernan

-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.tiernanotoolephotography.com
www.the-hairy-one.com


Re: [zfs-discuss] $100 SSD = 5x faster dedupe

2010-01-07 Thread Tiernan OToole
Sorry to hijack the thread, but can you explain your setup? It sounds
interesting, but I need more info...

Thanks!

--Tiernan

On Jan 7, 2010 11:56 PM, Marty Scholes martyscho...@yahoo.com wrote:

Ian wrote:  Why did you set dedup=verify on the USB pool?
Because that is my last-ditch copy of the data and MUST be correct.  At the
same time, I want to cram as much data as possible into the pool.

If I ever go to the USB pool, something has already gone horribly wrong and
I am desperate.  I can't comprehend the anxiety I would have if one or more
stripes had a birthday collision giving me silent data corruption that I
found out about months or years later.

It's probably paranoid, but a level of paranoia I can live with.

Good question, by the way.
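For reference, the setting being discussed is a one-liner (the pool name "usbpool" is an assumption):

```shell
# verify = byte-compare candidate duplicate blocks instead of trusting
# the checksum alone, at the cost of extra reads on dedup hits.
zfs set dedup=verify usbpool
```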



Re: [zfs-discuss] ZFS iSCSI volume problems

2010-01-06 Thread Tiernan OToole
Stupid question, but would it by any chance be an Intel network adapter?
I had a weird problem on Windows which had the same issue... a new network
driver solved the problem... I wonder if the Intel driver has the same problem
on Solaris...

--Tiernan

On Wed, Jan 6, 2010 at 2:28 PM, John hort...@gmail.com wrote:

 I'm using snv_111 to host iSCSI for my backups. This went fine until I
 enabled compression on the volume. About halfway through a backup (~250gb
 done), Solaris loses its network connection with no errors logged
 (/var/adm/messages and /var/log/* with no entries for an hour preceding).
 After reformatting the iSCSI volume (from Windows) and starting from scratch
 to use compression, it takes about 10gb to trigger this. I've tried several
 switches and routers, and the same always happens - the Solaris system drops
 off the network for about 15 minutes.

 I tried to do a 'zfs destroy' on the volume, and it hung, along with all
 other zfs commands for about 3 hours. The volume only ever contained 550gb
 total of data.

 This is a 700gb volume on two mirrored 1tb drives.

 Looking back through the OpenSolaris-help mailing list, I see the zfs
 destroy hanging problem, but it looked like people believed that to be more
 related to dedup.

 Anyone have any ideas on what I could be doing better to avoid these
 behaviors?
 --
 This message posted from opensolaris.org




-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.tiernanotoolephotography.com
www.the-hairy-one.com


[zfs-discuss] De-Dupe and iSCSI

2009-11-03 Thread Tiernan OToole
Good morning all...

Great work on the dedupe stuff. I can't wait to try it out. But a quick question
about iSCSI and dedupe: will it work? If I share out a ZVOL to another
machine and copy some similar files to it (thinking VMs), will they get
de-duplicated?
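On builds of that era, a sketch of the setup being asked about might look like this (names are assumptions). Since dedup is a dataset property, it applies to zvol blocks too, so duplicate blocks inside VM images shared over iSCSI can be deduplicated:

```shell
zfs create -V 100G -o dedup=on tank/vmstore   # dedup'd zvol for VM images
zfs set shareiscsi=on tank/vmstore            # legacy iSCSI sharing (COMSTAR is the other route)
```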

Thanks.

-- 
Tiernan O'Toole
blog.lotas-smartman.net
www.tiernanotoolephotography.com
www.the-hairy-one.com