Re: [OpenIndiana-discuss] Does the latest OI version allow ashift=12 for zpool creation?

2015-10-28 Thread andy thomas
What I would like to do is create a new pool with ashift=12 even though 
the current disks have 512-byte sectors. Then, when the next disk fails, I 
can replace it with a current-model disk that has 4k sectors. Configuring 
the ashift at pool creation time can be done in FreeBSD and (I think) in 
ZFSonLinux as well.
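
For example, the syntax on those platforms is roughly as follows (pool and 
disk names are only placeholders, and the FreeBSD sysctl name is from memory):

    # ZFSonLinux: request 4K alignment explicitly at creation time
    zpool create -o ashift=12 tank raidz1 sda sdb sdc

    # FreeBSD (recent releases): raise the minimum auto-detected ashift
    sysctl vfs.zfs.min_auto_ashift=12
    zpool create tank raidz1 da0 da1 da2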


Andy

On Tue, 27 Oct 2015, jason matthews wrote:





If your new drives are misrepresenting their sector size you can override the 
sector size, thanks to George Wilson, in sd.conf. This is common for SSDs.


http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives

You can gather the data for the identifying text from iostat -En or from 
format -> select disk -> inquiry.
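
For example, an sd-config-list override in /etc/driver/drv/sd.conf might look 
roughly like this (the vendor/product string below is a placeholder and must 
match the inquiry data exactly, padding included):

    sd-config-list =
        "ATA     EXAMPLE-MODEL-X", "physical-block-size:4096";

A reconfigure reboot (or, I believe, update_drv -vf sd) is needed before the 
override takes effect for newly created pools.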


Once the zpool is created you can verify with zdb -vv <poolname> | grep ashift


j.

On 10/27/15 11:17 AM, andy thomas wrote:
I admit I haven't been on this forum for years, such is the reliability of 
my OI 148 server built in 2011. I'm using 3 x 2 TB WD2002FAEX 512-byte 
sector disks for a ZFS RAIDz1 pool in this server, and about a year ago one of 
these disks failed. I tried fitting a more recent WD2002 disk with 4k 
sectors, but zpool replace complained about the sector mismatch and in the 
end I had to find another WD2002FAEX (and it was quite expensive too).


This server is now free to be rebuilt, so I upgraded it today to OI 151a9, 
but the latest zpool doesn't seem to offer an option to set the ashift 
value. I wondered: if I installed the latest OI from DVD, destroying the 
original pool, would zpool default to ashift=12? That would allow me 
to mix 512- and 4096-byte-sector disks in vdevs.


Andy

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss




___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss





Andy Thomas,
Time Domain Systems

Tel: +44 (0)7866 556626
Fax: +44 (0)20 8372 2582
http://www.time-domain.co.uk

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Does the latest OI version allow ashift=12 for zpool creation?

2015-10-28 Thread jason matthews


I think I provided you the means to do that. Once upon a time there was 
a patch for zpool(1) to allow ashift to be set at creation time. As far as I 
know, it never made it into an OI distribution. At some point George Wilson 
added this functionality to the sd driver and no one looked back.


I am interested to know what you think about the performance of your 
512-byte drives working in a 4k zpool. Please let me know how that works out.


I use 8k sector sizes on our SSDs (which internally do 8k writes) with 
128k recordsizes for what are natively 8k writes from postgres. The 
SSDs have a mix of firmware that reports 512b and 4k sector sizes to the 
host. Nailing the sector size allowed me to skip flashing the firmware 
on hundreds of DC S3700 drives. Conventional wisdom is to align your 
recordsize with your I/O pattern. However, using 8k recordsizes horribly 
fragmented the pool over time and performance plummeted. I consider 
this formulation (large recordsize on 8k sectors) secret sauce for very 
active databases.
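
For what it's worth, recordsize is just a per-dataset property, so the tuning 
itself is a one-liner (the dataset name here is only a placeholder):

    zfs set recordsize=128k tank/pgdata
    zfs get recordsize tank/pgdata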


j.

On 10/28/15 4:40 AM, andy thomas wrote:
What I would like to do is create a new pool with ashift=12 even 
though the current disks have 512 byte/sectors. So when the next disk 
fails, I can get a current model disk with 4k sectors to replace it. 
Configuring ashift sizes at pool creation time can be done in FreeBSD 
and (I think) in ZFSonLinux as well.


Andy

On Tue, 27 Oct 2015, jason matthews wrote:





If your new drives are misrepresenting their sector size you can 
override the sector size, thanks to george wilson, in sd.conf. this 
is common for SSDs.


http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives 



You can gather the data for the identifying text from iostat -En or 
format -> select disk -> inquiry


once the zpool is created you can verify with zdb -vv <poolname> | grep ashift


j.

On 10/27/15 11:17 AM, andy thomas wrote:
I admit I haven't been on this forum for years, such is the 
reliability of my OI 148 server built in 2011. I'm using 3 x 2 TB 
WD2002FAEX 512-byte sector disks for a ZFS RAIDz1 pool in this and 
about a year ago, one of these disks failed - I tried fitting a more 
recent WD2002 disk with 4k sectors but zpool replace complained 
about the sector mismatch and in the end I had to find another 
WD2002FAEX (and it was quite expensive too).


This server is now free to be rebuilt so I upgraded it today to OI 
151a9 but the latest zpool doesn't seem to offer the option to set 
the ashift value. I wondered if I installed the latest OI from DVD, 
destroying the original pool, would zpool default to using 
ashift=12? This would allow me to use both 512 and 4096 byte sectors 
in vdevs.


Andy

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss




___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss





Andy Thomas,
Time Domain Systems

Tel: +44 (0)7866 556626
Fax: +44 (0)20 8372 2582
http://www.time-domain.co.uk

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss




___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Broken zpool

2015-10-28 Thread Bob Friesenhahn

On Tue, 27 Oct 2015, jason matthews wrote:


People who buy giant ass disks and then complain about how long it takes to 
resilver a giant ass disk are out of their minds. They remind me of morons 
who buy houses next to airports and then complain about the noise of airplanes.


It is difficult to buy anything but "giant ass disks" any more since 
that is mostly what they sell now.  IMHO a disk larger than 1TB 
already qualifies as a "giant ass disk" since resilver rates are not 
increasing (unless one switches to SSDs).


There is always the option of only using a fraction of the pool 
storage.


Regardless, any ZFS pool can fail, whatever its theoretical level 
of data redundancy.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Broken zpool

2015-10-28 Thread Jerry Kemp

Absolutely Bob,

I know that the OP wasn't happy to hear it, but Jason made a very good reply 
that shared the comments I'm sure we were all thinking.


I have a (Solaris admin) friend at another company who apparently has quite a 
few friends/colleagues/etc. who are running Solaris or some Solaris-based distro 
at home for ZFS-based data storage, and he shared what I feel is a somewhat 
unique viewpoint for the at-home user who won't back up.


From a high-level view, his comment to them is to NOT run a mirror.  His 
suggestion is to just run a single drive, then every evening or during 
downtime, bring the other disk(s) online and sync them with the online master 
using rsync or your favorite utility; once the sync is complete, offline 
the newly synced disk(s) and put them away.


I realize that none of this helps the OP at this point, but it is presented as 
food for thought, or as a directly related item of discussion.


Jerry




On 10/28/15 11:45 AM, Bob Friesenhahn wrote:


Regardless any zfs pool can fail, regardless of its theoretical level of data
redundancy.

Bob


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] Support for USB3?

2015-10-28 Thread Rich Teer
Hi all,

Hopefully a very quick question here: is USB3 supported yet, and if so, what
2+ port cards are recommended?  I'm specifically talking about using ZFS on
external hard drives, and am thinking of using SmartOS if that helps.  The
USB card would be going into a PCIe slot in my Ultra 20 M2.  (I'm currently
using one of the on-board USB2 interfaces, but the 480 Mb/s speed is killing
me!)

Cheers,

-- 
Rich Teer, Publisher
Vinylphile Magazine

www.vinylphilemag.com

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Broken zpool

2015-10-28 Thread jason matthews


Let me apologize in advance for intermixing comments.


On 10/27/15 7:44 PM, Rainer Heilke wrote:


I am not trying to be a dick (it happens naturally), but if you can't
afford to back up terabytes of data, then you can't afford to have
terabytes of data.


That is a meaningless statement that reflects nothing in real-world 
terms.
The true cost of a byte of data that you care about is the money you pay 
for the initial storage, and then the money you pay to back it up. For 
work, my front-line databases have 64TB of mirrored net storage. The 
costs don't stop there. There is another 200TB of net storage dedicated 
to holding enough log data to rebuild the last 18 months from scratch. I 
also have two sets of slaves that snapshot themselves frequently. One 
set is a single disk, the other is raidz. These are not just backups. 
One set runs batch jobs, one runs the front-end portal, and the masters 
are in charge of data ingestion.


The slaves are useful backups for zpool corruption on the front end, but 
not necessarily for human error. For human error, say where someone 
destroys a table that replicates across all the slaves and somehow isn't 
noticed until all the snapshots are deleted, then we have the logs. I 
have different kinds of backups taken at different intervals to handle 
different kinds of failures. Some are live, some are snapshots, and 
some are source data. You need to determine your level of risk 
tolerance. That might mean using zfs send/recv to two different zpools 
with the same or different protection levels.


If you don't back up, you set yourself up for unrecoverable problems. In 
four years of running high-transaction, high-throughput databases on ZFS 
I have had to rebuild pools from time to time for different reasons, 
though never for corruption. I have had other problems, like unbalanced 
write load across vdevs and metaslab fragmentation. My point is, don't 
underestimate the cost of maintaining a byte of data. You might need 
the backup one day, even with the protections that ZFS provides.


That said, instead of running mirrors, run loose disks and back up to the 
second pool at a frequency you are comfortable with. You need to 
prioritize your resources against your risk tolerance. It is tempting to 
do mirrors because it is sexy, but that might not be the best strategy.



This is just good stewardship of data you want to keep.


That's an arrogant statement, presuming that if a person doesn't have 
gobs of money, they shouldn't bother with computers at all.

I didn't write anything like that. What I am saying is you need to get 
more creative about how to protect your data. Yes, money makes it easier, 
but you have options.



People who buy giant ass disks and then complain about how long it takes
to resilver a giant ass disk are out of their minds.


I am not complaining about the time it takes; I know full well how 
long it can take. I am complaining that the "resilvering" stops dead. 
(More on this below.)


This is trickier. I don't recall you saying it stops dead. I thought it 
was just "slow."


When the scrub is stopped dead, what does "iostat -nMxC 1" look like? 
Are there drives indicating 100% busy? High wait or asvc_t times?


Do you have any controller errors? Does iostat -En report any errors?

Have you tried mounting the pool ro, stopping the scrub, and then 
copying data off?


Here are some Hail Mary settings that probably won't help. I offer them 
(in no particular order) to try to improve scrub performance, 
minimize the number of enqueued I/Os in case that is exacerbating the 
problem somehow, and limit the amount of time spent on a 
failing I/O. Your scrubs may be stopping because you have a disk 
exhibiting a poor failure mode, namely some sort of internal error where 
it just keeps retrying, which wedges the pool. WD is not the brand I 
go to for enterprise failure modes.


* don't spend more than 8 seconds on any single I/O
set sd:sd_io_time=8
* resilver in intervals of at least 5 seconds, with no resilver delay
set zfs:zfs_resilver_min_time_ms = 5000
set zfs:zfs_resilver_delay = 0
* allow only 2 in-flight scrub/resilver I/Os per top-level vdev
set zfs:zfs_top_maxinflight = 2
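
These go in /etc/system and take effect after a reboot; for a reboot-free 
experiment the resilver delay can, I believe, also be poked live with mdb 
(verify the symbol exists on your kernel before writing to it):

    echo 'zfs_resilver_delay/W0t0' | mdb -kw    # write decimal 0
    echo 'zfs_resilver_delay/D' | mdb -k        # read it back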


Apply these settings and try to resilver again. If this doesn't work, dd 
the drives to new ones. Using dd will likely identify which drive is 
wedging ZFS, as it will either not complete or it will error out.
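
A sketch of the dd pass, with placeholder device names (use the raw 
whole-disk devices and watch for read errors from the suspect source):

    dd if=/dev/rdsk/c6d0p0 of=/dev/rdsk/c9d0p0 bs=1024k
    iostat -En    # check whether the source disk accumulated new errors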

I have no idea what happened to your system for you to lose three disks
simultaneously.


This was covered in a thread ages ago; the tech took days to find the 
problem, which was a CMOS battery that was on Death's door.


I am not sure who the tech is, but at least two people on this list told 
you to check the CMOS battery. I think Bob and I both recommended changing 
the battery. Others might have as well.



I just dont see you recovering from this scenario where you have
two bad drives trying to resilver from each other.


They aren't trying to resilver from each other. The dead disk is gone. 
The good disk is trying to re

Re: [OpenIndiana-discuss] Support for USB3?

2015-10-28 Thread Bob Friesenhahn

On Wed, 28 Oct 2015, Rich Teer wrote:


Hi all,

Hopefully a very quick question here: is USB3 supported yet, and if so, what
2+ port cards are recommended?  I'm specifically talking about using ZFS on
external hard drives, and am thinking of using SmartOS if that helps.  The
USB card would be going into a PCIe slot in my Ultra 20 M2.  (I'm currently
using one of the on-board USB2 interfaces, but the 480 Mb/s speed is killing
me!)


USB-3 is not supported yet.  I have not heard of anyone working on it.

I have also been using USB-2 drives with ZFS for backups for over 6 years. 
I take care to constrain the amount of data 
which needs to be backed up and use compression, so USB-2 still works 
for me.
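
Enabling compression on the backup pool is a one-line property change (the 
pool/dataset name here is a placeholder; lz4 needs a reasonably recent 
illumos, otherwise fall back to lzjb):

    zfs set compression=lz4 backuppool/dumps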


Consider ZFS on Linux or FreeBSD if you need USB-3 and ZFS within the 
next year.


Also consider eSATA since that can be supported by OpenIndiana and is 
commonly available on external drives.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Support for USB3?

2015-10-28 Thread jason matthews



On 10/28/15 1:58 PM, Bob Friesenhahn wrote:
Also consider eSATA since that can be supported by OpenIndiana and is 
commonly available on external drives. 


The electrical and protocol specification for eSATA is the same as SATA. 
eSATA cables, I think, need to be shielded. The only difference is the 
amount of line current the controller has to put on the circuit to work 
over a longer distance. This is a long way of saying you could, in theory, 
turn an internal SATA jack into an external jack with high-quality shielded 
cables and a little luck.


Something like this:
http://www.monoprice.com/Product?p_id=7638&gclid=CPKc4KyJ5sgCFYZefgodcowF_w

Be advised, cables may be one area in life where you get what you pay 
for. I am not sure monoprice is the right choice but you get the idea.


j.


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Support for USB3?

2015-10-28 Thread Rich Teer
On Wed, 28 Oct 2015, Bob Friesenhahn wrote:

> USB-3 is not supported yet.  I have not heard of anyone working on it.

Ah, that is what I feared, but thanks for confirming it.

> Consider ZFS on Linux or FreeBSD if you need USB-3 and ZFS within the next
> year.

Yep; I've already started pondering those options...

> Also consider eSATA since that can be supported by OpenIndiana and is commonly
> available on external drives.

Agreed.  I was using eSATA with 3 single-disk enclosures, and it worked well
until the enclosures started failing (not the disks, in at least one case), so I
bought a 4-bay enclosure with eSATA and USB-3 connections.  The enclosure
requires that the HBA support SATA port multipliers, but unfortunately the
LSI HBA I'm using doesn't.  I'm more than willing to give eSATA a try, if
someone is happy to recommend a cheap enough HBA that is known to work.

-- 
Rich Teer, Publisher
Vinylphile Magazine

www.vinylphilemag.com

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Support for USB3?

2015-10-28 Thread Tim Mooney

In regard to: Re: [OpenIndiana-discuss] Support for USB3?, Bob Friesenhahn...:


USB-3 is not supported yet.  I have not heard of anyone working on it.


The day is coming when USB3 or 3.1 is going to be the standard for
motherboards, and it's going to be difficult to find a MoBo with USB2.

Who within the Illumos or OI community is capable of doing the necessary
development to add full USB3 support?

Does anyone know if there's been discussion of funding someone to
write the necessary driver(s)?

Tim
--
Tim Mooney tim.moo...@ndsu.edu
Enterprise Computing & Infrastructure  701-231-1076 (Voice)
Room 242-J6, Quentin Burdick Building  701-231-8541 (Fax)
North Dakota State University, Fargo, ND 58105-5164

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] What is the recommended way to back up root pool?

2015-10-28 Thread Bob Friesenhahn

What is the recommended approach to back up a zfs root pool?

For other pools I use zfs send/receive and/or rsync-based methods.

The zfs root pool is different since it contains multiple filesystems, 
with the filesystem for only one BE mounted at a time:


% zfs list -r -t filesystem rpool
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
rpool                               79.7G   377G    50K  /rpool
rpool/ROOT                          29.5G   377G    31K  legacy
rpool/ROOT/openindiana              15.8M   377G  3.15G  /
rpool/ROOT/openindiana-1            38.9M   377G  5.97G  /
rpool/ROOT/openindiana-2            40.6M   377G  11.3G  /
rpool/ROOT/openindiana-2-backup-1    124K   377G  10.5G  /
rpool/ROOT/openindiana-3            48.4M   377G  13.2G  /
rpool/ROOT/openindiana-3-backup-1     76K   377G  11.4G  /
rpool/ROOT/openindiana-3-backup-2     45K   377G  11.5G  /
rpool/ROOT/openindiana-3-backup-3    123K   377G  11.9G  /
rpool/ROOT/openindiana-3-backup-4     44K   377G  12.1G  /
rpool/ROOT/openindiana-4            20.1M   377G  18.8G  /
rpool/ROOT/openindiana-4-backup-1     95K   377G  13.3G  /
rpool/ROOT/openindiana-4-backup-2    156K   377G  18.8G  /
rpool/ROOT/openindiana-5            17.5M   377G  18.9G  /
rpool/ROOT/openindiana-6            29.4G   377G  17.6G  /
rpool/export                         121M   377G    32K  /export
rpool/export/home                    121M   377G    32K  /export/home

This means that there are multiple filesystems which would need to be 
backed up in order to save a replica of the pool.


At the moment I am using rsync-based backup of only selected 
filesystems.


It is not clear to me where configuration created by utilities like 
'ipadm' and 'dladm' is stored, but I am pretty sure it ends up in files 
under the /etc directory.


What is recommended/common practice for backing up the root pool?

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What is the recommended way to back up root pool?

2015-10-28 Thread Doug Hughes
For home or for office?
For office, I don't back up the root pool. It's considered disposable and
reproducible via reinstall (that plus config management).
For home, you can zfs send it somewhere to a file if you want, or you can
tar it up, since that makes it easier to restore individual files after an
oops. I do the latter. Then you could reinstall from a golden image and
restore the files you need.
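
A send-to-file sketch, with placeholder snapshot and file names (-R picks up 
all descendant filesystems and their properties):

    zfs snapshot -r rpool@backup-20151028
    zfs send -R rpool@backup-20151028 | gzip > /backup/rpool-20151028.zfs.gz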


On Wed, Oct 28, 2015 at 7:24 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:

> What is the recommended approach to back up a zfs root pool?
>
> For other pools I use zfs send/receive and/or rsync-based methods.
>
> The zfs root pool is different since it contains multiple filesystems,
> with the filesystem for one one BE being mounted at a time:
>
> % zfs list -r -t filesystem rpool
> NAME                                 USED  AVAIL  REFER  MOUNTPOINT
> rpool                               79.7G   377G    50K  /rpool
> rpool/ROOT                          29.5G   377G    31K  legacy
> rpool/ROOT/openindiana              15.8M   377G  3.15G  /
> rpool/ROOT/openindiana-1            38.9M   377G  5.97G  /
> rpool/ROOT/openindiana-2            40.6M   377G  11.3G  /
> rpool/ROOT/openindiana-2-backup-1    124K   377G  10.5G  /
> rpool/ROOT/openindiana-3            48.4M   377G  13.2G  /
> rpool/ROOT/openindiana-3-backup-1     76K   377G  11.4G  /
> rpool/ROOT/openindiana-3-backup-2     45K   377G  11.5G  /
> rpool/ROOT/openindiana-3-backup-3    123K   377G  11.9G  /
> rpool/ROOT/openindiana-3-backup-4     44K   377G  12.1G  /
> rpool/ROOT/openindiana-4            20.1M   377G  18.8G  /
> rpool/ROOT/openindiana-4-backup-1     95K   377G  13.3G  /
> rpool/ROOT/openindiana-4-backup-2    156K   377G  18.8G  /
> rpool/ROOT/openindiana-5            17.5M   377G  18.9G  /
> rpool/ROOT/openindiana-6            29.4G   377G  17.6G  /
> rpool/export                         121M   377G    32K  /export
> rpool/export/home                    121M   377G    32K  /export/home
>
> This means that there are multiple filesystems which would need to be
> backed up in order to save a replica of the pool.
>
> At the moment I am using rsync-based backup of only selected filesystems.
>
> It is not clear to me where configuration due to utilities like 'ipadm'
> and 'dladm' is stored, but I am pretty sure it is to files under the /etc
> directory.
>
> What is recommended/common practice for backing up the root pool?
>
> Bob
> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What is the recommended way to back up root pool?

2015-10-28 Thread jason matthews



At home, I do nothing for rpools. I don't even mirror them and I haven't 
even written out a process for recovery. I assume I can just reinstall 
and re-import any custom XML files for SMF that I have archived.


For work, I have automation to rebuild rpools but I don't back them up.

Here is one thought: use snapshots on your rpool and synchronize one or 
more mirrors once per day on a rotating basis. Once complete, split the 
mirror. I would nail the boot device in the LSI config using alt-B; this 
command sequence might be hidden depending on the version of firmware on 
your controller. Make sure you install the grub boot blocks too.
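
Installing the boot blocks on the other half of the mirror is something like 
this (the slice name is a placeholder for whichever disk you intend to boot 
from):

    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0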


I haven't tested this, and the one time I actually tried to split a mirror 
the system crashed. It might be worth exploring though.



j.

On 10/28/15 4:24 PM, Bob Friesenhahn wrote:

What is the recommended approach to back up a zfs root pool?

For other pools I use zfs send/receive and/or rsync-based methods.

The zfs root pool is different since it contains multiple filesystems, 
with the filesystem for one one BE being mounted at a time:


% zfs list -r -t filesystem rpool
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
rpool                               79.7G   377G    50K  /rpool
rpool/ROOT                          29.5G   377G    31K  legacy
rpool/ROOT/openindiana              15.8M   377G  3.15G  /
rpool/ROOT/openindiana-1            38.9M   377G  5.97G  /
rpool/ROOT/openindiana-2            40.6M   377G  11.3G  /
rpool/ROOT/openindiana-2-backup-1    124K   377G  10.5G  /
rpool/ROOT/openindiana-3            48.4M   377G  13.2G  /
rpool/ROOT/openindiana-3-backup-1     76K   377G  11.4G  /
rpool/ROOT/openindiana-3-backup-2     45K   377G  11.5G  /
rpool/ROOT/openindiana-3-backup-3    123K   377G  11.9G  /
rpool/ROOT/openindiana-3-backup-4     44K   377G  12.1G  /
rpool/ROOT/openindiana-4            20.1M   377G  18.8G  /
rpool/ROOT/openindiana-4-backup-1     95K   377G  13.3G  /
rpool/ROOT/openindiana-4-backup-2    156K   377G  18.8G  /
rpool/ROOT/openindiana-5            17.5M   377G  18.9G  /
rpool/ROOT/openindiana-6            29.4G   377G  17.6G  /
rpool/export                         121M   377G    32K  /export
rpool/export/home                    121M   377G    32K  /export/home

This means that there are multiple filesystems which would need to be 
backed up in order to save a replica of the pool.


At the moment I am using rsync-based backup of only selected filesystems.

It is not clear to me where configuration due to utilities like 
'ipadm' and 'dladm' is stored, but I am pretty sure it is to files 
under the /etc directory.


What is recommended/common practice for backing up the root pool?

Bob



___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What is the recommended way to back up root pool?

2015-10-28 Thread jason matthews



zpool split may also be undocumented depending on your distribution.

j.

On 10/28/15 5:08 PM, jason matthews wrote:



At home, I do nothing for rpools. I dont even mirror them and I havent 
even written out a process for recovery. I assume I can just reinstall 
and re-import any custom xml files for smf that i have archived.


for work, i have automation to rebuild rpools but i dont back them up.

Here is one thought. Use snapshots on your rpool, synchronize one or 
more mirrors once per day on a rotation basis. once complete, split 
the mirror. I would nail the boot device in the LSI config using 
alt-b. this command sequence might be hidden depending on the version 
of firmware on your controller. make sure you are install the grub 
boot blocks too.


I havent tested this and the one time i actually tried to split a 
mirror the system crashed. It might be worth exploring though.



j.

On 10/28/15 4:24 PM, Bob Friesenhahn wrote:

What is the recommended approach to back up a zfs root pool?

For other pools I use zfs send/receive and/or rsync-based methods.

The zfs root pool is different since it contains multiple 
filesystems, with the filesystem for one one BE being mounted at a time:


% zfs list -r -t filesystem rpool
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
rpool                               79.7G   377G    50K  /rpool
rpool/ROOT                          29.5G   377G    31K  legacy
rpool/ROOT/openindiana              15.8M   377G  3.15G  /
rpool/ROOT/openindiana-1            38.9M   377G  5.97G  /
rpool/ROOT/openindiana-2            40.6M   377G  11.3G  /
rpool/ROOT/openindiana-2-backup-1    124K   377G  10.5G  /
rpool/ROOT/openindiana-3            48.4M   377G  13.2G  /
rpool/ROOT/openindiana-3-backup-1     76K   377G  11.4G  /
rpool/ROOT/openindiana-3-backup-2     45K   377G  11.5G  /
rpool/ROOT/openindiana-3-backup-3    123K   377G  11.9G  /
rpool/ROOT/openindiana-3-backup-4     44K   377G  12.1G  /
rpool/ROOT/openindiana-4            20.1M   377G  18.8G  /
rpool/ROOT/openindiana-4-backup-1     95K   377G  13.3G  /
rpool/ROOT/openindiana-4-backup-2    156K   377G  18.8G  /
rpool/ROOT/openindiana-5            17.5M   377G  18.9G  /
rpool/ROOT/openindiana-6            29.4G   377G  17.6G  /
rpool/export                         121M   377G    32K  /export
rpool/export/home                    121M   377G    32K  /export/home

This means that there are multiple filesystems which would need to be 
backed up in order to save a replica of the pool.


At the moment I am using rsync-based backup of only selected 
filesystems.


It is not clear to me where configuration due to utilities like 
'ipadm' and 'dladm' is stored, but I am pretty sure it is to files 
under the /etc directory.


What is recommended/common practice for backing up the root pool?

Bob





___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What is the recommended way to back up root pool?

2015-10-28 Thread jason matthews


Never mind, zpool split doesn't work on rpools.

j.

On 10/28/15 5:08 PM, jason matthews wrote:



zfs split may also be undocumented depending on your distribution.

j.

On 10/28/15 5:08 PM, jason matthews wrote:



At home, I do nothing for rpools. I dont even mirror them and I 
havent even written out a process for recovery. I assume I can just 
reinstall and re-import any custom xml files for smf that i have 
archived.


for work, i have automation to rebuild rpools but i dont back them up.

Here is one thought. Use snapshots on your rpool, synchronize one or 
more mirrors once per day on a rotation basis. once complete, split 
the mirror. I would nail the boot device in the LSI config using 
alt-b. this command sequence might be hidden depending on the version 
of firmware on your controller. make sure you are install the grub 
boot blocks too.


I havent tested this and the one time i actually tried to split a 
mirror the system crashed. It might be worth exploring though.



j.

On 10/28/15 4:24 PM, Bob Friesenhahn wrote:

What is the recommended approach to back up a zfs root pool?

For other pools I use zfs send/receive and/or rsync-based methods.

The zfs root pool is different since it contains multiple 
filesystems, with the filesystem for one one BE being mounted at a 
time:


% zfs list -r -t filesystem rpool
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
rpool                               79.7G   377G    50K  /rpool
rpool/ROOT                          29.5G   377G    31K  legacy
rpool/ROOT/openindiana              15.8M   377G  3.15G  /
rpool/ROOT/openindiana-1            38.9M   377G  5.97G  /
rpool/ROOT/openindiana-2            40.6M   377G  11.3G  /
rpool/ROOT/openindiana-2-backup-1    124K   377G  10.5G  /
rpool/ROOT/openindiana-3            48.4M   377G  13.2G  /
rpool/ROOT/openindiana-3-backup-1     76K   377G  11.4G  /
rpool/ROOT/openindiana-3-backup-2     45K   377G  11.5G  /
rpool/ROOT/openindiana-3-backup-3    123K   377G  11.9G  /
rpool/ROOT/openindiana-3-backup-4     44K   377G  12.1G  /
rpool/ROOT/openindiana-4            20.1M   377G  18.8G  /
rpool/ROOT/openindiana-4-backup-1     95K   377G  13.3G  /
rpool/ROOT/openindiana-4-backup-2    156K   377G  18.8G  /
rpool/ROOT/openindiana-5            17.5M   377G  18.9G  /
rpool/ROOT/openindiana-6            29.4G   377G  17.6G  /
rpool/export                         121M   377G    32K  /export
rpool/export/home                    121M   377G    32K  /export/home

This means that there are multiple filesystems which would need to 
be backed up in order to save a replica of the pool.


At the moment I am using rsync-based backup of only selected 
filesystems.


It is not clear to me where configuration due to utilities like 
'ipadm' and 'dladm' is stored, but I am pretty sure it is to files 
under the /etc directory.


What is recommended/common practice for backing up the root pool?

Bob





___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss




___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What is the recommended way to back up root pool?

2015-10-28 Thread Bob Friesenhahn

On Wed, 28 Oct 2015, Doug Hughes wrote:


for home or for office?
for office, I don't back up root pool. it's considered disposible and
reproducible via reinstall. (that plus config management)
for home, you can zfs send it somewhere to a file if you want, or you can
tar it up since that's probably easier to restore individual files after an
oops. I do the latter. Then you could reinstall from golden image and
restore the files you need.


Assume that this is for a network server with advanced network 
configuration settings, ssh config, zones, etc.


If it was the same as a standard OS install without subsequent 
configuration, then backing up would not be so important.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What is the recommended way to back up root pool?

2015-10-28 Thread Bob Friesenhahn

On Wed, 28 Oct 2015, jason matthews wrote:



never mind, zfs split doesnt work on rpools.


'zpool offline -t pool device', 'dd' device image, 'zpool online pool 
device' would work but it would be a stupid backup and extremely slow 
if the disk is large.  If the other disk failed during this procedure, 
then the system would panic.  Some resilvering would be required when 
the device is placed back on line.
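
For completeness, that sequence would look roughly like this (device and 
image names are placeholders):

    zpool offline -t rpool c1t1d0s0
    dd if=/dev/rdsk/c1t1d0s0 of=/backup/rpool-half.img bs=1024k
    zpool online rpool c1t1d0s0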


It is not clear to me where persistent configuration is stored.  For 
example, 'ipadm' accesses a service via 
"/etc/svc/volatile/ipadm/ipmgmt_door".  I was thinking that the 
configuration data is stored in /etc/svc/volatile/ipadm/aobjmap.conf 
but evidence suggests that this data is used when the ipmgmtd daemon 
is restarted.


If one restores a system based on a partial file backup (e.g. /etc), 
what files need to be restored to restore the persistent network 
configuration?  The manual page does not say.


ZFS configuration is easy thanks to zpool import/export and 'zpool history'. 
However, one must still remember which pools should be imported (if 
they are remote).


Doing a zfs send of the current boot environment (ignoring all others) 
still seems easiest.  This still misses any filesystems outside of the 
boot environment.
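
A sketch of that, assuming rpool/ROOT/openindiana-6 is the active BE (the 
snapshot, host, and file names are illustrative only):

    zfs snapshot rpool/ROOT/openindiana-6@backup
    zfs send rpool/ROOT/openindiana-6@backup | ssh backuphost 'cat > /backup/oi-be.zfs'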


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What is the recommended way to back up root pool?

2015-10-28 Thread Jason Matthews

Go old school. Use dd to manually mirror two or more identical drives. 

Apply boot blocks. Nail the boot device in the LSI config or BIOS. 

J. 

Sent from my iPhone

> On Oct 28, 2015, at 5:59 PM, Bob Friesenhahn  
> wrote:
> 
>> On Wed, 28 Oct 2015, Doug Hughes wrote:
>> 
>> for home or for office?
>> for office, I don't back up root pool. it's considered disposible and
>> reproducible via reinstall. (that plus config management)
>> for home, you can zfs send it somewhere to a file if you want, or you can
>> tar it up since that's probably easier to restore individual files after an
>> oops. I do the latter. Then you could reinstall from golden image and
>> restore the files you need.
> 
> Assume that this is for a network server with advanced network configuration 
> settings, ssh config, zones, etc.
> 
> If was the same as a standard OS install without subsequent configuration, 
> then backing up would not be so important.
> 
> Bob
> -- 
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
> 
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
> 

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] What is the recommended way to back up root pool?

2015-10-28 Thread Doug Hughes
I'd put that in my config mgmt system (or installer)


Sent from my android device.

-Original Message-
From: Bob Friesenhahn 
To: Discussion list for OpenIndiana 
Sent: Wed, 28 Oct 2015 21:00
Subject: Re: [OpenIndiana-discuss] What is the recommended way to back up root 
pool?

On Wed, 28 Oct 2015, Doug Hughes wrote:

> for home or for office?
> for office, I don't back up root pool. it's considered disposible and
> reproducible via reinstall. (that plus config management)
> for home, you can zfs send it somewhere to a file if you want, or you can
> tar it up since that's probably easier to restore individual files after an
> oops. I do the latter. Then you could reinstall from golden image and
> restore the files you need.

Assume that this is for a network server with advanced network 
configuration settings, ssh config, zones, etc.

If was the same as a standard OS install without subsequent 
configuration, then backing up would not be so important.

Bob
-- 
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Broken zpool

2015-10-28 Thread Rainer Heilke

On 28/10/2015 1:47 PM, jason matthews wrote:



Let me apologize in advanced for inter-mixing comments.

Ditto.


I am not trying to be a dick (it happens naturally), but if you cant
afford to backup terabytes of data, then you cant afford to have
terabytes of data.


That is a meaningless statement, that reflects nothing in real-world
terms.

The true cost of a byte of data that you care about is the money you pay
for the initial storage, and then the money you pay to back it up. For
work, my front line databases have 64TB of mirrored net storage.


When you said "you," it implied (to me, at least) a home system, since 
we've been talking about a home system from the start. Certainly, if it is a 
system that a company uses for its data, all of what you say is correct. 
But a company, regardless of size, can write these expenses off. 
Individuals cannot do that with their home systems. For them, this 
paradigm is much more vague, if it exists at all.


So, while I was talking apples, you were talking parsnips. My apologies 
for not making that clearer. (All of that said, the DVD drive has been 
acting up. Perhaps a writable Blu-Ray is in the wind. Since the price of 
them has dropped further than the price of oil, that may make backups of 
the more important data possible.)



The
costs dont stop there. There is another 200TB of net storage dedicated
to holding enough log data to rebuild the last 18 months from scratch. I
also have two sets of slaves that snapshot themselves frequently. One
set is a single disk, the other is raidz. These are not just backups.
One set runs batch jobs, one runs the front end portal, and the masters
are in charge of data ingestions.


Don't forget the costs added on by off-site storage, etc. I don't care 
how many times the data is backed up, if it's all in the same building 
that just burned to the ground... That is, unless your zfs sends are 
going to a different site...



If you dont backup, you set yourself up for unrecoverable problems. In


I believe this may be the first time (for me) that simply replacing a 
failed drive resulted in data corruption in a zpool. I've certainly 
never seen this level of mess before.



That said, instead of running mirrors run loose disks and backup to the
second pool at a frequency you are comfortable with. You need to
prioritize your resources against your risk tolerance. It is tempting to
do mirrors because it is sexy but that might not be the best strategy.


That is something for me to think about. (I don't do *anything* on 
computers because it's "sexy." I did mirrors for security; remember, 
they hadn't failed for me at such a monumental level previously.)



That's an arrogant statement, presuming that if a person doesn't have
gobs of money, they shouldn't bother with computers at all.

I didnt write anything like that. What I am saying is you need to get
more creative on how to protect your data. Yes, money makes it easier
but you have options.


My apologies; on its own, it came across that way.


I am not complaining about the time it takes; I know full well how
long it can take. I am complaining that the "resilvering" stops dead.
(More on this below.)



When the scrub is stopped dead, what does "iostat -nMxC  1" look like?
Are there drives indicating 100% busy? high wait or asvc_t times?


sudo iostat -nMxC 1
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4t0d0
   23.3   55.2    0.6    0.3  0.2  0.3    2.1    4.5   5  27 c3d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.2   0   0 c3d1
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    5.5   0   0 c6d1
  360.1   13.1   29.0    0.1  1.3  1.5    3.4    4.0  48  82 c6d0
    9.7  330.9    0.0   29.1  0.1  0.6    0.3    1.6   9  52 c7d1
  359.9  354.6   28.3   28.5 30.2  3.4   42.2    4.7  85  85 data
   23.2   34.9    0.6    0.3  6.2  0.3  106.9    5.6   6  12 rpool
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c4t0d0
    0.0  112.1    0.0    0.3  0.0  0.4    0.0    4.0   0  45 c3d0
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c3d1
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c6d1
   71.0   10.0    2.3    0.0  1.6  1.1   19.8   14.0  54  60 c6d0
   40.0   44.0    0.1    2.2  0.2  1.1    1.8   12.8  12  83 c7d1
  111.1   58.0    2.4    2.2 18.9  3.5  112.0   20.6  54  54 data
    0.0   58.0    0.0    0.3  0.0  0.0    0.0    0.6   0   3 rpool
                    extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0