Re: [zfs-discuss] zfs-discuss mailing list opensolaris EOL

2013-02-16 Thread Bryan Horstmann-Allen
+--
| On 2013-02-17 18:40:47, Ian Collins wrote:
| 
| One of its main advantages is that it has been platform agnostic.  We see
| Solaris, Illumos, BSD and more recently ZFS on Linux questions all given the
| same respect.
| 
| I do hope we can get another, platform agnostic, home for this list.

As the guy who provides the illumos mailing list services, and as someone who
has deeply vested interests in seeing ZFS thrive on all platforms, I'm happy to
suggest that we'd welcome all comers on z...@lists.illumos.org.

The more brains in one place, the better.
-- 
bdha
cyberpunk is dead. long live cyberpunk.


Re: [zfs-discuss] zfs-discuss mailing list opensolaris EOL

2013-02-16 Thread Bryan Horstmann-Allen
+--
| On 2013-02-17 01:17:58, Tim Cook wrote:
| 
| While I'm sure many appreciate the offer as I do, I can tell you for me
| personally: never going to happen.  Why would I spend all that time and
| energy participating in ANOTHER list controlled by Oracle, when they have
| shown they have no qualms about eliminating it with basically 0 warning, at
| their whim?  I'll be sticking to the illumos lists that I'm confident will
| be turned over to someone else should the current maintainers decide they
| no longer wish to contribute to the project.  On the flip side, I think we
| welcome all Oracle employees to participate in that list should corporate
| policy allow you to.

The current maintainers (Pobox.com, Listbox.com) have a long history with open
source software in the Perl community, both on the CPAN and more recently with
employee time being allocated for work on the interpreter releases.

Pobox is in love with both email _and_ open source.

We're also halfway done migrating our systems from Solaris (which we've run
since 2006) to OmniOS. Like many other companies represented on the zfs list,
Pobox/Listbox is betting on illumos. We'll continue to play nicely.

If something occurs and we can no longer host the services we currently do, I
personally guarantee a cordial handover (more than likely to something Josh and
I set up.)

Furthermore, at my dayjob, we're deploying Joyent's SmartDatacenter in a
non-trivial way, and so I'm also committed to the success of SmartOS.

Finally: There are still a lot of knowledgeable people with good ethics doing
good work at Oracle. I'd love to see them on the illumos lists.

(If anyone wants to grump about that, they should ask to borrow that particular
Oracle employee's shoes to walk a mile in, before they stick one of their own in
their mouth.)

Cheers.
-- 
bdha
cyberpunk is dead. long live cyberpunk.


Re: [zfs-discuss] grrr, How to get rid of mis-touched file named `-c'

2011-11-23 Thread Bryan Horstmann-Allen
+--
| On 2011-11-23 13:43:10, Harry Putnam wrote:
| 
| Somehow I touched some rather peculiar file names in ~.  Experimenting
| with something I've now forgotten I guess.
| 
| Anyway I now have 3 zero length files with names -O, -c, -k.

Use '--' to denote the end of arguments.

  $ uname -a
  SunOS lab 5.10 Generic_142910-17 i86pc i386 i86pc
  $ touch -- -c
  $ ls -l
  total 1
  -rw-r--r--   1 bda      bda            0 Nov 23 15:25 -c
  $ rm -- -c
  $ ls -l
  total 0
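
A leading ./ works too, if you'd rather not reach for --:

  $ rm ./-c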

Cheers.
-- 
bdha
cyberpunk is dead. long live cyberpunk.


Re: [zfs-discuss] detach configured log devices?

2011-03-16 Thread Bryan Horstmann-Allen
+--
| On 2011-03-16 12:33:58, Jim Mauro wrote:
| 
| With ZFS, Solaris 10 Update 9, is it possible to
| detach configured log devices from a zpool?
| 
| I have a zpool with 3 F20 mirrors for the ZIL. They're
| coming up corrupted. I want to detach them, remake
| the devices and reattach them to the zpool.

Yes, you can, using zpool remove.

If they're faulted and you can't import the pool, use import -m.
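
A minimal sketch, assuming a hypothetical pool named tank and remade log
devices c2t0d0/c2t1d0 (names are illustrative, not from your config):

  $ zpool status tank                        # note the log vdev's name, e.g. mirror-1
  $ zpool remove tank mirror-1               # drop the corrupted log mirror
  $ zpool add tank log mirror c2t0d0 c2t1d0  # re-add the remade devices
  $ zpool import -m tank                     # only if the pool won't import without
                                             # its logs and your release supports -m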

Cheers.
-- 
bdha
cyberpunk is dead. long live cyberpunk.


Re: [zfs-discuss] drive replaced from spare

2010-11-23 Thread Bryan Horstmann-Allen
+--
| On 2010-11-23 13:28:38, Tony Schreiner wrote:
| 
| am I supposed to do something with c1t3d0 now?

Presumably you want to replace the dead drive with one that works?

zpool offline the dead drive if it isn't already, pull it, plug in the new
one, run devfsadm -C and cfgadm -al, watch dmesg to see if the ctd changed, then
use zpool replace deaddisk newdisk to get the pool healthy again.

Spares are .. spares. They're there for events, not for running in production.

The above process is documented more usefully in the ZFS Administration Guide.
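
As a rough sketch of that sequence (pool and ctd names here are made up; check
zpool status for your own):

  $ zpool offline tank c1t3d0      # take the dead disk out of service
    ...physically swap the drive...
  $ devfsadm -C                    # rebuild /dev links, clean out stale ones
  $ cfgadm -al                     # confirm the new disk is configured
  $ zpool replace tank c1t3d0      # or: zpool replace tank c1t3d0 c2t3d0
                                   # if the ctd changed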

Cheers.
-- 
bdha
cyberpunk is dead. long live cyberpunk.


[zfs-discuss] Replacing log devices takes ages

2010-11-19 Thread Bryan Horstmann-Allen
Disclaimer: Solaris 10 U8.

I had an SSD die this morning and am in the process of replacing the 1GB
partition which was part of a log mirror. The SSDs do nothing else.

The resilver has been running for ~30m, and suggests it will finish sometime
before Elvis returns from Andromeda, though perhaps only just barely (we'll
probably have to run to the airport to meet him at security).

 scrub: resilver in progress for 0h25m, 3.15% done, 13h8m to go
 scrub: resilver in progress for 0h26m, 3.17% done, 13h36m to go
 scrub: resilver in progress for 0h27m, 3.18% done, 14h4m to go
 scrub: resilver in progress for 0h28m, 3.19% done, 14h32m to go
 scrub: resilver in progress for 0h29m, 3.20% done, 15h0m to go
 scrub: resilver in progress for 0h30m, 3.23% done, 15h25m to go
 scrub: resilver in progress for 0h31m, 3.25% done, 15h50m to go
 scrub: resilver in progress for 0h32m, 3.30% done, 16h7m to go
 scrub: resilver in progress for 0h33m, 3.34% done, 16h24m to go
 scrub: resilver in progress for 0h35m, 3.37% done, 16h43m to go
 scrub: resilver in progress for 0h36m, 3.39% done, 17h5m to go

According to zpool iostat -v, the log contains ~900k of data.

The disks are not particularly busy (c0t3d0 is the replacing disk):

# iostat -xne c0t3d0 c0t5d0 5
                            extended device statistics       ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    0.2    0.1    0.6    5.0  0.0  0.0    0.0    5.7   0   0   0   0   0   0 c0t3d0
    5.3   52.3   68.2 1694.1  0.0  0.2    0.0    4.2   0   2   2   0   0   2 c0t5d0
                            extended device statistics       ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    3.0  112.6    0.9 6064.0  0.0  0.1    0.0    0.8   0   9   0   0   0   0 c0t3d0
    6.4  118.8   39.5 6519.7  0.0  0.0    0.0    0.3   0   3   2   0   0   2 c0t5d0
                            extended device statistics       ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    1.0   50.2    0.3 5068.8  0.0  1.4    0.0   27.5   0   6   0   0   0   0 c0t3d0
   36.0   61.8  534.1 5921.6  0.0  0.5    0.0    5.5   0   6   2   0   0   2 c0t5d0
                            extended device statistics       ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    0.0   58.0    0.0 1590.4  0.0  0.0    0.0    0.8   0   3   0   0   0   0 c0t3d0
   39.2   67.0  651.3 1884.9  0.0  0.0    0.0    0.5   0   3   2   0   0   2 c0t5d0
                            extended device statistics       ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    0.0   23.4    0.0  678.3  0.0  0.0    0.0    0.4   0   1   0   0   0   0 c0t3d0
   11.8   30.6  135.0 1025.4  0.0  0.0    0.0    0.3   0   1   2   0   0   2 c0t5d0
                            extended device statistics       ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    0.0   20.2    0.0 1045.0  0.0  0.0    0.0    1.2   0   1   0   0   0   0 c0t3d0
   14.8   25.8  131.9 1335.7  0.0  0.0    0.0    0.4   0   1   2   0   0   2 c0t5d0
                            extended device statistics       ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    0.0   33.0    0.0 2029.6  0.0  0.1    0.0    1.9   0   2   0   0   0   0 c0t3d0
    1.8   37.6   37.9 2107.0  0.0  0.0    0.0    0.6   0   1   2   0   0   2 c0t5d0
                            extended device statistics       ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    0.0   21.2    0.0  797.6  0.0  0.0    0.0    0.7   0   1   0   0   0   0 c0t3d0
   12.2   22.8  111.9  823.2  0.0  0.0    0.0    0.4   0   1   2   0   0   2 c0t5d0

My question is twofold:

Why do log mirrors need to resilver at all?

Why does this seem like it's going to take a full day, if I'm lucky?

(If the answer is: Shut up and upgrade, that's fine.)

Cheers.
-- 
bdha
cyberpunk is dead. long live cyberpunk.


Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread Bryan Horstmann-Allen
+--
| On 2010-11-15 10:21:06, Edward Ned Harvey wrote:
| 
| Backups.
| 
| Even if you upgrade your hardware to better stuff... with ECC and so on ...
| There is no substitute for backups.  Period.  If you care about your data,
| you will do backups.  Period.

Backups are not going to save you from bad memory writing corrupted data to
disk.

If your RAM flips a bit and writes garbage to disk, and you back up that
garbage, guess what: Your backups are full of garbage.

Invest in ECC RAM and hardware that is, at the least, less likely to screw you.

Test your backups to ensure you can trust them.
-- 
bdha
cyberpunk is dead. long live cyberpunk.


Re: [zfs-discuss] New system, Help needed!

2010-11-15 Thread Bryan Horstmann-Allen
+--
| On 2010-11-15 08:48:55, Frank wrote:
| 
| I am a newbie on Solaris.
| We recently purchased a Sun Sparc M3000 server. It comes with 2 identical
| hard drives. I want to set up a raid 1. After searching on google, I found that
| the hardware raid was not working with the M3000. So I am here to look for help
| on how to set up ZFS to use raid 1. Currently one hard drive is installed with
| Solaris 10 10/09. I want to set up ZFS raid 1 without reinstalling Solaris; is
| that possible, and how can I do that?

If you have a ZFS root (rpool) you can just google for zfs rpool mirror
attach. It is trivial: copy the partition table from disk1 to disk2 with
prtvtoc | fmthard, zpool attach disk2 to the rpool, and put boot blocks on disk2.
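
Roughly (a sketch with made-up ctd names; substitute your own, and check the
slice layout before copying it):

  $ prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
  $ zpool attach rpool c0t0d0s0 c0t1d0s0
  # wait for the resilver to finish (watch zpool status rpool), then make
  # the second disk bootable (installboot on SPARC, installgrub on x86):
  $ installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
      /dev/rdsk/c0t1d0s0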

If you have a UFS root you'll need to migrate. See the Solaris 10 sysadmin docs
for that process. Alternatively you can use SVM to set up a mirror, but SVM is
dead weight going forward.
-- 
bdha
cyberpunk is dead. long live cyberpunk.


Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread Bryan Horstmann-Allen
+--
| On 2010-11-15 11:27:02, Toby Thain wrote:
| 
|  Backups are not going to save you from bad memory writing corrupted data to
|  disk.
| 
| It is, however, a major motive for using ZFS in the first place.

In this context, not trusting your disks is the motive. If corruption (even
against metadata) happens in-memory, ZFS will happily write it to disk. Has
this behavior changed in the last 6 months?
-- 
bdha
cyberpunk is dead. long live cyberpunk.


Re: [zfs-discuss] Any limit on pool hierarchy?

2010-11-08 Thread Bryan Horstmann-Allen
+--
| On 2010-11-08 13:27:09, Peter Taps wrote:
| 
| From zfs documentation, it appears that a vdev can be built from more
| vdevs. That is, a raidz vdev can be built across a bunch of mirrored vdevs, and
| a mirror can be built across a few raidz vdevs.
| 
| Is my understanding correct? Also, is there a limit on the depth of a vdev?

You are incorrect.

The man page states:

 Virtual devices cannot be nested, so a mirror or raidz virtual
 device can only contain files or disks. Mirrors of mirrors (or
 other combinations) are not allowed.

 A pool can have any number of virtual devices at the top of the
 configuration (known as root vdevs). Data is dynamically
 distributed across all top-level devices to balance data among
 devices. As new virtual devices are added, ZFS automatically
 places data on the newly available devices.
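
For example (device names purely illustrative), several top-level mirrors in
one pool are fine, but trying to nest raidz inside a mirror is rejected:

  # valid: two top-level mirror vdevs; ZFS stripes across them
  $ zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
  # invalid: vdevs cannot be nested
  $ zpool create tank mirror raidz c0t0d0 c0t1d0 raidz c0t2d0 c0t3d0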

-- 
bdha
cyberpunk is dead. long live cyberpunk.


[zfs-discuss] ZFS recovery tool for Solaris 10 with a dead slog?

2010-11-04 Thread Bryan Horstmann-Allen
I just had an SSD blow out on me, taking a v10 zpool with it. The pool
currently shows up as UNAVAIL, missing device.

The system is currently running U9, which has `import -F`, but not `import -m`.
My understanding is the pool would need to be version >= 19 for that to work regardless.
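
For reference, the on-disk pool version can be read from a label without
importing the pool (the device path here is just an example):

  $ zdb -l /dev/rdsk/c0t2d0s0 | grep version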

I have copies of zpool.cache from when the SSD was alive, and its GUID.

Looking at https://github.com/pjjw/logfix it appears all I really need to do is
mock up a new log device and update the labels in ZFS. All. However, logfix
appears to want some version of Nevada.

Does anyone have any tools for Solaris 10 that will accomplish this?

Barring that, I suppose I could put SXCE b130 on it and give logfix a shot.

Cheers.
-- 
bdha
cyberpunk is dead. long live cyberpunk.


Re: [zfs-discuss] Best practice for Sol10U9 ZIL -- mirrored or not?

2010-09-16 Thread Bryan Horstmann-Allen
+--
| On 2010-09-16 18:08:46, Ray Van Dolson wrote:
| 
| Best practice in Solaris 10 U8 and older was to use a mirrored ZIL.
| 
| With the ability to remove slog devices in Solaris 10 U9, we're
| thinking we may get more bang for our buck to use two slog devices for
| improved IOPS performance instead of needing the redundancy so much.
| 
| Any thoughts on this?
| 
| If we lost our slog devices and had to reboot, would the system come up
| (eg could we remove failed slog devices from the zpool so the zpool
| would come online..)

The ability to remove the slogs isn't really the win here, it's import -F. The
problem is: If the ZIL dies, you will lose whatever writes were in-flight.

I've just deployed some SSD ZIL (on U9), and decided to mirror them. I cut the
two SSDs into 1GB and 31GB partitions, mirrored the two 1GB slices as the slog,
and use the two 31GB slices as L2ARC.
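
In zpool terms that layout is roughly (slice names are hypothetical):

  $ zpool add tank log mirror c3t0d0s0 c3t1d0s0    # the two 1GB slices, mirrored slog
  $ zpool add tank cache c3t0d0s1 c3t1d0s1         # the two 31GB slices as L2ARC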

So far extremely happy with it. Running a scrub during production hours,
before, was unheard of. (And, well, production for mail storage is basically
all hours, so.)

As for running non-mirrored slogs... dunno. Our customers would be pretty
pissed if we lost any mail, so I doubt I will do so. My SSDs were only $90
each, though, so cost is hardly a factor for us.

Cheers.
-- 
bdha
cyberpunk is dead. long live cyberpunk.