Re: ZFS i/o error in recent 12.0

2018-03-21 Thread Markus Wild
Hello Thomas,

> > I had faced the exact same issue on a HP Microserver G8 with 8TB disks and 
> > a 16TB zpool on FreeBSD 11 about a year
> > ago.  
> I will ask you the same question as I asked the OP:
> 
> Has this pool had new vdevs added to it since the server was installed?

No. This is a Microserver with only 4 (not even hot-plug) trays. It was set up using the FreeBSD installer
originally. I had to apply the btx loader fix (a patch at the time; I don't know whether it's included as standard
now) to retry a failed read, to get around BIOS bugs with that server, but after that the server booted fine. It was
only after a bit of use and a kernel update that things went south. I tried many different things at that time, but
the only approach that worked for me was to steal 2 of the 4 swap partitions which I had placed on every disk
initially, and build a mirrored boot zpool from those. The loader had no problem loading the kernel from that, and
when the kernel took over, it had no problem using the original root pool (which the boot loader wasn't able to
find/load). Hence my conclusion that the 2nd stage boot loader has a problem (probably due to yet another BIOS bug
on that server) loading blocks beyond a certain limit, which could be 2TB or 4TB.
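For reference, the rough shape of what I did looked something like the sketch below. Device names (da0/da1),
partition indices and sizes are illustrative only, not the exact ones from that box, and have to be adapted to the
actual layout (check "gpart show" first); the gpt labels happen to match the zpool status further down, but treat
this as a reconstruction from memory, not the exact commands:

    # replace one swap partition per disk with a small freebsd-zfs partition
    # (partition index 3 and the 4g size are assumptions for illustration)
    gpart delete -i 3 da0
    gpart add -t freebsd-zfs -l zfs-boot0 -s 4g da0
    gpart delete -i 3 da1
    gpart add -t freebsd-zfs -l zfs-boot1 -s 4g da1

    # build the small mirrored boot pool from the freed space
    zpool create zboot mirror gpt/zfs-boot0 gpt/zfs-boot1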

> What does a "zpool status" look like when the pool is imported?

$ zpool status
  pool: zboot
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Mar 21 03:58:36 2018
config:

NAME               STATE     READ WRITE CKSUM
zboot              ONLINE       0     0     0
  mirror-0         ONLINE       0     0     0
    gpt/zfs-boot0  ONLINE       0     0     0
    gpt/zfs-boot1  ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 6h49m with 0 errors on Sat Mar 10 10:17:49 2018
config:

NAME          STATE     READ WRITE CKSUM
zroot         ONLINE       0     0     0
  mirror-0    ONLINE       0     0     0
    gpt/zfs0  ONLINE       0     0     0
    gpt/zfs1  ONLINE       0     0     0
  mirror-1    ONLINE       0     0     0
    gpt/zfs2  ONLINE       0     0     0
    gpt/zfs3  ONLINE       0     0     0

errors: No known data errors

Please note: this server is now in production at a customer, and it's working fine with this workaround. I only
brought it up to offer a possible explanation for the problem the original poster observed: it _might_ have nothing
to do with a newer version of the current kernel, but rather be caused by the updated kernel being written to a new
location on disk, one which the boot loader can't read properly.

Cheers,
Markus


Re: ZFS i/o error in recent 12.0

2018-03-20 Thread Markus Wild
Hi there,

> I encountered sudden death on a full-volume ZFS machine (r330434)
> about 10 days after installation[1]:
> 
> ZFS: i/o error - all block copies unavailable
> ZFS: can't read MOS of pool zroot
> gptzfsboot: failed to mount default pool zroot
> 

> 268847104  30978715648  4  freebsd-zfs  (14T)

^^^ (the 14T freebsd-zfs partition)


I had faced the exact same issue on a HP Microserver G8 with 8TB disks and a 16TB zpool on FreeBSD 11 about a year
ago. My conclusion was that over time (and with kernel updates), the blocks of the kernel file were reallocated to
a later spot on the disks, and that however the loader fetches those blocks, it now failed to do so (perhaps a
2/4TB limit/bug in the BIOS of that server? Unfortunately there was no UEFI support for it; I don't know whether
that has changed in the meantime). The pool always imported fine from the USB stick; the problem was only with the
boot loader. I worked around it by stealing space from the swap partitions on two disks to build a "zboot" pool
containing just the /boot directory, having the boot loader load the kernel from there, and then still mounting the
real root pool to run the system off, using loader variables in the loader.conf of the boot pool. It's a hack, but
it has been working fine ever since (the server is used as a backup repository). This is what I have in the "zboot"
boot/loader.conf:

# zfs boot kludge due to buggy bios
vfs.root.mountfrom="zfs:zroot/ROOT/fbsd11"
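To give an idea of how the boot pool gets populated and marked bootable: the loader.conf line above lives in the
/boot/loader.conf of the zboot pool, not of zroot. The steps below are a sketch from memory; the freebsd-boot
partition index and the disk names are assumptions, so verify with "gpart show" first. By default the new pool
mounts at /zboot:

    # copy the boot files into the small pool and mark it as the boot filesystem
    cp -Rp /boot /zboot/
    zpool set bootfs=zboot zboot

    # refresh the ZFS-aware boot blocks on both disks
    # (assumes the freebsd-boot partition is index 1)
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
    gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da1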


If you're facing the same problem, you might give this a shot? You seem to have plenty of swap to cannibalize as
well ;)

Cheers,
Markus






Re: OCZ ssdpx-1rvd0120 REVODRIVE support

2016-04-21 Thread Markus Wild

> > > Hi! I've recently got an SSD device. Yes, not a disk, but a device.
> > > It's one of the first REVODRIVEs.
> > > It's a PCI Express card with two embedded SSD disks of about ~55GB in size.
> > > And it's a RAID card. Fake software RAID. You can set it up as
> > > RAID0, RAID1, etc., or as a concatenation. There is no way to leave it unconfigured
> > > or set it up as JBOD or anything else.
> > > You just won't be able to boot from this device in that case.

I used to have one of these in my workstation (mine was recognized as 4 SATA drives of about 60GB each). I had put
them into a ZFS pool with raidz1 and was able to boot from the card without any issue. The BIOS reported them as 4
individual drives. These are pre-NVMe, so no NVMe driver was necessary. Unfortunately, the card recently died of
old age, so I can't provide more than historical references. The type was OCZSSDPX-1RVDX0240.

Cheers,
Markus