r...@tos-backup:~# pstack /dev/rdsk/core
core '/dev/rdsk/core' of 1217: format
fee62e4a UDiv (4, 0, 8046c80, 80469a0, 8046a30, 8046a50) + 2a
08079799 auto_sense (4, 0, 8046c80, 0) + 281
...
Seems that one function call is missing in the back trace
between auto_sense and UDiv, because UDiv
...
r...@tos-backup:~# format
Searching for disks...Arithmetic Exception (core dumped)
This error also seems to occur on osol 134. Any idea
what this might be?
What stack backtrace is reported for that core dump (pstack core) ?
Why does zfs produce a batch of writes every 30 seconds on opensolaris b134
(5 seconds on a post b142 kernel), when the system is idle?
It was caused by b134 gnome-terminal. I had an iostat
running in a gnome-terminal window, and the periodic
iostat output is written to a temporary file by
Why does zfs produce a batch of writes every 30 seconds on opensolaris b134
(5 seconds on a post b142 kernel), when the system is idle?
On an idle OpenSolaris 2009.06 (b111) system, /usr/demo/dtrace/iosnoop.d
shows no i/o activity for at least 15 minutes.
The same dtrace test on an idle b134
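(For reference, the demo script mentioned above is run with dtrace and
requires root privileges:
# dtrace -s /usr/demo/dtrace/iosnoop.d
It prints a line for each disk I/O, so an idle system should produce no
output.)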
It would be nice if the 32bit osol kernel supported
48bit LBA
It is already supported, and has been for many years (otherwise
disks with a capacity > 128GB could not be
used with Solaris) ...
(similar to linux, not sure if 32bit BSD
supports 48bit LBA ), then the drive would probably
work - perhaps later in
I just installed opensolaris build 130 which I
downloaded from genunix. The install went
fine and the first reboot after install seemed to
work, but when I powered down and rebooted fully, it
locks up as soon as I log in.
Hmm, seems you're asking in the wrong forum.
Sounds more like a
In the build 130 announcement you can find this:
13540 Xserver crashes and freezes a system installed with LiveCD on bld 130
It is for sure this bug. This is OK, I
can do most of what I need via ssh. I just
wasn't sure if it was a bug or if I had done
something wrong. I had tried
I wasn't clear in my description; I'm referring to ext4 on Linux. In
fact, on a system with low RAM even the dd command makes the system
horribly unresponsive.
IMHO not having fairshare or timeslicing between different processes
issuing reads is frankly unacceptable given a lame user
So.. it seems that data is deduplicated, zpool has
54.1G of free space, but I can use only 40M.
It's x86, ONNV revision 10924, debug build, bfu'ed from b125.
I think I'm observing the same (with changeset 10936) ...
I created a 2GB file, and a tank zpool on top of that file,
with
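A minimal sketch of such a file-backed test pool with dedup enabled (the
file path is only an example):
# mkfile 2048m /var/tmp/tank.img
# zpool create tank /var/tmp/tank.img
# zfs set dedup=on tank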
But: Isn't there an implicit expectation for a space guarantee associated
with a
dataset? In other words, if a dataset has 1GB of data, isn't it natural to
expect to be able to overwrite that space with other
data?
Is there such a space guarantee for compressed or cloned zfs?
Well, then you could have more logical space than
physical space, and that would be extremely cool,
I think we already have that, with zfs clones.
I often clone a zfs onnv workspace, and everything
is deduped between zfs parent snapshot and clone
filesystem. The clone (initially) needs no
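A sketch of that clone workflow (dataset names are made up):
# zfs snapshot tank/onnv-ws@base
# zfs clone tank/onnv-ws@base tank/onnv-clone
# zfs list -r -o name,used,refer,origin tank
Initially the clone's 'used' is near zero; all blocks are shared with the
@base snapshot until they are rewritten.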
I have a functional OpenSolaris x64 system on which I need to physically
move the boot disk, meaning its physical device path will change and
probably its cXdX name.
When I do this the system fails to boot
...
How do I inform ZFS of the new path?
...
Do I need to boot from the LiveCD
No there was no error level fatal.
Well, here is what I have tried since:
a) I've tried to install a custom grub as described here:
http://defect.opensolaris.org/bz/show_bug.cgi?id=4755#c28
With that in place, I just get the grub prompt. I've
tried to zpool import -f rpool when this
Does this give you anything?
http://bildr.no/view/460193 (image: http://bildr.no/thumb/460193.jpeg)
That looks like the zfs mountroot panic you
get when the root disk was moved to a different
physical location (e.g. different usb port).
In this case the physical device path
Nah, that didn't seem to do the trick.
After unmounting
and rebooting, I get the same error message from my
previous post.
Did you get these scsi error messages during installation
to the usb stick, too?
Another thing that confuses me: the unit attention /
medium may have changed message is
Are there any message with Error level: fatal ?
Not that I know of; however, I can check. But I'm
unable to find out what to change in grub to get
verbose output rather than just the splashimage.
Edit the grub commands, delete all splashimage,
foreground and background lines, and delete
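For orientation, a typical OpenSolaris menu.lst boot entry looks roughly
like this (a sketch only; the exact lines vary by build):
title OpenSolaris snv_130
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/opensolaris
splashimage /boot/solaris.xpm
foreground d25f00
background 115d93
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=graphics
module$ /platform/i86pc/$ISADIR/boot_archive
Removing the splashimage, foreground and background lines (and presumably
the console=graphics option) gives plain text console output during boot.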
The GRUB menu is presented, no problem there, and
then the opensolaris progress bar. But I'm unable to
find a way to view any details on what's happening
there. The progress bar just keeps scrolling and
scrolling.
Press the ESC key; this should switch back from
graphics to text mode and most
I've found it only works for USB sticks up to 4GB :(
If I tried a USB stick bigger than that, it didn't boot.
Works for me on 8GB USB sticks.
It is possible that the stick you've tried has some
issues with the Solaris USB drivers, and needs to
have one of the workarounds from the
scsa2usb.conf
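For reference, such a workaround is enabled in /kernel/drv/scsa2usb.conf
with an override entry along these lines (this example applies it to all
devices; a specific vid/pid can be given instead):
attribute-override-list = "vid=* reduced-cmd-support=true";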
Well, here is the error:
... usb stick reports(?) scsi error: medium may have changed ...
That's strange. The media in a flash memory
stick can't be changed - although most sticks
report that they do have removable media.
Maybe this stick needs one of the workarounds
that can be enabled in
How can I implement that change after installing the
OS? Or do I need to build my own livecd?
Boot from the livecd, attach the usb stick,
open a terminal window, pfexec bash starts
a root shell, zpool import -f rpool should
find and import the zpool from the usb stick.
Mount the root
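A sketch of those steps (the boot environment name 'opensolaris' is a
guess and may differ):
# zpool import -f rpool
# mkdir /a
# mount -F zfs rpool/ROOT/opensolaris /a
After that, files such as /a/kernel/drv/scsa2usb.conf can be edited, the
boot archive updated with bootadm update-archive -R /a, and the pool
exported again before rebooting.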
Not a ZFS bug. IIRC, the story goes something like this: an SMI
label only works up to 1 TByte, so to use more than 1 TByte, you need an
EFI label. For older x86 systems -- those which are 32-bit -- you
probably have a BIOS which does not handle EFI labels. This
will become increasingly irritating
I had a system with its boot drive
attached to a backplane which worked fine. I tried
moving that drive to the onboard controller and a few
seconds into booting it would just reboot.
In certain cases zfs is able to find the drive on the
new physical device path (IIRC: the disk's devid
32 bit Solaris can use at most 2^31 as a disk address; a disk block is
512 bytes, so in total it can address 2^40 bytes.
The SMI label found in Solaris 10 (update 8?) and OpenSolaris has been
enhanced
and can address 2TB, but only on a 64 bit system.
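(For reference, the arithmetic behind the 1 TByte limit: 2^31 blocks x
512 bytes per block = 2^31 x 2^9 bytes = 2^40 bytes = 1 TByte.)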
is what the problem is. so 32-bit
Besides performance aspects, what are the cons of
running zfs on 32 bit?
The default 32 bit kernel can cache a limited amount of data
(less than 512MB) - unless you lower the kernelbase parameter.
In the end the small cache size on 32 bit explains the inferior
performance compared to the 64 bit kernel.
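On x86 the kernelbase parameter can be lowered with eeprom(1M), for
example (the value is only an illustration; it takes effect after a
reboot and shrinks the address space left for 32-bit user processes):
# eeprom kernelbase=0x80000000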
The problem was with the shell. For whatever reason,
/usr/bin/ksh can't rejoin the files correctly. When
I switched to /sbin/sh, the rejoin worked fine, the
cksums matched, ...
The ksh I was using is:
# what /usr/bin/ksh
/usr/bin/ksh:
Version M-11/16/88i
SunOS 5.10 Generic
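For context, the split/rejoin being described presumably looked something
like this (file names made up):
# split -b 512m bigfile part.
# cat part.* > bigfile.rejoined
# cksum bigfile bigfile.rejoined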
bash-3.00# zfs mount usbhdd1
cannot mount 'usbhdd1': E/A-Fehler (I/O error)
bash-3.00#
Why is there an I/O error?
Is there any information logged to /var/adm/messages when this
I/O error is reported? E.g. timeout errors for the USB storage device?
Again, what I'm trying to do is to boot the same OS from a physical
drive - once natively on my notebook, the other time from within
VirtualBox. There are two problems, at least. First is the bootpath: in
VB it emulates the disk as IDE, while booting natively it is sata.
When I started
Cannot mount root on /[EMAIL PROTECTED],0/pci103c,[EMAIL PROTECTED],2/[EMAIL
PROTECTED],0:a fstype zfs
Is that physical device path correct for your new system?
Or is this the physical device path (stored on-disk in the zpool label)
from some other system? In this case you may be able to
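One way to see which physical path is recorded on-disk is to dump the
vdev label (the device name here is just an example):
# zdb -l /dev/rdsk/c0t0d0s0 | grep path
which prints the 'path' and 'phys_path' fields stored in the label.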
The lock I observed happened inside the BIOS of the card, after the main board
BIOS jumped into the board BIOS. This was before any bootloader had been
involved.
Is there a disk using a zpool with an EFI disk label? Here's a link to an old
thread about systems hanging in BIOS POST when they
What Windows utility are you talking about? I have
used the Sandisk utility program to remove the U3
Launchpad (which creates a permanent hsfs partition
in the flash disk), but it does not help the problem.
That's the problem, most usb sticks don't require any
special software and just work
W. Wayne Liauh wrote:
If you are running B95, that may be the problem. I
have no problem booting B93 (and previous builds) from
a USB stick, but B95, which has a newer version of
ZFS, does not allow me to boot from it ( the USB
stick was of course recognized during installation of
B95, just
I have OpenSolaris (snv_95) installed on my laptop (single sata disk)
and today I updated my pool with:
# zpool upgrade -V 11 -a
and after that I started a scrub on the pool with:
# zpool scrub rpool
# zpool status -vx
NAME        STATE     READ WRITE CKSUM
rpool
On 08/21/08 17:26, Jürgen Keil wrote:
Looks like bug 6727872, which is fixed in build 96.
http://bugs.opensolaris.org/view_bug.do?bug_id=6727872
that pool contains normal OpenSolaris mountpoints,
Did you upgrade the opensolaris installation in the past?
AFAIK the opensolaris upgrade
Recently, I needed to move the boot disks containing a ZFS root pool in an
Ultra 1/170E running snv_93 to a different system (same hardware) because
the original system was broken/unreliable.
To my dismay, unlike with UFS, the new machine wouldn't boot:
WARNING: pool 'root' could not be
I wrote:
Bill Sommerfeld wrote:
On Fri, 2008-07-18 at 10:28 -0700, Jürgen Keil wrote:
I ran a scrub on a root pool after upgrading to snv_94, and got
checksum errors:
Hmm, after reading this, I started a zpool scrub on my mirrored pool,
on a system that is running post
Bill Sommerfeld wrote:
On Fri, 2008-07-18 at 10:28 -0700, Jürgen Keil wrote:
I ran a scrub on a root pool after upgrading to snv_94, and got checksum
errors:
Hmm, after reading this, I started a zpool scrub on my mirrored pool,
on a system that is running post snv_94 bits
Rustam wrote:
I've been living with this error for almost 4 months and probably have a record
number of checksum errors:
# zpool status -xv
pool: box5
...
errors: Permanent errors have been detected in the
following files:
box5:0x0
I have Sol 10 U5, though.
I suspect that this
I ran a scrub on a root pool after upgrading to snv_94, and got checksum
errors:
Hmm, after reading this, I started a zpool scrub on my mirrored pool,
on a system that is running post snv_94 bits: It also found checksum errors
# zpool status files
pool: files
state: DEGRADED
status: One
I ran a scrub on a root pool after upgrading to snv_94, and got checksum
errors:
Hmm, after reading this, I started a zpool scrub on my mirrored pool,
on a system that is running post snv_94 bits: It also found checksum errors
...
OTOH, trying to verify checksums with zdb -c didn't
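For reference, that zdb invocation would look something like this (pool
name taken from the status output above; giving -c twice also verifies
the checksums of user data, not just metadata):
# zdb -c files
# zdb -cc files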
Mike Gerdts wrote:
By default, only kernel memory is dumped to the dump device. Further,
this is compressed. I have heard that 3x compression is common and
the samples that I have range from 3.51x - 6.97x.
My samples are in the range 1.95x - 3.66x. And yes, I lost
a few crash dumps on a box
I wanted to resurrect an old dual P3 system with a couple of IDE drives
to use as a low power quiet NIS/DHCP/FlexLM server so I tried installing
ZFS boot from build 90.
Jun 28 16:09:19 zack scsi: [ID 107833 kern.warning] WARNING: /[EMAIL
PROTECTED],0/[EMAIL PROTECTED],1/[EMAIL PROTECTED]
I've got Solaris Express Community Edition build 75
(75a) installed on an Asus P5K-E/WiFI-AP (ip35/ICH9R
based) board. CPU=Q6700, RAM=8GB, disk=Samsung
HD501LJ and (older) Maxtor 6H500F0.
When the O/S is running on bare metal, i.e. no xVM/Xen
hypervisor, then everything is fine.
When
how does one free
Regarding compression, if I am not mistaken, grub
cannot access files that are compressed.
There was a bug where grub was unable to access files
on zfs that contained holes:
Bug ID 6541114
Synopsis: GRUB/ZFS fails to load files from a default compressed (lzjb)
root
I would like to confirm that with Solaris Express Developer Edition 09/07
b70, you can't have /usr on a separate zfs filesystem because of
broken dependencies.
1/ Part of the problem is that /sbin/zpool is linked to
/usr/lib/libdiskmgt.so.1
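That linkage can be confirmed with ldd, e.g. (output abbreviated):
% ldd /sbin/zpool | grep libdiskmgt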
Yep, in the past this happened on several occasions
Yesterday I tried to clone a xen dom0 zfs root filesystem and hit this panic
(probably Bug ID 6580715):
System is running last week's opensolaris bits (but I'm also accessing the zpool
using the xen snv_66 bits).
files/s11-root-xen: is an existing version 1 zfs
files/[EMAIL PROTECTED]: new
I tried to copy an 8GB Xen domU disk image from a zvol device
to an image file on a ufs filesystem, and was surprised that
reading from the zvol character device doesn't detect EOF.
I've filed bug 6596419...
Requesting a sponsor for bug 6596419...
I tried to copy an 8GB Xen domU disk image from a zvol device
to an image file on a ufs filesystem, and was surprised that
reading from the zvol character device doesn't detect EOF.
On snv_66 (sparc) and snv_73 (x86) I can reproduce it, like this:
# zfs create -V 1440k tank/floppy-img
# dd
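The dd command above is cut off; a guess at the intended reproduction,
assuming the usual zvol character device path, would be:
# dd if=/dev/zvol/rdsk/tank/floppy-img of=/dev/null bs=512
which, with the bug present, keeps reading past the end of the 1440k
volume instead of stopping at EOF.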
I tried to copy an 8GB Xen domU disk image from a zvol device
to an image file on a ufs filesystem, and was surprised that
reading from the zvol character device doesn't detect EOF.
On snv_66 (sparc) and snv_73 (x86) I can reproduce it, like this:
# zfs create -V 1440k tank/floppy-img
I tried to copy an 8GB Xen domU disk image from a zvol device
to an image file on a ufs filesystem, and was surprised that
reading from the zvol character device doesn't detect EOF.
I've filed bug 6596419...
using hyperterm, I captured the panic message as:
SunOS Release 5.11 Version snv_69 32-bit
Copyright 1983-2007 Sun Microsystems, Inc. All
rights reserved.
Use is subject to license terms.
panic[cpu0]/thread=fec1ede0: Can't handle mwait size
0
fec37e70 unix:mach_alloc_mwait+72
In my setup I do not install the ufsroot.
I have 2 disks:
- c0d0 for the ufs install
- c1d0s0, which is the zfs root I want to use
My idea is to remove the c0d0 disk when the system is OK.
Btw. if you're trying to pull the ufs disk c0d0 from the system, and
physically move the zfs
I managed to create a link in a ZFS directory that I can't remove.
# find . -print
.
./bayes_journal
find: stat() error ./bayes.lock.router.3981: No such
file or directory
./user_prefs
#
ZFS scrub shows no problems in the pool. Now, this
was probably caused when I was doing some
I'm running snv 65 and having an issue
much like this:
http://osdir.com/ml/solaris.opensolaris.help/2006-11/msg00047.html
Bug 6414472?
Has anyone found a workaround?
You can try to patch my suggested fix for 6414472 into the ata binary
and see if it helps:
By coincidence, I spent some time dtracing 6560174 yesterday afternoon on
b62, and these bugs are indeed duplicates. I never noticed 6445725 because my
system wasn't hanging, but as the notes say, the fix for 6434435 changes the
problem, and instead the error that gets propagated back from
Nope, no work-around.
OK. Then I have 3 questions:
1) How do I destroy the pool that was on the firewire
drive? (So that zfs stops complaining about it)
Even if the drive is disconnected, it should be possible
to zpool export it, so that the OS forgets about it
and doesn't try to mount
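Something along these lines (the pool name is made up):
# zpool export -f fwpool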
3) Can your code diffs be integrated into the OS on my end to use this
drive, and if so, how?
I believe the bug is still being worked on, right Jürgen ?
The opensolaris sponsor process for fixing bug 6445725 seems
to have got stuck. I pinged Alan P. on the state of that bug...
This
I think I have run into this bug, 6560174, with a firewire drive.
And 6560174 might be a duplicate of 6445725
And 6560174 might be a duplicate of 6445725
I see what you mean. Unfortunately there does not
appear to be a work-around.
Nope, no work-around. This is a scsa1394 bug; it
has some issues when it is used from interrupt context.
I have some source code diffs, that are supposed to
fix the
Yesterday I was surprised because an old snv_66 kernel
(installed as a new zfs rootfs) refused to mount.
Error message was
Mismatched versions: File system is version 2 on-disk format,
which is incompatible with this software version 1!
I tried to prepare that snv_66 rootfs when
Shouldn't S10u3 just see the newer on-disk format and
report that fact, rather than complain it is corrupt?
Yep, I just tried it, and it refuses to zpool import the newer pool,
telling me about the incompatible version. So I guess the pool
format isn't the correct explanation for the Dick
I used a zpool on a usb key today to get some core files off a non-networked
Thumper running S10U4 beta.
Plugging the stick into my SXCE b61 x86 machine worked fine; I just had to
'zpool import sticky' and it worked ok.
But when we attach the drive to a blade 100 (running s10u3), it sees
I think I have read somewhere that zfs gzip
compression doesn't scale well since the in-kernel
compression isn't done multi-threaded.
Is this true - and if so - will this be fixed?
If you're writing lots of data, zfs gzip compression
might not be a good idea for a desktop machine, because
Hello Jürgen,
Monday, June 4, 2007, 7:09:59 PM, you wrote:
Patching zfs_prefetch_disable = 1 has helped
It's my belief this mainly aids scanning metadata. My
testing with rsync and yours with find (and seen with
du ; zpool iostat -v 1 ) bears this out..
mainly tracked in bug
Patching zfs_prefetch_disable = 1 has helped
It's my belief this mainly aids scanning metadata. My
testing with rsync and yours with find (and seen with
du ; zpool iostat -v 1 ) bears this out..
mainly tracked in bug 6437054 vdev_cache: wise up or die
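For reference, the prefetch tuning mentioned above is normally applied
either persistently via /etc/system (followed by a reboot):
set zfs:zfs_prefetch_disable = 1
or on a live system with mdb:
# echo zfs_prefetch_disable/W0t1 | mdb -kw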
I wrote
Instead of compiling opensolaris for 4-6 hours, I've now used
the following find / grep test using on-2007-05-30 sources:
1st test using Nevada build 60:
% cd /files/onnv-2007-05-30
% repeat 10 /bin/time find usr/src/ -name '*.[hc]' -exec grep FooBar {} +
This find + grep command
I wrote
Has anyone else noticed a significant zfs performance
deterioration when running recent opensolaris bits?
My 32-bit / 768 MB Toshiba Tecra S1 notebook was able
to do a full opensolaris release build in ~ 4 hours 45
minutes (gcc shadow compilation disabled; using an lzjb
compressed
Has anyone else noticed a significant zfs performance deterioration
when running recent opensolaris bits?
My 32-bit / 768 MB Toshiba Tecra S1 notebook was able to do a
full opensolaris release build in ~ 4 hours 45 minutes (gcc shadow
compilation disabled; using an lzjb compressed zpool / zfs
Would you mind also doing:
ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=1
to see the raw performance of underlying hardware.
This dd command is reading from the block device,
which might cache data, and probably splits requests
into maxphys pieces (which happens to be 56K on an
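To take the block-device cache out of the picture, one could read from
the character (raw) device instead, e.g. (count chosen arbitrarily; use
the appropriate slice if the whole-disk node does not exist):
# ptime dd if=/dev/rdsk/c2t1d0 of=/dev/null bs=128k count=1000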
by Eric in build 59.
It was pointed out by Jürgen Keil that using ZFS compression
submits a lot of prio 60 tasks to the system task queues;
this would clobber interactive performance.
Actually the taskq spa_zio_issue / spa_zio_intr run at
prio 99 (== maxclsyspri or MAXCLSYSPRI):
http
A couple more questions here.
[mpstat]
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 3109 3616 316 196 5 17 48 45 245 0 85 0 15
1 0 0 3127 3797 592 217 4 17 63 46 176 0 84 0 15
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0
with recent bits ZFS compression is now handled concurrently with
many CPUs working on different records.
So this load will burn more CPUs and achieve its results
(compression) faster.
So the observed pauses should be consistent with that of a load
generating high system time.
The
A couple more questions here.
...
What do you have zfs compression set to? The gzip level is
tunable, according to zfs set, anyway:
PROPERTY EDIT INHERIT VALUES
compression YES YES on | off | lzjb | gzip | gzip-[1-9]
I've used the default gzip compression level, that
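For reference, the level is chosen via the property value itself, e.g.
(dataset name made up; a plain 'gzip' corresponds to gzip-6):
# zfs set compression=gzip-2 tank/fs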
Roch Bourbonnais wrote
with recent bits ZFS compression is now handled concurrently with
many CPUs working on different records.
So this load will burn more CPUs and achieve its results
(compression) faster.
Is this done using the taskqs created in spa_activate()?
A couple more questions here.
...
You still have idle time in this lockstat (and mpstat).
What do you get for a lockstat -A -D 20 sleep 30?
Do you see anyone with long lock hold times, long
sleeps, or excessive spinning?
Hmm, I ran a series of lockstat -A -l ph_mutex -s 16 -D 20 sleep 5
I just had a quick play with gzip compression on a filesystem and the
result was the machine grinding to a halt while copying some large
(.wav) files to it from another filesystem in the same pool.
The system became very unresponsive, taking several seconds to echo
keystrokes. The box is a
The reason you are busy computing SHA1 hashes is that you are using
/dev/urandom. The implementation of drv/random uses
SHA1 for mixing,
actually strictly speaking it is the swrand provider that does that part.
Ahh, ok.
So, instead of using dd reading from /dev/urandom all the time,
I've now
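One way to take the SHA1 mixing out of the measurement is to generate the
random data once and copy that file instead, e.g. (paths made up):
# dd if=/dev/urandom of=/var/tmp/random.dat bs=1024k count=512
# ptime cp /var/tmp/random.dat /tank/gzip-fs/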
We are running Solaris 10 11/06 on a Sun V240 with 2
CPUs and 8 GB of memory. This V240 is attached to a
3510 FC that has 12 x 300 GB disks. The 3510 is
configured as HW RAID 5 with 10 disks and 2 spares
and it's exported to the V240 as a single LUN.
We create iso images of our product in
I still haven't got any warm and fuzzy responses
yet solidifying ZFS in combination with Firewire or USB enclosures.
I was unable to use zfs (that is zpool create or mkfs -F ufs) on
firewire devices, because scsa1394 would hang the system as
soon as multiple concurrent write commands are
I have my /usr filesystem configured as a zfs filesystem,
using a legacy mountpoint. I noticed that the system boots
with atime updates temporarily turned off (and doesn't record
file accesses in the /usr filesystem):
# df -h /usr
Filesystem size used avail capacity Mounted on
Hmmm, so there is lots of evictable cache here (mostly in the MFU
part of the cache)... could you make your core file available?
I would like to take a look at it.
Isn't this just like:
6493923 nfsfind on ZFS filesystem quickly depletes memory in a 1GB system
Which was introduced in
I've noticed that fstyp on a floppy media formatted with pcfs now needs
somewhere between
30 - 100 seconds to find out that the floppy media is formatted with pcfs.
E.g. on sparc snv_48, I currently observe this:
% time fstyp /vol/dev/rdiskette0/nomedia
pcfs
0.01u 0.10s 1:38.84 0.1%
zfs's
Sounds familiar. Yes, it is a small system, a Sun Blade 100 with 128MB of
memory.
Oh, 128MB...
Btw, does anyone know if there are any minimum hardware (physical memory)
requirements for using ZFS?
It seems as if ZFS wasn't tested that much on machines with 256MB (or less)
I just retried to reproduce it to generate a reliable
test case. Unfortunately, I cannot reproduce the
error message. So I really have no idea what might
have caused it
I also had this problem 2-3 times in the past,
but I cannot reproduce it.
This is:
6483887 without direct management, arc ghost lists can run amok
That seems to be a new bug?
http://bugs.opensolaris.org does not yet find it.
The fix I have in mind is to control the ghost lists as part of
the arc_buf_hdr_t allocations. If you want to test out my fix,
I can send
The disks in that Blade 100, are these IDE disks?
The performance problem is probably bug 6421427:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6421427
A fix for the issue was integrated into the Opensolaris 20060904 source
drop (actually closed binary drop):
We are trying to obtain a mutex that is currently held
by another thread trying to get memory.
Hmm, reminds me a bit of the zvol swap hang I got
some time ago:
http://www.opensolaris.org/jive/thread.jspa?threadID=11956tstart=150
I guess if the other thread is stuck trying to get memory, then
I made some powernow experiments on a dual core amd64 box, running the
64-bit debug on-20060828 kernel. At some point the kernel seemed to
make no more progress (probably a bug in the multiprocessor powernow
code), the gui was stuck, so I typed (blind) F1-A + $<systemdump.
Writing the crashdump
I've tried to use dmake lint on on-src-20060731, and was running out of swap
on my
Tecra S1 laptop, 32-bit x86, 768MB main memory, with a 512MB swap slice.
The FULL KERNEL: global crosschecks: lint run consumes lots (~800MB) of space
in /tmp, so the system was running out of swap space.
To fix
What throughput do you get for the full untar (untarred size / elapsed time)?
# tar xf thunderbird-1.5.0.4-source.tar 2.77s user
35.36s system 33% cpu 1:54.19
260M/114 =~ 2.28 MB/s on this IDE disk
IDE disk?
Maybe it's this sparc ide/ata driver issue:
Bug ID: 6421427
Synopsis: netra x1
http://www.opensolaris.org/jive/thread.jspa?messageID=36229#36229
The problem is back, on a different system: a laptop running on-20060605 bits.
Compared to snv_29, the error message has improved, though:
# zfs snapshot hdd/[EMAIL PROTECTED]
cannot snapshot 'hdd/[EMAIL PROTECTED]': dataset is