7; + 'scsiclass'
This only sees the six ST2000DL003 drives of the main data pool,
and the LiveUSB flash drive...
So - is it possible to try reinitializing and locating connections to
the disk on a commodity motherboard (i.e. no lsiutil, IPMI and such)
using only OI, without rebooting the box?
The pools are not imported, so if I can detach and reload the sata
drivers - I might try that, but I am stumped as to how I would do that.
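To illustrate the kind of thing I'm hoping for (just a sketch - the
attachment-point names are hypothetical and depend on whether the onboard
controller's driver exposes its ports via cfgadm at all):
# cfgadm -al                      (list attachment points, if any)
# cfgadm -c unconfigure sata0/1   (detach the suspect port)
# cfgadm -c configure sata0/1     (re-probe it)
# devfsadm -Cv                    (prune stale and recreate /dev links)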
Funny - after all the theoretical advice I dispense, practice still has
a new trick to teach me, to keep me humble and almost ashamed ;)
Thanks for pointers,
//Jim
device) would also
give a good boost to performance, since it won't have to seek much.
The rotational latency will be there however, limiting reachable IOPS
in comparison to an SSD SLOG.
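As a rough back-of-the-envelope figure (mine, not measured): a 7200 RPM
disk completes a revolution in 60/7200 s, about 8.3 ms, so the average
rotational latency is roughly 4.2 ms; even with near-zero seeks that caps
a single stream of sync writes at about 1/0.0042, i.e. some 240 IOPS,
versus many thousands for a typical SSD SLOG.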
HTH,
//Jim
> Can I know how to configure a SSD to be used for L2arc ? Basically I want to
> improve read performance.
Read the documentation, specifically the section titled:
Creating a ZFS Storage Pool With Cache Devices
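In short it boils down to adding the SSD as a cache vdev, e.g. (pool and
device names are only examples):
# zpool add tank cache c2t5d0
and checking with "zpool status tank" that it shows up under the cache
section.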
> To increase write performance, will SSD for Zil help ? As I read on forums,
> Zi
you configure "crash dump on NMI" and set up your IPMI card,
then you can likely gain remote access to both the server console
("physical" and/or serial) and may be able to trigger the NMI, too.
HTH,
//Jim
Thanks for the help.
On Wed, Mar 20, 2013 at 8:53 AM, Michael S
in the sense of each table.
//Jim
irrelevant on GPT/EFI - no SMI slices there.
On my old home NAS with OpenSolaris I certainly did have
MBR partitions on the rpool intended initially for some
dual-booted OSes, but repurposed as L2ARC and ZIL devices
for the storage pool on other disks, when I played with
that technology. Didn
xceed a rack in size though. But for power/cooling this seems
like a standard figure for a 42U rack or just a bit more.
//Jim
ardless of what the boxes' individual power sources can
do. Conveniently, they also allow a remote hard-reset of hung
boxes without walking to the server room ;)
My 2c,
//Jim Klimov
On 2013-03-15 01:58, Gary Driggs wrote:
On Mar 14, 2013, at 5:55 PM, Jim Klimov wrote:
However, recently the VM "virtual hardware" clocks became way slow.
Does NTP help correct the guest's clock?
Unfortunately no: neither guest NTP, ntpdate, or rdate in crontabs,
nor Virt
modern hardware too, so VTx (lack of) shouldn't be
our reason...
//Jim
en you'd use a lot less space (and you'd not see a
.garbledfilename in the directory during the process).
If you use rsync over network to back up stuff, here's an example of
SMF wrapper for rsyncd, and a config sample to make a snapshot after
completion of the rsync session.
http://wiki.o
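The essence of the config sample is just "snapshot when the rsync session
is done", which even without SMF can be approximated by something like
(pool/dataset names hypothetical):
# rsync -a /data/ /tank/backup/data/ && \
    zfs snapshot tank/backup/data@rsync-$(date +%Y%m%d-%H%M%S)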
lightweight
zle) zero-filled blocks should translate into zero IOs. (Maybe
some metadata would appear, to address the "holes", however).
With proper handling of sparse files you don't write any of that
voidness into the FS and you don't process anything on rea
e life from application
data and non-packaged applications, which might simplify backups, etc.
and you might be able to store these pieces in different pools (i.e.
SSDs for some data and HDDs for other - though most list members would
rightfully argue in favor of L2ARC on the SSDs).
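For example (names made up), keeping non-packaged applications and their
data in dedicated datasets lets you place each piece in whatever pool
suits it:
# zfs create -o mountpoint=/opt/myapp fastpool/opt-myapp
# zfs create -o mountpoint=/export/appdata tank/appdata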
HTH,
//Jim
on-list gurus might walk you through use of a debugger
or dtrace to track which calls are being made by "zfs destroy" and lead
it to conclude that the dataset is busy?.. I really only know to use
"truss -f -l progname params" which helps most of the time, and would
love to learn
old&defs=&refs=&path=&hist=&project=freebsd-head
There is a "freebsd-head" project in the illumos codebase; I have no idea
how relevant it is for BSD users.
HTH,
//Jim
rather than forbid their destruction as requested
here, but to a similar effect: to keep scripted or thoughtless manual
jobs from abusing the storage by wasting space in some datasets through
snapshot creation.
//Jim
On 2013-02-19 17:02, Victor Latushkin wrote:
On 2/19/13 6:32 AM, Jim Klimov wrote:
On 2013-02-19 14:24, Konstantin Kuklin wrote:
zfs set canmount=off zroot/var/crash
i can't do this, because zfs list is empty
I'd argue that in your case it might be desirable to evacuate data and
reinstal
y giving
you an intact state of ZFS data structures and a usable pool. Maybe not.
//Jim
it
ran for a while, chances are that your older, possibly better-intact
TXGs are no longer referenceable (rolled out of the ring buffer forever).
Good luck,
//Jim
be not so forgiving about pool errors.
//Jim
usr:<0x2aff>
zroot/var/crash:<0x0>
root@Flash:/root #
how can I delete or drop the fs zroot/var/crash (1m-10m in size, I don't
remember exactly) and mount the other zfs mountpoints with my data?
--
Best regards,
Konstantin Kuklin.
Good luck,
//Jim Klimov
and rid
itself of more black PR by shooting down another public project of
the Sun legacy (hint: if the site does wither and die in the community's
hands - it is not Oracle's fault; and if it lives on - Oracle did
something good for karma... win-win, at no price).
Thanks for your helpfulness
such a fixable problem to behave so
inconveniently - the official docs go as far as to suggest an OS
reinstallation in this case.
//Jim
in comparison with the date when you expanded the pool ;)
Closer inspection could be done with a ZDB walk to print out
the DVA block addresses for blocks of a file (the DVA includes
the number of the top-level vdev), but that would take some
time - to determine which files you want to
tter for your cause.
Inspect the "zpool" source to see where it gets its numbers from...
and perhaps make and RTI the relevant kstats, if they aren't there yet ;)
On the other hand, I am not certain how Solaris-based kstats interact
or correspond to struc
rites into it - regardless of absence or
presence (and type) of compression on the original dataset.
HTH,
//Jim Klimov
such construct in comparison to:
zfs list -H -o reservation,refreservation,usedbyrefreservation,name \
-t filesystem {-r pool/interesting/dataset}
Just asking - or suggesting a simpler way to do stuff ;)
//Jim
in a stone-age way; but overall there is no ready solution
so far.
//Jim
, and their userdata was queued to disk as a large sequential
blob in a "coalesced" write operation.
HTH,
//Jim
min might have better control over
the dedicated disk locations (i.e. faster tracks in a small-seek
stroke range), except that ZFS datasets are easier to resize...
right or wrong?
//Jim
mber of shared libraries which are loaded to
fulfill the runtime requirements, but aren't actively used and
thus get paged out to swap quickly. I chose to trust that statement ;)
//Jim
RAM or anything.
So I guess the system also does some guessing in this case?..
If so, preallocating as many bytes as it considers the minimum required,
and then allowing compression to stuff more data in, might help to
actually save larger dumps in cases where the system (dumpadm) guessed
wrong.
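If so, a sketch of such a manual override would be to grow the dump zvol
and re-point dumpadm at it (the 16g is an arbitrary example, and the dump
device may need to be temporarily detached first):
# zfs set volsize=16g rpool/dump
# dumpadm -d /dev/zvol/dsk/rpool/dump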
amounts equivalent to VM RAM size, but
don't really swap (most of the time). Setting aside SSDs for this
task might be too expensive, if they are never to be used in real
practice.
But this point is more of a task for swap device tiering (like
with Linux swap prio
On 2013-01-22 23:03, Sašo Kiselkov wrote:
On 01/22/2013 10:45 PM, Jim Klimov wrote:
On 2013-01-22 14:29, Darren J Moffat wrote:
Preallocated ZVOLs - for swap/dump.
Or is it also supported to disable COW for such datasets, so that
the preallocated swap/dump zvols might remain contiguous on the
faster tracks of the drive (i.e. like a dedicated partition, but
with benefits of ZFS checksums and maybe compression)?
Thanks,
//Jim
;)
Or the problem is in finding the old Java version?
//Jim
more useful, I believe
they shall be coded and published in common ZFS code. Sometime...
//Jim
... and lead to malformed packets of that optical protocol.
Are there switch stats on whether it has seen media errors?
//Jim
On 2013-01-20 16:56, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
And regarding the "considerable activity" - AFAIK there is little way
for ZFS to reliabl
On 2013-01-20 19:55, Tomas Forsman wrote:
On 19 January, 2013 - Jim Klimov sent me these 2,0K bytes:
Hello all,
While revising my home NAS which had dedup enabled before I gathered
that its RAM capacity was too puny for the task, I found that there is
some deduplication among the data bits
cy; however, ZFS does explicitly "rotate" starting disks
of allocations every few megabytes in order to even out the loads
among spindles (normally parity disks don't have to be accessed -
unless mismatches occur on data disks). Disabling such padding would
only help achieve thi
On 2013-01-19 20:08, Bob Friesenhahn wrote:
On Sat, 19 Jan 2013, Jim Klimov wrote:
On 2013-01-19 18:17, Bob Friesenhahn wrote:
Resilver may in fact be just verifying that the pool disks are coherent
via metadata. This might happen if the fiber channel is flapping.
Correction: that
ks of the Thumper, so at least we
can be certain not to attribute problems to further breakable
components - external cables, disk trays, etc...
HTH,
//Jim
e data would remain deduped and use no extra space, while unique
data won't waste the resources being accounted as deduped).
What do you think?
//Jim
uld be worth it.
As for the experiment, I guess you can always make a ZVOL with different
recordsize, DD data into it from the production dataset's snapshot, and
attach the VM or its clone to the newly created clone of its disk image.
Good luck, and I hope I got Richard's logic righ
held by
those older snapshots. Moving such temporary works to a different
dataset with a different snapshot schedule and/or to a different
pool (to keep related fragmentation constrained) may prove useful.
HTH,
//Jim Klimov
eading a bigger block and modifying
it with COW), plus some avalanche of metadata updates (likely with
the COW) for ZFS's own bookkeeping.
//Jim
n case, try
to unplug and replug all cables (power, data) in case their pins got
oxidized over time.
HTH,
//Jim
ng over and
deleting the original) might or might not help free up a particular
TLVDEV (upon rewrite they will be striped again, albeit maybe ZFS
will make different decisions upon a new write - and prefer the more
free devices).
Also, if the file's blocks are referenced via snapshots, clones,
d
a pool on the same SXCE. There were no
such problems with the newer build of parted in OI, so that disk was
in fact labeled for SXCE while the box was booted with the OI LiveCD.
HTH,
//Jim Klimov
On 2012-12-11 16:44, Jim Klimov wrote:
For single-break-per-row tests based on hypotheses from P parities,
D data disks and R broken rows, we need to checksum P*(D^R) userdata
recombinations in order to determine that we can't recover the block.
A small maths correction: the formula
On 2012-12-02 05:42, Jim Klimov wrote:
My plan is to dig out the needed sectors of the broken block from
each of the 6 disks and try any and all reasonable recombinations
of redundancy and data sectors to try and match the checksum - this
should be my definite answer on whether ZFS (of that
overheated CPU, non-ECC RAM and the software further along
the road). I am not sure which one of these *couldn't* issue
(or be interpreted to issue) a number of weird identical writes
to different disks at same offsets.
Everyone is a suspect :(
Thanks,
//Jim Klimov
more below...
On 2012-12-06 03:06, Jim Klimov wrote:
It also happens that on disks 1,2,3 the first row's sectors (d0, d2, d3)
are botched - ranges from 0x9C0 to 0xFFF (end of 4KB sector) are zeroes.
The neighboring blocks, located a few sectors away from this one, also
have compressed dat
For those who have work to do and can't be bothered to read detailed
context, please do scroll down to the marked Applied question about
the possible project to implement a better on-disk layout of blocks.
The busy experts' opinions are highly regarded here. Thanks ;) //Jim
C
or or two worth
of data.
So, given that there are no on-disk errors in the "Dataset mos
[META], ID 0" "Object #0" - what does the zpool scrub find time
after time and call an "error in metadata:0x0"?
Thanks,
//Jim Klimov
ents. It is slower
than cachefile when you have many devices at static locations, because
it ensures that all storage devices are consulted and the new map of
the pool components' locations is drawn. Thus the device numbering
would change somehow due to
more below...
On 2012-12-05 23:16, Timothy Coalson wrote:
On Tue, Dec 4, 2012 at 10:52 PM, Jim Klimov <jimkli...@cos.ru> wrote:
On 2012-12-03 18:23, Jim Klimov wrote:
On 2012-12-02 05:42, Jim Klimov wrote:
>> 4) Where are the redundancy algorithms specifi
On 2012-12-05 05:52, Jim Klimov wrote:
For undersized allocations, i.e. of compressed data, it is possible
to see P-sizes not divisible by 4 (disks) in 4KB sectors, however,
some sectors do apparently get wasted because the A-size in the DVA
is divisible by 6*4KB. With columnar allocation of
clustered
setup, which may be quite native and safe for VxFS. ZFS does not support
simultaneous pool-imports by several hosts, so you'd have to set up the
clusterware to make sure only one host controls the pool at any time.
HTH,
//Jim
isn't seeing a lot of public development.
I just built this into simplesmf, http://code.google.com/p/simplesmf/
Support to execute the zvol chown immediately prior to launching the guest VM
I know Jim is also building it into vboxsvc, but I haven't tried that yet.
Lest this point be los
On 2012-11-29 10:56, Jim Klimov wrote:
For example, I might want to have corporate webshop-related
databases and appservers to be the fastest storage citizens,
then some corporate CRM and email, then various lower priority
zones and VMs, and at the bottom of the list - backups.
On a side note
On 2012-12-05 04:11, Richard Elling wrote:
On Nov 29, 2012, at 1:56 AM, Jim Klimov <jimkli...@cos.ru> wrote:
I've heard a claim that ZFS relies too much on RAM caching, but
implements no sort of priorities (indeed, I've seen no knobs to
tune those) - so that if the stor
On 2012-12-03 18:23, Jim Klimov wrote:
On 2012-12-02 05:42, Jim Klimov wrote:
So... here are some applied questions:
Well, I am ready to reply to a few of my own questions now :)
Continuing the desecration of my deceased files' resting grounds...
2) Do I understand correctly that fo
On 2012-12-03 20:51, Heiko L. wrote:
jimklimov wrote:
In general, I'd do the renaming with a "different bootable media",
including a LiveCD/LiveUSB, another distro that can import and
rename this pool version, etc. - as long as booting does not
involve use of the old rpool.
Thank you. I will t
one,
you could also "zpool export rpool1" in order to mark it as cleanly
exported and not potentially held by another OS instance. This should
allow booting from it, unless some other bug steps in...
//Jim
On 2012-12-02 05:42, Jim Klimov wrote:
So... here are some applied questions:
Well, I am ready to reply to a few of my own questions now :)
I've staged an experiment by taking a 128Kb block from that file
and appending it to a new file in a test dataset, where I changed
the compression set
er up the HDD size
discrepancy. But I haven't done any replacements so far which would
prove or disprove this ;)
Thanks,
//Jim
e the host's zfs volume which backs your old rpool
and use autoexpansion (or manual expansion) to let your VM's
rpool capture the whole increased virtual disk.
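Roughly like this (device name hypothetical; an SMI-labeled rpool disk
may also need its slice grown in format first):
# zpool set autoexpand=on rpool
# zpool online -e rpool c0t0d0s0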
If automagic doesn't work, I posted about a month ago about
the manual procedure on this list:
http://mail.opensolaris.org/p
n yields the value saved in block pointer
and ZFS missed something, or if I don't get any such combo and ZFS
does what it should exhaustively and correctly, indeed ;)
Thanks a lot in advance for any info, ideas, insights,
and just for reading this long post to the end ;)
//Jim Klimov
e userdata and metadata (as is the default), while the randomly
accessed tablespaces might or might not be good candidates for such
caching - however, you can test this setting change on the fly.
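The knobs in question are the per-dataset caching properties, e.g.
(dataset name hypothetical):
# zfs get primarycache,secondarycache tank/db
# zfs set primarycache=all tank/db      (cache userdata and metadata in ARC)
# zfs set secondarycache=all tank/db    (and allow spill-over into L2ARC)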
I believe, you must allow caching userdata for a dataset in RAM
if you want to let it spill over
On 2012-11-30 15:52, Tomas Forsman wrote:
On 30 November, 2012 - Albert Shih sent me these 0,8K bytes:
Hi all,
I would like to knwon if with ZFS it's possible to do something like that :
http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
Removing a disk - no, one still can not reduce
y
mechanism?
Thanks for info/ideas,
//Jim
less data on-disk and have fewer IOs to
read in the OS during boot and work. Especially so, if - and this is
the part I am not certain about - it is roughly as cheap to READ the
gzip-9 datasets as it is to read lzjb (in terms of CPU decompressi
at the zfs blocks for OS image files would be
the same and dedupable.
HTH,
//Jim Klimov
the current JDK
installed in GZ: either simply lofs-mounted from GZ to LZs,
or in a separate dataset, cloned and delegated into LZs (if
JDK customizations are further needed by some - but not all -
local zones, i.e. timezone updates, trusted CA certs, etc.).
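A sketch of the lofs variant (zone and path names hypothetical):
# zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/usr/jdk
zonecfg:myzone:fs> set special=/usr/jdk
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> end
zonecfg:myzone> exit
After the zone is (re)booted it sees the GZ's /usr/jdk; "add options [ro]"
in the fs resource would make the mount read-only.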
HTH,
//Jim Klimov
it is also possible that the block would go away (if it
is not also referenced by snapshots/clones/dedup), and such drastic
measures won't be needed.
HTH,
//Jim Klimov
for a buck from the owned hardware...
Or, rather, shop for the equivalent non-appliance servers...
//Jim
?
Is it possible to run VirtualBoxes in the ZFS-SA OS, dare I ask? ;)
Thanks,
//Jim Klimov
erent types of devices.
//Jim
e should double-check the found
discrepancies and those sectors it's going to use to recover a
block, at least if the kernel knows it is on non-ECC RAM (if it
does), but I don't know if it really does that. (Worthy RFE if not).
HTH,
//Jim Klimov
__
't get any
snapshots - neither ZFS as underlying storage ('cause it's not),
nor hypervisor snaps of the VM. So while faster, this is also some
trade-off :)
//Jim
ack of
one referrer explicitly, why not track them all?
Thanks for info,
//Jim
t;none" enc. algos, or perhaps rsync over NFS as
if you are in the local filesystem.
HTH,
//Jim
in the pool from its history with older releases?
Thanks,
//Jim
mes, if you do the plain rsync from each snapdir.
Perhaps, if the "zfs diff" does perform reasonably for you, you can
feed its output as the list of objects to replicate in rsync's input
and save many cycles this way.
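A crude sketch of what I mean (names hypothetical; rename entries and
paths with spaces would need extra care):
# zfs diff tank/home@monday tank/home@tuesday | awk '$1 != "-" {print $2}' > /tmp/changed
# rsync -a --files-from=/tmp/changed / backuphost:/backups/home/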
Good luck,
//Jim Klimov
Well, as a simple stone-age solution (to simplify your SMF approach),
you can define custom attributes on dataset, zvols included. I think
a custom attr must include a colon ":" in the name, and values can be
multiline if needed. Simple example follows:
# zfs set owner:user=jim pool/
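Reading such an attribute back from a monitoring or SMF-method script is
a one-liner as well, e.g. (dataset name hypothetical):
# zfs get -H -o value owner:user tank/zvols/vm01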
While trying to find workarounds for Edward's problem, I discovered
that NFSv4/ZFS-style ACLs can be applied to /devices/* and are even
remembered across reboots, but in fact this causes more problems
than it solves.
//Jim
t VMs?
Did you measure any overheads of initiator-target vs. zvol, both being
on the local system? Is there any significant performance difference
worth thinking and talking about?
Thanks,
//Jim
On 2012-11-14 18:05, Eric D. Mudama wrote:
On Wed, Nov 14 at 0:28, Jim Klimov wrote:
All in all, I can't come up with anything offensive against it quickly
;) One possible nit regards the ratings being geared towards 4KB block
(which is not unusual with SSDs), so it may be further
bugger as well - maybe
the system would enter it upon getting stuck (does so instead of
rebooting when it is panicking) and you can find some more details
there?..
//Jim
g ZFS metadata.
Thanks for bringing it into the spotlight, and I hope the more
savvy posters will review it in more detail.
//Jim
20Gb in size (dunno why - sol10u10 bug?) :(
So I do the manual step:
# zpool online -e pool c1t1d0
The "-e" flag marks the component as eligible for expansion.
When all pieces of a top-level vdev become larger, the setting
takes effect and the pool finally beco
process partial
overwrites of a 4KB sector with 512b pieces of data - would other
bytes remain intact or not?..
Before trying to fool a production system this way, if at all,
I believe some stress-tests with small blocks are due on some
other system.
My 2c,
//Jim Klimov
fter reboot - that's what I am asking for (if that was not clear
from my first post).
Thanks,
//Jim
nment or even into a livecd/failsafe,
just so that the needed datasets or paths won't be "busy" and so I
can set, verify and apply these mountpoint values. This is not a
convenient way to do things :)
Thanks,
//Jim Klimov
any way, just ask for help here.
(Right Jim?)
I'd prefer the questions and discussion on vboxsvc to continue in
the VirtualBox forum, so it's all in one place for other users too.
It is certainly off-topic for lists about ZFS, so I won't
take this podium for too long
in general, there might be a need for some fencing (i.e.
only one host tries to start up a VM from a particular backend image).
I am not sure iSCSI inherently does a better job than NFS at this?..
//Jim