-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Is anybody working on this?
- --
Jesus Cea Avion _/_/ _/_/_/_/_/_/
j...@jcea.es - http://www.jcea.es/ _/_/_/_/ _/_/_/_/ _/_/
jabber / xmpp:j...@jabber.org
On 10/01/12 21:32, Richard Elling wrote:
On Jan 9, 2012, at 7:23 PM, Jesus Cea wrote:
[...]
The page is written in Spanish, but the terminal transcriptions
should be useful for everybody.
In the process, maybe somebody finds this interesting too.
I read your message only after I migrated, but it was very
interesting. Thanks for taking the time to write it!
Have a nice 2012.
happen if I still use the complete disks, but
with two slices instead of one? Would it still have the write cache
enabled? And yes, I have checked that the cache flush works as
expected, because I can only do around one hundred write+sync
operations per second.
Any advice?
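That figure of about one hundred write+sync operations per second is itself evidence that the flushes reach the platter. A quick sanity check of the arithmetic (a sketch; the 7200 RPM figure is my assumption, not from the thread):

```python
# Ideal ceiling on synchronous-write IOPS for a rotating disk: every
# committed write waits, on average, half a rotation before it can land.
rpm = 7200                                  # assumed drive speed
rotations_per_sec = rpm / 60                # 120 rotations/sec
avg_rotational_latency_s = 0.5 / rotations_per_sec
max_sync_iops = 1 / avg_rotational_latency_s
print(round(max_sync_iops))                 # 240 in the ideal case
# Seek time and flush overhead push real numbers toward the observed ~100.
```

If the on-disk write cache were lying about flushes, you would see thousands of "syncs" per second instead.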
On 11/10/11 12:30, Darren J Moffat wrote:
On 09/26/11 20:03, Jesus Cea wrote:
# zpool upgrade -v [...] 24 System attributes [...]
[...]
These are special on disk blocks for storing file system metadata
attributes when there isn't enough space
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 16/10/11 18:49, Jesus Cea wrote:
These are special on disk blocks for storing file system metadata
attributes when there isn't enough space in the bonus buffer
area of the on disk version of the dnode.
Last question...
Can somebody confirm
, and there is no more
activity going on in the machine. ZPOOL version 29, ZFS version 5.
Am I missing anything?
/E23823_01/html/819-5461/gjxik.html#scrolltoc.
The bugid in the openindiana website is a broken link...
Thanks in advance!
(you skip over
entire on-disk branches if there are no changes under them).
, atime seems to be harmful. Badly.
PS: I saw something similar with zfs send too.
On 26/09/11 22:54, Jesus Cea wrote:
On 26/09/11 22:29, David Magda wrote:
Talking about 7.55 GB is mostly useless as well. If it's a
dozen video files, then stat()ing them all will be done very
quickly by just running find(1). If however the 7.55
with L2ARC? Since the ARC is not encrypted (in RAM), is
it encrypted when evicted to L2ARC?
Thanks for your time and attention!
On 19/09/11 19:45, Jesus Cea wrote:
I have a new question: the interaction between dataset encryption
and the L2ARC and ZIL.
A new question... :)
that hybrid storage (SSD+HDD) is a huge opportunity for ZFS,
but I am still seeing problem reports. Just a few days ago somebody
posted on this list about being unable to delete a faulty SSD ZIL.
I am trying to be cautious and apply due diligence. It is part of my
job, after all... :)
will upgrade
the ZPOOL version after a while.
snapshotting of all my
datasets while skipping over a few of them? Now I do a full recursive
snapshot and then delete the specific snapshots I don't want to keep,
like swap's.
rebuilding the zpool first. So I like to have the
snapshots around in the primary.
, and the algorithm is not clear.
Any idea? This bugs me a lot, but I would rather not dig into the ZFS
code...
Thanks!
25416K16K 2.91G 2.52G ZFS plain file
The reply format is a little bit different. Could you explain the
meaning of each field? lvl, iblk, etc.
Thanks a lot!
zdb magic?
(with offset), and mode/directory changes (showing the before and
after data).
As is, zfs send is nice, but you need ZFS on both sides. I would
love an rsync-like tool that could avoid scanning 20 million files
just to find a couple of small changes (or none at all).
diff.
Hopefully someday we will finish it up...
Can't wait! :-))
talking about Solaris 10 U8.
the loss of ANY two disks, while the 6-disk
mirror configuration will be destroyed if the two disks lost are from
the SAME pair.
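The combinatorics behind that claim can be checked directly (a sketch assuming the two 6-disk layouts being compared: raidz2 vs. three 2-way mirror pairs):

```python
from itertools import combinations

disks = range(6)
mirror_pairs = [(0, 1), (2, 3), (4, 5)]          # three 2-way mirrors

two_disk_losses = list(combinations(disks, 2))   # 15 possible double failures

# raidz2 survives the loss of ANY two disks out of six.
raidz2_fatal = 0

# The striped mirror dies only when both lost disks form the SAME pair.
mirror_fatal = sum(1 for loss in two_disk_losses if loss in mirror_pairs)

print(len(two_disk_losses), raidz2_fatal, mirror_fatal)  # 15 cases: 0 vs 3 fatal
```

So the mirror layout survives 12 of the 15 possible two-disk losses (80%), while raidz2 survives all of them; the mirror buys back read performance instead.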
this
requirement in modern ZFS implementations?
I think ZFS doesn't reserve space for root, so you had better keep the
root (and /tmp and /var, if separate datasets) separate from
normal user-fillable datasets. Is this correct?
writes
need free space.
The root thing is just a side effect.
I stand corrected.
I think ZFS doesn't allow root to eat into the implicit reservation,
so we lose the side effect. Am I right?
block rewrite, it is better to have a maximum absolute
limit, since free space will be easy to find.
reading if you have an interest in allocators and
performance.
Where are the docs? That link has little info.
on top of
dynamically expanding disk images (VDI). If the free blocks are put at
the end of the free block list, over time the VDI will grow to its
maximum size before it reuses any of the blocks.
Check the thread "Thin device support in ZFS?", from late December.
On 01/18/2010 05:11 PM, David Magda wrote:
On Jan 18, 2010, at 10:55, Jesus Cea wrote:
zpool and zfs report different free space because zfs takes into account
an internal reservation of 32 MB or 1/64 of the capacity of the pool,
whichever is bigger
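My reading of that rule, sketched as code (the max(32 MB, 1/64 of capacity) formula is from the discussion above; the helper name is made up):

```python
def implicit_reservation(pool_capacity_bytes):
    # Slack that the zfs-level numbers subtract but zpool still counts as
    # free: 1/64 of the pool capacity, with a 32 MB floor.
    return max(32 * 1024**2, pool_capacity_bytes // 64)

# A 1 TiB pool reserves 16 GiB; tiny pools hit the 32 MB floor instead.
print(implicit_reservation(1024**4) // 1024**3)        # 16
print(implicit_reservation(512 * 1024**2) // 1024**2)  # 32
```

That 16 GiB gap on a 1 TiB pool is large enough to look like a bug if you don't know the reservation exists.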
I see http://opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf
as a pretty outdated (3-year-old) document. Is there any plan to update
it?
Maybe somebody could update it every time a new ZFS pool version is
available?
Nicolas Williams wrote:
I'd recommend waiting for ZFS crypto rather than using lofi with ZFS.
Wait... for how long? Any schedule?
I am very interested in ZFS Crypto, although I have lost hope of seeing
it in Solaris 10.
/artic/sol10lu6zfs2.htm
I have ZFS root/boot in my environment, and I am interested in
separating /var into an independent dataset. How can I do it? I can use
Live Upgrade, if needed.
Upgrade. The machine is in
production; I cannot do a reinstall. I can mess with configuration
files and create datasets and such by hand.
- --
The correct sig. delimiter is --
I know. The issue is PGP/GnuPG/Enigmail integration.
in advance.
this!
...
etc.
Any advice? Suggestions/alternative approaches welcome.
with ICF.* files. Seems easy enough to try.
Robin Guo wrote:
| At least, s10u6 will contain L2ARC cache, ZFS as root filesystem, etc.
Any details about this L2ARC thing? I see some references on Google (a
cache device) but no in-depth description.
detached vdevs as well as destroyed pools.
+inf :-)
are *not* talking about consumer-grade pendrives, I can't comment.
)
No disk corruption. Only data loss (the last writes can be lost), if I
recall correctly. ZFS will be consistent even with the ZIL disabled.
If I'm wrong, please educate me :)
| find a bug on this (though it's been known for some time), so feel
| free to file a bug.
Hope somebody is moving this to a hash or similar :-)
Vincent Fox wrote:
| So the point is, a JBOD with a flash drive in one (or two to mirror
the ZIL) of the slots would be a lot SIMPLER.
I guess a USB pendrive would be slower than a hard disk. Bad performance
for the ZIL.
. Investigating it, I saw the point was access time
modification. That is, when accessing a file, access time metadata is
updated.
You could unset the atime property, if you wish. I don't :)
and your database work size is
16 KB, ZFS would load 128 KB, update the 16 KB inside it and
write out 128 KB to the disk.
If both block sizes are equal, you don't need the read part. That is a
huge win.
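The read-modify-write cost is easy to put in numbers (a sketch; 128 KB is the default recordsize, 16 KB the assumed database block size):

```python
def bytes_moved(recordsize, update_size):
    # Uncached small update inside a large record: ZFS must read the whole
    # record, modify it in memory, and write the whole record back out.
    needs_read = update_size < recordsize
    read = recordsize if needs_read else 0
    written = max(recordsize, update_size)
    return read, written

print(bytes_moved(128 * 1024, 16 * 1024))  # (131072, 131072): 256 KB moved for 16 KB
print(bytes_moved(16 * 1024, 16 * 1024))   # (0, 16384): matched sizes skip the read
```

This is why setting the dataset's recordsize to the database block size before loading data is the standard tuning advice.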
the critical bits with ZFS copies.
Those bits would include the OS.
Would ZFS boot be able to boot from a copies boot dataset when one of
the disks is failing? Given that ditto blocks are spread between
both disks, of course.
PS: ZFS copies = ditto blocks.
backups. But I could protect the boot environment or my mail
dataset using ditto blocks.
Playing with ZFS copies, I can use a single pool and modulate
space/protection per dataset according to my needs and compromises.
zfs receive. zfs send is far more efficient than rsync.
thinking about single-user
mode, patching and live upgrading. How about /var/sadm?
.
It would be very nice if the improvements were documented somewhere :-)
Solaris10U4 be published :)
Environments under UFS, with all the userdata under ZFS.
Kory Wheatley wrote:
We created 10,000 ZFS file systems with no data in them yet, and
it seems that after we did this our boot-up process takes over an hour.
http://en.wikipedia.org/wiki/Zfs#Current_implementation_issues
and *THEN*
shrink the pool to unmount the temporarily added spare space.
To me, a huge issue is when you try to add a 2-way mirror to a zpool but
add the two disks as separate vdevs by mistake. The only possible step
then is to back up the zpool, destroy it and recreate it. Not nice...
it another try.
Please, post your results. Thanks in advance.
problems gracefully managed by ZFS?
Hope Solaris (not Express) will be able to act as an iSCSI target soon :-)
? And compatibility with Live Upgrade? Any
timetable estimation?
11/06 will be a fairly worthwhile upgrade, by the way.
unmirrored,
spread over two disks (each disk partitioned with SVM). And I'm
constantly fighting the fill-up of one pool while the other is empty.
My current setup has the same space-balance problem as a traditional
two-*static*-partition setup.
and mount a two-way ZFS mirror
between them. If space is an issue, you can use N partitions to mount a
raid-z, but your performance will suffer a lot because any data read
would require N seeks.
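A toy model of that seek behavior (hypothetical per-disk numbers, not a measurement):

```python
def random_read_iops(n_disks, layout, per_disk_iops=100):
    # Toy model: a raid-z stripe spreads each block over all its disks, so
    # one random read busies every spindle and the vdev delivers roughly one
    # disk's worth of IOPS; a mirror serves independent reads from each side.
    if layout == "raidz":
        return per_disk_iops
    return n_disks * per_disk_iops          # "mirror"

print(random_read_iops(2, "mirror"), random_read_iops(4, "raidz"))  # 200 100
```

So for random reads, adding partitions to a raid-z does not add read IOPS the way mirror sides do.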
Neil Perrin wrote:
I suppose if you know
the disk only contains zfs slices then write caching could be
manually enabled using format -e -> cache -> write_cache -> enable
When will we have write cache control over ATA/SATA drives? :-)