) builds
> with checksum=sha256 and compression!=off. AFAIK, Solaris ZFS will COW
> the blocks even if their content is identical to what's already there,
> causing the snapshots to diverge.
>
> See https://www.illumos.org/issues/3236 for details.
>
This is in
>
> Robert Milkowski wrote:
> >
> > Solaris 11.1 (free for non-prod use).
> >
>
> But a ticking bomb if you use a cache device.
It's been fixed in an SRU (although this is only for customers with a support
contract); still, it will be in 11.2 as well.
Then, I
Solaris 11.1 (free for non-prod use).
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tiernan OToole
Sent: 25 February 2013 14:58
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ZFS Distro Advice
Good morning all.
My home NA
nd not in open, while if Oracle does it they
are bad?
Isn't that at least a little bit hypocritical? (bashing Oracle and doing
sort of the same)
--
Robert Milkowski
http://milek.blogspot.com
> > It also has a lot of performance improvements and general bug fixes in
> > the Solaris 11.1 release.
>
> Performance improvements such as?
Dedup'ed ARC for one.
The all-zero block is automatically "dedup'ed" in memory.
Improvements to ZIL performance.
Zero-copy
bug fixes by Oracle that Illumos is not
getting (lack of resource, limited usage, etc.).
--
Robert Milkowski
http://milek.blogspot.com
> Personally, I'd recommend putting a standard Solaris fdisk
> partition on the drive and creating the two slices under that.
Why? In most cases giving zfs an entire disk is the best option.
I wouldn't bother with any manual partitioning.
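For illustration, a whole-disk pool needs nothing more than this (pool and
device names here are placeholders):

    # give ZFS the entire disk; it labels and manages the device itself
    zpool create tank c0t1d0
    zpool status tank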
--
Robert Milkowski
http://milek.blogspot.com
- 24x 2.5" disks in front, another 2x 2.5" in rear,
Sandy Bridge as well.
--
Robert Milkowski
http://milek.blogspot.com
contract though.
--
Robert Milkowski
http://milek.blogspot.com
No, there isn't other way to do it currently. SMF approach is probably the
best option for the time being.
I think that there should be couple of other properties for zvol where
permissions could be stated.
Best regards,
Robert Milkowski
http://milek.blogspot.com
From: zfs-di
set to 1 after the cache size is
decreased, and if it stays that way.
The fix is in one of the SRUs and I think it should be in 11.1
I don't know if it was fixed in Illumos or even if Illumos was affected by
this at all.
--
Robert Milkowski
http://milek.blogspot.com
> -Original
dup because you will shrink the average record
> size and balloon the memory usage).
Can you expand a little bit more here?
Dedup+compression works pretty well actually (not counting "standard"
problems with current dedup - compression or no
nly sync writes will go to the ZIL right away (and not always, see
logbias, etc.) and to the ARC, to be committed later to the pool when the txg closes.
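For reference, logbias is an ordinary per-dataset property; a minimal sketch
with a hypothetical dataset name:

    # send large synchronous writes straight to the pool instead of the slog
    zfs set logbias=throughput tank/db
    zfs get logbias tank/db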
--
Robert Milkowski
http://milek.blogspot.com
night I rebooted the machine into single-user mode, to rule out
> zones, crontabs and networked abusers, but I still get resilvering resets
> every
> now and then, about once an hour.
>
> I'm now trying a run with all zfs datasets unmounted, hope that helps
> somew
/dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD19/disk  ONLINE  0  0  0
/dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD17/disk  ONLINE  0  0  0
/dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD15/disk  ONLINE  0  0  0
errors: No known data errors
Best regards,
Robert
And he will still need an underlying filesystem like ZFS for them :)
> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Nico Williams
> Sent: 25 April 2012 20:32
> To: Paul Archer
> Cc: ZFS-Discuss mailing list
>
referring to dedup efficiency: lower recordsize values should improve dedup
ratios (although it will require more memory for the DDT).
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Brad Diggs
Sent: 29 December 2011 15:55
To: Robert
p, however in pre-Solaris 11 GA (and in Illumos) you would end up with 2x
copies of blocks in the ARC cache, while in S11 GA the ARC will keep only one copy of
all blocks. This can make a big difference if there are more than just
2x files being dedup'ed and you need ARC memory to cache other data as well.
--
Robert Milkowski
> disk. This behavior is what makes NFS over ZFS slow without a slog: NFS does
> everything O_SYNC by default,
No, it doesn't. However, VMware by default issues all writes as SYNC.
Just to close out the discussion, I wasn't able to prove any issues with
ZFS. The files that were changed all seem to have plausible scenarios.
I've moved my external USB drive backups over to ZFS directly connected
to the file server and it's all working fine.
Thanks for everyone's help!
On Oct 24, 2011, at 9:42, Edward Ned Harvey
wrote:
>
> I would suggest finding a way to connect the external disks directly to the
> ZFS server, and start using zfs send instead.
>
Since these were my offsite backups I was using Truecrypt which drove the use
of ext3 and Linux. Also I wanted
On 10/22/2011 04:14 PM, Mark Sandrock wrote:
Why don't you see which byte differs, and how it differs?
Maybe that would suggest the "failure mode". Is it the
same byte data in all affected files, for instance?
Mark
I found something interesting with the .ppt file. Apparently, just
opening a .ppt
can still be applied?
-Bob
> On Oct 22, 2011, at 9:27 AM, Robert Watzlavick wrote:
>
>> I've noticed something strange over the past few months with four files on
>> my raidz. Here's the setup:
>> OpenSolaris snv_111b
>> ZFS Pool version 14
>> AMD-b
On Oct 22, 2011, at 13:14, Edward Ned Harvey
wrote:
>>
> How can you outrule the possibility of "something changed the file."
> Intentionally, not as a form of filesystem corruption.
I suppose that's possible but seems unlikely. One byte on a file changed on the
disk with no corresponding chan
I've noticed something strange over the past few months with four files
on my raidz. Here's the setup:
OpenSolaris snv_111b
ZFS Pool version 14
AMD-based server with ECC RAM.
5 ST3500630AS 500 GB SATA drives (4 active plus spare) in raidz1
The other day, I observed what appears to be undetected
'll certainly find out, in due time, how
to have a ZFS server using SMB shares broadcast its name on the network.
Thanks again for willing to help.
Best regards, Robert
PS: since cross-posting seems to be the rage these days, I'll copy that
to the zfs-discuss list, in case a noble sou
On Mar 4, 2011, at 10:46 AM, Cindy Swearingen wrote:
> Hi Robert,
>
> We integrated some fixes that allowed you to replace disks of equivalent
> sizes, but 40 MB is probably beyond that window.
>
> Yes, you can do #2 below and the pool size will be adjusted down to the
>
On Mar 4, 2011, at 11:19 AM, Eric D. Mudama wrote:
> On Fri, Mar 4 at 9:22, Robert Hartzell wrote:
>> In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz
>> storage pool and then shelved the other two for spares. One of the disks
>> failed last nig
On Mar 4, 2011, at 11:46 AM, Cindy Swearingen wrote:
> Robert,
>
> Which Solaris release is this?
>
> Thanks,
>
> Cindy
>
Solaris 11 express 2010.11
--
Robert Hartzell
b...@rwhartzell.net
RwHartzell.Net, Inc.
On Mar 4, 2011, at 10:01 AM, Tim Cook wrote:
>
>
> On Fri, Mar 4, 2011 at 10:22 AM, Robert Hartzell wrote:
> In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz
> storage pool and then shelved the other two for spares. One of the disks
> failed la
#2 is possible would I still be able to use the last still shelved disk
as a spare?
If #2 is possible I would probably recreate the zpool as raidz2 instead of the
current raidz1.
Any info/comments would be greatly appreciated.
Robert
--
Robert Hartzell
b...@rwhartzell.net
On 08/02/2011 07:10, Jerry Kemp wrote:
As part of a small home project, I have purchased a SIL3124 hba in
hopes of attaching an external drive/drive enclosure via eSATA.
The host in question is an old Sun Netra T1 currently running
OpenSolaris Nevada b130.
The card in question is this Sil
. But I have
enough memory and such a workload that I see few physical reads going
on.
--
Robert Milkowski
http://milek.blogspot.com
On 01/ 7/11 09:02 PM, Pawel Jakub Dawidek wrote:
On Fri, Jan 07, 2011 at 07:33:53PM +, Robert Milkowski wrote:
Now what if block B is a meta-data block?
Metadata is not deduplicated.
Good point, but then it depends on the perspective.
What if you are storing lots of VMDKs?
One
at
dedup or not, all the other possible cases of data corruption are there
anyway; adding yet another one might or might not be acceptable.
--
Robert Milkowski
http://milek.blogspot.com
cting
duplicate blocks.
I don't believe that fletcher is still allowed for dedup - right now it
is only sha256.
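As an illustration (hypothetical dataset name), the dedup property itself
selects the checksum:

    # dedup implies sha256; ",verify" adds a byte-for-byte comparison on a match
    zfs set dedup=sha256,verify tank/fs
    zfs get dedup,checksum tank/fs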
--
Robert Milkowski
http://milek.blogspot.com
On 01/ 4/11 11:35 PM, Robert Milkowski wrote:
On 01/ 3/11 04:28 PM, Richard Elling wrote:
On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote:
On 12/26/10 05:40 AM, Tim Cook wrote:
On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling
<richard.ell...@gmail.com> wrote:
The
On 01/ 3/11 04:28 PM, Richard Elling wrote:
On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote:
On 12/26/10 05:40 AM, Tim Cook wrote:
On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling
<richard.ell...@gmail.com> wrote:
There are more people outside of Oracle developing f
On 04/01/2011 08:24, Alan Wright wrote:
Those objects are created automatically when you share a dataset
over SMB to support remote ZFS user/group quota management from
the Windows desktop. The dot in .$EXTEND is to make the directory
less intrusive on Solaris.
There is no Solaris or ZFS func
dates bi-weekly out of Sun. Nexenta spending
hundreds of man-hours on a GUI and userland apps isn't work on ZFS.
Exactly my observation as well. I haven't seen any ZFS-related
development happening at Illumos or Nexenta, at least not yet.
--
Robert
On 13/12/2010 01:56, Tim Cook wrote:
Yes, only the USA, which is where all relevant companies in this
discussion do business. On a mailing list centered around a company
founded in and doing business in the USA. So what exactly is your point?
Don't you forget that these companies also do mu
9 Oct 2010 at
src.opensolaris.org they are still old versions from August, at least
the ones I checked.
See
http://src.opensolaris.org/source/history/onnv/onnv-gate/usr/src/uts/common/fs/zfs/
the mercurial gate doesn't have any updates either.
Best regards,
Robert
On 07/12/2010 23:54, Tony MacDoodle wrote:
Is it possible to expand the size of a ZFS volume?
It was created with the following command:
zfs create -V 20G ldomspool/test
See the zfs man page, section about the volsize property.
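For the volume created above, growing it is a one-liner (the new size is just
an example):

    # grow the zvol; the consumer then has to recognise the new size
    zfs set volsize=40G ldomspool/test
    zfs get volsize ldomspool/test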
Best regards,
Robert Milkowski
http://milek.blogspot.com
t the code covered by them can be
used freely. If you intend for your code to be free for all users,
always use the latest version of the GPL.
--
Robert Millan
On 18/11/2010 17:53, Cindy Swearingen wrote:
Markus,
Let me correct/expand this:
1. If you create a RAIDZ pool on OS 11 Express (b151a), you will have
some mirrored metadata. This feature integrated into b148 and the pool
version is 29. This is the part I mixed up.
2. If you have an existing R
any files, it just dumps data into the underlying
objects.
--matt
On Mon, Oct 4, 2010 at 11:20 AM, Robert Milkowski wrote:
Hi,
I thought that if I use zfs send snap | zfs recv, and on the receiving side
the recordsize property is set to a different value, it would be honored. But it
doesn
m2/m1 [ZPL], ID 1110, cr_txg 33537, 2.03M, 6 objects
Object  lvl  iblk  dblk  dsize  lsize  %full  type
     6    2   16K   32K  1.00M     1M  100.00  ZFS plain file
Now it is fine.
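The listing above is per-object zdb output; something along these lines
(exact flags may vary by build) shows the configured recordsize and the dblk
actually used on disk:

    zfs get recordsize m2/m1
    zdb -dddd m2/m1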
--
Robert Milkowski
http://milek.blogspot.com
Cloning from byte ranges in one file to another is also
supported, allowing large files to be more efficiently manipulated
like standard rope
<http://en.wikipedia.org/wiki/Rope_%28computer_science%29> data structures."
shuffled
with either a kernel or udev upgrade.
Robert
On 9/13/10 10:31 AM, LaoTsao 老曹 wrote:
try export and import the zpool
On 9/13/2010 1:26 PM, Brian wrote:
I am running zfs-fuse on an Ubuntu 10.04 box. I have a dual mirrored
pool:
mirror sdd sde mirror sdf sdg
Recently the device names sh
m DVD/Jumpstart
you should see 2 disks and just do a ZFS 2 disk mirror for rpool.
Hope this helps...
- Robert Loper
-- Forwarded message --
From: Andrei Ghimus
To: zfs-discuss@opensolaris.org
Date: Mon, 30 Aug 2010 11:05:27 PDT
Subject: Re: [zfs-discuss] Terrible ZFS performance on a
in sync mode: system write file
in sync or async mode?
async
The sync property takes effect immediately for all new writes, even if
a file was open before the property was changed.
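A minimal sketch of that behaviour (hypothetical dataset name):

    zfs set sync=disabled tank/fs   # from now on all writes are treated as async
    zfs get sync tank/fs
    zfs inherit sync tank/fs        # revert to the default (standard) behaviour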
--
Robert Milkowski
http://milek.blogspot.com
ehave this way and it should be considered a bug.
What do you think?
PS: I tested it on S10u8 and snv_134.
--
Robert Milkowski
http://milek.blogspot.com
On 08/16/10 10:38 PM, George Wilson wrote:
Robert Hartzell wrote:
On 08/16/10 07:47 PM, George Wilson wrote:
The root filesystem on the root pool is set to 'canmount=noauto' so you
need to manually mount it first using 'zfs mount '. Then
run 'zfs mount -a'.
-
y and "zfs
mount -a" failed I guess because the first command failed.
--
Robert Hartzell
b...@rwhartzell.net
RwHartzell.Net
On 08/16/10 07:39 PM, Mark Musante wrote:
On 16 Aug 2010, at 22:30, Robert Hartzell wrote:
cd /mnt ; ls
bertha export var
ls bertha
boot etc
where is the rest of the file systems and data?
By default, root filesystems are not mounted. Try doing a "zfs mount
bertha/ROOT/snv_134"
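In other words (dataset names as in the example above):

    # the root dataset is canmount=noauto, so mount it explicitly first
    zfs mount bertha/ROOT/snv_134
    # then mount the remaining datasets
    zfs mount -a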
legacy
bertha/zones/bz2/ROOT/zbe 821M 126G 821M legacy
cd /mnt ; ls
bertha export var
ls bertha
boot etc
where is the rest of the file systems and data?
--
Robert Hartzell
b...@rwhartzell.net
RwHartzell.Net
't remember whether or not it offered the ability to manipulate the zvol's
WCE flag, but if it didn't then you can do it anyway, as it is a zvol
property. For an example see
http://milek.blogspot.com/2010/02/zvols-write-cache.html
--
Robert Milkowski
http://milek.blogspot.com
recent
build you have zfs set sync={disabled|default|always} which also works
with zvols.
So you do have control over how it is supposed to behave, and to make
it nice it is even on a per-zvol basis.
It is just that the default is synchronous.
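A minimal per-zvol sketch (hypothetical names):

    zfs create -V 50G tank/vol01
    # treat every write to this zvol as synchronous, without touching other datasets
    zfs set sync=always tank/vol01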
--
Robert Milkowski
fyi
Original Message
Subject:Read-only ZFS pools [PSARC/2010/306 FastTrack timeout
08/06/2010]
Date: Fri, 30 Jul 2010 14:08:38 -0600
From: Tim Haley
To: psarc-...@sun.com
CC: zfs-t...@sun.com
I am sponsoring the following fast-track for George Wilson.
fyi
--
Robert Milkowski
http://milek.blogspot.com
Original Message
Subject:zpool import despite missing log [PSARC/2010/292 Self Review]
Date: Mon, 26 Jul 2010 08:38:22 -0600
From: Tim Haley
To: psarc-...@sun.com
CC: zfs-t...@sun.com
I am sponsoring
On 22/07/2010 03:25, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Robert Milkowski
I had a quick look at your results a moment ago.
The problem is that you used a server with 4GB of RAM + a raid card
hough but it might be that a stripe size
was not matched to ZFS recordsize and iozone block size in this case.
The issue with raid-z and random reads is that as cache hit ratio goes
down to 0 the IOPS approaches IOPS of a single drive. For a little bit
more information see http://blogs.sun.
'disk'
id=0
guid=13726396776693410521
path='/dev/dsk/c5t600601608D642400B78DD7589A5DDF11d0s2'
devid='id1,s...@n600601608d642400b78dd7589a5ddf11/c'
phys_path='/scsi_vhci/s...@g600601608d642400b78dd7589a5ddf11:c'
whole_disk=0
metaslab_array=26
metaslab_shift=23
"compress" the file much better than a compression. Also
please note that you can use both: compression and dedup at the same time.
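Both can simply be set on the same dataset (hypothetical name):

    zfs set compression=lzjb tank/fs
    zfs set dedup=on tank/fs
    zfs get compressratio tank/fs   # the dedup ratio is reported pool-wide by 'zpool list'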
--
Robert Milkowski
http://milek.blogspot.com
han a
regression.
Are you sure it is not a debug vs. non-debug issue?
--
Robert Milkowski
http://milek.blogspot.com
outdone, they've stopped other OS releases as well. Surely,
this is a temporary situation.
AFAIK the dev OSOL releases are still being produced - they haven't been
made public since b134 though.
--
Robert Milkowski
http://milek.blogspot.com
(async or sync) to be written synchronously.
PS: still, I'm not saying it would make ZFS ACID.
--
Robert Milkowski
http://milek.blogspot.com
ndom reads.
http://blogs.sun.com/roch/entry/when_to_and_not_to
--
Robert Milkowski
http://milek.blogspot.com
performance as a much greater number of disk drives in RAID-10
configuration and if you don't need much space it could make sense.
--
Robert Milkowski
http://milek.blogspot.com
On 24/06/2010 14:32, Ross Walker wrote:
On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote:
On 23/06/2010 18:50, Adam Leventhal wrote:
Does it mean that for dataset used for databases and similar environments where
basically all blocks have fixed size and there is no other data
On 23/06/2010 19:29, Ross Walker wrote:
On Jun 23, 2010, at 1:48 PM, Robert Milkowski wrote:
128GB.
Does it mean that for dataset used for databases and similar environments where
basically all blocks have fixed size and there is no other data all parity
information will end-up on one
smaller writes to metadata that will distribute parity.
What is the total width of your raidz1 stripe?
4x disks, 16KB recordsize, 128GB file, random read with 16KB block.
--
Robert Milkowski
http://milek.blogspot.com
128GB.
Does it mean that for a dataset used for databases and similar
environments, where basically all blocks have a fixed size and there is no
other data, all parity information will end up on one (z1) or two (z2)
specific disks?
On 23/06/2010 17:51, Adam Leventhal wrote:
Hey Robert,
How
dedup enabled in a pool you
can't really get a dedup ratio per share.
--
Robert Milkowski
http://milek.blogspot.com
rather expect all of them to get about the same
number of IOPS.
Any idea why?
--
Robert Milkowski
http://milek.blogspot.com
lly intend to get it integrated into ON? Because if
you do, then I think that getting the Nexenta guys to expand on it would be
better for everyone instead of having them reinvent the wheel...
--
Robert Milkowski
http://milek.blogspot.com
.
Previous Versions should work even if you have one large filesystem
with all users' homes as directories within it.
What Solaris/OpenSolaris version did you try for the 5k test?
--
Robert Milkowski
http://milek.blogspot.com
? It maps the snapshots so Windows
can access them via "Previous Versions" from Explorer's context menu.
BTW: the CIFS service supports Windows Shadow Copies out of the box.
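Since Previous Versions is backed by snapshots, the server side only needs the
snapshots to exist (hypothetical dataset and snapshot name):

    # each snapshot of the shared dataset shows up as a "previous version" on Windows clients
    zfs snapshot tank/home@backup-monday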
--
Robert Milkowski
http://milek.blogspot.com
whole point of having L2ARC is to serve high random-read IOPS from
RAM and the L2ARC devices instead of the disk drives in the main pool.
--
Robert Milkowski
http://milek.blogspot.com
full priority.
Is this problem known to the developers? Will it be addressed?
http://sparcv9.blogspot.com/2010/06/slower-zfs-scrubsresilver-on-way.html
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6494473
--
Robert Milkowski
http://milek.blogspot.com
On 11/06/2010 10:58, Andrey Kuzmin wrote:
On Fri, Jun 11, 2010 at 1:26 PM, Robert Milkowski <mi...@task.gda.pl> wrote:
On 11/06/2010 09:22, sensille wrote:
Andrey Kuzmin wrote:
On Fri, Jun 11, 2010 at 1:54 AM, Richard Elling
mailto:ri
cely coalesce these
IOs and do sequential writes with large blocks.
--
Robert Milkowski
http://milek.blogspot.com
port is nothing unusual and
has been the case for at least several years.
--
Robert Milkowski
http://milek.blogspot.com
On 10/06/2010 15:39, Andrey Kuzmin wrote:
On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski <mi...@task.gda.pl> wrote:
On 21/10/2009 03:54, Bob Friesenhahn wrote:
I would be interested to know how many IOPS an OS like Solaris
is able to push through a sing
0 IOPS to a single SAS port.
It also scales well - I ran the above dd's over 4x SAS ports at the same
time and it scaled linearly, achieving well over 400k IOPS.
hw used: x4270, 2x Intel X5570 2.93GHz, 4x SAS SG-PCIE8SAS-E-Z (fw.
1.27.3.0), connected to F5100.
--
Robert Milkowski
: why do you need to do
this at all? Isn't the ZFS ARC supposed to release memory when the
system is under pressure? Is that mechanism not working well in some
cases ... ?
My understanding is that if kmem gets heavily fragmented, ZFS won't be
able to give back much memory.
s/zvol.c#1785)
- but zfs send|recv should replicate it I think.
--
Robert Milkowski
http://milek.blogspot.com
are very useful at
times.
--
Robert Milkowski
http://milek.blogspot.com
On 06/05/2010 21:45, Nicolas Williams wrote:
On Thu, May 06, 2010 at 03:30:05PM -0500, Wes Felter wrote:
On 5/6/10 5:28 AM, Robert Milkowski wrote:
sync=disabled
Synchronous requests are disabled. File system transactions
only commit to stable storage on the next DMU transaction
would probably decrease
performance and would invalidate all blocks if even a single L2ARC
device died. Additionally, having each block on only one L2ARC
device allows reads from all of the L2ARC devices at the same time.
--
Robert Milkowski
http://milek.blogspot.com
ce failover in a
cluster L2ARC will be kept warm. Then the only thing which might affect
L2 performance considerably would be a L2ARC device failure...
--
Robert Milkowski
http://milek.blogspot.com
On 06/05/2010 13:12, Robert Milkowski wrote:
On 06/05/2010 12:24, Pawel Jakub Dawidek wrote:
I read that this property is not inherited and I can't see why.
If what I read is up-to-date, could you tell why?
It is inherited. Sorry for the confusion but there was a discussion if
it shou
opose that it shouldn't
but it was changed again during a PSARC review that it should.
And I did a copy'n'paste here.
Again, sorry for the confusion.
--
Robert Milkowski
http://milek.blogspot.com
nformation on it you might look at
http://milek.blogspot.com/2010/05/zfs-synchronous-vs-asynchronous-io.html
--
Robert Milkowski
http://milek.blogspot.com
fails prior to completing a series of
writes and I reboot using a failsafe (i.e. install disc), will the log be
replayed after a zpool import -f ?
yes
--
Robert Milkowski
http://milek.blogspot.com
when it is off it
will give you an estimate of the absolute maximum performance
increase (if any) from having a dedicated ZIL device.
--
Robert Milkowski
http://milek.blogspot.com
0 zil synchronicity
No promise on date, but it will bubble to the top eventually.
So everyone knows - it has been integrated into snv_140 :)
--
Robert Milkowski
http://milek.blogspot.com
ution*.
--
Robert Milkowski
http://milek.blogspot.com
s no room for improvement here. All I'm saying is
that it is not as easy a problem as it seems.
--
Robert Milkowski
http://milek.blogspot.com