) builds
> with checksum=sha256 and compression!=off. AFAIK, Solaris ZFS will COW
> the blocks even if their content is identical to what's already there,
> causing the snapshots to diverge.
>
> See https://www.illumos.org/issues/3236 for details.
>
This is in
>
> Robert Milkowski wrote:
> >
> > Solaris 11.1 (free for non-prod use).
> >
>
> But a ticking bomb if you use a cache device.
It's been fixed in an SRU (although that is only available to customers with a support
contract - still, it will be in 11.2 as well).
Then, I
Solaris 11.1 (free for non-prod use).
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tiernan OToole
Sent: 25 February 2013 14:58
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ZFS Distro Advice
Good morning all.
My home NA
nd not in the open, while if Oracle does it they
are bad?
Isn't that at least a little bit hypocritical? (bashing Oracle and doing
sort of the same)
--
Robert Milkowski
http://milek.blogspot.com
> > It also has a lot of performance improvements and general bug fixes
> in
> > the Solaris 11.1 release.
>
> Performance improvements such as?
Dedup'ed ARC for one.
Zero blocks are automatically "dedup'ed" in memory.
Improvements to ZIL performance.
Zero-copy
bug fixes by Oracle that Illumos is not
getting (lack of resource, limited usage, etc.).
--
Robert Milkowski
http://milek.blogspot.com
> Personally, I'd recommend putting a standard Solaris fdisk
> partition on the drive and creating the two slices under that.
Why? In most cases giving zfs an entire disk is the best option.
I wouldn't bother with any manual partitioning.
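For example (device name below is just a placeholder) - giving ZFS the whole
disk also lets it enable the drive's write cache, which you lose when you hand
it a slice:

  # whole disk - ZFS labels it itself, no fdisk/format step needed
  zpool create tank c0t1d0
  # versus pointing it at a manually created slice, e.g. c0t1d0s0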
--
Robert Milkowski
http://milek.blogspot.com
- 24x 2.5" disks in front, another 2x 2.5" in rear,
Sandy Bridge as well.
--
Robert Milkowski
http://milek.blogspot.com
contract though.
--
Robert Milkowski
http://milek.blogspot.com
No, there isn't any other way to do it currently. The SMF approach is probably the
best option for the time being.
I think there should be a couple of additional zvol properties where
permissions could be specified.
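As a rough sketch (the dataset names and the oracle:dba owner below are just
examples), the start method of such a transient SMF service could simply
re-apply the ownership after every boot:

  #!/bin/sh
  # example start method: fix zvol device ownership at boot
  for vol in tank/oradata tank/oralog; do
      chown oracle:dba /dev/zvol/rdsk/$vol /dev/zvol/dsk/$vol
  done
  exit 0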
Best regards,
Robert Milkowski
http://milek.blogspot.com
From: zfs-di
set to 1 after the cache size is
decreased, and if it stays that way.
The fix is in one of the SRUs and I think it should be in 11.1
I don't know if it was fixed in Illumos or even if Illumos was affected by
this at all.
--
Robert Milkowski
http://milek.blogspot.com
> -Original
dup because you will shrink the average record
> size and balloon the memory usage).
Can you expand a little bit more here?
Dedup+compression works pretty well actually (not counting "standard"
problems with current dedup - compression or no
Only sync writes will go to the ZIL right away (and not always, see
logbias, etc.) and to the ARC, to be committed to the pool later when the txg closes.
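For example (dataset name is made up), logbias lets you steer large sync writes
away from the log device and straight into the pool:

  zfs set logbias=throughput tank/db   # default is logbias=latency
  zfs get logbias,sync tank/db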
--
Robert Milkowski
http://milek.blogspot.com
night I rebooted the machine into single-user mode, to rule out
> zones, crontabs and networked abusers, but I still get resilvering resets
> every
> now and then, about once an hour.
>
> I'm now trying a run with all zfs datasets unmounted, hope that helps
> somew
/dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD19/disk  ONLINE  0  0  0
/dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD17/disk  ONLINE  0  0  0
/dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD15/disk  ONLINE  0  0  0
errors: No known data errors
Best regards,
Robert
And he will still need an underlying filesystem like ZFS for them :)
> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Nico Williams
> Sent: 25 April 2012 20:32
> To: Paul Archer
> Cc: ZFS-Discuss mailing list
>
referring to dedup efficiency: with lower recordsize
values dedup ratios should improve (although it will require more memory for
the DDT).
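Rough sizing sketch, assuming the often-quoted ~320 bytes of core memory per
DDT entry (the exact figure varies between releases):

  # 1 TiB of unique data at 128K recordsize -> 8M blocks -> ~2.5 GB of DDT
  echo $(( (1024 * 1024 * 1024 * 1024 / (128 * 1024)) * 320 / (1024 * 1024) )) MB
  # the same 1 TiB at 8K recordsize -> 128M blocks -> ~40 GB of DDT
  echo $(( (1024 * 1024 * 1024 * 1024 / (8 * 1024)) * 320 / (1024 * 1024) )) MB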
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Brad Diggs
Sent: 29 December 2011 15:55
To: Robert
p, however in pre-Solaris 11 GA (and in Illumos) you would end up with 2x
copies of blocks in the ARC cache, while in S11 GA the ARC will keep only one copy of
all blocks. This can make a big difference if there are more than just
2x files being dedup'ed and you need ARC memory to cache other data as well.
--
Robert Milkowski
> disk. This behavior is what makes NFS over ZFS slow without a slog: NFS does
> everything O_SYNC by default,
No, it doesn't. However, VMware by default issues all writes as SYNC.
. But I have
enough memory and such a workload that I see few physical reads going
on.
--
Robert Milkowski
http://milek.blogspot.com
On 01/ 7/11 09:02 PM, Pawel Jakub Dawidek wrote:
On Fri, Jan 07, 2011 at 07:33:53PM +, Robert Milkowski wrote:
Now what if block B is a meta-data block?
Metadata is not deduplicated.
Good point, but then it depends on the perspective.
What if you are storing lots of VMDKs?
One
at
dedup or not, all the other possible cases of data corruption are there
anyway; adding yet another one might or might not be acceptable.
--
Robert Milkowski
http://milek.blogspot.com
cting
duplicate blocks.
I don't believe that fletcher is still allowed for dedup - right now it
is only sha256.
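For example (dataset name is made up):

  zfs set dedup=sha256,verify tank/data   # plain dedup=on also uses sha256 under the covers
  zfs get checksum,dedup tank/data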
--
Robert Milkowski
http://milek.blogspot.com
On 01/ 4/11 11:35 PM, Robert Milkowski wrote:
On 01/ 3/11 04:28 PM, Richard Elling wrote:
On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote:
On 12/26/10 05:40 AM, Tim Cook wrote:
On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling
<richard.ell...@gmail.com> wrote:
The
On 01/ 3/11 04:28 PM, Richard Elling wrote:
On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote:
On 12/26/10 05:40 AM, Tim Cook wrote:
On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling
<richard.ell...@gmail.com> wrote:
There are more people outside of Oracle developing f
dates bi-weekly out of Sun. Nexenta spending
hundreds of man-hours on a GUI and userland apps isn't work on ZFS.
Exactly my observation as well. I haven't seen any ZFS related
development happening at Illumos or Nexenta, at least not yet.
--
Robert
9 Oct 2010 at
src.opensolaris.org they are still old versions from August, at least
the ones I checked.
See
http://src.opensolaris.org/source/history/onnv/onnv-gate/usr/src/uts/common/fs/zfs/
the mercurial gate doesn't have any updates either.
Best regards,
Robert
On 07/12/2010 23:54, Tony MacDoodle wrote:
Is is possible to expand the size of a ZFS volume?
It was created with the following command:
zfs create -V 20G ldomspool/test
See the zfs man page, the section about the volsize property.
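For example, to grow that volume from 20G to 40G:

  zfs set volsize=40G ldomspool/test
  zfs get volsize ldomspool/test

(whatever sits on top of the zvol still has to be grown separately.)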
Best regards,
Robert Milkowski
http://milek.blogspot.com
On 18/11/2010 17:53, Cindy Swearingen wrote:
Markus,
Let me correct/expand this:
1. If you create a RAIDZ pool on OS 11 Express (b151a), you will have
some mirrored metadata. This feature integrated into b148 and the pool
version is 29. This is the part I mixed up.
2. If you have an existing R
any files, it just dumps data into the underlying
objects.
--matt
On Mon, Oct 4, 2010 at 11:20 AM, Robert Milkowski wrote:
Hi,
I thought that if I use zfs send snap | zfs recv and on the receiving side
the recordsize property is set to a different value, it will be honored. But it
doesn't.
m2/m1 [ZPL], ID 1110, cr_txg 33537, 2.03M, 6 objects
Object lvl iblk dblk dsize lsize %full type
     6    2    16K    32K  1.00M     1M  100.00  ZFS plain file
Now it is fine.
--
Robert Milkowski
http://milek.blogspot.com
"Cloning from byte
ranges in one file to another is also supported, allowing large files to be
more efficiently manipulated like standard rope
<http://en.wikipedia.org/wiki/Rope_%28computer_science%29> data structures."
in sync mode: system write file
in sync or async mode?
async
The sync property takes effect immediately for all new writes, even if
a file was opened before the property was changed.
--
Robert Milkowski
http://milek.blogspot.com
ehave this way and it should be considered as a bug.
What do you think?
ps. I tested it on S10u8 and snv_134.
--
Robert Milkowski
http://milek.blogspot.com
't remember if it offered an ability to manipulate the zvol's
WCE flag or not, but if it didn't then you can do it anyway as it is a zvol
property. For an example see
http://milek.blogspot.com/2010/02/zvols-write-cache.html
--
Robert Milkowski
http://milek.blogspot.com
recent
build you have zfs set sync={disabled|default|always}, which also works
with zvols.
So you do have control over how it is supposed to behave, and to make
it nice it is even on a per-zvol basis.
It is just that the default is synchronous.
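For example (dataset names are made up; note that in the bits that integrated,
the default value is spelled "standard"):

  zfs set sync=always tank/dbvol       # treat every write as synchronous
  zfs set sync=disabled tank/scratch   # never wait for the ZIL
  zfs set sync=standard tank/other     # back to the default semantics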
--
Robert Milkowski
fyi
Original Message
Subject: Read-only ZFS pools [PSARC/2010/306 FastTrack timeout
08/06/2010]
Date: Fri, 30 Jul 2010 14:08:38 -0600
From: Tim Haley
To: psarc-...@sun.com
CC: zfs-t...@sun.com
I am sponsoring the following fast-track for George Wilson.
fyi
--
Robert Milkowski
http://milek.blogspot.com
Original Message
Subject: zpool import despite missing log [PSARC/2010/292 Self Review]
Date: Mon, 26 Jul 2010 08:38:22 -0600
From: Tim Haley
To: psarc-...@sun.com
CC: zfs-t...@sun.com
I am sponsoring
On 22/07/2010 03:25, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Robert Milkowski
I had a quick look at your results a moment ago.
The problem is that you used a server with 4GB of RAM + a raid card
hough, but it might be that the stripe size
was not matched to the ZFS recordsize and iozone block size in this case.
The issue with raid-z and random reads is that as the cache hit ratio goes
down to 0, the IOPS approaches the IOPS of a single drive. For a little bit
more information see http://blogs.sun.
"compress" the file much better than a compression. Also
please note that you can use both: compression and dedup at the same time.
--
Robert Milkowski
http://milek.blogspot.com
han a
regression.
Are you sure it is not a debug vs. non-debug issue?
--
Robert Milkowski
http://milek.blogspot.com
outdone, they've stopped other OS releases as well. Surely,
this is a temporary situation.
AFAIK the dev OSOL releases are still being produced - they haven't been
made public since b134 though.
--
Robert Milkowski
http://milek.blogspot.com
(async or sync) to be written synchronously.
ps. still, I'm not saying it would make ZFS ACID.
--
Robert Milkowski
http://milek.blogspot.com
ndom reads.
http://blogs.sun.com/roch/entry/when_to_and_not_to
--
Robert Milkowski
http://milek.blogspot.com
performance as a much greater number of disk drives in RAID-10
configuration and if you don't need much space it could make sense.
--
Robert Milkowski
http://milek.blogspot.com
On 24/06/2010 14:32, Ross Walker wrote:
On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote:
On 23/06/2010 18:50, Adam Leventhal wrote:
Does it mean that for dataset used for databases and similar environments where
basically all blocks have fixed size and there is no other data
On 23/06/2010 19:29, Ross Walker wrote:
On Jun 23, 2010, at 1:48 PM, Robert Milkowski wrote:
128GB.
Does it mean that for dataset used for databases and similar environments where
basically all blocks have fixed size and there is no other data all parity
information will end-up on one
smaller writes to metadata that will distribute parity.
What is the total width of your raidz1 stripe?
4x disks, 16KB recordsize, 128GB file, random read with 16KB block.
--
Robert Milkowski
http://milek.blogspot.com
big of a file are you making? RAID-Z does not explicitly do the parity
distribution that RAID-5 does. Instead, it relies on non-uniform stripe widths
to distribute IOPS.
Adam
On Jun 18, 2010, at 7:26 AM, Robert Milkowski wrote:
Hi,
zpool create test raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0
dedup enabled in a pool you
can't really get a dedup ratio per share.
--
Robert Milkowski
http://milek.blogspot.com
rather expect all of them to get about the same
number of iops.
Any idea why?
--
Robert Milkowski
http://milek.blogspot.com
lly intend to get it integrated into ON? Because if
you do then I think that getting the Nexenta guys to expand on it would be
better for everyone instead of having them reinvent the wheel...
--
Robert Milkowski
http://milek.blogspot.com
.
Previous Versions should work even if you have one large filesystem
with all users' homes as directories within it.
What Solaris/OpenSolaris version did you try for the 5k test?
--
Robert Milkowski
http://milek.blogspot.com
? It maps the snapshots so Windows
can access them via "Previous Versions" from Explorer's context menu.
btw: the CIFS service supports Windows Shadow Copies out-of-the-box.
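For example (dataset name is made up) - any snapshot you take of the shared
dataset shows up for Windows clients under "Previous Versions":

  zfs set sharesmb=on tank/home
  zfs snapshot tank/home@`date +%Y%m%d-%H%M`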
--
Robert Milkowski
http://milek.blogspot.com
whole point of having L2ARC is to serve high random read iops from
RAM and L2ARC device instead of disk drives in a main pool.
--
Robert Milkowski
http://milek.blogspot.com
full priority.
Is this problem known to the developers? Will it be addressed?
http://sparcv9.blogspot.com/2010/06/slower-zfs-scrubsresilver-on-way.html
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6494473
--
Robert Milkowski
http://milek.blogspot.com
On 11/06/2010 10:58, Andrey Kuzmin wrote:
On Fri, Jun 11, 2010 at 1:26 PM, Robert Milkowski <mi...@task.gda.pl> wrote:
On 11/06/2010 09:22, sensille wrote:
Andrey Kuzmin wrote:
On Fri, Jun 11, 2010 at 1:54 AM, Richard Elling
<ri
cely coalesce these
IOs and do sequential writes with large blocks.
--
Robert Milkowski
http://milek.blogspot.com
port is nothing unusual and
has been the case for at least several years.
--
Robert Milkowski
http://milek.blogspot.com
On 10/06/2010 15:39, Andrey Kuzmin wrote:
On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski <mi...@task.gda.pl> wrote:
On 21/10/2009 03:54, Bob Friesenhahn wrote:
I would be interested to know how many IOPS an OS like Solaris
is able to push through a sing
0 IOPS to a single SAS port.
It also scales well - I ran the above dd's over 4x SAS ports at the same
time and it scaled linearly, achieving well over 400k IOPS.
hw used: x4270, 2x Intel X5570 2.93GHz, 4x SAS SG-PCIE8SAS-E-Z (fw.
1.27.3.0), connected to F5100.
--
Robert Milkowski
: why do you need to do
this at all? Isn't the ZFS ARC supposed to release memory when the
system is under pressure? Is that mechanism not working well in some
cases ... ?
My understanding is that if kmem gets heavily fragmented ZFS won't be
able to give back much memory.
s/zvol.c#1785)
- but zfs send|recv should replicate it I think.
--
Robert Milkowski
http://milek.blogspot.com
are very useful at
times.
--
Robert Milkowski
http://milek.blogspot.com
On 06/05/2010 21:45, Nicolas Williams wrote:
On Thu, May 06, 2010 at 03:30:05PM -0500, Wes Felter wrote:
On 5/6/10 5:28 AM, Robert Milkowski wrote:
sync=disabled
Synchronous requests are disabled. File system transactions
only commit to stable storage on the next DMU transaction
would probably decrease
performance and would invalidate all blocks if even a single l2arc
device died. Additionally, having each block on only one l2arc
device allows reading from all of the l2arc devices at the same time.
--
Robert Milkowski
http://milek.blogspot.com
ce failover in a
cluster L2ARC will be kept warm. Then the only thing which might affect
L2 performance considerably would be a L2ARC device failure...
--
Robert Milkowski
http://milek.blogspot.com
On 06/05/2010 13:12, Robert Milkowski wrote:
On 06/05/2010 12:24, Pawel Jakub Dawidek wrote:
I read that this property is not inherited and I can't see why.
If what I read is up-to-date, could you tell why?
It is inherited. Sorry for the confusion but there was a discussion if
it shou
opose that it shouldn't
but it was changed again during a PSARC review that it should.
And I did a copy'n'paste here.
Again, sorry for the confusion.
--
Robert Milkowski
http://milek.blogspot.com
nformation on it you might look at
http://milek.blogspot.com/2010/05/zfs-synchronous-vs-asynchronous-io.html
--
Robert Milkowski
http://milek.blogspot.com
fails prior to completing a series of
writes and I reboot using a failsafe (i.e. install disc), will the log be
replayed after a zpool import -f ?
yes
--
Robert Milkowski
http://milek.blogspot.com
when it is off it
will give you an estimate of the absolute maximum performance
increase (if any) from having a dedicated ZIL device.
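A quick way to get that estimate on builds with the sync property (older
builds needed the global zil_disable tunable instead); dataset name is made up
and this is for test systems only:

  zfs set sync=disabled tank/nfs
  # ... rerun the workload/benchmark here ...
  zfs set sync=standard tank/nfs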
--
Robert Milkowski
http://milek.blogspot.com
0 zil synchronicity
No promise on date, but it will bubble to the top eventually.
So everyone knows - it has been integrated into snv_140 :)
--
Robert Milkowski
http://milek.blogspot.com
ution*.
--
Robert Milkowski
http://milek.blogspot.com
s no room for improvement here. All I'm saying is
that it is not as easy a problem as it seems.
--
Robert Milkowski
http://milek.blogspot.com
ch means it couldn't discover it. does 'zpool import' (no other
options) list the pool?
--
Robert Milkowski
http://milek.blogspot.com
(and do so with -R). That way you can easily script it so the import happens
after your disks are available.
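For example (pool name is made up) - importing with -R sets cachefile=none, so
the pool won't be re-imported automatically at the next boot and your script
can do it once the devices have appeared:

  zpool import -R / tank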
--
Robert Milkowski
http://milek.blogspot.com
. Then you can "zpool import" I think requiring the -f or -F,
and reboot again normal.
I just did a test on Solaris 10/09 - and the system came up properly,
entirely on its own, with a failed pool.
zpool status showed the pool as unavailable (as I removed an underlying
device) which is fi
.
You
will need to power cycle. The system won't boot up again; you'll have to
The system should boot up properly even if some pools are not accessible
(except rpool, of course).
If that is not the case then there is a bug - last time I checked it
worked perfectly fine.
--
Robert
u can also find some benchmarks with sysbench + mysql or oracle.
I don't remember whether I posted some of my results or not, but I'm pretty
sure you can find others.
--
Robert Milkowski
http://milek.blogspot.com
attach EBS.
That way Solaris won't automatically try to import the pool and your
scripts will do it once disks are available.
--
Robert Milkowski
http://milek.blogspot.com
size for database vs.
default, atime off vs. on, lzjb, gzip, ssd). Also a comparison of
benchmark results with all default zfs settings compared to whatever
settings you used that gave you the best result.
--
Robert Milkowski
http://milek.blogspot.com
but it suggests that it had nothing to do with a double slash - rather
some process (your shell?) had an open file within the mountpoint. But
by supplying -f you forced zfs to unmount it anyway.
--
Robert Milkowski
http://milek.blogspot.com
On 21/04/2010 06:16, Ryan John wrote:
Thanks. That
without going through the
process of actually copying the blocks, but just duplicating its metadata like
NetApp does?
I don't know about file cloning, but why not put each VM on top of a zvol
- then you can clone the zvol?
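For example (dataset names are made up) - keep a "golden" image zvol, snapshot
it, and clone it per guest:

  zfs snapshot tank/vm/base@gold
  zfs clone tank/vm/base@gold tank/vm/guest01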
--
Robert Milkowski
http://milek.blogspot.com
,
while accessing \\filer\arch\myfolder\myfile.txt works.
Any ideas?
We are running snv_130.
You are not using the Samba daemon, are you?
--
Robert Milkowski
http://milek.blogspot.com
normal reboots zfs won't read data from slog.
--
Robert Milkowski
http://milek.blogspot.com
letely die as well.
Other than that you are fine even with an unmirrored slog device.
--
Robert Milkowski
http://milek.blogspot.com
ris is doing more or less for some time now.
look in the archives of this mailing list for more information.
--
Robert Milkowski
http://milek.blogspot.com
fine.
So for example - on x4540 servers try to avoid creating a pool with a
single RAID-Z3 group made of 44 disks; rather, create 4 RAID-Z2 groups
of 11 disks each, all of them in a single pool.
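A sketch of what that layout could look like (device names are placeholders,
not the actual x4540 numbering):

  zpool create tank \
    raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 \
    raidz2 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    raidz2 c2t6d0 c2t7d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c4t0d0 \
    raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t0d0 c5t1d0 c5t2d0 c5t3d0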
--
Robert Milkowski
http://milek.blogspot.com
On 02/04/2010 16:04, casper@sun.com wrote:
sync() is actually *async* and returning from sync() says nothing about
to clarify - in the case of ZFS, sync() is actually synchronous.
--
Robert Milkowski
http://milek.blogspot.com
the pool, resume the resource group and enable the storage resource
The other approach is to keep a pool under cluster management but
temporarily suspend the resource group so there won't be any unexpected
failovers (but it really depends on circumstances and what you are
t
s are part of a
cluster, both of them have full access to shared storage and you can
force zpool import on both nodes at the same time.
When you think about it, you actually need such behavior for RAC to work
on raw devices or real cluster volumes or filesystems, etc.
--
Robert Milkowski
http://milek.blogspot.com
you can export a share as sync (default) or
async, while on Solaris you can't really currently force an NFS
server to start working in async mode.
--
Robert Milkowski
http://milek.blogspot.com
sfy a race condition for
the sake of internal consistency. Applications which need to know their
next commands will not begin until after the previous sync write was
committed to disk.
ROTFL!!!
I think you should explain it even further for Casper :) :) :) :) :) :) :)
--
Robert Milkowski
e thing is
well-documented.
I double checked the documentation and you're right - the default has
changed to sync.
I haven't found in which RH version it happened but it doesn't really
matter.
So yes, I was wrong - the current default seems to be sync on L
On 31/03/2010 16:44, Bob Friesenhahn wrote:
On Wed, 31 Mar 2010, Robert Milkowski wrote:
or there might be an extra zpool level (or system wide) property to
enable checking checksums on every access from the ARC - there will be a
significant performance impact but then it might be acceptable for
Unless you are talking about doing regular snapshots and making sure
that application is consistent while doing so - for example putting all
Oracle tablespaces in a hot backup mode and taking a snapshot...
otherwise it doesn't really make sense.
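A minimal sketch of that idea (pool and dataset names are made up; assumes the
database is in archivelog mode so BEGIN BACKUP is allowed):

  echo "ALTER DATABASE BEGIN BACKUP;" | sqlplus -s "/ as sysdba"
  zfs snapshot -r tank/oracle@backup-`date +%Y%m%d`
  echo "ALTER DATABASE END BACKUP;"   | sqlplus -s "/ as sysdba"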
--
Robert Milkowski
http://milek.blogspot.com
need to re-import a database or recover
lots of files over NFS - your service is down and disabling ZIL makes a
recovery MUCH faster. Then there are cases when leaving the ZIL disabled
is acceptable as well.
--
Robert Milkowski
http://milek.blogspot.com
ld cause a significant performance problem.
or there might be an extra zpool level (or system wide) property to
enable checking checksums on every access from the ARC - there will be a
significant performance impact but then it might be acceptable for
really paranoid folks, especially with modern ha