Saso Kiselkov writes:
> On 06/12/2012 05:37 PM, Roch Bourbonnais wrote:
> >
> > So the xcalls are a necessary part of memory reclaiming, when one needs to
> > tear down the TLB entry mapping the physical memory (which can from here
> > on be repurposed).
Try with this /etc/system tunings :
set mac:mac_soft_ring_thread_bind=0
set mac:mac_srs_thread_bind=0
set zfs:zio_taskq_batch_pct=50
Le 12 juin 2012 à 11:37, Roch Bourbonnais a écrit :
>
> So the xcalls are a necessary part of memory reclaiming, when one needs to tear
> down the
So the xcalls are a necessary part of memory reclaiming, when one needs to tear
down the TLB entry mapping the physical memory (which can from here on be
repurposed).
So the xcalls are just part of this. They should not cause trouble, but they do:
they consume a CPU for some time.
That in turn can cause
The process should be scalable:
scrub all of the data on one disk using one disk's worth of IOPS;
scrub all of the data on N disks using N disks' worth of IOPS.
That will take ~ the same total time.
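A back-of-envelope sketch of why scrub time is ~constant in the number of disks: each disk is scrubbed with its own IOPS/bandwidth, so the time is data-per-disk over per-disk scrub rate, whatever N is. The numbers below are invented for illustration.

```shell
# per-disk scrub time = (data on each disk) / (scrub rate one disk sustains);
# adding disks adds both data and scrub bandwidth, so the ratio is unchanged
awk 'BEGIN {
  data_gb  = 1000   # data on each disk (made up)
  mb_per_s = 50     # scrub rate one disk can sustain (made up)
  printf "per-disk scrub time: ~%.1f hours, for 1 disk or N disks\n", data_gb * 1024 / mb_per_s / 3600
}'
```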
-r
Le 12 juin 2012 à 08:28, Jim Klimov a écrit :
> 2012-06-12 16:20, Roch Bourbonna
Scrubs are run at very low priority and yield very quickly in the presence of
other work.
So I really would not expect to see scrub create any impact on any other type of
storage activity.
Resilvering will push forward more aggressively on what it has to do, but
resilvering does not need to
rea
Edward Ned Harvey writes:
> Based on observed behavior measuring performance of dedup, I would say, some
> chunk of data and its associated metadata seem to have approximately the same
> "warmness" in the cache. So when the data gets evicted, the associated
> metadata tends to be evicted too. S
Josh, I don't know the internals of the device, but I have heard reports of SSDs
that would ignore flush write cache commands _and_ wouldn't have supercap
protection (nor battery).
Such devices are subject to data loss.
Did you also catch this thread
http://opensolaris.org/jive/thread
Le 7 févr. 2011 à 17:08, Yi Zhang a écrit :
> On Mon, Feb 7, 2011 at 10:26 AM, Roch wrote:
>>
>> Le 7 févr. 2011 à 06:25, Richard Elling a écrit :
>>
>>> On Feb 5, 2011, at 8:10 AM, Yi Zhang wrote:
>>>
>>>> Hi all,
>>>>
Le 7 févr. 2011 à 06:25, Richard Elling a écrit :
> On Feb 5, 2011, at 8:10 AM, Yi Zhang wrote:
>
>> Hi all,
>>
>> I'm trying to achieve the same effect of UFS directio on ZFS and here
>> is what I did:
>
> Solaris UFS directio has three functions:
> 1. improved async code path
> 2
Brandon High writes:
> On Tue, Nov 23, 2010 at 9:55 AM, Krunal Desai wrote:
> > What is the "upgrade path" like from this? For example, currently I
>
> The ashift is set in the pool when it's created and will persist
> through the life of that pool. If you set it at pool creation, it will
Le 5 août 2010 à 19:49, Ross Walker a écrit :
> On Aug 5, 2010, at 11:10 AM, Roch wrote:
>
>>
>> Ross Walker writes:
>>> On Aug 4, 2010, at 12:04 PM, Roch wrote:
>>>
>>>>
>>>> Ross Walker writes:
>>>>> On Au
Ross Walker writes:
> On Aug 4, 2010, at 12:04 PM, Roch wrote:
>
> >
> > Ross Walker writes:
> >> On Aug 4, 2010, at 9:20 AM, Roch wrote:
> >>
> >>>
> >>>
> >>> Ross Asks:
> >>> So on that note,
Ross Walker writes:
> On Aug 4, 2010, at 9:20 AM, Roch wrote:
>
> >
> >
> > Ross Asks:
> > So on that note, ZFS should disable the disks' write cache,
> > not enable them despite ZFS's COW properties because it
> > sho
question earlier, but got no answer: while an
iSCSI target is presented WCE does it respect the flush
command?
Yes. I would like to say "obviously" but it's been anything
but.
-r
Ross Walker writes:
> On Aug 4, 2010, at 3:52 AM, Roch wrote:
>
> >
> > Ro
Ross Walker writes:
> On Aug 3, 2010, at 12:13 PM, Roch Bourbonnais
> wrote:
>
> >
> > Le 27 mai 2010 à 07:03, Brent Jones a écrit :
> >
> >> On Wed, May 26, 2010 at 5:08 AM, Matt Connolly
> >> wrote:
> >>> I've set u
Le 27 mai 2010 à 07:03, Brent Jones a écrit :
> On Wed, May 26, 2010 at 5:08 AM, Matt Connolly
> wrote:
>> I've set up an iScsi volume on OpenSolaris (snv_134) with these commands:
>>
>> sh-4.0# zfs create rpool/iscsi
>> sh-4.0# zfs set shareiscsi=on rpool/iscsi
>> sh-4.0# zfs create -s -V 10g
v writes:
> Hi,
> A basic question regarding how zil works:
> For asynchronous write, will zil be used?
> For synchronous write, and if the io is small, will the whole io be placed on
> zil? or just the pointer be saved into zil? what about large size io?
>
Let me try.
ZIL : code and data stru
Can you post zpool status ?
Are your drives all the same size ?
-r
Le 30 mai 2010 à 23:37, Sandon Van Ness a écrit :
> I just wanted to make sure this is normal and is expected. I fully
> expected that as the file-system filled up I would see more disk space
> being used than with other file-sy
When we use one vmod, both machines are finished in about 6min45,
zilstat maxes out at about 4200 IOPS.
Using four vmods it takes about 6min55, zilstat maxes out at 2200
IOPS.
Can you try 4 concurrent tar to four different ZFS filesystems (same
pool).
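The suggested experiment could be sketched like this (the archive path and dataset names are hypothetical): extract the same tarball into four filesystems of the same pool at once, so the four streams run concurrently instead of serializing on one filesystem.

```shell
# fan out four extractions, one per filesystem, then wait for all of them;
# time the whole loop and compare against the single-filesystem run
for i in 1 2 3 4; do
  tar xf /var/tmp/test.tar -C /tank/fs$i &   # hypothetical paths
done
wait
```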
-r
Robert Milkowski writes:
> On 01/04/2010 20:58, Jeroen Roodhart wrote:
> >
> >> I'm happy to see that it is now the default and I hope this will cause the
> >> Linux NFS client implementation to be faster for conforming NFS servers.
> >>
> > Interesting thing is that apparently default
I think this is highlighting that there is an extra CPU requirement to
manage small blocks in ZFS.
The table would probably turn over if you go to 16K ZFS records and
16K reads/writes from the application.
The next step for you is to figure out how many read/write IOPS you
expect to take in the
ic
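A minimal sketch of the change suggested above (pool and dataset names are hypothetical). Note that recordsize only applies to files written after the property is set, so the DB files have to be recreated or copied afterwards.

```shell
# match the dataset recordsize to the application's 16K I/O size
zfs set recordsize=16k tank/db
zfs get -H -o value recordsize tank/db
```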
> to the IMAP server (called skiplist), and some are small flat files
> that are just rewritten. All they have in common is activity and
> frequent locking. They can be relocated as a whole.
>
> > > The second one is from:
> > >
> > >http://blo
Le 5 janv. 10 à 17:49, Robert Milkowski a écrit :
On 05/01/2010 16:00, Roch wrote:
That said, I truly am for an evolution for random read
workloads. Raid-Z on 4K sectors is quite appealing. It means
that small objects become nearly mirrored with good random read
performance while large objects
If bit rot occurs in X and the disk
holding Y dies, resilvering would generate garbage for Y.
This seems to force us to chunk up disks with every unit
checksummed even if freed. Secure deletion becomes a problem
as well. And you can end up madly searching for free
stripes, repositioning old blocks in p
Tim Cook writes:
> On Sun, Dec 27, 2009 at 6:43 PM, Bob Friesenhahn <
> bfrie...@simple.dallas.tx.us> wrote:
>
> > On Sun, 27 Dec 2009, Tim Cook wrote:
> >
> >>
> >> That is ONLY true when there's significant free space available/a fresh
> >> pool. Once those files have been deleted and
Le 28 déc. 09 à 00:59, Tim Cook a écrit :
On Sun, Dec 27, 2009 at 1:38 PM, Roch Bourbonnais > wrote:
Le 26 déc. 09 à 04:47, Tim Cook a écrit :
On Fri, Dec 25, 2009 at 11:57 AM, Saso Kiselkov
wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
I've started porting a video s
Le 26 déc. 09 à 04:47, Tim Cook a écrit :
On Fri, Dec 25, 2009 at 11:57 AM, Saso Kiselkov
wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
I've started porting a video streaming application to opensolaris on
ZFS, and am hitting some pretty weird performance issues. The thing
I'm
t
You might try setting zfs_scrub_limit to 1 or 2 and attaching a customer
service record to:
6494473 ZFS needs a way to slow down resilvering
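In the /etc/system convention used earlier in this thread, the tuning would look like the fragment below (assuming the tunable spelling cited above; takes effect after a reboot):

```shell
set zfs:zfs_scrub_limit=2
```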
-r
Le 7 oct. 09 à 06:14, John a écrit :
Hi,
We are running b118, with a LSI 3801 controller which is connected
to 44 drives (yes it's a lo
Le 28 sept. 09 à 17:58, Glenn Fawcett a écrit :
Been there, done that, got the tee shirt. A larger SGA will
*always* be more efficient at servicing Oracle requests for blocks.
You avoid going through all the IO code of Oracle and it simply
reduces to a hash.
Sounds like good advice.
Bob Friesenhahn writes:
> On Wed, 23 Sep 2009, Ray Clark wrote:
>
> > My understanding is that if I "zfs set checksum=" to
> > change the algorithm that this will change the checksum algorithm
> > for all FUTURE data blocks written, but does not in any way change
> > the checksum for prev
Le 23 sept. 09 à 19:07, Neil Perrin a écrit :
On 09/23/09 10:59, Scott Meilicke wrote:
How can I verify if the ZIL has been disabled or not? I am trying
to see how much benefit I might get by using an SSD as a ZIL. I
disabled the ZIL via the ZFS Evil Tuning Guide:
echo zil_disable/W0t1 |
I wonder if a taskq pool does not suffer from an effect similar to the one
observed for the nfsd pool:
6467988 Minimize the working set of nfsd threads
Created threads round-robin out of the taskq loop, doing little
work, but they wake up at least once every 5 minutes and so are never
reaped.
-r
Nils Goroll
stuart anderson writes:
> > > > Question :
> > > >
> > > > Is there a way to change the volume blocksize
> > say
> > > via 'zfs snapshot send/receive'?
> > > >
> > > > As I see things, this isn't possible as the
> > target
> > > volume (including property values) gets
> > overwritten
>
"100% random writes produce around 200 IOPS with a 4-6 second pause
around every 10 seconds. "
This indicates that the bandwidth you're able to transfer
through the protocol is about 50% greater than the bandwidth
the pool can offer to ZFS. Since this is not sustainable, you
s
Do you have the zfs primarycache property in this release?
If so, you could set it to 'metadata' or 'none'.
primarycache=all | none | metadata
Controls what is cached in the primary cache (ARC). If
this property is set to "all", then both user data and
metadat
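A one-line sketch of the suggestion (the dataset name is hypothetical): keep only metadata in the ARC for this dataset, leaving the rest of the cache to other workloads.

```shell
zfs set primarycache=metadata tank/stream
```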
Scott Lawson writes:
> Also you may wish to look at the output of 'iostat -xnce 1' as well.
>
> You can post those to the list if you have a specific problem.
>
> You want to be looking for error counts increasing and specifically 'asvc_t'
> for the service times on the disks. A higher num
Unlike NFS, which can issue sync writes and async writes, iSCSI needs
to be serviced with synchronous semantics (unless write caching is
enabled, caveat emptor).
If the workload issuing the iSCSI requests is single threaded, then
performance is governed by I/O size over rotational latency.
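As a rough illustration of that rule (all numbers invented): a single-threaded synchronous stream that pays about one rotation of latency per write on a 7200 RPM disk is capped near 120 writes/s, so throughput is just that rate times the I/O size, no matter how fast the pipe is.

```shell
# throughput ~= io_size / rotational_latency for one synchronous thread
awk 'BEGIN {
  rpm    = 7200
  rot_ms = 60000 / rpm              # ~8.3 ms per rotation
  io_kb  = 32                       # illustrative I/O size
  printf "~%.0f IOPS, ~%.1f MB/s at %dK I/O\n", 1000/rot_ms, io_kb/1024*1000/rot_ms, io_kb
}'
```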
roland writes:
> >SSDs with capacitor-backed write caches
> >seem to be fastest.
>
> how to distinguish them from SSDs without one?
> i never saw this explicitly mentioned in the specs.
They probably don't have one then (or they should fire their
entire marketing dept).
Capacitors allow
Le 5 août 09 à 06:06, Chookiex a écrit :
Hi All,
You know, ZFS affords a very big buffer for write IO.
So, when we write a file, the first stage is to put it in the buffer.
But what if the file is VERY short-lived? Does it still bring IO to disk?
Or does it just put the metadata and data in memory, and then
re
Tim Cook writes:
> On Tue, Aug 4, 2009 at 7:33 AM, Roch Bourbonnais
> wrote:
>
> >
> > Le 4 août 09 à 13:42, Joseph L. Casale a écrit :
> >
> > does anybody have some numbers on speed on sata vs 15k sas?
> >>>
> >>
> >>
Try
zpool import 2169223940234886392 [storage1]
-r
Le 4 août 09 à 15:11, David a écrit :
I seem to have run into an issue with a pool I have, and haven't
found a resolution yet. The box is currently running FreeBSD 7-
STABLE with ZFS v13, (Open)Solaris doesn't support my raid controller.
Le 26 juil. 09 à 01:34, Toby Thain a écrit :
On 25-Jul-09, at 3:32 PM, Frank Middleton wrote:
On 07/25/09 02:50 PM, David Magda wrote:
Yes, it can be affected. If the snapshot's data structure / record
is
underneath the corrupted data in the tree then it won't be able to
be
reached.
Le 19 juil. 09 à 16:47, Bob Friesenhahn a écrit :
On Sun, 19 Jul 2009, Ross wrote:
The success of any ZFS implementation is *very* dependent on the
hardware you choose to run it on.
To clarify:
"The success of any filesystem implementation is *very* dependent on
the hardware you choose
Le 4 août 09 à 13:42, Joseph L. Casale a écrit :
does anybody have some numbers on speed on sata vs 15k sas?
The next chance I get, I will do a comparison.
Is it really a big difference?
I noticed a huge improvement when I moved a virtualized pool
off a series of 7200 RPM SATA discs to ev
Henk Langeveld writes:
> Mario Goebbels wrote:
> >>> An introduction to btrfs, from somebody who used to work on ZFS:
> >>>
> >>> http://www.osnews.com/story/21920/A_Short_History_of_btrfs
> >> *very* interesting article.. Not sure why James didn't directly link to
> >> it, but courteous o
"C. Bergström" writes:
> James C. McPherson wrote:
> > An introduction to btrfs, from somebody who used to work on ZFS:
> >
> > http://www.osnews.com/story/21920/A_Short_History_of_btrfs
> >
> *very* interesting article.. Not sure why James didn't directly link to
> it, but courteous of
Bob Friesenhahn writes:
> On Wed, 29 Jul 2009, Jorgen Lundman wrote:
> >
> > For example, I know rsync and tar does not use fdsync (but dovecot does)
> > on
> > its close(), but does NFS make it fdsync anyway?
>
> NFS is required to do synchronous writes. This is what allows NFS
> cli
The things I'd pay most attention to would be all single-threaded 4K,
32K, and 128K writes to the raw device.
Make sure the SSD has a capacitor and enable the write cache on the
device.
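A sketch of that measurement (the device path is hypothetical, and this writes to the raw device, so only point it at a scratch disk): single-threaded sequential writes at the sizes a slog cares about, timed so you can divide count by elapsed time to get IOPS.

```shell
# ptime (Solaris) prints elapsed time for each dd run
for bs in 4k 32k 128k; do
  echo "== block size $bs =="
  ptime dd if=/dev/zero of=/dev/rdsk/c1t0d0s0 bs=$bs count=10000
done
```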
-r
Le 5 juil. 09 à 12:06, James Lever a écrit :
On 04/07/2009, at 3:08 AM, Bob Friesenhahn wrote:
I
zio_assess went away with SPA 3.0 :
6754011 SPA 3.0: lock breakup, i/o pipeline refactoring, device failure
handling
You now have :
zio_vdev_io_assess(zio_t *zio)
Yes it's one of the last stages of the I/O pipeline (see zio_impl.h).
-r
tester writes:
> Hi,
>
> What does zio
Stuart Anderson writes:
>
> On Jun 21, 2009, at 10:21 PM, Nicholas Lee wrote:
>
> >
> >
> > On Mon, Jun 22, 2009 at 4:24 PM, Stuart Anderson
> > > > wrote:
> >
> > However, it is a bit disconcerting to have to run with reduced data
> > protection for an entire week. While I am certai
tester writes:
> Hello,
>
> Trying to understand the ZFS IO scheduler, because of the async nature
> it is not very apparent, can someone give a short explanation for each
> of these stack traces and for their frequency
>
> this is the command
>
> dd if=/dev/zero of=/test/test1/tras
p://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
If you do, then be prepared to unmount or reboot all clients of
the server in case of a crash in order to clear their
corrupted caches.
This is in no way a ZIL problem nor a ZFS problem.
http://blogs.sun.com/roch/entry/nfs_and_zfs
We're definitely working on problems contributing to such 'picket
fencing'.
But beware of equating symptoms with root-caused issues. We already know
that picket fencing is multicause and
we're tracking the ones we know about: there is something related to
taskq cpu scheduling and
something
Le 18 juin 09 à 20:23, Richard Elling a écrit :
Cor Beumer - Storage Solution Architect wrote:
Hi Jose,
Well it depends on the total size of your Zpool and how often these
files are changed.
...and the average size of the files. For small files, it is likely
that the default
recordsize
Le 16 juin 09 à 19:55, Jose Martins a écrit :
Hello experts,
IHAC that wants to put more than 250 Million files on a single
mountpoint (in a directory tree with no more than 100 files on each
directory).
He wants to share such filesystem by NFS and mount it through
many Linux Debian clients
a bit before committing to disk. Then,
when it's time to commit to disk, it realizes the disk has failed, and
from then enters those failmode conditions (wait, continue, panic, ?).
Could this be the case?
http://blogs.sun.com/roch/date/20080514
--
Brent Jones
br...@servuhome.net
Hi Noel.
zpool iostat -v
for a working pool and for a problem pool would help to see
the type of pool and its capacity.
I assume the problem is not the source of the data.
To read a large number of small files typically requires lots
and lots of threads (say 100 per source disk).
Is da
Le 8 févr. 09 à 13:44, David Magda a écrit :
On Feb 8, 2009, at 16:12, Vincent Fox wrote:
Do you think having log on a 15K RPM drive with the main pool
composed of 10K RPM drives will show worthwhile improvements? Or am
I chasing a few percentage points?
Another important question is wheth
Le 8 févr. 09 à 13:12, Vincent Fox a écrit :
Thanks I think I get it now.
Do you think having log on a 15K RPM drive with the main pool
composed of 10K RPM drives will show worthwhile improvements? Or am
I chasing a few percentage points?
In cases where logzilla helps, then this shoul
Sounds like the device is not ignoring the cache flush requests sent
down by ZFS/zil commit.
If the SSD is able to drain its internal buffer to flash on a power
outage, then it needs to ignore the cache flush.
You can do this on a per-device basis. It's kludgy tuning, but hope the
instructi
Eric D. Mudama writes:
> On Tue, Jan 20 at 21:35, Eric D. Mudama wrote:
> > On Tue, Jan 20 at 9:04, Richard Elling wrote:
> >>
> >> Yes. And I think there are many more use cases which are not
> >> yet characterized. What we do know is that using an SSD for
> >> the separate ZIL log works
Eric D. Mudama writes:
> On Mon, Jan 19 at 23:14, Greg Mason wrote:
> >So, what we're looking for is a way to improve performance, without
> >disabling the ZIL, as it's my understanding that disabling the ZIL
> >isn't exactly a safe thing to do.
> >
> >We're looking for the best way to
Nicholas Lee writes:
> Another option to look at is:
> set zfs:zfs_nocacheflush=1
> http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide
>
> Best option is to get a a fast ZIL log device.
>
>
> Depends on your pool as well. NFS+ZFS means zfs will wait for write
> comple
ain level of
> performance, and what we've got with the ZIL on the pool is completely
> unacceptable.
>
> Thanks for any pointers you may have...
>
I think you found out from the replies that this NFS issue is not
related to ZFS nor a ZIL malfunction in any way.
http:/
Chookiex writes:
> Hi all,
>
> I have 2 questions about ZFS.
>
> 1. I have created a snapshot in my pool1/data1, and zfs send/recv it to
> pool2/data2, but I found the USED in zfs list is different:
> NAME USED AVAIL REFER MOUNTPOINT
> pool2/data2 160G 1.44T
Tim writes:
> On Tue, Jan 13, 2009 at 6:26 AM, Brian Wilson wrote:
>
> >
> > Does creating ZFS pools on multiple partitions on the same physical drive
> > still run into the performance and other issues that putting pools in
> > slices
> > does?
> >
>
>
> Is zfs going to own the whol
milosz writes:
> iperf test coming out fine, actually...
>
> iperf -s -w 64k
>
> iperf -c -w 64k -t 900 -i 5
>
> [ ID] Interval Transfer Bandwidth
> [  5]  0.0-899.9 sec  81.1 GBytes  774 Mbits/sec
>
> totally steady. i could probably implement some tweaks to improve it,
Le 12 janv. 09 à 17:39, Carson Gaspar a écrit :
> Joerg Schilling wrote:
>> Fabian Wörner wrote:
>>
>>> my post was not to start a discuss gpl<>cddl.
>>> It just an idea to promote ZFS and OPENSOLARIS
>>> If it was against anything than against exfat, nothing else!!!
>>
>> If you like to pro
Le 13 janv. 09 à 21:49, Orvar Korvar a écrit :
> Oh, thanx for your very informative answer. Ive added a link to your
> information in this thread:
>
> But... Sorry, but I wrote wrong. I meant "I will not recommend
> against HW raid + ZFS anymore" instead of "... recommend against HW
> raid
Le 4 janv. 09 à 21:09, milosz a écrit :
> thanks for your responses, guys...
>
> the nagle's tweak is the first thing i did, actually.
>
> not sure what the network limiting factors could be here... there's
> no switch, jumbo frames are on... maybe it's the e1000g driver?
> it's been wonky s
Try setting the cachemode property on the target filesystem.
Also verify that the source can pump data through the net at the
desired rate if the target is /dev/null.
-r
Le 8 janv. 09 à 18:46, gnomad a écrit :
> I have just built an opensolaris box (2008.11) as a small fileserver
> (6x 1TB
18.4  403.3   1.2   2.9  1.1  0.2   2.5   0.6  15  24 c6t5d0
>    19.3  402.7   1.2   2.9  1.1  0.3   2.5   0.6  15  25 c6t6d0
>    18.8  406.1   1.2   2.9  1.0  0.2   2.4   0.6  15  25 c6t7d0
>
>
> Any experts here to say if that's just because bonnie
Scott Laird writes:
> On Fri, Jan 2, 2009 at 8:54 PM, Richard Elling
> wrote:
> > Scott Laird wrote:
> >>
> >> On Fri, Jan 2, 2009 at 4:52 PM, Akhilesh Mritunjai
> >> wrote:
> >>
> >>>
> >>> As for source, here you go :)
> >>>
> >>>
> >>> http://cvs.opensolaris.org/source/xref/onnv/o
Ahmed Kamal writes:
> Hi,
>
> I have been doing some basic performance tests, and I am getting a big hit
> when I run UFS over a zvol, instead of directly using zfs. Any hints or
> explanations is very welcome. Here's the scenario. The machine has 30G RAM,
> and two IDE disks attached. The
Alastair Neil writes:
> I am attempting to create approx 10600 zfs file systems across two
> pools. The devices underlying the pools are mirrored iscsi volumes
> shared over a dedicated gigabit Ethernet with jumbo frames enabled
> (MTU 9000) from a Linux Openfiler 2.3 system. I have added a co
Marcelo Leal writes:
> Hello all,
> Somedays ago i was looking at the code and did see some variable that
> seems to make a correlation between the size of the data, and if the
> data is written to the slog or directly to the pool. But i did not
> find it anymore, and i think is way more com
Le 9 déc. 08 à 03:16, Brent Jones a écrit :
> On Mon, Dec 8, 2008 at 3:09 PM, milosz wrote:
>> hi all,
>>
>> currently having trouble with sustained write performance with my
>> setup...
>>
>> ms server 2003/ms iscsi initiator 2.08 w/intel e1000g nic directly
>> connected to snv_101 w/ intel
Le 20 déc. 08 à 22:34, Dmitry Razguliaev a écrit :
> Hi, I faced with a similar problem, like Ross, but still have not
> found a solution. I have raidz out of 9 sata disks connected to
> internal and 2 external sata controllers. Bonnie++ gives me the
> following results:
> nexenta,8G,
> 10
Hi Qihua, there are many reasons why the recordsize does not govern
the I/O size directly. Metadata I/O is one, ZFS I/O scheduler
aggregation is another.
The application behavior might be a third.
Make sure to create the DB files after modifying the ZFS property.
-r
Le 26 déc. 08 à 11:49, q
Le 15 déc. 08 à 01:13, Ahmed Kamal a écrit :
> Hi,
>
> I have been doing some basic performance tests, and I am getting a
> big hit when I run UFS over a zvol, instead of directly using zfs.
> Any hints or explanations is very welcome. Here's the scenario. The
> machine has 30G RAM, and two
Le 15 nov. 08 à 08:49, Nicholas Lee a écrit :
>
>
> On Sat, Nov 15, 2008 at 7:54 AM, Richard Elling <[EMAIL PROTECTED]
> > wrote:
> In short, separate logs with rotating rust may reduce sync write
> latency by
> perhaps 2-10x on an otherwise busy system. Using write optimized SSDs
> will redu
Bill Sommerfeld writes:
> On Wed, 2008-10-22 at 10:30 +0100, Darren J Moffat wrote:
> > I'm assuming this is local filesystem rather than ZFS backed NFS (which
> > is what I have).
>
> Correct, on a laptop.
>
> > What has setting the 32KB recordsize done for the rest of your home
> > di
Tim writes:
> On Sat, Nov 29, 2008 at 11:06 AM, Ray Clark <[EMAIL PROTECTED]>wrote:
>
> > Please help me understand what you mean. There is a big difference between
> > being unacceptably slow and not working correctly, or between being
> > unacceptably slow and having an implementation pro
Le 22 oct. 08 à 21:02, Bill Sommerfeld a écrit :
> On Wed, 2008-10-22 at 09:46 -0700, Mika Borner wrote:
>> If I turn zfs compression on, does the recordsize influence the
>> compressratio in anyway?
>
> zfs conceptually chops the data into recordsize chunks, then
> compresses
> each chunk inde
Thomas, for long-latency fat links, it should be quite
beneficial to set the socket buffer on the receive side
(instead of having users tune tcp_recv_hiwat).
The throughput of a TCP connection is gated by
"receive socket buffer / round trip time".
Could that be Ross' problem?
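Putting illustrative numbers on that gating rule: a 256 KB receive buffer on a 100 ms round-trip link caps a single TCP connection at 2.5 MB/s, however fat the pipe is.

```shell
# sustained throughput <= receive socket buffer / round trip time
awk 'BEGIN {
  buf_kb = 256; rtt_ms = 100    # illustrative numbers
  printf "cap: ~%.1f MB/s (%d KB buffer / %d ms RTT)\n", buf_kb/1024/(rtt_ms/1000), buf_kb, rtt_ms
}'
```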
-r
Ross Smith wr
Le 23 oct. 08 à 05:40, Constantin Gonzalez a écrit :
> Hi,
>
> Bob Friesenhahn wrote:
>> On Wed, 22 Oct 2008, Neil Perrin wrote:
>>> On 10/22/08 10:26, Constantin Gonzalez wrote:
3. Disable ZIL[1]. This is of course evil, but one customer
pointed out to me
that if a tar xvf we
Le 2 oct. 08 à 09:21, Christiaan Willemsen a écrit :
> Hi there.
>
> I just got a new Adaptec RAID 51645 controller in because the old
> (other type) was malfunctioning. It is paired with 16 Seagate 15k5
> disks, of which two are used with hardware RAID 1 for OpenSolaris
> snv_98, and the r
Leave the default recordsize. With a 128K recordsize, files smaller than
128K are stored as a single record
tightly fitted to the smallest possible number of disk sectors. Reads and
writes are then managed with fewer ops.
Not tuning the recordsize is very generally more space efficient and
more perf
Files are stored as either a single record (adjusted to the size of the
file) or multiple fixed-size records.
-r
Le 25 août 08 à 09:21, Robert a écrit :
> Thanks for your response, from which I have known more details.
> However, there is one thing I am still not clear--maybe at first
Kyle McDonald writes:
> Ross wrote:
> > Just re-read that and it's badly phrased. What I meant to say is that a
> > raid-z / raid-5 array based on 500GB drives seems to have around a 1 in 10
> > chance of losing some data during a full rebuild.
> >
> >
> >
> Actually, I think it'
initiator_host:~ # dd if=/dev/zero bs=1k of=/dev/dsk/c5t0d0
count=100
So this is going at 3000 x 1K writes per second, or
330 usec per write. The iSCSI target is probably doing an
over-the-wire operation for each request. So it looks fine
at first glance.
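Checking that arithmetic: 3000 synchronous writes per second works out to about 333 microseconds each, consistent with one network round trip per 1K request.

```shell
# per-write latency = 1 second / writes-per-second
awk 'BEGIN { printf "%.0f usec per write\n", 1e6/3000 }'
```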
-r
Cody Campbell writes:
Peter Tribble writes:
> A question regarding zfs_nocacheflush:
>
> The Evil Tuning Guide says to only enable this if every device is
> protected by NVRAM.
>
> However, is it safe to enable zfs_nocacheflush when I also have
> local drives (the internal system drives) using ZFS, in particula
Robert Milkowski writes:
> Hello Roch,
>
> Saturday, June 28, 2008, 11:25:17 AM, you wrote:
>
>
> RB> I suspect, a single dd is cpu bound.
>
> I don't think so.
>
We're nearly so as you show. More below.
> Se below one with a stri
Le 28 juin 08 à 05:14, Robert Milkowski a écrit :
> Hello Mark,
>
> Tuesday, April 15, 2008, 8:32:32 PM, you wrote:
>
> MM> The new write throttle code put back into build 87 attempts to
> MM> smooth out the process. We now measure the amount of time it
> takes
> MM> to sync each transaction g
Bob Friesenhahn writes:
> On Tue, 15 Apr 2008, Mark Maybee wrote:
> > going to take 12sec to get this data onto the disk. This "impedance
> > mis-match" is going to manifest as pauses: the application fills
> > the pipe, then waits for the pipe to empty, then starts writing again.
> > Note t
Le 30 mars 08 à 15:57, Kyle McDonald a écrit :
> Fred Oliver wrote:
>>
>> Marion Hakanson wrote:
>>> [EMAIL PROTECTED] said:
I am having trouble destroying a zfs file system (device busy) and
fuser
isn't telling me who has the file open: . . .
This situation appears to occur e
Le 3 mars 08 à 09:58, Robert Milkowski a écrit :
> Hello zfs-discuss,
>
>
> I had a zfs file system with recordsize=8k and a couple of large
> files. While doing zfs send | zfs recv I noticed it's doing
> about 1500 IOPS but with block size 8K so total throughput
> wasn't impr
Le 1 mars 08 à 22:14, Bill Shannon a écrit :
> Jonathan Edwards wrote:
>>
>> On Mar 1, 2008, at 3:41 AM, Bill Shannon wrote:
>>> Running just plain "iosnoop" shows accesses to lots of files, but
>>> none
>>> on my zfs disk. Using "iosnoop -d c1t1d0" or "iosnoop -m
>>> /export/home/shannon"
>>>
Le 28 févr. 08 à 21:00, Jonathan Loran a écrit :
>
>
> Roch Bourbonnais wrote:
>>
>> Le 28 févr. 08 à 20:14, Jonathan Loran a écrit :
>>
>>>
>>> Quick question:
>>>
>>> If I create a ZFS mirrored pool, will the read performance get
Le 28 févr. 08 à 20:14, Jonathan Loran a écrit :
>
> Quick question:
>
> If I create a ZFS mirrored pool, will the read performance get a
> boost?
> In other words, will the data/parity be read round robin between the
> disks, or do both mirrored sets of data and parity get read off of
> both