It's Columbia Pictures vs. Bunnell:
http://www.eff.org/legal/cases/torrentspy/columbia_v_bunnell_magistrate_order.pdf
The Register syndicated a Security Focus article that summarizes the
potential impact of the court decision:
http://www.theregister.co.uk/2007/08/08/litigation_data_retention/
You might also consider taking a look at this thread:
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-July/041760.html
Although I'm not certain, this sounds a lot like the other pool
fragmentation issues.
-j
On Wed, Aug 15, 2007 at 01:11:40AM -0700, Yaniv Aknin wrote:
Hello friends,
You can do it either way. Eric Kustarz has a good explanation of how to
set up incremental send/receive on your laptop. The description is on
his blog:
http://blogs.sun.com/erickustarz/date/20070612
The technique he uses is applicable to any ZFS filesystem.
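For reference, the basic shape of the technique is an initial full send followed by incremental sends between snapshot pairs. This is a minimal sketch; the dataset and snapshot names are placeholders, not from Eric's post:

```shell
# Take an initial snapshot and seed the backup copy.
zfs snapshot tank/home@monday
zfs send tank/home@monday | zfs receive backup/home

# Later, snapshot again and send only the delta between the two snapshots.
zfs snapshot tank/home@tuesday
zfs send -i tank/home@monday tank/home@tuesday | zfs receive backup/home
```

For a remote backup, the receive side just moves to the far end of an ssh pipe.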
-j
On Wed, Aug 08, 2007 at
In an attempt to speed up progress on some of the si3124 bugs that Roger
reported, I've created a workspace with the fixes for:
6565894 sata drives are not identified by si3124 driver
6566207 si3124 driver loses interrupts.
I'm attaching a driver which contains these fixes as well as a
When sequential I/O is done to the disk directly there is no performance
degradation at all.
All filesystems impose some overhead compared to the rate of raw disk
I/O. It's going to be hard to store data on a disk unless some kind of
filesystem is used. All the tests that Eric and I have
*sata_hba_list::list sata_hba_inst_t satahba_next |
    ::print sata_hba_inst_t satahba_dev_port |
    ::array void* 32 | ::print void* | ::grep .!=0 |
    ::print sata_cport_info_t cport_devp.cport_sata_drive |
    ::print -a sata_drive_info_t satadrv_features_support satadrv_settings
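In case it saves anyone some typing: the pipeline can also be run non-interactively by feeding it to mdb on stdin (this is just standard mdb -k usage against the live kernel, and needs root):

```shell
echo '*sata_hba_list::list sata_hba_inst_t satahba_next | ::print sata_hba_inst_t satahba_dev_port | ::array void* 32 | ::print void* | ::grep .!=0 | ::print sata_cport_info_t cport_devp.cport_sata_drive | ::print -a sata_drive_info_t satadrv_features_support satadrv_settings' | mdb -k
```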
At Matt's request, I did some further experiments and have found that
this appears to be particular to your hardware. This is not a general
32-bit problem. I re-ran this experiment on a 1-disk pool using both a
32-bit and a 64-bit kernel, and got identical results:
64-bit
==
$ /usr/bin/time dd
Marko,
Matt and I discussed this offline some more and he had a couple of ideas
about double-checking your hardware.
It looks like your controller (or disks, maybe?) is having trouble with
multiple simultaneous I/Os to the same disk. It looks like prefetch
aggravates this problem.
When I asked
Each drive is freshly formatted with one 2G file copied to it.
How are you creating each of these files?
Also, would you please include the output from the isalist(1) command?
These are snapshots of iostat -xnczpm 3 captured somewhere in the
middle of the operation.
Have you
This certainly isn't the case on my machine.
$ /usr/bin/time dd if=/test/filebench/largefile2 of=/dev/null bs=128k count=1
1+0 records in
1+0 records out
real        1.3
user        0.0
sys         1.2
# /usr/bin/time dd if=/dev/dsk/c0t0d0 of=/dev/null bs=128k count=1
1+0
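If anyone wants to reproduce this kind of comparison without raw-device access, the cached-vs-uncached difference shows up with any scratch file; the path and size below are arbitrary, not Marko's setup:

```shell
# Create a 64 MB scratch file (512 x 128k records).
dd if=/dev/zero of=/tmp/ddtest bs=128k count=512
sync

# Read it back twice. The first pass may go to disk; the second is
# typically served from the cache, so it should be much faster. On
# systems whose dd doesn't report a rate, wrap each read in
# /usr/bin/time as above.
dd if=/tmp/ddtest of=/dev/null bs=128k
dd if=/tmp/ddtest of=/dev/null bs=128k

rm /tmp/ddtest
```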
Marko,
I tried this experiment again using 1 disk and got nearly identical
times:
# /usr/bin/time dd if=/dev/dsk/c0t0d0 of=/dev/null bs=128k count=1
1+0 records in
1+0 records out
real       21.4
user        0.0
sys         2.4
$ /usr/bin/time dd if=/test/filebench/testfile
A couple more questions here.
[mpstat]
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
00 0 3109 3616 316 1965 17 48 45 2450 85 0 15
10 0 3127 3797 592 2174 17 63 46 1760 84 0 15
CPU minf mjf xcal
This seems a bit strange. What's the workload, and also, what's the
output for:
ARC_mru::print size lsize
ARC_mfu::print size lsize
and
ARC_anon::print size
For obvious reasons, the ARC can't evict buffers that are in use.
Buffers that are available to be evicted should be on the mru or mfu
Gar. This isn't what I was hoping to see. Buffers that aren't
available for eviction aren't listed in the lsize count. It looks like
the MRU has grown to 10Gb and most of this could be successfully
evicted.
The calculation for determining if we evict from the MRU is in
arc_adjust() and looks
Something else to consider: depending upon how you set arc_c_max, you
may just want to set arc_c and arc_p at the same time. If you try
setting arc_c_max, and then setting arc_c to arc_c_max, and then set
arc_p to arc_c / 2, do you still get this problem?
-j
On Thu, Mar 15, 2007 at 05:18:12PM
I suppose I should have been more forward about making my last point.
If arc_c_max isn't set in /etc/system, I don't believe that the ARC
will initialize arc_p to the correct value. I could be wrong about
this; however, next time you set c_max, set c to the same value as c_max
and set p to
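For what it's worth, the /etc/system route looks like the fragment below. The 4 GB cap is an arbitrary example value, and a reboot is required for it to take effect:

```
* /etc/system fragment: cap the ARC at 4 GB (value in bytes).
set zfs:zfs_arc_max=0x100000000
```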
It seems there isn't an algorithm in ZFS that detects sequential writes;
in a traditional filesystem such as UFS, that would trigger directio.
There is no directio for ZFS. Are you encountering a situation in which
you believe directio support would improve performance? If so, please
explain.
-j
Would the logic behind ZFS take full advantage of a heavily multicored
system, such as the Sun Niagara platform? Would it utilize all of the
32 concurrent threads for generating its checksums? Has anyone
compared ZFS on a Sun Tx000, to that of a 2-4 thread x64 machine?
Pete and I are working
And this feature is independent of whether or not the data is
DMA'ed straight into the user buffer.
I suppose so, however, it seems like it would make more sense to
configure a dataset property that specifically describes the caching
policy that is desired. When directio implies different
Basically speaking - there needs to be some sort of strategy for
bypassing the ARC or even parts of the ARC for applications that
may need to advise the filesystem of either:
1) the delicate nature of imposing additional buffering on their
data flow, or
2) the fact that they are already well optimized
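For what it's worth, later ZFS bits grew a per-dataset knob along exactly these lines; if your build has it, the caching policy can be expressed directly (the dataset name is a placeholder):

```shell
# Cache only metadata (not file data) in the ARC for this dataset;
# 'all' and 'none' are the other settable values.
zfs set primarycache=metadata tank/dbfiles
zfs get primarycache tank/dbfiles
```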
Note also that for most applications, the size of their IO operations
would often not match the current page size of the buffer, causing
additional performance and scalability issues.
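On the block-size point, the usual mitigation is to match the dataset's recordsize to the application's I/O size before the data is written; 8K here is just an example, e.g. for a database doing 8K I/Os:

```shell
# Must be set before the files are written; existing blocks keep their size.
zfs set recordsize=8k tank/db
zfs get recordsize tank/db
```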
Thanks for mentioning this, I forgot about it.
Since ZFS's default block size is configured to be larger than
Harley:
I had tried other sizes with much the same results, but
hadn't gone as large as 128K. With bs=128K, it gets worse:
| # time dd if=zeros-10g of=/dev/null bs=128k count=102400
| 81920+0 records in
| 81920+0 records out
|
| real    2m19.023s
| user    0m0.105s
| sys
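To put a number on that: rounding the real time to 139 seconds, the 128k run above works out to roughly 73 MiB/s (the shell arithmetic below uses integer division, so it rounds down):

```shell
# 81920 records * 131072 bytes/record, over ~139 s of real time.
bytes=$((81920 * 131072))
secs=139
echo "$((bytes / secs / 1048576)) MiB/s"   # prints "73 MiB/s"
```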
Harley:
Old 36GB drives:
| # time mkfile -v 1g zeros-1g
| zeros-1g 1073741824 bytes
|
| real    2m31.991s
| user    0m0.007s
| sys     0m0.923s
Newer 300GB drives:
| # time mkfile -v 1g zeros-1g
| zeros-1g 1073741824 bytes
|
| real    0m8.425s
| user    0m0.010s
| sys
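The gap is easier to see as throughput. Rounding the real times to 152 s and 8 s respectively (integer division, rounding down):

```shell
bytes=1073741824                               # mkfile -v 1g = 1 GiB
echo "old: $((bytes / 152 / 1048576)) MiB/s"   # 2m31.991s ~= 152 s
echo "new: $((bytes / 8 / 1048576)) MiB/s"     # 8.425 s   ~= 8 s
```

That's roughly 6 MiB/s on the old 36GB drives versus about 128 MiB/s on the newer 300GB ones.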