Re: Maxphys on -current?
g...@lexort.com (Greg Troxel) writes:

>When you run dd with bs=64k and then bs=1m, how different are the
>results? (I believe raw requests happen accordingly, vs MAXPHYS for fs
>etc. access.)

'raw requests' are split into MAXPHYS-size chunks. While using bs=1m reduces the syscall overhead somewhat, the major effect is that the system will issue requests for all 16 chunks (1M / MAXPHYS) concurrently. 16 chunks is also the maximum, so between bs=1m and bs=2m the difference is only the reduced syscall overhead.

The filesystem can do something similar: asynchronous writes are also issued in parallel, and for reading it may choose to read ahead blocks to optimize I/O requests, also for up to 16 chunks.

In reality, large contiguous I/O rarely happens, and the current UVM overhead (e.g. mapping buffers) becomes more significant the faster your drive is.

A larger MAXPHYS also reduces SATA command overhead; that's up to 10% for SATA3 (6Gbps) that you might gain, assuming that you manage to do large contiguous I/O.

NVMe is a different thing. While the hardware command overhead is negligible, you can mitigate software overhead by using larger chunks for I/O, and the gain can be much higher, at least for raw I/O.
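As a concrete illustration (a minimal sketch; the raw device name rwd0d is only an example), both of the following read the same 1 GiB from the raw device, but with MAXPHYS = 64 KiB the second command lets the kernel issue each 1 MiB request as 16 concurrent 64 KiB transfers:

  dd if=/dev/rwd0d of=/dev/null bs=64k count=16384
  dd if=/dev/rwd0d of=/dev/null bs=1m count=1024

Comparing the transfer rates reported by dd then separates the effect of request concurrency from mere syscall overhead.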
Re: Maxphys on -current?
Brian Buhrow writes:

> hello. I know that this has been a very long term project, but I'm
> wondering about the status of this effort? I note that FreeBSD-13 has
> a MAXPHYS value of 1048576 bytes. Have we found other ways to get more
> throughput from ATA disks that obviate the need for this setting which
> I'm not aware of? If not, is anyone working on this project? The wiki
> page says the project is stalled.

I haven't heard that anyone is.

When you run dd with bs=64k and then bs=1m, how different are the results? (I believe raw requests happen accordingly, vs MAXPHYS for fs etc. access.)
Maxphys on -current?
hello. I know that this has been a very long-term project, but I'm wondering about the status of this effort? I note that FreeBSD-13 has a MAXPHYS value of 1048576 bytes. Have we found other ways to get more throughput from ATA disks that obviate the need for this setting which I'm not aware of? If not, is anyone working on this project? The wiki page says the project is stalled. Any thoughts or news would be greatly appreciated.
-thanks
-Brian
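For reference, NetBSD's default MAXPHYS comes from sys/param.h; on most ports the definition looks roughly like the line below (the exact comment wording may vary by release), i.e. 64 KiB, one sixteenth of the FreeBSD-13 value:

  #define MAXPHYS (64 * 1024)   /* max raw I/O transfer size */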
Re: unable to create xfer table DMA map for drive 0, error=12
hello. The error says you ran out of memory. My guess is your machine has been running for a while and the contiguous memory available for the kernel to allocate has become fragmented, leading to the issue. I seem to remember versions of NetBSD earlier than 8 were prone to this issue. It may very well be that NetBSD is still prone to it, though I've not seen it for a while, even on my NetBSD-5 fleet of machines. My guess is a reboot will fix the issue.
-thanks
-Brian
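The errno behind error=12 can be checked against the system headers; on a NetBSD machine the lookup would look something like this (output shown as it typically appears, confirming that 12 is ENOMEM):

  $ grep -w ENOMEM /usr/include/sys/errno.h
  #define ENOMEM          12              /* Cannot allocate memory */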
unable to create xfer table DMA map for drive 0, error=12
I attached a 2.5" SSD to a machine, did a

  drvctl -r ata_hl atabus1

and got

  svwsata0:1: unable to create xfer table DMA map for drive 0, error=12
  wd2(svwsata0:1:0): using PIO mode 4

Is this a problem with the -6 that machine runs, or what does it mean?