On 18.01.2013 00:01, Rick Macklem wrote:
Wojciech Puchar wrote:
create a 10GB file (on a 2GB RAM machine, with some swap in use, to make
sure little cache would be available for the filesystem):
dd if=/dev/zero of=file bs=1m count=10k
block size is 32KB, fragment size 4k
now test random read
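A scaled-down sketch of that test (the thread does not show the read loop itself, so the loop and offsets below are assumed; the file is shrunk from 10GB to 8MB so the sketch is cheap to run, and bs=1024k is used instead of the BSD-style bs=1m for portability):

```shell
# Build a test file, then issue scattered single-block 4kB reads to
# approximate a random-read pattern. Block offsets are arbitrary.
dd if=/dev/zero of=testfile bs=1024k count=8 2>/dev/null
for off in 7 1900 512 1333 90 2000 45 1024; do   # 4kB block numbers
    dd if=testfile of=/dev/null bs=4k count=1 skip=$off 2>/dev/null
done
rm -f testfile
echo "random-read pass complete"
```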
But I doubt that such a change would improve performance in the
You doubt, but I am sure it would improve it a lot. Just imagine multiple
VM images on a filesystem, each running windoze with a 4kB cluster size and each writing something.
No matter what is written from within the VM, it ends up as read
Hello list,
At $DAILY_JOB I got involved with an ASI board that didn't have any
kind of FreeBSD support, so I ended up writing a driver for it.
If you try to ignore the blatant style(9) violations (of which there
are many, hopefully on the way to be cleaned up) it seems to work fine.
on 18/01/2013 13:39 Jimmy Olgeni said the following:
Hello list,
At $DAILY_JOB I got involved with an ASI board that didn't have any kind of
FreeBSD support, so I ended up writing a driver for it.
If you try to ignore the blatant style(9) violations (of which there are many,
hopefully on the way to be cleaned up) it seems to work fine.
On Fri, 18 Jan 2013, Andriy Gapon wrote:
See INTR_MPSAFE in bus_setup_intr(9).
Thanks! It went away. Back to testing...
--
jimmy
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe,
On Thu, 17 Jan 2013 16:12:17 -0600, Karim Fodil-Lemelin
fodillemlinka...@gmail.com wrote:
SAS controllers may connect to SATA devices, either directly connected
using native SATA protocol or through SAS expanders using SATA Tunneled
Protocol (STP).
The system is currently being put in place
The autotuning work reaches into many places of the kernel, and
while trying to tie up all loose ends I've gotten stuck on the kmem_map:
how it works and what its limitations are.
During startup the VM is initialized and an initial kernel virtual
memory map is set up in kmem_init(), covering the
Stefan Esser wrote:
On 18.01.2013 00:01, Rick Macklem wrote:
Wojciech Puchar wrote:
create a 10GB file (on a 2GB RAM machine, with some swap in use, to make
sure little cache would be available for the filesystem.
dd if=/dev/zero of=file bs=1m count=10k
block size is 32KB, fragment size 4k
I'll follow up with detailed answers to your questions over the weekend.
For now, I will, however, point out that you've misinterpreted the
tunables. In fact, they say that your kmem map can hold up to 16GB and the
current used space is about 58MB. Like other things, the kmem map is
auto-sized
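For reference, the auto-sizing can be inspected at runtime and overridden from the loader; a sketch (the sysctl names match the ones discussed in this thread, and the 16G value is only an example mirroring the figure above, not a recommendation):

```
# Runtime inspection of the kmem map (FreeBSD):
#   sysctl vm.kmem_size vm.kmem_map_size vm.kmem_map_free
#
# /boot/loader.conf override of the auto-sizing:
vm.kmem_size="16G"
```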
On Fri, Jan 18, 2013 at 7:29 AM, Andre Oppermann an...@freebsd.org wrote:
The (initial?) size of the kmem_map is determined by some voodoo magic,
a sprinkle of nmbclusters * PAGE_SIZE incrementor and lots of tunables.
However it seems to work out to an effective kmem_map_size of about 58MB
on
On Thursday, January 17, 2013 9:33:53 pm David Xu wrote:
I am trying to fix a bug in GNU grep: if you
want to skip FIFO files, it does not work. For example:
grep -D skip aaa .
will get stuck on a FIFO file.
Here is the patch:
Try adding the following to /boot/loader.conf and reboot:
hw.mpt.enable_sata_wc=1
The default value, -1, instructs the driver to leave the SATA drives at their
configuration default. Oftentimes this means that the MPT BIOS will turn off
the write cache on every system boot sequence. IT DOES THIS FOR A GOOD REASON!
An enabled write cache is counter to data reliability.
- Original Message -
From: Wojciech Puchar woj...@wojtek.tensor.gdynia.pl
To: Scott Long scott4l...@yahoo.com
Cc: Dieter BSD dieter...@gmail.com; freebsd-hackers@freebsd.org
freebsd-hackers@freebsd.org; gi...@freebsd.org gi...@freebsd.org;
sco...@freebsd.org sco...@freebsd.org;
disk would write data
I suspect that I'm encountering situations right now at netflix where this
advice is not true. I have drives that are seeing intermittent errors, then
being forced into reset after a timeout, and then coming back up with
filesystem problems. It's only a suspicion at
Wojciech writes:
If the computer has a UPS then write caching is fine. Even if FreeBSD crashes,
the disk would write the data.
That is incorrect. A UPS reduces the risk, but does not eliminate it.
It is impossible to completely eliminate the risk of having the
write cache on. If you care about your data you
On Fri, 2013-01-18 at 20:37 +0100, Wojciech Puchar wrote:
disk would write data
I suspect that I'm encountering situations right now at netflix where this
advice is not true. I have drives that are seeing intermittent errors,
then being forced into reset after a timeout, and then
That is incorrect. A UPS reduces the risk, but does not eliminate it.
nothing eliminates all risks.
But for most applications, you must have the write cache off,
and you need queuing (e.g. TCQ or NCQ) for performance. If
you have queuing, there is no need to turn the write cache
on.
did you
On Jan 18, 2013, at 1:12 PM, Dieter BSD dieter...@gmail.com wrote:
It is inexcusable that FreeBSD defaults to leaving the write cache on
for SATA PATA drives.
This was completely driven by the need to satisfy idiotic benchmarkers,
tech writers, and system administrators. It was a huge deal
and anyone who enabled SATA WC or complained about I/O slowness
would be forced into Siberian salt mines for the remainder of their lives.
so reserve a place for me there.
On Fri, 2013-01-18 at 22:18 +0100, Wojciech Puchar wrote:
and anyone who enabled SATA WC or complained about I/O slowness
would be forced into Siberian salt mines for the remainder of their lives.
so reserve a place for me there.
Yeah, me too. I prefer to go for all-out performance with
On 2013-Jan-18 12:12:11 -0800, Dieter BSD dieter...@gmail.com wrote:
adding hw.ata.wc=0 to /boot/loader.conf. The bigger problem is that
FreeBSD does not support queuing on all controllers that support it.
Not something that admins can fix, and inexcusable for an OS that
claims to care about
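For completeness, the knob mentioned above is a one-line loader tunable; a minimal fragment:

```
# /boot/loader.conf
hw.ata.wc=0    # leave the ATA/SATA drives' write cache disabled at boot
```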
On 18/01/2013 10:16 AM, Mark Felder wrote:
On Thu, 17 Jan 2013 16:12:17 -0600, Karim Fodil-Lemelin
fodillemlinka...@gmail.com wrote:
SAS controllers may connect to SATA devices, either directly
connected using native SATA protocol or through SAS expanders using
SATA Tunneled Protocol (STP).
This is all turning into a bikeshed discussion. As far as I can tell,
the basic original question was why a *SAS* (not a SATA) drive was not
performing as well as expected based upon experiences with Linux. I
still don't know whether reads or writes were being used for dd.
This morning, I ran
mpt0: LSILogic SAS/SATA Adapter port 0x1000-0x10ff mem
0x9991-0x99913fff,0x9990-0x9990 irq 28 at device 0.0 on pci11
mpt0: MPI Version=1.5.20.0
mpt0: Capabilities: ( RAID-0 RAID-1E RAID-1 )
mpt0: 0 Active Volumes (2 Max)
mpt0: 0 Hidden Drive Members (14 Max)
Ah. Historically IBM
On 01/18/13 08:39, John Baldwin wrote:
I (disclaimer: not a bsdgrep person) have just tested that bsdgrep
handles this case just fine.
The non-blocking part is required to make the code function; otherwise
the process will block in open() if the file is a FIFO
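A minimal shell illustration of both points, as a sketch of the intended behavior rather than the actual grep patch: a FIFO can be detected before open(2) is ever called (which is what -D skip should do), while the regular file is searched normally.

```shell
# Create one FIFO and one regular file; search only the non-FIFO.
mkfifo test_fifo
echo aaa > test_reg
for f in test_fifo test_reg; do
    if [ -p "$f" ]; then       # true for FIFOs (named pipes)
        echo "skipping $f"     # never open(2) it, so we cannot block
    else
        grep -l aaa "$f"       # prints the file name on a match
    fi
done
rm -f test_fifo test_reg
```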
On 01/18/13 08:39, John Baldwin wrote:
On Thursday, January 17, 2013 9:33:53 pm David Xu wrote:
I am trying to fix a bug in GNU grep: if you want to
skip FIFO files, it does not work. For example:
grep -D skip aaa .
it will be
Scott writes:
If I had my way, the WC would be off, everyone would be using SAS,
and anyone who enabled SATA WC or complained about I/O slowness
would be forced into Siberian salt mines for the remainder of their lives.
Actually, if you are running SAS, having SATA WC on or off wouldn't
On 18/01/2013 5:42 PM, Matthew Jacob wrote:
This is all turning into a bikeshed discussion. As far as I can tell,
the basic original question was why a *SAS* (not a SATA) drive was not
performing as well as expected based upon experiences with Linux. I
still don't know whether reads or writes
Matthew writes:
There is also no information in the original email as to which direction
the I/O was being sent.
In one of the followups, Karim reported:
# dd if=/dev/zero of=foo count=10 bs=1024000
10+0 records in
10+0 records out
10240000 bytes transferred in 19.615134 secs (522046 bytes/sec)
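That figure is easy to sanity-check: 10 records of 1024000 bytes is 10240000 bytes, and dividing by the elapsed time reproduces a rate of roughly 522 kB/s, i.e. about half a megabyte per second:

```shell
# 10 x 1024000 bytes over 19.615134 seconds -> bytes per second.
awk 'BEGIN { printf "%.0f\n", (10 * 1024000) / 19.615134 }'   # prints 522046
```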
On 1/15/13 4:03 PM, Trent Nelson wrote:
On Tue, Jan 15, 2013 at 02:33:41PM -0800, Ian Lepore wrote:
On Tue, 2013-01-15 at 14:29 -0800, Alfred Perlstein wrote:
On 1/15/13 1:43 PM, Konstantin Belousov wrote:
On Tue, Jan 15, 2013 at 04:35:14PM -0500, Trent Nelson wrote:
Luckily it's for
On 18 January 2013 19:11, Dieter BSD dieter...@gmail.com wrote:
Matthew writes:
There is also no information in the original email as to which direction
the I/O was being sent.
In one of the followups, Karim reported:
# dd if=/dev/zero of=foo count=10 bs=1024000
10+0 records in
10+0 records out