I have a system that is (sometimes) used as an ftp server to serve g4u
disk images. Current machine is a Dell PowerEdge R320 with 16GB
memory running 6.1_STABLE from yesterday.
If I get 3 ftp clients all reading the same 45GB image from it I
quickly get into the situation that all memory is used
Edgar Fuß wrote:
> > 35685270 bytes/sec
> That's OK.
>
> > Note I removed -o log
> Shouldn't make a difference, I think.
I re-enabled -o log and did the dd test again on NetBSD 6.0 with the
patch you posted and vfs.wapbl.verbose_commit=2
# dd if=/dev/zero bs=64k of=out count=1
1+0 records in
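Both WAPBL knobs that come up in this thread can also be set at boot. A hedged /etc/sysctl.conf sketch (the values are the ones used for the tests above, not recommendations):

```
# /etc/sysctl.conf fragment (NetBSD) -- persist the WAPBL settings
# discussed in this thread across reboots.
vfs.wapbl.verbose_commit=2     # log details of each journal commit
vfs.wapbl.flush_disk_cache=1   # flush the disk cache on journal commit
```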
Mouse wrote:
> Depends. Is the filesystem mounted noatime (or read-only)? If not,
> there are going to be atime updates, and don't all inode updates get
> done synchronously? Or am I misunderstanding something?
That is not the case, anyway:
/dev/raid1e on /home type ffs (nodev, noexec, nosuid
> 35685270 bytes/sec
That's OK.
> Note I removed -o log
Shouldn't make a difference, I think.
On Wed, Sep 18, 2013 at 03:51:03PM -0400, Mouse wrote:
> >> Yes, I run 24 concurrent tar -czf as a test.
> > But those shouldn't do small synchronous writes, should they?
>
> Depends. Is the filesystem mounted noatime (or read-only)? If not,
> there are going to be atime updates, and don't all inode updates get
> done synchronously? Or am I misunderstanding something?
>> Yes, I run 24 concurrent tar -czf as a test.
> But those shouldn't do small synchronous writes, should they?
Depends. Is the filesystem mounted noatime (or read-only)? If not,
there are going to be atime updates, and don't all inode updates get
done synchronously? Or am I misunderstanding something?
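If atime updates turn out to matter, the usual fix is to mount with noatime. A hedged /etc/fstab sketch using the device and mount point shown elsewhere in the thread (the option list is illustrative, not a copy of the real entry):

```
# /etc/fstab entry (NetBSD) -- suppress access-time updates on /home.
/dev/raid1e  /home  ffs  rw,log,noatime  1  2
```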
Edgar Fuß wrote:
> EF> How fast can you write to the file system in question?
> ED> What test do you want me to perform?
> dd if=/dev/zero bs=64k
helvede# dd if=/dev/zero bs=64k of=out count=1
1+0 records in
1+0 records out
65536 bytes transferred in 18.365 secs (35685270 bytes/sec)
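A single 64k write says little about sustained throughput. A hedged sketch of a slightly larger version of the same test (file names and count are made up; run it on the filesystem being measured):

```shell
# Write 64MB through the filesystem; dd reports the transfer rate on
# stderr ("... bytes transferred in ... secs (... bytes/sec)" on NetBSD).
dd if=/dev/zero of=/tmp/ddtest.out bs=64k count=1000 2>/tmp/ddtest.err
cat /tmp/ddtest.err
```

Note that without a final sync the reported number can overstate what actually reaches the platters.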
EF> How fast can you write to the file system in question?
ED> What test do you want me to perform?
dd if=/dev/zero bs=64k
EF> Does your NFS load include a large amount of small synchronous (filesync)
EF> write operations?
ED> Yes, I run 24 concurrent tar -czf as a test.
But those shouldn't do small synchronous writes, should they?
Edgar Fuß wrote:
> How fast can you write to the file system in question?
What test do you want me to perform?
> Does your NFS load include a large amount of small synchronous (filesync)
> write operations?
Yes, I run 24 concurrent tar -czf as a test.
--
Emmanuel Dreyfus
http://hcpnet.free.fr
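For reference, the "24 concurrent tar -czf" load can be reproduced with a few lines of shell. A hedged sketch (all paths invented for the example):

```shell
# Generate a small source tree, then run 24 tar -czf jobs against it
# concurrently, mimicking the test load described above.
mkdir -p /tmp/tartest/src
dd if=/dev/zero of=/tmp/tartest/src/blob bs=64k count=16 2>/dev/null
for i in $(seq 1 24); do
    tar -czf /tmp/tartest/out.$i.tgz -C /tmp/tartest src &
done
wait    # block until all 24 archives are written
```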
Hello,
I'm trying to get kgdb working between two VirtualBox instances. (I
have verified that /dev/tty00 <-> /dev/tty00 works by running GENERIC
kernels and minicom on both virtual machines).
I basically did what is documented on:
http://www.netbsd.org/docs/kernel/kgdb.html
The webpage
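For context, the kgdb howto at that URL boils down to a kernel config fragment along these lines. A hedged sketch from memory of that document (option names and values should be checked against the page itself):

```
makeoptions     DEBUG="-g"             # keep symbols for gdb
options         KGDB                   # enable the remote kernel debugger
options         "KGDB_DEVNAME=\"com\"" # attach kgdb to a com (serial) port
options         KGDB_DEVADDR=0x3f8     # I/O address of com0
options         KGDB_DEVRATE=9600      # must match the speed gdb uses
```

On the debugging host, gdb then attaches over the serial line with `target remote /dev/tty00`.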
> In this setup, with vfs.wapbl.flush_disk_cache=1 I still get high loads, on both 6.0
> and -current.
> I assume there must be something bad with WAPBL/RAIDframe
Everything up to and including 6.0 is broken in this respect.
Thanks to hannken@, 6.1 does align journal flushes.
How fast can you write to the file system in question?
Emmanuel Dreyfus wrote:
> Thank you for saving my day. But now what happens?
> I note the SATA disks are in IDE emulation mode, and not AHCI. This is
> something I need to try changing:
Switched to AHCI. Below is how the hard disks are discovered (the relevant RAID
set is a RAID1 on wd0 and wd1).
In
Christos Zoulas wrote:
> You *might* need an fsck after power loss. If you crash and the disk syncs
> then you should be ok if the disk flushed (which it probably did if you
> say "syncing disks" after the panic).
I am not sure I ever encountered a crash where syncing disks after panic
did not lo
On Sep 18, 3:34am, m...@netbsd.org (Emmanuel Dreyfus) wrote:
-- Subject: Re: high load, no bottleneck
| Christos Zoulas wrote:
|
| > On large filesystems with many files fsck can take a really long time after
| > a crash. In my personal experience power outages are much less frequent than
| > crashes
On Sep 17, 5:38pm, buh...@nfbcal.org (Brian Buhrow) wrote:
-- Subject: Re: high load, no bottleneck
| Hello. How do you move the wapbl log to a drive other than the one
| on which the filesystem that's being logged is running? In other words, I
| thought the log existed on the same media
Thor Lancelot Simon wrote:
> In AHCI mode, you might be able to use ordered tags or "force unit access"
> (does SATA have this concept per command?) to force individual transactions
> or series of transactions out, rather than flushing out all the data every
> time just to get the metadata into t
On Tue, Sep 17, 2013 at 09:48:49PM +0200, Emmanuel Dreyfus wrote:
>
> Thank you for saving my day. But now what happens?
> I note the SATA disks are in IDE emulation mode, and not AHCI. This is
> something I need to try changing:
In AHCI mode, you might be able to use ordered tags or "force unit access"
On Wed, Sep 18, 2013 at 03:34:19AM +0200, Emmanuel Dreyfus wrote:
> Christos Zoulas wrote:
>
> > On large filesystems with many files fsck can take a really long time after
> > a crash. In my personal experience power outages are much less frequent than
> > crashes (I crash quite a lot since I al