Re: Load average changed in 6.1?

2017-04-24 Thread STeve Andre'

On 04/24/17 04:42, Christoph Borsbach wrote:

Hello everyone,
first off: I know that the topic of "load" has been discussed numerous 
times, and been a topic on undeadly [1]. I know that this number is not 
that important.


However:
After upgrading 3 of my systems to 6.1 (from 6.0) I noticed the load 
average (15min value) has gone up by roughly 1.0, both in the output of 
daily(8) over some days now and when checking manually with w, top, or 
uptime.

The systems in question differ a bit:
- amd64 MP (KVM-Guest, dmesg [2], load-example [3])
- amd64 SP (VMware Guest, dmesg and examples not handy right now)
- i386 SP (Alix, dmesg [4], load examples [5])

All were upgraded last week with bsd.rd to 6.1-RELEASE. The systems 
perform as well as ever and nothing was changed aside from upgrading 
the system and packages. I'm just interested in what could have changed 
the behavior. A quick check of src/sys/uvm/uvm_meter.c does not show 
any recent changes.
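For reference, the load average that uvm_meter.c maintains is an exponentially weighted moving average of the run-queue length, sampled every few seconds. A rough Python sketch of that recurrence (the kernel uses fixed-point arithmetic; the 5-second interval and 60/300/900-second windows are the traditional BSD values, and plain floats are used here for clarity):

```python
import math

# Sketch of the classic BSD load-average recurrence (uvm_meter.c keeps
# these as fixed-point values; plain floats are used here for clarity).
INTERVAL = 5               # seconds between samples of the run queue
PERIODS = (60, 300, 900)   # 1-, 5- and 15-minute averaging windows

def update_loads(loads, nrun):
    """One sampling tick: decay each average toward nrun."""
    out = []
    for load, period in zip(loads, PERIODS):
        cexp = math.exp(-INTERVAL / period)
        out.append(load * cexp + nrun * (1.0 - cexp))
    return out

# With a constant run-queue length of 2, all three averages converge
# toward 2.0; the 15-minute value gets there last.
loads = [0.0, 0.0, 0.0]
for _ in range(3600 // INTERVAL):   # simulate one hour
    loads = update_loads(loads, nrun=2)
print(["%.2f" % v for v in loads])
```

Nothing about the recurrence itself needs to change for the averages to shift; it is enough for the sampled run-queue count to be taken differently.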


Has anybody observed this as well and has an explanation for this?

Thanks,
Christoph


Christoph,

What has changed from 6.0 to 6.1 is the entire operating system.  uvm_meter.c
may not have changed, but other subsystems have, which affects
the way things work.  It's the same as playing mp3s: you get
stutter (or not) depending on whether disk I/O or other things are in play.


Any OS is like a city: largely invisible to us, interactions go on that can 
have ripple effects on how things work.  The concept of a load average
is nebulous at best.  You can spike the load averages in any number of
ways, so using them to determine how busy the system is at any point in
time is not great.  Better to measure how fast the system delivers web pages 
or files, or ...


Perhaps the uptime / w documentation should explicitly say that 
comparing load averages across different versions is a bit like comparing 
apples to spark plugs.


--STeve Andre'



Re: Load average changed in 6.1?

2017-04-24 Thread Christoph Borsbach
On Mon, Apr 24, 2017 at 11:09:37 +0200, Andreas Kusalananda Kähäri wrote:
> On Mon, Apr 24, 2017 at 10:42:16AM +0200, Christoph Borsbach wrote:
> > Hello everyone,
> > first off: I know that the topic of "load" has been discussed numerous
> > times, and been a topic on undeadly [1]. I know that this number is not that
> > important.
> > 
> > However:
> > After upgrading 3 of my systems to 6.1 (from 6.0) I noticed the load average
> > (15min value) has gone up by roughly 1.0, both in the output of daily(8)
> > over some days now and when checking manually with w, top, or uptime.
> 
> Yes. If I understand correctly, this is because the kernel threads are
> now counted towards the load average.  An OpenBSD system at rest should
> now have a load average of about 1.

Thanks, that makes sense and explains it!

Christoph

> 
> Regards,
> Kusalananda



Re: Load average changed in 6.1?

2017-04-24 Thread Andreas Kusalananda Kähäri
On Mon, Apr 24, 2017 at 10:42:16AM +0200, Christoph Borsbach wrote:
> Hello everyone,
> first off: I know that the topic of "load" has been discussed numerous
> times, and been a topic on undeadly [1]. I know that this number is not that
> important.
> 
> However:
> After upgrading 3 of my systems to 6.1 (from 6.0) I noticed the load average
> (15min value) has gone up by roughly 1.0, both in the output of daily(8)
> over some days now and when checking manually with w, top, or uptime.

Yes. If I understand correctly, this is because the kernel threads are
now counted towards the load average.  An OpenBSD system at rest should
now have a load average of about 1.
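If kernel threads are indeed counted now, the +1.0 shift falls straight out of the recurrence: adding one permanently runnable thread to the sampled count raises the settled average by exactly one. A minimal Python sketch (floats instead of the kernel's fixed-point; the workload numbers are hypothetical):

```python
import math

# 15-minute window, 5-second samples, as in the traditional BSD scheme;
# the workload numbers below are made up for illustration.
INTERVAL, PERIOD = 5, 900
CEXP = math.exp(-INTERVAL / PERIOD)

def settle(nrun, seconds=6 * 3600, load=0.0):
    """Iterate the load-average recurrence until it has settled."""
    for _ in range(seconds // INTERVAL):
        load = load * CEXP + nrun * (1.0 - CEXP)
    return load

# One runnable user process, before and after an always-runnable kernel
# thread starts being counted in nrun:
before = settle(nrun=1)   # hypothetical 6.0-style accounting
after = settle(nrun=2)    # 6.1-style: kernel thread adds one to nrun
print(round(after - before, 2))   # -> 1.0
```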

Regards,
Kusalananda


Load average changed in 6.1?

2017-04-24 Thread Christoph Borsbach

Hello everyone,
first off: I know that the topic of "load" has been discussed numerous 
times, and been a topic on undeadly [1]. I know that this number is not 
that important.


However:
After upgrading 3 of my systems to 6.1 (from 6.0) I noticed the load 
average (15min value) has gone up by roughly 1.0, both in the output of 
daily(8) over some days now and when checking manually with w, top, or 
uptime.

The systems in question differ a bit:
- amd64 MP (KVM-Guest, dmesg [2], load-example [3])
- amd64 SP (VMware Guest, dmesg and examples not handy right now)
- i386 SP (Alix, dmesg [4], load examples [5])

All were upgraded last week with bsd.rd to 6.1-RELEASE. The systems 
perform as well as ever and nothing was changed aside from upgrading 
the system and packages. I'm just interested in what could have changed 
the behavior. A quick check of src/sys/uvm/uvm_meter.c does not show 
any recent changes.


Has anybody observed this as well and has an explanation for this?

Thanks,
Christoph



[1]
http://undeadly.org/cgi?action=article&sid=20090715034920&mode=flat

[2]
OpenBSD 6.1 (GENERIC.MP) #20: Sat Apr  1 13:45:56 MDT 2017
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 6425526272 (6127MB)
avail mem = 6226112512 (5937MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xf69e0 (11 entries)
bios0: vendor SeaBIOS version "1.9.3-20161116_142049-atsina" date 04/01/2014
bios0: QEMU Standard PC (i440FX + PIIX, 1996)
acpi0 at bios0: rev 0
acpi0: sleep states S3 S4 S5
acpi0: tables DSDT FACP APIC HPET
acpi0: wakeup devices
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Westmere E56xx/L56xx/X56xx (Nehalem-C), 2494.15 MHz
cpu0: FPU,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,PCLMUL,SSSE3,CX16,SSE4.1,SSE4.2,x2APIC,POPCNT,AES,HV,NXE,LONG,LAHF
cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 16-way L2 cache
cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu0: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 10 var ranges, 88 fixed ranges
cpu0: apic clock running at 1000MHz
cpu1 at mainbus0: apid 1 (application processor)
cpu1: Westmere E56xx/L56xx/X56xx (Nehalem-C), 2493.77 MHz
cpu1: FPU,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,PCLMUL,SSSE3,CX16,SSE4.1,SSE4.2,x2APIC,POPCNT,AES,HV,NXE,LONG,LAHF
cpu1: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 16-way L2 cache
cpu1: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu1: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu1: smt 0, core 0, package 1
ioapic0 at mainbus0: apid 0 pa 0xfec0, version 11, 24 pins
acpihpet0 at acpi0: 1 Hz
acpiprt0 at acpi0: bus 0 (PCI0)
acpicpu0 at acpi0: C1(@1 halt!)
acpicpu1 at acpi0: C1(@1 halt!)
"ACPI0006" at acpi0 not configured
"PNP0303" at acpi0 not configured
"PNP0F13" at acpi0 not configured
"PNP0700" at acpi0 not configured
"PNP0501" at acpi0 not configured
"PNP0A06" at acpi0 not configured
"PNP0A06" at acpi0 not configured
"PNP0A06" at acpi0 not configured
"QEMU0002" at acpi0 not configured
pvbus0 at mainbus0: KVM
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 "Intel 82441FX" rev 0x02
pcib0 at pci0 dev 1 function 0 "Intel 82371SB ISA" rev 0x00
pciide0 at pci0 dev 1 function 1 "Intel 82371SB IDE" rev 0x00: DMA, channel 0 wired to compatibility, channel 1 wired to compatibility
pciide0: channel 0 disabled (no drives)
atapiscsi0 at pciide0 channel 1 drive 0
scsibus1 at atapiscsi0: 2 targets
cd0 at scsibus1 targ 0 lun 0:  ATAPI 5/cdrom removable
cd0(pciide0:1:0): using PIO mode 4, DMA mode 2
uhci0 at pci0 dev 1 function 2 "Intel 82371SB USB" rev 0x01: apic 0 int 11
piixpm0 at pci0 dev 1 function 3 "Intel 82371AB Power" rev 0x03: apic 0 int 9
iic0 at piixpm0
vga1 at pci0 dev 2 function 0 "Cirrus Logic CL-GD5446" rev 0x00
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
em0 at pci0 dev 3 function 0 "Intel 82540EM" rev 0x03: apic 0 int 11, address XX:XX:XX:XX:XX:XX
virtio0 at pci0 dev 4 function 0 "Qumranet Virtio Storage" rev 0x00
vioblk0 at virtio0
scsibus2 at vioblk0: 2 targets
sd0 at scsibus2 targ 0 lun 0:  SCSI3 0/direct fixed
sd0: 119808MB, 512 bytes/sector, 245366784 sectors
virtio0: msix shared
virtio1 at pci0 dev 5 function 0 "Qumranet Virtio Memory" rev 0x00
viomb0 at virtio1
virtio1: apic 0 int 10
isa0 at pcib0
isadma0 at isa0
fdc0 at isa0 port 0x3f0/6 irq 6 drq 2
fd0 at fdc0 drive 1: density unknown
com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
pckbc0 at isa0 port 0x60/5 irq 1 irq 12
pckbd0 at pckbc0 (kbd slot)