Re: ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-29 Thread Matthias Petermann

Hi,

On 30.06.23 07:07, Brian Buhrow wrote:

hello.  Yes, this behavior is expected.  It ensures that there is no conflict between the device on the domu end of the vif port and the device on the dom0 end.  This is more sane behavior than FreeBSD, which zeros out the MAC address on the dom0 side of the vif.

-thanks
-Brian



thanks for the clarification. Good to know this is not a bug or anything like that.

Overall the topic now seems a lot clearer, and with the static ARP 
entries in place I can finally return to having daily backups of my VMs :-)


Concerning the root cause, I assume this requires further investigation. 
To make this independent of the system where it originally occurred, I 
plan to create a minimal setup to reproduce it on a similarly sized 
system and will help as much as I can.


Kind regards
Matthias




Re: ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-29 Thread Matthias Petermann

Hello,

On 29.06.23 11:58, Matthias Petermann wrote:
While I do not want to praise the evening before the day, you deserve 
some feedback. Both the synthetic test with ssh/dd and my real payload 
with ssh/dump have been running for a good 6 hours without interruption 
this morning. I took the advice and first added static ARP entries for 
each other on the two partners directly involved (the Dom0 and the DomU 
concerned). I will continue to monitor this, but it looks much better 
now than in the days before.


In case this proves to be a reproducible solution, my next question would 
be how this could be persisted (apart from hard-coding the arp -d -a / 
-s calls into rc.local etc.). The earlier proposal you sent me 
(net.inet.icmp.bmcastecho=1 and ping -nc10) did not create ARP entries 
with no expiration time on my NetBSD 10.0_BETA system. You mentioned 
this might be a feature of -HEAD - not sure about 10...


I also wanted to mention - and I don't know whether this contributes - that 
mDNSd is enabled on all involved hosts. I had originally planned this so 
that the hosts can also find each other via the .local suffix if the 
local domain .lan cannot be resolved - for example, if the DNS server is 
down.


Kind regards
Matthias


With the assignment of permanent ARP entries, everything worked stably 
for the whole day yesterday. It does seem to come down to the ARP entries. 
I've done some work on how to make this persistent, at least as a 
workaround, and found /etc/ethers in combination with /usr/sbin/arp -f 
/etc/ethers to be suitable.
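
For reference, a minimal sketch of that workaround; the MAC/host pairs 
are the ones from this thread, and the field order (MAC address first, 
then hostname) is the one documented in ethers(5), so check it against 
your man pages:

```
# /etc/ethers - one "MAC-address hostname" pair per line, see ethers(5);
# example pairs from this thread, adjust to your own hosts
00:16:3e:00:00:01 srv-net.lan
88:ae:dd:02:a4:03 vhost2.lan

# load them as static ARP entries, e.g. from /etc/rc.local:
/usr/sbin/arp -f /etc/ethers
```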


Anyway, while applying this change and doing further testing, something 
weird came to my attention. Is this expected?


Please see the MAC address configured in the DomU config file (on Dom0):

```
ame="srv-net"
type="pv"
kernel="/netbsd-XEN3_DOMU.gz"
memory=512
vcpus=2
vif = ['mac=00:16:3E:00:00:01,bridge=bridge0,ip=192.168.2.51' ]
disk = [

'file:/data/vhd/srv-net_root.img,0x01,rw','file:/data/vhd/srv-net_data1.img,0x02,rw','file:/data/vhd/srv-net_data2.img,0x03,rw','file:/data/vhd/srv-net_data3.img,0x04,rw',
]
```

In the DomU, this configured MAC address matches the MAC of the virtual 
network interface:


```
srv-net$ ifconfig xennet0
xennet0: flags=0x8843 mtu 1500
capabilities=0x3fc00
capabilities=0x3fc00
enabled=0
ec_capabilities=0x5
ec_enabled=0
address: 00:16:3e:00:00:01
inet6 fe80::216:3eff:fe00:1%xennet0/64 flags 0 scopeid 0x1
inet 192.168.2.51/24 broadcast 192.168.2.255 flags 0
```

In contrast, on the Dom0 the related Xen backend network 
interface has a slightly different MAC:


```
xvif1i0: flags=0x8943 mtu 1500

capabilities=0x3fc00
capabilities=0x3fc00
enabled=0
ec_capabilities=0x5
ec_enabled=0x1
address: 00:16:3e:01:00:01
inet6 fe80::216:3eff:fe01:1%xvif1i0/64 flags 0 scopeid 0x4
```

It differs in the 4th octet, and I am wondering whether this is intended.

Kind regards
Matthias




Re: ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-29 Thread Matthias Petermann

Hi Brian,

On 26.06.23 16:17, Brian Buhrow wrote:

hello.  A couple of quick questions based on the conversation and the snippets of logs shown in the e-mails.

1.  Is the MAC address shown in the ARP replies the correct one for the dom0?  No reason it should be wrong, but it's worth verifying, just in case there is an unknown host replying on the network.


The addresses match - I just verified this. Anyway, thanks for the pointer.



2. Can you capture the same tcpdumps using the -e flag?  The -e flag will print the source and destination MAC addresses, as well as the source and destination IP addresses or host names, depending on whether you use the -n flag.  This might provide additional insight into what's happening on the network.


Since I added the static ARP records, the problem has not occurred 
again. I stopped tcpdump for now to save space, but I will use the 
-e flag next time.
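
For next time, a sketch of such a capture; the interface name and the 
filter are assumptions based on the setup described in this thread:

```
# -e prints link-level (MAC) headers, -n skips name resolution;
# restrict output to ARP plus the ssh session to keep the log small
tcpdump -e -n -i xennet0 'arp or tcp port 22'
```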


Kind regards
Matthias





Re: ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-29 Thread Matthias Petermann

Hi,

On 26.06.23 15:37, RVP wrote:

On Mon, 26 Jun 2023, Matthias Petermann wrote:

Could it still be an ARP related issue? I did a simplified version of 
the test this morning:




Try this test: since you have static IP- & MAC-addresses everywhere in
your setup, just add them as static ARP entries (skip your own address).

On each of your DomUs and the Dom0:

```
arp -d -a                  # delete ARP-cache
arp -s IP-addr1 MAC-addr1
arp -s IP-addr2 MAC-addr2
```

etc.

On the Dom0, add the addrs. of the DomUs. On each of the DomUs, the addrs.
of Dom0 and _other_ DomUs.

Do your tests.

-RVP


While I do not want to praise the evening before the day, you deserve 
some feedback. Both the synthetic test with ssh/dd and my real payload 
with ssh/dump have been running for a good 6 hours without interruption 
this morning. I took the advice and first added static ARP entries for 
each other on the two partners directly involved (the Dom0 and the DomU 
concerned). I will continue to monitor this, but it looks much better 
now than in the days before.


In case this proves to be a reproducible solution, my next question would 
be how this could be persisted (apart from hard-coding the arp -d -a / 
-s calls into rc.local etc.). The earlier proposal you sent me 
(net.inet.icmp.bmcastecho=1 and ping -nc10) did not create ARP entries 
with no expiration time on my NetBSD 10.0_BETA system. You mentioned 
this might be a feature of -HEAD - not sure about 10...


I also wanted to mention - and I don't know whether this contributes - that 
mDNSd is enabled on all involved hosts. I had originally planned this so 
that the hosts can also find each other via the .local suffix if the 
local domain .lan cannot be resolved - for example, if the DNS server is 
down.


Kind regards
Matthias




Re: ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-26 Thread Matthias Petermann

Hi,

On 26.06.23 10:41, RVP wrote:

On Sun, 25 Jun 2023, Matthias Petermann wrote:


Somewhere between 2) and 3) there should be the answer to the question.


```
08:52:07.595831 ARP, Request who-has vhost2.lan tell srv-net.lan, length 28
08:52:07.595904 ARP, Reply vhost2.lan is-at 88:ae:dd:02:a4:03 (oui Unknown), length 28
08:52:07.595919 ARP, Reply vhost2.lan is-at 88:ae:dd:02:a4:03 (oui Unknown), length 28
08:52:07.595921 ARP, Reply vhost2.lan is-at 88:ae:dd:02:a4:03 (oui Unknown), length 28
08:52:07.595921 ARP, Reply vhost2.lan is-at 88:ae:dd:02:a4:03 (oui Unknown), length 28
08:52:07.595926 ARP, Reply vhost2.lan is-at 88:ae:dd:02:a4:03 (oui Unknown), length 28

[...]
08:52:07.627118 IP srv-net.lan.ssh > vhost2.lan.54243: Flags [R], seq 3177171235, win 0, length 0

```

Well, this doesn't look like an ARP timeout issue. The DomU does the 
ARP query and gets back an answer from the Dom0 right away. In fact 
the Dom0 sends multiple replies to the query (I don't know what that 
means nor if it's relevant to your issue...); then sshd on the DomU 
gets an EHOSTDOWN and exits, and the kernel sends a reset TCP packet 
in response to more data coming to that socket.



Could it still be an ARP-related issue? I did a simplified version of 
the test this morning:


```
ssh user@srv-net /bin/dd if=/dev/zero > test.img
```

while running tcpdump in the DomU. Exactly at the time when I got the 
"Connection to srv-net closed by remote host." on the client side, 
tcpdump showed a pattern very similar to the tcpdump from yesterday:


```
14:02:39.132635 IP srv-net.lan.ssh > vhost2.lan.56867: Flags [P.], seq 1107922413:1107922961, ack 2414700, win 4197, options [nop,nop,TS val 7788 ecr 7786], length 548
14:02:39.132678 IP vhost2.lan.56867 > srv-net.lan.ssh: Flags [.], ack 1107922961, win 24609, options [nop,nop,TS val 7786 ecr 7788], length 0
14:02:39.132758 IP srv-net.lan.ssh > vhost2.lan.56867: Flags [P.], seq 1107922961:1107923509, ack 2414700, win 4197, options [nop,nop,TS val 7788 ecr 7786], length 548
14:02:39.132823 ARP, Request who-has vhost2.lan tell srv-net.lan, length 28
14:02:39.133234 ARP, Reply vhost2.lan is-at 88:ae:dd:02:a4:03 (oui Unknown), length 28
14:02:39.133237 ARP, Reply vhost2.lan is-at 88:ae:dd:02:a4:03 (oui Unknown), length 28
14:02:39.133238 ARP, Reply vhost2.lan is-at 88:ae:dd:02:a4:03 (oui Unknown), length 28
14:02:39.133239 ARP, Reply vhost2.lan is-at 88:ae:dd:02:a4:03 (oui Unknown), length 28
14:02:39.133240 ARP, Reply vhost2.lan is-at 88:ae:dd:02:a4:03 (oui Unknown), length 28
14:02:39.133241 ARP, Reply vhost2.lan is-at 88:ae:dd:02:a4:03 (oui Unknown), length 28
14:02:39.133251 IP srv-net.lan.ssh > vhost2.lan.56867: Flags [P.], seq 1107923509:1107924057, ack 2414700, win 4197, options [nop,nop,TS val 7788 ecr 7786], length 548
14:02:39.133289 IP vhost2.lan.56867 > srv-net.lan.ssh: Flags [.], ack 1107924057, win 24609, options [nop,nop,TS val 7786 ecr 7788], length 0
14:02:39.137375 IP srv-net.lan.ssh > vhost2.lan.56867: Flags [F.], seq 1107924057, ack 2414700, win 4197, options [nop,nop,TS val 7788 ecr 7786], length 0
14:02:39.137437 IP vhost2.lan.56867 > srv-net.lan.ssh: Flags [.], ack 1107924058, win 24677, options [nop,nop,TS val 7786 ecr 7788], length 0
14:02:39.137568 IP vhost2.lan.56867 > srv-net.lan.ssh: Flags [P.], seq 2414700:2414760, ack 1107924058, win 24677, options [nop,nop,TS val 7786 ecr 7788], length 60
14:02:39.137588 IP srv-net.lan.ssh > vhost2.lan.56867: Flags [R], seq 645276183, win 0, length 0

```

> I may have to replicate your setup to dig into this. Maybe this weekend.
> Send instructions on how to set-up Xen. In the meantime, can you:
>
> 1. post the output of `ifconfig' on all your DomUs

```
❯ for i in srv-net srv-iot srv-mail srv-app srv-extra;do echo "--\n-- ifconfig of DomU $i\n--"; ssh user@$i /sbin/ifconfig -a;done

--
-- ifconfig of DomU srv-net
--
xennet0: flags=0x8843 mtu 1500
capabilities=0x3fc00
capabilities=0x3fc00
enabled=0
ec_capabilities=0x5
ec_enabled=0
address: 00:16:3e:00:00:01
inet6 fe80::216:3eff:fe00:1%xennet0/64 flags 0 scopeid 0x1
inet 192.168.2.51/24 broadcast 192.168.2.255 flags 0
lo0: flags=0x8049 mtu 33624
status: active
inet6 ::1/128 flags 0x20
inet6 fe80::1%lo0/64 flags 0 scopeid 0x2
inet 127.0.0.1/8 flags 0
--
-- ifconfig of DomU srv-iot
--
xennet0: flags=0x8843 mtu 1500
capabilities=0x3fc00
capabilities=0x3fc00
enabled=0
ec_capabilities=0x5
ec_enabled=0
address: 00:16:3e:00:00:02
inet6 fe80::216:3eff:fe00:2%xennet0/64 flags 0 scopeid 0x1
inet 192.168.2.52/24 broadcast 192.168.2.255 flags 0
lo0: flags=0x8049 mtu 33624
status: active
inet6 ::1/128 flags 0x20
inet6 fe80::1%lo0/64 flags 0 scopeid 0x2
in

Re: ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-25 Thread Matthias Petermann

Hello,

On 25.06.23 07:49, Matthias Petermann wrote:


4) Run the test with tcpdump from DomU -> this is currently ongoing. I 
will followup as soon I have the results.


This is the follow-up I promised. I was lucky this morning to catch one 
occurrence of the issue while tcpdump was running in the DomU. Because of 
the huge volume, I just captured the metadata (default output of 
tcpdump to stdout), and even then the resulting log quickly grew to 5 
GB. So I cut it down to the relevant time window and uploaded it here:


https://paste.petermann-it.de/?2ea9787bbff024f4#71N5aXYoQTdDq3tXVxBXjfmDAuw9Wdof3Dkyim99xcYG

For better classification, here is the rough timeline of the events I'd 
like to comment on below:


1) 08:52:07.595169

Begin active monitoring
Continuous ssh packet flow from srv-net.lan (DomU) -> vhost2.lan (Dom0)

2) 08:52:07.595831

Noticed a lot of ARP-related traffic
ssh packet flow seems to be slowed down / paused
Client (ssh) reported "Connection to srv-net.lan closed by remote host."

3) 08:52:21.xx

Client (backup script) reported that it created a new ssh connection to
the remote host and started the next dump.


Somewhere between 2) and 3) there should be the answer to the question. 
Please excuse the noise in the log file; this host is quite busy, and I 
fear that removing lines that I consider unrelated might result in 
unintentional misdirection of the analysis.


So far, thanks for all your time and valuable support - it helps 
a lot to understand the system even better.


Kind regards
Matthias





Re: ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-24 Thread Matthias Petermann

Hello,

On 25.06.23 03:48, RVP wrote:

On Sat, 24 Jun 2023, Brian Buhrow wrote:

In any case, the fact that you're getting regular delays on your pings suggests there is a delay between the time when the arp cache times out and when it gets refreshed.




This would be determined by `net.inet.arp.nd_delay' I think (on
-HEAD).

As a consequence of that delay, if you have a high-speed stream running when the cache times out, it's possible the send buffer of the sending process, i.e. sshd, is filling up before the cache gets refreshed and the packets can flow again.



In this case, the kernel would either block the sshd process or
return EAGAIN--which is handled. The kernel should only return an
EHOSTDOWN if `net.inet.arp.nd_bmaxtries' * `net.inet.arp.nd_retrans'
(i.e. 3 * 1000ms) has passed without getting an ARP response. Even
on a LAN, this is pretty unlikely (even with that peculiarly short
30-second ARP-address cache timeout). Smells like a Xen+load+timing
issue (not hand-wavy at all there, RVP!). It would be interesting
to see the tcpdump capture from the DomU.

-RVP


Over the last day I did some further tests and tried out all the hints I 
got in this thread. Here is a short summary:


1) Ran a ping overnight from DomU to Dom0 -> no dropouts

2) Increased the ARP cache timeout (net.inet.arp.nd_reachable=120)
   on both Dom0 and DomU -> this seemed to have an effect at first,
   but the problem still exists (it's not a measured fact but a feeling
   that it now happens a bit less often and later); see the sketch
   after this list for persisting the setting

3) Checked send/receive buffer configuration

```
srv-net$ sysctl net.inet.tcp.sendbuf_auto
net.inet.tcp.sendbuf_auto = 1
srv-net$ sysctl net.inet.tcp.recvbuf_auto
net.inet.tcp.recvbuf_auto = 1
srv-net$ sysctl net.inet.tcp.sendbuf_max
net.inet.tcp.sendbuf_max = 262144
srv-net$ sysctl net.inet.tcp.recvbuf_max
net.inet.tcp.recvbuf_max = 262144
```

These samples are from DomU, but Dom0 has an identical configuration.

4) Ran the test with tcpdump from the DomU -> this is currently ongoing. I 
will follow up as soon as I have the results.
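
As promised above, a minimal sketch for persisting the timeout change 
from item 2 across reboots, assuming the stock /etc/sysctl.conf 
mechanism:

```
# /etc/sysctl.conf - applied at boot by rc.d/sysctl
net.inet.arp.nd_reachable=120
```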



Kind regards
Matthias




Re: ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-23 Thread Matthias Petermann

Hello,

On 24.06.23 01:37, RVP wrote:

On Fri, 23 Jun 2023, Brian Buhrow wrote:

hello.  My understanding is that the arp caching mechanism works regardless of whether you use static MAC addresses or dynamically generated ones.
[...]
If you then run brconfig on the bridge containing the domu, you'll see the MAC address you assigned, or which was assigned dynamically, alive and well.



Right, but caching implies a timeout, and is there a timeout for the MAC
addresses on Xen IFs? Does an `arp -an' indicate this? (I can't test this--
no Xen set up.)


On my Dom0, it looks like there is a timeout for the MAC addresses. The 
lines below are random but consecutive samples of the "arp -an" command 
on the Dom0 (192.168.2.50) within a timespan of ~5 minutes. What caught 
my eye so far:

 - there seem to be expirations that resolve / renew (*1)
 - there are very long timeouts (23h+) that shortly afterwards seem to be 
reset to a shorter value (*2)


So I am wondering what the expectation should be. Are the MAC address 
timeouts supposed to be long-lived (hours...) or are they usually 
short-lived (seconds)? Does the output below indicate some issue?


Kind regards
Matthias


```
vhost2$ doas arp -an
? (192.168.2.254) at e0:28:6d:25:44:6c on re0 23h59m52s S(*2)
? (192.168.2.191) at 98:ee:cb:f0:3c:b8 on re0 20s R
? (192.168.2.51) at 00:16:3e:00:00:01 on re0 2s R
? (192.168.2.54) at 00:16:3e:00:00:04 on re0 2s R
? (192.168.2.55) at 00:16:3e:00:00:05 on re0 1s R
? (192.168.2.52) at 00:16:3e:00:00:02 on re0 8s R
? (192.168.2.53) at 00:16:3e:00:00:03 on re0 13s R
vhost2$ doas arp -an
? (192.168.2.254) at e0:28:6d:25:44:6c on re0 23h59m52s S(*2)
? (192.168.2.191) at 98:ee:cb:f0:3c:b8 on re0 20s R
? (192.168.2.51) at 00:16:3e:00:00:01 on re0 2s R
? (192.168.2.54) at 00:16:3e:00:00:04 on re0 2s R
? (192.168.2.55) at 00:16:3e:00:00:05 on re0 1s R
? (192.168.2.52) at 00:16:3e:00:00:02 on re0 8s R
? (192.168.2.53) at 00:16:3e:00:00:03 on re0 13s R
vhost2$ doas arp -an
? (192.168.2.254) at e0:28:6d:25:44:6c on re0 23h59m51s S
? (192.168.2.191) at 98:ee:cb:f0:3c:b8 on re0 19s R
? (192.168.2.51) at 00:16:3e:00:00:01 on re0 1s R
? (192.168.2.54) at 00:16:3e:00:00:04 on re0 1s R
? (192.168.2.55) at 00:16:3e:00:00:05 on re0 expired R   (*1)
? (192.168.2.52) at 00:16:3e:00:00:02 on re0 7s R
? (192.168.2.53) at 00:16:3e:00:00:03 on re0 12s R
vhost2$ doas arp -an
? (192.168.2.254) at e0:28:6d:25:44:6c on re0 16s R
? (192.168.2.191) at 98:ee:cb:f0:3c:b8 on re0 29s R
? (192.168.2.51) at 00:16:3e:00:00:01 on re0 26s R
? (192.168.2.54) at 00:16:3e:00:00:04 on re0 26s R
? (192.168.2.55) at 00:16:3e:00:00:05 on re0 25s R
? (192.168.2.52) at 00:16:3e:00:00:02 on re0 2s D
? (192.168.2.53) at 00:16:3e:00:00:03 on re0 23h59m52s S (*2)
vhost2$ doas arp -an
? (192.168.2.254) at e0:28:6d:25:44:6c on re0 10s R
? (192.168.2.191) at 98:ee:cb:f0:3c:b8 on re0 23s R
? (192.168.2.51) at 00:16:3e:00:00:01 on re0 20s R
? (192.168.2.54) at 00:16:3e:00:00:04 on re0 20s R
? (192.168.2.55) at 00:16:3e:00:00:05 on re0 19s R
? (192.168.2.52) at 00:16:3e:00:00:02 on re0 26s R
? (192.168.2.53) at 00:16:3e:00:00:03 on re0 1s D
vhost2$ doas arp -an
? (192.168.2.254) at e0:28:6d:25:44:6c on re0 29s R
? (192.168.2.191) at 98:ee:cb:f0:3c:b8 on re0 10s R
? (192.168.2.51) at 00:16:3e:00:00:01 on re0 23h59m52s S(*2)
? (192.168.2.54) at 00:16:3e:00:00:04 on re0 23h59m52s S(*2)
? (192.168.2.55) at 00:16:3e:00:00:05 on re0 23h59m51s S(*2)
? (192.168.2.52) at 00:16:3e:00:00:02 on re0 23h59m58s S(*2)
? (192.168.2.53) at 00:16:3e:00:00:03 on re0 3s R
vhost2$ doas arp -an
? (192.168.2.254) at e0:28:6d:25:44:6c on re0 25s R
? (192.168.2.191) at 98:ee:cb:f0:3c:b8 on re0 6s R
? (192.168.2.51) at 00:16:3e:00:00:01 on re0 3s D
? (192.168.2.54) at 00:16:3e:00:00:04 on re0 3s D
? (192.168.2.55) at 00:16:3e:00:00:05 on re0 2s D
? (192.168.2.52) at 00:16:3e:00:00:02 on re0 23h59m54s S(*2)
? (192.168.2.53) at 00:16:3e:00:00:03 on re0 23h59m59s S(*2)
vhost2$ doas arp -an
? (192.168.2.254) at e0:28:6d:25:44:6c on re0 23s R
? (192.168.2.191) at 98:ee:cb:f0:3c:b8 on re0 4s R
? (192.168.2.51) at 00:16:3e:00:00:01 on re0 1s D
? (192.168.2.54) at 00:16:3e:00:00:04 on re0 1s D
? (192.168.2.55) at 00:16:3e:00:00:05 on re0 30s R
? (192.168.2.52) at 00:16:3e:00:00:02 on re0 23h59m52s S(*2)
? (192.168.2.53) at 00:16:3e:00:00:03 on re0 23h59m57s S(*2)

```




Re: ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-23 Thread Matthias Petermann

Hello Manuel,

On 23.06.23 16:17, Manuel Bouyer wrote:


I'm not sure it's Xen-specific, there have been changes in the network stack
between -9 and -10 affecting the way ARP and duplicate addresses are managed.



Thanks for your attention. I remember you are one of the Xen gurus RVP 
recommended me to call :-) At the moment I am also far from sure this is 
a Xen issue, but I try to follow up on each suspicion I become aware of. 
Do you have more details on the network stack changes (maybe a link to 
the change log or to files I should take a look at)?




```
name="srv-net"
type="pv"
kernel="/netbsd-XEN3_DOMU.gz"
memory=512
vcpus=2
vif = ['mac=00:16:3E:00:00:01,bridge=bridge0,ip=192.168.2.51' ]


the ip= part is not used by NetBSD.
A fixed mac address shouldn't make a difference, it's the xl tool which
generates one if needed and the domU doesn't know if it's fixed or
auto-generated.


Thanks for the clarification... so it's a one-time setup thing only.

By the way, I forgot to mention the ip= - I agree this is not part of the 
official NetBSD configuration :-) I actually use it as a way to assign 
IP addresses to my DomUs from the Dom0 config. It's part of the custom 
image builder[1] I created to standardize my setups. Xen on NetBSD 10 
feels (and measures...) so much more performant - I hope there is some 
way to address this network issue. For the weekend I plan to reproduce 
the exact same setup on higher-quality hardware to find out if there is 
potentially some hardware-related factor. At the moment I run it on a 
low-cost NUC7CJYB with a Celeron J4025 CPU and a Realtek NIC. I got some 
reports that especially the NIC could qualify as the source of the 
trouble (although I still don't understand whether this is relevant for 
the bridge between Dom0 and the DomUs).


Kind regards
Matthias


[1] 
https://forge.petermann-it.de/mpeterma/vmtools/src/commit/95d55f184b9fd1d74c931abfc7d44c58f00c0c32/lib/install.sh#L81




Re: ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-23 Thread Matthias Petermann

Hi,

On 23.06.23 02:45, RVP wrote:

So, the server tries to write data into the socket; write() fails with
errno = EHOSTDOWN which sshd(8) treats as a fatal error and it exits.
The client tries to read/write to a closed connection, and it too quits.

The part which doesn't make sense is the EHOSTDOWN error. Clearly the
other end isn't down. Can't say I understand what's happening here. You
need a Xen guru now, Matthias :)


I will still try the tips from yesterday (long-running ping test) and 
collect some more data. And yes - I think only someone with a strong Xen 
background can really help me :-) I will follow up as soon as I complete 
my current tests.




On Thu, 22 Jun 2023, Brian Buhrow wrote:

hello.  Actually, on the server side, where you get the "host is down" message, that is a system error from the network stack itself.  I've seen it when the arp cache times out and can't be refreshed in a timely manner.



But, does ARP make any sense for Xen IFs? I thought MAC addresses were
ginned up for Xen IFs...


At the moment, I manually set the MAC addresses for all DomUs in the 
domain configuration file (in the network interface specification), example:


```
name="srv-net"
type="pv"
kernel="/netbsd-XEN3_DOMU.gz"
memory=512
vcpus=2
vif = ['mac=00:16:3E:00:00:01,bridge=bridge0,ip=192.168.2.51' ]
disk = [

'file:/data/vhd/srv-net_root.img,0x01,rw','file:/data/vhd/srv-net_data1.img,0x02,rw','file:/data/vhd/srv-net_data2.img,0x03,rw','file:/data/vhd/srv-net_data3.img,0x04,rw',
]
```

I have made sure that there are no duplicate MAC addresses in my 
network. The reason I decided to set them manually was to avoid 
accidental duplicates when operating multiple Xen hosts on the same 
network (my understanding is that in case the mac= parameter is left 
off, the Xen tooling picks a MAC address from the 00:16:3E... range).


Actually I don't believe it would make a difference - but should I try 
to avoid the manual specification of the MAC address here for a test?


Kind regards
Matthias




Re: FWD: Re: Mounting NetBSD partition on voidlinux

2023-06-22 Thread Matthias Petermann

Hi,

On 22.06.23 20:33, Sagar Acharya wrote:

I have found the hex addresses of my files and dirs. Is there a program with 
which I can recursively extract them from raw hex?
Thanking you
Sagar Acharya
https://humaaraartha.in



In such cases in the past I had some success with foremost[1]. It 
detects files by their headers and data structures and tries to 
reconstruct as many of them as it finds. It has the drawback that it is 
not able to reconstruct the file names, but it worked reliably for a
damaged photo collection back in the day. I was able to reindex the 
files by reading the EXIF headers and rename the files based on them.
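
As a hedged usage sketch (the image path and file type are placeholders; 
foremost itself is in pkgsrc, see [1] below):

```
# scan a raw image for known file signatures (JPEG here) and
# write whatever can be reconstructed into an output directory
foremost -t jpg -i /tmp/disk.img -o /tmp/recovered
```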


I think someone mentioned a more modern alternative to foremost, but I 
don't remember it at the moment.


Kind regards
Matthias

[1] https://pkgsrc.se/sysutils/foremost





Re: ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-22 Thread Matthias Petermann

Hi,

On 22.06.23 08:36, RVP wrote:

Can you see any errors from sshd(8) in the logs on the DomU?
If not, run the sshd server standalone like this:

```
/usr/sbin/sshd -Dddd -E/tmp/s.log
```

then post the `s.log' file after you run something like:

```
$ ssh -E/tmp/c.log -vvv XXX.NET 'dd if=/dev/zero bs=1m count=1 msgfmt=quiet' >/dev/null
```

on the Dom0.



Thanks for all the good points and suggestions. I picked up the 
last one and temporarily ran sshd on the DomU in debug mode while 
repeating the exact same test with ssh and dump.

Here are the logs:

 - Dom0: 
https://paste.petermann-it.de/?0c56870be0c9e8b1#9bCJT4C2hTBMUWzgDu84o6ipccjC7kNQzdofQtCaLUyz


 - DomU: 
https://paste.petermann-it.de/?8185479d13b60bbd#98azwgVmfsz5aQ8dPswwCD2yn2mTnM9kstVLvNv8JUKy


During this test, the error count on the affected interfaces did not 
increase further:


```
netbsd_netif_rx_bytes{interface="re0"} 64759716902
netbsd_netif_tx_bytes{interface="re0"} 1017162986389
netbsd_netif_errors{interface="re0"} 0
netbsd_netif_rx_bytes{interface="lo0"} 99500058
netbsd_netif_tx_bytes{interface="lo0"} 99500058
netbsd_netif_errors{interface="lo0"} 0
netbsd_netif_rx_bytes{interface="bridge0"} 1047528461564
netbsd_netif_tx_bytes{interface="bridge0"} 1049530231106
netbsd_netif_errors{interface="bridge0"} 0
netbsd_netif_rx_bytes{interface="xvif1i0"} 33842998
netbsd_netif_tx_bytes{interface="xvif1i0"} 31012168535
netbsd_netif_errors{interface="xvif1i0"} 26
```

The relevant part from the client log...

```
debug2: channel 0: window 1900544 sent adjust 196608
debug2: tcpwinsz: 197420 for connection: 3
debug2: channel 0: window 1966080 sent adjust 131072
debug2: tcpwinsz: 197420 for connection: 3
debug2: channel 0: window 1966080 sent adjust 131072
debug2: tcpwinsz: 197420 for connection: 3
debug2: channel 0: window 1966080 sent adjust 131072
debug3: send packet: type 1
debug1: channel 0: free: client-session, nchannels 1
debug3: channel 0: status: The following connections are open:
  #0 client-session (t4 r0 i0/0 o0/0 e[write]/0 fd 6/7/8 sock -1 cc -1 io 0x01/0x00)
Connection to srv-net closed by remote host.
Transferred: sent 1086580, received 7869273016 bytes, in 393.9 seconds
Bytes per second: sent 2758.8, received 19980159.5
debug1: Exit status -1
```

...and from the server log...

```
debug2: channel 0: rcvd adjust 131072
debug2: channel 0: rcvd adjust 131072
debug2: channel 0: rcvd adjust 196608
debug2: channel 0: rcvd adjust 131072
debug2: channel 0: rcvd adjust 131072
debug2: channel 0: rcvd adjust 131072
process_output: ssh_packet_write_poll: Connection from user user 192.168.2.50 port 60196: Host is down
debug1: do_cleanup
debug3: PAM: sshpam_thread_cleanup entering
debug3: mm_request_receive: entering
debug1: do_cleanup
debug1: PAM: cleanup
debug1: PAM: closing session
debug1: PAM: deleting credentials
debug3: PAM: sshpam_thread_cleanup entering
```

...appears a bit like both parties blaming each other... the client says 
the remote host closed the connection, and the remote host complains 
about the client being down.



Kind regards
Matthias




Re: ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-21 Thread Matthias Petermann

On 22.06.23 07:52, Matthias Petermann wrote:

On 21.06.23 19:54, Matthias Petermann wrote:
2>log.txt ssh user@srv-net -vvv doas /sbin/dump -X -h 0 -b 64 -0auf - /data/119455aa-6ef8-49e0-b71a-9c87e84014cb > /mnt/test.dump


...just noticed another variation; this time a client_loop: send 
disconnect occurred:


https://paste.petermann-it.de/?880b721698a2bedc#8pTM5QsrDoojU5tL9xmKoLgRw4Zi96BPrQiKQX6GtAaZ



```
debug2: channel 0: window 1966080 sent adjust 131072
debug2: tcpwinsz: 197420 for connection: 3
debug2: channel 0: window 1933312 sent adjust 163840
debug2: tcpwinsz: 197420 for connection: 3
debug2: channel 0: window 1900544 sent adjust 196608
debug3: send packet: type 1
client_loop: send disconnect: Broken pipe
```

...and the error count of the interface did not increase:

```
netbsd_netif_rx_bytes{interface="xvif1i0"} 29121457
netbsd_netif_tx_bytes{interface="xvif1i0"} 30167032238
netbsd_netif_errors{interface="xvif1i0"} 26
```

So the disconnection event and the error count do not seem to be 
directly connected(?)


Kind regards
Matthias




Re: ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-21 Thread Matthias Petermann

On 21.06.23 19:54, Matthias Petermann wrote:
2>log.txt ssh user@srv-net -vvv doas /sbin/dump -X -h 0 -b 64 -0auf - /data/119455aa-6ef8-49e0-b71a-9c87e84014cb > /mnt/test.dump


...just noticed another variation; this time a client_loop: send 
disconnect occurred:


https://paste.petermann-it.de/?880b721698a2bedc#8pTM5QsrDoojU5tL9xmKoLgRw4Zi96BPrQiKQX6GtAaZ

Best regards,
Matthias




Re: ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-21 Thread Matthias Petermann

Hello,

On 21.06.23 11:22, RVP wrote:

On Wed, 21 Jun 2023, RVP wrote:

A `Broken pipe' from ssh means the RHS of the pipeline exited 
prematurely.




Is what I said, but, I see that ssh ignores SIGPIPE (network I/O--duh!),
so that error message is even odder.

Do a `2>log.txt ssh -vvv ...' and post the `log.txt' file when you send
the command-line.

-RVP


Thanks for your patience. It took me some hours to get ready for the 
test. The command that I issued is:


```
vhost2$ 2>log.txt ssh user@srv-net -vvv doas /sbin/dump -X -h 0 -b 64 -0auf - /data/119455aa-6ef8-49e0-b71a-9c87e84014cb > /mnt/test.dump
```

The log output at the time of the disconnect was:

```
debug2: tcpwinsz: 197420 for connection: 3
debug2: channel 0: window 1933312 sent adjust 163840
debug2: tcpwinsz: 197420 for connection: 3
debug2: channel 0: window 1933312 sent adjust 163840
debug2: tcpwinsz: 197420 for connection: 3
debug2: channel 0: window 1966080 sent adjust 131072
debug3: send packet: type 1
debug1: channel 0: free: client-session, nchannels 1
debug3: channel 0: status: The following connections are open:
  #0 client-session (t4 r0 i0/0 o0/0 e[write]/0 fd 6/7/8 sock -1 cc -1 io 0x01/0x00)
Connection to srv-net closed by remote host.
Transferred: sent 1119040, received 8256466228 bytes, in 374.4 seconds
Bytes per second: sent 2988.7, received 22051060.6
debug1: Exit status -1
```

I would like to apologize for being imprecise in the initial wording of 
the error message. Without the pipe - i.e. with the redirect - I can't 
currently see a "client_loop send disconnect" message at all, but 
"Connection to srv-net closed by remote host.". However, the result is 
the same - the connection was closed remotely.


By the way, until now I assumed that the problem only occurs from Dom0 
to DomU. In the meantime, however, I have encountered it at least once 
with my previously working variant - another physical host on the LAN 
to the DomU. Since all packets going to and from a DomU also pass through 
the Dom0, it is of course conceivable that this is a problem in the Dom0. 
I have also noticed there that the virtual network interface of the DomU 
in question reports transmission errors:


```
netbsd_netif_rx_bytes{interface="xvif1i0"} 14371724
netbsd_netif_tx_bytes{interface="xvif1i0"} 29567916243
netbsd_netif_errors{interface="xvif1i0"} 26
```

For your reference you can find the full log at:

https://paste.petermann-it.de/?0a1f841ca7c27c63#DTzA3mJMN4fXqGTNVERaWuLJxLTNU1BByhs3DoEn5i9d

Kind regards
Matthias




Re: Mounting NetBSD partition on voidlinux

2023-06-21 Thread Matthias Petermann

Hi,

On 21.06.23 14:20, Sagar Acharya wrote:

Also, linux doesn't have fsck_ffs, and debian had support for ufs in ufsutils a long time ago.

I highly recommend that for such cases you have a small standalone source which can be built for correcting such errors, which can perhaps have disklabel, fsck_ffs, etc. A user can use it locally!
Thanking you
Sagar Acharya
https://humaaraartha.in



For serious repair attempts I would always use the USB installation 
media of the appropriate NetBSD version as mentioned by Martin. This 
will first boot into the installer (sysinst), but you can exit it via 
"Utilities menu" -> "Run /bin/sh". Then you are in a shell and can 
access the whole toolbox to analyze and clarify the situation.


I would proceed like this (assumption: GPT partition layout):

1) gpt show <disk>

...to see if the partition table is still intact, whether I got the right 
device, and which partitions are present.


Example: gpt show wd0

2) dkctl <disk> listwedges

...to list the wedges assigned to the partitions (the system-internal 
mapping of parts of a logical disk)


example: dkctl wd0 listwedges

3) fsck_ffs -f <device>

...forced fsck

Example: fsck_ffs -f /dev/rdk3


Kind regards
Matthias




Re: Mounting NetBSD partition on voidlinux

2023-06-21 Thread Matthias Petermann

Hi,

On 21.06.23 12:16, Martin Husemann wrote:

On Wed, Jun 21, 2023 at 12:12:35PM +0200, Sagar Acharya wrote:

My NetBSD system has gotten corrupted. How do I mount my NetBSD partition on 
voidlinux?


The typical recovery doesn't involve any other OS. If your kernel
works and finds the / partition you can "boot -sa" and select
/rescue/init as init replacement, then fix things from there.

If that doesn't work, just boot a USB installer and escape to shell,
then fix whatever needs fixing.


I fully agree with what Martin said. Just in case you still want to 
access FFSv2 from Linux, this is from my notes, as I once had a 
similar case*):


```
$ sudo mount -t ufs -o ro,ufstype=ufs2 /dev/sdf1 /mnt
```


Kind regards
Matthias


*) trying to find a low-barrier, sustainable emergency access method to my 
FFSv2-formatted external USB backups for my family. I found out that 
giving them a live Linux system to boot from USB media was the easiest 
way. Another option I considered was ufs2tools for Windows 
(https://ufs2tools.sourceforge.net/), but this seems unmaintained.




Re: ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-21 Thread Matthias Petermann

On 21.06.23 10:22, RVP wrote:

On Wed, 21 Jun 2023, Matthias Petermann wrote:

Before I had dd in place, I used a redirection > $dumpname which 
results in the same kind of broken pipe issues. I just did verify this 
by repeating this as an isolated test case.




I don't get that: there's no pipe there when you do `> file'. So how come
a Broken pipe still?



My mistake... the error message probably was slightly different but still 
related to the ssh client_loop. I will repeat the test to catch the 
exact message and will attach the exact command.


Kind regards
Matthias




Re: ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-21 Thread Matthias Petermann

Hello,

On 21.06.23 09:31, RVP wrote:

On Tue, 20 Jun 2023, Matthias Petermann wrote:

problems. Since there is a bit more steam on the system, I get 
irregular but predictable SSH connection disconnects (ssh client_loop 
send disconnect: Broken pipe). I have already tried all possible 
combinations of ClientAliveInterval and ServerAliveInterval (and 
-CountMax), also TCPKeepAlive. Nothing has changed this situation. 
Even if I run the SSH server in the DomUs with -d -d -d (debug level 
3), I see no evidence of a cause around the time of the abort.




A `Broken pipe' from ssh means the RHS of the pipeline exited prematurely.
Does dd report anything? media errors? filesystem full? O_DIRECT constraint
violations (offsets and sizes not aligned to block-size because dd(1) is
reading from a (network) pipe--though this is not a problem on UFS)?



thanks for your response and for pointing me to the dd... I did not think 
at first that this could contribute to the issue, but it sounds like 
something important to consider. Basically I use dd to get at least some 
kind of return code on the host, e.g. if the target media is full or not 
writable for some reason.


Before I had dd in place, I used a redirection > $dumpname, which resulted 
in the same kind of broken-pipe issue. I just verified this by 
repeating it as an isolated test case.


Kind regards
Matthias




ssh client_loop send disconnect from Dom0 -> DomU (NetBSD 10.0_BETA/Xen)

2023-06-20 Thread Matthias Petermann

Hello all,

I have a network problem here, and I'm not sure what Xen's contribution to it is.

There is one Dom0 and several DomUs. The DomUs are connected via a 
bridge to the Dom0 and the LAN.


The filesystems of the DomUs are backed up to a USB disk attached to the 
host. To do this, Dom0 calls the dump command[1] in the DomUs via ssh 
and has the data written to stdout. In Dom0, the data is then redirected 
to the file on the USB disk.
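
A minimal sketch of that pattern (the real script is linked below as [1]; 
the host and path names here are examples taken from later messages in 
this thread):

```
# on the Dom0: run dump(8) inside the DomU via ssh and
# redirect the stream to a file on the USB disk
ssh user@srv-net doas /sbin/dump -0auf - /data > /mnt/usb/srv-net.dump
```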


When the VMs were not yet particularly busy, this worked without any 
problems. Since there is a bit more steam on the system, I get irregular 
but predictable SSH connection disconnects (ssh client_loop send 
disconnect: Broken pipe). I have already tried all possible combinations 
of ClientAliveInterval and ServerAliveInterval (and -CountMax), also 
TCPKeepAlive. Nothing has changed this situation. Even if I run the SSH 
server in the DomUs with -d -d -d (debug level 3), I see no evidence of 
a cause around the time of the abort.


Interesting: if I initiate SSH outbound from an external host on the LAN 
(i.e. not the Dom0), it works without connection aborts in every case.


So I'm just wondering if there might be any peculiarities, settings, 
or known errors related to long-running SSH connections from the 
Dom0 into a DomU on the same host. If anyone has any ideas here, I 
would be very grateful.


Kind regards
Matthias


[1] 
https://forge.petermann-it.de/mpeterma/vmtools/src/commit/c0f89b3b7610da25fd073a0cebf4e11788934a4b/vmbackup#L193




Re: scp/sftp -R broken?

2023-06-05 Thread Matthias Petermann

Hi Thomas,

On 06.06.23 00:40, Thomas Klausner wrote:

Hi!

When I try to recursively copy a directory with "scp -r" or sftp's
"put -Rp" between a -current and a NetBSD 9, I see:

# scp -r a netbsd-9:
scp: realpath ./a: No such file
scp: upload "./a": path canonicalization failed
scp: failed to upload directory a to .

# ssh -V
OpenSSH_9.1 NetBSD_Secure_Shell-20221004-hpn13v14-lpk, OpenSSL 3.0.8 7 Feb 2023

netbsd-9# ssh -V
OpenSSH_8.0 NetBSD_Secure_Shell-20220604-hpn13v14-lpk, OpenSSL 1.1.1k  25 Mar 2021

scp of single files works.

The same command works if I copy it onto the same machine (and thus
same ssh on the other side), both current -> current and netbsd9 ->
netbsd9.

Any ideas why this doesn't work, and what the error message wants to tell me??


I'm sorry, I've forgotten the specifics, but I've encountered the 
problem before as well. I was able to solve it using the -O option. The 
scp man page mentions that this option switches to the legacy SCP 
protocol, for servers that don't support SFTP. So it seems like more 
of a workaround, but maybe it will help you? Does anyone have more 
details on this?
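
For reference, the workaround as I used it, applied here to the command 
from your mail (scp's -O flag selects the legacy SCP protocol):

```
# force the legacy SCP protocol instead of the SFTP backend
scp -O -r a netbsd-9:
```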


Kind regards
Matthias




Re: nvmm users - experience

2023-05-21 Thread Matthias Petermann

Hello,

On 21.05.23 16:01, Mathew, Cherry G.* wrote:

Hello,

I'm wondering if there are any nvmm(4) users out there - I'd like to
understand what your user experience is - expecially for multiple VMs
running simultaneously.

Specifically, I'd like to understand if nvmm based qemu VMs have
interactive "jitter" or any such scheduling related effects.

I tried 10.0_BETA with nvmm and it was unusable for 3 guests that I
migrated from Xen, so I had to fall back.

Just looking for experiences from any users of nvmm(4) on NetBSD (any
version, including -current is fine).

Many Thanks,



I would like to contribute a small testimonial as well.

I came across Qemu/NVMM more or less out of necessity, as I had been 
struggling for some time to set up a proper Xen configuration on newer 
NUCs (UEFI only). The issue I encountered was with the graphics output 
on the virtual host, meaning that the screen remained black after 
switching from Xen to NetBSD DOM0. Since the device I had at my disposal 
lacked a serial console or a management engine with Serial over LAN 
capabilities, I had to look for alternatives and therefore got somewhat 
involved in this topic.


I'm using the combination of NetBSD 9.3_STABLE + Qemu/NVMM on small 
low-end servers (Intel NUC7CJYHN), primarily for classic virtualization, 
which involves running multiple independent virtual servers on a 
physical server. The setup I have come up with works stably and with 
acceptable performance.


Scenario:

I have a small root filesystem with FFS on the built-in SSD, and the 
backing store for the VMs is provided through ZFS ZVOLs. The ZVOLs are 
replicated alternately every night (full and incremental) to an external 
USB hard drive.


There are a total of 5 VMs:

net (DHCP server, NFS and SMB server, DNS server)
app (Apache/PHP-FPM/PostgreSQL hosting some low-traffic web apps)
comm (ZNC)
iot (Grafana, InfluxDB for data collection from two smart meters every 10 seconds)
mail (Postfix/Cyrus IMAP for a handful of mailboxes)

Most of the time, the host's CPU usage with this "load" is around 20%. 
The provided services consistently respond quickly.

However, I have noticed that, depending on the load, the clocks of the 
VMs can deviate significantly. This can be compensated for by using a 
higher HZ in the host kernel (HZ=1000) and a tolerant ntpd configuration 
in the guests. I have also tried various settings with schedctl, 
especially with the FIFO scheduler, which helped in certain scenarios 
with high I/O load. However, this came at the expense of stability.


Furthermore, in my system configuration, granting a guest more than one 
CPU core does not seem to provide any advantage. Particularly in the VMs 
where I am concerned about performance (net with Samba/NFS), my 
impression is that allocating more CPU cores actually decreases 
performance even further. I should measure this more precisely someday...


Apart from that, I have set up Speedstep on the host with a short 
warm-up and relatively long cool-down phase, so the system is very 
energy-efficient when idle (power consumption around 3.5W).


As for the usability of keyboard/mouse/graphics output, I cannot provide 
an assessment, as I install the guest systems using Qemu's virtual serial 
console.
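
For illustration, a minimal sketch of this kind of invocation; the 
memory size, ZVOL path, and guest name are placeholders, and only the 
`-accel nvmm` part is the NVMM-specific bit:

```
# boot a guest on the NVMM accelerator with a ZVOL as raw disk,
# using the serial console instead of a graphical display
qemu-system-x86_64 -accel nvmm -m 1024 -smp 1 \
    -drive file=/dev/zvol/rdsk/tank/vm/net,format=raw \
    -display none -serial mon:stdio
```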


If you have specific questions or need assistance, feel free to reach 
out. I have documented everything quite well, as I intended to 
contribute it to the wiki someday. By the way, I am currently working on 
a second identical system where I plan to test the combination of NetBSD 
10.0_BETA and Xen 4.15.


Kind regards
Matthias




Reproducible deadlock? with VND on NetBSD 10.0_BETA Dom0 on Xen 4.15

2023-05-17 Thread Matthias Petermann

Hello all,

I just wanted to create a custom filesystem image on a Xen Dom0 and 
encountered a possible deadlock:


1) Create the image file as sparse file:

  $ bytes=$(echo "(16*1024*1024*1024)-1"|bc)
  $ doas dd if=/dev/zero of=/data/vhd/net.img bs=1 count=1 seek=$bytes

2) Configure and mount the image:

  $ doas vndconfig vnd0 /data/vhd/net.img
  $ doas newfs -O2 -I /dev/vnd0
  $ doas mount -o log /dev/vnd0 /mnt/

3) Extract the base system components

  $ cd /mnt/
  $ sets="kern-GENERIC.tar.xz base.tar.xz comp.tar.xz etc.tar.xz 
man.tar.xz misc.tar.xz modules.tar.xz rescue.tar.xz test.tar.xz text.tar.xz"

  $ setsdir=/data/install/NetBSD-10.0_BETA/amd64/binary/sets
  $ for set in $sets;do doas tar xvfz $setsdir/$set;done

Observation: during the last command (tar xvfz), after working for a 
while, it suddenly freezes. In a second terminal, interaction and 
inspection with top are still possible. The CPU is 100% idle in this 
state, and RAM does not seem to be an obvious problem either. This goes 
well until I start an operation with disk I/O in the second terminal; 
then everything comes to a halt there too.


This is the top output from the freeze situation:

```
load averages:  0.00,  0.00,  0.00;   up 0+00:16:04        15:05:06

25 processes: 24 sleeping, 1 on CPU
CPU states:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle

Memory: 244M Act, 119M Inact, 5904K Wired, 13M Exec, 274M File, 24M Free
Swap: 8192M Total, 8192M Free / Pools: 73M Used

  PID USERNAME PRI NICE   SIZE   RES STATE   TIME   WCPU    CPU COMMAND
  433 root      95    0   100M   68M vndpc   0:03  0.00%  0.00% tar
    0 root      96    0     0K   31M uvnfp1  0:02  0.00%  0.00% [system]

```

Maybe the following context is relevant:

 - the hosts filesystem is FFSv2 with WAPBL enabled
 - the VNDs filesystem is FFSv2 with WAPBL enabled
 - the DOM0 has 512 MB RAM and 1 CPU Core configured (pin)
 - the freeze doesn't occur on the exact same system when I boot into a 
native NetBSD kernel without the hypervisor (in this case there are both 
CPU cores available as well as the full 8192 MB RAM)


Is there anything I could try to investigate this in more detail? 
Unfortunately, on the Xen side I have no console available (the system 
has no serial port, and the graphical console turns black when the 
hypervisor hands over control to the NetBSD kernel).


Many thanks in advance!

Kind regards
Matthias




net/net-snmp in pkgsrc-2022Q4 fails to build on NetBSD 10.0_BETA

2023-02-20 Thread Matthias Petermann

Hello,

unfortunately I can't find the right mail thread for this topic, but I 
think this has been discussed here before.


The reason why I am writing: I saw that the problem is already solved 
in pkgsrc-current:

```
author  mrg  2023-01-14 21:11:35 +
committer   mrg  2023-01-14 21:11:35 +
commit  711c521efb49e012473a36e4ec6cb6e2b986bf27 (patch)
tree32122313d729b8bdf33986ef144ca20da283f547 /net/net-snmp
parent  4f05f5067c807a4435646f0040d96b1cfcb8c779 (diff)
handle merged inpcb code since 9.99.104:
  http://mail-index.netbsd.org/source-changes/2022/10/28/msg141800.html
```

Is it possible, and allowed by policy, to aim for a pull-up of the patch 
to pkgsrc-2022Q4? I know that 2023Q1 is practically around the corner, 
and nothing stops me from applying a local patch to 2022Q4 for 
myself. Nevertheless, net-snmp alone is, as an (in)direct dependency, 
responsible for the failure of the builds of about 150 packages on my 
small, limited pkglist, including Ejabberd, RabbitMQ and Pandoc. So this 
might be affecting other users too?


Kind regards
Matthias





Re: FFSv2ea

2023-01-16 Thread Matthias Petermann

Hello Clay,

On 17.01.23 03:27, Clay Daniels wrote:
I've enjoyed trying out the new 10.0_BETA and I've selected the newer 
FFSv2ea from the partition menu of the installation a couple of times.


If I try this:    #gpt show wd0

I get this:        GPT part - NetBSD FFSv1/FFSv2

Then on the next installation I deliberately used the default of FFSv2, 
and I get the same thing. I'm sure I must be expecting the gpt command to 
do more than it really does. What other command shows the fast file system 
in use?


Clay




the output of gpt corresponds to the GPT partition type. This is rather 
generic in my opinion - you can see that FFSv1 and FFSv2 seem to share a 
type code here.

More precise information about the internal contents of an FFS file 
system is provided by the dumpfs command. Note: this only works on 
unmounted partitions. For example, it also shows the subtype FFSv2ea. 
Here is an example:


```
$ doas dumpfs /dev/dk2 | head

file system: /dev/dk2
format  FFSv2ea
endian  little-endian
location 65536  (-b 128)
magic   19012038    time    Mon Jan 16 16:32:12 2023
superblock location 65536   id  [ 63010452 50c5903c ]
nbfree  5327255 ndir    12306   nifree  16432093    nffree  1132
ncg 354 size    67108855    blocks  66069485
bsize   32768   shift   15  mask    0x8000
fsize   4096    shift   12  mask    0xf000
frag    8   shift   3   fsbtodb 3
bpg 23697   fpg 189576  ipg 46720
minfree 5%  optim   time    maxcontig 2 maxbpg  4096
symlinklen 120  contigsumsize 2
maxfilesize 0x000800800805
nindir  4096    inopb   128
avgfilesize 16384   avgfpdir 64
sblkno  24  cblkno  32  iblkno  40  dblkno  2960
sbsize  4096    cgsize  32768
csaddr  2960    cssize  8192
cgrotor 0   fmod    0   ronly   0   clean   0x01
wapbl version 0x1   location 2  flags 0x0
wapbl loc0 268463360    loc1 131072 loc2 512    loc3 3
usrquota 0  grpquota 0
flags   none
fsmnt   /export
```

However, it is reasonable to ask whether the reasons that led to the 
introduction of the new FFSv2 subtype "ea" would also require a standalone 
GPT partition type, or whether the current status is still OK, since 
nothing has changed in the basic on-disk format of the file systems. 
However, I do not know the guidelines here. Maybe Chuck can answer that?


Kind regards
Matthias


Re: Branching for netbsd-10 next week

2022-12-15 Thread Matthias Petermann

On 15.12.22 at 11:35, Martin Husemann wrote:

Just to wrap up this thread:

  - branch will probably happen in the next ~10h

  - default file system for new installations will be FFSv2

I will update docs and extend the wiki page about FFS2ea to show how to
switch later, and also provide installation instructions how to select
FFSv2ea right during installation (trivial to do, but better we have
something to be found for later searches).

Thanks to all the input provided on this list and off list!

Martin


Thanks, Martin, for keeping us up to date with the progress and the 
decision for the default file system. I think that, considering all the 
opinions expressed, this is a good decision.


Kind regards
Matthias




Re: HEADS UP: UFS2 extended attribute changes will be committed tomorrow

2022-12-15 Thread Matthias Petermann

Hello Chuck,

On 05.12.22 at 16:32, Chuck Silvers wrote:

In my records, I noticed another note that I had made more than a year ago.
That was around the time the POSIX ACLs were imported into NetBSD. I'm not
sure if this is on anyone's radar, or if it's generally considered a
requirement. I would be interested to know how dump / restore are affected
by the current changes, or if there are plans to add support for Extended
Attributes / ACLs to them. Back in FreeBSD 6.0 times there was a patch[1]
that did exactly that for the FreeBSD variants of dump / restore. However,
the source code has probably diverged far from it.


christos applied the dump changes for extattrs from freebsd to our code last 
year:

commit 52fea75266aee4480e62dd763d4d3d74b002ea5c
Author: christos 
Date:   Sat Jun 19 13:56:34 2021 +

 Add external attribute dumping and restoring support from FreeBSD.
 Does not fully work yet, attributes are being saved and restored correctly,
 but don't appear in the restored files somehow.


there is a bug in that code that effectively prevents restoring NFSv4 ACLs.
I have a fix for this bug that I need to apply to freebsd and then I'll apply it
to our code as well.



thanks for the update. I saw that the fix landed in the NetBSD code 
base earlier this week - great job! ACL support in NetBSD is becoming 
more and more complete and solid :-)


Kind regards
Matthias




Re: Branching for netbsd-10 next week

2022-12-09 Thread Matthias Petermann

Hello Martin,

On 08.12.22 at 20:21, Martin Husemann wrote:

Now the question: should the default install really use this new FFS type,
or should it default to plain FFSv2?


thanks for the good news about the branching progress, as well as the 
good preparation of the topic around the installer and the support for 
EA/ACL. Depending on the point of view from which one sees NetBSD, one 
or the other variant could appear to be the better one. So here is just 
my completely personal opinion.

For context: I see NetBSD as a solid server operating system for 
small and medium appliances. I prefer it (not only) because of its 
stability and low maintenance effort over all other Unix variants, 
everywhere I can make the decision myself and take responsibility. 
I hope that NetBSD will keep its relevance in this area or even gain some. 
That's why I would like to see stable features that are standard on other 
comparable operating systems also enabled by default on NetBSD. This 
concerns not only the ACLs but, for example, also WAPBL (log). This would 
make it easier for new users on modern systems to get started.


Especially regarding the ACLs, which for the first time make it possible 
to run an Active Directory-compatible domain controller with NetBSD, a 
nice straight path opens up, which has the potential to lead to a wider 
distribution and thus better test coverage in those inhomogeneous 
environments where NetBSD has not been present so far due to this gap. 
The detour through single-user mode and running a migration on a freshly 
installed system is an unnecessary complication here.


On the other hand, of course, I also see the concerns about backward 
compatibility. However, if I understand correctly, this only affects new 
installations? If I migrate an existing NetBSD 9 to 10, nothing changes 
in the file system format, i.e. as long as I do not actively initiate 
the migration, I can always access it with NetBSD 9. How likely is it 
that I will have to access the file system with NetBSD 9 after a new 
installation of NetBSD 10? I can only think of experimental or recovery 
scenarios. In such cases, is it perhaps more reasonable to refer the 
(experienced) user to single-user mode and migration than the (new) user 
to a fresh install?


You can probably tell - I would vote for FFSv2ea as the default, but 
am also fine with the opposite if there are good reasons for it. It's 
only ever a few keystrokes in the installer ;-)


Many greetings
Matthias




Re: Branching for netbsd-10 next week

2022-12-09 Thread Matthias Petermann

Hello Robert,

On 09.12.22 at 08:55, Robert Elz wrote:

   | - packages from pkgsrc (like samba) will continue to have the
   |  corresponding options disabled by default

Those packages could have warnings in DESCR and MESSAGE (or whatever it
is called) advising of the need for FFSv2ea for full functionality.
How does samba (and anything else like it) deal with this - if it is
a compile time option, then since NetBSD supports EAs (and hence the
sys call interface exists) EA support in samba should be enabled.
More likely it is to be a run time problem, as whether a filesys has
EA support or not will vary, some might, others not, so whether samba
can provide EA functionality will tend to vary file by file (or at least
directory by directory) - given that, a solid warning that FFSv2ea support
is needed in the samba man page (or other doc) for NetBSD should allow
users to know what to do.


That's right - on NetBSD 10, the necessary library functions for 
managing ACLs are available and are linked into Samba at compile time. 
This currently has to be enabled manually via the "acl" compile option - 
I have had this in my custom builds for a few months now.
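
For anyone who wants to reproduce this, a minimal sketch of my build 
setup. The exact options variable name is an assumption on my part - 
please verify it against the options.mk of the package:

```
# /etc/mk.conf -- enable the ACL option when building net/samba4 from
# pkgsrc; the variable name PKG_OPTIONS.samba4 is an assumption, check
# the package's options.mk
PKG_OPTIONS.samba4+=	acl
```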


A Samba version compiled this way works in standalone mode even without 
an ACL-capable file system. Error messages about missing ACL support in 
the operating system only occur when it is used as an Active Directory 
domain controller. The first place where this usually shows up is when 
you initialize the structures for the AD in the file system with 
"samba-tool domain provision" (this mainly affects the sysvol 
directory).


I therefore agree with you - a corresponding note in the MESSAGE would 
be sufficient. Especially since, even on a system without EA/ACL on the 
root file system, a separate partition with ACLs can be mounted for the 
sysvol. This is how I have currently implemented it on my domain 
controllers - primarily for historical reasons, and in order not to 
expose the root file system to the risks of the then still "fresh" EA 
implementation.
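
In concrete terms, this looks roughly like the following on my domain 
controllers - a sketch from memory; the wedge name, the mount point and 
the test file are placeholders:

```
# /etc/fstab: dedicated, ACL-capable partition for the Samba sysvol
NAME=sysvol  /var/db/samba4/sysvol  ffs  rw,log,posix1eacls  1 2
```

After mounting, a quick setfacl/getfacl round trip on a test file shows 
whether the ACL path really works end to end:

```
touch /var/db/samba4/sysvol/.acltest
setfacl -m u:nobody:rwx /var/db/samba4/sysvol/.acltest
getfacl /var/db/samba4/sysvol/.acltest
```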


Kind regards
Matthias




Re: HEADS UP: UFS2 extended attribute changes will be committed tomorrow

2022-12-05 Thread Matthias Petermann

Hi Chuck,

Am 22.11.22 um 07:10 schrieb Chuck Silvers:
[]

as you noted in your later mail, this is documented only in the fsck_ffs manpage
since it only applies to fsck_ffs and not the fs-independent fsck wrapper 
program.

thanks for your feedback!

-Chuck


thank you very much for the explanations. UFS2ea has now been running 
stably for a few weeks here (migrated with fsck_ffs -p -c ea) as the 
SYSVOL of a Samba server, as well as for some file systems used as large 
file shares.
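
For the archives, the migration boiled down to something like this - a 
sketch from memory; the device name is a placeholder, and the file 
system must be unmounted (or the machine in single user mode):

```
# convert an existing, unmounted FFSv2 to UFS2ea in place
fsck_ffs -p -c ea /dev/rdk2
# afterwards, mount with ACLs enabled
mount -o log,posix1eacls /dev/dk2 /data
```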


In my notes, I came across another item I had written down more than a 
year ago, around the time the POSIX ACLs were imported into NetBSD. I am 
not sure whether this is on anyone's radar, or whether it is generally 
considered a requirement: I would be interested to know how dump / 
restore are affected by the current changes, and whether there are plans 
to add support for extended attributes / ACLs to them. Back in FreeBSD 
6.0 times there was a patch[1] that did exactly that for the FreeBSD 
variants of dump / restore. The source code has probably diverged far 
from it by now.


Kind regards and thanks again for all the work with this much 
appreciated feature

Matthias

[1] https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=93085




Re: HEADS UP: UFS2 extended attribute changes will be committed tomorrow

2022-11-18 Thread Matthias Petermann

Hello,

Am 18.11.22 um 09:09 schrieb Matthias Petermann:

Hello Chuck, hello all

Am 15.11.22 um 12:34 schrieb Chuck Silvers:
 > Please let me know if there are any questions or concerns.
 >
 > -Chuck

  - The new fsck option "-c" to convert from UFS2 to UFS2ea and vice 
versa is missing in the man page fsck(8)




...my fault. I just revisited the diff and found the documentation I 
missed in the man page fsck_ffs(8).


Kind regards
Matthias




Re: HEADS UP: UFS2 extended attribute changes will be committed tomorrow

2022-11-18 Thread Matthias Petermann

Hello Chuck, hello all

Am 15.11.22 um 12:34 schrieb Chuck Silvers:
> Hi folks,
>
> On Wednesday I'll be committing the changes that I proposed a while back
> that restore UFS2 backward-compatibility with previous NetBSD releases
> and create a new "UFS2ea" variant of UFS2 that supports extended attributes.

>
> The previous discussion of this issue started with this post:
> https://mail-index.netbsd.org/current-users/2022/05/24/msg042387.html
>
> The diff that I'll be committing (if no additional changes arise) is at:
> https://ftp.netbsd.org/pub/NetBSD/misc/chs/diff.ufs2ea.20221114.1
>
> There is a wiki page with instructions for updating a -current installation
> from before this change to after this change at:
> https://wiki.netbsd.org/features/UFS2ea/
>
> Please let me know if there are any questions or concerns.
>
> -Chuck

I have done some tests with a current build, including a complete 
reinstallation. I noticed a few small things that I would like to point 
out. Some of them are probably not caused by this commit; I just want to 
mention them in this context, from a user's point of view:


 - UFS2 seems to be the default in sysinst when FFSv2 is selected. I 
did not find a way to explicitly choose UFS2ea. A new installation with 
a UFS2ea-enabled FFSv2 as the root file system is currently only 
possible via a detour (conversion via fsck in single user mode). Would 
an addition to sysinst be desirable here?


- An FFSv2 with UFS2 (without ea) can be mounted without an error 
message with the option "posix1eacls". Only when trying to manipulate an 
ACL with setfacl do you get the error "Operation not supported". Is this 
because the mount command generally does not check in advance whether 
the addressed file system supports the given options (or does not know 
about them), or is this a place where the magic bytes still have to be 
updated?


 - The mount option "posix1eacls" (I think there was a second 
ACL-related one?) is missing in the man page mount(8). I feel like I 
have seen it in a man page before... was I looking in the wrong place?


 - The new fsck option "-c" to convert from UFS2 to UFS2ea and vice 
versa is missing in the man page fsck(8)


Kind regards
Matthias







nss_winbind not functional anymore on NetBSD 9.99.106 and Samba 4.16.5

2022-11-14 Thread Matthias Petermann

Hello all,

I have been using NetBSD 9.99.99 with Samba 4.15.9 (from pkgsrc 2022Q2) 
as a Windows domain controller for a while now, which worked well.


Since I switched to the combination NetBSD 9.99.106 and Samba 4.16.5 
(from pkgsrc 2022Q3), the name resolution for usernames / groups via 
nss_winbind does not work anymore.


The Windows clients are not directly affected by this, since the NSS 
mechanism matters mainly on the Unix side, where it ensures that the 
correct plaintext names can be displayed for the numeric user and group 
IDs assigned by Samba - for example by ls. The workaround at the moment 
is to work with the numeric IDs, which is inconvenient and error-prone.


As proof, I try to display the user information for the built-in domain 
administrator account via the id command:


```
net$ id Administrator
id: Administrator: No such user
```

I have checked the following so far:

1) Basic function kerberos with kinit / klist.

```
net$ kinit Administrator
Administrator@TEST.LOCAL's Password:

net$ klist
Credentials cache: FILE:/tmp/krb5cc_1000
Principal: Administrator@TEST.LOCAL

  Issued                Expires               Principal
Nov 14 10:42:45 2022  Nov 14 20:42:45 2022  krbtgt/TEST.LOCAL@TEST.LOCAL
```

2) Joining the Domain from a Windows 11 Prof 22H2 based host

 - works

3) Basic function winbind

```
net$ wbinfo -i Administrator
TEST\administrator:*:0:100::/home/TEST/administrator:/bin/false

net$ wbinfo -g Administrator
TEST\cert publishers
TEST\ras and ias servers
TEST\allowed rodc password replication group
TEST\denied rodc password replication group
TEST\dnsadmins
TEST\enterprise read-only domain controllers
TEST\domain admins
TEST\domain users
TEST\domain guests
TEST\domain computers
TEST\domain controllers
TEST\schema admins
TEST\enterprise admins
TEST\group policy creator owners
TEST\read-only domain controllers
TEST\dnsupdateproxy
```

4) /etc/nsswitch.conf

```
group:  files winbind
group_compat:   nis
hosts:  files dns
netgroup:   files [notfound=return] nis
networks:   files
passwd: files winbind
passwd_compat:  nis
shells: files
```

5) libnss winbind

```
net$ ls -la /usr/lib/nss_winbind.so.0
lrwxr-xr-x  1 root  wheel  30 Nov 14 09:56 /usr/lib/nss_winbind.so.0 -> /usr/pkg/lib/libnss_winbind.so
```

6) Ktrace of the "id" command (excerpts)

```
net$ ktrace id Administrator
id: Administrator: No such user
net$ kdump

   592592 id   CALL  open(0x785c601b43b8,0x40,0x1b6)
   592592 id   NAMI  "/etc/nsswitch.conf"
   592592 id   RET   open 3
   592592 id   CALL  mmap(0,0x7000,PROT_READ|PROT_WRITE,0x1002,0x,0,0)
   592592 id   RET   mmap 132338150055936/0x785c606ca000
   592592 id   CALL  mmap(0,0x7000,PROT_READ|PROT_WRITE,0x1002,0x,0,0)
   592592 id   RET   mmap 132338150027264/0x785c606c3000
   592592 id   CALL  mmap(0,0x5000,PROT_READ|PROT_WRITE,0x1002,0x,0,0)
   592592 id   RET   mmap 132338150006784/0x785c606be000
   592592 id   CALL  mmap(0,0x5000,PROT_READ|PROT_WRITE,0x1002,0x,0,0)
   592592 id   RET   mmap 132338149986304/0x785c606b9000
   592592 id   CALL  __fstat50(3,0x7f7fff082110)
   592592 id   RET   __fstat50 0
   592592 id   CALL  mmap(0,0x5000,PROT_READ|PROT_WRITE,0x1002,0x,0,0)
   592592 id   RET   mmap 132338149965824/0x785c606b4000
   592592 id   CALL  read(3,0x785c606b4740,0x4000)
   592592 id   GIO   fd 3 read 667 bytes
   "#   $NetBSD: nsswitch.conf,v 1.6 2009/10/25 00:17:06 tsarna Exp $\n#\n# nsswitch.conf(5) -\n#   name service switch configurat\
    ion file\n#\n\n\n# These are the defaults in libc\n#\n#group: compat\ngroup:  files winbind\ngroup_compat:nis\nh\
    osts:   files dns\nnetgroup:files [notfound=return] nis\nnetworks:  files\n#passwd: compat\npasswd: files winbind\
    \npasswd_compat:nis\nshells:files\n\n\n# List of supported sources for each database\n#\n# group:   compat\
    , dns, files, nis\n# group_compat:  dns, nis\n# hosts:  dns, files, nis, mdnsd, multicast_dns\n# netgroup:\
    files, nis\n# networks: dns, files, nis\n# passwd:  compat, dns, files, nis\n# passwd_compat:\
    dns, nis\n# shells: dns, files, nis\n"
   592592 id   RET   read 667/0x29b
   592592 id   CALL  read(3,0x785c606b4740,0x4000)
   592592 id   GIO   fd 3 read 0 bytes
   ""
   592592 id   CALL  open(0x7f7fff0817b8,0,7)
   592592 id   NAMI  "/usr/lib/nss_files.so.0"
   592592 id   RET   open -1 errno 2 No such file or directory
   592592 id   CALL  __sigprocmask14(3,0x7f7fff081e60,0)
   592592 id   RET   __sigprocmask14 0
   592592 id   CALL  mmap(0,0x5000,PROT_READ|PROT_WRI
```

Best practices to mount SMB share on NetBSD current

2022-11-03 Thread Matthias Petermann

Hello all,

after the justified removal of mount_smb from the base system - what 
would be the current best way to mount a remote SMB filesystem (Windows 
Server 2016) on NetBSD?


Ideas:

 - wip/fuse-smbfs (has anyone ever tested this and found it useful?)
 - gvfs to SMB via FUSE (should be possible according to the 
description of sysutils/gvfs - but I haven't found any documentation 
etc. - is there any experience with this?)
 - rclone (https://rclone.org/smb/ + 
https://rclone.org/commands/rclone_mount/ - sounds adventurous at first 
;-) - see the sketch after this list)
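
For the rclone idea, roughly what I have in mind - untested on my side; 
the remote name "winsrv", the share "data" and the mount point are 
placeholders, and I have not verified yet that rclone mount works on 
top of NetBSD's FUSE layer:

```
# mount an SMB share via a preconfigured rclone remote ("winsrv" was
# set up beforehand with "rclone config"); untested sketch
rclone mount winsrv:data /mnt/smb --daemon
```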


I'm looking forward to some answers with further suggestions - maybe I 
haven't found the most obvious one yet.


Kind regards
Matthias





Re: Functional differences when using ntpd as NTP client on NTP-NetBSD 9.3<-->10 ?

2022-10-22 Thread Matthias Petermann

Hi,

Am 21.10.2022 um 18:02 schrieb Christos Zoulas:

from man ntp.conf:
  The quality and reliability of the suite of associations discovered by
  the manycast client is determined by the NTP mitigation algorithms and
  the minclock and minsane values specified in the tos configuration
  command.  At least minsane candidate servers must be available and the
  mitigation algorithms produce at least minclock survivors in order to
  synchronize the clock.  Byzantine agreement principles require at least
  four candidates in order to correctly discard a single falseticker.  For
  legacy purposes, minsane defaults to 1 and minclock defaults to 3.  For
  manycast service minsane should be explicitly set to 4, assuming at least
  that number of servers are available.

How many clocks are you synchronizing against? Could it be that you
end up with fewer than minclock servers?

christos



Well, strictly speaking, I synchronise exactly against a single clock 
(that of the host on which the VMs are running). This also works 
reliably on NetBSD 9.3 with the configuration parameters from my last mail.


Regarding "tos minclock", I had relied on the comment in the 
configuration file:


	"Set the target and limit for adding servers configured via pool 
statements or discovered dynamically via mechanisms such as broadcast 
and manycast."


and left it at the default values, because I don't use a pool statement 
but a dedicated server statement to define the clock source.


```
tos minclock 3 maxclock 6
server 192.168.2.10 burst minpoll 4 maxpoll 6 true
```

Admittedly, the excerpt from man ntp.conf reads a bit more generically. 
However, the problem also exists with:


```
tos minclock 1 maxclock 1
```


Kind regards
Matthias


Re: Functional differences when using ntpd as NTP client on NTP-NetBSD 9.3<-->10 ?

2022-10-22 Thread Matthias Petermann

Hi,

Am 21.10.2022 um 18:41 schrieb Steffen Nurpmeso:

Christos Zoulas wrote in
  :
  |In article <3407f89f-6d30-f1a5-d013-77176f249...@petermann-it.de>,
  |Matthias Petermann   wrote:
  ...
  |>I use ntpd in my Qemu/nvmm VMs as a client to synchronise the (otherwise
  ...

I would simply shoot via rdate(8) without -a, maybe via cron.
Unless they are long living, then with -a.
(I drive my laptop like so, against a NTP running on an always-on
vserver that listens to a single NTP in the same rack.)


Thanks, I will definitely add this to my list of alternatives. I tested 
it successfully without the -a option (-a was not effective because of 
the large deviations). What I like about the approach is that it is so 
simple and unambiguous. At the moment I have the impression that in my 
VM it triggers a small time jump of a few seconds after every cron run. 
I'll have to take a closer look at the -a option (which should prevent 
exactly that) and continue researching ntpd in parallel. With the 
latter, my experience is that - when it works - the continuity and 
uniformity of the time progression is very well ensured.
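
In case it helps others, the cron variant I am about to try looks 
roughly like this - untested in exactly this form; the host must offer 
the time(37) service (e.g. via inetd), and the address is the one from 
my setup:

```
# root crontab: resync every 10 minutes; -a slews the clock instead of
# stepping it, which should avoid the small time jumps
*/10 * * * *	/usr/sbin/rdate -a 192.168.2.10 >/dev/null 2>&1
```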


Kind regards
Matthias


Functional differences when using ntpd as NTP client on NTP-NetBSD 9.3<-->10 ?

2022-10-21 Thread Matthias Petermann

Hello all,

I use ntpd in my Qemu/nvmm VMs as a client to synchronise the (otherwise 
lagging) clocks. For this purpose, ntpd runs on the host and 
synchronises against servers on the internet. The ntpd in each VM only 
knows the host as its single time source and is configured so that it 
does not give up even in the case of major deviations. That is, I use 
mostly the defaults with the following adjustments:


```
$ doas vi /etc/ntp.conf

tinker panic 0 stepout 30

#tos minsane 2

server 192.168.2.10 burst minpoll 4 maxpoll 6 true
```

This works reliably in the VMs with NetBSD 9.3 as guest. Deviations are 
regularly compensated for by stepping.


In the VMs with NetBSD 9.99.99 as guest, for some reason no stepping 
takes place. As a consequence, deviations of several seconds up to 
minutes build up within a short time.


NetBSD 9.3 contains ntpd 4.2.8p11. In my build of NetBSD 9.99.99, ntpd 
4.2.8p14 is included, but I have also seen that p15 is now in the tree. 
Does anyone know if there have been any functional changes between these 
three versions that could account for this behaviour?


I am aware that such questions are tricky in the virtualisation 
context. For what it's worth, the host is running NetBSD 9.3 with 
HZ=1000, all VMs run with HZ=100, and the Qemu processes are set to 
"near real-time" priority on the host via schedctl (SCHED_FIFO). Since I 
have been using this setup with NetBSD 9.3 VMs, I have had no more 
problems with clock synchronisation. With said NetBSD 9.99.99 VM, all 
these host-side optimisations don't seem to help. Since the external 
configuration is the same (same virtual CPU count), I suspect the cause 
lies more on the side of ntpd (or other internal changes that I'm not 
thinking of at all right now).


As always, I'm grateful for any helpful hints - chrony is compiling in 
the background and I'll test it tonight as a workaround.


Kind regards
Matthias


Re: How to limit amount of virtual memory used for files (was: Re: Tuning ZFS memory usage on NetBSD - call for advice)

2022-09-24 Thread Matthias Petermann

Hello all,

On 22.09.22 18:38, Mike Pumford wrote:



On 22/09/2022 06:44, Lloyd Parkes wrote:

Can we put together a catalogue of clearly defined problems so that we 
can reproduce them and investigate further? While Håvard appears to 
have solved his problem, I'm pretty sure I have an unused G4 Mac Mini 
of my own that I can try and reproduce his problem on.


So I changed vm.filemin and vm.filemax to solve my excessive swapping 
issues. Basically the system was preferring to keep file cache around 
and was evicting memory for processes that consumed swap (so data pages 
I'm assuming) rather than file cache. Under heavy memory/file usage 
(pkgsrc bulk builds) any large process running at the time of the build 
ended up going non-responsive.


So it could be that the all that needs to change here are the defaults 
for those parameters.


For the record I have:
vm.filemax=10
vm.filemin=1

in /etc/sysctl.conf on a 16GB 9.3-STABLE system. The system can 
comfortably use more memory than that for file cache. As I write this 
message its currently using 9GB for file cache (which I don't see as a 
problem as there are no other demands on the system memory at the 
moment. Starting a new process like firefox correctly dumped file cache 
over other memory with this config. Also long running firefox processes 
remained responsive both during and after the builds with this change.


Mike


Thank you very much for the helpful input. I had read through the 
linked thread and also set vm.filemax=10 and vm.filemin=1 here. This has 
resulted in a noticeable relaxation when I now run a zfs send with a 
file on a USB disk as the destination. The file cache still fills up, 
but swap usage is no longer detectable, or only minimal. The system 
remains stable and responsive.
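
For completeness, the values can be tried at runtime first and then 
persisted - a short sketch:

```
# try the values at runtime ...
sysctl -w vm.filemin=1
sysctl -w vm.filemax=10
# ... then persist them across reboots
printf 'vm.filemin=1\nvm.filemax=10\n' >> /etc/sysctl.conf
```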


In summary, this has solved my problem.

Kind regards
Matthias





Re: How to limit amount of virtual memory used for files (was: Re: Tuning ZFS memory usage on NetBSD - call for advice)

2022-09-20 Thread Matthias Petermann

Hi,

Am 20.09.2022 um 08:27 schrieb RVP:

On Tue, 20 Sep 2022, Matthias Petermann wrote:

I think I had answered this earlier (but I'm not quite sure) - the 
problem only occurs when I write the data obtained with "zfs send" to 
a local file system (e.g. the USB HDD). If I send the data to a remote 
system with netcat instead, the "file usage" remains within the green 
range.


[...]

This raises the question for me: can I somehow limit this kind of 
memory use on a process basis?




Try piping to dd with direct I/O:

... | dd bs=10m oflag=direct of=foo.file

Does that help?


In theory this looked like exactly what I was looking for... 
unfortunately I observe the same effect as before with the simple 
redirection: "File" grows to a maximum and the system starts to swap.


I use this command line:

```
zfs send -R tank/vol@backup | dd bs=10m oflag=creat,direct of=/mnt/vhost-tank-vol.zfs

```

(I had to add the creat flag as the file did not exist before)

The memory remains occupied even if I send a "sync" in between. However, 
it is immediately released again when I a) delete the file or b) unmount 
the file system.


Have I used the direct flag correctly?

Kind regards
Matthias


How to limit amount of virtual memory used for files (was: Re: Tuning ZFS memory usage on NetBSD - call for advice)

2022-09-19 Thread Matthias Petermann

Hello all,

Am 31.08.2022 um 21:57 schrieb Lloyd Parkes:

It might not be ZFS related. But it could be.

Someone else reported excessive, ongoing, increasing "File" usage a 
while back and I was somewhat dismissive because they were running a 
truckload of apps at the same time (not in VMs).


I did manage to reproduce his problem on an empty non-ZFS NetBSD system, 
so there is definitely something going on where "File" pages are not 
getting reclaimed when there is pressure on the memory system.


I haven't got around to looking into it any deeper though.

BTW the test was to copy a single large file (>1TB?) from SATA storage 
to USB storage. Since the file is held open for the duration of the copy 
(I used dd IIRC) this might end up exercising many of the same code 
paths as a VM accessing a disk image.


Cheers,
Lloyd



I think I had answered this earlier (but I'm not quite sure) - the 
problem only occurs when I write the data obtained with "zfs send" to a 
local file system (e.g. the USB HDD). If I send the data to a remote 
system with netcat instead, the "file usage" remains within the green 
range. I can therefore confirm your test case and am now pretty sure 
that ZFS is not the culprit here.


In general, I have found relatively little about how exactly the memory 
under "File" is composed. The man page for top does not contain any 
information on this. I got a bit of an idea in [1], so I'm going to make 
a few assumptions. If anyone could confirm or contradict these, I would 
be very grateful.


Accordingly, the memory shown under "File" could be areas of files in 
the file system mapped into the main memory. As a consequence, my 
massive writing to a large file probably leads to the data first being 
"parked" in memory pages of the main memory and then gradually being 
written to the backing storage (hard disk). Since "zfs send" can read 
the data from a SATA SSD much faster than it can be written to the slow 
USB HDD, the main memory is utilised to the maximum for the duration of 
the process.


This raises the question for me: can I somehow limit this kind of memory 
use on a process basis? Could ulimit help here?


```
vhost$ ulimit -a
time(cpu-seconds)    unlimited
file(blocks)         unlimited
coredump(blocks)     unlimited
data(kbytes)         262144
stack(kbytes)        4096
lockedmem(kbytes)    2565180
memory(kbytes)       7695540
nofiles(descriptors) 1024
processes            1024
threads              1024
vmemory(kbytes)      unlimited
sbsize(bytes)        unlimited
vhost$
```

Unfortunately, I have not found any more detailed descriptions of the 
above parameters. At least "file(blocks)" reads promisingly...



Kind regards
Matthias

[1] https://www.netbsd.org/docs/internals/en/chap-memory.html


Re: Tuning ZFS memory usage on NetBSD - call for advice

2022-09-03 Thread Matthias Petermann

Hi Lloyd,

On 31.08.22 21:57, Lloyd Parkes wrote:

It might not be ZFS related. But it could be.

Someone else reported excessive, ongoing, increasing "File" usage a 
while back and I was somewhat dismissive because they were running a 
truckload of apps at the same time (not in VMs).


I did manage to reproduce his problem on an empty non-ZFS NetBSD system, 
so there is definitely something going on where "File" pages are not 
getting reclaimed when there is pressure on the memory system.


I haven't got around to looking into it any deeper though.

BTW the test was to copy a single large file (>1TB?) from SATA storage 
to USB storage. Since the file is held open for the duration of the copy 
(I used dd IIRC) this might end up exercising many of the same code 
paths as a VM accessing a disk image.


Cheers,
Lloyd



Thanks for your assessment. You might be right - the huge increase of 
"File" was noticed while I was writing with "zfs send" to an FFS on a 
USB disk (sd0).


Yesterday I tried the same again, but this time the destination was a 
socket (via netcat). The amount of "File" remained stable and there was 
no significant swapping activity.


It looks like I had jumped to conclusions here.

I will observe this further...

Kind regards
Matthias





Tuning ZFS memory usage on NetBSD - call for advice

2022-08-31 Thread Matthias Petermann

Hello all,

the section "Memory usage" in [1] describes the requirements ZFS has 
for memory.


It further mentions that the tunables that exist in FreeBSD do not 
exist in NetBSD. Especially for the size of the ARC, there seems to be 
no limit on NetBSD (quoting [2]):


"vfs.zfs.arc_max - Upper size of the ARC. The default is all RAM but 1 
GB, or 5/8 of all RAM, whichever is more. Use a lower value if the 
system runs any other daemons or processes that may require memory. 
Adjust this value at runtime with sysctl(8) and set it in 
/boot/loader.conf or /etc/sysctl.conf."
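
For comparison, this is what the knob looks like on FreeBSD, derived 
from the quoted text - the value is only an example, and as far as I 
can tell there is no NetBSD equivalent:

```
# FreeBSD only, shown for contrast -- /boot/loader.conf
vfs.zfs.arc_max="2G"
```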


So far so good... My concrete case is that I use ZFS ZVOLs as backing 
storage for virtual machines (Qemu/nvmm). The host has 8192 MB RAM 
available, of which the following is allocated to the VMs:


* net (512 MB RAM)
* iot (1024 MB RAM)
* mail (512 MB RAM)
* app (2048 MB RAM)

This should leave 4096 MB for the host - while ZFS would claim 5120 MB 
(5/8) of the available RAM. On this host, after a while, the value under 
"File" in top increases to over 3 GB, with the consequence that the 
system starts swapping.


This raises the following questions for me:

- how can I investigate the composition of the amount of memory 
displayed under "File" in top more precisely?


- are there any hidden tuning possibilities for ZFS in NetBSD (maybe 
boot parameters etc.) or compile-time settings?


- what kind of memory can basically be swapped out? Only memory of the 
processes (e.g. RAM of the Qemu VMs) or also parts of the ZFS ARC?


- Which value does ZFS use to determine the ARC upper limit? The 
physical RAM or the physical RAM + swap? Background to the question: in 
my example, would I perhaps be better off disabling swap?


Kind regards
Matthias

Btw, sorry for the cross-posting. The host is running NetBSD 9.3_STABLE; 
however, the topic seems relevant for current as well, and I would have 
the possibility to test or compare on current.



[1] https://wiki.netbsd.org/zfs/
[2] https://docs.freebsd.org/en/books/handbook/zfs/#zfs-advanced


Re: Qemu storage performance drops when smp > 1 (NetBSD 9.3 + Qemu/nvmm + ZVOL)

2022-08-17 Thread Matthias Petermann

Hello Brian,

On 18.08.22 07:10, Brian Buhrow wrote:

hello.  that's interesting.  Do the cores used for the vms also get 
used for the host os?
Can you arrange things so that the host os gets dedicated cores that the vms 
can't use?  If you
do that, do you still see a performance drop when you add cores to the vms?


the host OS has no other purpose than managing the VMs at this time. So 
apart from the minimal footprint of the host's processes (postfix, 
syslog...), all resources are available to the Qemu processes.


In the meantime, because of the other mail in this thread, I believe my 
fundamental misunderstanding was that I could overcommit a physical 
2-core CPU with a multiple of virtual cores without penalty. I'm 
starting to see that now - I wasn't aware that the context switches are 
so expensive that they can paralyze the whole I/O system.


Kind regards
Matthias





Re: Qemu storage performance drops when smp > 1 (NetBSD 9.3 + Qemu/nvmm + ZVOL)

2022-08-17 Thread Matthias Petermann

Hello Brian,

On 17.08.22 20:51, Brian Buhrow wrote:

hello.  If you want to use zfs for your storage, which I strongly recommend, lose the
zvols and use flat files inside zfs itself.  I think you'll find your storage performance goes
up by orders of magnitude.  I struggled with this on FreeBSD for over a year before I found the
myriad of tickets on google regarding the terrible performance of zvols.  It's a real shame,
because zvols are such a tidy way to manage virtual servers.  However, the performance penalty
is just too big to ignore.
-thanks
-Brian



thank you for your suggestion. I have researched the ZVOL vs. QCOW2 
discussion. Unfortunately, nothing can be found in connection with 
NetBSD, but there is some material on Linux and KVM. What I found 
attests ZVOLs at least a slight performance advantage. That people 
ultimately decide for QCOW2 seems to be mainly because the VM can be 
paused when the underlying storage fills up, instead of crashing as with 
a ZVOL. However, this situation can also be prevented with monitoring 
and regular snapshots.


Nevertheless, I made a practical attempt and rebuilt my described test 
scenario exactly, with QCOW2 files located in one and the same ZFS 
dataset. However, the result is almost the same.


If I give the Qemu processes only one core via parameter -smp, I can 
measure a very good I/O bandwidth on the host - depending on the number 
of running VMs it even increases significantly, so that the limiting 
factor here seems to be only the single-thread performance of a CPU core:


- VM 1 with 1 SMP Core: ~200 MByte/s
- + VM2 with 1 SMP Core:  ~300 MByte/s
- + VM3 with 1 SMP Core:  ~500 MByte/s

As with my first test, performance is dramatically worse when I give 
each VM 2 cores instead of 1:


- VM 1 with 2 SMP Cores: ~30...40 MByte/s
- + VM2 with 2 SMP Cores:  < 1 MByte/s
- + VM3 with 2 SMP Cores:  < 1 MByte/s

Is there any logical explanation for this drastic drop in performance?

Kind regards
Matthias





Qemu storage performance drops when smp > 1 (NetBSD 9.3 + Qemu/nvmm + ZVOL)

2022-08-17 Thread Matthias Petermann

Hello,

I'm trying to find the cause of a performance problem and don't really 
know how to proceed.



## Test Setup

Given a host (Intel NUC7CJYHN, 2 physical cores, 8 GB RAM, 500 GB SSD) 
with a fresh NetBSD/amd64 9.3_STABLE. The SSD contains ESP, an FFS root 
partition and swap, and a large ZPOOL.


The host is to be used as a virtual host for VMs. For this, VMs are run 
with Qemu 7.0.0 (from pkgsrc 2022Q2) and nvmm. The VMs also run NetBSD 
9.3. Storage is provided by ZVOLs through virtio.


Before explaining the issue I face, here are some rounded numbers 
showing the performance of the host OS (sampled with iostat):


1) Starting one writer

```
# dd if=/dev/zero of=/dev/zvol/rdsk/tank/vol/test1 bs=4m &
```

---> ~ 200 MByte/s

2) Adding another writer

```
# dd if=/dev/zero of=/dev/zvol/rdsk/tank/vol/test2 bs=4m &
```

---> ~ 300 MByte/s

3) Adding another writer

```
# dd if=/dev/zero of=/dev/zvol/rdsk/tank/vol/test3 bs=4m &
```

---> ~ 500 MByte/s

From my understanding, this represents the write performance I can 
expect from my hardware when I write raw data in parallel to discrete 
ZVOLs located on the same physical storage (SSD).


This picture changes completely when Qemu comes into play. I installed 
a basic NetBSD 9.3 on each of the ZVOLs (standard layout with FFSv2 + 
WAPBL) and operate them with this Qemu command:


```
qemu-system-x86_64 -machine pc-q35-7.0 -smp $VM_CORES -m $VM_RAM -accel nvmm \
	-k de -boot cd \
	-machine graphics=off -display none -vga none \
	-object rng-random,filename=/dev/urandom,id=viornd0 \
	-device virtio-rng-pci,rng=viornd0 \
	-object iothread,id=t0 \
	-device virtio-blk-pci,drive=hd0,iothread=t0 \
	-device virtio-net-pci,netdev=vioif0,mac=$VM_MAC \
	-chardev socket,id=monitor,path=$MONITOR_SOCKET,server=on,wait=off \
	-monitor chardev:monitor \
	-chardev socket,id=serial0,path=$CONSOLE_SOCKET,server=on,wait=off \
	-serial chardev:serial0 \
	-pidfile /tmp/$VM_ID.pid \
	-cdrom $VM_CDROM_IMAGE \
	-drive file=$VM_HDD_VOLUME,if=none,id=hd0,format=raw \
	-netdev tap,id=vioif0,ifname=$VM_NETIF,script=no,downscript=no \
	-device virtio-balloon-pci,id=balloon0
```

The command already includes the following optimizations:

 - use the virtio driver instead of an emulated SCSI device
 - use a separate I/O thread for block device access


## Test Case 1

The environment is set for this test:

 - VM_CORES: 1
 - VM_RAM: 256
 - VM_HDD_VOLUME (e.g. /dev/zvol/rdsk/tank/vol/test3); each VM has its 
dedicated ZVOL


My test case is the following:

0) Launch iostat -c on the Host and monitor continuously
1) Launch 3 instances of the VM configuration (vm1, vm2, vm3)
2) SSH into vm1
3) Issue dd if=/dev/zero of=/root/test.img bs=4m
   Observation: iostat on Host shows ~140 MByte /s
4) SSH into vm2
5) Issue dd if=/dev/zero of=/root/test.img bs=4m
   Observation: iostat on Host shows ~180 MByte /s
6) SSH into vm3
7) Issue dd if=/dev/zero of=/root/test.img bs=4m
   Observation: iostat on Host shows ~220 MByte /s

Intermediate summary:

 - pretty good results :-)
 - with each additional writer, the bandwidth utilization rises


## Test Case 2

The environment is modified for this test:

 - VM_CORES: 2

The same test case yields completely different results:

0) Launch iostat -c on the Host and monitor continuously
1) Launch 3 instances of the VM configuration (vm1, vm2, vm3)
2) SSH into vm1
3) Issue dd if=/dev/zero of=/root/test.img bs=4m
   Observation: iostat on Host shows ~ 30 MByte /s
4) SSH into vm2
5) Issue dd if=/dev/zero of=/root/test.img bs=4m
   Observation: iostat on Host shows ~ 3 MByte /s
6) SSH into vm3
7) Issue dd if=/dev/zero of=/root/test.img bs=4m
   Observation: iostat on Host shows < 1 MByte /s

Intermediate summary:

 - unexpectedly bad performance - even with one writer, performance is
   far below the values seen with only one core per VM
 - bandwidth drops dramatically with each additional writer

## Summary and Questions

 - adding more cores to Qemu seems to considerably impact disk I/O 
performance

 - Is this expected / known behavior?
 - What could I do to mitigate this or help find the root cause?

By the way - apart from this hopefully solvable problem, I am surprised 
how well the combination of NetBSD, ZVOL, Qemu and NVMM works.



Kind regards
Matthias





Re: NetBSD 9.2 installer can't detect disk of some Hetzner VPSes

2022-08-15 Thread Matthias Petermann

Hello Robert,

Am 15.08.2022 um 13:35 schrieb Robert Swindells:

If "qemu -machine q35" fails to boot NetBSD in the same way then it will
be a lot easier to resolve this.


unfortunately not... at least with machine type pc-q35-7.0, both NetBSD 
9.3 and a current 9.99.99 boot for me.


Besides this, the basic problem of this thread ("can't detect disk") 
seems to be solved in NetBSD 9.99.99 - at least I get hints in the 
kernel messages on the Hetzner platform that storage and network were 
detected (up to the wedges). Currently, booting still aborts because no 
console is found.


One guess is that it is due to the missing viocon driver. Taylor has 
ported the viocon driver from OpenBSD, and part of it has been in 
current for a few days. I'm helping Taylor test it. Unfortunately, the 
workflow is quite time-consuming - I always have to build a complete 
image and write it raw to the disk at Hetzner via the rescue system.


During the last test I took screenshots of the dmesg (from the kernel 
debugger) of the Hetzner VM. If you want, I can mail them to you.


Kind regards
Matthias


Re: Virtio Viocon driver - possible to backport from OpenBSD?

2022-08-09 Thread Matthias Petermann

Hello Reinoud,

On 06.08.22 20:07, Reinoud Zandijk wrote:


I always use `serial` console for my Qemu hacking but if some cloud
environments rather have viocon's it seems like a sound idea. AFAIK its not
that hard and was on my TODO list when I worked on virtio but it got
sidetracked by other work.

I am currently working on something completely different but might take a
peek but feel free to try :)


Thanks for your response. To add a bit of context: there was a recent 
discussion[1] on current-users where a German cloud provider switched 
from VGA to viocon for some of their offerings. They emulate the com 
interface too, but I am not aware of whether and how it can be accessed 
from the management shell. The assumption is that it is simply not 
possible. As there is no way to access the serial console, I assume 
viocon would be the only way to make use of these cloud VPSes.


Kind regards
Matthias


[1] http://mail-index.netbsd.org/current-users/2022/07/20/msg042701.html





Virtio Viocon driver - possible to backport from OpenBSD?

2022-08-04 Thread Matthias Petermann
Hello,

according:

https://man.openbsd.org/virtio.4

the OpenBSD virtio driver has its origin in NetBSD. Viocon support was 
added later and has not been ported back yet. I am wondering how much 
effort it would take to merge it from

https://cvsweb.openbsd.org/src/sys/dev/pv/viocon.c?rev=1.8&content-type=text/x-cvsweb-markup

This would help to run NetBSD on Qemu without VGA emulation, which seems 
to be becoming the default for some cloud environments. 

Kind regards
Matthias

Re: NetBSD 9.2 installer can't detect disk of some Hetzner VPSes

2022-07-20 Thread Matthias Petermann

Hello all,

I was able to gather a little more information now. For me, at the 
moment, it doesn't look like the configuration of the virtualized 
hardware is completely "random". It rather seems to depend on the 
baseline one chooses when ordering the VPS - see the FAQ[1]:


"What hardware do my servers run on?

	The CX line Hetzner Cloud servers run on the latest generation of 
Intel® Xeon® CPUs (Skylake) with ECC RAM. We also have a line of cloud 
servers, the CPX line, which are based on AMD 2nd generation EPYC CPUs. 
And there are also models (the CCX line) that have dedicated vCPUs 
(Intel® Xeon® and AMD EPYC). For local storage, we use NVMe SSDs."


Depending on the selection, one gets a system with Intel Xeon (Skylake) 
or AMD 2nd Gen EPYC.


At the moment, I only encounter the described problem when I select the 
AMD variant. Mayuresh, can you confirm this?


Kind regards
Matthias


[1] https://docs.hetzner.com/cloud/technical-details/faq






Re: NetBSD 9.2 installer can't detect disk of some Hetzner VPSes

2022-07-20 Thread Matthias Petermann

Hi Robert,

On 20.07.22 13:43, Robert Elz wrote:

What kind of console interface does that setup give you?   Emulated
serial port?   Emulated graphics interface?

>
> One of the virtio devices (1043) is described in pcidevs as a virtio console
> but it doesn't look like we have any kind of driver for that one (whatever
> that actually means).   If their setup emulates some kind of standard
> com port (serial) or vga, then it should be possible to attach to that,
> but the boot code would need to tell the kernel which of those to use.
>
> You'd probably do better asking on current-users (or perhaps tech-kern
> but just pick one of those) than netbsd-users for this kind of info.
>
> kre

at boot stage, the VPS is accessed via a web-based virtual console 
which looks like an emulated VGA display from Qemu. Later in the dmesg, 
no emulated graphics adapter is found. There is a device "virtio2" which 
seems to identify as a console device:


```
[] virtio2 at pci3 dev 0 function 0
[] virtio2: console device (rev. 0x01)
[] virtio2: autoconfiguration error: no matching child driver; not configured

```

And there is also an emulated com0 interface present, which seems to be 
useless as I cannot access it from outside.


As a test, I switched off the virtio module via userconf and booted. 
The boot then comes at least up to the prompt for the root file system 
device (of course, since no virtio exists any more, the block devices 
are no longer recognized either).
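
For the record, the test amounted to roughly this at the boot prompt - 
the exact userconf syntax for matching all virtio instances may differ, 
see userconf(4):

```
> boot -c
uc> disable virtio
uc> quit
```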


I can't really explain why the boot loader and the kernel messages can 
write to the virtual console, but later in the run no corresponding 
graphics adapter is found...


Kind regards
Matthias





"zfs send" freezes system (was: Re: pgdaemon high CPU consumption)

2022-07-18 Thread Matthias Petermann

Hello,

On 13.07.22 12:30, Matthias Petermann wrote:

I can now confirm that reverting the patch also solved my problem. Of 
course I first fell into the trap, because I had not considered that the 
ZFS code is loaded as a module and had only changed the kernel. As a 
result, it looked at first as if this would not help. Finally it did...I 
am now glad that I can use a zfs send again in this way. This previously 
led reproducibly to a crash, whereby I could not make backups. This is 
critical for me and I would like to support tests regarding this.


In contrast to the PR, there are hardly any xcalls in my use case - 
however, my system only has 4 CPU cores, 2 of which are physical.



Many greetings
Matthias



About one week after removing the patch, my system with ZFS is behaving 
"normally" for the most part, and the freezes have disappeared. What is 
the recommended way forward, given the netbsd-10 branch? If it is not 
foreseeable that the underlying problem can be solved soon, would it 
also be an option to withdraw the patch in the sources, to get at least 
stable behavior? As an aside (but not only), I would still be interested 
in whether this "zfs send" problem occurs in general, or whether certain 
hardware configurations favour it.


Kind regards
Matthias





Re: pgdaemon high CPU consumption

2022-07-13 Thread Matthias Petermann

Hello,

On 10.07.22 19:14, Matthias Petermann wrote:
thanks for this reference... it matches pretty much my observations. I 
did a lot of attempts to tune maxvnodes during the last days, but the 
pgdaemon issue remained. Ultimately I suspect it is also responsible for 
the reproducible system lock-ups during ZFS send.


I am about to revert the patch from the PR above on my system and re-try.

Kind regards
Matthias



I can now confirm that reverting the patch also solved my problem. Of 
course I first fell into the trap of having only changed the kernel, not 
considering that the ZFS code is loaded as a module. As a result, it 
looked at first as if this would not help. Finally it did... I am now 
glad that I can use zfs send this way again. Previously it reproducibly 
led to a crash, which meant I could not make backups. This is critical 
for me, and I would be happy to support tests regarding this.


In contrast to the PR, there are hardly any xcalls in my use case - 
however, my system only has 4 CPU cores, 2 of which are physical.



Many greetings
Matthias





Re: pgdaemon high CPU consumption

2022-07-10 Thread Matthias Petermann

Hello Frank,

On 01.07.22 14:07, Frank Kardel wrote:

Hi Matthias !

See PR 55707 
http://gnats.netbsd.org/cgi-bin/query-pr-single.pl?number=55707 , which 
I do not considere fixed due to the pgdaemon issue. reverting arc.cto 
1.20 will give you many xcalls, but the system stays more usable.


Frank



thanks for this reference... it matches my observations pretty well. I 
made a lot of attempts to tune maxvnodes during the last days, but the 
pgdaemon issue remained. Ultimately I suspect it is also responsible for 
the reproducible system lock-ups during zfs send.
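
For reference, those attempts were of this form - the value is just one 
example of several I tried:

```
# lower the vnode cache ceiling at runtime
sysctl -w kern.maxvnodes=100000
```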


I am about to revert the patch from the PR above on my system and re-try.

Kind regards
Matthias





Re: pgdaemon high CPU consumption

2022-07-03 Thread Matthias Petermann

Hello,

On 01.07.22 12:48, Brad Spencer wrote:

"J. Hannken-Illjes"  writes:


On 1. Jul 2022, at 07:55, Matthias Petermann  wrote:

Good day,

since some time I noticed that on several of my systems with NetBSD/amd64 
9.99.97/98 after longer usage the kernel process pgdaemon completely claims a 
CPU core for itself, i.e. constantly consumes 100%.
The affected systems do not have a shortage of RAM and the problem does not 
disappear even if all workloads are stopped, and thus no RAM is actually used 
by application processes.

I noticed this especially in connection with accesses to the ZFS set up on the 
respective machines - for example after checkout from the local CVS relic 
hosted on ZFS.

Is there already a known problem or what information would have to be collected 
to get to the bottom of this?

I currently have such a case online, so I would be happy to pull diagnostic 
information this evening/afternoon. At the moment all info I have is from top.

Normal view:

```
  PID USERNAME PRI NICE   SIZE   RES STATE      TIME   WCPU    CPU COMMAND
    0 root     126    0     0K   34M CPU/0    102:45   100%   100% [system]
```

Thread view:


```
  PID   LID USERNAME PRI STATE      TIME   WCPU    CPU NAME      COMMAND
    0   173 root     126 CPU/1     96:57 98.93% 98.93% pgdaemon  [system]
```


Looks a lot like kern/55707: ZFS seems to trigger a lot of xcalls

Last action proposed was to back out the patch ...

--
J. Hannken-Illjes - hann...@mailbox.org



Probably only a slightly related data point, but Ya, if you have a
system / VM / Xen PV that does not have a whole lot of RAM and if you
don't back out that patch your system will become unusable in a very
short order if you do much at all with ZFS (tested with a recent
-current building pkgsrc packages on a Xen PVHVM).  The patch does fix a
real bug, as NetBSD doesn't have the define that it uses, but the effect
of running that code will be needed if you use ZFS at all on a "low" RAM
system.  I personally suspect that the ZFS ARC or some pool is allowed
to consume nearly all available "something" (pools, RAM, etc..) without
limit but have no specific proof (or there is a leak somewhere).  I
mostly run 9.x ZFS right now (which may have other problems), and have
been setting maxvnodes way down for some time.  If I don't do that the
Xen PV will hang itself up after a couple of 'build.sh release' runs
when the source and build artifacts are on ZFS filesets.


Thanks for describing this use case. Apart from the fact that I don't 
currently use Xen on the affected machine, it performs a similar 
workload. I use it as a pbulk builder, with distfiles, build artifacts 
and a CVS / Git mirror stored on ZFS. The builders themselves are 
located in chroot sandboxes on FFS. Anyway, I can trigger the 
observations by doing a NetBSD src checkout from the ZFS-backed CVS to 
the FFS partition.


The maxvnodes trick first led to pgdaemon behaving normally again, but 
the system froze shortly after with no further evidence.


I am not sure if this thread is the right one for pointing this out, 
but I experienced further issues with NetBSD current and ZFS when I 
tried to perform a recursive "zfs send" of a particular snapshot of my 
data sets. Although it initially works, the system freezes after a 
couple of seconds with no chance to recover (I could not even enter the 
kernel debugger). I will come back once I have prepared a dedicated 
test VM for these cases.


Kind regards
Matthias





pgdaemon high CPU consumption

2022-06-30 Thread Matthias Petermann

Good day,

for some time I have noticed that on several of my systems with 
NetBSD/amd64 9.99.97/98, after longer uptime, the kernel process 
pgdaemon completely claims a CPU core for itself, i.e. constantly 
consumes 100%. The affected systems do not have a shortage of RAM, and 
the problem does not disappear even if all workloads are stopped and 
thus no RAM is actually used by application processes.


I noticed this especially in connection with accesses to the ZFS file 
systems set up on the respective machines - for example after a 
checkout from the local CVS repository hosted on ZFS.


Is there already a known problem or what information would have to be 
collected to get to the bottom of this?


I currently have such a case online, so I would be happy to pull 
diagnostic information this evening/afternoon. At the moment all info I 
have is from top.


Normal view:

```
  PID USERNAME PRI NICE   SIZE   RES STATE      TIME   WCPU    CPU COMMAND
    0 root     126    0     0K   34M CPU/0    102:45   100%   100% [system]

```

Thread view:


```
  PID   LID USERNAME PRI STATE      TIME   WCPU    CPU NAME      COMMAND
    0   173 root     126 CPU/1     96:57 98.93% 98.93% pgdaemon  [system]
```

Kind regards
Matthias





Re: NetBSD Xen guest freezes system + vif MAC address confusion (NetBSD 9.99.97 / Xen 4.15.2)

2022-06-05 Thread Matthias Petermann



Hi,

Am 05.06.2022 um 14:49 schrieb Matthias Petermann:
When shutting down the DomU, the whole system still hangs. If I 
understood your mail from 30.05.2022 (HEAD UP - NetBSD 9.99.x dom0 needs 
a Xen tool patch) correctly, I need new Xen tools or a manual patch for 
this part of the problem. I will try the latter today.



Just as a small addition to my mail from yesterday - the patch for the 
block script in Xentools works for me too :-)


Many greetings
Matthias


Re: NetBSD Xen guest freezes system + vif MAC address confusion (NetBSD 9.99.97 / Xen 4.15.2)

2022-06-05 Thread Matthias Petermann



Hello Manuel,

Am 27.05.2022 um 20:39 schrieb Manuel Bouyer:

On Fri, May 27, 2022 at 02:06:59PM +0200, Matthias Petermann wrote:

Anyway, Once I try to "xl console" I did only get a fragment:

```
ganymed$ doas xl console net
[   1.000] cpu_rng: rdrand
[   1.000] entropy: ready
[   1.000] Copyright (c) 1996, 1997, 1998, 1999,
```

At the "1999," the Dom0 became frozen, again.


A recent change caused xenconsoled to hang, and possibly xenstore to
miss events too. Should be fixed with
src/sys/arch/xen/xen/xenevt.c 1.65

But the hang on the filesystem remains for me.



Today I continued my tests and reinstalled NetBSD 9.99.97 (built from 
the sources of 30.05.2022). The Xen tools and Xen kernel are still the 
same as in my initial case (from 29.04.2022).


Differences from the build of 25.05.2022:

- I can now start a DomU without any problems and install e.g. NetBSD 
in it. This had previously led to a freeze of the system, so this part 
of the issue seems resolved to me.


When shutting down the DomU, the whole system still hangs. If I 
understood your mail from 30.05.2022 (HEAD UP - NetBSD 9.99.x dom0 needs 
a Xen tool patch) correctly, I need new Xen tools or a manual patch for 
this part of the problem. I will try the latter today.


Thanks for all the effort you guys put into supporting Xen on NetBSD so 
well!


Many greetings
Matthias


Re: boot.cfg syntax question

2022-06-05 Thread Matthias Petermann

Hello,

thanks for bringing this up. I just wanted to add another data point here:

https://mail-index.netbsd.org/netbsd-users/2021/02/03/msg026523.html

To me it looks like the same issue - nice to read that this has been 
solved with the patch, and I'm looking forward to testing it.


Kind regards
Matthias

Am 05.06.2022 um 08:59 schrieb RVP:

On Sun, 5 Jun 2022, Thomas Klausner wrote:


Did I misunderstand the man page or is there a bug here?



Alrighty, this patch fixes it for me (also fixes PR# 53128):

---START---
diff -urN sys/arch/i386/stand/efiboot.orig/boot.c 
sys/arch/i386/stand/efiboot/boot.c
--- sys/arch/i386/stand/efiboot.orig/boot.c    2021-09-07 
11:41:31.0 +
+++ sys/arch/i386/stand/efiboot/boot.c    2022-06-05 06:50:39.139514564 
+

@@ -453,8 +453,10 @@
  } else {
  int i;

+#if 0
  if (howto == 0)
  bootdefault();
+#endif
  for (i = 0; i < NUMNAMES; i++) {
  bootit(names[i][0], howto);
  bootit(names[i][1], howto);
---END---

-RVP


Re: NetBSD Xen guest freezes system + vif MAC address confusion (NetBSD 9.99.97 / Xen 4.15.2)

2022-05-27 Thread Matthias Petermann

Hello Jürgen,

Am 27.05.2022 um 14:14 schrieb J. Hannken-Illjes:

Stack trace of thread vnconfig (1239) and from ddb "call fstrans_dump"
should give even more details.


here is the stacktrace from the vnconfig process (the PID has changed 
since I restarted):


https://www.petermann-it.de/tmp/p7.jpg

You can find the output of fstrans_dump here:

https://www.petermann-it.de/tmp/p8.jpg

I hope this helps a bit in troubleshooting.

Kind regards
Matthias


Re: NetBSD Xen guest freezes system + vif MAC address confusion (NetBSD 9.99.97 / Xen 4.15.2)

2022-05-27 Thread Matthias Petermann

Hi Manuel,

Am 27.05.2022 um 12:14 schrieb Manuel Bouyer:

Paginated processes list:

https://www.petermann-it.de/tmp/p1.jpg
https://www.petermann-it.de/tmp/p2.jpg
https://www.petermann-it.de/tmp/p3.jpg

several processes in fstchg wait, a stack trace of these processes
(tr/t 0t or tr/a 0x would show theses) would help.

So it looks like a deadlock in the filesystem. What is your storage
configuration ?



Thanks for your advice - I did another series of screenshots and 
prepared the relevant information here:


https://www.petermann-it.de/tmp/p6.png

My storage configuration this time is nothing out of the ordinary:

```
wd0 (GPT)
 |
 '-- dk0 (NAME:root, FFSv2 with log, contains the root filesystem)
 '---dk1 (NAME:swap)
 '---dk2 (NAME:data, FFSv2 with log, contains VND-Images)
  |
  '-- net.img (16 GB   sparse file image)
  '-- net-export.img  (500 GB  sparse file image)
```

Since you bring up the deadlock / file system assumption - I did an 
additional test right away. My original test case uses both CPU cores 
in Dom0. The modified test boots Dom0 with "dom0_max_vcpus=1 
dom0_vcpus_pin" so that only one core is available. With only one core 
in the Dom0, at least the VM is instantiated: the "xl create" command 
comes back as expected, and the Dom0 stays responsive for a little 
while (in contrast to the original test - I was now able to perform "xl 
list" and did see the VM). Anyway, once I try "xl console", I only get 
a fragment:


```
ganymed$ doas xl console net
[   1.000] cpu_rng: rdrand
[   1.000] entropy: ready
[   1.000] Copyright (c) 1996, 1997, 1998, 1999,
```

At the "1999," the Dom0 became frozen, again.

Kind regards
Matthias



Re: NetBSD Xen guest freezes system + vif MAC address confusion (NetBSD 9.99.97 / Xen 4.15.2)

2022-05-27 Thread Matthias Petermann



Hi Manuel,

Am 27.05.2022 um 11:14 schrieb Manuel Bouyer:


did you create the bridge0 ?



Yes, it exists:

```
ganymed$ brconfig bridge0
bridge0: flags=41<UP,RUNNING>
Configuration:
priority 32768 hellotime 2 fwddelay 15 maxage 20
ipfilter disabled flags 0x0
Interfaces:
        re0 flags=3<LEARNING,DISCOVER>
port 1 priority 128
Address cache (max cache: 100, timeout: 1200):
```


After the messages appear on the system console, the system does not respond
to any input either via SSH or on the local console. It seems to be frozen.
I can still activate the kernel debugger with Control+Alt+Escape.


Can you get a stack trace, and processes list ?



I took some "screenshots" of the vga console. This was unfortunately the 
only way because the device has no serial console.



Paginated processes list:

https://www.petermann-it.de/tmp/p1.jpg
https://www.petermann-it.de/tmp/p2.jpg
https://www.petermann-it.de/tmp/p3.jpg
https://www.petermann-it.de/tmp/p4.jpg

Output of "trace":

https://www.petermann-it.de/tmp/p5.jpg

Kind regards
Matthias


NetBSD Xen guest freezes system + vif MAC address confusion (NetBSD 9.99.97 / Xen 4.15.2)

2022-05-27 Thread Matthias Petermann



Hello all,

currently I am not able to instantiate a NetBSD Xen guest on NetBSD 9.99 
(side fact: I also have problems with a Windows guest, but it is not 
that important at the moment).


The problem occurs in the following environment:

 - Xen Kernel 4.15.2 and matching Xen Tools from pkgsrc 2022Q1 (built 
29.04.2022)

 - NetBSD/Xen 9.99.97 (build 25.05.2022)

The host is booted with this boot.cfg (if this matters):

```
menu=Boot Xen:load /netbsd-XEN3_DOM0.gz console=pc;multiboot /xen.gz dom0_mem=512M vga=keep console=vga
```

The guest config looks like this:

```
name = "net"
type="pv"
kernel = "/netbsd-INSTALL_XEN3_DOMU.gz"
#kernel = "/netbsd-XEN3_DOMU.gz"
memory = 2048
vcpus = 2
vif = [ 'mac=00:16:3E:01:00:01,bridge=bridge0' ]
disk = [
   'file:/data/vhd/net.img,hda,rw',
   'file:/data/vhd/net-export.img,hdb,rw'
]
```

When I try to instantiate the guest, I get the following output on the 
controlling terminal:


```
ganymed$ doas xl create net
Parsing config from net
libxl: error: libxl_device.c:1109:device_backend_callback: Domain 1:unable to add device with path /local/domain/0/backend/vif/1/0
libxl: error: libxl_create.c:1862:domcreate_attach_devices: Domain 1:unable to add vif devices

```

At the same time the following message appears on the system console:

```
[   184.680057] xbd backend: attach device vnd0d (size 1048576000) for domain 1
[   184.910057] xbd backend: attach device vnd1d (size 33554432) for domain 1
[   195.260077] xvif1i0: Ethernet address 00:16:3e:02:00:01
[   195.320059] xbd backend: detach device vnd1d for domain 1
[   195.350051] xbd backend: detach device vnd0d for domain 1
[   195.450054] xvif1i0: disconnecting
```

After the messages appear on the system console, the system does not 
respond to any input either via SSH or on the local console. It seems to 
be frozen. I can still activate the kernel debugger with 
Control+Alt+Escape.


What surprises me: the 4th octet of the MAC address in the system log 
seems to be 1 higher than specified in the guest configuration 
(00:16:3e:02:00:01 instead of 00:16:3E:01:00:01). I have checked this 
again because I initially assumed a configuration error. Is this 
somehow explainable, or might this already be an indication of the 
root cause?
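
(In case it helps with the diagnosis: the path from the libxl error 
message can be inspected directly in xenstore - a sketch, assuming the 
xenstore-ls utility from the Xen tools:

```
ganymed$ doas xenstore-ls -f /local/domain/0/backend/vif
```

This should show which vif backend entries libxl managed to write 
before giving up.)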


Kind regards
Matthias


Re: WDCTL_RST failed for drive 0 / wd0: IDENTIFY failed (SATA autodetection issue after installation)

2022-05-24 Thread Matthias Petermann

Hi Rin,

thank you for your quick response. First, I can confirm that the 
controller installed in the system is ahcisata(4). I have two different 
model variants where the problem occurs - on one very reliably at every 
boot, and on the other almost after every cold start (and only two of 
four disks affected on the latter). I will build and test the kernel 
with AHCISATA_EXTRA_DELAY and give feedback in a timely manner.
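
For the archives, a minimal sketch of how I plan to build such a test 
kernel - the config name GENERIC_AHCIDELAY is made up here:

```
$ cd /usr/src/sys/arch/amd64/conf
$ cp GENERIC GENERIC_AHCIDELAY
$ echo 'options AHCISATA_EXTRA_DELAY' >> GENERIC_AHCIDELAY
$ cd /usr/src
$ ./build.sh -U -u -j4 tools kernel=GENERIC_AHCIDELAY
```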


Many greetings
Matthias

On 24.05.2022 18:23, Rin Okuyama wrote:

Hi,

The recent change for probe timing should only affect ahcisata(4).
Is your SATA controller ahcisata(4)? If so,

(1) please try kernel built with:

---
options AHCISATA_EXTRA_DELAY
---

If it works around the problem,

(2) please send us full dmesg of your machine.

Then, we can add your controller to the quirk list. Once it is
registered in the list, the AHCISATA_EXTRA_DELAY option is no longer
required.

Thanks,
rin

On 2022/05/25 0:49, Matthias Petermann wrote:
A small addendum: disabling the Intel Platform Trust technology in the 
BIOS did not help me (I had read this suggestion in another post of the 
linked thread).


However, by plugging in an additional USB device (a mouse) I apparently 
introduced the delay the disk needed to complete the WDCTL_RST without 
errors. This "workaround" is a shaky one though - an extremely close 
call. I don't even want to think about what I would do on a production 
server if this happened to me on a reboot.


Kind regards
Matthias


On 24.05.2022 17:31, Matthias Petermann wrote:


Hello all,

with one of the newer builds of 9.99 (unfortunately I can't narrow it 
down more) I have a problem on a NUC5 with a Seagate Firecuda SATA 
hard drive (hybrid HDD/SSD).


As long as I boot from the USB stick (for installation, as well as 
later for booting the kernel with root redirected to the wd0) the 
hard drive wd0 is recognized correctly and works without problems.


When I boot directly from the wd0 hard drive, I get through the boot 
loader fine, which also still loads the kernel correctly into memory. 
However, when running the initialization or hardware detection, there 
is then a problem with the initialization of wd0:


```
WDCTL_RST failed for drive 0
wd0: IDENTIFY failed
```

The error pattern does not seem to be rare; probably the closest 
match is this post:


http://mail-index.netbsd.org/current-users/2022/03/01/msg042073.html

Recent changes to the SATA autodetection timing are mentioned there. 
This would fit my experience, since I had the problem neither with 
9.1 (build from 02/16/2021) nor with older 9.99 versions. Does anyone 
know more specifics about this timing thing, as well as known 
workarounds if there are any? I have several NUC5s with exactly this 
model of hard drive running stably for several years - it would be a 
shame if I now have to replace them for such a reason.


Many greetings
Matthias






Re: WDCTL_RST failed for drive 0 / wd0: IDENTIFY failed (SATA autodetection issue after installation)

2022-05-24 Thread Matthias Petermann
A small addendum: disabling the Intel Platform Trust technology in the 
BIOS did not help me (I had read this suggestion in another post of the 
linked thread).


However, by plugging in an additional USB device (a mouse) I apparently 
introduced the delay the disk needed to complete the WDCTL_RST without 
errors. This "workaround" is a shaky one though - an extremely close 
call. I don't even want to think about what I would do on a production 
server if this happened to me on a reboot.


Kind regards
Matthias


On 24.05.2022 17:31, Matthias Petermann wrote:


Hello all,

with one of the newer builds of 9.99 (unfortunately I can't narrow it 
down more) I have a problem on a NUC5 with a Seagate Firecuda SATA hard 
drive (hybrid HDD/SSD).


As long as I boot from the USB stick (for installation, as well as later 
for booting the kernel with root redirected to the wd0) the hard drive 
wd0 is recognized correctly and works without problems.


When I boot directly from the wd0 hard drive, I get through the boot 
loader fine, which also still loads the kernel correctly into memory. 
However, when running the initialization or hardware detection, there is 
then a problem with the initialization of wd0:


```
WDCTL_RST failed for drive 0
wd0: IDENTIFY failed
```

The error pattern does not seem to be rare; probably the closest 
match is this post:


http://mail-index.netbsd.org/current-users/2022/03/01/msg042073.html

Recent changes to the SATA autodetection timing are mentioned there. 
This would fit my experience, since I had the problem neither with 9.1 
(build from 02/16/2021) nor with older 9.99 versions. Does anyone know 
more specifics about this timing thing, as well as known workarounds if 
there are any? I have several NUC5s with exactly this model of hard 
drive running stably for several years - it would be a shame if I now 
have to replace them for such a reason.


Many greetings
Matthias




WDCTL_RST failed for drive 0 / wd0: IDENTIFY failed (SATA autodetection issue after installation)

2022-05-24 Thread Matthias Petermann



Hello all,

with one of the newer builds of 9.99 (unfortunately I can't narrow it 
down more) I have a problem on a NUC5 with a Seagate Firecuda SATA hard 
drive (hybrid HDD/SSD).


As long as I boot from the USB stick (for installation, as well as later 
for booting the kernel with root redirected to the wd0) the hard drive 
wd0 is recognized correctly and works without problems.


When I boot directly from the wd0 hard drive, I get through the boot 
loader fine, which also still loads the kernel correctly into memory. 
However, when running the initialization or hardware detection, there is 
then a problem with the initialization of wd0:


```
WDCTL_RST failed for drive 0
wd0: IDENTIFY failed
```

The error pattern does not seem to be rare; probably the closest 
match is this post:


http://mail-index.netbsd.org/current-users/2022/03/01/msg042073.html

Recent changes to the SATA autodetection timing are mentioned there. 
This would fit my experience, since I had the problem neither with 9.1 
(build from 02/16/2021) nor with older 9.99 versions. Does anyone know 
more specifics about this timing thing, as well as known workarounds if 
there are any? I have several NUC5s with exactly this model of hard 
drive running stably for several years - it would be a shame if I now 
have to replace them for such a reason.


Many greetings
Matthias


Re: Status of NetBSD virtualization roadmap - support jails like features?

2022-04-16 Thread Matthias Petermann



Hi Greg,

On 15.04.2022 20:28, Greg A. Woods wrote:

At Fri, 15 Apr 2022 07:36:15 +0200, Matthias Petermann  
wrote:
Subject: Status of NetBSD virtualization roadmap - support jails like features?


My motivation: I am looking for a particularly high-performance
virtualization solution on NetBSD. Disk and network I/O especially
matter to me.


In my experience nothing beats I/O performance of Xen with LVM in the
dom0 and the best/fastest storage available for the dom0, especially now
there's SMP support for dom0.  That's anecdotal though -- I haven't done
any real comparisons.  I just know that NFS in domUs is a lot slower
than using LVMs via xbd(4), no matter where/how-fast the NFS server is!

If I'm not too far out of touch I think there's still a wee bit more SMP
support needed in the networking code to make it possible for dom0 to
also give the best network throughput, but it's really not horrible as-is.


In theory NVMM with QEMU and virtio(4) should be about the same I would
guess, with potential for some improvement in some micro-benchmarks, but
for production use the maturity and completeness of the provisioning
support offered by Xen still seems far superior to me.



This is interesting to read and also coincides with my experience so 
far. LVM was also my logical choice at first, but I had some 
unexplained crashes with it in connection with FSS (FFS2 snapshots), 
which is why I use VNDs as backing store at the moment. The specific 
setup from back then needs a re-evaluation.


Anyway, the performance difference between LVM and VND was not 
dramatic for me, since network I/O is the limiting factor on the 
specific platform, and the weak CPU (NUC5 with Celeron) could also 
play a role here. In the meantime, I also have other hardware (i3) 
available and will do some comparative tests when the opportunity 
arises, possibly also with NVMM / virtio.



Regardless, I still think it wouldn't hurt
if NetBSD could implement some sort of
jail.


I'm not convinced "jails" (at least in the FreeBSD form I'm most
familiar with) actually buy much without also increasing complexity
and/or introducing limitations on both the provisioning and the
"virtual" side.

With a full virtualisation as in Xen the added complexity is very well
partitioned between the provisioning side and the VMs, and there are
almost no limitations inside the VMs (assuming you are virtualising
something that fits well into a virtualised environment, i.e. with no
special direct hardware access needs) -- everything looks and feels and
is managed almost as if it is running on bare hardware and so the
management of the VM is exactly as if it were running on separate
hardware; except of course some aspects are actually easier to manage,
such as provisioning direct console access and control.  There's really
nothing new to learn other than how to spin up a new domU (and possibly
how to use LVM effectively).


From a layering perspective, this is definitely the cleanest option and 
I fully sympathize with it. However, I also realize that this is only a 
real option on systems that have enough power and are based on an 
architecture supported by Xen. At the moment, those are mostly x86 systems.



However FreeBSD-style jails do offer their own form of flexibility that
seems to be worth having available, and it would be nice for jails to be
available on NetBSD as well.  The impact inside the OS (kernel and
userland) is quite high though, and is itself a form of complexity
nightmare all its own, though perhaps not so horrible as Linux "cgroups"
and some other related Linux kernel namespaces are.


Yes, that's right - in one of the papers, if I remember correctly, the 
effort of finding the many places where security checks for jails have 
to be added was explicitly mentioned. But it was also mentioned that 
NetBSD has already done this work for kauth.


Maybe a full-fledged jail implementation as in FreeBSD is not even 
necessary. I would be interested to see how far one can get here 
without unreasonably large interventions in the existing structures. 
If one needs a separate network stack per virtual instance, resource 
allocation and the greatest possible isolation, Xen is already a very 
good option, and with these requirements in mind suitable hardware 
would usually be selected anyway.


But there are use cases where Xen feels like too much and chroot like 
too little. That's why I would be curious whether a careful extension 
of chroots, with a focus on local security, would be a good approach. 
A start could be to prevent the transmission of signals between 
processes in different chroots. This would have the benefit, for 
example, that you cannot accidentally kill processes in another chroot 
if you use the same user id in different chroots. Would something like 
this be feasible as a secmodel extension, analogous to curtain mode, and also in the 

Re: Status of NetBSD virtualization roadmap - support jails like features?

2022-04-15 Thread Matthias Petermann

Hi Greg,

On 15.04.2022 14:24, Greg Troxel wrote:


   However, this week I read a post on Reddit[2] that was a bit
   disturbing to me. In essence, it proclaims that the main development
   platform for nvmm is now DragonflyBSD rather than NetBSD. It also
   claims that the implementation in NetBSD is now "stale and
   broken". Comparing the timestamps of the last commits in the
   repositories [3] and [4], the last activities are only three months
   apart. The nature and extent of the respective changes is difficult
   for me to evaluate. Is anyone here deeper into this and can say what
   the general state of nvmm in NetBSD is?

1) nvmm seems to work well in netbsd (I haven't run it yet) and there has
been bug fixing.

2) code flows between BSDs a lot, in many directions.

3) You could run diff to see what's different and why.

4) The language in the reddit post does not sound particularly
constructive.  Someone with knowledge of improved code in DragonFly (I
don't know if that's true or not) could send a message here or
tech-kern pointing it out and suggesting we update, rather than being
dismissive on reddit.  Or file PRs and list them; technical criticism is
fair.


Probably after your message (which I view as helpful) someone(tm) will
look at the diff.  But if you are inclined to do  that and post some
comments, that's probably useful.



Thank you very much for your points. I will indeed do the diff asap out 
of interest, although I can't promise that anything can be derived from 
it - I'm not a kernel developer, let alone know anything about 
virtualization beyond the administrator level ;-)


But from a roadmap point of view, I see it as a good sign that nvmm gets 
bug fixes and is described quite comprehensively in the NetBSD guide.


Many greetings
Matthias


Status of NetBSD virtualization roadmap - support jails like features?

2022-04-14 Thread Matthias Petermann

Hello all,

this mail is more or less my personal reflection on the virtualization 
capabilities of NetBSD, combined with the question of where the 
journey could go.


I basically use all virtualization technologies offered on NetBSD:

* Xen for virtualizing entire servers on production environments.
* Qemu/nvmm for virtualization currently on the desktop (playground)
* Chroots for administrative isolation of services - I use these like 
jails with the knowledge that they don't provide the same security.


My motivation: I am looking for a particularly high-performance 
virtualization solution on NetBSD. Disk and network I/O especially 
matter to me.


So far I thought that nvmm could play a bigger role in the future, 
because there are some interesting approaches, for example [1].


However, this week I read a post on Reddit[2] that was a bit disturbing 
to me. In essence, it proclaims that the main development platform for 
nvmm is now DragonflyBSD rather than NetBSD. It also claims that the 
implementation in NetBSD is now "stale and broken". Comparing the 
timestamps of the last commits in the repositories [3] and [4], the last 
activities are only three months apart. The nature and extent of the 
respective changes is difficult for me to evaluate. Is anyone here 
deeper into this and can say what the general state of nvmm in NetBSD is?


Regardless, I still think it wouldn't hurt if NetBSD could implement 
some sort of jail. There have been promising projects in the past [5] 
and [6] that seem to have put a lot of thought into a clean integration 
with the NetBSD APIs kauth and the secmodels. So far, however, none of 
these approaches has made it beyond prototype status. Does anyone know 
if there is a code repository for [5]? I would be interested to see the 
implementation or the approaches to it. I realize that a complete jail 
implementation comparable to FreeBSD is not an easy task. However, for 
certain use cases, it would be helpful to be able to take away some of 
the privileges of a process running as root in a chroot jail, such as 
sending signals to processes outside the jail. Are there any examples of 
this available?


Kind regards
Matthias

[1] 
https://imil.net/blog/posts/2020/fakecracker-netbsd-as-a-function-based-microvm/


[2] https://www.reddit.com/r/NetBSD/comments/sq62bc/nvmm_status/

[3] http://cvsweb.netbsd.org/bsdweb.cgi/src/sys/dev/nvmm/?only_with_tag=MAIN

[4] 
https://github.com/DragonFlyBSD/DragonFlyBSD/tree/master/sys/dev/virtual/nvmm


[5] http://2008.asiabsdcon.org/papers/P3A-paper.pdf

[6] https://github.com/smherwig/netbsd-sandbox



Re: black screen, boot doesn't finish

2022-03-04 Thread Matthias Petermann

Hello all,

I have here an Intel NUC i5-7300U with integrated graphics with similar 
problems on NetBSD 9.99.93 (build from 02/28/2022).


When I boot the machine in UEFI mode, the green kernel output appears 
first, up to the mode switch of the graphical console. After that the 
screen remains black.


When I boot the same device in BIOS mode, the boot continues properly 
after the mode switch to the graphical console.


Is there a patch etc. that would be worth testing?

Many greetings
Matthias


On 18.02.2022 21:08, Thomas Klausner wrote:

On Fri, Feb 18, 2022 at 10:07:07PM +0200, Andreas Gustafsson wrote:

Thomas Klausner wrote:

This commit

$NetBSD: drmfb.c,v 1.13 2022/02/16 23:30:10 riastradh Exp $

makes my graphical console disappear.


It also makes my i386 laptop testbed hang during boot:

   
http://www.gson.org/netbsd/bugs/build/i386-laptop/commits-2022.02.html#2022.02.16.23.30.10


I've just backed it out (with riastradh's ok).
  Thomas


xterm-color256: Different behavior between NetBSD 9.2 and 9.99.93?

2022-02-02 Thread Matthias Petermann



Hello all,

on my NetBSD systems I set the environment variable TERM to 
"xterm-256color". This makes console apps like mc, taskwarrior, fish, 
micro etc. more attractive, as it allows the use of 256-color themes. 
At least this is the case in NetBSD 9.2.


On NetBSD 9.99.93 (build from 01/22/2022), however, only an 8-color 
terminal is recognized by mc, for example, despite the TERM variable 
being set. I could also verify this with tput:


NetBSD 9.2:

```
$ echo $TERM
xterm-256color
$ tput colors
256
```

NetBSD 9.99.93:

```
$ echo $TERM
xterm-256color
$ tput colors
8
```
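
(One way to check whether the terminfo database itself still carries 
the full entry - just a sketch:

```
$ infocmp xterm-256color | grep colors
```

If colors#256 shows up there, the database entry is intact and the 
truncation to 8 colors happens elsewhere.)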

Were there changes here that require a different type of configuration, 
or is this a bug?


Kind regards
Matthias


NetBSD 9.99.93 kernel crash with Firefox, Lariza and Mate

2022-01-29 Thread Matthias Petermann

Hello all,

this machine runs NetBSD 9.99.93 (built from the sources from 
22.01.2022). The system runs Xorg (Intel KMS) and has been quite 
stable so far. However, several X applications crash the system 
reproducibly; each time there is a kernel crash with a core dump and 
reboot.


First I noticed this with Firefox. Likewise with Lariza, which I tried 
as an alternative browser. It also happens when I try to start a MATE 
session in Xorg.


If I avoid the above applications and use Awesome window manager, I can 
work for hours with multiple xterms without crashing.


I don't see a clear pattern behind this at the moment. I had suspected 
that it might have something to do with hardware acceleration. I have 
turned off hardware acceleration in Firefox because I had display 
problems with it under NetBSD 9.2, but this did not change anything for 
NetBSD 9.99.93.


To get further details, I tried to open one of the core dumps in gdb. 
Unfortunately it can't read the symbols (what would I have to do to make 
it do that?).


```
$ gdb  --symbols=netbsd.0 --quiet --eval-command="file netbsd.0" 
--eval-command="target kvm netbsd.0.core" --eval-command "bt" 
--eval-command "list" --eval-command "info all-registers"


Reading symbols from netbsd.0...
(No debugging symbols found in netbsd.0)
Reading symbols from netbsd.0...
(No debugging symbols found in netbsd.0)
0x802261f5 in cpu_reboot ()
#0  0x802261f5 in cpu_reboot ()
#1  0x80dbe8d4 in kern_reboot ()
#2  0x80e018c2 in vpanic ()
#3  0x80e01987 in panic ()
#4  0x80229017 in trap ()
#5  0x802210e3 in alltraps ()
#6  0x80d7a30c in uvn_findpage ()
#7  0x80d7a70f in uvn_findpages ()
#8  0x80e7f7d3 in genfs_getpages ()
#9  0x80e7cd52 in VOP_GETPAGES ()
#10 0x80d7a0fa in uvn_get ()
#11 0x80d5b0cc in uvm_fault_internal ()
#12 0x80228970 in trap ()
#13 0x802210e3 in alltraps ()
No symbol table is loaded.  Use the "file" command.
```
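
(My current guess at what is missing: netbsd.0 itself carries no debug 
information. If the kernel is built with makeoptions DEBUG="-g", a 
netbsd.gdb with full symbols is left in the kernel compile directory, 
and the dump could be opened against that instead - a sketch, the path 
depends on the local build:

```
$ gdb /usr/src/sys/arch/amd64/compile/obj/GENERIC/netbsd.gdb
(gdb) target kvm netbsd.0.core
(gdb) bt
```
)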

Has anyone recently observed anything similar?

Kind regards
Matthias


Thanks for wpa_supplicant configuration in sysinst

2022-01-27 Thread Matthias Petermann

Hello all,

recently I saw in the source changes that sysinst in current got 
support for configuring wifi devices. I tried that for the first time 
today and was very happy - it worked right away and makes installing 
NetBSD on laptops so much easier. Thanks for that :-)


Matthias


Re: HEADS UP: Merging drm update (Lenovo X230 mode switch issue in UEFI mode only, BIOS works)

2022-01-11 Thread Matthias Petermann

Hello,

unfortunately my build broke the night before last, so I had to 
restart it yesterday. This morning I could finally test with the 
changed FONT option. The error pattern is unchanged - with CSM 
disabled, the graphics switch to the garbled mode at the mode switch.


As expected, the FONT option did change the font: the first lines 
before the mode switch now appear in the Spleen font.


I'm afraid these findings don't really help?

Best regards
Matthias


On 09.01.22 17:40, Matthias Petermann wrote:

Hello,

On 04.01.22 21:10, RVP wrote:


Can you check something else as well?

Compile a kernel with:

-
# Give us a choice of fonts based on monitor size
#options    FONT_BOLD8x16
#options    FONT_BOLD16x32
options FONT_SPLEEN12x24
-
Sorry for the delay... I just started a build with the options you 
recommended. You can expect some results by tomorrow morning.


Kind regards
Matthias


Re: HEADS UP: Merging drm update (Lenovo X230 mode switch issue in UEFI mode only, BIOS works)

2022-01-09 Thread Matthias Petermann

Hello,

On 04.01.22 21:10, RVP wrote:


Can you check something else as well?

Compile a kernel with:

-
# Give us a choice of fonts based on monitor size
#options    FONT_BOLD8x16
#options    FONT_BOLD16x32
options FONT_SPLEEN12x24
-
Sorry for the delay... I just started a build with the options you 
recommended. You can expect some results by tomorrow morning.


Kind regards
Matthias


Re: HEADS UP: Merging drm update (Lenovo X230 mode switch issue in UEFI mode only, BIOS works)

2021-12-31 Thread Matthias Petermann

Hello,

first of all, thanks for the effort to bring an up-to-date DRM to 
NetBSD! Proper graphics support is essential for most users and 
therefore the work cannot be appreciated enough.


I have now also managed to test current on my laptop and made an 
observation that I would like to share, hoping it helps to clarify / 
fix the underlying issue.


The laptop is a Lenovo X230 with an i5 CPU and integrated Intel 
graphics. It can boot NetBSD in both BIOS (CSM) and UEFI mode.


- With NetBSD 9.2, the mode switch (when initializing the i915drmkms0 
device) works fine in both boot modes.


- In current from 28.12.2021 the mode switch only works when I boot in 
BIOS mode.


- When I boot current in UEFI mode, after the mode switch it only 
displays a blank screen with a white background. After that, within a 
few seconds, a kind of randomly structured dark spot develops from the 
center of the screen, which then stretches to the edge of the screen [1].


One (not necessarily related) observation: after the appearance of the 
above-mentioned spot, I turned off the laptop and booted back into 
NetBSD 9.2. Immediately afterwards, I had a strange flickering on the 
display, especially noticeable with brighter colors; the graphical 
display seemed normal otherwise. The flickering then disappeared again 
over time. Although I have absolutely no idea about the internals, my 
first thought was that the dark spot could be some kind of thermal 
problem that occurs with the mode switch? Is this possible?


In any case, I would like to help track down the problem. I'm building 
-current from the latest sources and then setting up an identical 
laptop as a test machine. In the meantime, I'd appreciate any hints on 
what might be needed as diagnostic data.


Many greetings
Matthias

[1] http://www.petermann-it.de/netbsd/netbsd-drm.mp4
(SSL certificate renewal in progress)


Re: Filesystem corruption in current 9.99.92 (posix1eacl & log enabled FFSv2)

2021-12-29 Thread Matthias Petermann

Hello,

On 27.12.21 06:20, Matthias Petermann wrote:
I did not try to move the file around as you recommended because I would 
like to ask if there is anything I can do at this point to gather more 
diagnostic data to help understand the root cause?


in the meantime I migrated all files to a freshly created filesystem 
using the patched kernel and so "solved" the problem for now.


The broken filesystem still exists, but I am now running out of space 
on the host (the filesystems are in sparsely allocated VNDs). I will 
have to delete the broken filesystem soon, but would still like to run 
diagnostic steps on the root cause first, if there are any. 
Unfortunately, the filesystem is very large and remote, and I don't 
know how to reasonably isolate the affected portion to save space for 
further analysis. Are there any other reasonable steps I could take 
asap?


One more question about the patch. It helped very well to avoid the 
freeze when working in such a corrupted filesystem. In this case, the 
filesystem behaves as you described - no ACL is applied or reported on 
the affected directory. When I try to set a new ACL on the affected 
directory, it seems to have no effect, but no error message appears 
either. Would it make sense to include the patch, with appropriate 
error logging, in the official sources, so that when the problem - 
whose cause we do not know at the moment - occurs, we at least get 
some output (instead of the current behavior, the infinite loop)?


Kind regards
Matthias


Re: Filesystem corruption in current 9.99.92 (posix1eacl & log enabled FFSv2)

2021-12-26 Thread Matthias Petermann

Hello Chuck,

On 24.12.21 01:10, Matthias Petermann wrote:
thanks for the good explanation, which helped me a lot, and the tip on 
how to break the infinite loop. I will definitely try that. In the 
meantime I have mounted the filesystem without the posix1eacls option. 
In this mode the "find /export" command runs cleanly, so your tip 
regarding the ACLs / extended attributes seems completely right. 
Currently I am transferring the data to another filesystem in order to 
compare it against the most recent backup. Unfortunately this also 
means that I don't know at the moment whether, after a new mount with 
the posix1eacls option, I will get back the state I had before. I hope 
so though, because I would like to find out more about this.


I'll get back to you as soon as I'm ready.
I finally managed to get the testing done here and I can confirm that 
with your patch, the infinite loop does not occur when I access the 
affected directory.


So to summarize this up:

a) with unpatched kernel

```
# cd /export/project/A

--> infinite loop
```

b) with your patch

```
# cd /export/project/A

--> ok
```

I did not try to move the file around as you recommended because I would 
like to ask if there is anything I can do at this point to gather more 
diagnostic data to help understand the root cause?


What I tried already is to list the extended attributes for the 
"affected" directory and another "unaffected" directory located side by 
side:


# lsextattr system /export/project/A
A/
# lsextattr system /export/project/B
B/	posix1e.acl_access	posix1e.acl_default	security.NTACL

I guess this proves your theory that the extended attribute block got 
corrupted somehow...
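
For the archives, the recovery recipe you described as a minimal shell 
sketch - file name, owner and mode are hypothetical:

```
# cd /export/project/A
# cp file file.tmp     # plain copy; -p deliberately omitted so that
                       # extended attributes are not preserved
# rm file
# mv file.tmp file
# chown user:group file    # restore the original owner/group/mode
# chmod 0644 file
```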


Kind regards
Matthias


Re: Filesystem corruption in current 9.99.92 (posix1eacl & log enabled FFSv2)

2021-12-23 Thread Matthias Petermann

Hi Chuck,

On 23.12.21 18:12, Chuck Silvers wrote:

a "cylinder group" is a metadata structure in FFS that describes the
allocation state of a portion of the blocks and inodes of the file system
and contains the inode records themselves.  the header for this structure
also contains a "magic number" field that is supposed to contain a certain
constant value as a way to sanity-check that this metadata on disk was not
overwritten with some completely unrelated contents.

in your case, since the magic number field does not actually contain the value
that it's supposed to contain, we know that the storage underneath the
file system has gotten corrupted somehow.  you'll want to track down
how that happened, but that is separate from your immediate problem.

this sounds like a bug I have seen before, where the extended attribute block
for a file has been corrupted.  please try the attached patch and see if
this prevents the infinite loop.

if that does prevent the infinite loop, then the file will probably appear
not to have an ACL anymore, and I'm not sure what will happen if you try
to set a new ACL on the file when it is in this state.  for right now,
the safest thing you can do will be to make a copy of the file without
trying to preserve extended attributes (ie. do not use cp's "-p" option),
then delete the original file, then move the copy of the file to have
the original file's name, then you can change the new file's
owner/group/mode/ACL to be what the original file had.


thanks for the good explanation, which helped me a lot, and the tip on 
how to break the infinite loop. I will definitely try that. In the 
meantime I have mounted the filesystem without the posix1eacls option. 
In this mode the "find /export" command runs cleanly, so your tip 
regarding the ACLs / extended attributes seems completely right. 
Currently I am transferring the data to another filesystem in order to 
compare it against the most recent backup. Unfortunately this also 
means that I don't know at the moment whether, after a new mount with 
the posix1eacls option, I will get back the state I had before. I hope 
so though, because I would like to find out more about this.


I'll get back to you as soon as I'm ready.

Thanks again
Matthias


Filesystem corruption in current 9.99.92 (posix1eacl & log enabled FFSv2)

2021-12-23 Thread Matthias Petermann

Hello,

for tracking down an FFS issue in current I would appreciate some 
advice. There is a NetBSD 9.99.92 Xen/PV VM (storage provided by a 
file-backed VND). The kernel is built from ~2021-11-27 CVS sources. 
The root partition is a normal FFSv2 with WAPBL. In addition there is 
a data partition for which I have posix1eacls enabled (for Samba 
network shares and sysvol).


The data partition causes problems. Without the host having crashed or 
been shut down uncleanly in the past, the filesystem seems to have 
become inconsistent. I first noticed this because the "find" of the 
daily cron job was still running late in the morning with 100% CPU 
load but no ongoing disk I/O.


Then I took the filesystem offline for safety and forced an fsck. 
Errors were detected and fixed:


```
$ doas fsck -f NAME=export
** /dev/rdk3
** File system is already clean
** Last Mounted on /export
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
CG 31: PASS5: BAD MAGIC NUMBER
ALTERNATE SUPERBLK(S) ARE INCORRECT
SALVAGE? [yn]

CG 31: PASS5: BAD MAGIC NUMBER
ALTERNATE SUPERBLK(S) ARE INCORRECT
SALVAGE? [yn] y

SUMMARY INFORMATION BAD
SALVAGE? [yn] y

BLK(S) MISSING IN BIT MAPS
SALVAGE? [yn] y

CG 799: PASS5: BAD MAGIC NUMBER
CG 801: PASS5: BAD MAGIC NUMBER
CG 806: PASS5: BAD MAGIC NUMBER
CG 823: PASS5: BAD MAGIC NUMBER
CG 962: PASS5: BAD MAGIC NUMBER
CG 966: PASS5: BAD MAGIC NUMBER
482470 files, 113827090 used, 67860178 free (3818 frags, 8482045 blocks, 
0.0% fragmentation)


* FILE SYSTEM WAS MODIFIED *
```

I did not find much information about what these cylinder group magic 
numbers mean and what could have caused them to be "bad" :-/
Anyway, a repeated fsck does not show further errors, so I thought it 
should be fine. However, after mounting the FS on /export with


```
$ find /export
```

I can still trigger the above-mentioned 100% CPU problem in a 
reproducible manner. find always hangs at the same directory entry.


Does anyone have an idea how I can investigate this further? I have 
already done a ktrace on find, but in the state in question there seems 
to be no activity going on in find itself.


Kind regards
Matthias


Re: Samba DC provisioning fails with Posix ACL enabled FFS

2021-11-30 Thread Matthias Petermann

Thanks :-)

On 29.11.21 21:03, Jaromír Doleček wrote:

UFS_ACL enabled in XEN3_DOMU now.

On Mon, 29 Nov 2021 at 17:46, Matthias Petermann wrote:


On 28.11.21 17:32, Christos Zoulas wrote:

Thanks for the bug report :-)

christos



You're welcome :-)

One more small question: currently the UFS_ACL option in the XEN3_DOMU
is not enabled by default for the amd64 architecture. For XEN_DOM0 the
option is enabled. I guess that the main use case for the ACLs for many
users will be Samba. If one installs Samba on a Xen system, it will
probably be in a DOMU rather than a DOM0.

What do you think about enabling this UFS_ACL for XEN3_DOMU as well?

Kind regards
Matthias


Re: Samba DC provisioning fails with Posix ACL enabled FFS

2021-11-29 Thread Matthias Petermann

On 28.11.21 17:32, Christos Zoulas wrote:

Thanks for the bug report :-)

christos



You're welcome :-)

One more small question: currently the UFS_ACL option in the XEN3_DOMU 
is not enabled by default for the amd64 architecture. For XEN_DOM0 the 
option is enabled. I guess that the main use case for the ACLs for many 
users will be Samba. If one installs Samba on a Xen system, it will 
probably be in a DOMU rather than a DOM0.


What do you think about enabling this UFS_ACL for XEN3_DOMU as well?

Kind regards
Matthias


Re: Samba DC provisioning fails with Posix ACL enabled FFS

2021-11-28 Thread Matthias Petermann

Hello all,

it turned out that my problem was a result of an inconsistency in the 
ACL variant (NFSv4 vs. POSIX1e) that existed in NetBSD-current for about 
2 months. Christos was kind enough to look at it and fix it right 
away [1]. My big thanks for that!


With all NetBSD-current builds from sources of 2021-11-27 and newer, 
provisioning of an AD domain can be expected to work now. I tested 
this successfully with Samba from pkgsrc-2021Q3.


Many greetings
Matthias


[1] https://anonhg.netbsd.org/src/rev/21d465dbb2a8


Re: Samba DC provisioning fails with Posix ACL enabled FFS

2021-11-25 Thread Matthias Petermann

On 25.11.21 14:49, Matthias Petermann wrote:
I am using Samba 4.13.11 from pkgsrc-2021Q3 (compiled with the acl option). 
The NetBSD version is: NetBSD net.local 9.99.92 NetBSD 9.99.92 
(XEN3_DOMU_CUSTOM) #0: Thu Nov 25 06:26:36 CET 2021 
mpeterma@sysbldr92.local:/home/mpeterma/netbsd-current/obj/sys/arch/amd64/compile/XEN3_DOMU_CUSTOM 
amd64




Just to add another data point: I just found out that I have a VM with 
a NetBSD 9.99.88 build from 2021-11-03 and Samba 4.13.10 for which the 
provisioning works. So it looks like there is only a small time window 
to investigate for relevant changes. In case someone experienced the 
same issue and knows what the problem is, I will be thankful for any 
hint. If I find the issue myself, I will send an update as soon as 
possible.


Kind regards
Matthias


Samba DC provisioning fails with Posix ACL enabled FFS

2021-11-25 Thread Matthias Petermann

Hello all,

has anyone tried provisioning a Samba DC on NetBSD current recently?

I managed to do this about half a year ago. Currently, however, there 
seems to be a problem that I can't quite figure out yet.


As storage for Samba / Sysvol I use an FFS with POSIX ACLs enabled. I 
enabled these with tunefs after formatting and also pass them as mount 
options.
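
Concretely, the mount side looks like this - a sketch, the wedge name 
is from my setup:

```
# mount -o rw,posix1eacls NAME=export /export
```

or, persistently, as an /etc/fstab entry:

```
NAME=export /export ffs rw,posix1eacls 1 2
```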


However, when trying to provision I get:

```
net# samba-tool domain provision --use-rfc2307 --interactive
Realm [LOCAL]:  MPNET.LOCAL
Domain [MPNET]:
Server Role (dc, member, standalone) [dc]:
DNS backend (SAMBA_INTERNAL, BIND9_FLATFILE, BIND9_DLZ, NONE) [SAMBA_INTERNAL]:
DNS forwarder IP address (write 'none' to disable forwarding) [127.0.0.1]:  192.168.2.254
Administrator password:
Retype password:
...
INFO 2021-11-25 13:53:38,235 pid:1640 /usr/pkg/lib/python3.8/site-packages/samba/provision/__init__.py #1570: Setting up well known security principals
INFO 2021-11-25 13:53:38,260 pid:1640 /usr/pkg/lib/python3.8/site-packages/samba/provision/__init__.py #1584: Setting up sam.ldb users and groups
INFO 2021-11-25 13:53:38,351 pid:1640 /usr/pkg/lib/python3.8/site-packages/samba/provision/__init__.py #1592: Setting up self join
Repacking database from v1 to v2 format (first record CN=Print-Media-Ready,CN=Schema,CN=Configuration,DC=mpnet,DC=local)
Repack: re-packed 1 records so far
Repacking database from v1 to v2 format (first record CN=msCOM-PartitionSet-Display,CN=411,CN=DisplaySpecifiers,CN=Configuration,DC=mpnet,DC=local)
Repacking database from v1 to v2 format (first record CN=Builtin,DC=mpnet,DC=local)
set_nt_acl_no_snum: fset_nt_acl returned NT_STATUS_INVALID_PARAMETER.
ERROR(runtime): uncaught exception - (3221225485, 'An invalid parameter was passed to a service or function.')
  File "/usr/pkg/lib/python3.8/site-packages/samba/netcmd/__init__.py", line 186, in _run
    return self.run(*args, **kwargs)
  File "/usr/pkg/lib/python3.8/site-packages/samba/netcmd/domain.py", line 487, in run
    result = provision(self.logger,
  File "/usr/pkg/lib/python3.8/site-packages/samba/provision/__init__.py", line 2341, in provision
    provision_fill(samdb, secrets_ldb, logger, names, paths,
  File "/usr/pkg/lib/python3.8/site-packages/samba/provision/__init__.py", line 1979, in provision_fill
    setsysvolacl(samdb, paths.netlogon, paths.sysvol, paths.root_uid,
  File "/usr/pkg/lib/python3.8/site-packages/samba/provision/__init__.py", line 1764, in setsysvolacl
    _setntacl(os.path.join(root, name))
  File "/usr/pkg/lib/python3.8/site-packages/samba/provision/__init__.py", line 1753, in _setntacl
    return setntacl(
  File "/usr/pkg/lib/python3.8/site-packages/samba/ntacls.py", line 236, in setntacl
    smbd.set_nt_acl(
net#
```

I am using Samba 4.13.11 from pkgsrc-2021Q3 (compiled with the acl option). 
The NetBSD version is: NetBSD net.local 9.99.92 NetBSD 9.99.92 
(XEN3_DOMU_CUSTOM) #0: Thu Nov 25 06:26:36 CET 2021 
mpeterma@sysbldr92.local:/home/mpeterma/netbsd-current/obj/sys/arch/amd64/compile/XEN3_DOMU_CUSTOM 
amd64


(yes, I am using a custom XEN3_DOMU kernel as the provided kernel conf 
lacks the UFS_ACL option)
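
(A sketch of what such a custom config can look like, assuming it is 
derived from XEN3_DOMU via include - the name XEN3_DOMU_CUSTOM matches 
the kernel version string above:

```
# sys/arch/amd64/conf/XEN3_DOMU_CUSTOM
include "arch/amd64/conf/XEN3_DOMU"
options UFS_EXTATTR
options UFS_ACL
```
)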


Does anyone have an idea what is wrong here?

Kind regards
Matthias


Sysinst customization options ("installation profiles")?

2021-09-15 Thread Matthias Petermann

Hello all,

I have a general question about the NetBSD installation process.

Background: I use NetBSD/amd64 as an operating system for appliances 
that I usually install on mini or industrial PCs - sometimes also as 
virtual machines on systems where I have access to the virtualization 
host. I usually use a USB installation image and sysinst for this. My 
installation image also contains a selection of "custom" scripts, e.g. 
for setting up certain storage configurations (RAIDframe with LVM 
etc.) via the shell and then installing the system with default 
settings. This is done by booting the system from the USB image, 
exiting sysinst right away, and then working with my scripts.


Now, for the first time, someone else had to carry out such an 
installation on a VM with the help of the installation ISO. Since we 
had neglected to provide detailed installation instructions, several 
iterations were necessary until the partitioning and basic 
configuration were right. None of this was a disaster, but it was a 
nuisance for all involved.


I appreciate Sysinst as a generic installer that is the same for all 
supported platforms and allows experts to configure many details of the 
system already in the installation phase. I wouldn't want to miss this 
possibility in any case.


What I would like to know is - what about the customisability of 
Sysinst? Is there something like "hooks" in which you can call a script 
provided on the installation image, for example? I would like to extend 
Sysinst so that after the dialogue with the language selection, a menu 
appears that offers a list of "installation profiles" stored in a 
certain directory on the installation medium (as well as an "Expert 
Mode" in which the normal Sysinst then takes place). I would see the 
"installation profiles" in the first instance as "unattended" shell 
scripts for very specific hardware configurations, which are inserted 
into the image by a system integrator and then started by "progress" 
depending on the selection of Sysinst. After completion of the script, 
it might be practical to land in the main menu of Sysinst, from where 
the reboot and also the utility and config menus would be accessible.


Does something like this already exist in one form or another? Or is 
there interest in being able to use something like this in the future?


Many greetings
Matthias




Re: [HEADS UP] pkgsrc default database directory changed

2020-12-07 Thread Matthias Petermann

Hello everybody,

while I think the change makes sense (if I understand it correctly, it 
will make it easier for me in the future to switch between different 
package locations, including the corresponding metadata, just by 
renaming the respective /usr/pkg directory), I would be very happy to 
read a short statement on the timing.


Is it correct that:

1) this change will be active in pkgsrc-2020Q4
2) requires a more recent pkg_install than the one from NetBSD 9.1

?
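
Somewhat related, in case one needs to pin the database location 
explicitly: PKG_DBDIR is documented in pkg_install.conf(5) - the path 
below is just an example:

```
# /usr/pkg/etc/pkg_install.conf
PKG_DBDIR=/usr/pkg/pkgdb
```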

Thanks and best regards
Matthias


Re: sponsor NetBSD for 2020 https://github.com/sponsors/NetBSD

2020-11-10 Thread Matthias Petermann

On 10.11.2020 14:32, matthew sporleder wrote:

Indeed -- casting a wide net is in our interest.  I hope you are able
to use one of our many potential donation offerings -- paypal, stripe,
amazon smile, github sponsorship.. any I am missing?



So far my monthly subscription via PayPal has worked well. And the 
background of my question has already been answered - apparently 
paying via GitHub is an advantage for the NetBSD Foundation, at least 
in the first year, because Microsoft adds a dollar for every dollar 
donated? That would be a clear reason to change the payment provider, 
at least for one year ;-)


Kind regards
Matthias


Re: sponsor NetBSD for 2020 https://github.com/sponsors/NetBSD

2020-11-10 Thread Matthias Petermann

Hello Matthew,

On 10.11.2020 05:35, matthew sporleder wrote:

Hey -- the end of the year is coming up fast.  Wouldn't you feel
better about yourself if you added a github sponsorship to balance out
your incredible year? :)
How does this type of donation compare to a PayPal monthly 
subscription? Is it just a different transport, or are there 
advantages / disadvantages compared to PayPal?


Kind regards
Matthias


Re: [PATCH] net/samba4: relocate Sysvol to persist between reboots & move variable data out of /usr/pkg/etc/...

2020-07-31 Thread Matthias Petermann

Hello everybody,

unfortunately I was late to test this patch and saw that it is already 
in CVS. I still wanted to let you know that I can now confirm that 
without the patch and with "log" enabled the problem described by 
Chavdar occurs, and that everything seems fine with the patch. I have 
successfully provisioned a new domain on a sysvol on FFSv2 with the 
log enabled.


A big thank you from me for the quick remedy and the great cooperation!

Best regards and have a nice weekend
Matthias



Re: [PATCH] net/samba4: relocate Sysvol to persist between reboots & move variable data out of /usr/pkg/etc/...

2020-07-30 Thread Matthias Petermann

On 30.07.2020 15:58, Chuck Silvers wrote:

I tried with both "posix1eacls" and "log", and that triggers the corruption
and crash for me too.  the UFS2 extattr code hasn't been updated to have the
necessary hooks to integrate with wapbl, I'll take a look at what is needed
for that.  until that is done, please only use acls without "log".

-Chuck



Now I also understand why I have had no problem since the patch. I had 
explicitly not turned on WAPBL because I wanted to keep my original test 
case as simple as possible. If there is anything to test, I will help again.


Best wishes
Matthias


Re: [PATCH] net/samba4: relocate Sysvol to persist between reboots & move variable data out of /usr/pkg/etc/...

2020-07-29 Thread Matthias Petermann

Hello Chavdar,

On 28.07.2020 18:48, Chavdar Ivanov wrote:

This being a place people are trying samba4 as a DC, I got a
repeatable panic on one of the systems I am trying it on, as follows:

crash: _kvm_kvatop(0)
Crash version 9.99.69, image version 9.99.69.
Kernel compiled without options LOCKDEBUG.
System panicked: /: bad dir ino 657889 at offset 0: Bad dir (not
rounded), reclen=0x2e33, namlen=51, dirsiz=60 <= reclen=11827 <=
maxsize=512, flags=0x2005900, entryoffsetinblock=0, dirblksiz=512

Backtrace from time of crash is available.
_KERNEL_OPT_NARCNET() at 0
_KERNEL_OPT_DDB_HISTORY_SIZE() at _KERNEL_OPT_DDB_HISTORY_SIZE
sys_reboot() at sys_reboot
vpanic() at vpanic+0x15b
snprintf() at snprintf
ufs_lookup() at ufs_lookup+0x518
VOP_LOOKUP() at VOP_LOOKUP+0x42
lookup_once() at lookup_once+0x1a1
namei_tryemulroot() at namei_tryemulroot+0xacf
namei() at namei+0x29
vn_open() at vn_open+0x9a
do_open() at do_open+0x112
do_sys_openat() at do_sys_openat+0x72
sys_open() at sys_open+0x24
syscall() at syscall+0x26e
--- syscall (number 5) ---
syscall+0x26e:




that still looks like a file system inconsistency. Before Chuck's 
patch I also saw several times that a filesystem apparently repaired 
with fsck could no longer be trusted. After installing the patched 
kernel, to be on the safe side, I recreated with newfs all the file 
systems previously mounted with posix1eacls. Presumably fsck is not 
prepared for this kind of inconsistency, and only a newfs can restore 
a trustworthy initial state. What is the situation for you? Was the 
file system created after the patch, or has it only been treated with 
fsck so far?


In any case, I would advise you - if you have not already done so - to 
use a separate partition or LVM volume with its own file system for 
the sysvol, and to mount only this one with the posix1eacls option. It 
seems the ACL code still needs a lot of testing; this way at least you 
can be sure that your root filesystem will not be affected.


Definitely good to know that you also test with Samba - many eyes see 
more :-)


Best wishes
Matthias


[PATCH] net/samba4: relocate Sysvol to persist between reboots & move variable data out of /usr/pkg/etc/...

2020-07-27 Thread Matthias Petermann

Hello everyone,

with the introduction of FFS ACLs, Samba can be used as a Windows 
domain controller (DC). The DC needs a directory to persist its 
policies and scripts - the so-called Sysvol.


The creation of the Sysvol typically takes place during domain 
provisioning with samba-tool. At the moment, the default Samba4 from 
pkgsrc is configured to put the Sysvol below /var/run/sysvol. 
Unfortunately, there is a critical issue with this location: 
everything inside /var/run gets purged as part of the system's startup 
sequence. This means losing all your policies - ultimately a 
corruption of the domain controller state - at the next reboot.


Therefore, Sysvol needs to be relocated to a persistent place.

I checked how this is implemented elsewhere:

* On Linux systems Sysvol is typically located at /var/lib/samba/sysvol
* On FreeBSD the location is /var/db/samba4/sysvol

As /var/lib is not mentioned in HIER(7) at all, I guess it is Linux 
specific. Therefore I would propose the FreeBSD way and put it below 
/var/db/samba4/sysvol. In addition, I think it would be a good idea to 
relocate the variable Samba data (databases, caches) currently located 
at /usr/pkg/etc/samba/private as well. My proposal for the target is 
/var/db/samba4/private.


Attached is a patch which applies to pkgsrc-current. I did perform the 
usual tests (removing all previous configuration and databases, 
provisioning a new domain, joining a Windows client to the domain) - no 
issues so far.


What do you think?

Kind regards
Matthias
Index: Makefile
===
RCS file: /cvsroot/pkgsrc/net/samba4/Makefile,v
retrieving revision 1.103
diff -u -r1.103 Makefile
--- Makefile21 Jul 2020 18:42:25 -  1.103
+++ Makefile28 Jul 2020 00:29:52 -
@@ -1,7 +1,7 @@
 # $NetBSD: Makefile,v 1.103 2020/07/21 18:42:25 christos Exp $

 DISTNAME=  samba-4.12.5
-PKGREVISION=   1
+PKGREVISION=   2
 CATEGORIES=net
 MASTER_SITES=  https://download.samba.org/pub/samba/stable/

@@ -34,8 +34,8 @@
 SMB_LOCALSTATE?=   ${VARBASE}
 SMB_INFO?= ${PREFIX}/info
 SMB_MAN?=  ${PREFIX}/${PKGMANDIR}
-SMB_STATE?=${VARBASE}/run
-SMB_PRIVATE?=  ${PKG_SYSCONFDIR}/private
+SMB_STATE?=${VARBASE}/db/samba4
+SMB_PRIVATE?=  ${SMB_STATE}/private
 SMB_PID?=  ${VARBASE}/run
 SMB_CACHE?=${VARBASE}/run
 SMB_LOCK?= ${VARBASE}/run
Index: PLIST
===
RCS file: /cvsroot/pkgsrc/net/samba4/PLIST,v
retrieving revision 1.31
diff -u -r1.31 PLIST
--- PLIST   6 Jul 2020 14:38:06 -   1.31
+++ PLIST   28 Jul 2020 00:29:52 -
@@ -37,6 +37,7 @@
 bin/wbinfo
 @pkgdir bind-dns
 @pkgdir etc/samba
+@pkgdir var/db/samba4
 include/charset.h
 include/core/doserr.h
 include/core/error.h


Re: Request for a quick fix in man page gpt(8) (fbsd-zfs --> zfs)

2020-07-27 Thread Matthias Petermann

On 27.07.2020 23:10, Thomas Klausner wrote:

On Mon, Jul 27, 2020 at 10:47:30PM +0200, Matthias Petermann wrote:

recently the identifier of the ZFS partition type has been renamed in gpt.
It used to be called "fbsd-zfs". Now it is only "zfs" which I very much
welcome as a NetBSD user. Can someone please adjust the man page?


Christos just fixed it :)
  Thomas



Thanks :-)

On this occasion I would like to ask a general question about ZFS man 
pages.


I am aware that the ZFS code was largely imported from FreeBSD. Hence 
the man pages. In zfs(8), for example, reference is made to jails, which 
most likely will not be implemented in NetBSD.


Is it realistic to remove these passages from the man page and at the 
same time to disconnect the corresponding parameters of the "zfs" 
command? Or would that make a later merge of newer ZFS code very difficult?


Of course this is not a top priority and for sure other things are more 
important. However, in my eyes, on the way to stabilization it would 
help to provide a more integrated, native user experience.


Kind regards
Matthias


Request for a quick fix in man page gpt(8) (fbsd-zfs --> zfs)

2020-07-27 Thread Matthias Petermann

Hello everyone,

recently the identifier of the ZFS partition type was renamed in gpt. 
It used to be called "fbsd-zfs"; now it is just "zfs", which I very 
much welcome as a NetBSD user. Can someone please adjust the man page?
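
For completeness, the renamed type is what one now passes to "gpt 
add" - a sketch, the device is hypothetical:

```
# gpt add -t zfs wd1
```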


This should do it:

Index: gpt.8
===
RCS file: /cvsroot/src/sbin/gpt/gpt.8,v
retrieving revision 1.73
diff -r1.73 gpt.8
206c206
< .It Cm fbsd-zfs
---
> .It Cm zfs

Kind regards
Matthias


Re: Samba DC provisioning fails with ACL-enabled NetBSD-current

2020-07-26 Thread Matthias Petermann

Hello Chuck,

On 26.07.2020 02:30, Chuck Silvers wrote:

On Thu, Jul 23, 2020 at 08:09:11PM -0400, Christos Zoulas wrote:

Be very careful and use a separate partition for sysvol because Matthias 
reported
fs corruption which I have not looked at yet.


I committed a fix for the fs corruption bug just now.
If you have tried this samba provisioning step with an earlier kernel,
you should make sure that any fs corruption due to the bug is repaired
by unmounting the file system containing samba's sysvol directory and
running "fsck -fy" on that device.

-Chuck



Thank you for your effort and the quick solution. After successfully 
testing the patch, I created a complete build from the current CVS 
sources again this evening. I can confirm that the change as tested now 
also works in current.


Kind regards
Matthias


Re: nss_winbind Segmentation fault -

2020-07-21 Thread Matthias Petermann

On 22.07.2020 05:26, Matthias Petermann wrote:

Hello Christos,

On 21.07.2020 17:49, Christos Zoulas wrote:

I am having trouble building pkgsrc. Can you try:

https://www.netbsd.org/~christos/samba4.diff

christos



Thank you very much - I applied the patch tonight and can confirm that 
the group query now works:



test10# id Administrator
uid=0(MPNET\administrator) gid=100(users) 
groups=100(users),307(MPNET\enterprise admins),304(MPNET\domain 
admins),308(MPNET\group policy creator owners),306(MPNET\schema 
admins),305(MPNET\denied rodc password replication 
group),309(BUILTIN\users),300(BUILTIN\administrators)


test10# cd mpnet.local/
test10# ls -la
total 28
drwxrwx---+ 4 root  BUILTIN\administrators  512 Jul 22 07:14 .
drwxrwx---+ 3 root  BUILTIN\administrators  512 Jul 22 07:14 ..
drwxrwx---+ 4 root  BUILTIN\administrators    0 Jul 22 07:14 Policies
drwxrwx---+ 2 root  BUILTIN\administrators  512 Jul 22 07:14 scripts


I did my previous tests with Samba 4.12.3. For the patch, I used the 
version from current, which is now at 4.12.5. To be on the safe side, I 
have provisioned the entire domain again and also rejoined the Windows 
client - so this still works with 4.12.5. Great!


Best wishes
Matthias


...I forgot to mention that I had to update the PLIST to make the 
package install.


Kind regards
Matthias


Re: nss_winbind Segmentation fault -

2020-07-21 Thread Matthias Petermann

Hello Christos,

On 21.07.2020 17:49, Christos Zoulas wrote:

I am having trouble building pkgsrc. Can you try:

https://www.netbsd.org/~christos/samba4.diff

christos



Thank you very much - I applied the patch tonight and can confirm that 
the group query now works:



test10# id Administrator
uid=0(MPNET\administrator) gid=100(users) 
groups=100(users),307(MPNET\enterprise admins),304(MPNET\domain 
admins),308(MPNET\group policy creator owners),306(MPNET\schema 
admins),305(MPNET\denied rodc password replication 
group),309(BUILTIN\users),300(BUILTIN\administrators)


test10# cd mpnet.local/
test10# ls -la
total 28
drwxrwx---+ 4 root  BUILTIN\administrators  512 Jul 22 07:14 .
drwxrwx---+ 3 root  BUILTIN\administrators  512 Jul 22 07:14 ..
drwxrwx---+ 4 root  BUILTIN\administrators0 Jul 22 07:14 Policies
drwxrwx---+ 2 root  BUILTIN\administrators  512 Jul 22 07:14 scripts


I did my previous tests with Samba 4.12.3. For the patch, I used the 
version from current, which is now at 4.12.5. To be on the safe side, I 
have provisioned the entire domain again and also rejoined the Windows 
client - so this still works with 4.12.5. Great!


Best wishes
Matthias


Re: nss_winbind Segmentation fault -

2020-07-21 Thread Matthias Petermann

Hello Christos,

On 21.07.2020 16:40, Christos Zoulas wrote:

In article ,
Matthias Petermann   wrote:

Hello Christos,

Thank you for your tip - I have come a little further. Am I
interpreting the debugger output correctly that the integer pointer
groupc points to empty/unallocated memory?


Yes, but the issue is that there is an argument missing. I am fixing it.


that sounds good - I'm happy to help with testing! But no hurry :-)

Kind regards
Matthias


Re: nss_winbind Segmentation fault -

2020-07-21 Thread Matthias Petermann

Hello Christos,

Thank you for your tip - I have come a little further. Am I 
interpreting the debugger output correctly that the integer pointer 
groupc points to empty/unallocated memory?


-
test10# gdb /usr/bin/id id.core
GNU gdb (GDB) 8.3
Copyright (C) 2019 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
<http://gnu.org/licenses/gpl.html>

This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64--netbsd".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/bin/id...
Reading symbols from /usr/libdata/debug//usr/bin/id.debug...
[New process 5161]
Core was generated by `id'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x7617ee803dcf in netbsdwinbind_getgroupmembership (nsrv=0x0, 
nscb=0x0, ap=0x7f7fff209688) at ../../nsswitch/winbind_nss_netbsd.c:203
203 for (dupc = 0; dupc < MIN(maxgrp, *groupc); 
dupc++) {

(gdb) br netbsdwinbind_getgroupmembership
Breakpoint 1 at 0x7617ee803aac: file 
../../nsswitch/winbind_nss_netbsd.c, line 176.

(gdb) run Administrator
Starting program: /usr/bin/id Administrator

Breakpoint 1, netbsdwinbind_getgroupmembership (nsrv=0x0, nscb=0x0, 
ap=0x7f7fff3af948) at ../../nsswitch/winbind_nss_netbsd.c:176

176 {
(gdb) s
177 int *result = va_arg(ap, int *);
(gdb) s
178 const char  *uname  = va_arg(ap, const char *);
(gdb) s
179 gid_t   *groups = va_arg(ap, gid_t *);
(gdb) s
180 int  maxgrp = va_arg(ap, int);
(gdb) s
181 int *groupc = va_arg(ap, int *);
(gdb) s
183 struct winbindd_request request = {
(gdb) p uname
$1 = 0x73209820ce60 <_winbind_passwdbuf> "MPNET\\administrator"
(gdb) x 0x73209820ce60
0x73209820ce60 <_winbind_passwdbuf>:"MPNET\\administrator"
(gdb) p groupc
$2 = (int *) 0x73200011
(gdb) x 0x73200011
0x73200011:   
-

The rest of the backtrace looks like this:

-
(gdb) c
Continuing.

Program received signal SIGSEGV, Segmentation fault.
0x732098003dcf in netbsdwinbind_getgroupmembership (nsrv=0x0, 
nscb=0x0, ap=0x7f7fff3af948) at ../../nsswitch/winbind_nss_netbsd.c:203
203 for (dupc = 0; dupc < MIN(maxgrp, *groupc); 
dupc++) {

(gdb) bt
#0  0x732098003dcf in netbsdwinbind_getgroupmembership (nsrv=0x0, 
nscb=0x0, ap=0x7f7fff3af948) at ../../nsswitch/winbind_nss_netbsd.c:203
#1  0x732098b5a375 in _nsdispatch (retval=retval@entry=0x0, 
disp_tab=disp_tab@entry=0x732098de0de0, 
database=database@entry=0x732098ba0849 "group",
	method=method@entry=0x732098b96a4e "getgroupmembership", 
defaults=0x732098de5ec0 <__nsdefaultcompat>) at 
/home/source/ab/HEAD/src/lib/libc/net/nsdispatch.c:670
#2  0x732098aa25f9 in _getgroupmembership 
(_uname=_uname@entry=0x73209820ce60 <_winbind_passwdbuf> 
"MPNET\\administrator", agroup=agroup@entry=100,
	groups=groups@entry=0x7320991f2050, maxgrp=17, 
groupc=groupc@entry=0x7f7fff3afb2c) at 
/home/source/ab/HEAD/src/lib/libc/gen/getgroupmembership.c:396
#3  0x732098a72d40 in _getgrouplist (_uname=0x73209820ce60 
<_winbind_passwdbuf> "MPNET\\administrator", agroup=100, 
groups=0x7320991f2050, grpcnt=0x7f7fff3afb88)

at /home/source/ab/HEAD/src/lib/libc/gen/getgrouplist.c:67
#4  0x86001792 in user (pw=0x73209820ce00 <_winbind_passwd>) at 
/home/source/ab/HEAD/src/usr.bin/id/id.c:272
#5  main (argc=, argv=) at 
/home/source/ab/HEAD/src/usr.bin/id/id.c:167

-

I then unpacked the matching -current sources under /home/source... so 
that I am at least technically able to continue debugging. This is where 
my understanding currently stops, though, and I will need more time to 
work my way into the code. Can you see anything obvious in the gdb output?
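
One detail in frame #2 stands out: libc's _getgroupmembership dispatches 
retval, uname, agroup=100, groups, maxgrp=17 and groupc down to the 
module, while the module's va_arg sequence (quoted further down in this 
thread) never reads agroup, so every later read is shifted by one slot. 
That would also explain the groupc value gdb printed, 0x73200011: its 
low bits are 0x11, i.e. maxgrp's 17. If that reading is right, the fix 
is presumably a single extra read, roughly:

```
#include <sys/types.h>
#include <stdarg.h>

/* Hedged reconstruction of the corrected argument unpacking; the
 * authoritative version is whatever Christos's patch applies.
 * Compared with the snippet quoted below, only the agroup line is new. */
static void
unpack_getgroupmembership_args(va_list ap)
{
	int        *result = va_arg(ap, int *);
	const char *uname  = va_arg(ap, const char *);
	gid_t       agroup = va_arg(ap, gid_t);  /* the read that was missing */
	gid_t      *groups = va_arg(ap, gid_t *);
	int         maxgrp = va_arg(ap, int);
	int        *groupc = va_arg(ap, int *);

	/* ... the winbind request/response handling would follow ... */
	(void)result; (void)uname; (void)agroup;
	(void)groups; (void)maxgrp; (void)groupc;
}
```

With the extra read in place, groupc again points at the caller's int 
and MIN(maxgrp, *groupc) is harmless.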


Kind regards
Matthias


On 21.07.2020 at 01:29, Christos Zoulas wrote:

groupc must be NULL. You probably want to install the debug sets
so that you can also see how libc is calling nsdispatch.

christos

On Jul 20, 2020, at 7:24 PM, Matthias Petermann <m...@petermann-it.de> wrote:






nss_winbind Segmentation fault - (was: Re: Samba DC provisioning fails with ACL-enabled NetBSD-current)

2020-07-20 Thread Matthias Petermann

Hello everybody,

In the meantime I was able to successfully join a Windows VM to the 
Samba domain. The domain login works, and so does access to the Sysvol. 
Wonderful!


Now I am trying to make the domain accounts known on the Samba host - 
the NetBSD system. I have adjusted /etc/nsswitch.conf as follows:



#group: compat
group:  files winbind
#passwd: compat
passwd: files winbind


and linked the nss library to the expected location:


test10# ln -s /usr/pkg/lib/libnss_winbind.so /usr/lib/nss_winbind.so.0
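
For context on what nsdispatch(3) expects behind that symlink: it loads 
per-source modules from /usr/lib/nss_<source>.so.<version> and calls a 
registration function they export. Here is a rough skeleton of such a 
module, put together from my reading of <nsswitch.h> - not taken from 
the Samba source, so the details may differ:

```
#include <sys/types.h>
#include <nsswitch.h>
#include <stdarg.h>
#include <stddef.h>

/* The per-method callback: nsdispatch(3) passes the lookup arguments
 * through the va_list, which the module has to unpack itself. */
static int
demo_getgroupmembership(void *nsrv, void *nscb, va_list ap)
{
	(void)nsrv; (void)nscb; (void)ap;
	return NS_UNAVAIL;	/* let the next source handle it */
}

static ns_mtab mtab[] = {
	{ NSDB_GROUP, "getgroupmembership", demo_getgroupmembership, NULL },
};

/* The symbol nsdispatch(3) resolves after dlopen()ing the module. */
ns_mtab *
nss_module_register(const char *source, u_int *nelems,
    nss_module_unregister_fn *unreg)
{
	(void)source;
	*nelems = sizeof(mtab) / sizeof(mtab[0]);
	*unreg = NULL;		/* nothing to clean up in this sketch */
	return mtab;
}
```

This is presumably how winbind's netbsdwinbind_getgroupmembership gets 
wired up for the group database, which matches the backtraces below.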


My expectation was that the following would work:


test10# id Administrator
Memory fault (core dumped)


As you can see, this is not the case. A core dump is written instead. 
Fortunately, it contains a hint:



test10# gdb /usr/bin/id id.core
Reading symbols from /usr/bin/id...
(No debugging symbols found in /usr/bin/id)
[New process 1964]
Core was generated by `id'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x70fb1200355c in netbsdwinbind_getgroupmembership () from 
   /usr/lib/nss_winbind.so.0

(gdb) bt
#0  0x70fb1200355c in netbsdwinbind_getgroupmembership () from 
   /usr/lib/nss_winbind.so.0

#1  0x70fb12b5a375 in nsdispatch () from /usr/lib/libc.so.12
#2  0x70fb12aa25f9 in getgroupmembership () from /usr/lib/libc.so.12

#3  0x70fb12a72d40 in getgrouplist () from /usr/lib/libc.so.12
#4  0x00013e001792 in main ()
(gdb)
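
Frames #3 and #4 show that id(1) gets here through getgrouplist(3), so 
the crash can be reproduced without id at all. A minimal sketch - header 
choice and return-value convention taken from my reading of the NetBSD 
man pages, so treat those as assumptions:

```
#include <sys/types.h>
#include <grp.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	gid_t groups[NGROUPS_MAX];
	int i, ngroups = NGROUPS_MAX;

	/* With the broken nss module this call crashes exactly like
	 * id(1); with a working one it fills in the group list
	 * (on NetBSD, -1 means the list did not fit). */
	if (getgrouplist("MPNET\\administrator", 100, groups, &ngroups) == -1)
		fprintf(stderr, "group list truncated to %d entries\n", ngroups);

	for (i = 0; i < ngroups; i++)
		printf("%u\n", (unsigned)groups[i]);
	return 0;
}
```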


As a test, I have adjusted /etc/nsswitch.conf to not use winbind for group:


#group: compat
group:  files #winbind
#passwd:compat
passwd: files winbind


That looks better:


test10# id Administrator
uid=0(MPNET\administrator) gid=100(users) groups=100(users)


Then I reactivated winbind for group in /etc/nsswitch.conf and tried to 
look more deeply into the function netbsdwinbind_getgroupmembership with 
the debugger:



test10# gdb /usr/bin/id id.core
Reading symbols from /usr/bin/id...
(No debugging symbols found in /usr/bin/id)
[New process 11852]
Core was generated by `id'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x79b06de0355c in netbsdwinbind_getgroupmembership () from 
   /usr/lib/nss_winbind.so.0

(gdb) br netbsdwinbind_getgroupmembership
Breakpoint 1 at 0x79b06de03451
(gdb) run Administrator
Starting program: /usr/bin/id Administrator

Breakpoint 1, 0x795372a03451 in 
netbsdwinbind_getgroupmembership () from /usr/lib/nss_winbind.so.0

(gdb) s
Single stepping until exit from function netbsdwinbind_getgroupmembership,
which has no line number information.

Program received signal SIGSEGV, Segmentation fault.
0x795372a0355c in netbsdwinbind_getgroupmembership () from 
/usr/lib/nss_winbind.so.0

(gdb)


Result: the debugging symbols are missing. So I rebuilt Samba with 
debug symbols:



test10# cd /usr/pkgsrc/net/samba4/
test10# env CFLAGS=-g INSTALL_UNSTRIPPED=yes make replace


The next attempt was more revealing:


test10# id Administrator
Memory fault (core dumped)
test10# gdb /usr/bin/id id.core
Reading symbols from /usr/bin/id...
(No debugging symbols found in /usr/bin/id)
[New process 13454]
Core was generated by `id'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x731c1f203dcf in netbsdwinbind_getgroupmembership (nsrv=0x0, 
nscb=0x0, ap=0x7f7fff7b5028) at ../../nsswitch/winbind_nss_netbsd.c:203
203 for (dupc = 0; dupc < MIN(maxgrp, *groupc); dupc++) {

(gdb) bt
#0  0x731c1f203dcf in netbsdwinbind_getgroupmembership (nsrv=0x0, 
nscb=0x0, ap=0x7f7fff7b5028) at ../../nsswitch/winbind_nss_netbsd.c:203

#1  0x731c1fd5a375 in nsdispatch () from /usr/lib/libc.so.12
#2  0x731c1fca25f9 in getgroupmembership () from /usr/lib/libc.so.12

#3  0x731c1fc72d40 in getgrouplist () from /usr/lib/libc.so.12
#4  0x58e01792 in main ()
(gdb)


The problem seems to be triggered by the MIN(...) macro. Its parameters 
come from the arguments passed to netbsdwinbind_getgroupmembership:



netbsdwinbind_getgroupmembership(void *nsrv, void *nscb, va_list ap)
{
int *result = va_arg(ap, int *);
const char  *uname  = va_arg(ap, const char *);
gid_t   *groups = va_arg(ap, gid_t *);
int  maxgrp = va_arg(ap, int);
int *groupc = va_arg(ap, int *);

struct winbindd_request request = {
.wb_flags = WBFLAG_FROM_NSS,
};
struct winbindd_response response = {
.length = 0,
};
gid_t   *wblistv;
int wblistc, i, isdup, dupc;

strncpy(request.
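
MIN is presumably the usual two-argument minimum macro from 
<sys/param.h>, so the macro itself is harmless: the fault comes from 
evaluating its second argument through the shifted groupc pointer. A 
deliberately faulting sketch under that assumption:

```
#include <sys/param.h>	/* MIN(a,b) == (((a) < (b)) ? (a) : (b)) */
#include <stdio.h>

int
main(void)
{
	int  maxgrp = 17;
	int *groupc = (int *)0x73200011;  /* garbage, as in the gdb session */

	/* Evaluating *groupc is the first access through the bogus
	 * pointer, so this faults just like line 203 of
	 * winbind_nss_netbsd.c. */
	printf("%d\n", MIN(maxgrp, *groupc));
	return 0;
}
```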
