Re: Lots of port failures today?

2022-08-18 Thread Mateusz Guzik
this should do it:
https://cgit.FreeBSD.org/src/commit/?id=545db925c3d5408e71e21432895770cd49fd2cf3

On 8/19/22, Shawn Webb  wrote:
> On Thu, Aug 18, 2022 at 02:28:58PM -0700, Mark Millard wrote:
>> Larry Rosenman  wrote on
>> Date: Thu, 18 Aug 2022 19:45:10 UTC :
>>
>> > https://home.lerctr.org:/build.html?mastername=live-host_ports=2022-08-18_13h12m51s
>> >
>> > circa 97ecdc00ac5 on main
>> > Ideas?
>>
>> Unsure but . . .
>>
>> A bunch of your errors start with text looking like:
>>
>> QUOTE
>> CMake Error:
>>   The detected version of Ninja () is less than the version of Ninja
>> required
>>   by CMake (1.3).
>> END QUOTE
>>
>
> The 14-CURRENT/amd64 package build I kicked off yesterday for
> HardenedBSD is experiencing the same exact failure. Nearly 12,000
> ports skipped:
>
> http://ci-08.md.hardenedbsd.org/build.html?mastername=hardenedbsd-current_amd64-local=2022-08-17_20h01m01s
>
> Thanks,
>
> --
> Shawn Webb
> Cofounder / Security Engineer
> HardenedBSD
>
> https://git.hardenedbsd.org/hardenedbsd/pubkeys/-/raw/master/Shawn_Webb/03A4CBEBB82EA5A67D9F3853FF2E67A277F8E1FA.pub.asc
>


-- 
Mateusz Guzik 



Re: Lots of port failures today?

2022-08-18 Thread Shawn Webb
On Thu, Aug 18, 2022 at 02:28:58PM -0700, Mark Millard wrote:
> Larry Rosenman  wrote on
> Date: Thu, 18 Aug 2022 19:45:10 UTC :
> 
> > https://home.lerctr.org:/build.html?mastername=live-host_ports=2022-08-18_13h12m51s
> > 
> > circa 97ecdc00ac5 on main
> > Ideas?
> 
> Unsure but . . .
> 
> A bunch of your errors start with text looking like:
> 
> QUOTE
> CMake Error:
>   The detected version of Ninja () is less than the version of Ninja required
>   by CMake (1.3).
> END QUOTE
> 

The 14-CURRENT/amd64 package build I kicked off yesterday for
HardenedBSD is experiencing the same exact failure. Nearly 12,000
ports skipped:

http://ci-08.md.hardenedbsd.org/build.html?mastername=hardenedbsd-current_amd64-local=2022-08-17_20h01m01s

Thanks,

-- 
Shawn Webb
Cofounder / Security Engineer
HardenedBSD

https://git.hardenedbsd.org/hardenedbsd/pubkeys/-/raw/master/Shawn_Webb/03A4CBEBB82EA5A67D9F3853FF2E67A277F8E1FA.pub.asc




Re: Lots of port failures today?

2022-08-18 Thread Mateusz Guzik
Yeah, just get stock top of the tree.
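
i.e. roughly (a sketch; assumes /usr/src tracks origin/main and you are fine
discarding the local revert):

cd /usr/src
git fetch origin
git reset --hard origin/main   # back to stock main, which has the fix
git log -1 --oneline           # should now show 545db925c3d5 or later

then rebuild and install the kernel as usual.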

On 8/18/22, Larry Rosenman  wrote:
> On 08/18/2022 4:25 pm, Mateusz Guzik wrote:
>> On 8/18/22, Mateusz Guzik  wrote:
>>> On 8/18/22, Larry Rosenman  wrote:
>>>> https://home.lerctr.org:/build.html?mastername=live-host_ports=2022-08-18_13h12m51s
>>>>
>>>> circa 97ecdc00ac5 on main
>>>> Ideas?
>>>>
>>>
>>> try with 9ac6eda6c6a36db6bffa01be7faea24f8bb92a0f reverted
>>>
>>
>> I'm pretty sure it will be fixed with  URL:
>> https://cgit.FreeBSD.org/src/commit/?id=545db925c3d5408e71e21432895770cd49fd2cf3
> should I un-revert 9ac6eda6c6a36db6bffa01be7faea24f8bb92a0f and pick up
> a new pull?
> --
> Larry Rosenman http://www.lerctr.org/~ler
> Phone: +1 214-642-9640 E-Mail: l...@lerctr.org
> US Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106
>


-- 
Mateusz Guzik 



RE: Lots of port failures today?

2022-08-18 Thread Mark Millard
Larry Rosenman  wrote on
Date: Thu, 18 Aug 2022 19:45:10 UTC :

> https://home.lerctr.org:/build.html?mastername=live-host_ports=2022-08-18_13h12m51s
> 
> circa 97ecdc00ac5 on main
> Ideas?

Unsure but . . .

A bunch of your errors start with text looking like:

QUOTE
CMake Error:
  The detected version of Ninja () is less than the version of Ninja required
  by CMake (1.3).
END QUOTE

After that, there is some variety across the examples, for instance:

CMake Error at 
/usr/local/share/cmake/Modules/CMakeDetermineCompilerABI.cmake:49 (try_compile):
  Failed to generate test project build system.
Call Stack (most recent call first):
  /usr/local/share/cmake/Modules/CMakeTestCCompiler.cmake:26 
(CMAKE_DETERMINE_COMPILER_ABI)
  CMakeLists.txt:5 (project)

vs.

CMake Error at /usr/local/share/cmake/Modules/CheckFunctionExists.cmake:92 
(try_compile):
  Failed to generate test project build system.
Call Stack (most recent call first):
  CMakeLists.txt:90 (check_function_exists)

vs.

CMake Error at /usr/local/share/cmake/Modules/CheckSymbolExists.cmake:148 
(try_compile):
  Failed to generate test project build system.
Call Stack (most recent call first):
  /usr/local/share/cmake/Modules/CheckSymbolExists.cmake:71 
(__CHECK_SYMBOL_EXISTS_IMPL)
  src/CMakeLists.txt:44 (check_symbol_exists)

vs.

CMake Error at /usr/local/share/cmake/Modules/CheckIncludeFile.cmake:95 
(try_compile):
  Failed to generate test project build system.
Call Stack (most recent call first):
  cmake/config-ix.cmake:60 (check_include_file)
  CMakeLists.txt:732 (include)

and so on.

But those are after the Ninja report and might be consequences
of whatever is going on for the Ninja reports.
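
For reference, CMake obtains that version string by running the ninja binary it
found with --version, so a rough probe of what it is seeing (assuming the build
jail uses devel/ninja) would be:

ninja --version    # empty or failing output here matches the "()" in the error
cmake --version

An empty result there would suggest ninja itself is failing to run, i.e.
something below the ports layer.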

===
Mark Millard
marklmi at yahoo.com




Re: Lots of port failures today?

2022-08-18 Thread Larry Rosenman

On 08/18/2022 4:25 pm, Mateusz Guzik wrote:
> On 8/18/22, Mateusz Guzik  wrote:
>> On 8/18/22, Larry Rosenman  wrote:
>>> https://home.lerctr.org:/build.html?mastername=live-host_ports=2022-08-18_13h12m51s
>>>
>>> circa 97ecdc00ac5 on main
>>> Ideas?
>>>
>>
>> try with 9ac6eda6c6a36db6bffa01be7faea24f8bb92a0f reverted
>>
>
> I'm pretty sure it will be fixed with  URL:
> https://cgit.FreeBSD.org/src/commit/?id=545db925c3d5408e71e21432895770cd49fd2cf3

should I un-revert 9ac6eda6c6a36db6bffa01be7faea24f8bb92a0f and pick up
a new pull?

--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 E-Mail: l...@lerctr.org
US Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106



Re: Lots of port failures today?

2022-08-18 Thread Mateusz Guzik
On 8/18/22, Mateusz Guzik  wrote:
> On 8/18/22, Larry Rosenman  wrote:
>> https://home.lerctr.org:/build.html?mastername=live-host_ports=2022-08-18_13h12m51s
>>
>> circa 97ecdc00ac5 on main
>> Ideas?
>>
>
> try with 9ac6eda6c6a36db6bffa01be7faea24f8bb92a0f reverted
>

I'm pretty sure it will be fixed with  URL:
https://cgit.FreeBSD.org/src/commit/?id=545db925c3d5408e71e21432895770cd49fd2cf3

-- 
Mateusz Guzik 



Re: Lots of port failures today?

2022-08-18 Thread Lorenzo Salvadore
--- Original Message ---
On Thursday, August 18th, 2022 at 21:45, Larry Rosenman  wrote:


> 
> 
> https://home.lerctr.org:/build.html?mastername=live-host_ports=2022-08-18_13h12m51s
> 
> circa 97ecdc00ac5 on main
> Ideas?

I don't know, but I can tell you that I received two pkg-fallout mails today:
one for gcc12 and the other for gcc13-devel, and neither of them has been
updated recently. So I guess something changed in one of their dependencies,
which might have affected some other ports.

Cheers,

Lorenzo Salvadore



Re: Lots of port failures today?

2022-08-18 Thread Mateusz Guzik
On 8/18/22, Larry Rosenman  wrote:
> https://home.lerctr.org:/build.html?mastername=live-host_ports=2022-08-18_13h12m51s
>
> circa 97ecdc00ac5 on main
> Ideas?
>

try with 9ac6eda6c6a36db6bffa01be7faea24f8bb92a0f reverted
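
e.g. (sketch, in a git checkout of /usr/src; resolve conflicts by hand if the
revert does not apply cleanly):

cd /usr/src
git revert --no-edit 9ac6eda6c6a36db6bffa01be7faea24f8bb92a0f
# rebuild and install the kernel as usual, then retry the ports build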

-- 
Mateusz Guzik 



Re: kernel panic during zfs pool import

2022-08-18 Thread Toomas Soome


> On 18. Aug 2022, at 18:46, Santiago Martinez  wrote:
> 
> Hi everyone,
> 
> I have a server running 13.1-stable that was powered off (gracefully) and has
> now been powered on again, and we have the following problem.
> 
> The server boots almost all the way: the kernel loads, but when ZFS imports
> the other pools it panics with the following message:
> 
> "panic : Solaris (panic) zfs: adding existent segment to range tree 
> (offset=4af2ca9000 size=a)".
> 
> If I boot into single user and the pools (specifically pool01) are not
> imported, there is no panic, but as soon as we try to import pool01 we get
> that panic. It is worth mentioning that the pool imported fine and was online
> before.
> 
> I have two questions:
> 
> * How can I tell it not to import the pool automatically during boot, so the
> server is not stuck in an infinite boot/panic/reboot loop?

Removing zpool.cache should do it. Unfortunately, you would need to boot an
alternate root (USB/CD/net) for that.
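
Something like this from the alternate root should do it (sketch only; the pool
and dataset names zroot and zroot/ROOT/default are examples, adjust to your
layout):

zpool import -N -R /mnt zroot    # import the boot pool without mounting datasets
zfs mount zroot/ROOT/default     # mount just the root dataset under /mnt
rm /mnt/boot/zfs/zpool.cache     # no cache file -> no automatic import at boot
zpool export zroot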

> 
> * Does anybody know what this panic means?
> 

In short: a bug. I'd try to boot current and import the pool there - maybe the
issue is fixed in current…

rgds,
toomas

> Thanks in advance!
> 
> Santi
> 
> 
> 



Lots of port failures today?

2022-08-18 Thread Larry Rosenman

https://home.lerctr.org:/build.html?mastername=live-host_ports=2022-08-18_13h12m51s

circa 97ecdc00ac5 on main
Ideas?

--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 E-Mail: l...@lerctr.org
US Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106



kernel panic during zfs pool import

2022-08-18 Thread Santiago Martinez

Hi everyone,

I have a server running 13.1-stable that was powered off (gracefully) 
and has now been powered on again, and we have the following problem.


The server boots almost all the way: the kernel loads, but when ZFS 
imports the other pools it panics with the following message:


"panic : Solaris (panic) zfs: adding existent segment to range tree 
(offset=4af2ca9000 size=a)".


If I boot into single user and the pools (specifically pool01) are not 
imported, there is no panic, but as soon as we try to import pool01 we 
get that panic. It is worth mentioning that the pool imported fine and 
was online before.


I have two questions:

*    How can I tell it not to import the pool automatically during boot, 
so the server is not stuck in an infinite boot/panic/reboot loop?


*    Does anybody know what this panic means?

Thanks in advance!

Santi





Re: Hangs in bacula / NFS? on recent Current

2022-08-18 Thread Larry Rosenman

On 08/18/2022 9:49 am, Larry Rosenman wrote:

I didn't get all my mail on my bacula backups today (they back up to
an NFS-mounted TrueNAS).
Also a df hangs.

Here are procstat -kk's for all:
ler in  borg in ~ via C v14.0.5-clang on ☁️  (us-east-1)
❯ ps auxxxwww|grep bacula
bacula    2067  0.0  0.0  63188  13652  -  Is   11:30
0:17.49 /usr/local/sbin/bacula-sd -u bacula -g bacula -v -c
/usr/local/etc/bacula/bacula-sd.conf
root      2072  0.0  0.0  59280  31276  -  Is   11:30
0:00.31 /usr/local/sbin/bacula-fd -u root -g wheel -v -c
/usr/local/etc/bacula/bacula-fd.conf
bacula    2075  0.0  0.0  86992  19352  -  Is   11:30
0:56.95 /usr/local/sbin/bacula-dir -u bacula -g bacula -v -c
/usr/local/etc/bacula/bacula-dir.conf
postgres 50241  0.0  0.1 285764 160244  -  Is   23:05
0:00.38 postgres: bacula bacula [local]  (postgres)
postgres 50244  0.0  0.1 298784  74448  -  Ds   23:05
0:00.67 postgres: bacula bacula [local]  (postgres)
ler      66595  0.0  0.0  12888   2600  3  S+   09:46
0:00.00 grep --color=auto bacula

ler in  borg in ~ via C v14.0.5-clang on ☁️  (us-east-1)
❯ sudo procstat -kk 2067
  PIDTID COMMTDNAME  KSTACK
 2067 100742 bacula-sd   -   mi_switch+0x157
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_wait_sig+0x9
_cv_wait_sig+0x137 kern_select+0x9fe sys_select+0x56
amd64_syscall+0x12e fast_syscall_common+0xf8
 2067 101036 bacula-sd   -   mi_switch+0x157
sleepq_switch+0x107 sleepq_catch_signals+0x266
sleepq_timedwait_sig+0x12 _sleep+0x27d umtxq_sleep+0x242 do_wait+0x26b
__umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e amd64_syscall+0x12e
fast_syscall_common+0xf8
 2067 101038 bacula-sd   -   mi_switch+0x157
sleepq_switch+0x107 sleepq_catch_signals+0x266
sleepq_timedwait_sig+0x12 _sleep+0x27d umtxq_sleep+0x242 do_wait+0x26b
__umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e amd64_syscall+0x12e
fast_syscall_common+0xf8
 2067 124485 bacula-sd   -   mi_switch+0x157
sleepq_switch+0x107 sleepq_catch_signals+0x266
sleepq_timedwait_sig+0x12 _cv_timedwait_sig_sbt+0x15c
kern_poll_kfds+0x457 kern_poll+0x9f sys_poll+0x50 amd64_syscall+0x12e
fast_syscall_common+0xf8

ler in  borg in ~ via C v14.0.5-clang on ☁️  (us-east-1)
❯ sudo procstat -kk 2072
  PIDTID COMMTDNAME  KSTACK
 2072 100677 bacula-fd   -   mi_switch+0x157
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_wait_sig+0x9
_cv_wait_sig+0x137 kern_select+0x9fe sys_select+0x56
amd64_syscall+0x12e fast_syscall_common+0xf8
 2072 101039 bacula-fd   -   mi_switch+0x157
sleepq_switch+0x107 sleepq_catch_signals+0x266
sleepq_timedwait_sig+0x12 _sleep+0x27d umtxq_sleep+0x242 do_wait+0x26b
__umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e amd64_syscall+0x12e
fast_syscall_common+0xf8
 2072 101040 bacula-fd   -   mi_switch+0x157
sleepq_switch+0x107 sleepq_catch_signals+0x266
sleepq_timedwait_sig+0x12 _sleep+0x27d umtxq_sleep+0x242 do_wait+0x26b
__umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e amd64_syscall+0x12e
fast_syscall_common+0xf8
 2072 124490 bacula-fd   -   mi_switch+0x157
sleepq_switch+0x107 sleepq_catch_signals+0x266
sleepq_timedwait_sig+0x12 _cv_timedwait_sig_sbt+0x15c
kern_poll_kfds+0x457 kern_poll+0x9f sys_poll+0x50 amd64_syscall+0x12e
fast_syscall_common+0xf8

ler in  borg in ~ via C v14.0.5-clang on ☁️  (us-east-1)
❯ sudo procstat -kk 2075
  PIDTID COMMTDNAME  KSTACK
 2075 101007 bacula-dir  -   mi_switch+0x157
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_wait_sig+0x9
_sleep+0x29b umtxq_sleep+0x242 do_wait+0x26b __umtx_op_wait+0x53
sys__umtx_op+0x7e amd64_syscall+0x12e fast_syscall_common+0xf8
 2075 101041 bacula-dir  -   mi_switch+0x157
sleepq_switch+0x107 sleepq_catch_signals+0x266
sleepq_timedwait_sig+0x12 _sleep+0x27d umtxq_sleep+0x242 do_wait+0x26b
__umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e amd64_syscall+0x12e
fast_syscall_common+0xf8
 2075 101045 bacula-dir  -   mi_switch+0x157
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_wait_sig+0x9
_cv_wait_sig+0x137 kern_select+0x9fe sys_select+0x56
amd64_syscall+0x12e fast_syscall_common+0xf8
 2075 101046 bacula-dir  -   mi_switch+0x157
sleepq_switch+0x107 sleepq_catch_signals+0x266
sleepq_timedwait_sig+0x12 _sleep+0x27d umtxq_sleep+0x242 do_wait+0x26b
__umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e amd64_syscall+0x12e
fast_syscall_common+0xf8
 2075 101047 bacula-dir  -   mi_switch+0x157
sleepq_switch+0x107 sleepq_catch_signals+0x266
sleepq_timedwait_sig+0x12 _sleep+0x27d kern_clock_nanosleep+0x1d1
sys_nanosleep+0x3b amd64_syscall+0x12e fast_syscall_common+0xf8
 2075 124479 bacula-dir  -   mi_switch+0x157

Hangs in bacula / NFS? on recent Current

2022-08-18 Thread Larry Rosenman
I didn't get all my mail on my bacula backups today (they back up to an 
NFS-mounted TrueNAS).

Also a df hangs.
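
To see whether the hang is on the NFS mount itself, something like this should
show it (sketch; the pgrep pattern is just an example):

sudo procstat -kk $(pgrep -x df)   # an NFS-ish (newnfs/nfs_*) frame would point at the mount
nfsstat -m                         # mount options/state of the NFS mounts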

Here are procstat -kk's for all:
ler in  borg in ~ via C v14.0.5-clang on ☁️  (us-east-1)
❯ ps auxxxwww|grep bacula
bacula    2067  0.0  0.0  63188  13652  -  Is   11:30   0:17.49 
/usr/local/sbin/bacula-sd -u bacula -g bacula -v -c 
/usr/local/etc/bacula/bacula-sd.conf
root      2072  0.0  0.0  59280  31276  -  Is   11:30   0:00.31 
/usr/local/sbin/bacula-fd -u root -g wheel -v -c 
/usr/local/etc/bacula/bacula-fd.conf
bacula    2075  0.0  0.0  86992  19352  -  Is   11:30   0:56.95 
/usr/local/sbin/bacula-dir -u bacula -g bacula -v -c 
/usr/local/etc/bacula/bacula-dir.conf
postgres 50241  0.0  0.1 285764 160244  -  Is   23:05   0:00.38 
postgres: bacula bacula [local]  (postgres)
postgres 50244  0.0  0.1 298784  74448  -  Ds   23:05   0:00.67 
postgres: bacula bacula [local]  (postgres)
ler      66595  0.0  0.0  12888   2600  3  S+   09:46   0:00.00 
grep --color=auto bacula


ler in  borg in ~ via C v14.0.5-clang on ☁️  (us-east-1)
❯ sudo procstat -kk 2067
  PIDTID COMMTDNAME  KSTACK
 2067 100742 bacula-sd   -   mi_switch+0x157 
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_wait_sig+0x9 
_cv_wait_sig+0x137 kern_select+0x9fe sys_select+0x56 amd64_syscall+0x12e 
fast_syscall_common+0xf8
 2067 101036 bacula-sd   -   mi_switch+0x157 
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_timedwait_sig+0x12 
_sleep+0x27d umtxq_sleep+0x242 do_wait+0x26b 
__umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e amd64_syscall+0x12e 
fast_syscall_common+0xf8
 2067 101038 bacula-sd   -   mi_switch+0x157 
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_timedwait_sig+0x12 
_sleep+0x27d umtxq_sleep+0x242 do_wait+0x26b 
__umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e amd64_syscall+0x12e 
fast_syscall_common+0xf8
 2067 124485 bacula-sd   -   mi_switch+0x157 
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_timedwait_sig+0x12 
_cv_timedwait_sig_sbt+0x15c kern_poll_kfds+0x457 kern_poll+0x9f 
sys_poll+0x50 amd64_syscall+0x12e fast_syscall_common+0xf8


ler in  borg in ~ via C v14.0.5-clang on ☁️  (us-east-1)
❯ sudo procstat -kk 2072
  PIDTID COMMTDNAME  KSTACK
 2072 100677 bacula-fd   -   mi_switch+0x157 
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_wait_sig+0x9 
_cv_wait_sig+0x137 kern_select+0x9fe sys_select+0x56 amd64_syscall+0x12e 
fast_syscall_common+0xf8
 2072 101039 bacula-fd   -   mi_switch+0x157 
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_timedwait_sig+0x12 
_sleep+0x27d umtxq_sleep+0x242 do_wait+0x26b 
__umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e amd64_syscall+0x12e 
fast_syscall_common+0xf8
 2072 101040 bacula-fd   -   mi_switch+0x157 
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_timedwait_sig+0x12 
_sleep+0x27d umtxq_sleep+0x242 do_wait+0x26b 
__umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e amd64_syscall+0x12e 
fast_syscall_common+0xf8
 2072 124490 bacula-fd   -   mi_switch+0x157 
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_timedwait_sig+0x12 
_cv_timedwait_sig_sbt+0x15c kern_poll_kfds+0x457 kern_poll+0x9f 
sys_poll+0x50 amd64_syscall+0x12e fast_syscall_common+0xf8


ler in  borg in ~ via C v14.0.5-clang on ☁️  (us-east-1)
❯ sudo procstat -kk 2075
  PIDTID COMMTDNAME  KSTACK
 2075 101007 bacula-dir  -   mi_switch+0x157 
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_wait_sig+0x9 
_sleep+0x29b umtxq_sleep+0x242 do_wait+0x26b __umtx_op_wait+0x53 
sys__umtx_op+0x7e amd64_syscall+0x12e fast_syscall_common+0xf8
 2075 101041 bacula-dir  -   mi_switch+0x157 
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_timedwait_sig+0x12 
_sleep+0x27d umtxq_sleep+0x242 do_wait+0x26b 
__umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e amd64_syscall+0x12e 
fast_syscall_common+0xf8
 2075 101045 bacula-dir  -   mi_switch+0x157 
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_wait_sig+0x9 
_cv_wait_sig+0x137 kern_select+0x9fe sys_select+0x56 amd64_syscall+0x12e 
fast_syscall_common+0xf8
 2075 101046 bacula-dir  -   mi_switch+0x157 
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_timedwait_sig+0x12 
_sleep+0x27d umtxq_sleep+0x242 do_wait+0x26b 
__umtx_op_wait_uint_private+0x54 sys__umtx_op+0x7e amd64_syscall+0x12e 
fast_syscall_common+0xf8
 2075 101047 bacula-dir  -   mi_switch+0x157 
sleepq_switch+0x107 sleepq_catch_signals+0x266 sleepq_timedwait_sig+0x12 
_sleep+0x27d kern_clock_nanosleep+0x1d1 sys_nanosleep+0x3b 
amd64_syscall+0x12e fast_syscall_common+0xf8
 2075 124479 bacula-dir   

Re: Updating EFI boot loader results in boot hangup

2022-08-18 Thread Warner Losh
On Thu, Aug 18, 2022, 12:19 AM Simon J. Gerraty  wrote:

> Warner Losh  wrote:
> > I think I broke it with my latest updates. I don't have a good ZFS
> testing setup
> > so I'm spending a little time enhancing the bootable image generator to
> have
> > one that I can easily test boot with qemu.
>
> FWIW bhyve is *excellent* for mucking about with EFI and loader in general.
>
> I did all the UEFI support for Junos using bhyve (initially so I could
> test LOADER_VERIEXEC), and I regularly use it to test various install
> processes - pxe boot and net install, usb install, etc.
>
> I build loader.efi from a branch off main, everything else is from
> stable/12 at present.
>
> The combination of makefs, mkimg and bhyve makes hacking the low-level
> boot bits much safer.
>
> Bhyve is quick too - my Junos VMs take about 40-50s from start to login
> prompt.
>

Yup. Use all that stuff. My issue was tooling (creating the bootable ZFS
images) as well as not being able to create an image that recreates the
problem. I've fixed the tooling issue, but used qemu so I could capture
stdout to see if the tests pass/fail easily in a script. Bhyve, as far as I
know, can't do that without delving into separate expect scripts. And it
can't run arm binaries on x86...
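
For reference, the sort of scripted smoke test qemu makes trivial looks roughly
like this (sketch; the firmware path, image name and 'login:' marker are
placeholders):

timeout 180 qemu-system-x86_64 -m 512m -nographic \
    -bios /path/to/edk2-x86_64-code.fd \
    -drive file=disk.img,format=raw > boot.log 2>&1
grep -q 'login:' boot.log && echo PASS || echo FAIL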

I also use bhyve when I want to attach a debugger or need to test longer
running things.

But in this case it took a while to find a way to reproduce it... but I found
one and just pushed a fix.

Warner

>


Re: 24.3. Updating Bootcode

2022-08-18 Thread Nuno Teixeira
Forgot to send a screenshot for
loader.efi @ main-n257458-ef8b872301c5
( +Boot0007* FreeBSD-14
HD(1,GPT,73acd1b2-de41-11eb-8156-002b67dfc673,0x28,0x82000)/File(\efi\freebsd\loader.efi)
 nvd0p1:/efi/freebsd/loader.efi
/boot/efi//efi/freebsd/loader.efi )

Someone talked about these warnings earlier:
---
Attempted extraction of recovery data from stadard superblock: failed
Attempt to find boot zone recovery data: failed
(...)
---
But it boots ok.
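
For anyone following along, the update-with-fallback steps boil down to roughly
this (sketch; assumes the ESP is mounted on /boot/efi with the efi/freebsd
layout above, and that the freshly built loader is in the usual /usr/obj
location):

cp /boot/efi/efi/freebsd/loader.efi /boot/efi/efi/freebsd/loader-old.efi
cp /usr/obj/usr/src/amd64.amd64/stand/efi/loader_lua/loader_lua.efi \
   /boot/efi/efi/freebsd/loader.efi
efibootmgr -a -c -l /boot/efi/efi/freebsd/loader-old.efi -L FreeBSD-14_old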

Cheers,

Nuno Teixeira  escreveu no dia quarta, 17/08/2022 à(s)
19:18:

> *** and "EFI Hard Drive"(legacy /efi/boot/bootx64.efi) from BIOS.
>
>
> Nuno Teixeira  escreveu no dia quarta, 17/08/2022
> à(s) 19:14:
>
>> Hi,
>>
>> And it's done:
>> ---
>>  Boot0007* FreeBSD-14
>> HD(1,GPT,73acd1b2-de41-11eb-8156-002b67dfc673,0x28,0x82000)/File(\efi\freebsd\loader.efi)
>>  nvd0p1:/efi/freebsd/loader.efi
>> /boot/efi//efi/freebsd/loader.efi
>> +Boot0006* FreeBSD-14_old
>> HD(1,GPT,73acd1b2-de41-11eb-8156-002b67dfc673,0x28,0x82000)/File(\efi\freebsd\loader-old.efi)
>>  nvd0p1:/efi/freebsd/loader-old.efi
>> /boot/efi//efi/freebsd/loader-old.efi
>>  Boot0004* Windows Boot Manager
>> HD(1,GPT,8c497825-1db2-41f8-8924-85dfd0bb7283,0x800,0x82000)/File(\EFI\Microsoft\Boot\bootmgfw.efi)
>>da1p1:/EFI/Microsoft/Boot/bootmgfw.efi
>> (null)
>>  Boot* EFI Hard Drive (SAMSUNG MZVLB1T0HBLR-000L2)
>> PciRoot(0x0)/Pci(0x1d,0x0)/Pci(0x0,0x0)/NVMe(0x1,39-f9-b8-01-81-38-25-00)/HD(1,GPT,73acd1b2-de41-11eb-8156-002b67dfc673,0x28,0x82000)
>> ---
>> and I can choose "FreeBSD-14"(/efi/freebsd/loader.efi),
>> "FreeBSD-14_old"(/efi/freebsd/loader-old.efi) and "EFI Hard
>> Drive"(legacy /efi/bootx64.efi) from BIOS.
>>
>> NOTE: efibootmgr(8) example is:
>> ---
>> efibootmgr -a -c -l /boot/efi/EFI/freebsd/loader.efi -L FreeBSD-11
>>   ^^^
>> ---
>> But I chose "efi" instead of "EFI"...
>>
>> Thanks all for helping me understand it!
>>
>> Cheers,
>>
>>
>> Warner Losh  escreveu no dia terça, 16/08/2022 à(s)
>> 18:19:
>>
>>>
>>>
>>> On Tue, Aug 16, 2022 at 6:01 AM Nuno Teixeira 
>>> wrote:
>>>
 Hi Toomas,

 For better OS support, the UEFI specification (UEFI 2.8A Feb 14, page
> 499) is suggesting to use structure like:
>
> /efi//…
>
> And to use this suggestion, it means the UEFI Boot Manager needs to be
> configured (see efibootmgr(8)).
>
> Therefore, once you have set up OS specific setup, there is no use for
> default (/efi/boot/…) and you need to update one or another, but not
> both.
>

 FreeBSD has /efi/freebsd/... but it's not configured in
 efibootmgr:

>>>
>>> The current default installer will do this, but older upgraded systems
>>> don't do this by default. Likely you are looking at an older
>>> system and/or one of the 'bad actors' that reset this stuff between
>>> boots.
>>>
>>>
 efibootmgr -v:
 ---
 BootOrder  : 0004, , 2002, 2003, 2001
 Boot0004* Windows Boot Manager
 HD(1,GPT,8c497825-1db2-41f8-8924-85dfd0bb7283,0x800,0x82000)/File(\EFI\Microsoft\Boot\bootmgfw.efi)

  da0p1:/EFI/Microsoft/Boot/bootmgfw.efi (null)
 +Boot* EFI Hard Drive (SAMSUNG MZVLB1T0HBLR-000L2)
 PciRoot(0x0)/Pci(0x1d,0x0)/Pci(0x0,0x0)/NVMe(0x1,39-f9-b8-01-81-38-25-00)/HD(1,GPT,73acd1b2-de41-11eb-8156-002b67dfc673,0x28,0x82000)
  Boot2002* EFI DVD/CDROM
  Boot2003* EFI Network
  Boot2001* EFI USB Device
 ---
 so boot is definitely using /efi/boot/bootx64.efi @Boot

>>>
>>> In your case, that's true. The "EFI Hard Drive" is a default entry the
>>> UEFI BIOS created for you.
>>>
>>>
 I think I can create a new boot:
 ---
 efibootmgr -a -c -l /boot/efi/efi/freebsd/loader.efi -L FreeBSD-14
 (and make it active)
 efibootmgr -a -b 
 ---
 and create other for loader.efi.old in case of problems.

>>>
>>> Yes.
>>>
>>>
 In this case I will only need to update /efi/freebsd/loader.efi.

 Q: from what has been said on the list, the boot bits are compiled in
 /usr/src/stand; wouldn't it be a good idea that installing a new boot loader
 backed up the old one, like /boot/kernel -> /boot/kernel.old?

>>>
>>> Yes. In fact that's what's done, but only for the BIOS version. We
>>> should do the same for efi but don't seem to do so currently. But that's
>>> likely tied up behind issues of installing things automatically into the
>>> ESP on 'installworld'.
>>>
>>> Warner
>>>
>>
>>
>> --
>> Nuno Teixeira
>> FreeBSD Committer (ports)
>>
>
>
> --
> Nuno Teixeira
> FreeBSD Committer (ports)
>


-- 
Nuno Teixeira
FreeBSD Committer (ports)


Re: Updating EFI boot loader results in boot hangup

2022-08-18 Thread Simon J. Gerraty
Warner Losh  wrote:
> I think I broke it with my latest updates. I don't have a good ZFS testing 
> setup
> so I'm spending a little time enhancing the bootable image generator to have
> one that I can easily test boot with qemu.

FWIW bhyve is *excellent* for mucking about with EFI and loader in general.

I did all the UEFI support for Junos using bhyve (initially so I could
test LOADER_VERIEXEC), and I regularly use it to test various install
processes - pxe boot and net install, usb install, etc.

I build loader.efi from a branch off main, everything else is from
stable/12 at present.

The combination of makefs, mkimg and bhyve makes hacking the low-level
boot bits much safer.

Bhyve is quick too - my Junos VMs take about 40-50s from start to login
prompt.
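
The whole loop is only a handful of commands. A rough sketch (the directory
trees, sizes, firmware path and VM name are placeholders, and vmm.ko must be
loaded for bhyve):

makefs -t msdos -o fat_type=32 -s 33m esp.img efi-tree   # ESP from a prepared tree
makefs -t ffs root.ufs root-tree                         # root filesystem
mkimg -s gpt -p efi:=esp.img -p freebsd-ufs:=root.ufs -o disk.img
bhyve -c 2 -m 1G -A -H -s 0,hostbridge -s 3,ahci-hd,disk.img -s 31,lpc \
    -l com1,stdio -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd testvm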

--sjg