Re: bhyve zfs resizing

2019-03-18 Thread Miroslav Lachman

tech-lists wrote on 2019/03/19 01:23:


Am I correct in that I should have used UFS in the guest rather than
ZFS? Or was it the encryption?


As Alan already wrote - you can use ZFS inside the guest, but I would 
never choose ZFS in a zvol-backed guest. I prefer UFS. It is faster and 
does not need as much memory as ZFS does. My VirtualBox and bhyve guests 
are small - sometimes <1GB of RAM, sometimes 2GB of RAM, and that is very 
small for ZFS.


Maybe you can try to limit the ZFS ARC size in /etc/sysctl.conf or in 
/boot/loader.conf:

vfs.zfs.arc_max

Choose about 1/4 of your guest's RAM size and test it again.
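A minimal sketch for the guest (the 512M figure simply assumes a 2GB guest, 
i.e. roughly 1/4 of its RAM) - in the guest's /boot/loader.conf:

vfs.zfs.arc_max="512M"

or, on 11.x/12.x it can also be lowered at runtime without a reboot 
(value in bytes):

# sysctl vfs.zfs.arc_max=536870912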

Kind regards
Miroslav Lachman


Re: bhyve zfs resizing

2019-03-18 Thread Alan Somers
On Mon, Mar 18, 2019 at 6:24 PM tech-lists  wrote:
>
> On Mon, Mar 18, 2019 at 06:56:03PM +0100, Miroslav Lachman wrote:
>
> [...]
>
> Thanks for the example, I've saved it.
>
> OK, just one other question, which I might have found the answer to, or
> might not. I'm new to this virtualising on ZFS even though I've used ZFS
> for years. It's basically:
>
> I made a zvol and installed 12-R into it. When the disks option came up I
> chose the auto defaults for *ZFS* in the guest. I also selected
> encryption for both the virtual disk and swap. I think perhaps I
> shouldn't have done all this together in the same VM, because with apache
> running in it, httpd got wedged (and then everything got wedged; sync
> wouldn't return). I think the top ZFS layer, the encryption layer and
> the ZFS underneath got too busy. Happily the server still responded to a
> shutdown -r and came back up. It's scrubbing the zpool to be on the safe
> side.
>
> Am I correct in that I should have used UFS in the guest rather than
> ZFS? Or was it the encryption?
>
> thanks,
> --
> J.

Running ZFS inside of a ZFS-backed guest will be slower than using
UFS, but it should work just fine.  I doubt that it was the cause of
your problem.
-Alan


Re: bhyve zfs resizing

2019-03-18 Thread tech-lists

On Mon, Mar 18, 2019 at 06:56:03PM +0100, Miroslav Lachman wrote:

[...]

Thanks for the example, I've saved it.

OK, just one other question, which I might have found the answer to, or
might not. I'm new to this virtualising on ZFS even though I've used ZFS
for years. It's basically:

I made a zvol and installed 12-R into it. When the disks option came up I
chose the auto defaults for *ZFS* in the guest. I also selected
encryption for both the virtual disk and swap. I think perhaps I
shouldn't have done all this together in the same VM, because with apache
running in it, httpd got wedged (and then everything got wedged; sync
wouldn't return). I think the top ZFS layer, the encryption layer and
the ZFS underneath got too busy. Happily the server still responded to a
shutdown -r and came back up. It's scrubbing the zpool to be on the safe
side.

Am I correct in that I should have used UFS in the guest rather than
ZFS? Or was it the encryption?

thanks,
--
J.




re: ifuncs check flawed?

2019-03-18 Thread Dan Allen
Another data point:

I did the whole experiment with the latest 12-STABLE, but for amd64, and 
everything builds fine without changes, and runs fine too.

I used the same src.conf and make.conf.  So the problem is definitely with i386.

Dan



Re: bhyve zfs resizing

2019-03-18 Thread Miroslav Lachman

tech-lists wrote on 2019/03/18 16:25:

On Mon, Mar 18, 2019 at 09:08:31AM -0600, Alan Somers wrote:


Do you mean using a zvol as the backing store for a VM?  If so, then:
1) Yes.  You can just do "zfs set volsize" on the host.
2) In theory no, but the guest may need to be rebooted to notice the
change.  And I'm not sure if the current bhyve code will expose the
new size without a reboot or not.
3) Sure.  But after you expand the zvol (or before you shrink it),
you'll have to change the size of the guest's filesystem using the
guest's native tools.



I did it two months ago on FreeBSD 11.2.

On the host with running guest:

# zfs set volsize=200G tank1/vol1/bhyve/kotel/disk1

Even when I unmounted the disk in the guest, it still did not see the new size 
until I rebooted the guest.


After a reboot of the guest, you will see a corrupted GPT:

# gpart show -p vtbd1
=>       40  209715120    vtbd1  GPT  (200G) [CORRUPT]
         40          8           - free -  (4.0K)
         48       1024  vtbd1p1  freebsd-boot  (512K)
       1072        976           - free -  (488K)
       2048  203423744  vtbd1p2  freebsd-ufs  (97G)
  203425792    6289368           - free -  (3.0G)

And after running recover, the guest will see the added space

# gpart recover vtbd1
vtbd1 recovered


# gpart show -p vtbd1
=>        40  419430320    vtbd1  GPT  (200G)
          40          8           - free -  (4.0K)
          48       1024  vtbd1p1  freebsd-boot  (512K)
        1072        976           - free -  (488K)
        2048  203423744  vtbd1p2  freebsd-ufs  (97G)
   203425792  216004568           - free -  (103G)

After this, the partition can finally be enlarged

# gpart resize -a 1M -s 197G -i 2 vtbd1

# growfs /vol0
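(As an aside: if the guest had used ZFS on that virtual disk instead of UFS, 
the last steps would differ. A rough sketch only - the partition index and 
pool name are purely illustrative, and with the installer's encryption option 
there would also be a geli resize step in between:)

# gpart recover vtbd1
# gpart resize -i 3 vtbd1
# zpool online -e zroot vtbd1p3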


Kind regards
Miroslav Lachman


re: ifuncs check flawed?

2019-03-18 Thread Dan Allen
Well, everything built fine, but the kernel faults on boot, so the checks are 
probably needed. ;-)

I still do not understand why the linker that is part of 12-STABLE does not 
provide the support needed.

The directives in my src.conf and make.conf do not delete the lld linker.  They 
just cut down on extra llvm parts that I do not need, or so I am led to believe.

Still puzzled.

Dan




re: ifuncs check flawed?

2019-03-18 Thread Dan Allen
The ifuncs check for buildkernel is identical to the one in 

  /usr/src/lib/libc/Makefile

and is here:

  /usr/src/sys/conf/kern.pre.mk

Dan



ifuncs check flawed?

2019-03-18 Thread Dan Allen
I have been building FreeBSD for many years, as in since 2.2.8.

Currently my amd64 build of 12-STABLE is built with the following src.conf:

WITHOUT_BHYVE=1
WITHOUT_CAPSICUM=1
WITHOUT_CDDL=1
WITHOUT_CLANG_EXTRAS=1
WITHOUT_CLANG_FULL=1
WITHOUT_CROSS_COMPILER=1
WITHOUT_DEBUG_FILES=1
WITHOUT_EXAMPLES=1
WITHOUT_HYPERV=1
WITHOUT_JAIL=1
WITHOUT_LOCALES=1
WITHOUT_PROFILE=1
WITHOUT_QUOTAS=1
WITHOUT_TESTS=1

And a make.conf of:

BATCH=yes
MK_CDDL=no
MK_CLANG_EXTRAS=no
MK_DEBUG_FILES=no
MK_NO_PROFILE=yes
MK_TESTS=no
OPTIONS_UNSET=JAVA
WITH_JADETEX=no
WITHOUT_CDDL=yes
WITHOUT_CLANG_FULL=yes
WITHOUT_CTF=yes
WITHOUT_CTM=yes
DISABLE_VULNERABILITIES=yes

So far, so good.  (I do this to fit the whole thing onto a CD-image.)

Fooling around, and using xhyve on my Mac, I want to build an i386 version in a 
VM.

So I grabbed the 20190314 snapshot of 12-STABLE, i386 flavor, and installed it. 
 Works great.

Now to rebuild the world.

The buildworld and buildkernel steps both immediately fail due to the linker 
not supporting ifuncs.

The code that stops the buildworld is this from /usr/src/lib/libc/Makefile:

---
.if (${LIBC_ARCH} == amd64 || ${LIBC_ARCH} == i386) && \
 ${.TARGETS:Mall} == all && \
 defined(LINKER_FEATURES) && ${LINKER_FEATURES:Mifunc} == ""
.error ${LIBC_ARCH} libc requires linker ifunc support
.endif
---

However the linker on my system is the lld linker that supports ifuncs!  Why 
does this check fail?
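(A couple of diagnostics that may help narrow it down - which linker the system 
actually has, and where the source tree decides whether to credit it with ifunc 
support; the paths assume a stock /usr/src:)

ld --version | head -1
grep -n ifunc /usr/src/share/mk/bsd.linker.mk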

I am building on i386 but I am only building i386, as in I am not building all 
targets or amd64.

I edited the two Makefiles that have this check to remove the check and the 
builds proceed just fine.

So it appears that these checks are flawed, or I am soon to learn something new 
about FreeBSD!

Thanks,

Dan Allen





Re: bhyve zfs resizing

2019-03-18 Thread tech-lists

On Mon, Mar 18, 2019 at 09:08:31AM -0600, Alan Somers wrote:


Do you mean using a zvol as the backing store for a VM?  If so, then:
1) Yes.  You can just do "zfs set volsize" on the host.
2) In theory no, but the guest may need to be rebooted to notice the
change.  And I'm not sure if the current bhyve code will expose the
new size without a reboot or not.
3) Sure.  But after you expand the zvol (or before you shrink it),
you'll have to change the size of the guest's filesystem using the
guest's native tools.


Great, that's awesome. Thanks for clarifying 
--

J.




Re: bhyve zfs resizing

2019-03-18 Thread Alan Somers
On Mon, Mar 18, 2019 at 9:05 AM tech-lists  wrote:
>
> Hi,
>
> Apart from the performance benefit as per the section for bhyve in the
> handbook, can the size of the zfs-backed guest:
>
> 1. be resized from the host?
> 2. does the guest need to be inactive?
> 3. can linux guests (or even windows ones) be resized as well?
>
> thanks,
> --
> J.

Do you mean using a zvol as the backing store for a VM?  If so, then:
1) Yes.  You can just do "zfs set volsize" on the host.
2) In theory no, but the guest may need to be rebooted to notice the
change.  And I'm not sure if the current bhyve code will expose the
new size without a reboot or not.
3) Sure.  But after you expand the zvol (or before you shrink it),
you'll have to change the size of the guest's filesystem using the
guest's native tools.

-Alan


bhyve zfs resizing

2019-03-18 Thread tech-lists

Hi,

Apart from the performance benefit as per the section for bhyve in the
handbook, can the size of the zfs-backed guest:

1. be resized from the host?
2. does the guest need to be inactive?
3. can linux guests (or even windows ones) be resized as well?

thanks,
--
J.




Re: Observations from a ZFS reorganization on 12-STABLE

2019-03-18 Thread peter . blok
Same here using mfsbsd from 11-RELEASE. First attempt I forgot to add swap - it 
killed the ssh I was using to issue a zfs send on the remote system.

Next attempt I added swap, but ssh got killed too.

Third attempt I used mfsbsd from 12-RELEASE. It succeeded.

Now I am using mfsbsd 11-RELEASE with added swap and vfs.zfs.arc_min and 
vfs.zfs.arc_max set to 128MB (it is a 4GB system) and it succeeds.
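(For reference, a minimal sketch of those settings as loader tunables, assuming 
they go wherever your mfsbsd image picks up loader.conf variables:)

vfs.zfs.arc_min="128M"
vfs.zfs.arc_max="128M"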



> On 18 Mar 2019, at 15:14, Karl Denninger  wrote:
> 
> On 3/18/2019 08:37, Walter Cramer wrote:
>> I suggest caution in raising vm.v_free_min, at least on 11.2-RELEASE
>> systems with less RAM.  I tried "65536" (256MB) on a 4GB mini-server,
>> with vfs.zfs.arc_max of 2.5GB.  Bad things happened when the cron
>> daemon merely tried to run `periodic daily`.
>> 
>> A few more details - ARC was mostly full, and "bad things" was 1:
>> `pagedaemon` seemed to be thrashing memory - using 100% of CPU, with
>> little disk activity, and 2: many normal processes seemed unable to
>> run. The latter is probably explained by `man 3 sysctl` (see entry for
>> "VM_V_FREE_MIN").
>> 
>> 
>> On Mon, 18 Mar 2019, Pete French wrote:
>> 
>>> On 17/03/2019 21:57, Eugene Grosbein wrote:
>>>> I agree. Recently I've found a kind-of-workaround for this problem:
>>>> increase vm.v_free_min so that when "FREE" memory goes low,
>>>> the page daemon wakes earlier and shrinks UMA (and the ZFS ARC too), moving
>>>> some memory
>>>> from WIRED to FREE quickly enough so it can be re-used before bad
>>>> things happen.
>>>> 
>>>> But avoid increasing vm.v_free_min too much (e.g. over 1/4 of total
>>>> RAM)
>>>> because the kernel may start behaving strangely. For a 16GB system it should
>>>> be enough
>>>> to raise vm.v_free_min up to 262144 (1GB) or 131072 (512M).
>>>> 
>>>> This is not a permanent solution in any way but it really helps.
>>> 
>>> Ah, that's very interesting, thank you for that! I've been bitten by
>>> this issue too in the past, and it is (as mentioned) much improved on
>>> 12, but the fact it could still cause issues worries me.
>>> 
> Raising free_target should *not* result in that sort of thrashing. 
> However, that's not really a fix standing alone either since the
> underlying problem is not being addressed by either change.  It is
> especially dangerous to raise the pager wakeup thresholds if you still
> run into UMA allocated-but-not-in-use not being cleared out issues as
> there's a risk of severe pathological behavior arising that's worse than
> the original problem.
> 
> 11.1 and before (I didn't have enough operational experience with 11.2
> to know, as I went to 12.x from mostly-11.1 installs around here) were
> essentially unusable in my workload without either my patch set or the
> Phabricator one.
> 
> This is *very* workload-specific however, or nobody would use ZFS on
> earlier releases, and many do without significant problems.
> 
> -- 
> Karl Denninger
> k...@denninger.net
> /The Market Ticker/
> /[S/MIME encrypted email preferred]/



Re: Observations from a ZFS reorganization on 12-STABLE

2019-03-18 Thread Karl Denninger
On 3/18/2019 08:37, Walter Cramer wrote:
> I suggest caution in raising vm.v_free_min, at least on 11.2-RELEASE
> systems with less RAM.  I tried "65536" (256MB) on a 4GB mini-server,
> with vfs.zfs.arc_max of 2.5GB.  Bad things happened when the cron
> daemon merely tried to run `periodic daily`.
>
> A few more details - ARC was mostly full, and "bad things" was 1:
> `pagedaemon` seemed to be thrashing memory - using 100% of CPU, with
> little disk activity, and 2: many normal processes seemed unable to
> run. The latter is probably explained by `man 3 sysctl` (see entry for
> "VM_V_FREE_MIN").
>
>
> On Mon, 18 Mar 2019, Pete French wrote:
>
>> On 17/03/2019 21:57, Eugene Grosbein wrote:
>>> I agree. Recently I've found a kind-of-workaround for this problem:
>>> increase vm.v_free_min so that when "FREE" memory goes low,
>>> the page daemon wakes earlier and shrinks UMA (and the ZFS ARC too), moving
>>> some memory
>>> from WIRED to FREE quickly enough so it can be re-used before bad
>>> things happen.
>>>
>>> But avoid increasing vm.v_free_min too much (e.g. over 1/4 of total
>>> RAM)
>>> because the kernel may start behaving strangely. For a 16GB system it should
>>> be enough
>>> to raise vm.v_free_min up to 262144 (1GB) or 131072 (512M).
>>>
>>> This is not a permanent solution in any way but it really helps.
>>
>> Ah, that's very interesting, thank you for that! I've been bitten by
>> this issue too in the past, and it is (as mentioned) much improved on
>> 12, but the fact it could still cause issues worries me.
>>
Raising free_target should *not* result in that sort of thrashing. 
However, that's not really a fix standing alone either since the
underlying problem is not being addressed by either change.  It is
especially dangerous to raise the pager wakeup thresholds if you still
run into UMA allocated-but-not-in-use not being cleared out issues as
there's a risk of severe pathological behavior arising that's worse than
the original problem.

11.1 and before (I didn't have enough operational experience with 11.2
to know, as I went to 12.x from mostly-11.1 installs around here) were
essentially unusable in my workload without either my patch set or the
Phabricator one.

This is *very* workload-specific however, or nobody would use ZFS on
earlier releases, and many do without significant problems.

-- 
Karl Denninger
k...@denninger.net 
/The Market Ticker/
/[S/MIME encrypted email preferred]/




Re: Observations from a ZFS reorganization on 12-STABLE

2019-03-18 Thread Walter Cramer
I suggest caution in raising vm.v_free_min, at least on 11.2-RELEASE 
systems with less RAM.  I tried "65536" (256MB) on a 4GB mini-server, with 
vfs.zfs.arc_max of 2.5GB.  Bad things happened when the cron daemon merely 
tried to run `periodic daily`.


A few more details - ARC was mostly full, and "bad things" was 1: 
`pagedaemon` seemed to be thrashing memory - using 100% of CPU, with 
little disk activity, and 2: many normal processes seemed unable to run. 
The latter is probably explained by `man 3 sysctl` (see entry for 
"VM_V_FREE_MIN").



On Mon, 18 Mar 2019, Pete French wrote:


On 17/03/2019 21:57, Eugene Grosbein wrote:

I agree. Recently I've found a kind-of-workaround for this problem:
increase vm.v_free_min so that when "FREE" memory goes low,
the page daemon wakes earlier and shrinks UMA (and the ZFS ARC too) moving some 
memory
from WIRED to FREE quickly enough so it can be re-used before bad things 
happen.

But avoid increasing vm.v_free_min too much (e.g. over 1/4 of total RAM)
because the kernel may start behaving strangely. For a 16GB system it should be 
enough to raise vm.v_free_min up to 262144 (1GB) or 131072 (512M).

This is not a permanent solution in any way but it really helps.


Ah, that's very interesting, thank you for that! I've been bitten by this issue 
too in the past, and it is (as mentioned) much improved on 12, but the fact it 
could still cause issues worries me.


-pete.


Re: Observations from a ZFS reorganization on 12-STABLE

2019-03-18 Thread Karl Denninger

On 3/18/2019 08:07, Pete French wrote:
>
>
> On 17/03/2019 21:57, Eugene Grosbein wrote:
>> I agree. Recently I've found a kind-of-workaround for this problem:
>> increase vm.v_free_min so that when "FREE" memory goes low,
>> the page daemon wakes earlier and shrinks UMA (and the ZFS ARC too), moving
>> some memory
>> from WIRED to FREE quickly enough so it can be re-used before bad
>> things happen.
>>
>> But avoid increasing vm.v_free_min too much (e.g. over 1/4 of total RAM)
>> because the kernel may start behaving strangely. For a 16GB system it should
>> be enough
>> to raise vm.v_free_min up to 262144 (1GB) or 131072 (512M).
>>
>> This is not a permanent solution in any way but it really helps.
>
> Ah, that's very interesting, thank you for that! I've been bitten by
> this issue too in the past, and it is (as mentioned) much improved on
> 12, but the fact it could still cause issues worries me.
>
>
The code patch I developed originally essentially sought to have the ARC
code pare back before the pager started evicting working set.  A second
crack went after clearing allocated-but-not-in-use UMA.

v_free_min may not be the right place to do this -- see if bumping up
vm.v_free_target also works.
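(A quick sketch of trying that, using Eugene's 16GB figures from earlier in the 
thread; check the current values first, and persist whatever proves itself in 
/etc/sysctl.conf:)

# sysctl vm.v_free_min vm.v_free_target
# sysctl vm.v_free_target=262144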

I'll stick this on my "to do" list, but it's much less critical in my
applications than it was with 10.x and 11.x, both of which suffered from
it much more-severely to the point that I was getting "stalls" that in
some cases went on for 10 or more seconds due to things like your shell
being evicted to swap to make room for arc, which is flat-out nuts. 
That, at least, doesn't appear to be a problem with 12.

-- 
Karl Denninger
k...@denninger.net 
/The Market Ticker/
/[S/MIME encrypted email preferred]/




Re: Observations from a ZFS reorganization on 12-STABLE

2019-03-18 Thread Pete French




On 17/03/2019 21:57, Eugene Grosbein wrote:

I agree. Recently I've found a kind-of-workaround for this problem:
increase vm.v_free_min so that when "FREE" memory goes low,
the page daemon wakes earlier and shrinks UMA (and the ZFS ARC too) moving some 
memory
from WIRED to FREE quickly enough so it can be re-used before bad things 
happen.

But avoid increasing vm.v_free_min too much (e.g. over 1/4 of total RAM)
because the kernel may start behaving strangely. For a 16GB system it should be 
enough to raise vm.v_free_min up to 262144 (1GB) or 131072 (512M).

This is not a permanent solution in any way but it really helps.


Ah, that's very interesting, thank you for that! I've been bitten by this 
issue too in the past, and it is (as mentioned) much improved on 12, but 
the fact it could still cause issues worries me.


-pete.


Re: hw.vga.acpi_ignore_no_vga=1 for installation media

2019-03-18 Thread Eugene Grosbein
18.03.2019 17:22, Konstantin Belousov wrote:

> That said, did anybody consider ignoring the NO_VGA FACP flag on Silvermonts
> only ?  Or even better, gathering SMBIOS identifications for affected BIOSes
> and ignoring the flag for them ?

Is SMBIOS-based blacklisting reliable considering future BIOS updates 
(including pre-installed ones)?

When it comes to human-managed interactive installation, it seems to me more 
reliable to check the real functioning of VGA and the ACPI flag than to collect 
a blacklist.



Re: hw.vga.acpi_ignore_no_vga=1 for installation media

2019-03-18 Thread Leon Christopher Dietrich
It also happens on some Supermicro Atom boards that definitely don't run
Intel reference BIOS software. I can't provide any serial numbers though,
since it's Sunday at my location.

On 18.03.19 11:22, Konstantin Belousov wrote:
> On Mon, Mar 18, 2019 at 05:09:31AM +0700, Eugene Grosbein wrote:
>> 18.03.2019 0:34, Konstantin Belousov wrote:
>>
>>> Can anybody provide an example of machine where the flag is set but VGA
>>> works ?  For me, it is set on headless NUC when there is no monitor
>>> attached, and then BIOS does not configure framebuffer at all.
>> http://freebsd.1045724.x6.nabble.com/vt-4-related-hang-of-11-2-td6299125.html
>> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230172
>> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=229235
> All of them are about Silvermont/Airmont atoms which probably share reference
> Intel BIOS code.
>
> As I noted above, the BIOS on my machine is somewhat smarter; it reports
> NO_VGA only if the display was not connected at boot.
>
>>> So the proposal is about reversing the set of broken machines, but only
>>> in installer ?  In other words, if it worked for installer, the installed
>>> system would be broken (again) ?
>> VGA-based installation session won't even start unless this is fixed.
>>
>> It should be easy to make the installer generate the knob for the target machine
>> if the installer sees the wrong ACPI flag with working VGA hardware.
> Until the installer generates such a knob, it is out of the question to make
> the config of the kernel booted from the installation media different
> from the config of the installed system.
>
> That said, did anybody consider ignoring the NO_VGA FACP flag on Silvermonts
> only ?  Or even better, gathering SMBIOS identifications for affected BIOSes
> and ignoring the flag for them ?





Re: hw.vga.acpi_ignore_no_vga=1 for installation media

2019-03-18 Thread Konstantin Belousov
On Mon, Mar 18, 2019 at 05:09:31AM +0700, Eugene Grosbein wrote:
> 18.03.2019 0:34, Konstantin Belousov wrote:
> 
> > Can anybody provide an example of machine where the flag is set but VGA
> > works ?  For me, it is set on headless NUC when there is no monitor
> > attached, and then BIOS does not configure framebuffer at all.
> 
> http://freebsd.1045724.x6.nabble.com/vt-4-related-hang-of-11-2-td6299125.html
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=230172
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=229235
All of them are about Silvermont/Airmont atoms which probably share reference
Intel BIOS code.

As I noted above, BIOS on mine machine is somewhat smarter, it reports
NO_VGA only if the display was not connected on boot.

> 
> > So the proposal is about reversing the set of broken machines, but only
> > in installer ?  In other words, if it worked for installer, the installed
> > system would be broken (again) ?
> 
> VGA-based installation session won't even start unless this is fixed.
> 
> It should be easy to make the installer generate the knob for the target machine
> if the installer sees the wrong ACPI flag with working VGA hardware.
Until the installer generates such a knob, it is out of the question to make
the config of the kernel booted from the installation media different
from the config of the installed system.

That said, did anybody consider ignoring the NO_VGA FACP flag on Silvermonts
only ?  Or even better, gathering SMBIOS identifications for affected BIOSes
and ignoring the flag for them ?
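(If gathering those identifications, the SMBIOS strings the loader reads are 
visible at runtime via kenv; which variables are populated depends on the BIOS, 
but these are the usual candidates for a quirk-table match:)

kenv | grep ^smbios
# typically smbios.bios.vendor, smbios.bios.version,
# smbios.system.maker and smbios.system.product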


Re: hw.vga.acpi_ignore_no_vga=1 for installation media

2019-03-18 Thread Daniel Braniss



> On 17 Mar 2019, at 19:34, Konstantin Belousov  wrote:
> 
> On Sun, Mar 17, 2019 at 10:10:45AM -0600, Warner Losh wrote:
>> I generally like this idea... But two caveats...
>> 
>> First, we'd need to update the docs so that folks doing serial installs can
>> unset it. Though serial installs are a weird beast...
>> Second, if it's really needed, we should have the installer generate it.
>> Alas, only vt can tell us that, but it should be easy to add a sysctl to it
>> that says that it has done video by ignoring the absence of the vga node...
> It is not about VGA node (what is that ?).
> It is about ignoring FACP flag IAPC_BOOT_ARCH={NO_VGA}, and there are
> machines which actually break when trying to access VGA hardware despite
> the flag is set.
> Can anybody provide an example of machine where the flag is set but VGA
> works ?  For me, it is set on headless NUC when there is no monitor
> attached, and then BIOS does not configure framebuffer at all.
> 
> So the proposal is about reversing the set of broken machines, but only
> in installer ?  In other words, if it worked for installer, the installed
> system would be broken (again) ?
> 
>> 
>> Warner
>> 
>> On Sun, Mar 17, 2019 at 6:58 AM Leon Christopher Dietrich <
>> dorali...@chaotikum.org> wrote:
>> 
>>> Sounds like a solid idea.
>>> 
>>> A lot of systems out there lack a proper ACPI description for VGA and it
>>> would definitely make the installation on such a system much easier.
>>> 
>>> As far as I can tell it doesn't seem to break other things, and even low
>>> power systems without VGA (like a pcengines apu2) don't seem to suffer.
> What does the apu2 report in FACP flags ?  Do
>   acpidump -dt | grep IAPC_BOOT_ARCH


mine reports:
IAPC_BOOT_ARCH=

> 
>>> 
>>> On 17.03.19 13:00, freebsd-stable-requ...@freebsd.org wrote:
>>>> Date: Sun, 17 Mar 2019 02:59:12 +0700
>>>> From: Eugene Grosbein 
>>>> To: FreeBSD stable 
>>>> Subject: hw.vga.acpi_ignore_no_vga=1 for installation media
>>>> Message-ID: <912fc95d-5a5e-012b-7385-0f43f50dc...@grosbein.net>
>>>> Content-Type: text/plain; charset=koi8-r
>>>> 
>>>> Hi!
>>>> 
>>>> Since 11.2-RELEASE, the default console driver vt(4) checks the ACPI table for
>>>> presence of VGA in the system.
>>>> It does not initialize the console (no input, no output) if ACPI states
>>>> there is no VGA adapter.
>>>> 
>>>> There are PRs describing many cases when VGA is present but ACPI lies
>>>> and we have a regression compared with 11.1 and earlier:
>>>> FreeBSD cannot be installed interactively onto such a system, leaving
>>>> aside the serial console.
>>>> 
>>>> vt(4) has a loader knob to restore the pre-11.2 behaviour and ignore ACPI:
>>>> 
>>>> hw.vga.acpi_ignore_no_vga=1
>>>> 
>>>> Should we add this unconditionally to the installation media designed
>>>> for interactive VGA-based installation?
>>>> 
>>>> --
 --
 
>>> 
>>> 