On Fri, Aug 19, 2016 at 11:50:35AM +0300, Toomas Soome wrote:
> 
> > On 19. aug 2016, at 10:39, Konstantin Belousov <kostik...@gmail.com> wrote:
> > 
> > On Thu, Aug 18, 2016 at 09:28:57PM -0600, Warner Losh wrote:
> >> On Thu, Aug 18, 2016 at 12:50 AM, Julian Elischer <jul...@freebsd.org> 
> >> wrote:
> >>> On 16/08/2016 4:54 AM, John Baldwin wrote:
> >>>> 
> >>>> On Monday, August 15, 2016 08:38:02 PM John Baldwin wrote:
> >>>>> 
> >>>>> Author: jhb
> >>>>> Date: Mon Aug 15 20:38:02 2016
> >>>>> New Revision: 304187
> >>>>> URL: https://svnweb.freebsd.org/changeset/base/304187
> >>>>> 
> >>>>> Log:
> >>>>>  Remove the mcd(4) driver for Mitsumi CD-ROM players.
> >>>>>
> >>>>>  This is a driver for a pre-ATAPI ISA CD-ROM adapter.  As noted in
> >>>>>  the manpage, this driver is only useful as a backend to cdcontrol to
> >>>>>  play audio CDs since it doesn't use DMA, so its data performance is
> >>>>>  "abysmal" (and that was true in the mid 90's).
> >>>> 
> >>>> No one stepped up to test patches for it either when I last posted
> >>>> patches to convert it from timeout(9) to callout(9).  I have a few
> >>>> more drivers that are both very old and that people have no business
> >>>> using in 12 (think ISA adapters that don't do DMA and can't be used
> >>>> with pccard) that I will be removing over the next little while.  I
> >>>> brought up a list of drivers on arch@ a couple of years ago and the
> >>>> conversation drifted off into the weeds about trimming GENERIC, etc.
> >>>> No one objected to the specific drivers I listed though (and I got a
> >>>> few pleas of "please remove").  If someone shows up desperately
> >>>> clutching an ISA adapter they can always dig up the source from svn
> >>>> and deal with forward porting it for whatever API changes have
> >>>> happened since it was removed.
> >>> 
> >>> 
> >>> I would imagine any machine still holding one of these probably has not
> >>> enough memory to run FreeBSD.
> >>> 
> >>> would we still run in 2MB?
> >> 
> >> With insane levels of tuning, we can run in 32MB with a userland that
> >> can do things. Even 64MB is tight w/o some tuning. 16MB is almost
> >> certainly right out except for very specialized situations. 2MB? We
> >> can't even load the loader in that :(. Oh, and all these memory
> >> configs are only possible if you tweak the loader's block cache...
> >> 
> > 
> > 32MB is quite usable.  Without any tuning, you get slightly less than 10MB
> > for userspace, which is enough for many things, and plenty if swap is
> > added.
> > 
> > Note that you cannot boot such configurations since the loader was broken,
> > but if you do manage to jump into the kernel, things were fine several
> > months ago.  I tested my relatively recent OOM changes on a 32MB qemu config.
> >> Warner
> > 
> 
> If the target is to go as low on memory as possible, sure, you can strip
> everything off; from the boot loader point of view, you should load the
> kernel from stage2 and not use the loader at all (you can load and jump to
> the kernel even now from stage2, assuming it won't need any special
> configuration from the loader config), etc.  This means a highly
> specialized build and has nothing to do with a generic all-purpose system.
> 
Why do you describe this as an 'alternative'?  Before those loader changes,
I regularly tested on a 32MB qemu i386 image and a 64MB amd64 image.  I do
not see anything extreme in these configs.  They use the normal boot path,
which provides kernels with debugging symbols, metadata, loaded modules,
etc.  Why should I use the deficient boot2-only loading, which, additionally,
cannot work on amd64?

More, this is the only reasonable way for most developers to ensure that
the system is still usable on the tiny configs found on embedded devices.
Right now the minimum which I am able to set up is 128MB, and VM changes
are simply not tested on anything smaller.  It is guaranteed that small
systems will grow regressions fast.  And I will not jump through hoops to
mitigate breakage induced by other people's changes.

> Also at some point, there is a question about how reasonable it is to have
> such a configuration as part of the generic code base for special bits like
> the boot loader itself; the problem is, testing all those variants is
> becoming impossible, and even keeping a reasonable code base amid all of
> the #if/#else/#endif spaghetti is getting quite hard and error prone.
> 
> From a developer's point of view, it is not really encouraging to have
> possible feedback like "oh, but you did break my 32MB system boot" ;)
> This does bring back some memories, however.  For the first 2 unix
> systems I was dealing with, one had 8MB and another had 12MB of
> memory... it was ~1992-1993 ;)
> 
Not mine, but you (?) indirectly broke the system for people who do use 32MB
on other arches, since the low memory config on dev systems became 128MB.
I cared about 32MB before, but not any longer.

> Right now the loader and stage2 are set to use a 64MB heap to make it
> possible to implement zfs feature support and, later on, more features.
> 
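As an aside, the heap hookup in a libsa-based stage is essentially a
one-liner.  A rough sketch only, not the actual stage2/loader code, and
"memtop" is just a stand-in for whatever the stage learned from the
BIOS/firmware about usable memory:

/*
 * Sketch: a libsa-based boot stage carving out the 64MB heap mentioned
 * above before it starts allocating.
 */
#include <stand.h>		/* libsa: setheap(), malloc(), ... */

#define	HEAP_SIZE	(64UL * 1024 * 1024)

static void
init_heap(char *memtop)
{
	/* Everything the stage later malloc()s comes from this region. */
	setheap(memtop - HEAP_SIZE, memtop);
}
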
> Also note that UEFI setups are much harder to deal with in regard to memory
> management, because as long as the Boot Services (BS) are in control, you
> cannot really control the memory management there and can end up with a
> fragmented layout that is unusable for kernel loading.  This is especially
> nasty as apparently some (buggy) systems actually have runtime services
> using boot services memory areas, so you can end up in a setup where you
> cannot re-use BS memory and those chunks can be all over the low memory
> address space...

Yes, I do suspect that eventually systems will appear where our default
layout of the kernel physical segments is not workable, and both the loader
and the kernel bootstrap would need to grow much more flexibility.  The
initial 1GB identity-map page table structure and the kernel page tables
would need to handle this, and we will need a kernel relocator either in
the loader or in the kernel itself.
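
To make the fragmentation point concrete, here is a rough sketch of walking
the firmware memory map to see what is actually left for placing a kernel.
This is only an illustration, not our loader.efi code; it assumes the usual
efi.h/efilib.h headers and a BS pointer to the firmware's EFI_BOOT_SERVICES
table:

#include <efi.h>
#include <efilib.h>

/*
 * Walk the UEFI memory map and report the largest single
 * EfiConventionalMemory descriptor.  On the buggy systems described
 * above, BS/RT allocations can chop the usable memory into many small
 * pieces, which is what limits where a kernel can be placed.
 */
static UINT64
largest_conventional_run(void)
{
	EFI_MEMORY_DESCRIPTOR *map, *p;
	UINTN sz = 0, key, dsz, i, ndesc;
	UINT32 dver;
	UINT64 run, best = 0;
	EFI_STATUS status;

	/* First call with a zero-sized buffer just reports the map size. */
	status = BS->GetMemoryMap(&sz, NULL, &key, &dsz, &dver);
	if (status != EFI_BUFFER_TOO_SMALL)
		return (0);
	sz += 4 * dsz;		/* AllocatePool below may grow the map */
	if (EFI_ERROR(BS->AllocatePool(EfiLoaderData, sz, (void **)&map)))
		return (0);
	if (EFI_ERROR(BS->GetMemoryMap(&sz, map, &key, &dsz, &dver))) {
		BS->FreePool(map);
		return (0);
	}

	ndesc = sz / dsz;
	for (i = 0, p = map; i < ndesc; i++,
	    p = (EFI_MEMORY_DESCRIPTOR *)((char *)p + dsz)) {
		if (p->Type != EfiConventionalMemory)
			continue;
		run = p->NumberOfPages * 4096;	/* UEFI pages are 4K */
		if (run > best)
			best = run;
	}
	BS->FreePool(map);
	return (best);
}

On a sane firmware you get one big run; on the systems described above the
conventional memory ends up split into many small descriptors, and none of
them may be large enough for the kernel at its expected physical layout,
which is exactly why a relocator would be needed.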