On Tue, Jul 31, 2018 at 9:08 PM, Waldek Kozaczuk <jwkozac...@gmail.com>
wrote:

> When possible, it's a worthy goal to have a kernel and usr.img which can
>> work for all platforms (xen, qemu, qemu-with-kernel, etc.) and we don't
>> need
>> to recompile for each different cloud platform. I didn't fully understand
>> if your
>> patch interferes with this goal, or doesn't. I'm fine that mbloader.elf is
>> NOT
>> compressed, but can this uncompressed kernel also be run in normal
>> qemu or xen or whatever? Is there a reason why it shouldn't?
>>
>
> I hope some of the answers above address some of your questions here.
> mbloader.elf is just a container around the same loader-stripped.elf, the
> same way lzloader.elf is. The difference is
> that mbloader.elf contains logic to bootstrap (really, pass through memory
> info and the cmdline passed in by the multiboot loader - qemu --kernel)
> and invoke start32 from loader-stripped.elf. On the other hand, lzloader.elf
> is called from boot16.S, which eventually invokes start32 from
> loader-stripped.elf
> as well. My original plan was to handle everything in lzloader.elf, but it
> quickly became too complicated, especially given that the multiboot header
> is expected to be
> in a certain place in the file.
>

Ok. I'm still not sure about the details, but I thought we could have just
the one multiboot loader - the regular
boot would just skip over the header. Whether or not we use compression
could be an option, but I'm not
sure why we need an option to have or not have the multiboot header.
Does not having it save space?


> So if there is a way to bootstrap a multiboot image in xen, it should work
> with mbloader.elf.
>
>>
>>> Please note that instead of modifying the lzloader.ld linker
>>> script to make lzloader.elf work in both non-multiboot and
>>> multiboot mode at the same time, this patch adds mbloader.ld
>>> and relevant changes to the Makefile to support a new multiboot
>>> 32-bit ELF file called mbloader.elf. Additionally, it has been
>>> observed that compression of loader-stripped.elf does
>>> not speed up loading of the kernel in multiboot mode in any significant way.
>>> Therefore mbloader.elf contains the uncompressed original loader-stripped.elf,
>>> which is simply placed in RAM by the multiboot loader at the expected
>>> place at 0x200000.
>>>
>>
>> So I'm a bit confused why the original loader-stripped.elf cannot be
>> modified
>> to suit the needs of mbloader.elf, why it needs to be a second copy with
>> slight
>> modifications. Do these modifications ruin something in its existing goal
>> of
>> being compressed into lzloader.elf?
>>
>
> I did not look into it much. But first of all, loader-stripped.elf is a
> 64-bit ELF, whereas multiboot mode
> requires a 32-bit ELF and puts the VCPU into protected mode before invoking
> multiboot_start.
>

This still doesn't explain why you need to copy loader-stripped.elf before
linking
with it again, which I think I saw you do.


>
>>
>>>
>>> This patch can be tested with following example command:
>>>
>>> qemu-system-x86_64 -m 128m -smp 1 -nographic \
>>>  -kernel ./build/release/mbloader.elf \
>>>  -enable-kvm -cpu host,+x2apic \
>>>  -append "--console=serial --bootchart /hello" \
>>>  -gdb tcp::1234,server,nowait \
>>>  -device virtio-blk-pci,id=blk0,drive=hd0,scsi=off \
>>>  -drive file=./build/release/usr.img,if=none,id=hd0,cache=none,aio=threads \
>>>  -chardev stdio,mux=on,id=stdio,signal=off \
>>>  -mon chardev=stdio,mode=readline \
>>>  -device isa-serial,chardev=stdio
>>>
>>> Fixes #981
>>>
>>
>> Nice!
>> When used with qemu this way instead of with the usual boot sequence, do
>> you notice a boot speed difference?
>>
>
> Yes. With QEMU as above it is actually slower than usual (~700ms vs ~200ms,
> per time's "real" reading).
>

Yikes! I wonder why.


> But if I use
> QEMU with the -bios option to use qboot instead of QEMU's default SeaBIOS (see
> my other email about qboot), I am able to see OSv execute
> in ~70ms (close to hyperkit), with OSv reporting its boot time (per
> bootchart) at ~10ms. I think that in general (at least based on my
> experimentation) the host can load the kernel image
> faster than we can in real mode (boot16.S). I also found the Intel NEMU (
> https://github.com/intel/nemu/) project, which seems to optimize
> qemu for 64-bit Windows and Linux partly by removing support for legacy
> stuff. Concluding, in the long run I think that multiboot mode (or bzImage)
> will
> deliver a much faster boot time.
>

It would be nice if we added to scripts/run.py an option to run OSv using these
alternative methods (qboot, hyperkit, etc.).
Or if not, at least a short document or wiki page or something on how to do it.
I think that a month after we commit this
patch, nobody will remember how to use all the various kernel executable
variants.

>>> +//
>>> +// Define data that needs to be found in the beginning of the
>>> mbloader.elf
>>> +// by multiboot loader to properly load kernel as defined by multiboot
>>> +// specification at
>>> https://www.gnu.org/software/grub/manual/multiboot/multiboot.html.
>>> +multiboot_header_start:
>>> +    .long MULTIBOOT_HEADER_MAGIC
>>> +    .long MULTIBOOT_HEADER_FLAGS
>>> +    .long -(MULTIBOOT_HEADER_MAGIC + MULTIBOOT_HEADER_FLAGS)
>>> +    .long multiboot_header_start + OSV_MBKERNEL_BASE  # header_addr
>>> +    .long OSV_MBKERNEL_BASE      # load_addr
>>
>> Is OSV_MBKERNEL_BASE different from OSV_KERNEL_BASE?
>> Maybe it's just a fixed offset from OSV_KERNEL_BASE?
>> (the more of these macros we have, the harder it becomes to keep track
>> what
>> is actually going on, and what is really independent of the other).
>>
> It is different. OSV_MBKERNEL_BASE is where the multiboot loader needs to
> place mbloader.elf
> in memory so that it can invoke multiboot_start32, which then can expect
> loader-stripped.elf's entry
> point to be at OSV_KERNEL_BASE. I am not sure how to simplify it.
>

Is OSV_MBKERNEL_BASE always at some known offset (e.g., 1024 bytes) before
OSV_KERNEL_BASE?
If so, we can compute it from OSV_KERNEL_BASE and that offset instead of
adding yet another variable that the
user can supposedly tweak (but won't know why or how).

But I don't think I understand all the details, so I'll let you be the
judge of that.


>>> +// This struct e820ent is referenced when compiling to both 32-bit and
>>> 64-bit machine code.
>>> +// Therefore we need to define this type explicitly instead of using the
>>> standard u64,
>>> +// so that in both situations the compiler calculates the offsets of the
>>> relevant fields
>>> +// and the size of the e820ent struct correctly.
>>> +typedef unsigned long long multiboot_uint64_t;
>>>
>>
>> I didn't understand this comment. Don't you *really* need 64-bit fields
>> regardless of
>> the compilation mode? Doesn't u64 work correctly in 32-bit compilation?
>> u64 is an alias for uint64_t, which is standard in C99 and should work
>> correctly. It doesn't?
>>
>> Yes, "long long" should also work, but I don't understand why u64 doesn't.
>>
> I am not sure why u64 did not work in 32-bit mode, but for whatever reason
> sizeof(struct e820ent) was 16 instead of 24. Also, the data was incorrectly
> read.
>


Oh, I found the problem. It is an OSv bug, in the way we define uint64_t (we
define it ourselves; we don't use gcc's headers):
In:

include/api/x64/bits/alltypes.h.sh:TYPEDEF unsigned long      uint64_t;

This is incorrect for -m32, where "unsigned long" is just 32 bits, not 64
bits!

I think this should either be changed to "unsigned long long" (but need to
check the implications...) or probably better - use an #ifdef:

#if __WORDSIZE == 64
typedef signed long int int64_t;
typedef unsigned long int uint64_t;
#else
typedef signed long long int int64_t;
typedef unsigned long long int uint64_t;
#endif

then you wouldn't need to manually change the existing use of uint64_t.
