Re: [lfs-support] Booting LFS with systemd

2018-06-17 Thread Michael Shell
On Mon, 18 Jun 2018 00:53:37 -0400
Michael Shell  wrote:

> init=/bin/sh

To clarify: the above goes on the kernel command line, the others that
follow are commands to be executed in the resulting shell.


  Mike

-- 
http://lists.linuxfromscratch.org/listinfo/lfs-support
FAQ: http://www.linuxfromscratch.org/blfs/faq.html
Unsubscribe: See the above information page

Do not top post on this list.

A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?

http://en.wikipedia.org/wiki/Posting_style


Re: [lfs-support] Booting LFS with systemd

2018-06-17 Thread Michael Shell
On Sun, 17 Jun 2018 11:22:23 +0200
Frans de Boer  wrote:

> Alas, keeping debugging symbols did not work. I still get the message 
> "no debug symbols found" and as a reaction to the bt command "no stack".


  Frans,

You will have to show us the commands you used so we can understand
what you did.

As per

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=883690

did you obtain a systemd.coredump file?

You may have to alter some of the systemd boot parameters to be able
to get more useful information:

https://fedoraproject.org/wiki/How_to_debug_Systemd_problems
https://freedesktop.org/wiki/Software/systemd/Debugging/

The latter link shows how to debug systemd boot problems.

Some of the more relevant systemd boot parameters (to be added to
the kernel command line of your boot loader) mentioned include:

systemd.log_level=debug
systemd.dump_core=true
systemd.crash_shell=true

The first one should give you more information on the console.
The last one should be able to get you a shell after systemd crashes.
There will be a 10 second delay from the crash till the shell appears.
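For example, with GRUB these options are appended to the kernel line of the boot entry. The kernel version and root= device below are placeholders for your own setup; only the three systemd.* options matter here:

```shell
# GRUB kernel line fragment (vmlinuz path and root= are placeholders
# for your own system); the systemd.* options go at the end:
#
#   linux /boot/vmlinuz-4.16.0 root=/dev/sda2 ro \
#         systemd.log_level=debug systemd.dump_core=true systemd.crash_shell=true
```

With GRUB you can also edit the line one-off at boot time by pressing "e" at the menu, which avoids touching grub.cfg while experimenting.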

If you can't get a shell via systemd, then you can try booting to init
directly and then try starting systemd manually (to see if it crashes).
You should be able to get a core dump and run gdb on it in this way.
You will also have to manually remount the / filesystem as read/write:

init=/bin/sh
mount -o remount,rw /
exec /usr/lib/systemd/systemd
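If a core file does not then appear where you expect, note that the kernel's core_pattern setting decides where core dumps land (a general hint, not something from the links above):

```shell
# systemd.dump_core=true only helps if you know where the kernel puts
# core files.  The core_pattern sysctl decides; a bare file name such
# as "core" means "in the crashed process's current directory".
cat /proc/sys/kernel/core_pattern
```

Once you have the core file, running gdb on /usr/lib/systemd/systemd together with that core file and issuing "bt" should give the backtrace, as the Debian bug report above describes.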


   Cheers,

   Mike



Re: [lfs-support] Booting LFS with systemd

2018-06-17 Thread Frans de Boer

On 16-06-18 22:13, Frans de Boer wrote:

On 29-05-18 08:39, Frans de Boer wrote:
First of all, I apologize for the initial flood of messages. They were 
the result of multiple attempts to get the message through to the list 
in the first place. Only yesterday I found that LFS is - still - not 
handling TLS, while my server was configured to require encrypted 
messages. Of the many parties I exchange mail with, LFS is the only 
one not supporting TLS-encrypted traffic.


Now on subject: I will look into that matter, but normal production 
systems have the same message and do not fail.

I'll keep you informed.

Regards,
Frans.


On 29-05-18 06:43, Michael Shell wrote:

On Thu, 24 May 2018 16:21:53 +0200
Frans de Boer  wrote:


Attached is a picture with a repeatably failing boot when I build LFS
with systemd support. Any suggestion where I screwed-up?



   Frans,

A few lines before the crash, your systemd errored with:

"failed to insert module 'autofs4': No such file or directory"

That might have triggered the abort a few lines later.

Here:

https://www.linuxquestions.org/questions/linux-general-1/boot-problem-failed-to-insert-module-%27autofs4%27-4175485121/

they suggest recompiling the kernel with

CONFIG_AUTOFS4_FS=y

(under "Kernel automounter version 4 support (also supports v3)" in the
File systems config section) i.e., built-in rather than as a module.
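Before recompiling, it may be worth checking what your current kernel config actually says. A quick sketch (the .config path is a placeholder for wherever your kernel source tree lives):

```shell
# CONFIG_PATH is a placeholder -- point it at the .config you built
# the failing kernel from.
CONFIG_PATH=/usr/src/linux/.config
grep '^CONFIG_AUTOFS4_FS=' "$CONFIG_PATH"
# CONFIG_AUTOFS4_FS=y  -> built in (what the thread above suggests)
# CONFIG_AUTOFS4_FS=m  -> module (systemd must be able to modprobe it)
# no match             -> not enabled at all
```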

If that does not fix it, see here for how to use gdb on the systemd
core dump to see the sequence of events that led to the downfall:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=883690

and post the bt output for us to see. In any case, if you resolve
the problem, do post to the list what the answer turned out to be.


   Cheers,

   Mike Shell



Hello, it has taken some time before I could follow up on the given 
suggestion. As expected, it was to no avail. The error message about 
autofs4 has vanished, but the rest remains. So, still no solution.


As for the output of the gdb bt command, I need to rebuild everything 
again and skip the removal of debug information. I will start this 
evening, so hopefully I will have some more info tomorrow.


Regards,
Frans.


Alas, keeping debugging symbols did not work. I still get the message 
"no debug symbols found" and as a reaction to the bt command "no stack".


I am probably doing something wrong, but what can it be?

Regards,
Frans.


Re: [lfs-support] Rather a lot of glibc check errors

2018-06-17 Thread Hazel Russman
On Sat, 16 Jun 2018 15:16:06 +0100
Ken Moffat  wrote:

> On Sat, Jun 16, 2018 at 12:01:14PM +0100, Hazel Russman wrote:
> > On Thu, 14 Jun 2018 14:13:12 -0700
> > Paul Rogers  wrote:
> > 
> > Well, I continued and have just signed off a completely normal gcc test: 
> > only 1 unexpected error (in gcc itself, which I think I had the last time 
> > too), plus the 6 expected ones in libstdc++. I seem to have a sane 
> > toolchain.
> >   
> Good ;-)
> 
> > Interestingly gmp and mpfr both identify my machine as "nano-pc-linux-gnu" 
> > rather than x86_64-pc-linux-gnu, so Bruce is sort-of right when he calls it 
> > a different architecture. They both set themselves up to run with 
> > -march=nano -mtune=nano. What glibc thought it was building on, I have no 
> > idea, as I couldn't find a config.guess script in the package.
> >   
> 
> I'm surprised by nano-pc-linux-gnu.
> 
> But to find what -march=native will use, here are a couple of
> answers from
> 
> https://stackoverflow.com/questions/5470257/how-to-see-which-flags-march-native-will-activate
> 
> (there were other suggestions too, of course) -
> 
> The second answer suggested:
> 
>  gcc -march=native -E -v - </dev/null 2>&1 | grep cc1
> 
>  and also, to see the defines:
> 
>  echo | gcc -dM -E - -march=native
> 
> The third answer suggested:
> 
>  echo | gcc -### -E - -march=native
> 
> 
> ĸen
> -- 
For both 1 & 3, I get -march=nano -mtune=k8. The defines have x86_64, amd64 and 
k8 set, but nothing for nano. I know that amd64 is used by Debian for all x86_64 
processors, but k8 puzzles me. Isn't that also an AMD processor?

--
Hazel