[yocto] #dunfell

2022-11-03 Thread Frederic Martinsons
Hello, I'm currently migrating our system from warrior to dunfell and I have an 
issue with fitImage support for the aarch64 architecture. We use KERNEL_IMAGETYPE = 
"fitImage" and INITRAMFS_IMAGE_BUNDLE = "1" in our configuration, and the kernel 
do_deploy step fails because it cannot find the initramfs file:

```
| lib/modules/4.19.255-rt113-sigfox/kernel/drivers/usb/serial/usb_wwan.ko
| lib/modules/4.19.255-rt113-sigfox/kernel/drivers/usb/serial/option.ko
| lib/modules/4.19.255-rt113-sigfox/kernel/drivers/usb/class/
| lib/modules/4.19.255-rt113-sigfox/kernel/drivers/usb/class/cdc-wdm.ko
| lib/modules/4.19.255-rt113-sigfox/modules.order
| lib/modules/4.19.255-rt113-sigfox/modules.builtin
| install: cannot stat 'arch/arm64/boot/Image.initramfs': No such file or 
directory
| WARNING: exit code 1 from a shell command.
| ERROR: Execution of 
'/home/fmartinsons/TAPOS_build_for_dunfell/build-tapos/tmp/work/a3700-tapos-linux/linux-sbs/4.19.255+gitAUTOINC+5c7ccbe1aa-r4.17.1.1/temp/run.do_deploy.1460182'
 failed with exit code 1
```
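For reference, the relevant part of our configuration looks roughly like this (the initramfs image name below is only a placeholder, not our real one):

```
KERNEL_IMAGETYPE = "fitImage"
# bundle the initramfs into the kernel fitImage
INITRAMFS_IMAGE = "core-image-minimal-initramfs"
INITRAMFS_IMAGE_BUNDLE = "1"
```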

Doing some more research, I found that the problem comes from this change: 
https://git.openembedded.org/openembedded-core/commit/?id=526bdd88ccd758204452579333ba188e29270bde
I also found a commit introduced in kirkstone that fixes my issue (at least the 
part that reverts the previous commit): 
https://git.openembedded.org/openembedded-core/commit/?id=10a4a132e87e835726bf5da81a60f6f509b90765

Does somebody know why that commit was not backported to dunfell?

Regards.




[yocto] #yocto #dunfell

2022-09-19 Thread Monsees, Steven C (US) via lists.yoctoproject.org

I am seeing an error when I attempt to apply a patch, and could use some 
clarification or a way to resolve it.

Am I doing something wrong in patch creation?
What is meant by "Patch console.patch does not apply (enforce with -f)"?
How can I best resolve this issue?

I edited a file in the recipe's git repo in order to create the patch for the 
change:

git diff -u console.c

which I used as my patch:

diff --git a/lib/console.c b/lib/console.c
index 9c10560..b932a44 100644
--- a/lib/console.c
+++ b/lib/console.c
@@ -357,7 +357,7 @@ static struct {
{  EFI_SECURITY_VIOLATION, L"Security Violation"},
// warnings
-   {  EFI_WARN_UNKOWN_GLYPH,  L"Warning Unknown Glyph"},
+   {  EFI_WARN_UNKNOWN_GLYPH, L"Warning Unknown Glyph"},
{  EFI_WARN_DELETE_FAILURE,L"Warning Delete Failure"},
{  EFI_WARN_WRITE_FAILURE, L"Warning Write Failure"},
{  EFI_WARN_BUFFER_TOO_SMALL,  L"Warning Buffer Too Small"},
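For completeness, the patch is pulled into the build in the usual way, roughly like this (exact paths, and whether it lives in the recipe or a bbappend, may differ in my tree):

FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://console.patch"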

I am getting the following error when the patch is applied:

ERROR: efitools-native-1.7.0+gitAUTOINC+0649468475-r0 do_patch: Applying patch 
'console.patch' on target directory 
'/disk0/scratch/yocto/workspace/poky-dunfell-23.0.19/builds/sbca-default/tmp/work/x86_64-linux/efitools-native/1.7.0+gitAUTOINC+0649468475-r0/git'
Command Error: 'quilt --quiltrc 
/disk0/scratch/yocto/workspace/poky-dunfell-23.0.19/builds/sbca-default/tmp/work/x86_64-linux/efitools-native/1.7.0+gitAUTOINC+0649468475-r0/recipe-sysroot-native/etc/quiltrc
 push' exited with 0  Output:
Applying patch console.patch
patching file lib/console.c
Hunk #1 FAILED at 357.
1 out of 1 hunk FAILED -- rejects in file lib/console.c
Patch console.patch does not apply (enforce with -f)
ERROR: Logfile of failure stored in: 
/disk0/scratch/yocto/workspace/poky-dunfell-23.0.19/builds/sbca-default/tmp/work/x86_64-linux/efitools-native/1.7.0+gitAUTOINC+0649468475-r0/temp/log.do_patch.7615
ERROR: Task 
(/disk0/scratch/yocto/workspace/poky-dunfell-23.0.19/meta-bae/meta-limws/meta-intel/meta-sbca/recipes-bsp/efitools/efitools-native_git.bb:do_patch)
 failed with exit code '1'

Thanks,
Steve




Re: [yocto] #dunfell

2022-11-03 Thread Zoran
>>> - for imageType in ${KERNEL_IMAGETYPES} ; do
>>> + for imageType in ${KERNEL_IMAGETYPE_FOR_MAKE} ; do

Interesting... What is the difference between variables
${KERNEL_IMAGETYPES} and ${KERNEL_IMAGETYPE_FOR_MAKE} ?

Zee
___

On Thu, Nov 3, 2022 at 5:24 PM Frederic Martinsons
 wrote:
>
> Hello, I'm currently migrating our system from warrior to dunfell and I have 
> an issue on fitImage support for aarch64 architecture. We use 
> KERNEL_IMAGETYPE = "fitImage" and INITRAMFS_IMAGE_BUNDLE = "1" in our 
> configuration and the kernel do_deploy steps failed on not finding the 
> initramfs file:
>
> ```
> | lib/modules/4.19.255-rt113-sigfox/kernel/drivers/usb/serial/usb_wwan.ko
> | lib/modules/4.19.255-rt113-sigfox/kernel/drivers/usb/serial/option.ko
> | lib/modules/4.19.255-rt113-sigfox/kernel/drivers/usb/class/
> | lib/modules/4.19.255-rt113-sigfox/kernel/drivers/usb/class/cdc-wdm.ko
> | lib/modules/4.19.255-rt113-sigfox/modules.order
> | lib/modules/4.19.255-rt113-sigfox/modules.builtin
> | install: cannot stat 'arch/arm64/boot/Image.initramfs': No such file or 
> directory
> | WARNING: exit code 1 from a shell command.
> | ERROR: Execution of 
> '/home/fmartinsons/TAPOS_build_for_dunfell/build-tapos/tmp/work/a3700-tapos-linux/linux-sbs/4.19.255+gitAUTOINC+5c7ccbe1aa-r4.17.1.1/temp/run.do_deploy.1460182'
>  failed with exit code 1
> ```
>
> Doing some more research, I found that the problem came from this change 
> https://git.openembedded.org/openembedded-core/commit/?id=526bdd88ccd758204452579333ba188e29270bde
>  , I found a commit introduced in kirkstone that fix my issue (at least the 
> part which revert the previous commit): 
> https://git.openembedded.org/openembedded-core/commit/?id=10a4a132e87e835726bf5da81a60f6f509b90765
>
> Can somebody know why the commit was not back-ported in dunfell ?
>
> Regards.
> 
>




Re: [yocto] #dunfell

2022-11-04 Thread Frederic Martinsons
On Thu, Nov  3, 2022 at 11:38 PM, Zoran wrote:

>
> >>> - for imageType in ${KERNEL_IMAGETYPES} ; do
> >>> + for imageType in ${KERNEL_IMAGETYPE_FOR_MAKE} ; do
> 
> Interesting... What is the difference between variables
> ${KERNEL_IMAGETYPES} and ${KERNEL_IMAGETYPE_FOR_MAKE} ?
> 

In my setup I have:

* KERNEL_IMAGETYPES=fitImage (coming from KERNEL_IMAGETYPE in my machine configuration)
* KERNEL_IMAGETYPE_FOR_MAKE=Image (used only internally by kernel.bbclass, and set to Image because of [this line](https://github.com/openembedded/openembedded-core/blob/dunfell/meta/classes/kernel-fitimage.bbclass#L29))
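(Both values can be inspected in the expanded environment, e.g. with `bitbake -e virtual/kernel | grep KERNEL_IMAGETYPE`.)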





Re: [yocto] #dunfell

2022-11-28 Thread Frederic Martinsons
A ticket has been opened for backporting the fix: 
https://bugzilla.yoctoproject.org/show_bug.cgi?id=14971




[yocto] dunfell do_image_wic error

2022-07-04 Thread Mauro Ziliani

Hi all.

I updated dunfell to the latest version.

But now I get a Python exception from the do_image_wic task:



| Traceback (most recent call last):
|   File "/home/yocto/sources/poky/scripts/lib/wic/filemap.py", line 457, in get_mapped_ranges
| first_prev, last_prev = next(iterator)
| StopIteration
|
| The above exception was the direct cause of the following exception:
|
| Traceback (most recent call last):
|   File "/home/yocto/sources/poky/scripts/wic", line 542, in 
| sys.exit(main(sys.argv[1:]))
|   File "/home/yocto/sources/poky/scripts/wic", line 537, in main
| return hlp.invoke_subcommand(args, parser, hlp.wic_help_usage, subcommands)
|   File "/home/yocto/sources/poky/scripts/lib/wic/help.py", line 83, in invoke_subcommand
| subcmd[0](args, usage)
|   File "/home/yocto/sources/poky/scripts/wic", line 219, in wic_create_subcommand
| engine.wic_create(wks_file, rootfs_dir, bootimg_dir, kernel_dir,
|   File "/home/yocto/sources/poky/scripts/lib/wic/engine.py", line 190, in wic_create
| plugin.do_create()
|   File "/home/yocto/sources/poky/scripts/lib/wic/plugins/imager/direct.py", line 96, in do_create
| self.create()
|   File "/home/yocto/sources/poky/scripts/lib/wic/plugins/imager/direct.py", line 180, in create
| self._image.prepare(self)
|   File "/home/yocto/sources/poky/scripts/lib/wic/plugins/imager/direct.py", line 354, in prepare
| part.prepare(imager, imager.workdir, imager.oe_builddir,
|   File "/home/yocto/sources/poky/scripts/lib/wic/partition.py", line 182, in prepare
| plugin.do_prepare_partition(self, srcparams_dict, creator,
|   File "/home/yocto/sources/poky/scripts/lib/wic/plugins/source/rawcopy.py", line 68, in do_prepare_partition
| sparse_copy(src, dst)
|   File "/home/yocto/sources/poky/scripts/lib/wic/filemap.py", line 539, in sparse_copy
| for first, last in fmap.get_mapped_ranges(0, fmap.blocks_cnt):
| RuntimeError: generator raised StopIteration
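(For context: on Python 3.7 and newer, PEP 479 turns a StopIteration that escapes a generator body into a RuntimeError, which is why the bare next(iterator) call inside get_mapped_ranges now ends in "generator raised StopIteration" instead of simply ending the iteration.)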

MZ





Re: [yocto] #yocto #dunfell

2022-09-20 Thread Monsees, Steven C (US) via lists.yoctoproject.org

Any hints or ideas on what I might be doing wrong would be appreciated…

From: yocto@lists.yoctoproject.org  On Behalf Of 
Monsees, Steven C (US) via lists.yoctoproject.org
Sent: Monday, September 19, 2022 2:22 PM
To: yocto@lists.yoctoproject.org
Subject: [yocto] #yocto #dunfell

External Email Alert

This email has been sent from an account outside of the BAE Systems network.
Please treat the email with caution, especially if you are requested to click 
on a link, decrypt/open an attachment, or enable macros.  For further 
information on how to spot phishing, access “Cybersecurity OneSpace Page” and 
report phishing by clicking the button “Report Phishing” on the Outlook toolbar.



I am seeing an error when I attempt to apply a patch, and could use some 
clarification or way to resolve…

Am doing something wrong in patch creation ?
What is meant by :  ”Patch console.patch does not apply (enforce with -f)” ?
How can I best resolve this issue ?

I edited a file in the recipe’s git repo in order to create the patch for the 
change:

git diff -u console.c

which I used as my patch:

diff --git a/lib/console.c b/lib/console.c
index 9c10560..b932a44 100644
--- a/lib/console.c
+++ b/lib/console.c
@@ -357,7 +357,7 @@ static struct {
{  EFI_SECURITY_VIOLATION, L"Security Violation"},

// warnings
-   {  EFI_WARN_UNKOWN_GLYPH,  L"Warning Unknown Glyph"},
+   {  EFI_WARN_UNKNOWN_GLYPH, L"Warning Unknown Glyph"},
{  EFI_WARN_DELETE_FAILURE,L"Warning Delete Failure"},
{  EFI_WARN_WRITE_FAILURE, L"Warning Write Failure"},
{  EFI_WARN_BUFFER_TOO_SMALL,  L"Warning Buffer Too Small"},

I am getting the following error when the patch is applied:

ERROR: efitools-native-1.7.0+gitAUTOINC+0649468475-r0 do_patch: Applying patch 
'console.patch' on target directory 
'/disk0/scratch/yocto/workspace/poky-dunfell-23.0.19/builds/sbca-default/tmp/work/x86_64-linux/efitools-native/1.7.0+gitAUTOINC+0649468475-r0/git'
Command Error: 'quilt --quiltrc 
/disk0/scratch/yocto/workspace/poky-dunfell-23.0.19/builds/sbca-default/tmp/work/x86_64-linux/efitools-native/1.7.0+gitAUTOINC+0649468475-r0/recipe-sysroot-native/etc/quiltrc
 push' exited with 0  Output:
Applying patch console.patch
patching file lib/console.c
Hunk #1 FAILED at 357.
1 out of 1 hunk FAILED -- rejects in file lib/console.c
Patch console.patch does not apply (enforce with -f)
ERROR: Logfile of failure stored in: 
/disk0/scratch/yocto/workspace/poky-dunfell-23.0.19/builds/sbca-default/tmp/work/x86_64-linux/efitools-native/1.7.0+gitAUTOINC+0649468475-r0/temp/log.do_patch.7615
ERROR: Task 
(/disk0/scratch/yocto/workspace/poky-dunfell-23.0.19/meta-bae/meta-limws/meta-intel/meta-sbca/recipes-bsp/efitools/efitools-native_git.bb:do_patch)
 failed with exit code '1'

Thanks,
Steve




Re: [yocto] #yocto #dunfell

2022-09-20 Thread Ross Burton
Hi,

SBSA is EDK2 based, which is basically incredibly painful to patch.  The source 
files are Windows-style CR/LF, so your patch needs to have CR/LF, but if the 
whole patch has CR/LF then git “helpfully” transforms the patch to just 
Unix-style LF, which doesn’t apply.

The “solution” (workaround) is to convert the patch to LF encoding, and 
manually add the CR characters to the end of each line in the patch body.

The good news is that EDK2 is moving to LF encoding, so this will be solved at 
some point.

Ross


From: yocto@lists.yoctoproject.org  on behalf of 
Monsees, Steven C (US) via lists.yoctoproject.org 

Date: Tuesday, 20 September 2022 at 16:05
To: Monsees, Steven C (US) , 
yocto@lists.yoctoproject.org 
Subject: Re: [yocto] #yocto #dunfell

Any hints or ideas on what I might be doing wrong would be appreciated…

From: yocto@lists.yoctoproject.org  On Behalf Of 
Monsees, Steven C (US) via lists.yoctoproject.org
Sent: Monday, September 19, 2022 2:22 PM
To: yocto@lists.yoctoproject.org
Subject: [yocto] #yocto #dunfell

External Email Alert
This email has been sent from an account outside of the BAE Systems network.
Please treat the email with caution, especially if you are requested to click 
on a link, decrypt/open an attachment, or enable macros.  For further 
information on how to spot phishing, access “Cybersecurity OneSpace Page” and 
report phishing by clicking the button “Report Phishing” on the Outlook toolbar.


I am seeing an error when I attempt to apply a patch, and could use some 
clarification or way to resolve…

Am doing something wrong in patch creation ?
What is meant by :  ”Patch console.patch does not apply (enforce with -f)” ?
How can I best resolve this issue ?

I edited a file in the recipe’s git repo in order to create the patch for the 
change:

git diff -u console.c

which I used as my patch:

diff --git a/lib/console.c b/lib/console.c
index 9c10560..b932a44 100644
--- a/lib/console.c
+++ b/lib/console.c
@@ -357,7 +357,7 @@ static struct {
{  EFI_SECURITY_VIOLATION, L"Security Violation"},

// warnings
-   {  EFI_WARN_UNKOWN_GLYPH,  L"Warning Unknown Glyph"},
+   {  EFI_WARN_UNKNOWN_GLYPH, L"Warning Unknown Glyph"},
{  EFI_WARN_DELETE_FAILURE,L"Warning Delete Failure"},
{  EFI_WARN_WRITE_FAILURE, L"Warning Write Failure"},
{  EFI_WARN_BUFFER_TOO_SMALL,  L"Warning Buffer Too Small"},

I am getting the following error when the patch is applied:

ERROR: efitools-native-1.7.0+gitAUTOINC+0649468475-r0 do_patch: Applying patch 
'console.patch' on target directory 
'/disk0/scratch/yocto/workspace/poky-dunfell-23.0.19/builds/sbca-default/tmp/work/x86_64-linux/efitools-native/1.7.0+gitAUTOINC+0649468475-r0/git'
Command Error: 'quilt --quiltrc 
/disk0/scratch/yocto/workspace/poky-dunfell-23.0.19/builds/sbca-default/tmp/work/x86_64-linux/efitools-native/1.7.0+gitAUTOINC+0649468475-r0/recipe-sysroot-native/etc/quiltrc
 push' exited with 0  Output:
Applying patch console.patch
patching file lib/console.c
Hunk #1 FAILED at 357.
1 out of 1 hunk FAILED -- rejects in file lib/console.c
Patch console.patch does not apply (enforce with -f)
ERROR: Logfile of failure stored in: 
/disk0/scratch/yocto/workspace/poky-dunfell-23.0.19/builds/sbca-default/tmp/work/x86_64-linux/efitools-native/1.7.0+gitAUTOINC+0649468475-r0/temp/log.do_patch.7615
ERROR: Task 
(/disk0/scratch/yocto/workspace/poky-dunfell-23.0.19/meta-bae/meta-limws/meta-intel/meta-sbca/recipes-bsp/efitools/efitools-native_git.bb:do_patch)
 failed with exit code '1'

Thanks,
Steve




Re: [yocto] #yocto #dunfell

2022-09-20 Thread Monsees, Steven C (US) via lists.yoctoproject.org

Thanks... will give it a try.

From: Ross Burton 
Sent: Tuesday, September 20, 2022 11:38 AM
To: Monsees, Steven C (US) ; 
yocto@lists.yoctoproject.org
Subject: Re: [yocto] #yocto #dunfell

External Email Alert

This email has been sent from an account outside of the BAE Systems network.
Please treat the email with caution, especially if you are requested to click 
on a link, decrypt/open an attachment, or enable macros.  For further 
information on how to spot phishing, access "Cybersecurity OneSpace Page" and 
report phishing by clicking the button "Report Phishing" on the Outlook toolbar.


Hi,

SBSA is EDK2 based, which is basically incredibly painful to patch.  The source 
files are windows-style CR/LF so your patch needs to have CR/LF, but if the 
whole patch has CR/LF then git "helpfully" transforms the patch to just 
unix-style LF, which doesn't apply.

The "solution" (workaround) is to convert the patch to LF encoding, and 
manually add the CR characters to the end of each line in the patch body.

The good news is that EDK2 is moving to LF encoding, so this will be solved at 
some point.

Ross


From: yocto@lists.yoctoproject.org<mailto:yocto@lists.yoctoproject.org> 
mailto:yocto@lists.yoctoproject.org>> on behalf 
of Monsees, Steven C (US) via lists.yoctoproject.org 
mailto:steven.monsees=baesystems@lists.yoctoproject.org>>
Date: Tuesday, 20 September 2022 at 16:05
To: Monsees, Steven C (US) 
mailto:steven.mons...@baesystems.com>>, 
yocto@lists.yoctoproject.org<mailto:yocto@lists.yoctoproject.org> 
mailto:yocto@lists.yoctoproject.org>>
Subject: Re: [yocto] #yocto #dunfell

Any hints or ideas on what I might be doing wrong would be appreciated...

From: yocto@lists.yoctoproject.org<mailto:yocto@lists.yoctoproject.org> 
mailto:yocto@lists.yoctoproject.org>> On Behalf 
Of Monsees, Steven C (US) via lists.yoctoproject.org
Sent: Monday, September 19, 2022 2:22 PM
To: yocto@lists.yoctoproject.org<mailto:yocto@lists.yoctoproject.org>
Subject: [yocto] #yocto #dunfell

External Email Alert

This email has been sent from an account outside of the BAE Systems network.
Please treat the email with caution, especially if you are requested to click 
on a link, decrypt/open an attachment, or enable macros.  For further 
information on how to spot phishing, access "Cybersecurity OneSpace Page" and 
report phishing by clicking the button "Report Phishing" on the Outlook toolbar.



I am seeing an error when I attempt to apply a patch, and could use some 
clarification or way to resolve...

Am doing something wrong in patch creation ?
What is meant by :  "Patch console.patch does not apply (enforce with -f)" ?
How can I best resolve this issue ?

I edited a file in the recipe's git repo in order to create the patch for the 
change:

git diff -u console.c

which I used as my patch:

diff --git a/lib/console.c b/lib/console.c
index 9c10560..b932a44 100644
--- a/lib/console.c
+++ b/lib/console.c
@@ -357,7 +357,7 @@ static struct {
{  EFI_SECURITY_VIOLATION, L"Security Violation"},

// warnings
-   {  EFI_WARN_UNKOWN_GLYPH,  L"Warning Unknown Glyph"},
+   {  EFI_WARN_UNKNOWN_GLYPH, L"Warning Unknown Glyph"},
{  EFI_WARN_DELETE_FAILURE,L"Warning Delete Failure"},
{  EFI_WARN_WRITE_FAILURE, L"Warning Write Failure"},
{  EFI_WARN_BUFFER_TOO_SMALL,  L"Warning Buffer Too Small"},

I am getting the following error when the patch is applied:

ERROR: efitools-native-1.7.0+gitAUTOINC+0649468475-r0 do_patch: Applying patch 
'console.patch' on target directory 
'/disk0/scratch/yocto/workspace/poky-dunfell-23.0.19/builds/sbca-default/tmp/work/x86_64-linux/efitools-native/1.7.0+gitAUTOINC+0649468475-r0/git'
Command Error: 'quilt --quiltrc 
/disk0/scratch/yocto/workspace/poky-dunfell-23.0.19/builds/sbca-default/tmp/work/x86_64-linux/efitools-native/1.7.0+gitAUTOINC+0649468475-r0/recipe-sysroot-native/etc/quiltrc
 push' exited with 0  Output:
Applying patch console.patch
patching file lib/console.c
Hunk #1 FAILED at 357.
1 out of 1 hunk FAILED -- rejects in file lib/console.c
Patch console.patch does not apply (enforce with -f)
ERROR: Logfile of failure stored in: 
/disk0/scratch/yocto/workspace/poky-dunfell-23.0.19/builds/sbca-default/tmp/work/x86_64-linux/efitools-native/1.7.0+gitAUTOINC+0649468475-r0/temp/log.do_patch.7615
ERROR: Task 
(/disk0/scratch/yocto/workspace/poky-dunfell-23.0.19/meta-bae/meta-limws/meta-intel/meta-sbca/recipes-bsp/efitools/efitools-native_git.bb:do_patch)
 failed with exit code '1'

Thanks,
Steve


[yocto] Yocto Dunfell: u-boot-fw-utils ERROR #yocto #dunfell #swupdtae #libubootenv #imx6

2021-04-20 Thread anthony . marchand
Hello,
I'm currently migrating a Yocto project from zeus to the dunfell LTS to build my 
imx6 Linux system on this LTS release.

Almost everything works fine, except for the following message, which I'm trying 
to understand, when I compile the image:

-
ERROR: Multiple .bb files are due to be built which each provide 
u-boot-fw-utils:                       | ETA:  0:00:01
Dunfell/build/../meta-mymeta/recipes-bsp/u-boot/u-boot-fw-utils_2018.09.bb
Dunfell/build/../openembedded-core/meta/recipes-bsp/u-boot/libubootenv_0.3.1.bb
A list of tasks depending on these providers is shown and may help explain 
where the dependency comes from.
Dunfell/build/../meta-mymeta/recipes-bsp/u-boot/u-boot-fw-utils_2018.09.bb has 
unique dependees:

Dunfell/build/../openembedded-core/meta/recipes-bsp/u-boot/libubootenv_0.3.1.bb 
has unique dependees:
Dunfell/build/../meta-swupdate/recipes-support/swupdate/swupdate_2020.11.bb:do_build
Dunfell/build/../meta-swupdate/recipes-support/swupdate/swupdate_2020.11.bb:do_package
Dunfell/build/../meta-swupdate/recipes-support/swupdate/swupdate_2020.11.bb:do_prepare_recipe_sysroot
It could be that one recipe provides something the other doesn't and should. 
The following provider and runtime provider differences may be helpful.
Dunfell/build/../meta-mymeta/recipes-bsp/u-boot/u-boot-fw-utils_2018.09.bb has 
unique provides:

Dunfell/build/../meta-mymeta/recipes-bsp/u-boot/u-boot-fw-utils_2018.09.bb has 
unique rprovides:
u-boot-fw-utils-dev
u-boot-fw-utils-locale
u-boot-fw-utils-dbg
u-boot-fw-utils-staticdev
u-boot-fw-utils-doc
^u-boot-fw-utils-locale-.*
u-boot-fw-utils-src
Dunfell/build/../openembedded-core/meta/recipes-bsp/u-boot/libubootenv_0.3.1.bb 
has unique provides:
libubootenv
Dunfell/build/../openembedded-core/meta/recipes-bsp/u-boot/libubootenv_0.3.1.bb 
has unique rprovides:
libubootenv
libubootenv-src
libubootenv-bin
libubootenv-dbg
libubootenv-doc
^libubootenv-locale-.*
libubootenv-locale
libubootenv-dev
libubootenv-staticdev
-
Has anyone already met this problem? Do you know what I should look into 
to solve it?

Thanks in advance, best regards.




Re: [yocto] dunfell do_image_wic error

2022-07-04 Thread Mauro Ziliani

Sorry for the missing information.

I get this error when I do:

bitbake core-image-minimal

MZ

On 04/07/22 21:17, Mauro Ziliani wrote:

Hi all.

I update dunfell to last version.


But now I get a python exception error on do_image_wic task



| Traceback (most recent call last):
|   File "/home/yocto/sources/poky/scripts/lib/wic/filemap.py", line 
457, in get_mapped_ranges

| first_prev, last_prev = next(iterator)
| StopIteration
|
| The above exception was the direct cause of the following exception:
|
| Traceback (most recent call last):
|   File "/home/yocto/sources/poky/scripts/wic", line 542, in 
| sys.exit(main(sys.argv[1:]))
|   File "/home/yocto/sources/poky/scripts/wic", line 537, in main
| return hlp.invoke_subcommand(args, parser, hlp.wic_help_usage, 
subcommands)
|   File "/home/yocto/sources/poky/scripts/lib/wic/help.py", line 83, 
in invoke_subcommand

| subcmd[0](args, usage)
|   File "/home/yocto/sources/poky/scripts/wic", line 219, in 
wic_create_subcommand

| engine.wic_create(wks_file, rootfs_dir, bootimg_dir, kernel_dir,
|   File "/home/yocto/sources/poky/scripts/lib/wic/engine.py", line 
190, in wic_create

| plugin.do_create()
|   File 
"/home/yocto/sources/poky/scripts/lib/wic/plugins/imager/direct.py", 
line 96, in do_create

| self.create()
|   File 
"/home/yocto/sources/poky/scripts/lib/wic/plugins/imager/direct.py", 
line 180, in create

| self._image.prepare(self)
|   File 
"/home/yocto/sources/poky/scripts/lib/wic/plugins/imager/direct.py", 
line 354, in prepare

| part.prepare(imager, imager.workdir, imager.oe_builddir,
|   File "/home/yocto/sources/poky/scripts/lib/wic/partition.py", line 
182, in prepare

| plugin.do_prepare_partition(self, srcparams_dict, creator,
|   File 
"/home/yocto/sources/poky/scripts/lib/wic/plugins/source/rawcopy.py", 
line 68, in do_prepare_partition

| sparse_copy(src, dst)
|   File "/home/yocto/sources/poky/scripts/lib/wic/filemap.py", line 
539, in sparse_copy

| for first, last in fmap.get_mapped_ranges(0, fmap.blocks_cnt):
| RuntimeError: generator raised StopIteration

MZ








[yocto] #yocto #dunfell #qemux86-64

2022-08-31 Thread Monsees, Steven C (US) via lists.yoctoproject.org

Hello:

I am attempting to evaluate dunfell 23.0.17 on a CentOS 7 Linux box.

When building the defaults (core-image-minimal or core-image-sato), the build is 
clean with no errors/warnings, but on execution
I get the following error for both (see below)...

(1) Is this an issue with the release, and is there another release I 
should be using?

(2) Is there something I need to pre-configure for these builds to run 
properly?

(3) How might I resolve this issue?

06:57 smonsees@yix490016 
/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min>runqemu
 qemux86-64
runqemu - INFO - Running MACHINE=qemux86-64 bitbake -e ...
runqemu - INFO - Continuing with the following parameters:
KERNEL: 
[/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/bzImage--5.4.192+gitAUTOINC+3fecb08507_85f0668fea-r0-qemux86-64-20220830151817.bin]
MACHINE: [qemux86-64]
FSTYPE: [ext4]
ROOTFS: 
[/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-20220830151817.rootfs.ext4]
CONFFILE: 
[/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-20220830151817.qemuboot.conf]

runqemu - INFO - Setting up tap interface under sudo
[sudo] password for smonsees:
runqemu - INFO - Network configuration: 
ip=192.168.7.2::192.168.7.1:255.255.255.0
runqemu - INFO - Running 
/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/work/x86_64-linux/qemu-helper-native/1.0-r1/recipe-sysroot-native/usr/bin/qemu-system-x86_64
 -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:02 -netdev 
tap,id=net0,ifname=tap0,script=no,downscript=no -drive 
file=/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-20220830151817.rootfs.ext4,if=virtio,format=raw
 -show-cursor -usb -device usb-tablet -object 
rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0   -cpu 
core2duo -m 256 -serial mon:vc -serial null -kernel 
/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/bzImage--5.4.192+gitAUTOINC+3fecb08507_85f0668fea-r0-qemux86-64-20220830151817.bin
 -append 'root=/dev/vda rw  mem=256M ip=192.168.7.2::192.168.7.1:255.255.255.0 
oprofile.timer=1 '

runqemu - ERROR - Failed to run qemu: Could not initialize SDL(x11 not 
available) - exiting

runqemu - INFO - Cleaning up
Set 'tap0' nonpersistent

Thank you for your help,
Steve





[yocto] Yocto Dunfell: package.class --> dwarfsrcfiles

2020-07-21 Thread Jan Hannig
Hello,



with the upgrade from Yocto Zeus → Dunfell, we observe lots of messages when 
building our product which are hard to understand or to debug.

Specifically, it's the failure of the "do_package" task of a proprietary module 
written in C, with the following message:



ERROR: eds-1.0-r0 do_package: dwarfsrcfiles failed with exit code 1 (cmd was 
['dwarfsrcfiles', 
'/home/jhannig/workspace/build/mguard3_tmp/work/aarch64-mguard-linux/eds/1.0-r0/package/usr/lib/libhdb.a']):

dwarfsrcfiles: 
/home/jhannig/workspace/build/mguard3_tmp/work/aarch64-mguard-linux/eds/1.0-r0/package/usr/lib/libhdb.a:
 not a valid ELF file



ERROR: Logfile of failure stored in: 
/home/jhannig/workspace/build/mguard3_tmp/work/aarch64-mguard-linux/eds/1.0-r0/temp/log.do_package.13957

ERROR: Task 
(/home/jhannig/workspace/mguard/meta-mguard/recipes-core/eds/eds_1.0.bb:do_package)
 failed with exit code '1'



The following information may help to understand the problem:

- The code of this module wasn't changed, and it compiled without errors with 
release "Zeus"

- Examination of the file "libhdb.a" gives the following results:

- It is possible to unpack the archive file "libhdb.a":  
jhannig@jhannig:~/Archiv/MG-2436$ ar x libhdb.a

- Examining the content with "file *.o" 
[jhannig@jhannig:~/Archiv/MG-2436$ file *.o] gives the following results:

hdb.o: ELF 64-bit LSB relocatable, ARM aarch64, version 1 (SYSV), 
not stripped

hdbschema.o:   ELF 64-bit LSB relocatable, ARM aarch64, version 1 (SYSV), 
not stripped

hdbstaticschema.o: ELF 64-bit LSB relocatable, ARM aarch64, version 1 (SYSV), 
not stripped

utils.o:   ELF 64-bit LSB relocatable, ARM aarch64, version 1 (SYSV), 
not stripped

- This corresponds to the expectation and the intended character of the file

- Minor changes in the makefile command line didn't change anything in the 
result [EXTRA_OEMAKE += 'CC="${CC}" CPPFLAGS="${CPPFLAGS}" CFLAGS="${CFLAGS}" 
LDFLAGS="${LDFLAGS}" AR="${AR}" EDS_XML="${EDS_XML}"']

- Searching the internet for "dwarfsrcfiles" doesn't bring up any informative or 
documented result, so it isn't clear what exactly this tool does.


The following questions are asked to the community:

- Which error cases result in this error message?

- What changed with the new Yocto release, such that a build result is 
"suddenly" analyzed as a failure?

- Where exactly in the tool code is this error thrown? The message "not a valid 
ELF file" can't be found in the code

- What should be done with the archive file and its content to eliminate the 
error?

- Is this behavior well known, and is there any documentation to get 
information about the tool?



Thanks a lot for help,

kind regards



Jan Hannig

Research and Development



jhan...@phoenixcontact.com

www.phoenixcontact.com





...
PHOENIX CONTACT Cyber Security GmbH 
Richard-Willstätter-Straße 6  
D-12489 Berlin  
 
Register Court: AG Charlottenburg, HR B 202908 
Geschäftsführer/General Manager: Kilian Golm


[yocto] Yocto Dunfell: package.class --> dwarfsrcfiles

2020-07-23 Thread Jan Hannig
Hello, 

thanks for the reply and the useful hints concerning our questions!

After a debug session, it turned out that the *.a archive doesn't contain only 
*.o files, but also one *.c file.
That was missed during the first analysis.
Interestingly enough, the error appeared only with the Dunfell upgrade.
Deleting the *.c file from the archive corrected the error.
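(The stray member can be spotted directly with `ar t libhdb.a`, which simply lists the archive members.)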

Jan Hannig
Research and Development
jhan...@phoenixcontact.com
www.phoenixcontact.com



...
PHOENIX CONTACT Cyber Security GmbH 
Richard-Willstätter-Straße 6  
D-12489 Berlin  
 
Register Court: AG Charlottenburg, HR B 202908 
Geschäftsführer/General Manager: Kilian Golm


Re: [yocto] Yocto Dunfell: u-boot-fw-utils ERROR #yocto #dunfell #swupdtae #libubootenv #imx6

2021-04-20 Thread Mikko Rapeli
Hi,

On Tue, Apr 20, 2021 at 03:24:25AM -0700, anthony.march...@navocap.com wrote:
> Hello,
> I'm actually migrating a yocto project zeus version to dunfell LTS to build 
> my imx6 Linux system on this LTS release.
> 
> Almost all works fine except the following message I try to understand when I 
> try to compile the image:
> 
> -
> ERROR: Multiple .bb files are due to be built which each provide 
> u-boot-fw-utils:                       | ETA:  0:00:01
> Dunfell/build/../meta-mymeta/recipes-bsp/u-boot/u-boot-fw-utils_2018.09.bb
> Dunfell/build/../openembedded-core/meta/recipes-bsp/u-boot/libubootenv_0.3.1.bb
> A list of tasks depending on these providers is shown and may help explain 
> where the dependency comes from.
> Dunfell/build/../meta-mymeta/recipes-bsp/u-boot/u-boot-fw-utils_2018.09.bb 
> has unique dependees:
> 
> Dunfell/build/../openembedded-core/meta/recipes-bsp/u-boot/libubootenv_0.3.1.bb
>  has unique dependees:
> Dunfell/build/../meta-swupdate/recipes-support/swupdate/swupdate_2020.11.bb:do_build
> Dunfell/build/../meta-swupdate/recipes-support/swupdate/swupdate_2020.11.bb:do_package
> Dunfell/build/../meta-swupdate/recipes-support/swupdate/swupdate_2020.11.bb:do_prepare_recipe_sysroot
> It could be that one recipe provides something the other doesn't and should. 
> The following provider and runtime provider differences may be helpful.
> Dunfell/build/../meta-mymeta/recipes-bsp/u-boot/u-boot-fw-utils_2018.09.bb 
> has unique provides:
> 
> Dunfell/build/../meta-mymeta/recipes-bsp/u-boot/u-boot-fw-utils_2018.09.bb 
> has unique rprovides:
> u-boot-fw-utils-dev
> u-boot-fw-utils-locale
> u-boot-fw-utils-dbg
> u-boot-fw-utils-staticdev
> u-boot-fw-utils-doc
> ^u-boot-fw-utils-locale-.*
> u-boot-fw-utils-src
> Dunfell/build/../openembedded-core/meta/recipes-bsp/u-boot/libubootenv_0.3.1.bb
>  has unique provides:
> libubootenv
> Dunfell/build/../openembedded-core/meta/recipes-bsp/u-boot/libubootenv_0.3.1.bb
>  has unique rprovides:
> libubootenv
> libubootenv-src
> libubootenv-bin
> libubootenv-dbg
> libubootenv-doc
> ^libubootenv-locale-.*
> libubootenv-locale
> libubootenv-dev
> libubootenv-staticdev
> -
> Does someone already meet this problem? Do you know what I should look in for 
> to solve it?

There are multiple ways to fix this. One is to switch to using dunfell branches for the
BSP SW stack, if they are available. Alternatively you can use the old
zeus-based BSP SW stack, but for that you need to copy the files which the
BSP SW stack depends on from the poky zeus branch into the BSP SW meta layer.

Then things will work out with minor bug fixes or API changes.

For IMX*, I copied the old u-boot recipe files from the poky zeus branch over to
the BSP so that meta-freescale* and meta-imx find them and prefer them over
the newer u-boot version from the poky dunfell branch.
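Another option, if both recipes really have to stay visible to bitbake, is to pin
the preferred (runtime) provider explicitly, roughly like this (untested sketch,
pick whichever provider you actually want to keep):

PREFERRED_PROVIDER_u-boot-fw-utils = "u-boot-fw-utils"
PREFERRED_RPROVIDER_u-boot-fw-utils = "u-boot-fw-utils"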

Hope this helps,

-Mikko

> Thanks by advance, best reguards.

> 
> 
> 




[yocto] How to integrate kernel-module in yocto-dunfell? #yocto #dunfell #apt #tx2 #kernel

2020-09-02 Thread kilian . brandt
Hey community,

I'm trying to use this library [can-isotp][1] on a Jetson TX2 Board. The 
version of the running OS is: 0.0.8 (dunfell).

From the docs of the library I get the following instructions:

```
DOWNLOAD and BUILD

1. Download repository and enter the repositories root directory

 git clone https://github.com/hartkopp/can-isotp.git

 cd can-isotp

2. Build ISO-TP kernel module

  Ensure build dependencies are installed. E.g. for Debian (or Ubuntu):

 sudo apt-get install build-essential linux-headers-$(uname -r)

  To build:

 (you need to be in the repositories root path)

 make

  To install (optional):

 sudo make modules_install

3. When the PF_CAN core module is loaded ('modprobe can') the ISO-TP module
  can be loaded into the kernel with

  insmod ./net/can/can-isotp.ko

  When the can-isotp.ko module has been installed into the Linux Kernels
  modules directory (e.g. with 'make modules_install') the module should
  load automatically when opening a CAN_ISOTP socket.
```

I have some questions about that procedure. Can anybody give me a hint on how 
to install this module? Do I have to create a completely new Yocto image, or is 
there a way to install kernel modules afterwards? I tried to copy the folder 
onto the TX2 and install it by hand, but (as was to be expected) the make command 
is not found.

I would really appreciate any help! Best regards,

[1]: https://github.com/hartkopp/can-isotp


Re: [yocto] How to integrate kernel-module in yocto-dunfell? #yocto #dunfell #apt #tx2 #kernel

2020-09-02 Thread Marek Belisko
On Wed, Sep 2, 2020 at 1:13 PM  wrote:
>
> Hey community,
>
> I'm trying to use this library [can-isotp][1] on a Jetson TX2 Board. The 
> version of the running OS is: 0.0.8 (dunfell).
>
> From the docs of the library I get the following instructions:
>
> ```
>
> DOWNLOAD and BUILD
>
> 1. Download repository and enter the repositories root directory
>
>   git clone https://github.com/hartkopp/can-isotp.git
>
>   cd can-isotp
>
> 2. Build ISO-TP kernel module
>
>Ensure build dependencies are installed. E.g. for Debian (or Ubuntu):
>
>   sudo apt-get install build-essential linux-headers-$(uname -r)
>
>To build:
>
>   (you need to be in the repositories root path)
>
>   make
>
>To install (optional):
>
>   sudo make modules_install
>
>
> 3. When the PF_CAN core module is loaded ('modprobe can') the ISO-TP module
>can be loaded into the kernel with
>
>insmod ./net/can/can-isotp.ko
>
>When the can-isotp.ko module has been installed into the Linux Kernels
>modules directory (e.g. with 'make modules_install') the module should
>load automatically when opening a CAN_ISOTP socket.
>
> ```
>
> I have some questions about that procedure. Can anybody give me an hint on 
> how to install this module? Do I have to create a completly new yocto image 
> or is there any way to install kernel modules afterwards? I tried to copy the 
> folder onto the TX2 and install it by hand, but (as to be expected) the make 
> command is not found.
Please take a look at:
https://www.yoctoproject.org/docs/2.6/kernel-dev/kernel-dev.html#working-with-out-of-tree-modules
where it is described how to compile out-of-tree modules.
>
> I would really appreciate any help ! Best regards,
>
>
>   [1]: https://github.com/hartkopp/can-isotp
> 

marek


Re: [yocto] How to integrate kernel-module in yocto-dunfell? #yocto #dunfell #apt #tx2 #kernel

2020-09-02 Thread Khem Raj
On Wed, Sep 2, 2020 at 4:32 AM Marek Belisko  wrote:
>
> On Wed, Sep 2, 2020 at 1:13 PM  wrote:
> >
> > Hey community,
> >
> > I'm trying to use this library [can-isotp][1] on a Jetson TX2 Board. The 
> > version of the running OS is: 0.0.8 (dunfell).
> >
> > From the docs of the library I get the following instructions:
> >
> > ```
> >
> > DOWNLOAD and BUILD
> >
> > 1. Download repository and enter the repositories root directory
> >
> >   git clone https://github.com/hartkopp/can-isotp.git
> >
> >   cd can-isotp
> >
> > 2. Build ISO-TP kernel module
> >
> >Ensure build dependencies are installed. E.g. for Debian (or Ubuntu):
> >
> >   sudo apt-get install build-essential linux-headers-$(uname -r)
> >
> >To build:
> >
> >   (you need to be in the repositories root path)
> >
> >   make
> >
> >To install (optional):
> >
> >   sudo make modules_install
> >
> >
> > 3. When the PF_CAN core module is loaded ('modprobe can') the ISO-TP module
> >can be loaded into the kernel with
> >
> >insmod ./net/can/can-isotp.ko
> >
> >When the can-isotp.ko module has been installed into the Linux Kernels
> >modules directory (e.g. with 'make modules_install') the module should
> >load automatically when opening a CAN_ISOTP socket.
> >
> > ```
> >
> > I have some questions about that procedure. Can anybody give me an hint on 
> > how to install this module? Do I have to create a completly new yocto image 
> > or is there any way to install kernel modules afterwards? I tried to copy 
> > the folder onto the TX2 and install it by hand, but (as to be expected) the 
> > make command is not found.
> PLease take a look at :
> https://www.yoctoproject.org/docs/2.6/kernel-dev/kernel-dev.html#working-with-out-of-tree-modules
> where it is described how to compile out of tree modules.
> >

In this case you might be lucky, we already have a recipe:
http://cgit.openembedded.org/meta-openembedded/tree/meta-oe/recipes-extended/socketcan/can-isotp_git.bb?h=master
It's blacklisted because it needs work to fix the build on 5.2+ kernels.

> > I would really appreciate any help ! Best regards,
> >
> >
> >   [1]: https://github.com/hartkopp/can-isotp
> >
>
> marek
> 


Re: [yocto] How to integrate kernel-module in yocto-dunfell? #yocto #dunfell #apt #tx2 #kernel

2020-09-02 Thread kilian . brandt
Hi Khem, thanks for your reply! Now I am a bit confused. Marek has sent me 
another recipe:

> LICENSE = "GPLv2"
>
> LIC_FILES_CHKSUM = " file://COPYING;md5=72d977d697c3c05830fdff00a7448931 "
>
> SRCREV = "6003f9997587e6a563cebf1f246bcd0eb6deff3d"
>
> PV = "1.0+git${SRCPV}"
>
> SRC_URI = "git://github.com/hartkopp/can-isotp.git;protocol=https"
>
> S = "${WORKDIR}/git"
>
> inherit module
>
> EXTRA_OEMAKE += "KERNELDIR=${STAGING_KERNEL_DIR}"
Do you know if that is already fixed for kernel 5.2+?

Best regards!

> 
>


Re: [yocto] How to integrate kernel-module in yocto-dunfell? #yocto #dunfell #apt #tx2 #kernel

2020-09-03 Thread Marek Belisko
On Thu, Sep 3, 2020 at 8:56 AM  wrote:
>
> Hi Khem, thanks for your reply! Now I am a bit confused. Marek has sent me 
> another recipe:
The shared recipe should be the same as the one from meta-oe :). So you can
include it in your image and it should be built then.
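For example, roughly (a sketch, assuming the recipe keeps the name can-isotp):

IMAGE_INSTALL_append = " can-isotp"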
>
>  LICENSE = "GPLv2"
>
> LIC_FILES_CHKSUM = "file://COPYING;md5=72d977d697c3c05830fdff00a7448931"
>
> SRCREV = "6003f9997587e6a563cebf1f246bcd0eb6deff3d"
>
> PV = "1.0+git${SRCPV}"
>
>
>
> SRC_URI = "git://github.com/hartkopp/can-isotp.git;protocol=https"
>
>
>
> S = "${WORKDIR}/git"
>
>
>
> inherit module
>
>
>
> EXTRA_OEMAKE += "KERNELDIR=${STAGING_KERNEL_DIR}"
>
>
> Do you know if that is allready fixed for kernel 5,2+ ?
>
> Best regards!
>
> 

BR,

marek

-- 
as simple and primitive as possible
-
Marek Belisko - OPEN-NANDRA
Freelance Developer

Ruska Nova Ves 219 | Presov, 08005 Slovak Republic
Tel: +421 915 052 184
skype: marekwhite
twitter: #opennandra
web: http://open-nandra.com


[yocto] [dunfell] Remove hwclock #dunfell #yocto

2021-05-10 Thread Alexandre GAMBIER
Hi,

I would like to remove hwclock from the rootfs because we don't have an RTC.
Maybe later I'll replace it with fake-hwclock.

I'm using dunfell with IPK packages, and I tried to add the following settings 
in my image settings file (not all at the same time), but none of them removed 
hwclock.

* PACKAGE_EXCLUDE += " util-linux-hwclock "
* BAD_RECOMMENDATIONS += " util-linux-hwclock "
* IMAGE_INSTALL_remove += " util-linux-hwclock "

Is there a way to remove the package util-linux-hwclock?
I could use IMAGE_POSTPROCESS_COMMAND and write my own script to remove it, but 
I think it's better and safer to remove the package during the rootfs build.
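(For reference, the script fallback I have in mind would look roughly like the
following untested sketch; it would probably need to be hooked as a rootfs
post-process function so that it runs before the images are generated:

remove_hwclock () {
    # brute-force fallback: drop the binary from the generated rootfs
    rm -f ${IMAGE_ROOTFS}${base_sbindir}/hwclock
}
ROOTFS_POSTPROCESS_COMMAND += "remove_hwclock; "
)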

Thanks




[yocto] #dunfell #qt5 #raspberrypi #sdk #linux

2021-12-13 Thread arthur . forey
Hello everybody,

I come today to try to solve my problem. I am building a BSP and SDK for a 
Raspberry Pi (MACHINE = raspberrypi4-64) in the form of a compute module. I'm 
trying to compile this app for the Raspberry Pi: 
https://github.com/YvesBas/Tadarida-D/tree/master/sources

I know that in the Libs directory the libraries are compiled for the x86_64 
architecture. This is why I try to integrate them into the sources without this 
directory, and I am also modifying the .pro file to match the correct libs. 
These two libs are present in the image for the Raspberry Pi, but 
in the SDK I only have libsndfile1, so libfftw3.h / .so is missing.

Here are the elements for the creation of the BSP and the SDK:
Build Configuration:
BB_VERSION   = "1.46.0"
BUILD_SYS    = "x86_64-linux"
NATIVELSBSTRING  = "universal"
TARGET_SYS   = "aarch64-poky-linux"
MACHINE  = "raspberrypi4-64"
DISTRO   = "poky"
DISTRO_VERSION   = "3.1.12"
TUNE_FEATURES    = "aarch64 cortexa72 crc crypto"
TARGET_FPU   = ""
meta
meta-poky
meta-yocto-bsp   = "dunfell:cf5a00721f721d5077c73d1f4e812e5c79833fba"
meta-oe
meta-python
meta-networking
meta-multimedia  = "dunfell:69f94af4d91215e7d4e225bab54bf3bcfee42f1c"
meta-qt5 = "dunfell:b4d24d70aca75791902df5cd59a4f4a54aa4a125"
meta-raspberrypi = "dunfell:934064a01903b2ba9a82be93b3f0efdb4543a0e8"

In conf/local.conf :
IMAGE_INSTALL_append = " libfftw libfftwl libfftwf fftw-dev libsndfile1"

When I do bitbake meta-toolchain-qt5, I don't have libfftw3.h / .so.
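As far as I understand, IMAGE_INSTALL only affects the target image, not the SDK's 
target sysroot, which is populated from TOOLCHAIN_TARGET_TASK, so presumably 
something like the following would be needed (untested sketch, package names assumed):

TOOLCHAIN_TARGET_TASK_append = " fftw-dev libsndfile1-dev"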

What can I do?

Thanks a lot.

Arthur




Re: [yocto] #yocto #dunfell #qemux86-64

2022-08-31 Thread Alexander Kanavin
You need to run this in an X11 session, or use the 'nographic' parameter.
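(i.e. something like `runqemu qemux86-64 nographic`)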

Alex

On Wed, 31 Aug 2022 at 13:17, Monsees, Steven C (US) via
lists.yoctoproject.org
 wrote:
>
>
>
> Hello:
>
>
>
> I am attempting to evaluate dunfell -23.0.17, on a cetos7 linux box.
>
>
>
> When building the default : core-image-minimal or core-image-sato, clean with 
> no errors/warnings, on execution
>
> I get the following error for both (see below)…
>
>
>
> (1)Is this an issue with the release ?,  and is there another release I 
> should be using ?
>
> (2)Is there something I need to pre-configure for these builds to run 
> properly ?
>
> (3)How might I resolve this issue ?
>
>
>
> 06:57 smonsees@yix490016 
> /disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min>runqemu
>  qemux86-64
>
> runqemu - INFO - Running MACHINE=qemux86-64 bitbake -e ...
>
> runqemu - INFO - Continuing with the following parameters:
>
> KERNEL: 
> [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/bzImage--5.4.192+gitAUTOINC+3fecb08507_85f0668fea-r0-qemux86-64-20220830151817.bin]
>
> MACHINE: [qemux86-64]
>
> FSTYPE: [ext4]
>
> ROOTFS: 
> [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-20220830151817.rootfs.ext4]
>
> CONFFILE: 
> [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-20220830151817.qemuboot.conf]
>
>
>
> runqemu - INFO - Setting up tap interface under sudo
>
> [sudo] password for smonsees:
>
> runqemu - INFO - Network configuration: 
> ip=192.168.7.2::192.168.7.1:255.255.255.0
>
> runqemu - INFO - Running 
> /disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/work/x86_64-linux/qemu-helper-native/1.0-r1/recipe-sysroot-native/usr/bin/qemu-system-x86_64
>  -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:02 -netdev 
> tap,id=net0,ifname=tap0,script=no,downscript=no -drive 
> file=/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-20220830151817.rootfs.ext4,if=virtio,format=raw
>  -show-cursor -usb -device usb-tablet -object 
> rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0   
> -cpu core2duo -m 256 -serial mon:vc -serial null -kernel 
> /disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/bzImage--5.4.192+gitAUTOINC+3fecb08507_85f0668fea-r0-qemux86-64-20220830151817.bin
>  -append 'root=/dev/vda rw  mem=256M 
> ip=192.168.7.2::192.168.7.1:255.255.255.0 oprofile.timer=1 '
>
>
>
> runqemu - ERROR - Failed to run qemu: Could not initialize SDL(x11 not 
> available) - exiting
>
>
>
> runqemu - INFO - Cleaning up
>
> Set 'tap0' nonpersistent
>
>
>
> Thank you for your help,
>
> Steve
>
>
>
>
> 
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#57955): https://lists.yoctoproject.org/g/yocto/message/57955
Mute This Topic: https://lists.yoctoproject.org/mt/93368526/21656
Mute #yocto:https://lists.yoctoproject.org/g/yocto/mutehashtag/yocto
Mute #dunfell:https://lists.yoctoproject.org/g/yocto/mutehashtag/dunfell
Mute #qemux86-64:https://lists.yoctoproject.org/g/yocto/mutehashtag/qemux86-64
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] #yocto #dunfell #qemux86-64

2022-08-31 Thread Monsees, Steven C (US) via lists.yoctoproject.org

Thank you for the quick reply...

The nographic option does work, but the kernel comes up with errors...

Will full Wayland support be added to the dunfell release?

Steve

-Original Message-
From: yocto@lists.yoctoproject.org  On Behalf Of 
Alexander Kanavin
Sent: Wednesday, August 31, 2022 7:23 AM
To: Monsees, Steven C (US) 
Cc: yocto@lists.yoctoproject.org
Subject: Re: [yocto] #yocto #dunfell #qemux86-64

External Email Alert

This email has been sent from an account outside of the BAE Systems network.

Please treat the email with caution, especially if you are requested to click 
on a link, decrypt/open an attachment, or enable macros.  For further 
information on how to spot phishing, access “Cybersecurity OneSpace Page” and 
report phishing by clicking the button “Report Phishing” on the Outlook toolbar.


You need to run this in a X11 session, or use 'nographic' parameter.

Alex

On Wed, 31 Aug 2022 at 13:17, Monsees, Steven C (US) via lists.yoctoproject.org 
 wrote:
>
>
>
> Hello:
>
>
>
> I am attempting to evaluate dunfell -23.0.17, on a cetos7 linux box.
>
>
>
> When building the default : core-image-minimal or core-image-sato, 
> clean with no errors/warnings, on execution
>
> I get the following error for both (see below)…
>
>
>
> (1)Is this an issue with the release ?,  and is there another release I 
> should be using ?
>
> (2)Is there something I need to pre-configure for these builds to run 
> properly ?
>
> (3)How might I resolve this issue ?
>
>
>
> 06:57 smonsees@yix490016 
> /disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfe
> ll_Min>runqemu qemux86-64
>
> runqemu - INFO - Running MACHINE=qemux86-64 bitbake -e ...
>
> runqemu - INFO - Continuing with the following parameters:
>
> KERNEL: 
> [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunf
> ell_Min/tmp/deploy/images/qemux86-64/bzImage--5.4.192+gitAUTOINC+3fecb
> 08507_85f0668fea-r0-qemux86-64-20220830151817.bin]
>
> MACHINE: [qemux86-64]
>
> FSTYPE: [ext4]
>
> ROOTFS: 
> [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunf
> ell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-202
> 20830151817.rootfs.ext4]
>
> CONFFILE: 
> [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunf
> ell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-202
> 20830151817.qemuboot.conf]
>
>
>
> runqemu - INFO - Setting up tap interface under sudo
>
> [sudo] password for smonsees:
>
> runqemu - INFO - Network configuration: 
> ip=192.168.7.2::192.168.7.1:255.255.255.0
>
> runqemu - INFO - Running 
> /disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/work/x86_64-linux/qemu-helper-native/1.0-r1/recipe-sysroot-native/usr/bin/qemu-system-x86_64
>  -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:02 -netdev 
> tap,id=net0,ifname=tap0,script=no,downscript=no -drive 
> file=/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-20220830151817.rootfs.ext4,if=virtio,format=raw
>  -show-cursor -usb -device usb-tablet -object 
> rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0   
> -cpu core2duo -m 256 -serial mon:vc -serial null -kernel 
> /disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/bzImage--5.4.192+gitAUTOINC+3fecb08507_85f0668fea-r0-qemux86-64-20220830151817.bin
>  -append 'root=/dev/vda rw  mem=256M 
> ip=192.168.7.2::192.168.7.1:255.255.255.0 oprofile.timer=1 '
>
>
>
> runqemu - ERROR - Failed to run qemu: Could not initialize SDL(x11 not 
> available) - exiting
>
>
>
> runqemu - INFO - Cleaning up
>
> Set 'tap0' nonpersistent
>
>
>
> Thank you for your help,
>
> Steve
>
>
>
>
> 
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#57957): https://lists.yoctoproject.org/g/yocto/message/57957
Mute This Topic: https://lists.yoctoproject.org/mt/93368526/21656
Mute #yocto:https://lists.yoctoproject.org/g/yocto/mutehashtag/yocto
Mute #dunfell:https://lists.yoctoproject.org/g/yocto/mutehashtag/dunfell
Mute #qemux86-64:https://lists.yoctoproject.org/g/yocto/mutehashtag/qemux86-64
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] #yocto #dunfell #qemux86-64

2022-08-31 Thread Alexander Kanavin
I need to see the errors to give a useful answer. If you want Wayland,
use core-image-weston as the image.

Alex

On Wed, 31 Aug 2022 at 13:37, Monsees, Steven C (US) via
lists.yoctoproject.org
 wrote:
>
>
> Thank you for the quick replay...
>
> The no graphic option does work but the kernel comes up with errors...
>
> Will full wayland support be added to the dunfell release ?
>
> Steve
>
> -Original Message-
> From: yocto@lists.yoctoproject.org  On Behalf 
> Of Alexander Kanavin
> Sent: Wednesday, August 31, 2022 7:23 AM
> To: Monsees, Steven C (US) 
> Cc: yocto@lists.yoctoproject.org
> Subject: Re: [yocto] #yocto #dunfell #qemux86-64
>
> External Email Alert
>
> This email has been sent from an account outside of the BAE Systems network.
>
> Please treat the email with caution, especially if you are requested to click 
> on a link, decrypt/open an attachment, or enable macros.  For further 
> information on how to spot phishing, access “Cybersecurity OneSpace Page” and 
> report phishing by clicking the button “Report Phishing” on the Outlook 
> toolbar.
>
>
> You need to run this in a X11 session, or use 'nographic' parameter.
>
> Alex
>
> On Wed, 31 Aug 2022 at 13:17, Monsees, Steven C (US) via 
> lists.yoctoproject.org  
> wrote:
> >
> >
> >
> > Hello:
> >
> >
> >
> > I am attempting to evaluate dunfell -23.0.17, on a cetos7 linux box.
> >
> >
> >
> > When building the default : core-image-minimal or core-image-sato,
> > clean with no errors/warnings, on execution
> >
> > I get the following error for both (see below)…
> >
> >
> >
> > (1)Is this an issue with the release ?,  and is there another release I 
> > should be using ?
> >
> > (2)Is there something I need to pre-configure for these builds to run 
> > properly ?
> >
> > (3)How might I resolve this issue ?
> >
> >
> >
> > 06:57 smonsees@yix490016
> > /disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfe
> > ll_Min>runqemu qemux86-64
> >
> > runqemu - INFO - Running MACHINE=qemux86-64 bitbake -e ...
> >
> > runqemu - INFO - Continuing with the following parameters:
> >
> > KERNEL:
> > [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunf
> > ell_Min/tmp/deploy/images/qemux86-64/bzImage--5.4.192+gitAUTOINC+3fecb
> > 08507_85f0668fea-r0-qemux86-64-20220830151817.bin]
> >
> > MACHINE: [qemux86-64]
> >
> > FSTYPE: [ext4]
> >
> > ROOTFS:
> > [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunf
> > ell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-202
> > 20830151817.rootfs.ext4]
> >
> > CONFFILE:
> > [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunf
> > ell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-202
> > 20830151817.qemuboot.conf]
> >
> >
> >
> > runqemu - INFO - Setting up tap interface under sudo
> >
> > [sudo] password for smonsees:
> >
> > runqemu - INFO - Network configuration:
> > ip=192.168.7.2::192.168.7.1:255.255.255.0
> >
> > runqemu - INFO - Running 
> > /disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/work/x86_64-linux/qemu-helper-native/1.0-r1/recipe-sysroot-native/usr/bin/qemu-system-x86_64
> >  -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:02 -netdev 
> > tap,id=net0,ifname=tap0,script=no,downscript=no -drive 
> > file=/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-20220830151817.rootfs.ext4,if=virtio,format=raw
> >  -show-cursor -usb -device usb-tablet -object 
> > rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0   
> > -cpu core2duo -m 256 -serial mon:vc -serial null -kernel 
> > /disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/bzImage--5.4.192+gitAUTOINC+3fecb08507_85f0668fea-r0-qemux86-64-20220830151817.bin
> >  -append 'root=/dev/vda rw  mem=256M 
> > ip=192.168.7.2::192.168.7.1:255.255.255.0 oprofile.timer=1 '
> >
> >
> >
> > runqemu - ERROR - Failed to run qemu: Could not initialize SDL(x11 not
> > available) - exiting
> >
> >
> >
> > runqemu - INFO - Cleaning up
> >
> > Set 'tap0' nonpersistent
> >
> >
> >
> > Thank you for your help,
> >
> > Steve
> >
> >
> >
> >
> >
> >
>
> 
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#57958): https://lists.yoctoproject.org/g/yocto/message/57958
Mute This Topic: https://lists.yoctoproject.org/mt/93368526/21656
Mute #yocto:https://lists.yoctoproject.org/g/yocto/mutehashtag/yocto
Mute #dunfell:https://lists.yoctoproject.org/g/yocto/mutehashtag/dunfell
Mute #qemux86-64:https://lists.yoctoproject.org/g/yocto/mutehashtag/qemux86-64
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] #yocto #dunfell #qemux86-64

2022-08-31 Thread Monsees, Steven C (US) via lists.yoctoproject.org

One final question: does this issue exist in kirkstone-4.0.2?

Thanks,
Steve

-Original Message-
From: Alexander Kanavin  
Sent: Wednesday, August 31, 2022 7:42 AM
To: Monsees, Steven C (US) 
Cc: yocto@lists.yoctoproject.org
Subject: Re: [yocto] #yocto #dunfell #qemux86-64

External Email Alert

This email has been sent from an account outside of the BAE Systems network.

Please treat the email with caution, especially if you are requested to click 
on a link, decrypt/open an attachment, or enable macros.  For further 
information on how to spot phishing, access “Cybersecurity OneSpace Page” and 
report phishing by clicking the button “Report Phishing” on the Outlook toolbar.


I need to see the errors to give a useful answer. If you want wayland, use 
core-image-weston as the image.

Alex

On Wed, 31 Aug 2022 at 13:37, Monsees, Steven C (US) via lists.yoctoproject.org 
 wrote:
>
>
> Thank you for the quick replay...
>
> The no graphic option does work but the kernel comes up with errors...
>
> Will full wayland support be added to the dunfell release ?
>
> Steve
>
> -Original Message-
> From: yocto@lists.yoctoproject.org  On 
> Behalf Of Alexander Kanavin
> Sent: Wednesday, August 31, 2022 7:23 AM
> To: Monsees, Steven C (US) 
> Cc: yocto@lists.yoctoproject.org
> Subject: Re: [yocto] #yocto #dunfell #qemux86-64
>
> External Email Alert
>
> This email has been sent from an account outside of the BAE Systems network.
>
> Please treat the email with caution, especially if you are requested to click 
> on a link, decrypt/open an attachment, or enable macros.  For further 
> information on how to spot phishing, access “Cybersecurity OneSpace Page” and 
> report phishing by clicking the button “Report Phishing” on the Outlook 
> toolbar.
>
>
> You need to run this in a X11 session, or use 'nographic' parameter.
>
> Alex
>
> On Wed, 31 Aug 2022 at 13:17, Monsees, Steven C (US) via 
> lists.yoctoproject.org  
> wrote:
> >
> >
> >
> > Hello:
> >
> >
> >
> > I am attempting to evaluate dunfell -23.0.17, on a cetos7 linux box.
> >
> >
> >
> > When building the default : core-image-minimal or core-image-sato, 
> > clean with no errors/warnings, on execution
> >
> > I get the following error for both (see below)…
> >
> >
> >
> > (1)Is this an issue with the release ?,  and is there another release I 
> > should be using ?
> >
> > (2)Is there something I need to pre-configure for these builds to run 
> > properly ?
> >
> > (3)How might I resolve this issue ?
> >
> >
> >
> > 06:57 smonsees@yix490016
> > /disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dun
> > fe
> > ll_Min>runqemu qemux86-64
> >
> > runqemu - INFO - Running MACHINE=qemux86-64 bitbake -e ...
> >
> > runqemu - INFO - Continuing with the following parameters:
> >
> > KERNEL:
> > [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/du
> > nf 
> > ell_Min/tmp/deploy/images/qemux86-64/bzImage--5.4.192+gitAUTOINC+3fe
> > cb 08507_85f0668fea-r0-qemux86-64-20220830151817.bin]
> >
> > MACHINE: [qemux86-64]
> >
> > FSTYPE: [ext4]
> >
> > ROOTFS:
> > [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/du
> > nf
> > ell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-2
> > 02
> > 20830151817.rootfs.ext4]
> >
> > CONFFILE:
> > [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/du
> > nf
> > ell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-2
> > 02
> > 20830151817.qemuboot.conf]
> >
> >
> >
> > runqemu - INFO - Setting up tap interface under sudo
> >
> > [sudo] password for smonsees:
> >
> > runqemu - INFO - Network configuration:
> > ip=192.168.7.2::192.168.7.1:255.255.255.0
> >
> > runqemu - INFO - Running 
> > /disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/work/x86_64-linux/qemu-helper-native/1.0-r1/recipe-sysroot-native/usr/bin/qemu-system-x86_64
> >  -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:02 -netdev 
> > tap,id=net0,ifname=tap0,script=no,downscript=no -drive 
> > file=/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-20220830151817.rootfs.ext4,if=virtio,format=raw
> >  -show-cursor -usb -device usb-tablet -object 
> > rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0   
> > -cpu core2duo -m 2

Re: [yocto] #yocto #dunfell #qemux86-64

2022-08-31 Thread Alexander Kanavin
The issue is on your side, not on yocto's. By default qemu is
configured to show graphical output in an X11 window, so you need to
run it in an X11 session, with a correctly set DISPLAY environment
variable.
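
For example, over SSH something like this usually gives qemu a usable display
(the user and host names are just placeholders):

  ssh -X builduser@buildhost
  echo $DISPLAY        # should show something like localhost:10.0
  runqemu qemux86-64

Or run runqemu from a terminal inside a local X11 session where DISPLAY is
already set.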

Alex

On Wed, 31 Aug 2022 at 14:10, Monsees, Steven C (US)
 wrote:
>
>
> One final question, does this issue exist in kirkstone-4.0.2 ?
>
> Thanks,
> Steve
>
> -Original Message-
> From: Alexander Kanavin 
> Sent: Wednesday, August 31, 2022 7:42 AM
> To: Monsees, Steven C (US) 
> Cc: yocto@lists.yoctoproject.org
> Subject: Re: [yocto] #yocto #dunfell #qemux86-64
>
> External Email Alert
>
> This email has been sent from an account outside of the BAE Systems network.
>
> Please treat the email with caution, especially if you are requested to click 
> on a link, decrypt/open an attachment, or enable macros.  For further 
> information on how to spot phishing, access “Cybersecurity OneSpace Page” and 
> report phishing by clicking the button “Report Phishing” on the Outlook 
> toolbar.
>
>
> I need to see the errors to give a useful answer. If you want wayland, use 
> core-image-weston as the image.
>
> Alex
>
> On Wed, 31 Aug 2022 at 13:37, Monsees, Steven C (US) via 
> lists.yoctoproject.org  
> wrote:
> >
> >
> > Thank you for the quick replay...
> >
> > The no graphic option does work but the kernel comes up with errors...
> >
> > Will full wayland support be added to the dunfell release ?
> >
> > Steve
> >
> > -Original Message-
> > From: yocto@lists.yoctoproject.org  On
> > Behalf Of Alexander Kanavin
> > Sent: Wednesday, August 31, 2022 7:23 AM
> > To: Monsees, Steven C (US) 
> > Cc: yocto@lists.yoctoproject.org
> > Subject: Re: [yocto] #yocto #dunfell #qemux86-64
> >
> > External Email Alert
> >
> > This email has been sent from an account outside of the BAE Systems network.
> >
> > Please treat the email with caution, especially if you are requested to 
> > click on a link, decrypt/open an attachment, or enable macros.  For further 
> > information on how to spot phishing, access “Cybersecurity OneSpace Page” 
> > and report phishing by clicking the button “Report Phishing” on the Outlook 
> > toolbar.
> >
> >
> > You need to run this in a X11 session, or use 'nographic' parameter.
> >
> > Alex
> >
> > On Wed, 31 Aug 2022 at 13:17, Monsees, Steven C (US) via 
> > lists.yoctoproject.org 
> >  wrote:
> > >
> > >
> > >
> > > Hello:
> > >
> > >
> > >
> > > I am attempting to evaluate dunfell -23.0.17, on a cetos7 linux box.
> > >
> > >
> > >
> > > When building the default : core-image-minimal or core-image-sato,
> > > clean with no errors/warnings, on execution
> > >
> > > I get the following error for both (see below)…
> > >
> > >
> > >
> > > (1)Is this an issue with the release ?,  and is there another release 
> > > I should be using ?
> > >
> > > (2)Is there something I need to pre-configure for these builds to run 
> > > properly ?
> > >
> > > (3)How might I resolve this issue ?
> > >
> > >
> > >
> > > 06:57 smonsees@yix490016
> > > /disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dun
> > > fe
> > > ll_Min>runqemu qemux86-64
> > >
> > > runqemu - INFO - Running MACHINE=qemux86-64 bitbake -e ...
> > >
> > > runqemu - INFO - Continuing with the following parameters:
> > >
> > > KERNEL:
> > > [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/du
> > > nf
> > > ell_Min/tmp/deploy/images/qemux86-64/bzImage--5.4.192+gitAUTOINC+3fe
> > > cb 08507_85f0668fea-r0-qemux86-64-20220830151817.bin]
> > >
> > > MACHINE: [qemux86-64]
> > >
> > > FSTYPE: [ext4]
> > >
> > > ROOTFS:
> > > [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/du
> > > nf
> > > ell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-2
> > > 02
> > > 20830151817.rootfs.ext4]
> > >
> > > CONFFILE:
> > > [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/du
> > > nf
> > > ell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-2
> > > 02
> > > 20830151817.qemuboot.conf]
> > >
> > >
> > >
> > > runqemu - INFO - Setting up tap interface und

Re: [yocto] #yocto #dunfell #qemux86-64

2022-08-31 Thread Monsees, Steven C (US) via lists.yoctoproject.org

I will have to double-check, but the issue did not appear under zeus... 
so qemu changed sometime after zeus?

-Original Message-
From: Alexander Kanavin  
Sent: Wednesday, August 31, 2022 8:12 AM
To: Monsees, Steven C (US) 
Cc: yocto@lists.yoctoproject.org
Subject: Re: [yocto] #yocto #dunfell #qemux86-64

External Email Alert

This email has been sent from an account outside of the BAE Systems network.

Please treat the email with caution, especially if you are requested to click 
on a link, decrypt/open an attachment, or enable macros.  For further 
information on how to spot phishing, access “Cybersecurity OneSpace Page” and 
report phishing by clicking the button “Report Phishing” on the Outlook toolbar.


The issue is on your side, not on yocto's. By default qemu is configured to 
show graphical output in a X11 window, so you need to run it in a X11 session, 
with correctly set DISPLAY environment variable.

Alex

On Wed, 31 Aug 2022 at 14:10, Monsees, Steven C (US) 
 wrote:
>
>
> One final question, does this issue exist in kirkstone-4.0.2 ?
>
> Thanks,
> Steve
>
> -Original Message-
> From: Alexander Kanavin 
> Sent: Wednesday, August 31, 2022 7:42 AM
> To: Monsees, Steven C (US) 
> Cc: yocto@lists.yoctoproject.org
> Subject: Re: [yocto] #yocto #dunfell #qemux86-64
>
> External Email Alert
>
> This email has been sent from an account outside of the BAE Systems network.
>
> Please treat the email with caution, especially if you are requested to click 
> on a link, decrypt/open an attachment, or enable macros.  For further 
> information on how to spot phishing, access “Cybersecurity OneSpace Page” and 
> report phishing by clicking the button “Report Phishing” on the Outlook 
> toolbar.
>
>
> I need to see the errors to give a useful answer. If you want wayland, use 
> core-image-weston as the image.
>
> Alex
>
> On Wed, 31 Aug 2022 at 13:37, Monsees, Steven C (US) via 
> lists.yoctoproject.org  
> wrote:
> >
> >
> > Thank you for the quick replay...
> >
> > The no graphic option does work but the kernel comes up with errors...
> >
> > Will full wayland support be added to the dunfell release ?
> >
> > Steve
> >
> > -Original Message-
> > From: yocto@lists.yoctoproject.org  On 
> > Behalf Of Alexander Kanavin
> > Sent: Wednesday, August 31, 2022 7:23 AM
> > To: Monsees, Steven C (US) 
> > Cc: yocto@lists.yoctoproject.org
> > Subject: Re: [yocto] #yocto #dunfell #qemux86-64
> >
> > External Email Alert
> >
> > This email has been sent from an account outside of the BAE Systems network.
> >
> > Please treat the email with caution, especially if you are requested to 
> > click on a link, decrypt/open an attachment, or enable macros.  For further 
> > information on how to spot phishing, access “Cybersecurity OneSpace Page” 
> > and report phishing by clicking the button “Report Phishing” on the Outlook 
> > toolbar.
> >
> >
> > You need to run this in a X11 session, or use 'nographic' parameter.
> >
> > Alex
> >
> > On Wed, 31 Aug 2022 at 13:17, Monsees, Steven C (US) via 
> > lists.yoctoproject.org 
> >  wrote:
> > >
> > >
> > >
> > > Hello:
> > >
> > >
> > >
> > > I am attempting to evaluate dunfell -23.0.17, on a cetos7 linux box.
> > >
> > >
> > >
> > > When building the default : core-image-minimal or core-image-sato, 
> > > clean with no errors/warnings, on execution
> > >
> > > I get the following error for both (see below)…
> > >
> > >
> > >
> > > (1)Is this an issue with the release ?,  and is there another release 
> > > I should be using ?
> > >
> > > (2)Is there something I need to pre-configure for these builds to run 
> > > properly ?
> > >
> > > (3)How might I resolve this issue ?
> > >
> > >
> > >
> > > 06:57 smonsees@yix490016
> > > /disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/d
> > > un
> > > fe
> > > ll_Min>runqemu qemux86-64
> > >
> > > runqemu - INFO - Running MACHINE=qemux86-64 bitbake -e ...
> > >
> > > runqemu - INFO - Continuing with the following parameters:
> > >
> > > KERNEL:
> > > [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/
> > > du
> > > nf
> > > ell_Min/tmp/deploy/images/qemux86-64/bzImage--5.4.192+gitAUTOINC+3
> > > fe cb 08507_85f0668fea-r0-qemux86-64-2022083015181

Re: [yocto] #yocto #dunfell #qemux86-64

2022-08-31 Thread Pavel Zhukov

"Monsees, Steven C (US) via lists.yoctoproject.org" 
 writes:

> I will have to double check, but the issue did not appear under zeus... 
> so qemu changed sometime after zeus ?
Not qemu itself, but the runqemu script was changed not so long ago (this
year) to pass the `-display sdl` parameter unless the nographic option has 
been specified. As mentioned, you have to use the nographic option to run
runqemu without the SDL frontend.

>
> -Original Message-
> From: Alexander Kanavin  
> Sent: Wednesday, August 31, 2022 8:12 AM
> To: Monsees, Steven C (US) 
> Cc: yocto@lists.yoctoproject.org
> Subject: Re: [yocto] #yocto #dunfell #qemux86-64
>
> External Email Alert
>
> This email has been sent from an account outside of the BAE Systems network.
>
> Please treat the email with caution, especially if you are requested
> to click on a link, decrypt/open an attachment, or enable macros.  For
> further information on how to spot phishing, access “Cybersecurity
> OneSpace Page” and report phishing by clicking the button “Report
> Phishing” on the Outlook toolbar.
>
>
> The issue is on your side, not on yocto's. By default qemu is
> configured to show graphical output in a X11 window, so you need to
> run it in a X11 session, with correctly set DISPLAY environment
> variable.
>
> Alex
>
> On Wed, 31 Aug 2022 at 14:10, Monsees, Steven C (US) 
>  wrote:
>>
>>
>> One final question, does this issue exist in kirkstone-4.0.2 ?
>>
>> Thanks,
>> Steve
>>
>> -Original Message-
>> From: Alexander Kanavin 
>> Sent: Wednesday, August 31, 2022 7:42 AM
>> To: Monsees, Steven C (US) 
>> Cc: yocto@lists.yoctoproject.org
>> Subject: Re: [yocto] #yocto #dunfell #qemux86-64
>>
>> External Email Alert
>>
>> This email has been sent from an account outside of the BAE Systems network.
>>
>> Please treat the email with caution, especially if you are requested
>> to click on a link, decrypt/open an attachment, or enable macros.
>> For further information on how to spot phishing, access
>> “Cybersecurity OneSpace Page” and report phishing by clicking the
>> button “Report Phishing” on the Outlook toolbar.
>>
>>
>> I need to see the errors to give a useful answer. If you want wayland, use 
>> core-image-weston as the image.
>>
>> Alex
>>
>> On Wed, 31 Aug 2022 at 13:37, Monsees, Steven C (US) via 
>> lists.yoctoproject.org 
>>  wrote:
>> >
>> >
>> > Thank you for the quick replay...
>> >
>> > The no graphic option does work but the kernel comes up with errors...
>> >
>> > Will full wayland support be added to the dunfell release ?
>> >
>> > Steve
>> >
>> > -Original Message-
>> > From: yocto@lists.yoctoproject.org  On 
>> > Behalf Of Alexander Kanavin
>> > Sent: Wednesday, August 31, 2022 7:23 AM
>> > To: Monsees, Steven C (US) 
>> > Cc: yocto@lists.yoctoproject.org
>> > Subject: Re: [yocto] #yocto #dunfell #qemux86-64
>> >
>> > External Email Alert
>> >
>> > This email has been sent from an account outside of the BAE Systems 
>> > network.
>> >
>> > Please treat the email with caution, especially if you are
>> > requested to click on a link, decrypt/open an attachment, or
>> > enable macros.  For further information on how to spot phishing,
>> > access “Cybersecurity OneSpace Page” and report phishing by
>> > clicking the button “Report Phishing” on the Outlook toolbar.
>> >
>> >
>> > You need to run this in a X11 session, or use 'nographic' parameter.
>> >
>> > Alex
>> >
>> > On Wed, 31 Aug 2022 at 13:17, Monsees, Steven C (US) via 
>> > lists.yoctoproject.org 
>> >  wrote:
>> > >
>> > >
>> > >
>> > > Hello:
>> > >
>> > >
>> > >
>> > > I am attempting to evaluate dunfell -23.0.17, on a cetos7 linux box.
>> > >
>> > >
>> > >
>> > > When building the default : core-image-minimal or core-image-sato, 
>> > > clean with no errors/warnings, on execution
>> > >
>> > > I get the following error for both (see below)…
>> > >
>> > >
>> > >
>> > > (1)Is this an issue with the release ?,  and is there another 
>> > > release I should be using ?
>> > >
>> > > (2)Is there something I need to pre-configure for these builds to 
>> &

Re: [yocto] #yocto #dunfell #qemux86-64

2022-08-31 Thread Khem Raj

On 8/31/22 4:36 AM, Monsees, Steven C (US) via lists.yoctoproject.org wrote:


Thank you for the quick replay...

The no graphic option does work but the kernel comes up with errors...

Will full wayland support be added to the dunfell release ?



You can use the VNC backend, which is enabled by default, by running:

runqemu publicvnc

I wish runqemu were smart enough to prioritize an interactive UI, e.g.
sdl/gtk, fall back to vnc if that is not available, and then use nographic,
in that order.
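
For reference, the VNC route looks roughly like this; publicvnc makes qemu
listen on the standard VNC display :0 (TCP port 5900), and the host name below
is just a placeholder:

  runqemu qemux86-64 publicvnc
  # then, from another machine:
  vncviewer buildhost:0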

And please look at kirkstone if you are evaluating the next LTS; dunfell is 
still supported but is already inching towards its end, and comparatively you 
get more modern package versions with kirkstone.



Steve

-Original Message-
From: yocto@lists.yoctoproject.org  On Behalf Of 
Alexander Kanavin
Sent: Wednesday, August 31, 2022 7:23 AM
To: Monsees, Steven C (US) 
Cc: yocto@lists.yoctoproject.org
Subject: Re: [yocto] #yocto #dunfell #qemux86-64

External Email Alert

This email has been sent from an account outside of the BAE Systems network.

Please treat the email with caution, especially if you are requested to click 
on a link, decrypt/open an attachment, or enable macros.  For further 
information on how to spot phishing, access “Cybersecurity OneSpace Page” and 
report phishing by clicking the button “Report Phishing” on the Outlook toolbar.


You need to run this in a X11 session, or use 'nographic' parameter.

Alex

On Wed, 31 Aug 2022 at 13:17, Monsees, Steven C (US) via lists.yoctoproject.org 
 wrote:




Hello:



I am attempting to evaluate dunfell -23.0.17, on a cetos7 linux box.



When building the default : core-image-minimal or core-image-sato,
clean with no errors/warnings, on execution

I get the following error for both (see below)…



(1)Is this an issue with the release ?,  and is there another release I 
should be using ?

(2)Is there something I need to pre-configure for these builds to run 
properly ?

(3)How might I resolve this issue ?



06:57 smonsees@yix490016
/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfe
ll_Min>runqemu qemux86-64

runqemu - INFO - Running MACHINE=qemux86-64 bitbake -e ...

runqemu - INFO - Continuing with the following parameters:

KERNEL:
[/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunf
ell_Min/tmp/deploy/images/qemux86-64/bzImage--5.4.192+gitAUTOINC+3fecb
08507_85f0668fea-r0-qemux86-64-20220830151817.bin]

MACHINE: [qemux86-64]

FSTYPE: [ext4]

ROOTFS:
[/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunf
ell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-202
20830151817.rootfs.ext4]

CONFFILE:
[/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunf
ell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-202
20830151817.qemuboot.conf]



runqemu - INFO - Setting up tap interface under sudo

[sudo] password for smonsees:

runqemu - INFO - Network configuration:
ip=192.168.7.2::192.168.7.1:255.255.255.0

runqemu - INFO - Running 
/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/work/x86_64-linux/qemu-helper-native/1.0-r1/recipe-sysroot-native/usr/bin/qemu-system-x86_64
 -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:02 -netdev 
tap,id=net0,ifname=tap0,script=no,downscript=no -drive 
file=/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-20220830151817.rootfs.ext4,if=virtio,format=raw
 -show-cursor -usb -device usb-tablet -object 
rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0   -cpu 
core2duo -m 256 -serial mon:vc -serial null -kernel 
/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/bzImage--5.4.192+gitAUTOINC+3fecb08507_85f0668fea-r0-qemux86-64-20220830151817.bin
 -append 'root=/dev/vda rw  mem=256M ip=192.168.7.2::192.168.7.1:255.255.255.0 
oprofile.timer=1 '



runqemu - ERROR - Failed to run qemu: Could not initialize SDL(x11 not
available) - exiting



runqemu - INFO - Cleaning up

Set 'tap0' nonpersistent



Thank you for your help,

Steve













-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#57964): https://lists.yoctoproject.org/g/yocto/message/57964
Mute This Topic: https://lists.yoctoproject.org/mt/93368526/21656
Mute #yocto:https://lists.yoctoproject.org/g/yocto/mutehashtag/yocto
Mute #dunfell:https://lists.yoctoproject.org/g/yocto/mutehashtag/dunfell
Mute #qemux86-64:https://lists.yoctoproject.org/g/yocto/mutehashtag/qemux86-64
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] #yocto #dunfell #qemux86-64

2022-08-31 Thread Monsees, Steven C (US) via lists.yoctoproject.org
Thank you, and I am looking at kirkstone...

-Original Message-
From: Khem Raj  
Sent: Wednesday, August 31, 2022 12:38 PM
To: Monsees, Steven C (US) ; Alexander Kanavin 

Cc: yocto@lists.yoctoproject.org
Subject: Re: [yocto] #yocto #dunfell #qemux86-64

On 8/31/22 4:36 AM, Monsees, Steven C (US) via lists.yoctoproject.org wrote:
> 
> Thank you for the quick replay...
> 
> The no graphic option does work but the kernel comes up with errors...
> 
> Will full wayland support be added to the dunfell release ?
> 

you can use vnc backend which is enabled by default. by using runqemu publicvnc

I wish runqemu was smart enough to prioritize, interactive UI e.g. 
sdl/gtk if not available then fall back to vnc, and then use nographic in order.

and please look at kirkstone if you are evaluating next LTS, dunfell is still 
supported but is already inching towards its end comparatively you get more 
modern package versions with kirkstone.

> Steve
> 
> -Original Message-
> From: yocto@lists.yoctoproject.org  On 
> Behalf Of Alexander Kanavin
> Sent: Wednesday, August 31, 2022 7:23 AM
> To: Monsees, Steven C (US) 
> Cc: yocto@lists.yoctoproject.org
> Subject: Re: [yocto] #yocto #dunfell #qemux86-64
> 
> External Email Alert
> 
> This email has been sent from an account outside of the BAE Systems network.
> 
> Please treat the email with caution, especially if you are requested to click 
> on a link, decrypt/open an attachment, or enable macros.  For further 
> information on how to spot phishing, access “Cybersecurity OneSpace Page” and 
> report phishing by clicking the button “Report Phishing” on the Outlook 
> toolbar.
> 
> 
> You need to run this in a X11 session, or use 'nographic' parameter.
> 
> Alex
> 
> On Wed, 31 Aug 2022 at 13:17, Monsees, Steven C (US) via 
> lists.yoctoproject.org  
> wrote:
>>
>>
>>
>> Hello:
>>
>>
>>
>> I am attempting to evaluate dunfell -23.0.17, on a cetos7 linux box.
>>
>>
>>
>> When building the default : core-image-minimal or core-image-sato, 
>> clean with no errors/warnings, on execution
>>
>> I get the following error for both (see below)…
>>
>>
>>
>> (1)Is this an issue with the release ?,  and is there another release I 
>> should be using ?
>>
>> (2)Is there something I need to pre-configure for these builds to run 
>> properly ?
>>
>> (3)How might I resolve this issue ?
>>
>>
>>
>> 06:57 smonsees@yix490016
>> /disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunf
>> e
>> ll_Min>runqemu qemux86-64
>>
>> runqemu - INFO - Running MACHINE=qemux86-64 bitbake -e ...
>>
>> runqemu - INFO - Continuing with the following parameters:
>>
>> KERNEL:
>> [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dun
>> f 
>> ell_Min/tmp/deploy/images/qemux86-64/bzImage--5.4.192+gitAUTOINC+3fec
>> b 08507_85f0668fea-r0-qemux86-64-20220830151817.bin]
>>
>> MACHINE: [qemux86-64]
>>
>> FSTYPE: [ext4]
>>
>> ROOTFS:
>> [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dun
>> f
>> ell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-20
>> 2
>> 20830151817.rootfs.ext4]
>>
>> CONFFILE:
>> [/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dun
>> f
>> ell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-20
>> 2
>> 20830151817.qemuboot.conf]
>>
>>
>>
>> runqemu - INFO - Setting up tap interface under sudo
>>
>> [sudo] password for smonsees:
>>
>> runqemu - INFO - Network configuration:
>> ip=192.168.7.2::192.168.7.1:255.255.255.0
>>
>> runqemu - INFO - Running 
>> /disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/work/x86_64-linux/qemu-helper-native/1.0-r1/recipe-sysroot-native/usr/bin/qemu-system-x86_64
>>  -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:02 -netdev 
>> tap,id=net0,ifname=tap0,script=no,downscript=no -drive 
>> file=/disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64-20220830151817.rootfs.ext4,if=virtio,format=raw
>>  -show-cursor -usb -device usb-tablet -object 
>> rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0   
>> -cpu core2duo -m 256 -serial mon:vc -serial null -kernel 
>> /disk0/scratch/smonsees/yocto/yocto-dunfell/poky-dunfell-23.0.17/dunfell_Min/tmp/deploy/images/qemux86-64/bzImage--5.4.192+gitAUTOINC+3fecb08507_85f0668fea-r0-qemux86-64-202

[yocto] Yocto Dunfell - task package failing

2020-06-03 Thread Edson Seabra
Hi, folks.

I'm getting the error below in several packages. Some of them I could fix when 
the recipe's "do_install"
has commands like:

 "cp -a  "

How can I solve this error when the recipe has no "cp"? What else can cause 
this issue?
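
(For illustration only: the usual way to avoid carrying the build user's
ownership into the packaged files is to not preserve ownership when copying in
do_install; the directory names below are placeholders, not from an actual
recipe:)

do_install() {
    install -d ${D}${datadir}/example-data
    # copy the tree without preserving the host user's uid/gid
    cp -R --no-preserve=ownership ${S}/example-data/* ${D}${datadir}/example-data/
}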

Thanks in advance...


WARNING: firefox-38.3.0esr-r0 do_package: KeyError in 
./package/usr/lib/firefox-devel-38.3.0/sdk/bin
ERROR: firefox-38.3.0esr-r0 do_package: Error executing a python function in 
exec_python_func() autogenerated:

The stack trace of python calls that resulted in this exception/failure was:
File: 'exec_python_func() autogenerated', lineno: 2, function: <module>
 0001:
 *** 0002:sstate_report_unihash(d)
 0003:
File: '/home/edson/ng-trunk/poky/meta/classes/sstate.bbclass', lineno: 833, 
function: sstate_report_unihash
 0829:report_unihash = getattr(bb.parse.siggen, 'report_unihash', None)
 0830:
 0831:if report_unihash:
 0832:ss = sstate_state_fromvars(d)
 *** 0833:report_unihash(os.getcwd(), ss['task'], d)
 0834:}
 0835:
 0836:#
 0837:# Shell function to decompress and prepare a package for installation
File: '/home/edson/ng-trunk/poky/bitbake/lib/bb/siggen.py', lineno: 554, 
function: report_unihash
 0550:
 0551:if "." in self.method:
 0552:(module, method) = self.method.rsplit('.', 1)
 0553:locs['method'] = 
getattr(importlib.import_module(module), method)
 *** 0554:outhash = bb.utils.better_eval('method(path, sigfile, 
task, d)', locs)
 0555:else:
 0556:outhash = bb.utils.better_eval(self.method + '(path, 
sigfile, task, d)', locs)
 0557:
 0558:try:
File: '/home/edson/ng-trunk/poky/bitbake/lib/bb/utils.py', lineno: 420, 
function: better_eval
 0416:if extraglobals:
 0417:ctx = copy.copy(ctx)
 0418:for g in extraglobals:
 0419:ctx[g] = extraglobals[g]
 *** 0420:return eval(source, ctx, locals)
 0421:
 0422:@contextmanager
 0423:def fileslocked(files):
 0424:"""Context manager for locking and unlocking file locks."""
File: '<string>', lineno: 1, function: <module>
  File "<string>", line 1, in <module>

File: '/home/edson/ng-trunk/poky/meta/lib/oe/sstatesig.py', lineno: 585, 
function: OEOuthashBasic
 0581:
 0582:update_hash("\n")
 0583:
 0584:# Process this directory and all its child files
 *** 0585:process(root)
 0586:for f in files:
 0587:if f == 'fixmepath':
 0588:continue
 0589:process(os.path.join(root, f))
File: '/home/edson/ng-trunk/poky/meta/lib/oe/sstatesig.py', lineno: 548, 
function: process
 0544:add_perm(stat.S_IXOTH, 'x')
 0545:
 0546:if include_owners:
 0547:try:
 *** 0548:update_hash(" %10s" % 
pwd.getpwuid(s.st_uid).pw_name)
 0549:update_hash(" %10s" % 
grp.getgrgid(s.st_gid).gr_name)
 0550:except KeyError:
 0551:bb.warn("KeyError in %s" % path)
 0552:raise
Exception: KeyError: 'getpwuid(): uid not found: 1000'

ERROR: Logfile of failure stored in: 
/home/edson/ng-trunk/nodegrid/tmp/work/core2-64-poky-linux/firefox/38.3.0esr-r0/temp/log.do_package.7911
ERROR: Task 
(/home/edson/ng-trunk/meta-extended/meta-graphics/recipes-mozilla/firefox/firefox_38.3.0esr.bb:do_package)
 failed with exit code '1'
NOTE: Tasks Summary: Attempted 5386 tasks of which 5385 didn't need to be rerun 
and 1 failed.

Summary: 1 task failed:
  
/home/edson/ng-trunk/meta-extended/meta-graphics/recipes-mozilla/firefox/firefox_38.3.0esr.bb:do_package



Edson Seabra

Principal Engineer

M +1 510 579 0843

E   edson.sea...@zpesystems.com

46757 Fremont Blvd., Fremont, CA 94538
zpesystems.com | Request a Nodegrid 
Demo
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#49572): https://lists.yoctoproject.org/g/yocto/message/49572
Mute This Topic: https://lists.yoctoproject.org/mt/74652482/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [yocto] Yocto Dunfell: package.class --> dwarfsrcfiles

2020-07-21 Thread Khem Raj



On 7/21/20 3:45 AM, Jan Hannig wrote:

Hello,

with the upgrade from Yocto Zeus → Dunfell, we observe lots of messages 
when building our product which are hard to understand or to debug.


Actually, it's a failure of the "do_package" task of a proprietary 
module written in C, with the following message:


ERROR: eds-1.0-r0 do_package: dwarfsrcfiles failed with exit code 1 (cmd 
was ['dwarfsrcfiles', 
'/home/jhannig/workspace/build/mguard3_tmp/work/aarch64-mguard-linux/eds/1.0-r0/package/usr/lib/libhdb.a']):


dwarfsrcfiles: 
/home/jhannig/workspace/build/mguard3_tmp/work/aarch64-mguard-linux/eds/1.0-r0/package/usr/lib/libhdb.a: 
not a valid ELF file


ERROR: Logfile of failure stored in: 
/home/jhannig/workspace/build/mguard3_tmp/work/aarch64-mguard-linux/eds/1.0-r0/temp/log.do_package.13957


ERROR: Task 
(/home/jhannig/workspace/mguard/meta-mguard/recipes-core/eds/eds_1.0.bb:do_package) 
failed with exit code '1'


Here is some information to help understand the problem:

- The code of this module wasn't changed, and it compiled without errors with 
release "Zeus"


- The examination of the file "libhdb.a" brings following results:

- It is possible to unpack the archive-file "libhdb.a": 
  jhannig@jhannig:~/Archiv/MG-2436$ ar x libhdb.a


- Examining the content with "file *.o" 
[jhannig@jhannig:~/Archiv/MG-2436$ file *.o] gives the following results:


hdb.o: ELF 64-bit LSB relocatable, ARM aarch64, version 1 
(SYSV), not stripped


hdbschema.o:   ELF 64-bit LSB relocatable, ARM aarch64, version 1 
(SYSV), not stripped


hdbstaticschema.o: ELF 64-bit LSB relocatable, ARM aarch64, version 1 
(SYSV), not stripped


utils.o:   ELF 64-bit LSB relocatable, ARM aarch64, version 1 
(SYSV), not stripped


- This matches the expectation and the intended character of 
the file


- Minor changes in the makefile command line didn't change anything in 
the result [EXTRA_OEMAKE += 'CC="${CC}" CPPFLAGS="${CPPFLAGS}" 
CFLAGS="${CFLAGS}" LDFLAGS="${LDFLAGS}" AR="${AR}" EDS_XML="${EDS_XML}"']


- Searching the internet for "dwarfsrcfiles" doesn't turn up any 
informative documentation, so it isn't clear what 
this tool exactly does.


Questions for the community:

- Which kinds of errors result in this error message?


Something goes amiss in the ELF object generation; it could be one of many 
reasons, like corrupt files or additional information in these objects 
which can confuse the DWARF reader.




- What changed with the new Yocto release, so that "suddenly" a build 
result is analyzed as a failure?


Packages get upgraded, and in this particular case elfutils is the main 
package; it would have been upgraded to a new revision, which can bring 
additional behavior.




- Where exactly in the tool code is this error thrown? The message "not 
a valid ELF file" doesn't appear in the tool's code



It's coming from the DWARF reader provided by libdwfl from the elfutils package.



- What should be done with the archive file and its content to eliminate 
the error?


It's not clear what might be going on here, so it's hard to say, but perhaps 
you can debug the dwarfsrcfiles tool using gdb with your .a file as input and 
see why it thinks it's a bad ELF object. That will perhaps give more 
insight. It's a native tool, so it should be easy to debug through. Maybe 
share the stack trace etc. when it reaches the error state.
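
For illustration, something along these lines should work from your build
directory; the location of the native dwarfsrcfiles binary under
recipe-sysroot-native is an assumption and may differ in your setup:

gdb --args \
  <builddir>/mguard3_tmp/work/aarch64-mguard-linux/eds/1.0-r0/recipe-sysroot-native/usr/bin/dwarfsrcfiles \
  <builddir>/mguard3_tmp/work/aarch64-mguard-linux/eds/1.0-r0/package/usr/lib/libhdb.a
(gdb) run
(gdb) bt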


Secondly, it would be good to look at the build options used to create 
this .a and see if something stands out.




- Is this behavior well known, and is there any documentation to get 
information about the tool?


It's not well known, but it's an error state that can happen.



Thanks a lot for help,

kind regards

Jan Hannig

Research and Development

jhan...@phoenixcontact.com 

www.phoenixcontact.com 

...

PHOENIX CONTACT Cyber Security GmbH 

Richard-Willstätter-Straße 6  

D-12489 Berlin  

 

Register Court: AG Charlottenburg, HR B 202908 

Geschäftsführer/General Manager: Kilian Golm

//



-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#50052): https://lists.yoctoproject.org/g/yocto/message/50052
Mute This Topic: https://lists.yoctoproject.org/mt/75700982/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[yocto] [dunfell] hidden files/folders in WORKDIR

2021-12-15 Thread Joel Winarske
I'm finding that if I create files/folders (prefixed with '.') in WORKDIR,
they don't get cleaned up with INHERIT += "rm_work".
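
(A minimal illustration of the kind of thing meant here, with made-up task and
file names:)

do_compile_append() {
    # dot-prefixed entries directly under WORKDIR are left behind by rm_work
    touch ${WORKDIR}/.my-hidden-marker
    mkdir -p ${WORKDIR}/.my-hidden-cache
}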

Is this a feature or a bug?

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#55595): https://lists.yoctoproject.org/g/yocto/message/55595
Mute This Topic: https://lists.yoctoproject.org/mt/87751446/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] Question of Yocto dunfell LTS period

2022-01-13 Thread Chanho Park via lists.yoctoproject.org
Dear folks,

I found the LTS period of Yocto dunfell has been extended from 2022.04 to
2024.04 [1].
AFAIK, a Yocto LTS version has a 2-year period. I tried to find a notice
regarding this, but I only found Richard's e-mail [2] from last year.
Can I find any thread or notice regarding this extension?

[1]: https://wiki.yoctoproject.org/wiki/Releases
[2]:
https://lore.kernel.org/yocto/3878e27fb895718b8c5d65f444fea443a56e0587.camel
@linuxfoundation.org/

Best Regards,
Chanho Park


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#55847): https://lists.yoctoproject.org/g/yocto/message/55847
Mute This Topic: https://lists.yoctoproject.org/mt/88411699/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] Dunfell 3.1.1 gcc-sanitizers build failure

2020-06-30 Thread MikeB
I recently tried upgrading from 3.1.0 to 3.1.1.  I'm not sure if this is a bug 
or just my problem.  I maintain five different architectures and all five have 
the same failure in gcc-sanitizers as I'm trying to build the SDK.

| cat: 
/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
 No such file or directory
| WARNING: 
/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
 exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" > 
/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
| ERROR: Execution of 
'/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505'
 failed with exit code 1:
| cat: 
/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
 No such file or directory
| WARNING: 
/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
 exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" > 
/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'

At first, I thought this may be a dependency issue because I inherit "rm_work" 
to tidy up; but I tried a build without it - i.e. keeping all work around - and 
got the same failure.

I've attached the full error log.  Any troubleshooting advice would be 
appreciated.


log.do_configure.31505
Description: Binary data
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#49793): https://lists.yoctoproject.org/g/yocto/message/49793
Mute This Topic: https://lists.yoctoproject.org/mt/75223575/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[yocto] pyqt5 in yocto dunfell version #dunfell #pyqt5

2020-12-08 Thread anthony . marchand
Hello everyone,

I'm trying the dunfell version of Yocto, to migrate from zeus to dunfell. I 
added the layers "meta-qt5", "meta-python2" and "meta-openembedded".
The problem I have is the following:
In the zeus version, I installed pyqt5 in my rootfs (the recipe name is 
python3-pyqt5). I tested the command "bitbake python3-pyqt5" for the zeus 
version and it works fine. But for the dunfell version, the same command raises 
an error.

The source 
https://www.riverbankcomputing.com/static/Downloads/PyQt5/5.13.2/PyQt5-5.13.2.tar.gz
does not seem to exist any more. I get a 404 error. I tested with a wget 
command and it's the same.

Do you have an idea about what's going wrong? Maybe the recipe for PyQt5 
(dunfell version) points to a bad address.
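
(For illustration, the URL the dunfell recipe actually resolves to can be
checked with something like:)

bitbake -e python3-pyqt5 | grep "^SRC_URI="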

Thanks for all and best regards.

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#51690): https://lists.yoctoproject.org/g/yocto/message/51690
Mute This Topic: https://lists.yoctoproject.org/mt/78799881/21656
Mute #dunfell:https://lists.yoctoproject.org/g/yocto/mutehashtag/dunfell
Mute #pyqt5:https://lists.yoctoproject.org/g/yocto/mutehashtag/pyqt5
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] Dunfell, nodejs and typescript - short experience report

2021-02-12 Thread Simon Vogl
I have some remarks and questions about the npm/nodejs support in 
dunfell that I wanted to share. We are creating nodejs-based IoT edge 
solutions and upgrading our build environments to Dunfell one by one. In 
the course of this, we are switching to the new npm-implementation and 
found a few small issues.


Firstly, the do_configure() task takes quite some time to complete. 
After a quick analysis, I saw that most of the time is being spent in 
creating the npmrc files while packing the dependent packages. I wrote a 
small workaround to directly create the file instead of calling 'npm 
config', which results in a 3x-4x speedup:


Signed-off-by: Simon Vogl
---
 lib/bb/fetch2/npm.py | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/lib/bb/fetch2/npm.py b/lib/bb/fetch2/npm.py
index 4789850..2720d87 100644
--- a/lib/bb/fetch2/npm.py
+++ b/lib/bb/fetch2/npm.py
@@ -97,13 +97,18 @@ class NpmEnvironment(object):
                 cmd = "NPM_CONFIG_GLOBALCONFIG=%s " % cfgfile + cmd
                 return runfetchcmd(cmd, d, workdir=workdir)
 
+            cfg = open(cfgfile, "a")
             if self.configs:
                 for key, value in self.configs:
-                    _run("npm config set %s %s" % (key, shlex.quote(value)))
+                    cfg.write("%s=%s\n" % (key, shlex.quote(value)))
+                    #_run("npm config set %s %s" % (key, shlex.quote(value)))
 
             if configs:
                 for key, value in configs:
-                    _run("npm config set %s %s" % (key, shlex.quote(value)))
+                    cfg.write("%s=%s\n" % (key, shlex.quote(value)))
+                    # _run("npm config set %s %s" % (key, shlex.quote(value)))
+
+            cfg.close()
 
             if args:
                 for key, value in args:
-- 
2.7.4

Are there any side effects that I did not stumble over yet? And I'd LOVE 
to have these calls running in a thread-pool for better performance...



Secondly, our projects are based on typescript, so a native compile step 
is necessary to create a compiled version for packing. We experimented 
with a separate release branch to check in compiled versions, but this 
is not easy to handle. I played around with npm.bbclass and found a way 
to extend configure (!) with a call to our build script before packaging:


diff --git a/meta/classes/npm.bbclass b/meta/classes/npm.bbclass
index 068032a1e5..31535098cf 100644
--- a/meta/classes/npm.bbclass
+++ b/meta/classes/npm.bbclass
@@ -170,6 +170,11 @@ python npm_do_configure() {
 
     # Configure the main package
     with tempfile.TemporaryDirectory() as tmpdir:
+        # install all (native) build dependencies, overrides npm cache:
+        ret = os.system("npm i")
+        # run build step:
+        env.run("npm run build", args=[], workdir=d.getVar("S"))
+
         tarball = npm_pack(env, d.getVar("S"), tmpdir)
         npm_unpack(tarball, d.getVar("NPM_PACKAGE"), d)

As we have plain JS packages as well, I put the modified configure() in 
a subclass and this works for us, but it does not look like a clean 
solution to me. How do you other IoT'ers address this situation?


Simon


--
VoXel Interaction Design  |  www.voxel.at
DI Dr.techn. Simon Vogl   |  si...@voxel.at
Tomaschekweg 46   |  +43 650 2323 555
A-4040 Linz - Austria |
Office address: Industriezeile 35, 4020 Linz (2nd floor)


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#52298): https://lists.yoctoproject.org/g/yocto/message/52298
Mute This Topic: https://lists.yoctoproject.org/mt/80579992/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] #dunfell Path to sources in debugfs

2021-10-05 Thread Khem Raj



On 10/5/21 7:04 AM, bohdan.shubenok@sigma.software wrote:

Hi all,

I'm trying to debug a coredump generated on an embedded system running 
dunfell. The issue I'm facing is with the source file paths in the 
"-dbg.rootfs" archive and within the debug portion of a package.

When loaded in QtCreator, some sources can't be found:


The part that is missing is "build/..". Such notation obviously cancels 
itself out, and manually adding an empty "build" folder helps.
This path aligns with how it builds. Here is a part of the Makefile found 
in the build path for sqlite:


    build/Makefile:20:VPATH = ../sqlite-autoconf-3310100
    build/Makefile:313:abs_srcdir = 
/home/bohdan/noah/noah/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/sqlite3/3_3.31.1-r0/build/../sqlite-autoconf-3310100
    build/Makefile:315:abs_top_srcdir = 
/home/bohdan/noah/noah/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/sqlite3/3_3.31.1-r0/build/../sqlite-autoconf-3310100

    build/Makefile:358:srcdir = ../sqlite-autoconf-3310100

So I tried to disable the out-of-tree build for sqlite by replacing 
'inherit autotools' with 'inherit autotools-brokensep'. After 
building and loading the new debugfs, QtCreator was able to find the required 
sources:



Is this a known issue, or am I doing something wrong with the build setup?


I think it's not a general problem with autotools-based projects doing out 
of tree builds, but just with the sqlite3 package. Perhaps you might want to 
look at the compiler command-line options being passed to the sqlite3 build and 
see if the paths can be adjusted during the build to account for the out-of-tree build.
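
One way to inspect what the sqlite3 build is actually being passed, as a rough sketch (run from an initialized build directory; the exact variable list below is only a guess at the relevant ones):

```
# expanded variables that influence debug source paths for sqlite3
bitbake -e sqlite3 | grep -E '^(CFLAGS|TARGET_CFLAGS|DEBUG_PREFIX_MAP)='
# the full compiler invocations are logged here after a build
less tmp/work/*/sqlite3/*/temp/log.do_compile
```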

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#54964): https://lists.yoctoproject.org/g/yocto/message/54964
Mute This Topic: https://lists.yoctoproject.org/mt/86094129/21656
Mute #dunfell:https://lists.yoctoproject.org/g/yocto/mutehashtag/dunfell
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] #dunfell Path to sources in debugfs

2021-10-05 Thread Robert Berger

Hi,

My comments are inline.

On 05/10/2021 17:04, bohdan.shubenok@sigma.software wrote:

Hi all,

I'm trying to debug a coredump generated on an embedded system running 
dunfell. 


Just to clarify: your user space application crashes and you are trying to see 
why? In other words, you would like to load the application and the core 
file into your debugger and inspect it?


The issue I'm facing is with the source file paths in the 
"-dbg.rootfs" archive and within the debug portion of a package.

When loaded in QtCreator, some sources can't be found:


The part that is missing is "*build/..*". Such notation obviously cancels 
itself out, and adding an empty "build" folder manually helps.
This path aligns with how it builds. Here is part of the Makefile found 
in the build path for sqlite:


    build/Makefile:20:VPATH = ../sqlite-autoconf-3310100
    build/Makefile:313:abs_srcdir = 
/home/bohdan/noah/noah/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/sqlite3/3_3.31.1-r0/build/../sqlite-autoconf-3310100
    build/Makefile:315:abs_top_srcdir = 
/home/bohdan/noah/noah/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/sqlite3/3_3.31.1-r0/build/../sqlite-autoconf-3310100

    build/Makefile:358:srcdir = ../sqlite-autoconf-3310100

So I tried to disable the out-of-tree build for sqlite by replacing 
*'inherit autotools*' with '*inherit autotools-brokensep*'. After 
building and loading the new debugfs, QtCreator was able to find the required 
sources:



Is this a known issue, or am I doing something wrong with the build setup?


This is very strange, but I am also not quite sure how exactly you are debugging.
I assume you run gdbserver on the target and connect to it from some cross-gdb 
on your host.


You could try to install gdb onto your target, plus debug info and 
sources, to rule out the cross-gdb configuration problem I describe below.


If you use gdbserver/cross-gdb, I assume the directories on your target 
rootfs and host rootfs are different. So you need to tell your cross-gdb 
on the host where to find the debug info and the sources.


Can you please try something like this?

http://docs.yoctoproject.org/singleindex.html#using-the-gdbserver-method

What I would inspect carefully is something like this:

$ cd directory-holding-the-debugfs-directory
$ arch-gdb
(gdb) set sysroot debugfs
(gdb) set substitute-path /usr/src/debug debugfs/usr/src/debug
(gdb) target remote IP-of-target:1234

At least in the latest and greatest version this works. I remember a bug 
a long time ago with some ancient yocto release with cross-debugging, 
but this was resolved with some upgrade and was certainly older than 
dunfell.


Regards,

Robert

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#54969): https://lists.yoctoproject.org/g/yocto/message/54969
Mute This Topic: https://lists.yoctoproject.org/mt/86094129/21656
Mute #dunfell:https://lists.yoctoproject.org/g/yocto/mutehashtag/dunfell
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] #dunfell Path to sources in debugfs

2021-10-05 Thread bohdan . shubenok
Hi,

I can see this on Qt builds as well. I doubt Qt could have such strange behavior 
left unnoticed for a long time. The part that is missing is " *include/QtCore* "

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#54976): https://lists.yoctoproject.org/g/yocto/message/54976
Mute This Topic: https://lists.yoctoproject.org/mt/86094129/21656
Mute #dunfell:https://lists.yoctoproject.org/g/yocto/mutehashtag/dunfell
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] #dunfell Path to sources in debugfs

2021-10-06 Thread bohdan . shubenok
Hi,

I'm trying to debug without any connection to the target at all. These are examples 
of what I'm running:

# gdb-multiarch b2010_rootfs/usr/bin/noah-heatsystem 
var/volatile/crash/core.noah.565
(gdb) set sysroot b2010_rootfs/
(gdb) set substitute-path /usr/src/debug b2010/usr/src/debug
 things are getting loaded .

(gdb) bt
#0  0xb4a6a144 in saveCursorsOnList (p=0xccf34862, iRoot=3, pExcept=0xb540b350) 
at ../sqlite-autoconf-3310100/sqlite3.c:64883
 more frames here .

And this is how it looks when the out-of-tree build is disabled:

# gdb-multiarch b2010_rootfs/usr/bin/noah-heatsystem 
var/volatile/crash/core.noah.565
(gdb) set sysroot src_add_rootfs/
(gdb) set substitute-path /usr/src/debug src_add_rootfs/usr/src/debug
 things are getting loaded .

(gdb) bt
#0  0xb4a6a144 in saveCursorsOnList (p=0xccf34862, iRoot=3, pExcept=0xb540b350) 
at sqlite3.c:64883
 more frames here .

In the first case the `layout next` command shows that no source code can be found, 
and in the second case the source code is shown.

Best regards,

Bohdan.

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#54977): https://lists.yoctoproject.org/g/yocto/message/54977
Mute This Topic: https://lists.yoctoproject.org/mt/86094129/21656
Mute #dunfell:https://lists.yoctoproject.org/g/yocto/mutehashtag/dunfell
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [dunfell] hidden files/folders in WORKDIR

2021-12-18 Thread Randy MacLeod

On 2021-12-15 2:22 p.m., Joel Winarske wrote:
I'm finding that if I create files/folders (prefixed with '.') in 
WORKDIR, they don't get cleaned up with INHERIT += "rm_work".


Is this a feature or a bug?


I think it's an oversight that doesn't affect many people.

The rm_work code for WORKDIR is:

http://cgit.openembedded.org/openembedded-core/tree/meta/classes/rm_work.bbclass#n97

cd ${WORKDIR}
for dir in *
do
    # Retain only logs and other files in temp, safely ignore
    # failures of removing pseudo folers on NFS2/3 server.
    if [ $dir = 'pseudo' ]; then
        rm -rf -- $dir 2> /dev/null || true
    elif ! echo "$excludes" | grep -q -w "$dir"; then
        rm -rf -- $dir
    fi
done


so you can see that if you want to submit a patch to remove .FOO files,
you'd have to change the glob and exclude '.' and '..'.
The comment seems to justify doing that.
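
For illustration, a minimal (untested) sketch of such a change to rm_work.bbclass, keeping the existing loop body and only widening the glob; the extra globs and the existence check are my own assumption of how one might do it:

```
cd ${WORKDIR}
# '.[!.]*' and '..?*' catch hidden entries without ever matching '.' or '..'
for dir in * .[!.]* ..?*
do
    # a glob that matches nothing stays literal in POSIX sh, so skip it
    [ -e "$dir" ] || continue
    if [ "$dir" = 'pseudo' ]; then
        rm -rf -- "$dir" 2> /dev/null || true
    elif ! echo "$excludes" | grep -q -w "$dir"; then
        rm -rf -- "$dir"
    fi
done
```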


I guess one question is how common it is for 'dot' files to be there,
and whether people have come to rely on the fact that rm_work doesn't remove them.

See some analysis below if you're interested but I think it's sensible
to also remove 'dot' files. Wait a day or so to see if anyone has a
use case that would be a problem and if not, could you send a patch?

Btw:
WORKDIR docs:
http://docs.yoctoproject.org/ref-manual/variables.html?highlight=workdir#term-WORKDIR
say that:
 "This directory is located within the TMPDIR directory structure and 
is specific to the recipe being built and the system for which it is 
being built. "

so that seems to give you carte blanche!

and there's no restriction given in the rm_work docs either:
   http://docs.yoctoproject.org/ref-manual/classes.html#ref-classes-rm-work


../Randy


In a build that I have on hand:

# how many files could be cleaned up?
$ ls -d tmp-glibc/work/core2-64-wrs-linux/*/*/[a-Z]* | wc -l

2887


# how many of them are 'dot' files?
$ ls -d tmp-glibc/work/core2-64-wrs-linux/*/*/.[a-Z]* | wc -l

1


# What's the file:
$ ls -d tmp-glibc/work/core2-64-wrs-linux/*/*/.[a-Z]*

tmp-glibc/work/core2-64-wrs-linux/usleep/1.0-r0/.pc


# .pc files are usually stored in a subdirectory like:
#tmp-glibc/work/core2-64-wrs-linux/m4/1.4.19-r0/m4-1.4.19/.pc
# how many 'proper' .pc files are there, just out of curiosity?
$ ls -d tmp-glibc/work/core2-64-wrs-linux/*/*/*/.pc | wc -l

79


# Let's look at usleep:
$ ls tmp-glibc/work/core2-64-wrs-linux/usleep/1.0-r0

configure.sstate  deploy-rpms  image  packages-split  pkgdata-pdata-input  recipe-sysroot  sstate-install-deploy_source_date_epoch  usleep  usleep.spec
COPYING  deploy-source-date-epoch  license-destdir  patches  pkgdata-sysroot  recipe-sysroot-native  sstate-install-populate_lic  usleep.1
debugsources.list  GPLv2.patch  package  pkgdata  pseudo  source-date-epoch  temp  usleep.c



Ah, it has the source in WORKDIR. That seems odd, but it's a simple recipe 
with the source provided in the recipe:
http://cgit.openembedded.org/meta-openembedded/tree/meta-oe/recipes-core/usleep/usleep_1.0.bb?h=master

# Let's clean it:

$ bitbake -c rm_work usleep


$ ls -a tmp-glibc/work/core2-64-wrs-linux/usleep/1.0-r0

.  ..  .pc  temp


$ bitbake -c patch usleep
$ echo $?
0

so that works and it seems there's no harm done.
--
# Randy MacLeod
# Wind River Linux


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#55619): https://lists.yoctoproject.org/g/yocto/message/55619
Mute This Topic: https://lists.yoctoproject.org/mt/87751446/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] Question of Yocto dunfell LTS period

2022-01-13 Thread Matthias Klein
Hello all,

I would like to add a question to the topic:

Why was the LTS period for dunfell extended?
Can we expect the same for kirkstone?

Many greetings,
Matthias

-----Original Message-----
From: yocto@lists.yoctoproject.org  On Behalf Of 
Chanho Park via lists.yoctoproject.org
Sent: Friday, 14 January 2022 02:14
To: yocto@lists.yoctoproject.org; openembedded-c...@lists.openembedded.org
Cc: richard.pur...@linuxfoundation.org; st...@sakoman.com
Subject: [yocto] Question of Yocto dunfell LTS period

Dear folks,

I found that the LTS period of Yocto dunfell has been extended from 2022.04 to 
2024.04 [1].
AFAIK, a Yocto LTS version has a 2-year period. I tried to find a notice 
regarding this, but I only found Richard's e-mail [2] from last year.
Is there any thread or notice regarding this extension?

[1]: https://wiki.yoctoproject.org/wiki/Releases
[2]:
https://lore.kernel.org/yocto/3878e27fb895718b8c5d65f444fea443a56e0587.camel@linuxfoundation.org/

Best Regards,
Chanho Park


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#55870): https://lists.yoctoproject.org/g/yocto/message/55870
Mute This Topic: https://lists.yoctoproject.org/mt/88411699/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] Question of Yocto dunfell LTS period

2022-01-14 Thread Richard Purdie
On Fri, 2022-01-14 at 07:14 +, Matthias Klein wrote:
> Hello all,
> 
> I would like to add a question to the topic:
> 
> Why was the LTS period for dunfell extended?
> Can we expect the same for kirkstone?

The project members agreed to fund an extension to the lifetime of dunfell from 2
years to 4 years. There should have been some announcements about this, but we
lack people on the advocacy side of the project with time to make that happen.
That saddens me a lot and I think it is a missed opportunity, but there isn't
much I can do about it beyond what I've already tried.

Extending Dunfell is an experiment so we would base a decision on how well the
dunfell extension is working out. Extending an LTS like this requires a
financial commitment from the project. At this point in time the project cannot
commit to extending kirkstone as we can't guarantee the finances. 

Cheers,

Richard




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#55878): https://lists.yoctoproject.org/g/yocto/message/55878
Mute This Topic: https://lists.yoctoproject.org/mt/88411699/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] [dunfell][PATCH] pypi.bbclass: Set CVE_PRODUCT to PYPI_PACKAGE

2023-05-15 Thread Sanjaykumar kantibhai Chitroda -X (schitrod - E-INFO CHIPS INC at Cisco) via lists.yoctoproject.org
From: Alex Kiernan 

The CVE product name for PyPI packages is (usually) the same as the PyPI
package name (and not our recipe name), so use that as the default.

Signed-off-by: Alex Kiernan 
Signed-off-by: Alex Kiernan 
Signed-off-by: Richard Purdie 
(cherry picked from commit 61f6b0ad09bf87cdc2d3f08770b7c44cad1d0e58)
Signed-off-by: Sanjay Chitroda 
---
 meta/classes/pypi.bbclass | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/meta/classes/pypi.bbclass b/meta/classes/pypi.bbclass
index 87b4c85fc0..c68367449a 100644
--- a/meta/classes/pypi.bbclass
+++ b/meta/classes/pypi.bbclass
@@ -24,3 +24,5 @@ S = "${WORKDIR}/${PYPI_PACKAGE}-${PV}"
 
 UPSTREAM_CHECK_URI ?= "https://pypi.org/project/${PYPI_PACKAGE}/"
 UPSTREAM_CHECK_REGEX ?= "/${PYPI_PACKAGE}/(?P<pver>(\d+[\.\-_]*)+)/"
+
+CVE_PRODUCT ?= "python:${PYPI_PACKAGE}"
-- 
2.35.6


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60025): https://lists.yoctoproject.org/g/yocto/message/60025
Mute This Topic: https://lists.yoctoproject.org/mt/98900696/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] Update the yocto dunfell to latest version

2023-12-17 Thread MOHAMMED HASSAN
Hi guys,
Can you suggest some resources for me to update to the latest Yocto dunfell
version? Currently I am using dunfell 3.1.11 and want to update to 3.1.29.
What could be the consequences?
Also, if I manually update the nodejs folder to the latest one, will it fetch
the latest nodejs files (for example)?
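
A minimal sketch of the usual update path, assuming poky was cloned with git and the build follows the stock dunfell branch (any additional layers and the exact tag name are assumptions and need to be matched to your own setup):

```
cd poky
git fetch --tags origin
# check out the 3.1.29 point-release tag, or 'git pull' to follow the branch tip
git checkout yocto-3.1.29
# repeat for other layers (meta-openembedded, BSP layers, ...) on their dunfell branches
```

The main consequences are the recipe upgrades and CVE/bug fixes that landed between the two point releases; a test build of your images afterwards is the usual sanity check.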

Regards,
Hassan

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#61952): https://lists.yoctoproject.org/g/yocto/message/61952
Mute This Topic: https://lists.yoctoproject.org/mt/103238937/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] Dunfell busybox NFS mount fails to build

2020-06-13 Thread MikeB
I'm trying to build busybox on Dunfell with NFS mount configured 
(CONFIG_FEATURE_MOUNT_NFS).  The build fails with the following.

| util-linux/mount.c:253:11: fatal error: rpc/rpc.h: No such file or directory
|   253 | # include <rpc/rpc.h>

Reading online, later versions of glibc no longer install the rpc headers.  One 
suggested solution is to configure glibc with --enable-obsolete-rpc.  I tried 
that, but then the build fails as follows.

ERROR: tcp-wrappers-7.6-r10 do_prepare_recipe_sysroot: The file 
/usr/include/rpcsvc/yppasswd.x is installed by both glibc and libnsl2, aborting

I also tried adding libtirpc as a dependency to busybox; no luck there either.

Can someone give me a workaround for this problem?

Thanks.
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#49637): https://lists.yoctoproject.org/g/yocto/message/49637
Mute This Topic: https://lists.yoctoproject.org/mt/74866908/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [yocto] Dunfell 3.1.1 gcc-sanitizers build failure

2020-06-30 Thread Khem Raj



On 6/30/20 2:56 PM, MikeB wrote:
I recently tried upgrading from 3.1.0 to 3.1.1.  I'm not sure if this is 
a bug or just my problem.  I maintain five different architectures and 
all five have the same failure in gcc-sanitizers as I'm trying to build 
the SDK.


| cat: 
/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h: 
No such file or directory
| WARNING: 
/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1 
exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" > 
/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
| ERROR: Execution of 
'/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505' 
failed with exit code 1:
| cat: 
/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h: 
No such file or directory
| WARNING: 
/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1 
exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" > 
/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
At first, I thought this may be a dependency issue because I inherit 
"rm_work" to tidy up; but I tried a build without it - i.e. keeping all 
work around - and got the same failure.


I've attached the full error log.  Any troubleshooting advice would be 
appreciated.


try to do a bitbake -ccleansstate on gcc-source-9.3.0





-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#49796): https://lists.yoctoproject.org/g/yocto/message/49796
Mute This Topic: https://lists.yoctoproject.org/mt/75223575/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [yocto] Dunfell 3.1.1 gcc-sanitizers build failure

2020-06-30 Thread Joshua Watt
On Tue, Jun 30, 2020 at 4:56 PM MikeB  wrote:
>
> I recently tried upgrading from 3.1.0 to 3.1.1.  I'm not sure if this is a 
> bug or just my problem.  I maintain five different architectures and all five 
> have the same failure in gcc-sanitizers as I'm trying to build the SDK.
>
> | cat: 
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
>  No such file or directory
> | WARNING: 
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
>  exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" > 
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
> | ERROR: Execution of 
> '/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505'
>  failed with exit code 1:
> | cat: 
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
>  No such file or directory
> | WARNING: 
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
>  exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" > 
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
>
> At first, I thought this may be a dependency issue because I inherit 
> "rm_work" to tidy up; but I tried a build without it - i.e. keeping all work 
> around - and got the same failure.

I've encountered a similar error just today when switching SDKMACHINE.
Are you using archive.bbclass by any chance (INHERIT += "archive")? I
just recently fixed a bug in archive.bbclass
(7a57e777597d7f66d065582cfb83cd8f9468f4af) where the archiving of
gcc-source raced with do_preconfigure and I'm wondering if it's
related

>
> I've attached the full error log.  Any troubleshooting advice would be 
> appreciated. 
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#49800): https://lists.yoctoproject.org/g/yocto/message/49800
Mute This Topic: https://lists.yoctoproject.org/mt/75223575/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [yocto] Dunfell 3.1.1 gcc-sanitizers build failure

2020-06-30 Thread Joshua Watt
On Tue, Jun 30, 2020 at 8:08 PM Joshua Watt  wrote:
>
> On Tue, Jun 30, 2020 at 4:56 PM MikeB  wrote:
> >
> > I recently tried upgrading from 3.1.0 to 3.1.1.  I'm not sure if this is a 
> > bug or just my problem.  I maintain five different architectures and all 
> > five have the same failure in gcc-sanitizers as I'm trying to build the SDK.
> >
> > | cat: 
> > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
> >  No such file or directory
> > | WARNING: 
> > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
> >  exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" > 
> > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
> > | ERROR: Execution of 
> > '/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505'
> >  failed with exit code 1:
> > | cat: 
> > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
> >  No such file or directory
> > | WARNING: 
> > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
> >  exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" > 
> > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
> >
> > At first, I thought this may be a dependency issue because I inherit 
> > "rm_work" to tidy up; but I tried a build without it - i.e. keeping all 
> > work around - and got the same failure.
>
> I've encountered a similar error just today when switching SDKMACHINE.
> Are you using archive.bbclass by any chance (INHERIT += "archive")? I
> just recently fixed a bug in archive.bbclass
> (7a57e777597d7f66d065582cfb83cd8f9468f4af) where the archiving of
> gcc-source raced with do_preconfigure and I'm wondering if it's
> related

I believe I have fixed this in
https://lists.openembedded.org/g/openembedded-core/message/140161,
please try it out to make sure it solves your issue as well.

>
> >
> > I've attached the full error log.  Any troubleshooting advice would be 
> > appreciated. 
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#49801): https://lists.yoctoproject.org/g/yocto/message/49801
Mute This Topic: https://lists.yoctoproject.org/mt/75223575/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [yocto] Dunfell 3.1.1 gcc-sanitizers build failure

2020-06-30 Thread Steve Sakoman
On Tue, Jun 30, 2020 at 4:53 PM Joshua Watt  wrote:
>
> On Tue, Jun 30, 2020 at 8:08 PM Joshua Watt  wrote:
> >
> > On Tue, Jun 30, 2020 at 4:56 PM MikeB  wrote:
> > >
> > > I recently tried upgrading from 3.1.0 to 3.1.1.  I'm not sure if this is 
> > > a bug or just my problem.  I maintain five different architectures and 
> > > all five have the same failure in gcc-sanitizers as I'm trying to build 
> > > the SDK.
> > >
> > > | cat: 
> > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
> > >  No such file or directory
> > > | WARNING: 
> > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
> > >  exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" > 
> > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
> > > | ERROR: Execution of 
> > > '/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505'
> > >  failed with exit code 1:
> > > | cat: 
> > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
> > >  No such file or directory
> > > | WARNING: 
> > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
> > >  exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" > 
> > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
> > >
> > > At first, I thought this may be a dependency issue because I inherit 
> > > "rm_work" to tidy up; but I tried a build without it - i.e. keeping all 
> > > work around - and got the same failure.
> >
> > I've encountered a similar error just today when switching SDKMACHINE.
> > Are you using archive.bbclass by any chance (INHERIT += "archive")? I
> > just recently fixed a bug in archive.bbclass
> > (7a57e777597d7f66d065582cfb83cd8f9468f4af) where the archiving of
> > gcc-source raced with do_preconfigure and I'm wondering if it's
> > related
>
> I believe I have fixed this in
> https://lists.openembedded.org/g/openembedded-core/message/140161,
> please try it out to make sure it solves your issue as well.

That patch came in after the 3.1.1 release, but it is present in the
dunfell branch so it will make it into 3.1.2

Steve
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#49802): https://lists.yoctoproject.org/g/yocto/message/49802
Mute This Topic: https://lists.yoctoproject.org/mt/75223575/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [yocto] Dunfell 3.1.1 gcc-sanitizers build failure

2020-06-30 Thread Steve Sakoman
On Tue, Jun 30, 2020 at 5:08 PM Steve Sakoman via
lists.yoctoproject.org 
wrote:
>
> On Tue, Jun 30, 2020 at 4:53 PM Joshua Watt  wrote:
> >
> > On Tue, Jun 30, 2020 at 8:08 PM Joshua Watt  wrote:
> > >
> > > On Tue, Jun 30, 2020 at 4:56 PM MikeB  wrote:
> > > >
> > > > I recently tried upgrading from 3.1.0 to 3.1.1.  I'm not sure if this 
> > > > is a bug or just my problem.  I maintain five different architectures 
> > > > and all five have the same failure in gcc-sanitizers as I'm trying to 
> > > > build the SDK.
> > > >
> > > > | cat: 
> > > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
> > > >  No such file or directory
> > > > | WARNING: 
> > > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
> > > >  exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" > 
> > > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
> > > > | ERROR: Execution of 
> > > > '/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505'
> > > >  failed with exit code 1:
> > > > | cat: 
> > > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
> > > >  No such file or directory
> > > > | WARNING: 
> > > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
> > > >  exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" > 
> > > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
> > > >
> > > > At first, I thought this may be a dependency issue because I inherit 
> > > > "rm_work" to tidy up; but I tried a build without it - i.e. keeping all 
> > > > work around - and got the same failure.
> > >
> > > I've encountered a similar error just today when switching SDKMACHINE.
> > > Are you using archive.bbclass by any chance (INHERIT += "archive")? I
> > > just recently fixed a bug in archive.bbclass
> > > (7a57e777597d7f66d065582cfb83cd8f9468f4af) where the archiving of
> > > gcc-source raced with do_preconfigure and I'm wondering if it's
> > > related
> >
> > I believe I have fixed this in
> > https://lists.openembedded.org/g/openembedded-core/message/140161,
> > please try it out to make sure it solves your issue as well.
>
> That patch came in after the 3.1.1 release, but it is present in the
> dunfell branch so it will make it into 3.1.2

Doh, I'm getting ahead of myself! I was thinking of another
classes/archiver patch that Joshua sent :-)

Steve
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#49803): https://lists.yoctoproject.org/g/yocto/message/49803
Mute This Topic: https://lists.yoctoproject.org/mt/75223575/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [yocto] Dunfell 3.1.1 gcc-sanitizers build failure

2020-07-01 Thread MikeB
The combination of the
https://lists.openembedded.org/g/openembedded-core/message/140161 and
a 'bitbake
-ccleansstate on gcc-source-9.3.0' has gotten me back on track.

Thank you all for the help!

On Tue, Jun 30, 2020 at 11:10 PM Steve Sakoman  wrote:

> On Tue, Jun 30, 2020 at 5:08 PM Steve Sakoman via
> lists.yoctoproject.org 
> wrote:
> >
> > On Tue, Jun 30, 2020 at 4:53 PM Joshua Watt 
> wrote:
> > >
> > > On Tue, Jun 30, 2020 at 8:08 PM Joshua Watt 
> wrote:
> > > >
> > > > On Tue, Jun 30, 2020 at 4:56 PM MikeB  wrote:
> > > > >
> > > > > I recently tried upgrading from 3.1.0 to 3.1.1.  I'm not sure if
> this is a bug or just my problem.  I maintain five different architectures
> and all five have the same failure in gcc-sanitizers as I'm trying to build
> the SDK.
> > > > >
> > > > > | cat:
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
> No such file or directory
> > > > > | WARNING:
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
> exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" >
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
> > > > > | ERROR: Execution of
> '/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505'
> failed with exit code 1:
> > > > > | cat:
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
> No such file or directory
> > > > > | WARNING:
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
> exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" >
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
> > > > >
> > > > > At first, I thought this may be a dependency issue because I
> inherit "rm_work" to tidy up; but I tried a build without it - i.e. keeping
> all work around - and got the same failure.
> > > >
> > > > I've encountered a similar error just today when switching
> SDKMACHINE.
> > > > Are you using archive.bbclass by any chance (INHERIT += "archive")? I
> > > > just recently fixed a bug in archive.bbclass
> > > > (7a57e777597d7f66d065582cfb83cd8f9468f4af) where the archiving of
> > > > gcc-source raced with do_preconfigure and I'm wondering if it's
> > > > related
> > >
> > > I believe I have fixed this in
> > > https://lists.openembedded.org/g/openembedded-core/message/140161,
> > > please try it out to make sure it solves your issue as well.
> >
> > That patch came in after the 3.1.1 release, but it is present in the
> > dunfell branch so it will make it into 3.1.2
>
> Doh, I'm getting ahead of myself! I was thinking of another
> classes/archiver patch that Joshua sent :-)
>
> Steve
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#49808): https://lists.yoctoproject.org/g/yocto/message/49808
Mute This Topic: https://lists.yoctoproject.org/mt/75223575/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [yocto] Dunfell 3.1.1 gcc-sanitizers build failure

2020-07-01 Thread MikeB
The rumors of my success were exaggerated.  If performing a fresh build
from scratch, the image build succeeds, but the populate_sdk still fails as
in the original post.  If I then do a 'bitbake -ccleansstate on
gcc-source-9.3.0', the populate_sdk succeeds.

Mike

On Wed, Jul 1, 2020 at 6:45 AM MikeB  wrote:

> The combination of the
> https://lists.openembedded.org/g/openembedded-core/message/140161 and a 
> 'bitbake
> -ccleansstate on gcc-source-9.3.0' has gotten me back on track.
>
> Thank you all for the help!
>
> On Tue, Jun 30, 2020 at 11:10 PM Steve Sakoman  wrote:
>
>> On Tue, Jun 30, 2020 at 5:08 PM Steve Sakoman via
>> lists.yoctoproject.org 
>> wrote:
>> >
>> > On Tue, Jun 30, 2020 at 4:53 PM Joshua Watt 
>> wrote:
>> > >
>> > > On Tue, Jun 30, 2020 at 8:08 PM Joshua Watt 
>> wrote:
>> > > >
>> > > > On Tue, Jun 30, 2020 at 4:56 PM MikeB  wrote:
>> > > > >
>> > > > > I recently tried upgrading from 3.1.0 to 3.1.1.  I'm not sure if
>> this is a bug or just my problem.  I maintain five different architectures
>> and all five have the same failure in gcc-sanitizers as I'm trying to build
>> the SDK.
>> > > > >
>> > > > > | cat:
>> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
>> No such file or directory
>> > > > > | WARNING:
>> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
>> exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" >
>> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
>> > > > > | ERROR: Execution of
>> '/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505'
>> failed with exit code 1:
>> > > > > | cat:
>> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
>> No such file or directory
>> > > > > | WARNING:
>> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
>> exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" >
>> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
>> > > > >
>> > > > > At first, I thought this may be a dependency issue because I
>> inherit "rm_work" to tidy up; but I tried a build without it - i.e. keeping
>> all work around - and got the same failure.
>> > > >
>> > > > I've encountered a similar error just today when switching
>> SDKMACHINE.
>> > > > Are you using archive.bbclass by any chance (INHERIT += "archive")?
>> I
>> > > > just recently fixed a bug in archive.bbclass
>> > > > (7a57e777597d7f66d065582cfb83cd8f9468f4af) where the archiving of
>> > > > gcc-source raced with do_preconfigure and I'm wondering if it's
>> > > > related
>> > >
>> > > I believe I have fixed this in
>> > > https://lists.openembedded.org/g/openembedded-core/message/140161,
>> > > please try it out to make sure it solves your issue as well.
>> >
>> > That patch came in after the 3.1.1 release, but it is present in the
>> > dunfell branch so it will make it into 3.1.2
>>
>> Doh, I'm getting ahead of myself! I was thinking of another
>> classes/archiver patch that Joshua sent :-)
>>
>> Steve
>>
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#49818): https://lists.yoctoproject.org/g/yocto/message/49818
Mute This Topic: https://lists.yoctoproject.org/mt/75223575/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [yocto] Dunfell 3.1.1 gcc-sanitizers build failure

2020-07-01 Thread Joshua Watt
On Wed, Jul 1, 2020 at 9:47 AM MikeB  wrote:
>
> The rumors of my success were exaggerated.  If performing a fresh build from 
> scratch, the image build succeeds, but the populate_sdk still fails as in the 
> original post.  If I then do a 'bitbake -ccleansstate on gcc-source-9.3.0', 
> the populate_sdk succeeds.

Ok. Are you using archiver.bbclass?

>
> Mike
>
> On Wed, Jul 1, 2020 at 6:45 AM MikeB  wrote:
>>
>> The combination of the 
>> https://lists.openembedded.org/g/openembedded-core/message/140161 and a 
>> 'bitbake -ccleansstate on gcc-source-9.3.0' has gotten me back on track.
>>
>> Thank you all for the help!
>>
>> On Tue, Jun 30, 2020 at 11:10 PM Steve Sakoman  wrote:
>>>
>>> On Tue, Jun 30, 2020 at 5:08 PM Steve Sakoman via
>>> lists.yoctoproject.org 
>>> wrote:
>>> >
>>> > On Tue, Jun 30, 2020 at 4:53 PM Joshua Watt  wrote:
>>> > >
>>> > > On Tue, Jun 30, 2020 at 8:08 PM Joshua Watt  
>>> > > wrote:
>>> > > >
>>> > > > On Tue, Jun 30, 2020 at 4:56 PM MikeB  wrote:
>>> > > > >
>>> > > > > I recently tried upgrading from 3.1.0 to 3.1.1.  I'm not sure if 
>>> > > > > this is a bug or just my problem.  I maintain five different 
>>> > > > > architectures and all five have the same failure in gcc-sanitizers 
>>> > > > > as I'm trying to build the SDK.
>>> > > > >
>>> > > > > | cat: 
>>> > > > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
>>> > > > >  No such file or directory
>>> > > > > | WARNING: 
>>> > > > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
>>> > > > >  exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" > 
>>> > > > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
>>> > > > > | ERROR: Execution of 
>>> > > > > '/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505'
>>> > > > >  failed with exit code 1:
>>> > > > > | cat: 
>>> > > > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
>>> > > > >  No such file or directory
>>> > > > > | WARNING: 
>>> > > > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
>>> > > > >  exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" > 
>>> > > > > /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
>>> > > > >
>>> > > > > At first, I thought this may be a dependency issue because I 
>>> > > > > inherit "rm_work" to tidy up; but I tried a build without it - i.e. 
>>> > > > > keeping all work around - and got the same failure.
>>> > > >
>>> > > > I've encountered a similar error just today when switching SDKMACHINE.
>>> > > > Are you using archive.bbclass by any chance (INHERIT += "archive")? I
>>> > > > just recently fixed a bug in archive.bbclass
>>> > > > (7a57e777597d7f66d065582cfb83cd8f9468f4af) where the archiving of
>>> > > > gcc-source raced with do_preconfigure and I'm wondering if it's
>>> > > > related
>>> > >
>>> > > I believe I have fixed this in
>>> > > https://lists.openembedded.org/g/openembedded-core/message/140161,
>>> > > please try it out to make sure it solves your issue as well.
>>> >
>>> > That patch came in after the 3.1.1 release, but it is present in the
>>> > dunfell branch so it will make it into 3.1.2
>>>
>>> Doh, I'm getting ahead of myself! I was thinking of another
>>> classes/archiver patch that Joshua sent :-)
>>>
>>> Steve
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#49819): https://lists.yoctoproject.org/g/yocto/message/49819
Mute This Topic: https://lists.yoctoproject.org/mt/75223575/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [yocto] Dunfell 3.1.1 gcc-sanitizers build failure

2020-07-02 Thread MikeB
Yes, I am using the archiver.bbclass.  However, I'm using the one in
poky/meta.  I applied your patch (manually) to that one.

diff --git a/meta/classes/archiver.bbclass b/meta/classes/archiver.bbclass
index a8d3afcbe9..bf275f4543 100644
--- a/meta/classes/archiver.bbclass
+++ b/meta/classes/archiver.bbclass
@@ -583,8 +583,8 @@ do_deploy_archives[sstate-outputdirs] = "${DEPLOY_DIR_SRC}"
 addtask do_deploy_archives_setscene

 addtask do_ar_original after do_unpack
-addtask do_unpack_and_patch after do_patch
-addtask do_ar_patched after do_unpack_and_patch before do_preconfigure do_configure
+addtask do_unpack_and_patch after do_patch do_preconfigure
+addtask do_ar_patched after do_unpack_and_patch before do_configure
 addtask do_ar_configured after do_unpack_and_patch
 addtask do_ar_mirror after do_fetch
 addtask do_dumpdata

I should also point out that the failure is not limited to gcc-sanitizers.
On different architectures, the  same failure has occurred in
gcc-cross-canadian_9.3.bb:do_configure and gcc-crosssdk_9.3.bb:do_configure.
So there are multiple recipes that
reference work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h but it is
missing.

On all architectures (that I use), executing a 'bitbake -c cleanall
gcc-source-9.3.0' and rebuilding works around the issue and the build
succeeds.

Please note that it is not the image build that is failing; it is the
populate_sdk command that fails.

Regards, Mike

On Wed, Jul 1, 2020 at 10:52 AM Joshua Watt  wrote:

> On Wed, Jul 1, 2020 at 9:47 AM MikeB  wrote:
> >
> > The rumors of my success were exaggerated.  If performing a fresh build
> from scratch, the image build succeeds, but the populate_sdk still fails as
> in the original post.  If I then do a 'bitbake -ccleansstate on
> gcc-source-9.3.0', the populate_sdk succeeds.
>
> Ok. Are you using archiver.bbclass?
>
> >
> > Mike
> >
> > On Wed, Jul 1, 2020 at 6:45 AM MikeB  wrote:
> >>
> >> The combination of the
> https://lists.openembedded.org/g/openembedded-core/message/140161 and a
> 'bitbake -ccleansstate on gcc-source-9.3.0' has gotten me back on track.
> >>
> >> Thank you all for the help!
> >>
> >> On Tue, Jun 30, 2020 at 11:10 PM Steve Sakoman 
> wrote:
> >>>
> >>> On Tue, Jun 30, 2020 at 5:08 PM Steve Sakoman via
> >>> lists.yoctoproject.org 
> >>> wrote:
> >>> >
> >>> > On Tue, Jun 30, 2020 at 4:53 PM Joshua Watt 
> wrote:
> >>> > >
> >>> > > On Tue, Jun 30, 2020 at 8:08 PM Joshua Watt 
> wrote:
> >>> > > >
> >>> > > > On Tue, Jun 30, 2020 at 4:56 PM MikeB 
> wrote:
> >>> > > > >
> >>> > > > > I recently tried upgrading from 3.1.0 to 3.1.1.  I'm not sure
> if this is a bug or just my problem.  I maintain five different
> architectures and all five have the same failure in gcc-sanitizers as I'm
> trying to build the SDK.
> >>> > > > >
> >>> > > > > | cat:
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
> No such file or directory
> >>> > > > > | WARNING:
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
> exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" >
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
> >>> > > > > | ERROR: Execution of
> '/data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505'
> failed with exit code 1:
> >>> > > > > | cat:
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work-shared/gcc-9.3.0-r0/gcc-9.3.0/gcc/defaults.h:
> No such file or directory
> >>> > > > > | WARNING:
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/temp/run.do_configure.31505:1
> exit 1 from 'grep -v "\#endif.*GCC_DEFAULTS_H" >
> /data/mabnhdev/exos-yocto-dunfell/build/exos-arm64/tmp/work/aarch64-poky-linux/gcc-sanitizers/9.3.0-r0/gcc-9.3.0/build.aarch64-poky-linux.aarch64-poky-linux/gcc/defaults.h.new'
> >>> > > > >
> >>> > > > > At first, I thought this may be a dependency issue because I
> inherit "rm_work" to tidy up; but I tried a build without it - i.e. keeping
> all work around - and got the same failure

[yocto] Information about Yocto Dunfell components version #dunfell

2020-09-01 Thread Yocto_user
Hi All,

I read the Dunfell release notes but couldn't find the version details of the 
following components:

bzip2 cairo faad2 fontconfig freetype gdkpixbuf

libav libffi libjpeg libogg libpng libtiff libvorbis libxcursor

openexr openssl pango pixman stb-image swscale theora x264 zlib

Please give me the version details of these components; we are planning to use 
dunfell, but before that we need the version details of all these components in 
Yocto.
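
A quick way to look these up yourself, as a sketch, assuming an initialized dunfell build directory (the grep list is only a subset, for illustration):

```
# from the build directory created by 'source oe-init-build-env'
bitbake -s | grep -E '^(bzip2|cairo|fontconfig|freetype|libpng|openssl|pango|pixman|zlib) '
# or show individual recipes and the layer providing them
bitbake-layers show-recipes bzip2 openssl zlib
```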
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#50419): https://lists.yoctoproject.org/g/yocto/message/50419
Mute This Topic: https://lists.yoctoproject.org/mt/76553173/21656
Mute #dunfell: https://lists.yoctoproject.org/g/yocto+yocto/mutehashtag/dunfell
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[yocto] dunfell actual package.bbclass causing problems after commit #dunfell

2020-11-19 Thread TRO
This commit 087a099c288c6bedf368c8756d998172b02b50ad
("package: get_package_mapping: avoid dependency mapping if renamed package 
provides original name")
is causing this problem:
NOTE: recipe intronix-introndplus-initramfs-1.0-r0: task do_rootfs: Started
ERROR: intronix-introndplus-initramfs-1.0-r0 do_rootfs: Unable to install 
packages. Command 
'/home/dev-build2/yocto/poky/build-xco/tmp/work/xco-poky-linux/intronix-introndplus-initramfs/1.0-r0/recipe-sysroot-native/usr/bin/opkg
 --volatile-cache -f 
/home/dev-build2/yocto/poky/build-xco/tmp/work/xco-poky-linux/intronix-introndplus-initramfs/1.0-r0/opkg.conf
 -t 
/home/dev-build2/yocto/poky/build-xco/tmp/work/xco-poky-linux/intronix-introndplus-initramfs/1.0-r0/temp/ipktemp/
 -o 
/home/dev-build2/yocto/poky/build-xco/tmp/work/xco-poky-linux/intronix-introndplus-initramfs/1.0-r0/rootfs
  --force_postinstall --prefer-arch-to-version   --add-ignore-recommends 
busybox-syslog install base-passwd busybox gptfdisk initrdscripts-intronix 
packagegroup-luks-initramfs packagegroup-tpm2-initramfs run-postinsts' returned 
1:
Collected errors:
* Solver encountered 1 problem(s):
* Problem 1/1:
*   - package initrdscripts-intronix-1.0-r0.corei7-64 requires cgos, but none 
of the providers can be installed
*
* Solution 1:
*   - do not ask to install a package providing initrdscripts-intronix
* opkg_finalize_intercepts: Failed to open dir 
/home/dev-build2/yocto/poky/build-xco/tmp/work/xco-poky-linux/intronix-introndplus-initramfs/1.0-r0/temp/ipktemp//opkg-JQVGKK/opkg-intercept-DEACdL:
 No such file or directory.
* rm_r: Failed to open dir 
/home/dev-build2/yocto/poky/build-xco/tmp/work/xco-poky-linux/intronix-introndplus-initramfs/1.0-r0/temp/ipktemp//opkg-JQVGKK/opkg-intercept-DEACdL:
 No such file or directory.
* rm_r: Failed to open dir 
/home/dev-build2/yocto/poky/build-xco/tmp/work/xco-poky-linux/intronix-introndplus-initramfs/1.0-r0/temp/ipktemp//opkg-JQVGKK:
 No such file or directory.
Disfavor package: busybox-syslog

cgos is this recipe:
https://git.congatec.com/x86/meta-congatec-x86/-/blob/dunfell/recipes-tools/cgos/cgos-mod_1.03.025.bb

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#51440): https://lists.yoctoproject.org/g/yocto/message/51440
Mute This Topic: https://lists.yoctoproject.org/mt/78368612/21656
Mute #dunfell:https://lists.yoctoproject.org/g/yocto/mutehashtag/dunfell
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] [Meta-Java] Error while installing jamvm #yocto #dunfell

2021-02-10 Thread Vijay Rakesh Munganda
Hi,

I have downloaded the meta-java layer and added it to the conf/bblayers.conf file, then 
added jamvm to the conf/local.conf file and ran the bitbake command, but I got this 
error: "Nothing RPROVIDES 'jamvm' (but 
/home/bl-docker/rity/src/poky/../meta-mediatek-bsp/recipes-core/images/mtk-image.bb
 RDEPENDS on or otherwise requires it) jamvm was skipped: incompatible with 
machine i500-pumpkin (not in COMPATIBLE_MACHINE)"

The recipe file path is /src/meta-java/recipes-core/jamvm/jamvm_git.bb. Did I 
miss anything, or does my target board have a compatibility issue?
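
For what it's worth, the skip message means the i500-pumpkin machine is not matched by the jamvm recipe's COMPATIBLE_MACHINE regex. A minimal, untested sketch of how that is usually widened from one's own layer (only sensible if jamvm actually supports the board's architecture; the bbappend path below is an assumption):

```
# meta-yourlayer/recipes-core/jamvm/jamvm_git.bbappend  (hypothetical)
COMPATIBLE_MACHINE_i500-pumpkin = "i500-pumpkin"
```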

Thanks & Regards,
Vijay Rakesh

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#52291): https://lists.yoctoproject.org/g/yocto/message/52291
Mute This Topic: https://lists.yoctoproject.org/mt/80553046/21656
Mute #yocto:https://lists.yoctoproject.org/g/yocto/mutehashtag/yocto
Mute #dunfell:https://lists.yoctoproject.org/g/yocto/mutehashtag/dunfell
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] Dunfell, nodejs and typescript - short experience report

2021-02-12 Thread Josef Holzmayr
Howdy!

Thanks for sharing, a few comments inline.

On Fri, Feb 12, 2021 at 11:03, Simon Vogl  wrote:
>
> I have some remarks and questions about the npm/nodejs support in dunfell 
> that I wanted to share. We are creating nodejs-based IoT edge solutions and 
> upgrading our build environments to Dunfell one by one. In the course of 
> this, we are switching to the new npm-implementation and found a few small 
> issues.
>
> Firstly, the do_configure() task takes quite some time to complete. After a 
> quick analysis, I saw that most of the time is being spent in creating the 
> npmrc files while packing the dependent packages. I wrote a small workaround 
> to directly create the file instead of calling 'npm config', which results in 
> a 3x-4x speedup:
>
> Signed-off-by: Simon Vogl 
> ---
>  lib/bb/fetch2/npm.py | 9 +++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/lib/bb/fetch2/npm.py b/lib/bb/fetch2/npm.py
> index 4789850..2720d87 100644
> --- a/lib/bb/fetch2/npm.py
> +++ b/lib/bb/fetch2/npm.py
> @@ -97,13 +97,18 @@ class NpmEnvironment(object):
>  cmd = "NPM_CONFIG_GLOBALCONFIG=%s " % cfgfile + cmd
>  return runfetchcmd(cmd, d, workdir=workdir)
>
> +cfg = open(cfgfile, "a")
>  if self.configs:
>  for key, value in self.configs:
> -_run("npm config set %s %s" % (key, shlex.quote(value)))
> +cfg.write("%s=%s\n" % (key, shlex.quote(value)))
> +#_run("npm config set %s %s" % (key, shlex.quote(value)))
>
>  if configs:
>  for key, value in configs:
> -_run("npm config set %s %s" % (key, shlex.quote(value)))
> +cfg.write("%s=%s\n" % (key, shlex.quote(value)))
> +# _run("npm config set %s %s" % (key, shlex.quote(value)))
> +
> +cfg.close()
>
>  if args:
>  for key, value in args:
> --
> 2.7.4
>
> Are there any side effects that I did not stumble over yet? And I'd LOVE to 
> have these calls running in a thread-pool for better performance...

The main side effect is that you're effectively patching poky, which
is bad for maintenance.

> Secondly, our projects are based on typescript, so a native compile step is 
> necessary to create a compiled version for packing. We experimented with a 
> separate release branch to check in compiled versions, but this is not easy 
> to handle. I played around with npm.bbclass and found a way to extend 
> configure (!) with a call to our build script before packaging:
>
> diff --git a/meta/classes/npm.bbclass b/meta/classes/npm.bbclass
> index 068032a1e5..31535098cf 100644
> --- a/meta/classes/npm.bbclass
> +++ b/meta/classes/npm.bbclass
> @@ -170,6 +170,11 @@ python npm_do_configure() {
>
>  # Configure the main package
>  with tempfile.TemporaryDirectory() as tmpdir:
> +# install all (native) build dependencies, overrides npm cache:
> +ret = os.system("npm i")
> +# run build step:
> +env.run("npm run build", args=[], workdir=d.getVar("S"))
> +
>  tarball = npm_pack(env, d.getVar("S"), tmpdir)
>  npm_unpack(tarball, d.getVar("NPM_PACKAGE"), d)
>
> As we have plain JS packages as well, I put the modified configure() in a 
> subclass and this works for us, but it does not look like a clean solution to 
> me. How do you other IoT'ers address this situation?

Again, patching poky is a bad idea. Creating custom bbclasses is much
neater: you could create a base include, and pull that together with
npm.bbclass into two final bbclasses of yours, like npm-js-voxel and
npm-ts-voxel. The former would not have the compilation step. And,
putting the typescript/webpack invocation into configure is also not
exactly how things are meant to work. I know that the dependency
tracking of npm is not easily compatible with bitbake, but the aim
should be to
1) have a recipe that provides typescript-native
2) DEPEND on typescript-native in the recipe which you need to compile
3) add a do_compile stage that does the work.
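
Purely as an illustration of that shape, a rough, untested sketch (names like typescript-native and the 'build' script are assumptions, not anything oe-core provides; and note Simon's follow-up below points out that dunfell's npm.bbclass already packs the sources during do_configure, so the class itself would still need rework for this to take effect):

```
# hypothetical fragment of a recipe for a TypeScript-based package
inherit npm

# 2) pull the TypeScript compiler in as a build-time dependency
DEPENDS += "typescript-native"

# 3) run the compile step as a task instead of hooking do_configure
do_compile_prepend() {
    cd ${S}
    # 'build' is assumed to be the package.json script that invokes tsc
    npm run build
}
```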

Greetz

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#52299): https://lists.yoctoproject.org/g/yocto/message/52299
Mute This Topic: https://lists.yoctoproject.org/mt/80579992/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] Dunfell, nodejs and typescript - short experience report

2021-02-12 Thread Simon Vogl

Hi,


On 12.02.21 at 11:12, Josef Holzmayr wrote:

Howdy!

Thanks for sharing, a few comments inline.

Am Fr., 12. Feb. 2021 um 11:03 Uhr schrieb Simon Vogl :

I have some remarks and questions about the npm/nodejs support in dunfell that 
I wanted to share. We are creating nodejs-based IoT edge solutions and 
upgrading our build environments to Dunfell one by one. In the course of this, 
we are switching to the new npm-implementation and found a few small issues.

Firstly, the do_configure() task takes quite some time to complete. After a 
quick analysis, I saw that most of the time is being spent in creating the 
npmrc files while packing the dependent packages. I wrote a small workaround to 
directly create the file instead of calling 'npm config', which results in a 
3x-4x speedup:

Signed-off-by: Simon Vogl 
---
  lib/bb/fetch2/npm.py | 9 +++--
  1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/lib/bb/fetch2/npm.py b/lib/bb/fetch2/npm.py
index 4789850..2720d87 100644
--- a/lib/bb/fetch2/npm.py
+++ b/lib/bb/fetch2/npm.py
@@ -97,13 +97,18 @@ class NpmEnvironment(object):
  cmd = "NPM_CONFIG_GLOBALCONFIG=%s " % cfgfile + cmd
  return runfetchcmd(cmd, d, workdir=workdir)

+cfg = open(cfgfile, "a")
  if self.configs:
  for key, value in self.configs:
-_run("npm config set %s %s" % (key, shlex.quote(value)))
+cfg.write("%s=%s\n" % (key, shlex.quote(value)))
+#_run("npm config set %s %s" % (key, shlex.quote(value)))

  if configs:
  for key, value in configs:
-_run("npm config set %s %s" % (key, shlex.quote(value)))
+cfg.write("%s=%s\n" % (key, shlex.quote(value)))
+# _run("npm config set %s %s" % (key, shlex.quote(value)))
+
+cfg.close()

  if args:
  for key, value in args:
--
2.7.4

Are there any side effects that I did not stumble over yet? And I'd LOVE to 
have these calls running in a thread-pool for better performance...

The main side effect is that you're effectively patching poky, which
is bad for maintenance.
I know, that's why I'm asking in the first place. But performance here 
is really really improvable.

Secondly, our projects are based on typescript, so a native compile step is 
necessary to create a compiled version for packing. We experimented with a 
separate release branch to check in compiled versions, but this is not easy to 
handle. I played around with npm.bbclass and found a way to extend configure 
(!) with a call to our build script before packaging:

diff --git a/meta/classes/npm.bbclass b/meta/classes/npm.bbclass
index 068032a1e5..31535098cf 100644
--- a/meta/classes/npm.bbclass
+++ b/meta/classes/npm.bbclass
@@ -170,6 +170,11 @@ python npm_do_configure() {

  # Configure the main package
  with tempfile.TemporaryDirectory() as tmpdir:
+# install all (native) build dependencies, overrides npm cache:
+ret = os.system("npm i")
+# run build step:
+env.run("npm run build", args=[], workdir=d.getVar("S"))
+
  tarball = npm_pack(env, d.getVar("S"), tmpdir)
  npm_unpack(tarball, d.getVar("NPM_PACKAGE"), d)

As we have plain JS packages as well, I put the modified configure() in a 
subclass and this works for us, but it does not look like a clean solution to 
me. How do you other IoT'ers address this situation?

Again, patching poky is a bad idea. Creating custom bbclasses is much
neater: you could create a base include, and pull that together with
npm.bbclass into two final bbclasses of yours, like npm-js-voxel and
npm-ts-voxel. The former would not have the compilation step. And,
putting the typescript/webpack invocation into configure is also not
exactly how things are meant to work. I know that the dependency
tracking of npm is not easily compatible with bitbake, but the aim
should be to
1) have a recipe that provides typescript-native
2) DEPEND on typescript-native in the recipe which you need to compile
3) add a do_compile stage that does the work.


Agreed, and I have a patched configure in my own subclass without 
changing the official codebase -- I just wanted to point where the 
modification needs to take place.


I actually tried the approach that you propose by playing around with 
configure_append / compile_prepend tasks, but these build steps are 
called after the package has already been packed --> the compiled data 
is not being  installed, I'd have to re-pack things.


Agreed, a typescript-native package would be nice, on the other hand 
this is where the npm-version-chaos comes in again: Many packages use 
different tsc versions,...


Simon



Greetz




--
VoXel Interaction Design  |  www.voxel.at
DI Dr.techn. Simon Vogl   |  si...@voxel.at
Tomaschekweg 46   |  +43 650 2323 555
A-4040 Linz - Austria |
Office address: Industriezeile 35, 4020 Linz (2nd floor)

Re: [yocto] Dunfell, nodejs and typescript - short experience report

2021-02-24 Thread TRO
Hi Simon,
I'm actually dealing with the same problem. Would you like to share your 
"configure in my own subclass"?

I'm also thinking there is a need for a bbclass which does not use gyp and 
instead is able to run "npm run build".

There is also a patch for speeding up the npm/npmsw fetcher: 
https://www.mail-archive.com/openembedded-core@lists.openembedded.org/msg142406.html
cheers Thomas

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#52473): https://lists.yoctoproject.org/g/yocto/message/52473
Mute This Topic: https://lists.yoctoproject.org/mt/80579992/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] Dunfell, nodejs and typescript - short experience report

2021-02-27 Thread Simon Vogl
Sure, please find the bbclass file attached to this mail. All it does is 
inherit npm and overwrite configure with an extended copy.


Meanwhile, I saw another issue popping up: it seems that multiple 
indirect dependencies on a package with different versions create 
inconsistent license checksum entries. In my case, several packages 
depend on 'xtend' in versions 4.0.0 to 4.0.2; the license file picked is 
package.json, which of course contains the version and therefore causes 
conflicts. Oh my.


Simon



On 24.02.21 at 12:02, TRO wrote:


Hi Simon,
I'm dealing actually with the same problem. Would you like to share 
your  "configure in my own subclass"?


I'm also thinking there is a need for a bbclass which actually is not 
using gyp, instead it should be able to "npm run build".


There is also a patch for speeding up the npm/npmsw fetcher: 
https://www.mail-archive.com/openembedded-core@lists.openembedded.org/msg142406.html

cheers Thomas





--
VoXel Interaction Design  |  www.voxel.at
DI Dr.techn. Simon Vogl   |  si...@voxel.at
Tomaschekweg 46   |  +43 650 2323 555
A-4040 Linz - Austria |
Office address: Industriezeile 35, 4020 Linz (2nd floor)

# Copyright (C) 2020 Savoir-Faire Linux
#
# SPDX-License-Identifier: GPL-2.0-only
#
# This bbclass builds and installs an npm package to the target. The package
# sources files should be fetched in the calling recipe by using the SRC_URI
# variable. The ${S} variable should be updated depending of your fetcher.
#
# Usage:
#  SRC_URI = "..."
#  inherit npm_run_build
#
# ... for the rest see npm.bbclass


inherit npm

python npm_run_build_do_configure() {
"""
Step one: configure the npm cache and the main npm package

Every dependencies have been fetched and patched in the source directory.
They have to be packed (this remove unneeded files) and added to the npm
cache to be available for the next step.

The main package and its associated manifest file and shrinkwrap file have
to be configured to take into account these cached dependencies.

[SV] copied from npm.bbclass and extended to run npm i and npm run build
before packing the main package.
"""
import base64
import copy
import json
import re
import shlex
import tempfile
from bb.fetch2.npm import NpmEnvironment
from bb.fetch2.npm import npm_unpack
from bb.fetch2.npmsw import foreach_dependencies
from bb.progress import OutOfProgressHandler

bb.utils.remove(d.getVar("NPM_CACHE"), recurse=True)
bb.utils.remove(d.getVar("NPM_PACKAGE"), recurse=True)

env = NpmEnvironment(d, configs=npm_global_configs(d))

def _npm_cache_add(tarball):
"""Run 'npm cache add' for a specified tarball"""
cmd = "npm cache add %s" % shlex.quote(tarball)
env.run(cmd)

def _npm_integrity(tarball):
"""Return the npm integrity of a specified tarball"""
sha512 = bb.utils.sha512_file(tarball)
return "sha512-" + base64.b64encode(bytes.fromhex(sha512)).decode()

def _npm_version(tarball):
"""Return the version of a specified tarball"""
regex = r"-(\d+\.\d+\.\d+(-.*)?(\+.*)?)\.tgz"
return re.search(regex, tarball).group(1)

def _npmsw_dependency_dict(orig, deptree):
"""
Return the sub dictionary in the 'orig' dictionary corresponding to the
'deptree' dependency tree. This function follows the shrinkwrap file
format.
"""
ptr = orig
for dep in deptree:
if "dependencies" not in ptr:
ptr["dependencies"] = {}
ptr = ptr["dependencies"]
if dep not in ptr:
ptr[dep] = {}
ptr = ptr[dep]
return ptr

# Manage the manifest file and shrinkwrap files
orig_manifest_file = d.expand("${S}/package.json")
orig_shrinkwrap_file = d.expand("${S}/npm-shrinkwrap.json")
cached_manifest_file = d.expand("${NPM_PACKAGE}/package.json")
cached_shrinkwrap_file = d.expand("${NPM_PACKAGE}/npm-shrinkwrap.json")

with open(orig_manifest_file, "r") as f:
orig_manifest = json.load(f)

cached_manifest = copy.deepcopy(orig_manifest)
cached_manifest.pop("dependencies", None)
cached_manifest.pop("devDependencies", None)

with open(orig_shrinkwrap_file, "r") as f:
orig_shrinkwrap = json.load(f)

cached_shrinkwrap = copy.deepcopy(orig_shrinkwrap)
cached_shrinkwrap.pop("dependencies", None)

# Manage the dependencies
progress = OutOfProgressHandler(d, r"^(\d+)/(\d+)$")
progress_total = 1 # also count the main package
progress_done = 0

def _count_dependency(name, params, deptree):
nonlocal progress_total
progress_total += 1

def _cache_dependency(name, params, deptree):
destsubdirs = [os.path.join("node_modules", dep) for dep in deptree]
destsuffix = os.path.join(*destsubdirs)
with tempfile.TemporaryDirectory() as tmpdir:
# Ad

Re: [yocto] Dunfell, nodejs and typescript - short experience report

2021-02-28 Thread TRO
Hi Simon,
thank you - in my current solution I don't use npm bbclass at all.
I basically use npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json;dev=True
this will download all npm stuff, including angular, to $S/node_modules 
because of dev=True

do_compile () {
#    build frontend
chmod -R a+w ${S}/node_modules/@angular
chmod 755 ${S}/node_modules/@angular/cli/bin/ng
cd ${S}/ && ./node_modules/@angular/cli/bin/ng build --prod
}

I'm dealing with that problem:
https://lists.yoctoproject.org/g/yocto/message/52515

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#52516): https://lists.yoctoproject.org/g/yocto/message/52516
Mute This Topic: https://lists.yoctoproject.org/mt/80579992/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] Problem with YOCTO Dunfell and host Fedora 33

2021-05-20 Thread Zoran
Hello Yocto developers,

I have a few problems running the following script of my own from
one of my public git repos:
https://github.com/ZoranStojsavljevic/bbb-yocto/blob/master/yocto-setup.sh

I recall that the last time I used the script (I was on Fedora 31 then),
the ./yocto setup dunfell step worked seamlessly, set up the environment,
and bitbake -k core-image-minimal then completed all tasks without any
problem.

Now, I am using Fedora 33 (in the meantime I did two Fedora version upgrades).

The problem is that while compiling the cannelloni package, the
following errors were issued (please, look into the attached file
cmake_problem.txt).

This cmake problem was introduced after switching from Fedora 31 to Fedora 33 ?!

Any clue/idea why this is happening??? What is the cause of the problem?

Thank you,
Zoran
___
[vuser@fedora33-ssd build]$ bitbake -k core-image-minimal
WARNING: Host distribution "fedora-33" has not been validated with this version 
of the build system; you may possibly experience unexpected failures. It is 
recommended that you use a tested distribution.
Loading cache: 100% 
|| 
Time: 0:00:00
Loaded 3309 entries from dependency cache.
Parsing recipes: 100% 
|##| 
Time: 0:00:01
Parsing of 2201 .bb files complete (2198 cached, 3 parsed). 3312 targets, 117 
skipped, 1 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies

Build Configuration:
BB_VERSION   = "1.46.0"
BUILD_SYS= "x86_64-linux"
NATIVELSBSTRING  = "universal"
TARGET_SYS   = "arm-poky-linux-gnueabi"
MACHINE  = "beaglebone"
DISTRO   = "poky"
DISTRO_VERSION   = "3.1.7"
TUNE_FEATURES= "arm vfp cortexa8 neon callconvention-hard"
TARGET_FPU   = "hard"
meta 
meta-poky
meta-yocto-bsp   = "dunfell:97a9f30f1c457c55bf0c791d0466ff8620110a49"
meta-jumpnow = "dunfell:b3995636741be0d219a50035c98ded8b48590888"
meta-bbb = "dunfell:fa02d8e9079c1cc18f83527588a9dd2747293992"
meta-oe  
meta-python  
meta-networking  = "dunfell:2915810edbb6599051e30efb3b7f805665ddcc23"
meta-qt5 = 
"upstream/dunfell:b4d24d70aca75791902df5cd59a4f4a54aa4a125"
meta-socketcan   = "master:4e7128b75ba731fc8be662385659fb7f9c440d12"

Initialising tasks: 100% 
|###| Time: 
0:00:04
Sstate summary: Wanted 10 Found 0 Missed 10 Current 1698 (0% match, 99% 
complete)
NOTE: Executing Tasks
ERROR: cannelloni-1.0-r0 do_compile: Execution of 
'/home/vuser/projects_yocto/bbb-yocto/poky/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/temp/run.do_compile.2406989'
 failed with exit code 1:
[1/12] 
/home/vuser/projects_yocto/bbb-yocto/poky/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/recipe-sysroot-native/usr/bin/arm-poky-linux-gnueabi/arm-poky-linux-gnueabi-g++
   -I. -Wall -DNDEBUG -MD -MT CMakeFiles/addsources.dir/connection.cpp.o -MF 
CMakeFiles/addsources.dir/connection.cpp.o.d -o 
CMakeFiles/addsources.dir/connection.cpp.o -c 
/home/vuser/projects_yocto/bbb-yocto/poky/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/git/connection.cpp
FAILED: CMakeFiles/addsources.dir/connection.cpp.o 
/home/vuser/projects_yocto/bbb-yocto/poky/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/recipe-sysroot-native/usr/bin/arm-poky-linux-gnueabi/arm-poky-linux-gnueabi-g++
   -I. -Wall -DNDEBUG -MD -MT CMakeFiles/addsources.dir/connection.cpp.o -MF 
CMakeFiles/addsources.dir/connection.cpp.o.d -o 
CMakeFiles/addsources.dir/connection.cpp.o -c 
/home/vuser/projects_yocto/bbb-yocto/poky/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/git/connection.cpp
In file included from 
/home/vuser/projects_yocto/bbb-yocto/poky/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/git/connection.cpp:21:
/home/vuser/projects_yocto/bbb-yocto/poky/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/git/connection.h:23:10:
 fatal error: linux/can/raw.h: No such file or directory
   23 | #include <linux/can/raw.h>
  |  ^
compilation terminated.
[2/12] 
/home/vuser/projects_yocto/bbb-yocto/poky/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/recipe-sysroot-native/usr/bin/arm-poky-linux-gnueabi/arm-poky-linux-gnueabi-g++
   -I. -Wall -DNDEBUG -MD -MT CMakeFiles/addsources.dir/thread.cpp.o -MF 
CMakeFiles/addsources.dir/thread.cpp.o.d -o 
CMakeFiles/addsources.dir/thread.cpp.o -c 
/home/vuser/projects_yocto/bbb-yocto/poky/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/git/thread.cpp
FAILED: CMakeFiles/addsources.dir/thread.cpp.o 
/home/vuser/projects_yocto/bbb-yocto/poky/build/tmp/work/cortexa8hf-neon-poky-linux-gnueabi/cannelloni/1.0-r0/recipe-sy

[yocto] Dunfell: problem with kernel-module install and libkmod.so

2021-10-15 Thread Patrick Boettcher
Hi list,

I'm facing an issue with a BSP I created using dunfell (up-to-date on
poky and oe). I'm using a stable kernel and u-boot 2021.07 from denx's
mainline-stable-layer.

I stripped down my machine.conf and basically the boot is working fine.
Was working fine.

When I finally created a functional defconfig (one which didn't strip the
kernel down to next to nothing - thanks to KCONFIG_MODE="alldefconfig"),
I started to create fragments to remove unused parts.
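(For reference, with a linux-yocto-style kernel recipe a fragment is normally
wired in through a bbappend roughly like this - the file and option names
below are only placeholders:)

    # linux-%.bbappend
    FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
    SRC_URI += "file://disable-unused.cfg"

    # disable-unused.cfg
    # CONFIG_SOMETHING_UNUSED is not set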

The very first fragment I created led to a rootfs which crashed the
moment /sbin/init was invoked, with the strangest errors I've ever seen:

  /sbin/init: error while loading shared libraries: libkmod.so.2:
  cannot open shared object file: No such file or directory

Of course libkmod.so.2 (and its target) is present.

Then I realized that no modules were installed in the rootfs (the modules
are there if I don't have the fragment).

So I added kernel-modules to IMAGE_INSTALL:append.
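(For reference, dunfell still uses the underscore override syntax, so in the
image recipe this amounts to roughly:)

    IMAGE_INSTALL_append = " kernel-modules"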

The modules appeared, but the panic still occurred.

What can I do to understand what's going on? I diff'ed the rootfs; the
only difference I could find was in ldconfig's aux-cache.

Thanks for any help in advance,
--
Patrick.




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#55086): https://lists.yoctoproject.org/g/yocto/message/55086
Mute This Topic: https://lists.yoctoproject.org/mt/86358110/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] [dunfell][PATCH] openssl: update from 1.1.1n to 1.1.1o

2022-05-18 Thread Daniel Karlsson
This update fixes CVE-2022-1292.

Signed-off-by: Daniel Karlsson 
---
 .../openssl/{openssl_1.1.1n.bb => openssl_1.1.1o.bb}| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
 rename meta/recipes-connectivity/openssl/{openssl_1.1.1n.bb => 
openssl_1.1.1o.bb} (98%)

diff --git a/meta/recipes-connectivity/openssl/openssl_1.1.1n.bb 
b/meta/recipes-connectivity/openssl/openssl_1.1.1o.bb
similarity index 98%
rename from meta/recipes-connectivity/openssl/openssl_1.1.1n.bb
rename to meta/recipes-connectivity/openssl/openssl_1.1.1o.bb
index 8538bd5a18..c9cfc759c9 100644
--- a/meta/recipes-connectivity/openssl/openssl_1.1.1n.bb
+++ b/meta/recipes-connectivity/openssl/openssl_1.1.1o.bb
@@ -24,7 +24,7 @@ SRC_URI_append_class-nativesdk = " \
file://environment.d-openssl.sh \
"
 
-SRC_URI[sha256sum] = "40dceb51a4f6a5275bde0e6bf20ef4b91bfc32ed57c0552e2e8e15463372b17a"
+SRC_URI[sha256sum] = "9384a2b0570dd80358841464677115df785edb941c71211f75076d72fe6b438f"
 
 inherit lib_package multilib_header multilib_script ptest
 MULTILIB_SCRIPTS = "${PN}-bin:${bindir}/c_rehash"
-- 
2.25.1


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#57127): https://lists.yoctoproject.org/g/yocto/message/57127
Mute This Topic: https://lists.yoctoproject.org/mt/91183012/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [dunfell][PATCH] pypi.bbclass: Set CVE_PRODUCT to PYPI_PACKAGE

2023-05-15 Thread Ross Burton
This patch should go to openembedded-c...@lists.openembedded.org.

Thanks,
Ross

> On 15 May 2023, at 11:55, Sanjaykumar kantibhai Chitroda -X (schitrod - 
> E-INFO CHIPS INC at Cisco) via lists.yoctoproject.org 
>  wrote:
> 
> From: Alex Kiernan 
> 
> The CVE product name for PyPI packages is (usually) the same as the PyPI
> package name (and not our recipe name), so use that as the default.
> 
> Signed-off-by: Alex Kiernan 
> Signed-off-by: Alex Kiernan 
> Signed-off-by: Richard Purdie 
> (cherry picked from commit 61f6b0ad09bf87cdc2d3f08770b7c44cad1d0e58)
> Signed-off-by: Sanjay Chitroda 
> ---
> meta/classes/pypi.bbclass | 2 ++
> 1 file changed, 2 insertions(+)
> 
> diff --git a/meta/classes/pypi.bbclass b/meta/classes/pypi.bbclass
> index 87b4c85fc0..c68367449a 100644
> --- a/meta/classes/pypi.bbclass
> +++ b/meta/classes/pypi.bbclass
> @@ -24,3 +24,5 @@ S = "${WORKDIR}/${PYPI_PACKAGE}-${PV}"
> 
> UPSTREAM_CHECK_URI ?= "https://pypi.org/project/${PYPI_PACKAGE}/";
> UPSTREAM_CHECK_REGEX ?= "/${PYPI_PACKAGE}/(?P(\d+[\.\-_]*)+)/"
> +
> +CVE_PRODUCT ?= "python:${PYPI_PACKAGE}"
> -- 
> 2.35.6
> 
> 
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#60026): https://lists.yoctoproject.org/g/yocto/message/60026
Mute This Topic: https://lists.yoctoproject.org/mt/98900696/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: 
https://lists.yoctoproject.org/g/yocto/leave/6691583/21656/737036229/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] Update the yocto dunfell to latest version

2023-12-18 Thread Khem Raj
On Sun, Dec 17, 2023 at 11:37 PM MOHAMMED HASSAN
 wrote:
>
> Hi guys,
> Can you suggest some resources for me to update to the latest yocto dunfell 
> version. Currently I am using dunfell 3.1.11 and want to update to 3.1.29.
> What could be the consequences?

This is relatively safe to do, since point releases do not contain major
features and essentially consist of bug fixes and security fixes. However,
it is not guaranteed to be smooth: there might be a particular bugfix that
changes a behaviour you are relying on, so some due diligence is needed.
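A minimal sketch of such an update for a plain poky checkout (the tag name is
taken from the version mentioned above; any additional layers need the same
treatment):

    cd poky
    git fetch --tags origin
    git checkout yocto-3.1.29    # or stay on the dunfell branch and git pull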

> Also If I manually update the nodejs folder to the latest one will it fetch 
> the latest nodejs files(example)?

This is your call, and hence the problems you encounter will also be
unique to you. You can seek help in the community, and if some folks have
already tried what you are about to do, you might find some support.

>
> Regards,
> Hassan
>
> 
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#61967): https://lists.yoctoproject.org/g/yocto/message/61967
Mute This Topic: https://lists.yoctoproject.org/mt/103238937/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] Dunfell busybox NFS mount fails to build

2020-06-13 Thread Khem Raj
On Saturday, June 13, 2020 4:06:59 PM PDT MikeB wrote:
> I'm trying to build busybox on Dunfell with NFS mount configured
> (CONFIG_FEATURE_MOUNT_NFS).  The build fails with the following.
> | util-linux/mount.c:253:11: fatal error: rpc/rpc.h: No such file or
> | directory|
> |   253 | # include <rpc/rpc.h>
> 
> Reading online, later versions of glibc no longer install rpc headers.  One
> suggested solution is to configure glibc with --enable-obsolete-rpc.  I
> tried that, but then the build fails as follows.
> 
> ERROR: tcp-wrappers-7.6-r10 do_prepare_recipe_sysroot: The file
> /usr/include/rpcsvc/yppasswd.x is installed by both glibc and libnsl2,
> aborting
> 
> I also tried adding libtirpc as a dependency to busybox; no luck there
> either.
> 
> Can someone give me a workaround for this problem?
> 

Obsolete rpc is really not supported, as you can see, but if you are using
it, also ensure libnsl2 is not used either.
Remove libnsl2 from the tcp-wrappers dependencies.
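(As a sketch, assuming libnsl2 indeed shows up in the recipe's DEPENDS, a
tcp-wrappers_%.bbappend in your own layer could carry:)

    DEPENDS_remove = "libnsl2"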
 
> Thanks.




-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#49638): https://lists.yoctoproject.org/g/yocto/message/49638
Mute This Topic: https://lists.yoctoproject.org/mt/74866908/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [yocto] Dunfell busybox NFS mount fails to build

2020-06-15 Thread MikeB
Removing the libnsl2 dependency worked for tcp-wrappers, but a similar
situation arose with python3.  In the case of python3, removing libnsl2
dependency caused other failures in the build.

However, reading some comments in util-linux/mount.c, the
CONFIG_FEATURE_MOUNT_NFS
seems to only apply to kernels < 2.6.23.  So, I'm going to leave it
undefined until I can do further testing.

Regards, Mike

On Sun, Jun 14, 2020 at 1:08 AM Khem Raj  wrote:

> On Saturday, June 13, 2020 4:06:59 PM PDT MikeB wrote:
> > I'm trying to build busybox on Dunfell with NFS mount configured
> > (CONFIG_FEATURE_MOUNT_NFS).  The build fails with the following.
> > | util-linux/mount.c:253:11: fatal error: rpc/rpc.h: No such file or
> > | directory|
> > |   253 | # include 
> >
> > Reading online, later versions of glibc no longer install rpc headers.
> Once
> > suggested solution is to configure glibc with --enable-obsolete-rpc.  I
> > tried that, but then the build fails as follows.
> >
> > ERROR: tcp-wrappers-7.6-r10 do_prepare_recipe_sysroot: The file
> > /usr/include/rpcsvc/yppasswd.x is installed by both glibc and libnsl2,
> > aborting
> >
> > I also tried adding libtirpc as a dependency to busybox; no luck there
> > either.
> >
> > Can someone give me a workaround for this problem?
> >
>
> Obsolete rpc is really not supported as you can see but if you are using
> it
> also ensure libnsl2 is not used either.
> remove libnsl2 from tcp-wrappers dependencies
>
> > Thanks.
>
>
>
>
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#49642): https://lists.yoctoproject.org/g/yocto/message/49642
Mute This Topic: https://lists.yoctoproject.org/mt/74866908/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


[yocto] dunfell gcc-sanitizers-arm-8.3 fails to build

2020-07-31 Thread Ryan Harkin
Hello,

I'm migrating from Warrior to Dunfell and I'm getting a curious build
failure in gcc-sanitizers.

Here's the full gory detail:
https://pastebin.ubuntu.com/p/nh4cDKMvgS/

However, the main error is this:

| In file included from
../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cc:193:
|
../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_internal_defs.h:317:72:
error: size of array 'assertion_failed__1152' is negative
|  typedef char IMPL_PASTE(assertion_failed_##_, line)[2*(int)(pred)-1]

I have no idea where to begin with this. I don't even know why
gcc-sanitizers is included in the build, what it does, or why I need it.
I'm building an image with dev packages and gcc, so I guess that's why.

I've hacked meta-arm to patch sanitizer_platform_limits_posix.cc to null
out the macros and that builds fine. I'm sure it won't work, should someone
want to use it, mind you.

Is there something obvious that I should be doing as part of a Warrior ->
Dunfell migration to get this to work?

note: Warrior used meta-linaro-toolchain and for Dunfell, it's moved to
meta-arm-toolchain.

Regards,
Ryan.
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#50152): https://lists.yoctoproject.org/g/yocto/message/50152
Mute This Topic: https://lists.yoctoproject.org/mt/75909560/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [yocto] Information about Yocto Dunfell components version #dunfell

2020-09-01 Thread Josef Holzmayr-Khosh Amoz
Insert the package names into
http://layers.openembedded.org/layerindex/branch/dunfell/recipes and see
for yourself :)

Greetz

On Tue, 1 Sept 2020 at 12:22, Yocto_user <avinashyadav9...@gmail.com> wrote:

> Hi All,
>
> I read the Dunfell release notes but couldn't find the version detail of
> following components:
> bzip2
> cairo
> faad2
> fontconfig
> freetype
> gdkpixbuf
> libav
> libffi
> libjpeg
> libogg
> libpng
> libtiff
> libvorbis
> libxcursor
> openexr
> openssl
> pango
> pixman
> stb-image
> swscale
> theora
> x264
> zlib
> Please give me the version detail of these components,  we are planning to
> use dunfell but before that we needed the version detail of all these
> components in Yocto. 
>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#50420): https://lists.yoctoproject.org/g/yocto/message/50420
Mute This Topic: https://lists.yoctoproject.org/mt/76553173/21656
Mute #dunfell: https://lists.yoctoproject.org/g/yocto+yocto/mutehashtag/dunfell
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [yocto] Information about Yocto Dunfell components version #dunfell

2020-09-01 Thread Paul Barker
On Tue, 1 Sep 2020 at 11:25, Josef Holzmayr-Khosh Amoz
 wrote:
>
> Insert the package names into 
> http://layers.openembedded.org/layerindex/branch/dunfell/recipes and see for 
> yourself :)
>
> Greetz
>
> On Tue, 1 Sept 2020 at 12:22, Yocto_user wrote:
>>
>> Hi All,
>>
>> I read the Dunfell release notes but couldn't find the version detail of 
>> following components:
>> bzip2
>> cairo
>> faad2
>> fontconfig
>> freetype
>> gdkpixbuf
>> libav
>> libffi
>> libjpeg
>> libogg
>> libpng
>> libtiff
>> libvorbis
>> libxcursor
>> openexr
>> openssl
>> pango
>> pixman
>> stb-image
>> swscale
>> theora
>> x264
>> zlib
>>
>> Please give me the version detail of these components,  we are planning to 
>> use dunfell but before that we needed the version detail of all these 
>> components in Yocto.

Also worth noting: the exact versions used will depend on which commit
from the dunfell branch you choose. You should follow updates to the
stable branch to get the latest security fixes for your packages, so the
versions will not be perfectly static anyway.
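(In a local checkout you can also list the exact versions that will be built,
for example:)

    bitbake -s | grep -E 'openssl|zlib|bzip2'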

-- 
Paul Barker
Konsulko Group
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#50422): https://lists.yoctoproject.org/g/yocto/message/50422
Mute This Topic: https://lists.yoctoproject.org/mt/76553173/21656
Mute #dunfell: https://lists.yoctoproject.org/g/yocto+yocto/mutehashtag/dunfell
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub  
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-


Re: [yocto] Problem with YOCTO Dunfell and host Fedora 33

2021-05-20 Thread Joel Winarske
Hi Zoran,

Your cannelloni recipe is set to autorev, meaning it's not locked to a
commit.  So when something changes upstream you have to manage it.

Chances are cannelloni introduced a CMake change which is overwriting
(as opposed to appending) one or more variables required for cross compiling.
Perhaps try to cross compile (not a host build) cannelloni by itself without
Yocto involved.  Once that's sorted, then reintroduce Yocto.


Joel


On Thu, May 20, 2021, 6:58 AM Zoran  wrote:

> Hello Yocto developers,
>
> I have few problems running the following self proprietary script from
> one of my public git repos:
> https://github.com/ZoranStojsavljevic/bbb-yocto/blob/master/yocto-setup.sh
>
> I recall that last time I used the script (I used then Fedora 31), the
> ./yocto setup dunfell worked seamlessly, did setup the environment,
> and upon bitbake -k core-image-minimal completed the tasks without any
> problem.
>
> Now, I am using Fedora 33 (in the meantime I did two Fedora version
> upgrades).
>
> The problem is that while compiling the cannelloni package, the
> following errors were issued (please, look into the attached file
> cmake_problem.txt).
>
> This cmake problem was introduced after switching from Fedora 31 to Fedora
> 33 ?!
>
> Any clue/idea why this is happening??? What is the cause of the problem?
>
> Thank you,
> Zoran
> ___
>
> 
>
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#53606): https://lists.yoctoproject.org/g/yocto/message/53606
Mute This Topic: https://lists.yoctoproject.org/mt/82961982/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] Problem with YOCTO Dunfell and host Fedora 33

2021-05-20 Thread Zoran
Hello Joel,

Thank you for the tips. Really helpful, appreciated very much.

I spent some time this morning investigating this issue and finding
the culprit.

Here are my findings, which resulted in a cannelloni.bb recipe change
(according to what you wrote).

The fix submitted is in recipe:
https://github.com/ZoranStojsavljevic/meta-socketcan/blob/master/recipes-can/cannelloni/cannelloni.bb

The last cannelloni version which works is:
https://github.com/mguentner/cannelloni/commit/0bd7e27db35bdef361226882ae04205504f7b2f4

The culprit introducing the cmake errors is this one:
https://github.com/mguentner/cannelloni/commit/d01dd1dc745914d129b1f4da2074e282253246af

And, the issue recorded with Maximilian Guentner's cannelloni repo:
https://github.com/mguentner/cannelloni/issues/35
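In recipe terms, the interim fix boiled down to pinning the recipe to the last
known-good revision instead of AUTOREV, roughly (the actual recipe contents
may differ):

    SRCREV = "0bd7e27db35bdef361226882ae04205504f7b2f4"
    PV = "1.0+git${SRCPV}"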

Thank you again,
Zoran
___

On Thu, May 20, 2021 at 4:48 PM Joel Winarske  wrote:
>
> Hi Zoran,
>
> Your cannelloni recipe is set to autorev, meaning it's not locked to a 
> commit.  So when something changes upstream you have to manage it.
>
> Chances are Canelloni introduced a CMake change which is overwriting (opposed 
> to appending) one or more variables required for cross compiling.  Perhaps 
> try to cross compile (not a host build) Canelloni by itself without Yocto 
> involved.  Once that's sorted, then reintroduce yocto.
>
>
> Joel
>
>
> On Thu, May 20, 2021, 6:58 AM Zoran  wrote:
>>
>> Hello Yocto developers,
>>
>> I have few problems running the following self proprietary script from
>> one of my public git repos:
>> https://github.com/ZoranStojsavljevic/bbb-yocto/blob/master/yocto-setup.sh
>>
>> I recall that last time I used the script (I used then Fedora 31), the
>> ./yocto setup dunfell worked seamlessly, did setup the environment,
>> and upon bitbake -k core-image-minimal completed the tasks without any
>> problem.
>>
>> Now, I am using Fedora 33 (in the meantime I did two Fedora version 
>> upgrades).
>>
>> The problem is that while compiling the cannelloni package, the
>> following errors were issued (please, look into the attached file
>> cmake_problem.txt).
>>
>> This cmake problem was introduced after switching from Fedora 31 to Fedora 
>> 33 ?!
>>
>> Any clue/idea why this is happening??? What is the cause of the problem?
>>
>> Thank you,
>> Zoran
>> ___
>>
>> 
>>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#53614): https://lists.yoctoproject.org/g/yocto/message/53614
Mute This Topic: https://lists.yoctoproject.org/mt/82961982/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] Problem with YOCTO Dunfell and host Fedora 33

2021-06-29 Thread Zoran
Hello to everyone,

Mguentner fixed the cmake issue:
https://github.com/mguentner/cannelloni/issues/35

With this patch:
https://github.com/mguentner/cannelloni/commit/125a7c72e4bcbbf580aeb6ee03e25ed0540be217

So I also reinstated the old cannelloni recipe with:
https://github.com/ZoranStojsavljevic/meta-socketcan/commit/b79e35425b72ba1caf90404a953235a43202e16f

Zee
___

On Fri, May 21, 2021 at 7:55 AM Zoran via lists.yoctoproject.org
 wrote:
>
> Hello Joel,
>
> Thank you for the tips. Really helpful, appreciated very much.
>
> I spent some time this morning investigating this issue, and to find
> the culprit.
>
> Here are my findings, which resulted in a cannelloni.bb recipe change
> (according to what you wrote).
>
> The fix submitted is in recipe:
> https://github.com/ZoranStojsavljevic/meta-socketcan/blob/master/recipes-can/cannelloni/cannelloni.bb
>
> The last cannelloni version which works is:
> https://github.com/mguentner/cannelloni/commit/0bd7e27db35bdef361226882ae04205504f7b2f4
>
> The culprit introducing the cmake errors is this one:
> https://github.com/mguentner/cannelloni/commit/d01dd1dc745914d129b1f4da2074e282253246af
>
> And, the issue recorded with Maximilian Guentner's cannelloni repo:
> https://github.com/mguentner/cannelloni/issues/35
>
> Thank you again,
> Zoran
> ___
>
> On Thu, May 20, 2021 at 4:48 PM Joel Winarske  wrote:
> >
> > Hi Zoran,
> >
> > Your cannelloni recipe is set to autorev, meaning it's not locked to a 
> > commit.  So when something changes upstream you have to manage it.
> >
> > Chances are Canelloni introduced a CMake change which is overwriting 
> > (opposed to appending) one or more variables required for cross compiling.  
> > Perhaps try to cross compile (not a host build) Canelloni by itself without 
> > Yocto involved.  Once that's sorted, then reintroduce yocto.
> >
> >
> > Joel
> >
> >
> > On Thu, May 20, 2021, 6:58 AM Zoran  wrote:
> >>
> >> Hello Yocto developers,
> >>
> >> I have few problems running the following self proprietary script from
> >> one of my public git repos:
> >> https://github.com/ZoranStojsavljevic/bbb-yocto/blob/master/yocto-setup.sh
> >>
> >> I recall that last time I used the script (I used then Fedora 31), the
> >> ./yocto setup dunfell worked seamlessly, did setup the environment,
> >> and upon bitbake -k core-image-minimal completed the tasks without any
> >> problem.
> >>
> >> Now, I am using Fedora 33 (in the meantime I did two Fedora version 
> >> upgrades).
> >>
> >> The problem is that while compiling the cannelloni package, the
> >> following errors were issued (please, look into the attached file
> >> cmake_problem.txt).
> >>
> >> This cmake problem was introduced after switching from Fedora 31 to Fedora 
> >> 33 ?!
> >>
> >> Any clue/idea why this is happening??? What is the cause of the problem?
> >>
> >> Thank you,
> >> Zoran
> >> ___
> >>
> >>
> >>
>
> 
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#54011): https://lists.yoctoproject.org/g/yocto/message/54011
Mute This Topic: https://lists.yoctoproject.org/mt/82961982/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] dunfell: pkgconfig-native build fails in existing Yocto BSP

2021-10-24 Thread Matthias Klein
Hello,

our dunfell-based Yocto build no longer gets past pkgconfig-native. I
tried it in Ubuntu 18.04 and Debian 10 Docker containers.
The build always aborts as follows:

checking thread related cflags... -D_REENTRANT
| checking for pthread_create/pthread_join... no
| checking for pthread_create/pthread_join in -lpthread... no
| checking for pthread_create/pthread_join in -lpthread32... no
| checking for pthread_create/pthread_join in -lpthreads... no
| checking for pthread_create/pthread_join in -lthread... no
| configure: error: I can't find the libraries for the thread implementation
|  posix. Please choose another thread implementation or
|   provide information on your thread implementation.
| configure: error: ../../git/glib/configure failed for glib
| NOTE: The following config.log files may provide further information.
| NOTE: 
/work/yocto-optimeas/build/tmp/work/x86_64-linux/pkgconfig-native/0.29.2+gitAUTOINC+edf8e6f0ea-r0/build/glib/config.log
 
/work/yocto-optimeas/build/tmp/work/x86_64-linux/pkgconfig-native/0.29.2+gitAUTOINC+edf8e6f0ea-r0/build/config.log
| ERROR: configure failed
| WARNING: 
/work/yocto-optimeas/build/tmp/work/x86_64-linux/pkgconfig-native/0.29.2+gitAUTOINC+edf8e6f0ea-r0/temp/run.do_configure.243679:1
 exit 1 from 'exit 1'
| ERROR: Execution of 
'/work/yocto-optimeas/build/tmp/work/x86_64-linux/pkgconfig-native/0.29.2+gitAUTOINC+edf8e6f0ea-r0/temp/run.do_configure.243679'
 failed with exit code 1
ERROR: Task 
(virtual:native:/work/yocto-optimeas/yocto/poky/meta/recipes-devtools/pkgconfig/pkgconfig_git.bb:do_configure)
 failed with exit code '1'
ERROR: ninja-native-1.10.0-r0 do_compile: Execution of 
'/work/yocto-optimeas/build/tmp/work/x86_64-linux/ninja-native/1.10.0-r0/temp/run.do_compile.319277'
 failed with exit code 1
ERROR: Logfile of failure stored in: 
/work/yocto-optimeas/build/tmp/work/x86_64-linux/ninja-native/1.10.0-r0/temp/log.do_compile.319277
Log data follows:
| DEBUG: Executing shell function do_compile
| ninja: fatal: posix_spawn: Operation not permitted
| bootstrapping ninja...
| wrote build.ninja.
| bootstrap complete.  rebuilding...
| Traceback (most recent call last):
|   File "./configure.py", line 709, in 
| subprocess.check_call(rebuild_args)
|   File "/usr/lib/python3.7/subprocess.py", line 347, in check_call
| raise CalledProcessError(retcode, cmd)
| subprocess.CalledProcessError: Command '['./ninja']' returned non-zero exit 
status 1.
| WARNING: 
/work/yocto-optimeas/build/tmp/work/x86_64-linux/ninja-native/1.10.0-r0/temp/run.do_compile.319277:1
 exit 1 from 'python3 ./configure.py --bootstrap'
| ERROR: Execution of 
'/work/yocto-optimeas/build/tmp/work/x86_64-linux/ninja-native/1.10.0-r0/temp/run.do_compile.319277'
 failed with exit code 1
ERROR: Task 
(virtual:native:/work/yocto-optimeas/yocto/poky/meta/recipes-devtools/ninja/ninja_1.10.0.bb:do_compile)
 failed with exit code '1'


I also tried using the commits from dunfell-next in the poky repo as a test. 
But then the build doesn't even start and I get the following error:

ERROR: ParseError at 
/work/yocto-optimeas/yocto/meta-openembedded/meta-oe/recipes-extended/libimobiledevice/libplist_2.1.0.bb:9:
 Could not inherit file classes/python3targetconfig.bbclass 
  | ETA:  --:--:--


Does anyone have any idea what is causing this? Does anyone see the same 
behavior?

Many greetings,
Matthias


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#55129): https://lists.yoctoproject.org/g/yocto/message/55129
Mute This Topic: https://lists.yoctoproject.org/mt/86571641/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] Dunfell: problem with kernel-module install and libkmod.so

2021-10-28 Thread Patrick Boettcher
On Fri, 15 Oct 2021 20:03:52 +
"Patrick Boettcher"  wrote:

>Hi list,
>
>I'm facing an issue with a BSP I created using dunfell (up-to-date on
>poky and oe). I'm using a stable kernel and u-boot 2021.07 from denx's
>mainline-stable-layer.

It was a problem of DDR memory configuration and calibration.

Hard to track, but once you know it, easy to find and fix.

--
Patrick.

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#55181): https://lists.yoctoproject.org/g/yocto/message/55181
Mute This Topic: https://lists.yoctoproject.org/mt/86358110/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] Dunfell - ERROR: ca-certificates-20211016-r0 do_fetch: Fetcher failure

2021-11-03 Thread Darcy Watkins
Hi,

After syncing up Yocto dunfell, I get the following error:


dwatkins@carmd-ed-n11377-docker-dwatkins_apollo17:64bit build $ bitbake 
ca-certificates -c fetch

Loading cache: 100% 
|#|
 Time: 0:00:00

Loaded 4042 entries from dependency cache.

Parsing recipes: 100% 
|###|
 Time: 0:00:00

Parsing of 2833 .bb files complete (2815 cached, 18 parsed). 4060 targets, 183 
skipped, 0 masked, 0 errors.

WARNING: No recipes available for:

  
/home/dwatkins/workspace/mgos/apollo17/meta-mg90-bsp/recipes-kernel/firmware/linux-firmware_git.bbappend

  
/home/dwatkins/workspace/mgos/apollo17/meta-mg90-bsp/recipes-kernel/linux/linux-qoriq_4.19.bbappend

  
/home/dwatkins/workspace/mgos/apollo17/meta-mgos-distro/meta-openssl-fips/recipes-support/openssl/openssl_1.0.2%.bbappend

  
/home/dwatkins/workspace/mgos/apollo17/meta-mgos-core/recipes-support/cherrypy/cherrypy-python_%.bbappend

  
/home/dwatkins/workspace/mgos/apollo17/meta-mgos-core/recipes-support/hostapd/hostapd_2.6.bbappend

  
/home/dwatkins/workspace/mgos/apollo17/meta-mgos-core/recipes-support/hostapd/hostapd_2.8.bbappend

  
/home/dwatkins/workspace/mgos/apollo17/meta-mgos-core/recipes-support/wpa-supplicant/wpa-supplicant_2.6.bbappend

  
/home/dwatkins/workspace/mgos/apollo17/meta-mgos-core/recipes-support/wpa-supplicant/wpa-supplicant_2.7.bbappend

NOTE: Resolving any missing task queue dependencies



Build Configuration:

BB_VERSION   = "1.46.0"

BUILD_SYS= "x86_64-linux"

NATIVELSBSTRING  = "universal"

TARGET_SYS   = "arm-poky-linux-gnueabi"

MACHINE  = "mg90"

DISTRO   = "mgos"

DISTRO_VERSION   = "3.1.11"

TUNE_FEATURES= "arm vfp cortexa7 neon callconvention-hard"

TARGET_FPU   = "hard"

meta-mgos-core   = "main:96c5c6d35f19d16f65100ee29cb23e9a1470876c"

meta-mgos-release= "main:0825ac63c95db495330848f80d6d68b6f47a77d4"

meta-mg90-bsp= "main:47d0284b7a337df7587055c405213f9428c94884"

meta-mgos-airprime   = "main:5e8ffb01629c60d282b22e3313740e3b2cf325f4"

meta

meta-daisy-cf

meta-openssl-fips

meta-sigma   = "main:abf8a7a7408b690dfb0dff796ce8e94b6b661b0d"

meta

meta-poky

meta-yocto-bsp   = "HEAD:0810ac6b926cd901f0619e95f367efc79d4c3159"

meta-oe

meta-networking

meta-python

meta-perl= "HEAD:814eec96c2a29172da57a425a3609f8b6fcc6afe"

meta-security

meta-integrity

meta-security-compliance

meta-security-isafw  = "HEAD:b76698c788cb8ca632077a972031899ef15025d6"

meta-freescale   = "HEAD:727fd8df20c8ee58474ce15cd5e1459f14bee977"

meta-java= "HEAD:6e84638d77ac921aac46649095bca5ddbde94d2a"

workspace= ":"



Initialising tasks: 100% 
||
 Time: 0:00:00

Sstate summary: Wanted 0 Found 0 Missed 0 Current 0 (0% match, 0% complete)

NOTE: No setscene tasks

NOTE: Executing Tasks

WARNING: ca-certificates-20211016-r0 do_fetch: Failed to fetch URL 
git://salsa.debian.org/debian/ca-certificates.git;protocol=https, attempting 
MIRRORS if available

ERROR: ca-certificates-20211016-r0 do_fetch: Fetcher failure: Fetch command 
export PSEUDO_DISABLED=1; export 
PATH="/home/dwatkins/workspace/mgos/apollo17/build/tmp/sysroots-uninative/x86_64-linux/usr/bin:/home/dwatkins/workspace/mgos/apollo17/upstream/yocto/poky/scripts:/home/dwatkins/workspace/mgos/apollo17/build/tmp/work/all-poky-linux/ca-certificates/20211016-r0/recipe-sysroot-native/usr/bin/allarch-poky-linux:/home/dwatkins/workspace/mgos/apollo17/build/tmp/work/all-poky-linux/ca-certificates/20211016-r0/recipe-sysroot/usr/bin/crossscripts:/home/dwatkins/workspace/mgos/apollo17/build/tmp/work/all-poky-linux/ca-certificates/20211016-r0/recipe-sysroot-native/usr/sbin:/home/dwatkins/workspace/mgos/apollo17/build/tmp/work/all-poky-linux/ca-certificates/20211016-r0/recipe-sysroot-native/usr/bin:/home/dwatkins/workspace/mgos/apollo17/build/tmp/work/all-poky-linux/ca-certificates/20211016-r0/recipe-sysroot-native/sbin:/home/dwatkins/workspace/mgos/apollo17/build/tmp/work/all-poky-linux/ca-certificates/20211016-r0/recipe-sysroot-native/bin:/home/dwatkins/workspac

Re: [yocto] [OE-core] Question of Yocto dunfell LTS period

2022-01-13 Thread Jate Sujjavanich
I received the answer to this question in IRC on 10/29/2021.

16:06 <https://libera.irclog.whitequark.org/yocto/2021-10-29#31083490> <jatedev> I saw a wiki edit that said dunfell's LTS period is now through April 2024, but I've seen no announcements. Any truth to this rumor?
16:20 <https://libera.irclog.whitequark.org/yocto/2021-10-29#31083555> <smurray> jatedev: it was announced on one of the technical calls a few weeks back, so yes AFAIK
16:25 <https://libera.irclog.whitequark.org/yocto/2021-10-29#31083577> jatedev: definitely true. I wish we could sort out the project advocacy side of things :(

Let this email serve as a little advocacy.


-Jate S.

On Thu, Jan 13, 2022 at 8:14 PM Chanho Park via lists.openembedded.org
 wrote:

> Dear folks,
>
> I found the lts period of Yocto dunfell has been extended from 2022.04 to
> 2024.04[1].
> AFAIK, a Yocto LTS version has 2 years period. I tried to find any notice
> regarding this but I just found Richard's e-mail[2] in last year.
> Can I find any thread or notice regarding this extension?
>
> [1]: https://wiki.yoctoproject.org/wiki/Releases
> [2]:
>
> https://lore.kernel.org/yocto/3878e27fb895718b8c5d65f444fea443a56e0587.camel
> @linuxfoundation.org/
>
> Best Regards,
> Chanho Park
>
>
> 
>
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#55848): https://lists.yoctoproject.org/g/yocto/message/55848
Mute This Topic: https://lists.yoctoproject.org/mt/88412101/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [OE-core] Question of Yocto dunfell LTS period

2022-01-13 Thread Michael Opdenacker

On 1/14/22 2:36 AM, Jate Sujjavanich wrote:
> I received the answer to this question in IRC on 10/29/2021.
>
> 16:06 <jatedev> I saw a wiki edit that said dunfell's LTS period is now
> through April 2024, but I've seen no announcements. Any truth to this rumor?
> 16:20 <smurray> jatedev: it was announced on one of the technical calls a
> few weeks back, so yes AFAIK
> 16:25 jatedev: definitely true. I wish we could sort out the project
> advocacy side of things :(
>
> Let this email serve as a little advocacy.


I believe this would also help if the website (releases page) and/or
official documentation carried such information.
Thanks for raising this concern!
Cheers
Michael.

-- 
Michael Opdenacker, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#55849): https://lists.yoctoproject.org/g/yocto/message/55849
Mute This Topic: https://lists.yoctoproject.org/mt/88412101/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [OE-core] Question of Yocto dunfell LTS period

2022-01-14 Thread Alexander Kanavin
Actually, I think this was some kind of missed PR opportunity. Regular
distros, such as RHEL, Debian and everyone else, are constantly trotting
out their support windows as the reason to hand them the job of making
products, so we could counter that better perhaps.

Alex

On Fri, 14 Jan 2022 at 06:28, Michael Opdenacker <michael.opdenac...@bootlin.com> wrote:

>
> On 1/14/22 2:36 AM, Jate Sujjavanich wrote:
> > I received the answer to this question in IRC on 10/29/2021.
> >
> > 16:06
> >  > 
> > I saw a wiki edit that said dunfell's LTS period is now through April
> > 2024, but I've seen no announcements. Any truth to this rumor?
> > 16:20
> >  > 
> > jatedev: it was announced on one of the technical calls a few weeks
> > back, so yes AFAIK
> > 16:25
> >  
> > jatedev: definitely true. I wish we could sort out the project
> > advocacy side of things :(
> >
> > Let this email serve as a little advocacy.
>
>
> I believe this would also help if the website (releases page) and/or
> official documentation carried such information.
> Thanks for raising this concern!
> Cheers
> Michael.
>
> --
> Michael Opdenacker, Bootlin
> Embedded Linux and Kernel engineering
> https://bootlin.com
>
>
> 
>
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#55873): https://lists.yoctoproject.org/g/yocto/message/55873
Mute This Topic: https://lists.yoctoproject.org/mt/88412101/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [OE-core] Question of Yocto dunfell LTS period

2022-01-14 Thread Zoltan Boszormenyi via lists.yoctoproject.org

Funny you mention PR.

It would also help if recipes that stay on the same version
but gain patches, e.g. for CVE fixes, increased their
PR value, so that the rebuilt versions can be put into a package repo.

Throwing away the buildroot (as suggested any time some obscure
build error happens) and restarting a complete image build
resets the autopr value, so the repo would contain packages with
the same r0.0 release string and the package manager doesn't know
that a package is newer.

On 14 Jan 2022 at 10:42, Alexander Kanavin wrote:
Actually, I think this was some kind of missed PR opportunity. Regular distros, such as 
RHEL, Debian and everyone else, are constantly trotting out their support windows as the 
reason to hand them the job of making products, so we could counter that better perhaps.


Alex

On Fri, 14 Jan 2022 at 06:28, Michael Opdenacker wrote:



On 1/14/22 2:36 AM, Jate Sujjavanich wrote:
 > I received the answer to this question in IRC on 10/29/2021.
 >
 > 16:06 <jatedev> I saw a wiki edit that said dunfell's LTS period is now through April 2024, but I've seen no announcements. Any truth to this rumor?
 > 16:20 <smurray> jatedev: it was announced on one of the technical calls a few weeks back, so yes AFAIK
 > 16:25 jatedev: definitely true. I wish we could sort out the project advocacy side of things :(
 >
 > Let this email serve as a little advocacy.


I believe this would also help if the website (releases page) and/or
official documentation carried such information.
Thanks for raising this concern!
Cheers
Michael.

-- 
Michael Opdenacker, Bootlin

Embedded Linux and Kernel engineering
https://bootlin.com 

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#55877): https://lists.yoctoproject.org/g/yocto/message/55877
Mute This Topic: https://lists.yoctoproject.org/mt/88412101/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [OE-core] Question of Yocto dunfell LTS period

2022-01-14 Thread Ross Burton
On Fri, 14 Jan 2022 at 12:41, Zoltan Boszormenyi via
lists.yoctoproject.org  wrote:
> It would also help if recipes staying on the same version
> but adding patches for e.g. CVE fixes should increase their
> PR value so their rebuilt versions can be put into a package repo.
>
> Throwing away the buildroot (as suggested any time some obscure
> build error happens) and restarting a complete image build
> resets the autopr value so the repo would contain packages with
> the same r0.0 release string and the package manager doesn't know
> that the package is newer.

If you want persistent PRs between builds after deleting the build
tree, simply archive the PR database.
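(One way to do that, as a sketch using the standard PR service - the export
filename is arbitrary:)

    # local.conf: run a local PR server so PR values live in a database
    PRSERV_HOST = "localhost:0"

    # export the database before wiping the build tree, re-import afterwards
    bitbake-prserv-tool export prserv.export
    bitbake-prserv-tool import prserv.export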

We're not going back to manually updating the PR on changes.

Ross

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#55880): https://lists.yoctoproject.org/g/yocto/message/55880
Mute This Topic: https://lists.yoctoproject.org/mt/88412101/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [yocto] [dunfell][PATCH] openssl: update from 1.1.1n to 1.1.1o

2022-05-18 Thread Steve Sakoman
On Wed, May 18, 2022 at 12:14 AM Daniel Karlsson
 wrote:
>
> This update fixes CVE-2022-1292.

Hi Daniel,

I've already taken a patch for this update and it is in the set sent
out for review yesterday:

https://lists.openembedded.org/g/openembedded-core/message/165788

In the future if you want to check what patches I currently have under
test, you can look at this branch of oe-core:

https://git.openembedded.org/openembedded-core-contrib/log/?h=stable/dunfell-nut

I appreciate the help with CVEs so don't let this discourage you from
submitting future patches! It's a never ending battle to keep up with
new CVEs :-(

Steve

>
> Signed-off-by: Daniel Karlsson 
> ---
>  .../openssl/{openssl_1.1.1n.bb => openssl_1.1.1o.bb}| 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>  rename meta/recipes-connectivity/openssl/{openssl_1.1.1n.bb => 
> openssl_1.1.1o.bb} (98%)
>
> diff --git a/meta/recipes-connectivity/openssl/openssl_1.1.1n.bb 
> b/meta/recipes-connectivity/openssl/openssl_1.1.1o.bb
> similarity index 98%
> rename from meta/recipes-connectivity/openssl/openssl_1.1.1n.bb
> rename to meta/recipes-connectivity/openssl/openssl_1.1.1o.bb
> index 8538bd5a18..c9cfc759c9 100644
> --- a/meta/recipes-connectivity/openssl/openssl_1.1.1n.bb
> +++ b/meta/recipes-connectivity/openssl/openssl_1.1.1o.bb
> @@ -24,7 +24,7 @@ SRC_URI_append_class-nativesdk = " \
> file://environment.d-openssl.sh \
> "
>
> -SRC_URI[sha256sum] = 
> "40dceb51a4f6a5275bde0e6bf20ef4b91bfc32ed57c0552e2e8e15463372b17a"
> +SRC_URI[sha256sum] = 
> "9384a2b0570dd80358841464677115df785edb941c71211f75076d72fe6b438f"
>
>  inherit lib_package multilib_header multilib_script ptest
>  MULTILIB_SCRIPTS = "${PN}-bin:${bindir}/c_rehash"
> --
> 2.25.1
>
>
> 
>

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#57130): https://lists.yoctoproject.org/g/yocto/message/57130
Mute This Topic: https://lists.yoctoproject.org/mt/91183012/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] Error when building NPM package grpc with yocto Dunfell

2020-06-02 Thread Edson Seabra
Hi, all.

I created the recipe for grpc package with:

recipetool create "npm://registry.npmjs.org/;package=grpc;version=latest"

The build fails with the error ENOTCACHED.

I could build several other NPM package with the same process in the same build 
environment.

Any help on this issue will be very appreciated. Thanks in advance.

Exception: bb.fetch2.FetchError: Fetcher failure: Fetch command export 
PSEUDO_DISABLED=1; unset _PYTHON_SYSCONFIGDATA_NAME; export 
PATH="/home/edson/ng-trunk/nodegrid/tmp/sysroots-uninative/x86_64-linux/usr/bin:/home/edson/ng-trunk/poky/scripts:/home/edson/ng-trunk/nodegrid/tmp/work/core2-64-poky-linux/grpc/1.24.2-r0/recipe-sysroot-native/usr/bin/x86_64-poky-linux:/home/edson/ng-trunk/nodegrid/tmp/work/core2-64-poky-linux/grpc/1.24.2-r0/recipe-sysroot/usr/bin/crossscripts:/home/edson/ng-trunk/nodegrid/tmp/work/core2-64-poky-linux/grpc/1.24.2-r0/recipe-sysroot-native/usr/sbin:/home/edson/ng-trunk/nodegrid/tmp/work/core2-64-poky-linux/grpc/1.24.2-r0/recipe-sysroot-native/usr/bin:/home/edson/ng-trunk/nodegrid/tmp/work/core2-64-poky-linux/grpc/1.24.2-r0/recipe-sysroot-native/sbin:/home/edson/ng-trunk/nodegrid/tmp/work/core2-64-poky-linux/grpc/1.24.2-r0/recipe-sysroot-native/bin:/home/edson/ng-trunk/poky/bitbake/bin:/home/edson/ng-trunk/nodegrid/tmp/hosttools";
 export HOME="/tmp/tmpfksel5vy"; NPM_CONFIG_GLOBALCONFIG=/tmp/tmpfksel5vy/npmrc 
NPM_CONFIG_USERCONFIG=/tmp/tmpfksel5vy/npmrc npm install 
/tmp/tmp8fo4p29v/grpc-1.24.2.tgz --target_arch=x64 --build-from-source=true 
--prefer-offline=true failed with exit code 1, output:
npm info it worked if it ends with ok
npm verb cli [
npm verb cli   
'/home/edson/ng-trunk/nodegrid/tmp/work/core2-64-poky-linux/grpc/1.24.2-r0/recipe-sysroot-native/usr/bin/node',
npm verb cli   
'/home/edson/ng-trunk/nodegrid/tmp/work/core2-64-poky-linux/grpc/1.24.2-r0/recipe-sysroot-native/usr/bin/npm',
npm verb cli   'install',
npm verb cli   '/tmp/tmp8fo4p29v/grpc-1.24.2.tgz',
npm verb cli   '--target_arch=x64',
npm verb cli   '--build-from-source=true',
npm verb cli   '--prefer-offline=true'
npm verb cli ]
npm info using npm@6.13.4
npm info using node@v12.14.1
npm verb npm-session 1c72313bb5c9ab81
npm sill install loadCurrentTree
npm sill install readGlobalPackageData
npm sill pacote file manifest for 
undefined@file:/tmp/tmp8fo4p29v/grpc-1.24.2.tgz fetched in 321ms
npm timing stage:loadCurrentTree Completed in 332ms
npm sill install loadIdealTree
npm sill install cloneCurrentTreeToIdealTree
npm timing stage:loadIdealTree:cloneCurrentTree Completed in 0ms
npm sill install loadShrinkwrap
npm timing stage:loadIdealTree:loadShrinkwrap Completed in 2ms
npm sill install loadAllDepsIntoIdealTree
npm sill resolveWithNewModule grpc@1.24.2 checking installable status
npm sill tarball trying file:/tmp/tmp8fo4p29v/grpc-1.24.2.tgz by hash: 
sha512-fG3+KfpQcxXDADPndyHQu4czffN5AU+1aOn36SmFP67mgMkInvxK7l+/AoEk5tN+OguAWmQxTwCDHaL1G6DMMw==
npm sill extract file:/tmp/tmp8fo4p29v/grpc-1.24.2.tgz extracted to 
/tmp/npm-7958-3f94111f/unpack-639eccef (1705ms)
npm sill addBundled read tarball
npm sill cleanup remove extracted module
npm sill removeObsoleteDep removing node-pre-gyp@0.14.0 from the tree as its 
been replaced by a newer version or is no longer required
npm sill fetchPackageMetaData error for node-pre-gyp@0.14.0 request to 
https://registry.npmjs.org/node-pre-gyp failed: cache mode is 'only-if-cached' 
but no cached response available.
npm timing stage:rollbackFailedOptional Completed in 1ms
npm timing stage:runTopLevelLifecycles Completed in 2486ms
npm verb stack Error: request to https://registry.npmjs.org/node-pre-gyp 
failed: cache mode is 'only-if-cached' but no cached response available.
npm verb stack at 
/home/edson/ng-trunk/nodegrid/tmp/work/core2-64-poky-linux/grpc/1.24.2-r0/recipe-sysroot-native/usr/lib/node_modules/npm/node_modules/make-fetch-happen/index.js:166:21
npm verb stack at tryCatcher 
(/home/edson/ng-trunk/nodegrid/tmp/work/core2-64-poky-linux/grpc/1.24.2-r0/recipe-sysroot-native/usr/lib/node_modules/npm/node_modules/bluebird/js/release/util.js:16:23)
npm verb stack at Promise._settlePromiseFromHandler 
(/home/edson/ng-trunk/nodegrid/tmp/work/core2-64-poky-linux/grpc/1.24.2-r0/recipe-sysroot-native/usr/lib/node_modules/npm/node_modules/bluebird/js/release/promise.js:517:31)
npm verb stack at Promise._settlePromise 
(/home/edson/ng-trunk/nodegrid/tmp/work/core2-64-poky-linux/grpc/1.24.2-r0/recipe-sysroot-native/usr/lib/node_modules/npm/node_modules/bluebird/js/release/promise.js:574:18)
npm verb stack at Promise._settlePromise0 
(/home/edson/ng-trunk/nodegrid/tmp/work/core2-64-poky-linux/grpc/1.24.2-r0/recipe-sysroot-native/usr/lib/node_modules/npm/node_modules/bluebird/js/release/promise.js:619:10)
npm verb stack at Promise._settlePromises 
(/home/edson/ng-trunk/nodegrid/tmp/work/core2-64-poky-linux/grpc/1.24.2-r0/recipe-sysroot-native/usr/lib/node_modules/npm/node_modules/bluebird/js/release/promise.js:699:18)
np
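
For context on the ENOTCACHED failure above: on dunfell the npm class runs
"npm install" against a cache pre-populated from the shrinkwrap listed in
SRC_URI, so any package npm asks for at install time -- here node-pre-gyp,
pulled in by grpc's install scripts -- has to appear in that shrinkwrap, or the
offline fetch fails exactly like this. A hedged sketch of the relevant part of
a recipetool-generated recipe (the values are assumptions, not copied from the
failing recipe):

```
# Sketch only - the generated recipe will differ. The key point is that the
# npm-shrinkwrap.json referenced by npmsw:// must list every dependency that
# "npm install" will request, including ones like node-pre-gyp.
SRC_URI = " \
    npm://registry.npmjs.org/;package=grpc;version=1.24.2 \
    npmsw://${THISDIR}/${BPN}/npm-shrinkwrap.json \
    "
S = "${WORKDIR}/npm"
inherit npm
```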

Re: [yocto] dunfell gcc-sanitizers-arm-8.3 fails to build

2020-07-31 Thread Khem Raj
On Fri, Jul 31, 2020 at 8:35 AM Ryan Harkin  wrote:
>
> Hello,
>
> I'm migrating from Warrior to Dunfell and I'm getting a curious build failure 
> in gcc-sanitizers.
>
> Here's the full gory detail:
> https://pastebin.ubuntu.com/p/nh4cDKMvgS/
>
> However, the main error is this:
>
> | In file included from 
> ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cc:193:
> | 
> ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_internal_defs.h:317:72:
>  error: size of array 'assertion_failed__1152' is negative
> |  typedef char IMPL_PASTE(assertion_failed_##_, line)[2*(int)(pred)-1]
>
> I have no idea where to begin with this. I don't even know why gcc-sanitizers 
> is included in the build, what it does, or why I need it. I'm building an 
> image with dev packages and gcc, so I guess that's why.
>
> I've hacked meta-arm to patch sanitizer_platform_limits_posix.cc to null out 
> the macros and that builds fine. I'm sure it won't work, should someone want 
> to use it, mind you.
>
> Is there something obvious that I should be doing as part of a Warrior -> 
> Dunfell migration to get this to work?
>
> note: Warrior used meta-linaro-toolchain and for Dunfell, it's moved to 
> meta-arm-toolchain.
>

Is gcc 8.3 the latest for Linaro?

> Regards,
> Ryan.
> 


Re: [yocto] dunfell gcc-sanitizers-arm-8.3 fails to build

2020-07-31 Thread Sumit Garg
Hi Ryan,

On Fri, 31 Jul 2020 at 21:05, Ryan Harkin  wrote:
>
> Hello,
>
> I'm migrating from Warrior to Dunfell and I'm getting a curious build failure 
> in gcc-sanitizers.
>
> Here's the full gory detail:
> https://pastebin.ubuntu.com/p/nh4cDKMvgS/
>
> However, the main error is this:
>
> | In file included from 
> ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cc:193:
> | 
> ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_internal_defs.h:317:72:
>  error: size of array 'assertion_failed__1152' is negative
> |  typedef char IMPL_PASTE(assertion_failed_##_, line)[2*(int)(pred)-1]
>
> I have no idea where to begin with this. I don't even know why gcc-sanitizers 
> is included in the build, what it does, or why I need it. I'm building an 
> image with dev packages and gcc, so I guess that's why.
>
> I've hacked meta-arm to patch sanitizer_platform_limits_posix.cc to null out 
> the macros and that builds fine. I'm sure it won't work, should someone want 
> to use it, mind you.
>
> Is there something obvious that I should be doing as part of a Warrior -> 
> Dunfell migration to get this to work?

I would suggest you switch to the GCC 9.2 Arm toolchain for Dunfell.
RPB does the same [1].

GCCVERSION = "arm-9.2"
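
A minimal local.conf sketch of that switch (assuming meta-arm-toolchain is
already in BBLAYERS; the comment is an addition, not Sumit's):

```
# Select the gcc recipes provided by meta-arm-toolchain instead of OE-core's
# default gcc; dropping the line again falls back to the OE-core toolchain.
GCCVERSION = "arm-9.2"
```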

[1] 
https://github.com/96boards/meta-rpb/blob/dunfell/conf/distro/include/rpb.inc#L31

-Sumit

>
> note: Warrior used meta-linaro-toolchain and for Dunfell, it's moved to 
> meta-arm-toolchain.
>
> Regards,
> Ryan.


Re: [yocto] dunfell gcc-sanitizers-arm-8.3 fails to build

2020-08-01 Thread Ryan Harkin
Hi Khem,

On Fri, 31 Jul 2020, 21:58 Khem Raj,  wrote:

> On Fri, Jul 31, 2020 at 8:35 AM Ryan Harkin 
> wrote:
> >
> > Hello,
> >
> > I'm migrating from Warrior to Dunfell and I'm getting a curious build
> failure in gcc-sanitizers.
> >
> > Here's the full gory detail:
> > https://pastebin.ubuntu.com/p/nh4cDKMvgS/
> >
> > However, the main error is this:
> >
> > | In file included from
> ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cc:193:
> > |
> ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_internal_defs.h:317:72:
> error: size of array 'assertion_failed__1152' is negative
> > |  typedef char IMPL_PASTE(assertion_failed_##_,
> line)[2*(int)(pred)-1]
> >
> > I have no idea where to begin with this. I don't even know why
> gcc-sanitizers is included in the build, what it does, or why I need it.
> I'm building an image with dev packages and gcc, so I guess that's why.
> >
> > I've hacked meta-arm to patch sanitizer_platform_limits_posix.cc to null
> out the macros and that builds fine. I'm sure it won't work, should someone
> want to use it, mind you.
> >
> > Is there something obvious that I should be doing as part of a Warrior
> -> Dunfell migration to get this to work?
> >
> > note: Warrior used meta-linaro-toolchain and for Dunfell, it's moved to
> meta-arm-toolchain.
> >
>
> is gcc 8.3 the latest for linaro
>

I assume so. I haven't attempted to change the default.


> > Regards,
> > Ryan.
> > 
>


Re: [yocto] dunfell gcc-sanitizers-arm-8.3 fails to build

2020-08-01 Thread Ryan Harkin
Hi Sumit,

On Sat, 1 Aug 2020, 07:51 Sumit Garg,  wrote:

> Hi Ryan,
>
> On Fri, 31 Jul 2020 at 21:05, Ryan Harkin  wrote:
> >
> > Hello,
> >
> > I'm migrating from Warrior to Dunfell and I'm getting a curious build
> failure in gcc-sanitizers.
> >
> > Here's the full gory detail:
> > https://pastebin.ubuntu.com/p/nh4cDKMvgS/
> >
> > However, the main error is this:
> >
> > | In file included from
> ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cc:193:
> > |
> ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_internal_defs.h:317:72:
> error: size of array 'assertion_failed__1152' is negative
> > |  typedef char IMPL_PASTE(assertion_failed_##_,
> line)[2*(int)(pred)-1]
> >
> > I have no idea where to begin with this. I don't even know why
> gcc-sanitizers is included in the build, what it does, or why I need it.
> I'm building an image with dev packages and gcc, so I guess that's why.
> >
> > I've hacked meta-arm to patch sanitizer_platform_limits_posix.cc to null
> out the macros and that builds fine. I'm sure it won't work, should someone
> want to use it, mind you.
> >
> > Is there something obvious that I should be doing as part of a Warrior
> -> Dunfell migration to get this to work?
>
> I would suggest you to switch to GCC 9.2 Arm toolchain for Dunfell.
> RPB does the same [1].
>

I can do that; I have no reason to use 8.3 other than it's the default. Is
my problem a known issue with 8.3, or is moving to 9.2 a general suggestion?


> GCCVERSION = "arm-9.2"
>
> [1]
> https://github.com/96boards/meta-rpb/blob/dunfell/conf/distro/include/rpb.inc#L31


Thanks for the pointer.


>
> -Sumit
>
> >
> > note: Warrior used meta-linaro-toolchain and for Dunfell, it's moved to
> meta-arm-toolchain.
> >
> > Regards,
> > Ryan.
>


Re: [yocto] dunfell gcc-sanitizers-arm-8.3 fails to build

2020-08-01 Thread Ryan Harkin
On Sat, 1 Aug 2020 at 10:09, Ryan Harkin  wrote:

> Hi Khem,
>
> On Fri, 31 Jul 2020, 21:58 Khem Raj,  wrote:
>
>> On Fri, Jul 31, 2020 at 8:35 AM Ryan Harkin 
>> wrote:
>> >
>> > Hello,
>> >
>> > I'm migrating from Warrior to Dunfell and I'm getting a curious build
>> failure in gcc-sanitizers.
>> >
>> > Here's the full gory detail:
>> > https://pastebin.ubuntu.com/p/nh4cDKMvgS/
>> >
>> > However, the main error is this:
>> >
>> > | In file included from
>> ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cc:193:
>> > |
>> ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_internal_defs.h:317:72:
>> error: size of array 'assertion_failed__1152' is negative
>> > |  typedef char IMPL_PASTE(assertion_failed_##_,
>> line)[2*(int)(pred)-1]
>> >
>> > I have no idea where to begin with this. I don't even know why
>> gcc-sanitizers is included in the build, what it does, or why I need it.
>> I'm building an image with dev packages and gcc, so I guess that's why.
>> >
>> > I've hacked meta-arm to patch sanitizer_platform_limits_posix.cc to
>> null out the macros and that builds fine. I'm sure it won't work, should
>> someone want to use it, mind you.
>> >
>> > Is there something obvious that I should be doing as part of a Warrior
>> -> Dunfell migration to get this to work?
>> >
>> > note: Warrior used meta-linaro-toolchain and for Dunfell, it's moved to
>> meta-arm-toolchain.
>> >
>>
>> is gcc 8.3 the latest for linaro
>>
>
> I assume so. I haven't attempted to change the default.
>

I'm sorry, that's incorrect: local.conf has an override to specify 8.3.
I've just removed it and now it's using 9.3, and it's building fine.
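
For anyone chasing the same thing, one quick way (a suggestion, not something
from this thread) to see both the final GCCVERSION and which conf file set it
is bitbake's variable dump:

```
# The commented variable history printed above the assignment shows which
# file set GCCVERSION; adjust the amount of grep context as needed.
bitbake -e | grep -B 8 '^GCCVERSION='
```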

Sumit, do you know if there's a reason for using 9.2 in RPB instead of 9.3?


>
>> > Regards,
>> > Ryan.
>> > 
>>
>


Re: [yocto] dunfell gcc-sanitizers-arm-8.3 fails to build

2020-08-01 Thread Sumit Garg
On Sat, 1 Aug 2020 at 14:42, Ryan Harkin  wrote:
>
> Hi Sumit,
>
> On Sat, 1 Aug 2020, 07:51 Sumit Garg,  wrote:
>>
>> Hi Ryan,
>>
>> On Fri, 31 Jul 2020 at 21:05, Ryan Harkin  wrote:
>> >
>> > Hello,
>> >
>> > I'm migrating from Warrior to Dunfell and I'm getting a curious build 
>> > failure in gcc-sanitizers.
>> >
>> > Here's the full gory detail:
>> > https://pastebin.ubuntu.com/p/nh4cDKMvgS/
>> >
>> > However, the main error is this:
>> >
>> > | In file included from 
>> > ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cc:193:
>> > | 
>> > ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_internal_defs.h:317:72:
>> >  error: size of array 'assertion_failed__1152' is negative
>> > |  typedef char IMPL_PASTE(assertion_failed_##_, line)[2*(int)(pred)-1]
>> >
>> > I have no idea where to begin with this. I don't even know why 
>> > gcc-sanitizers is included in the build, what it does, or why I need it. 
>> > I'm building an image with dev packages and gcc, so I guess that's why.
>> >
>> > I've hacked meta-arm to patch sanitizer_platform_limits_posix.cc to null 
>> > out the macros and that builds fine. I'm sure it won't work, should 
>> > someone want to use it, mind you.
>> >
>> > Is there something obvious that I should be doing as part of a Warrior -> 
>> > Dunfell migration to get this to work?
>>
>> I would suggest you to switch to GCC 9.2 Arm toolchain for Dunfell.
>> RPB does the same [1].
>
>
> I can do that, I have no reason to use 8.3 other than it's the default. Is my 
> problem a known issue with 8.3, or is moving to 9.2 a general suggestion?
>

AFAIK, as upstream OE switches to a newer GCC version, it stops
supporting older GCC versions. And since we are building the Arm
toolchain from source here and relying on OE-core recipes, it is
possible that the build could fail when trying to build an old
toolchain version with the latest OE-core recipes.

So the general recommendation is to keep the Arm toolchain version as
close to OE-core's as possible, to avoid build failures.
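
One quick way to see what a given OE-core branch defaults to (the path assumes
a poky/openembedded-core checkout):

```
# tcmode-default.inc carries OE-core's default toolchain versions
grep GCCVERSION meta/conf/distro/include/tcmode-default.inc
```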

-Sumit

>>
>> GCCVERSION = "arm-9.2"
>>
>> [1] 
>> https://github.com/96boards/meta-rpb/blob/dunfell/conf/distro/include/rpb.inc#L31
>
>
> Thanks for the pointer.
>
>>
>>
>> -Sumit
>>
>> >
>> > note: Warrior used meta-linaro-toolchain and for Dunfell, it's moved to 
>> > meta-arm-toolchain.
>> >
>> > Regards,
>> > Ryan.


Re: [yocto] dunfell gcc-sanitizers-arm-8.3 fails to build

2020-08-01 Thread Sumit Garg
On Sat, 1 Aug 2020 at 14:57, Ryan Harkin  wrote:
>
>
>
> On Sat, 1 Aug 2020 at 10:09, Ryan Harkin  wrote:
>>
>> Hi Khem,
>>
>> On Fri, 31 Jul 2020, 21:58 Khem Raj,  wrote:
>>>
>>> On Fri, Jul 31, 2020 at 8:35 AM Ryan Harkin  wrote:
>>> >
>>> > Hello,
>>> >
>>> > I'm migrating from Warrior to Dunfell and I'm getting a curious build 
>>> > failure in gcc-sanitizers.
>>> >
>>> > Here's the full gory detail:
>>> > https://pastebin.ubuntu.com/p/nh4cDKMvgS/
>>> >
>>> > However, the main error is this:
>>> >
>>> > | In file included from 
>>> > ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cc:193:
>>> > | 
>>> > ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_internal_defs.h:317:72:
>>> >  error: size of array 'assertion_failed__1152' is negative
>>> > |  typedef char IMPL_PASTE(assertion_failed_##_, 
>>> > line)[2*(int)(pred)-1]
>>> >
>>> > I have no idea where to begin with this. I don't even know why 
>>> > gcc-sanitizers is included in the build, what it does, or why I need it. 
>>> > I'm building an image with dev packages and gcc, so I guess that's why.
>>> >
>>> > I've hacked meta-arm to patch sanitizer_platform_limits_posix.cc to null 
>>> > out the macros and that builds fine. I'm sure it won't work, should 
>>> > someone want to use it, mind you.
>>> >
>>> > Is there something obvious that I should be doing as part of a Warrior -> 
>>> > Dunfell migration to get this to work?
>>> >
>>> > note: Warrior used meta-linaro-toolchain and for Dunfell, it's moved to 
>>> > meta-arm-toolchain.
>>> >
>>>
>>> is gcc 8.3 the latest for linaro
>>
>>
>> I assume so. I haven't attempted to change the default.
>
>
> I'm sorry, that's incorrect: local.conf has an over-ride to specify 8.3.
> I've just removed it and now it's using 9.3. And it's building fine.
>
> Sumit, do you know if there's a reason for using 9.2 in RPB instead of 9.3?
>

Arm GCC 9.3 toolchain isn't released yet (see here [1]).

[1] 
https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-a/downloads

-Sumit

>>
>>>
>>> > Regards,
>>> > Ryan.
>>> > 


Re: [yocto] dunfell gcc-sanitizers-arm-8.3 fails to build

2020-08-01 Thread Sumit Garg
On Sat, 1 Aug 2020 at 19:40, Sumit Garg via lists.yoctoproject.org
 wrote:
>
> On Sat, 1 Aug 2020 at 14:57, Ryan Harkin  wrote:
> >
> >
> >
> > On Sat, 1 Aug 2020 at 10:09, Ryan Harkin  wrote:
> >>
> >> Hi Khem,
> >>
> >> On Fri, 31 Jul 2020, 21:58 Khem Raj,  wrote:
> >>>
> >>> On Fri, Jul 31, 2020 at 8:35 AM Ryan Harkin  
> >>> wrote:
> >>> >
> >>> > Hello,
> >>> >
> >>> > I'm migrating from Warrior to Dunfell and I'm getting a curious build 
> >>> > failure in gcc-sanitizers.
> >>> >
> >>> > Here's the full gory detail:
> >>> > https://pastebin.ubuntu.com/p/nh4cDKMvgS/
> >>> >
> >>> > However, the main error is this:
> >>> >
> >>> > | In file included from 
> >>> > ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cc:193:
> >>> > | 
> >>> > ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_internal_defs.h:317:72:
> >>> >  error: size of array 'assertion_failed__1152' is negative
> >>> > |  typedef char IMPL_PASTE(assertion_failed_##_, 
> >>> > line)[2*(int)(pred)-1]
> >>> >
> >>> > I have no idea where to begin with this. I don't even know why 
> >>> > gcc-sanitizers is included in the build, what it does, or why I need 
> >>> > it. I'm building an image with dev packages and gcc, so I guess that's 
> >>> > why.
> >>> >
> >>> > I've hacked meta-arm to patch sanitizer_platform_limits_posix.cc to 
> >>> > null out the macros and that builds fine. I'm sure it won't work, 
> >>> > should someone want to use it, mind you.
> >>> >
> >>> > Is there something obvious that I should be doing as part of a Warrior 
> >>> > -> Dunfell migration to get this to work?
> >>> >
> >>> > note: Warrior used meta-linaro-toolchain and for Dunfell, it's moved to 
> >>> > meta-arm-toolchain.
> >>> >
> >>>
> >>> is gcc 8.3 the latest for linaro
> >>
> >>
> >> I assume so. I haven't attempted to change the default.
> >
> >
> > I'm sorry, that's incorrect: local.conf has an over-ride to specify 8.3.
> > I've just removed it and now it's using 9.3. And it's building fine.
> >

It's using GCC 9.3 from OE-core. If you wish to use the Arm toolchain, then
you need to override the default OE-core GCC version with the Arm
toolchain GCC version:

GCCVERSION = "arm-9.2"

-Sumit

> > Sumit, do you know if there's a reason for using 9.2 in RPB instead of 9.3?
> >
>
> Arm GCC 9.3 toolchain isn't released yet (see here [1]).
>
> [1] 
> https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-a/downloads
>
> -Sumit
>
> >>
> >>>
> >>> > Regards,
> >>> > Ryan.
> >>> >
> 


Re: [yocto] dunfell gcc-sanitizers-arm-8.3 fails to build

2020-08-02 Thread Ryan Harkin
On Sat, 1 Aug 2020, 15:19 Sumit Garg,  wrote:

> On Sat, 1 Aug 2020 at 19:40, Sumit Garg via lists.yoctoproject.org
>  wrote:
> >
> > On Sat, 1 Aug 2020 at 14:57, Ryan Harkin  wrote:
> > >
> > >
> > >
> > > On Sat, 1 Aug 2020 at 10:09, Ryan Harkin 
> wrote:
> > >>
> > >> Hi Khem,
> > >>
> > >> On Fri, 31 Jul 2020, 21:58 Khem Raj,  wrote:
> > >>>
> > >>> On Fri, Jul 31, 2020 at 8:35 AM Ryan Harkin 
> wrote:
> > >>> >
> > >>> > Hello,
> > >>> >
> > >>> > I'm migrating from Warrior to Dunfell and I'm getting a curious
> build failure in gcc-sanitizers.
> > >>> >
> > >>> > Here's the full gory detail:
> > >>> > https://pastebin.ubuntu.com/p/nh4cDKMvgS/
> > >>> >
> > >>> > However, the main error is this:
> > >>> >
> > >>> > | In file included from
> ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_platform_limits_posix.cc:193:
> > >>> > |
> ../../../../../../../../../work-shared/gcc-arm-8.3-r2019.03/git/libsanitizer/sanitizer_common/sanitizer_internal_defs.h:317:72:
> error: size of array 'assertion_failed__1152' is negative
> > >>> > |  typedef char IMPL_PASTE(assertion_failed_##_,
> line)[2*(int)(pred)-1]
> > >>> >
> > >>> > I have no idea where to begin with this. I don't even know why
> gcc-sanitizers is included in the build, what it does, or why I need it.
> I'm building an image with dev packages and gcc, so I guess that's why.
> > >>> >
> > >>> > I've hacked meta-arm to patch sanitizer_platform_limits_posix.cc
> to null out the macros and that builds fine. I'm sure it won't work, should
> someone want to use it, mind you.
> > >>> >
> > >>> > Is there something obvious that I should be doing as part of a
> Warrior -> Dunfell migration to get this to work?
> > >>> >
> > >>> > note: Warrior used meta-linaro-toolchain and for Dunfell, it's
> moved to meta-arm-toolchain.
> > >>> >
> > >>>
> > >>> is gcc 8.3 the latest for linaro
> > >>
> > >>
> > >> I assume so. I haven't attempted to change the default.
> > >
> > >
> > > I'm sorry, that's incorrect: local.conf has an over-ride to specify
> 8.3.
> > > I've just removed it and now it's using 9.3. And it's building fine.
> > >
>
> It's using GCC 9.3 from OE core. If you wish to use Arm toolchain then
> you need to override the default OE core GCC version with Arm
> toolchain GCC version:
>
> GCCVERSION = "arm-9.2"
>

Ah, right, I see! Yes, I think I'll stick with that suggestion.

Thanks for the explanation.



> -Sumit
>
> > > Sumit, do you know if there's a reason for using 9.2 in RPB instead of
> 9.3?
> > >
> >
> > Arm GCC 9.3 toolchain isn't released yet (see here [1]).
> >
> > [1]
> https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-a/downloads
> >
> > -Sumit
> >
> > >>
> > >>>
> > >>> > Regards,
> > >>> > Ryan.
> > >>> >
> > 
>


[yocto] dunfell: what would cause "ERROR: Nothing PROVIDES 'coreutils-native'."?

2020-08-09 Thread Robert P. J. Day

I'm migrating from zeus to dunfell and, suddenly, some "ndisc"-related recipes
no longer build, failing with:

  ERROR: Nothing PROVIDES 'coreutils-native'.

The offending recipe is from meta-networking/recipes-support, ndisc6_git.bb,
clearly at the line:

  DEPENDS = "coreutils-native"

Whereupon "git blame" informs me that that line comes from here:

  commit 51272d11594e8609237e0e049b1f97ff95ab7d19
  Author: Sumit Garg 
  Date:   Tue Jan 21 14:26:11 2020 +0530

ndisc6: fix coreutils-native tool dependency

coreutils-native tool dependency was implicitly met while building with
source GCC tool-set which isn't the case with external tool-set.

Signed-off-by: Sumit Garg 
Signed-off-by: Khem Raj 

I've examined dunfell's coreutils recipe and I see nothing that suggests
I shouldn't be able to bitbake the native recipe -- BBCLASSEXTEND clearly
extends the recipe to include "native". So now I'm looking at the local
layer to see if there is something in a .conf file that deactivates the
native build of coreutils, but I don't see anything suspicious.

  I'm sure I'm missing something obvious, but I'm at a loss. Thoughts?

rday
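
A few generic checks that usually narrow a "Nothing PROVIDES" error down (the
command lines below are suggestions, not something already tried in the
thread):

```
# Is the coreutils recipe (and the layer providing it) visible to this build?
bitbake-layers show-recipes coreutils
# Does the native variant resolve and build on its own?
bitbake coreutils-native
# Is anything masking it out?
bitbake -e | grep '^BBMASK='
```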



Re: [yocto] dunfell: pkgconfig-native build fails in existing Yocto BSP

2021-10-25 Thread Ahmed Hossam
Hello,

We also hit the same issue with similar in-Docker builds. A workaround for the
issue is described at https://bugzilla.yoctoproject.org/show_bug.cgi?id=14519
and can be used until the fix is backported to dunfell.

Best Regards,
Ahmed Hossam




Re: [yocto] dunfell: pkgconfig-native build fails in existing Yocto BSP

2021-10-25 Thread Richard Purdie
On Mon, 2021-10-25 at 06:50 +, Matthias Klein wrote:
> Hello,
> 
> our dunfell based yocto no longer builds in the pkgconfig-native section. I 
> tried it in an Ubuntu 18.04 and Debian 10 Docker container.
> The build always aborts as follows:
> 
> checking thread related cflags... -D_REENTRANT
> > checking for pthread_create/pthread_join... no
> > checking for pthread_create/pthread_join in -lpthread... no
> > checking for pthread_create/pthread_join in -lpthread32... no
> > checking for pthread_create/pthread_join in -lpthreads... no
> > checking for pthread_create/pthread_join in -lthread... no
> > configure: error: I can't find the libraries for the thread implementation
> >  posix. Please choose another thread implementation or
> >   provide information on your thread implementation.
> > configure: error: ../../git/glib/configure failed for glib
> > NOTE: The following config.log files may provide further information.
> > NOTE: 
> > /work/yocto-optimeas/build/tmp/work/x86_64-linux/pkgconfig-native/0.29.2+gitAUTOINC+edf8e6f0ea-r0/build/glib/config.log
> >  
> > /work/yocto-optimeas/build/tmp/work/x86_64-linux/pkgconfig-native/0.29.2+gitAUTOINC+edf8e6f0ea-r0/build/config.log
> > ERROR: configure failed
> > WARNING: 
> > /work/yocto-optimeas/build/tmp/work/x86_64-linux/pkgconfig-native/0.29.2+gitAUTOINC+edf8e6f0ea-r0/temp/run.do_configure.243679:1
> >  exit 1 from 'exit 1'
> > ERROR: Execution of 
> > '/work/yocto-optimeas/build/tmp/work/x86_64-linux/pkgconfig-native/0.29.2+gitAUTOINC+edf8e6f0ea-r0/temp/run.do_configure.243679'
> >  failed with exit code 1
> ERROR: Task 
> (virtual:native:/work/yocto-optimeas/yocto/poky/meta/recipes-devtools/pkgconfig/pkgconfig_git.bb:do_configure)
>  failed with exit code '1'
> ERROR: ninja-native-1.10.0-r0 do_compile: Execution of 
> '/work/yocto-optimeas/build/tmp/work/x86_64-linux/ninja-native/1.10.0-r0/temp/run.do_compile.319277'
>  failed with exit code 1
> ERROR: Logfile of failure stored in: 
> /work/yocto-optimeas/build/tmp/work/x86_64-linux/ninja-native/1.10.0-r0/temp/log.do_compile.319277
> Log data follows:
> > DEBUG: Executing shell function do_compile
> > ninja: fatal: posix_spawn: Operation not permitted
> > bootstrapping ninja...
> > wrote build.ninja.
> > bootstrap complete.  rebuilding...
> > Traceback (most recent call last):
> >   File "./configure.py", line 709, in 
> > subprocess.check_call(rebuild_args)
> >   File "/usr/lib/python3.7/subprocess.py", line 347, in check_call
> > raise CalledProcessError(retcode, cmd)
> > subprocess.CalledProcessError: Command '['./ninja']' returned non-zero exit 
> > status 1.
> > WARNING: 
> > /work/yocto-optimeas/build/tmp/work/x86_64-linux/ninja-native/1.10.0-r0/temp/run.do_compile.319277:1
> >  exit 1 from 'python3 ./configure.py --bootstrap'
> > ERROR: Execution of 
> > '/work/yocto-optimeas/build/tmp/work/x86_64-linux/ninja-native/1.10.0-r0/temp/run.do_compile.319277'
> >  failed with exit code 1
> ERROR: Task 
> (virtual:native:/work/yocto-optimeas/yocto/poky/meta/recipes-devtools/ninja/ninja_1.10.0.bb:do_compile)
>  failed with exit code '1'
> 
> 
> I also tried using the commits from dunfell-next in the poky repo as a test. 
> But then the build doesn't even start and I get the following error:
> 
> ERROR: ParseError at 
> /work/yocto-optimeas/yocto/meta-openembedded/meta-oe/recipes-extended/libimobiledevice/libplist_2.1.0.bb:9:
>  Could not inherit file classes/python3targetconfig.bbclass   
> | ETA:  --:--:--
> 
> 
> Does anyone have any idea what is causing this? does anyone have the same 
> behavior?

I just backported a uninative upgrade to dunfell which should address this
issue.

Cheers,

Richard
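
For anyone who cannot wait for the backport, uninative can also be pinned by
hand from local.conf; a rough sketch (the version and checksum are placeholders
to be replaced with the values published for whichever uninative release is
chosen, not real data):

```
# Hedged local.conf sketch - a manual pin, not the official fix.
UNINATIVE_URL = "http://downloads.yoctoproject.org/releases/uninative/3.4/"
UNINATIVE_CHECKSUM[x86_64] = "<sha256 of x86_64-nativesdk-libc.tar.xz from that release>"
```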






