Re: [yocto] Some questions about the webhob design

2012-07-09 Thread Xu, Dongxiao


> -----Original Message-----
> From: yocto-boun...@yoctoproject.org
> [mailto:yocto-boun...@yoctoproject.org] On Behalf Of Paul Eggleton
> Sent: Saturday, July 07, 2012 1:30 AM
> To: Stewart, David C
> Cc: yocto@yoctoproject.org
> Subject: Re: [yocto] Some questions about the webhob design
> 
> On Friday 06 July 2012 17:13:45 Stewart, David C wrote:
> > > From: Paul Eggleton [mailto:paul.eggle...@linux.intel.com]
> > > Sent: Friday, July 06, 2012 4:23 AM
> > > To: Stewart, David C
> > >
> > > On Thursday 05 July 2012 23:00:17 Stewart, David C wrote:
> > > > Guys - I'm really struggling with this overall concept of concurrency.
> > > >
> > > > It implies that if Paul and I are sharing the same project and I
> > > > make a change to a .bb file to experiment with something (assuming
> > > > we have the ability to do that, refer to my last email) and my
> > > > change breaks my build, it will break everyone else's build as
> > > > well.  But the beauty thing is that it breaks it silently, because
> > > > the configuration silently changed for everyone on the project.
> > >
> > > The key word there I think is "experiment". Is it reasonable to
> > > expect to handle people making experimental changes to something
> > > that others are relying upon? It seems to me that whether
> > > experimental changes are likely and whether or not it will seriously
> > > impact other users depends on what development stage the particular
> project is at.
> >
> > Whether it is reasonable or not, if it's possible for people to shoot
> > themselves in the foot, they will.  Even with safety guards on a
> > circular saw, I have seen people routinely disable them, so there you go.
> 
> Right, and the logical extension from this is that no matter what safeguards 
> we
> put in, there will be users that find a way to shoot themselves in the foot. Of
> course we should try to make it harder to do the wrong thing, but there's a
> limit to what we can do. Despite this however I'm still convinced it's 
> important
> to offer the ability for people to work on the same thing.
> 
> > How about a strongly visible warning on the Projects page that
> > simultaneous user changes will affect all users of the project's
> > files? How about an asynchronous notification to users on the main
> > screen that some files have changed since their last build (and maybe list
> them).
> 
> I'm not opposed to a warning, but the thing is the scenario presented earlier
> wasn't strictly speaking a case of simultaneous user changes (which I think we
> have pretty well covered in the design in terms of showing real-time warnings
> to go together with real-time changes to the project) - in the scenario one 
> user
> set up the project, and some time later another user attempted to use that
> project, and in the mean time someone broke it while neither of the other two
> users were logged in. A warning to help in that case could only be along the
> lines of "This project is shared with other users, so please take care when
> making changes."

Hi Paul,

Today Jessica and the PRC team spent some time thinking about the development 
model for how people collaborate within a webhob project.

Say an architect (user id "arch_a") creates a project with certain 
configurations and customizations, names it "Project_FRI2", and invites 
"arch_b" as a second architect. Besides that, the project recruits "dev_a", 
"dev_b", and "dev_c" as developers, and "builder_a", "builder_b", and 
"builder_c" as normal build service users.

Based on our discussion, we have a proposal for permission management and 
would like your comments on it.

1) Only "arch_a" and "arch_b" are allowed to change project settings (including 
configurations, recipes, packages, etc).
2) "dev_a", "dev_b", and "dev_c" have the permission to fork this project. If 
they want to made any change to "Project_FRI2", they firstly need to tune their 
code/layers under their forked project, and then contribute back to the main 
"Project_FRI2" (This is something like the git branches and pull request we use 
in Yocto project development ). They are not allowed to modify project settings 
directly in "Project_FRI2".
3) "builder_a", "builder_b", and "builder_c" are only allowed to log into the 
Project_FRI2 project and schedule their build on it, or download images. 

Of course there could be other roles, like program manager, etc.
The overall idea is that only a very small set of users can change project 
settings. This sort of permission management helps reduce concurrent changes 
to project settings, since developers can no longer change settings in the 
main project; instead, they follow the "fork project --> develop --> 
contribute back to main project" workflow.

What are your thoughts on this?

Thanks,
Dongxiao

> 
> > Again, writing a user story is what I'm looking for. I would do it
> > myself, but I'm still struggling. :-)
> 
> I spoke to Jim about this and 

[yocto] Some questions about the webhob design

2012-07-03 Thread Xu, Dongxiao
Hi Paul and Jim,

Today Jessica and the PRC team discussed the webhob tasks listed on the wiki 
page, and we still have some questions about the "Group" and "Project" 
concepts.

Say users A and B are privileged users who have the right to customize images, 
while user C is a normal user (say, a TME) who is only allowed to build images.

1) Suppose user A customizes an image with certain configurations and tells 
user C that the environment is ready for him to build his demo image. However, 
before user C starts to build, user B changes some configurations, so the 
final output is not the expected version.

2) Another issue: how do we prevent conflicting global project changes, for 
example when user A and user B change settings at the same time?

3) If user A changes a global project setting, what is the impact on user B, 
who has already kicked off a build based on the original setting?
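One way to address questions 1) and 3) would be for the build service to stamp each build with a hash of the project settings it was started from, so the UI can warn a user when the settings have changed underneath them. A minimal sketch, assuming a settings dictionary; none of these names come from webhob:

```python
import hashlib
import json

def settings_hash(settings):
    """Stable digest of the project settings a user last reviewed."""
    canon = json.dumps(settings, sort_keys=True).encode()
    return hashlib.sha256(canon).hexdigest()

# User A sets up the project; user C records the hash when told it is ready.
project = {"machine": "qemux86", "image": "core-image-minimal"}
seen_by_c = settings_hash(project)

# User B silently changes a setting before C builds.
project["image"] = "core-image-sato"

# At build kick-off, the UI can compare and warn instead of failing silently.
if settings_hash(project) != seen_by_c:
    print("warning: project settings changed since you last reviewed them")
```

The same comparison could drive the asynchronous "files have changed since your last build" notification discussed in the other thread.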

We would really appreciate it if you could help clarify these questions for us.

Thanks,
Dongxiao
___
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: [yocto] Hob question: meta-yocto layer and 'defaultsetup' distro option

2012-07-03 Thread Xu, Dongxiao
> -----Original Message-----
> From: Barros Pena, Belen
> Sent: Tuesday, July 03, 2012 7:04 PM
> To: Khem Raj; Wang, Shane; Xu, Dongxiao
> Cc: yocto
> Subject: Re: [yocto] Hob question: meta-yocto layer and 'defaultsetup' distro
> option
> 
> Thanks, Khem.
> 
> Just to make sure I understand this correctly: if I don't use the meta-yocto 
> layer,
> OE-Core should provide reference distro policies that allow me to build an
> image. So I guess the 'defaultsetup' option in Hob is referring to those 
> OE-Core
> policies. Shane, Dongxiao: could you confirm if this is the case?

Yes — "defaultsetup" is applied by default for any layer configuration, 
including plain OE-Core. The difference is that OE-Core applies only 
"defaultsetup", while the meta-yocto layer adds distro policies (poky, 
poky-bleeding, etc.) that override some of the "defaultsetup" settings.

Thanks,
Dongxiao

> 
> Thanks!
> 
> Belen
> 
> On 02/07/2012 17:53, "Khem Raj"  wrote:
> 
> >On Mon, Jul 2, 2012 at 9:05 AM, Barros Pena, Belen
> > wrote:
> >>
> >> If I delete the meta-yocto layer, am I supposed to provide the distro
> >> to be used via a layer or some other means?
> >
> >OE-Core has reference distro policies which are enough to generate
> >images by just using OE-Core and that is what happens when you drop
> >meta-yocto its testing the reference distro setup from OE-Core
> >
> >



Re: [yocto] Yocto Web Hob 1.3 requirements

2012-05-29 Thread Xu, Dongxiao
Hi Jim,

Do you have any update on the progress of UI/process design about the webhob 
design?

Thanks,
Dongxiao

From: yocto-boun...@yoctoproject.org [mailto:yocto-boun...@yoctoproject.org] On 
Behalf Of Jim Kosem
Sent: Tuesday, May 08, 2012 6:50 PM
To: yocto@yoctoproject.org
Subject: [yocto] Yocto Web Hob 1.3 requirements

Hi,

We in the Yocto London team have put together user requirements and ideas 
around what we think Web Hob should be.

https://wiki.yoctoproject.org/wiki/Yocto_Webhob_1.3

Any contributions, help and comments are appreciated.

Thanks.

- Jim Kosem


Re: [yocto] A question about PACKAGE_ARCH renaming

2012-04-18 Thread Xu, Dongxiao
On Wed, 2012-04-18 at 13:55 +0100, Richard Purdie wrote:
> On Wed, 2012-04-18 at 12:45 +, Hatle, Mark wrote:
> > There is/was a conversion that changed - to _ in the package arch.  And yes 
> > this needs to be fixed ASAP.
> 
> Let put this really simply. We are not doing this just as the -rc4 is
> about to build. If we want to start considering a change like this we
> might as well abort the release, start again and slip four weeks.
> 
> Why? That dash syntax works fine in opkg and is in use in real world
> opkg feeds. This appears to be an rpm only issue. It also only appears
> to be directly affecting hob right at this point.
> 
> When we filtered MACHINE people complained a lot. When we filtered
> MACHINE_ARCH there were more complaints. If you just put the filtering
> straight onto PACKAGE_ARCH as well, there will be immense trouble as it
> will badly break Angstrom for a start :(.
> 
> I'd ask people think about what they're proposing and put it into
> perspective.
> 
> I believe Dongxiao has a patch which would allow us to work around the
> issue in hob for this release and then we can re-assess this issue in
> 1.3. I suspect the fix is going to be to drop the name mangling in the
> core and move it to be specific to rpm.

Bug 2328 has been filed for this issue.

Thanks,
Dongxiao

> 
> Cheers,
> 
> Richard
> 
> 
> 
> 




Re: [yocto] A question about PACKAGE_ARCH renaming

2012-04-17 Thread Xu, Dongxiao
On Wed, 2012-04-18 at 08:38 +0800, Xu, Dongxiao wrote:
> On Tue, 2012-04-17 at 10:35 -0500, Mark Hatle wrote:
> > On 4/16/12 8:01 PM, Xu, Dongxiao wrote:
> > > Hi,
> > >
> > > I am testing beagleboard with RPM, and there is a question I am confused
> > > with that PACKAGE_ARCH is renamed for certain packages. For example the
> > > "acl" package, whose expected PACKAGE_ARCH is "armv7a-vfp-neon", however
> > > in RPM file, the arch is renamed to "armv7a", see
> > > "acl-2.2.51-r2.armv7a.rpm". However IPK package still shows
> > > "acl_2.2.51-r2_armv7a-vfp-neon.ipk".
> > >
> > > Could anybody give hint on this?
> > >
> > > Thanks,
> > > Dongxiao
> > >
> > 
> > I've not seen that happen before.  Can you checked if an 
> > acl-...armv7a-vfp-neon.rpm was generated and RPM is simply not using it, or 
> > was 
> > it never generated?
> 
> No, there is no acl-xxx.armv7a-vfp-neon.rpm, only acl-xxx.armv7a.rpm
> created.

Just looked at this issue with Lianhao, and we have some clues.

It seems that RPM does not allow '-' in an architecture label. For the
beagleboard case, we pass the parameter as:

rpm ... --target "armv7a-vfp-neon-poky-linux"

I think RPM internally strips everything after the first '-' and uses
"armv7a" as the architecture label.

The multilib case is similar: we can see from the code that we use
'lib64_qemux86' instead of 'lib64-qemux86' as the architecture label.

If our reading is right, I think we need a fix for this before the 1.2
release?
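If that hypothesis is right, the renaming reduces to splitting the --target string at the first '-', which would also explain why the multilib code uses an underscore ('lib64_qemux86') to keep its label intact. An illustrative sketch of the hypothesized behaviour, not RPM's actual parser:

```python
def rpm_arch_label(target):
    # Hypothesis: RPM keeps only the text before the first '-' of --target.
    return target.split("-", 1)[0]

print(rpm_arch_label("armv7a-vfp-neon-poky-linux"))  # → armv7a
print(rpm_arch_label("lib64_qemux86-poky-linux"))    # → lib64_qemux86
```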

Thanks,
Dongxiao

> 
> Actually I think this issue does exist since our 1.1 release, you can
> have a look at the package repo:
> 
> http://downloads.yoctoproject.org/releases/yocto/yocto-1.1/rpm/armv7a-vfp-neon/
> 
> The directory is named as "armv7a-vfp-neon", however all the packages
> under the directory are of "armv7a" architecture.
> 
> While see the ipk part:
> http://downloads.yoctoproject.org/releases/yocto/yocto-1.1/ipk/armv7a-vfp-neon/
> The directory name and rpm architecture name are the same.
> 
> Thanks,
> Dongxiao
> 
> > 
> > As another user mentioned, it is possible for a package to say it wants a 
> > specific arch type, but if it did -- it should be consistent between 
> > packaging 
> > systems.
> > 
> > --Mark
> 




Re: [yocto] A question about PACKAGE_ARCH renaming

2012-04-17 Thread Xu, Dongxiao
On Tue, 2012-04-17 at 10:35 -0500, Mark Hatle wrote:
> On 4/16/12 8:01 PM, Xu, Dongxiao wrote:
> > Hi,
> >
> > I am testing beagleboard with RPM, and there is a question I am confused
> > with that PACKAGE_ARCH is renamed for certain packages. For example the
> > "acl" package, whose expected PACKAGE_ARCH is "armv7a-vfp-neon", however
> > in RPM file, the arch is renamed to "armv7a", see
> > "acl-2.2.51-r2.armv7a.rpm". However IPK package still shows
> > "acl_2.2.51-r2_armv7a-vfp-neon.ipk".
> >
> > Could anybody give hint on this?
> >
> > Thanks,
> > Dongxiao
> >
> 
> I've not seen that happen before.  Can you checked if an 
> acl-...armv7a-vfp-neon.rpm was generated and RPM is simply not using it, or 
> was 
> it never generated?

No, there is no acl-xxx.armv7a-vfp-neon.rpm; only acl-xxx.armv7a.rpm is
created.

Actually, I think this issue has existed since our 1.1 release; have a look
at the package repo:

http://downloads.yoctoproject.org/releases/yocto/yocto-1.1/rpm/armv7a-vfp-neon/

The directory is named "armv7a-vfp-neon", yet all the packages under it have
the "armv7a" architecture.

Compare the ipk side:
http://downloads.yoctoproject.org/releases/yocto/yocto-1.1/ipk/armv7a-vfp-neon/
where the directory name and the package architecture match.

Thanks,
Dongxiao

> 
> As another user mentioned, it is possible for a package to say it wants a 
> specific arch type, but if it did -- it should be consistent between 
> packaging 
> systems.
> 
> --Mark




Re: [yocto] A question about PACKAGE_ARCH renaming

2012-04-17 Thread Xu, Dongxiao
Any hints on this question?

I need an answer to address a Hob issue. Thanks in advance.

Thanks,
Dongxiao

On Tue, 2012-04-17 at 09:01 +0800, Xu, Dongxiao wrote:
> Hi,
> 
> I am testing beagleboard with RPM, and there is a question I am confused
> with that PACKAGE_ARCH is renamed for certain packages. For example the
> "acl" package, whose expected PACKAGE_ARCH is "armv7a-vfp-neon", however
> in RPM file, the arch is renamed to "armv7a", see
> "acl-2.2.51-r2.armv7a.rpm". However IPK package still shows
> "acl_2.2.51-r2_armv7a-vfp-neon.ipk".
> 
> Could anybody give hint on this? 
> 
> Thanks,
> Dongxiao
> 
> ___
> yocto mailing list
> yocto@yoctoproject.org
> https://lists.yoctoproject.org/listinfo/yocto




[yocto] A question about PACKAGE_ARCH renaming

2012-04-16 Thread Xu, Dongxiao
Hi,

I am testing beagleboard with RPM, and I am confused that PACKAGE_ARCH is
renamed for certain packages. For example, for the "acl" package the expected
PACKAGE_ARCH is "armv7a-vfp-neon", but in the RPM file the arch is renamed to
"armv7a" (see "acl-2.2.51-r2.armv7a.rpm"), while the IPK package still shows
"acl_2.2.51-r2_armv7a-vfp-neon.ipk".

Could anybody give a hint on this?

Thanks,
Dongxiao



Re: [yocto] Installation order question with RPM backend

2012-04-11 Thread Xu, Dongxiao
On Wed, 2012-04-11 at 10:56 -0500, Mark Hatle wrote:
> On 4/11/12 10:51 AM, Xu, Dongxiao wrote:
> > On Wed, 2012-04-11 at 10:45 -0500, Mark Hatle wrote:
> >> On 4/11/12 10:37 AM, Xu, Dongxiao wrote:
> >>> On Wed, 2012-04-11 at 10:25 -0500, Mark Hatle wrote:
> >>>> On 4/11/12 10:14 AM, Xu, Dongxiao wrote:
> >>>>> Hi Mark,
> >>>>>
> >>>>> I met a strange issue while using RPM to generate the rootfs.
> >>>>>
> >>>>> In the installation list, if we have 2 RPM packages, say A.rpm and
> >>>>> B.rpm. package A RDEPENDS on package B. While installing the two
> >>>>> packages? Does RPM ensures to install B first and then install A?
> >>>>>
> >>>>> The real issue is: we have certain packages that need to run
> >>>>> useradd/groupadd at rootfs time, for example, the dbus. However the
> >>>>> useradd/groupadd bbclass RDEPENDS on base-files, which provides
> >>>>> the /etc/group file. While installing the final image, sometimes we saw
> >>>>> it installs dbus firstly and then base-files, causing the
> >>>>> useradd/groupadd script error since it could not find /etc/group file.
> >>>>
> >>>> it does enforce install order, however the /etc/group, /etc/passwd files 
> >>>> (last
> >>>> time I checked) were being put into place by the post install scripts.  
> >>>> The
> >>>> scripting order is handled somewhat independently of the package install 
> >>>> order.
> >>>> (post install scripts get delayed intentionally for performance 
> >>>> reasons.
> >>>> There is a way to hint a dependency for them as well...)
> >>>>
> >>>> The passwd/group files are fairly unique files, and generally are 
> >>>> installed
> >>>> -first- (individually) before any other packages on most RPM 
> >>>> installations.
> >>>> After that the methods and install ordering works...
> >>>>
> >>> But does the following log indicates the dbus-1 is installed before
> >>> base-passwd?
> >>>
> >>> dbus-1##
> >>>Adding system startup
> >>> for 
> >>> /distro/sdb/build-basic/tmp/work/qemux86-poky-linux/hob-image-hob-basic-1.0-r0/rootfs/etc/init.d/dbus-1.
> >>> kernel-module-uvesafb ##
> >>> libusb-compat ##
> >>> base-passwd   ##
> >>
> >> Certainly appears that way.. but we'd need to look into the packages and
> >> understand the requirements as they are defined and trace them to see if 
> >> there
> >> is a problem w/ the ordering or if the packages have a problem.
> >>
> >> You will often see mysterious reordering when there is a circular 
> >> dependency.
> >> RPM has to break this dependency in some way, and it does it by simply 
> >> choosing
> >> one or the other.  (There is a hint mechanism for circular dependencies, 
> >> but
> >> we've never used it.)
> >>
> >> My suggestion is lets look at the package runtime dependenices and manually
> >> trace them..  Focus on dbus-1 and base-passwd... and see if the order is 
> >> right
> >> or wrong or if there is a circular dependency.
> >
> > I checked the dbus.spec and base-passwd.spec.
> > For dbus.spec, there is a line:
> >
> > %package -n dbus-1
> > Requires: base-passwd
> >
> > And for base-passwd, there is no dbus exists in base-passwd.spec.
> 
> You need to query the binary packages for the real values..
> 
> rpm -qp  --requires

I have pasted the command output below; I think there is no circular
dependency here.

The rootfs error is relatively easy to reproduce: in an image .bb file such
as task-core-basic, change the installation package definition to the
following line:
PACKAGE_INSTALL = "task-core-basic task-core-ssh-openssh
task-core-apps-console task-core-boot"
# bitbake core-image-basic

Then you will see the useradd/groupadd error and the dependency-order issue.

Thanks,
Dongxiao

dongxiao@dongxiao-osel:/distro/sdb/build-basic/tmp/deploy/rpm/i586$ rpm
-qp base-passwd-3.5.24-r0.i586.rpm --requires
warning: base-passwd-3.5.24-r0.i586.rpm: Header V4 DSA/SHA

Re: [yocto] Installation order question with RPM backend

2012-04-11 Thread Xu, Dongxiao
On Wed, 2012-04-11 at 10:45 -0500, Mark Hatle wrote:
> On 4/11/12 10:37 AM, Xu, Dongxiao wrote:
> > On Wed, 2012-04-11 at 10:25 -0500, Mark Hatle wrote:
> >> On 4/11/12 10:14 AM, Xu, Dongxiao wrote:
> >>> Hi Mark,
> >>>
> >>> I met a strange issue while using RPM to generate the rootfs.
> >>>
> >>> In the installation list, if we have 2 RPM packages, say A.rpm and
> >>> B.rpm. package A RDEPENDS on package B. While installing the two
> >>> packages? Does RPM ensures to install B first and then install A?
> >>>
> >>> The real issue is: we have certain packages that need to run
> >>> useradd/groupadd at rootfs time, for example, the dbus. However the
> >>> useradd/groupadd bbclass RDEPENDS on base-files, which provides
> >>> the /etc/group file. While installing the final image, sometimes we saw
> >>> it installs dbus firstly and then base-files, causing the
> >>> useradd/groupadd script error since it could not find /etc/group file.
> >>
> >> it does enforce install order, however the /etc/group, /etc/passwd files 
> >> (last
> >> time I checked) were being put into place by the post install scripts.  The
> >> scripting order is handled somewhat independently of the package install 
> >> order.
> >>(post install scripts get delayed intentionally for performance reasons.
> >> There is a way to hint a dependency for them as well...)
> >>
> >> The passwd/group files are fairly unique files, and generally are installed
> >> -first- (individually) before any other packages on most RPM installations.
> >> After that the methods and install ordering works...
> >>
> > But does the following log indicates the dbus-1 is installed before
> > base-passwd?
> >
> > dbus-1##
> >   Adding system startup
> > for 
> > /distro/sdb/build-basic/tmp/work/qemux86-poky-linux/hob-image-hob-basic-1.0-r0/rootfs/etc/init.d/dbus-1.
> > kernel-module-uvesafb ##
> > libusb-compat ##
> > base-passwd   ##
> 
> Certainly appears that way.. but we'd need to look into the packages and 
> understand the requirements as they are defined and trace them to see if 
> there 
> is a problem w/ the ordering or if the packages have a problem.
> 
> You will often see mysterious reordering when there is a circular dependency. 
> RPM has to break this dependency in some way, and it does it by simply 
> choosing 
> one or the other.  (There is a hint mechanism for circular dependencies, but 
> we've never used it.)
> 
> My suggestion is lets look at the package runtime dependenices and manually 
> trace them..  Focus on dbus-1 and base-passwd... and see if the order is 
> right 
> or wrong or if there is a circular dependency.

I checked dbus.spec and base-passwd.spec. dbus.spec contains:

%package -n dbus-1
Requires: base-passwd

while base-passwd.spec contains no reference to dbus.

Thanks,
Dongxiao

> 
> (Also our version of RPM 5 is a bit old, and there are some known bugs in 
> it.. 
> as far as I know, none of them with the dependency resolution or code paths 
> we 
> follow.)
> 
> --Mark
> 
> > Thanks,
> > Dongxiao
> >
> >> --Mark
> >>
> >>> I tried ipk and it doesn't have problem since it ensures to install
> >>> base-files firstly.
> >>>
> >>> Any comment is welcome.
> >>>
> >>> Thanks,
> >>> Dongxiao
> >>>
> >>
> >
> >
> 




Re: [yocto] Installation order question with RPM backend

2012-04-11 Thread Xu, Dongxiao
On Wed, 2012-04-11 at 10:25 -0500, Mark Hatle wrote:
> On 4/11/12 10:14 AM, Xu, Dongxiao wrote:
> > Hi Mark,
> >
> > I met a strange issue while using RPM to generate the rootfs.
> >
> > In the installation list, if we have 2 RPM packages, say A.rpm and
> > B.rpm. package A RDEPENDS on package B. While installing the two
> > packages? Does RPM ensures to install B first and then install A?
> >
> > The real issue is: we have certain packages that need to run
> > useradd/groupadd at rootfs time, for example, the dbus. However the
> > useradd/groupadd bbclass RDEPENDS on base-files, which provides
> > the /etc/group file. While installing the final image, sometimes we saw
> > it installs dbus firstly and then base-files, causing the
> > useradd/groupadd script error since it could not find /etc/group file.
> 
> it does enforce install order, however the /etc/group, /etc/passwd files 
> (last 
> time I checked) were being put into place by the post install scripts.  The 
> scripting order is handled somewhat independently of the package install 
> order. 
>   (post install scripts get delayed intentionally for performance reasons. 
> There is a way to hint a dependency for them as well...)
> 
> The passwd/group files are fairly unique files, and generally are installed 
> -first- (individually) before any other packages on most RPM installations. 
> After that the methods and install ordering works...
> 
But doesn't the following log indicate that dbus-1 is installed before
base-passwd?

dbus-1##
 Adding system startup
for 
/distro/sdb/build-basic/tmp/work/qemux86-poky-linux/hob-image-hob-basic-1.0-r0/rootfs/etc/init.d/dbus-1.
kernel-module-uvesafb ##
libusb-compat ##
base-passwd   ##

Thanks,
Dongxiao

> --Mark
> 
> > I tried ipk and it doesn't have problem since it ensures to install
> > base-files firstly.
> >
> > Any comment is welcome.
> >
> > Thanks,
> > Dongxiao
> >
> 




[yocto] Installation order question with RPM backend

2012-04-11 Thread Xu, Dongxiao
Hi Mark,

I ran into a strange issue while using RPM to generate the rootfs.

In the installation list, suppose we have two RPM packages, A.rpm and B.rpm,
where package A RDEPENDS on package B. When installing the two packages, does
RPM ensure that B is installed before A?

The real issue is: certain packages need to run useradd/groupadd at rootfs
time, for example dbus. The useradd/groupadd bbclass RDEPENDS on base-files,
which provides the /etc/group file. When installing the final image, we
sometimes see dbus installed before base-files, causing the useradd/groupadd
script to fail because it cannot find /etc/group.

I tried ipk, and it does not have this problem, since it ensures base-files
is installed first.
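The ordering being asked about is a topological sort of the RDEPENDS graph: B must come before A whenever A RDEPENDS on B, and a circular dependency leaves the solver no valid order, forcing it to break the cycle arbitrarily. A minimal sketch of such an ordering (illustrative only, not RPM's actual solver; the three-package graph is made up from the example in this thread):

```python
from collections import deque

def install_order(rdepends):
    """Order packages so every dependency is installed before its dependent."""
    # For each package, count how many of its dependencies are not yet placed.
    pending = {pkg: len(deps) for pkg, deps in rdepends.items()}
    dependents = {pkg: [] for pkg in rdepends}
    for pkg, deps in rdepends.items():
        for dep in deps:
            dependents[dep].append(pkg)
    ready = deque(sorted(p for p, n in pending.items() if n == 0))
    order = []
    while ready:
        pkg = ready.popleft()
        order.append(pkg)
        for dep in dependents[pkg]:
            pending[dep] -= 1
            if pending[dep] == 0:
                ready.append(dep)
    return order  # shorter than len(rdepends) if there was a cycle

# dbus RDEPENDS on base-files and base-passwd, so both must come first.
graph = {"dbus": {"base-files", "base-passwd"},
         "base-files": set(), "base-passwd": set()}
order = install_order(graph)
print(order.index("base-files") < order.index("dbus"))  # → True
```

If the returned order is shorter than the package list, a cycle was present, which matches the "mysterious reordering" behaviour described later in this thread.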

Any comment is welcome.

Thanks,
Dongxiao



Re: [yocto] Deleting layers in Hob

2012-03-29 Thread Xu, Dongxiao
On Thu, 2012-03-29 at 19:05 +0800, Barros Pena, Belen wrote:
> Hi all,
> 
> Do we have enough information to make a decision about the meta-yocto
> layer? I don't understand all the technical details, but I am inclined to
> make it non-deletable in Hob (i.e. it is not possible to delete this layer
> in Hob).

The layer should stay removable in Hob, since many people use plain oe-core
without the meta-yocto layer.

I have patches to handle deletion of the meta-yocto layer and will send them
out soon.

Thanks,
Dongxiao

> 
> What do you think?
> 
> Belen
> 
> On 27/03/2012 17:56, "Joshua Lock"  wrote:
> 
> >On 27/03/12 00:19, Lu, Lianhao wrote:
> >>
> >>> -----Original Message-----
> >>> From: yocto-boun...@yoctoproject.org
> >>>[mailto:yocto-boun...@yoctoproject.org] On Behalf Of Xu, Dongxiao
> >>> Sent: Tuesday, March 27, 2012 2:49 PM
> >>> To: yocto
> >>> Subject: [yocto] Deleting layers in Hob
> >>>
> >>> When using Hob in Yocto Project, I found a issue when deleting layers.
> >>>I
> >>> think I ever raised this problem before.
> >>>
> >>> Let me briefly introduce how layer removal works in Hob. When user
> >>> changes a layer, it will following the below steps
> >>> 1) init the cooker.
> >>> 2) set new layers to cooker.
> >>> 3) parse configuration files.
> >>> 4) get available machines, distros, SDKs, etc.
> >>>
> >>> As we know, if we source oe-init-build-env in Yocto project
> >>>environment,
> >>> we will have DISTRO="poky" set in local.conf by default, where the
> >>> "poky" DISTRO comes from the meta-yocto layer. If user deletes
> >>> meta-yocto in Hob, and then error will happen when bitbake parsing the
> >>> local.conf, since it could not find where the "poky" DISTRO is defined.
> >>>
> >>> Even if we are able to successfully removed the meta-yocto layer by
> >>> removing the DISTRO definition in local.conf, system will report
> >>>another
> >>> issue that:
> >>>
> >>> Your configuration is using stamp files including the sstate hash but
> >>> your build directory was built with stamp files that do not include
> >>> this.
> >>> To continue, either rebuild or switch back to the OEBasic signature
> >>> handler with BB_SIGNATURE_HANDLER = 'OEBasic'.
> >>>
> >>> This is because BB_SIGNATURE_HANDLER = "OEBasic" is also defined in
> >>> meta-yocto layer (poky.conf).
> >>
> >> Meta-yocto is using OEBasicHash as default signature
> >>handler(ABI_VERSION=8, see ${TMPDIR}/abi_version), while oe-core is
> >>still using the OEBasic(ABI_VERSION=7). This means the oe-core can not
> >>reuse the stamp files generated by meta-yocto.
> >
> >Could we workaround the incompatibility by setting BB_SIGNATURE_HANDLER
> >= "OEBasic" somewhere in meta-hob?
> >
> >Cheers,
> >Joshua
> >-- 
> >Joshua '贾詡' Lock
> > Yocto Project "Johannes factotum"
> > Intel Open Source Technology Centre
> >___
> >yocto mailing list
> >yocto@yoctoproject.org
> >https://lists.yoctoproject.org/listinfo/yocto
> 




[yocto] Deleting layers in Hob

2012-03-26 Thread Xu, Dongxiao
When using Hob in the Yocto Project, I found an issue when deleting layers. I
think I have raised this problem before.

Let me briefly describe how layer removal works in Hob. When the user changes
a layer, Hob follows these steps:
1) init the cooker.
2) set the new layers in the cooker.
3) parse the configuration files.
4) get the available machines, distros, SDKs, etc.

As we know, if we source oe-init-build-env in a Yocto Project environment,
DISTRO="poky" is set in local.conf by default, and the "poky" DISTRO comes
from the meta-yocto layer. If the user deletes meta-yocto in Hob, an error
occurs when bitbake parses local.conf, since it cannot find where the "poky"
DISTRO is defined.

Even if we successfully remove the meta-yocto layer by deleting the DISTRO
definition from local.conf, the system reports another issue:

Your configuration is using stamp files including the sstate hash but
your build directory was built with stamp files that do not include
this.
To continue, either rebuild or switch back to the OEBasic signature
handler with BB_SIGNATURE_HANDLER = 'OEBasic'.

This is because BB_SIGNATURE_HANDLER = "OEBasic" is also defined in
meta-yocto layer (poky.conf).


So it seems that in certain environments (e.g., the Yocto Project), certain
layers (e.g., meta-yocto) should not be removed?

Or does anyone have an idea how to solve this problem?
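A possible workaround, following the hint in the error message itself (an untested sketch; whether it is acceptable to switch signature handlers here is exactly the open question):

```
# conf/local.conf — after removing meta-yocto from the layer list:
# 1) drop the poky distro, falling back to OE-Core's defaultsetup
DISTRO = ""
# 2) switch back to the signature handler the existing stamps were
#    built with, as the error message suggests
BB_SIGNATURE_HANDLER = "OEBasic"
```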

Thanks,
Dongxiao




Re: [yocto] pseudo interaction issue

2012-03-22 Thread Xu, Dongxiao
On Thu, 2012-03-22 at 21:29 -0500, Peter Seebach wrote:
> On Fri, 23 Mar 2012 09:01:16 +0800
> "Xu, Dongxiao"  wrote:
> 
> > I think the difference between Hob and other UI (e.x., knotty) is
> > that, when building image is finished in knotty, the UI, bitbake
> > server, and pseudo all quit. But in Hob, everything still alive after
> > a build. I noticed that the pseudo error happens only when Hob is
> > trying to issue a second build.
> 
> I get it on the first build.

What do you mean by the first build? Did you click the "Just bake" button?

Actually, "Just bake" is divided into two steps:

1) build_target(packages)
2) build_target(image)

I noticed the pseudo error happens when calling build_target(image); that is
why I treated it as a second build.

Thanks,
Dongxiao

> 
> ... You see the issue.
> 
> Pseudo shouldn't be having a problem, it's designed to restart as
> needed.
> 
> Right now, what I know is:
> 1.  I didn't catch popen(), and this can actually be an issue with
> stuff like PSEUDO_UNLOAD or PSEUDO_DISABLED in play.
> 2.  If I wrap popen, and have the wrapper unconditionally emit a
> diagnostic, that works for simple os.popen() test cases.
> 3.  But not for the case that's triggering this.
> 
> So it looks like, when this runs, we have a Python session which has
> had pseudo unloaded, not just disabled, which then sets LD_PRELOAD but
> doesn't set PSEUDO_PREFIX.  Or something.
> 
> I'm still trying to get better data on this, like figure out how the
> sub-process is even getting invoked.
> 
> -s




Re: [yocto] pseudo interaction issue

2012-03-22 Thread Xu, Dongxiao
On Thu, 2012-03-22 at 11:18 -0500, Peter Seebach wrote:
> On Thu, 22 Mar 2012 09:49:41 +0800
> "Xu, Dongxiao"  wrote:
> 
> > Hi Mark,
> > 
> > Any update on this one? I think we may need to track it in bugzilla.
> 
> I have been looking into this.  I've convinced myself that popen() is
> broken under pseudo, but that's not enough to explain this:
> 
> * I have a fixed pseudo where popen works.  It still fails sometimes
>   under hob.
> * When it fails, the popen() wrapper isn't even getting called.
> * Still looking into this.
> 
> Interestingly, I can't get this failure to occur at all outside of hob.

I think the difference between Hob and other UIs (e.g., knotty) is that
when an image build finishes in knotty, the UI, the bitbake server, and
pseudo all quit, but in Hob everything stays alive after a build. I
noticed that the pseudo error happens only when Hob is trying to issue a
second build.

Thanks,
Dongxiao

> 
> -s




Re: [yocto] pseudo interaction issue

2012-03-21 Thread Xu, Dongxiao
Hi Mark,

Any update on this one? I think we may need to track it in bugzilla.

Thanks,
Dongxiao

On Wed, 2012-03-14 at 17:02 +0800, Xu, Dongxiao wrote:
> Hi Mark,
> 
> When using the new Hob to build targets, I also observed the pseudo
> output:
> 
> "pseudo: You must set the PSEUDO_PREFIX environment variable to run
> pseudo."
> 
> Here are the steps to reproduce it:
> 
> 1) source oe-init-build-env
> 2) hob
> 3) select machine and base image. Here I use qemux86 and
> core-image-minimal.
> 4) click "Just bake". For this first build, pseudo works OK.
> 5) after the build finishes, return to the image configuration page and
> click the "Just bake" button again. Then after the second build starts,
> pseudo will print the above message.
> 
> Thanks,
> Dongxiao
> 
> On Fri, 2012-02-17 at 10:50 -0800, Mark Hatle wrote:
> > We're looking into this issue.  You should never get the "pseudo: You must 
> > set 
> > the PSEUDO_PREFIX environment variable to run pseudo." message.  This means 
> > something appears to have avoided the wrappers.
> > 
> > I'll let you know once we figure out something.
> > 
> > --Mark
> > 
> > On 2/17/12 9:35 AM, Paul Eggleton wrote:
> > > Hi all,
> > >
> > > I'm trying to extend buildhistory to write out the metadata revisions just
> > > before it does the commit to the buildhistory repository, and I'm having 
> > > some
> > > pseudo-related trouble. The structure is a little unusual, in that the
> > > execution flow is an event handler that calls a shell function (via
> > > bb.build.exec_func()) and during parsing this function an ${@...} 
> > > reference to
> > > a python function is evaluated, which then calls os.popen(), at which 
> > > point I
> > > get the error "pseudo: You must set the PSEUDO_PREFIX environment 
> > > variable to
> > > run pseudo."
> > >
> > > I don't need pseudo at this stage. I've tried setting PSEUDO_DISABLED=1 
> > > and
> > > even PSEUDO_UNLOAD=1 just prior to the os.popen() call (or within it) and
> > > despite evidence that pseudo is taking notice of these being set in other
> > > contexts (when the function is called from elsewhere) even when doing 
> > > this I
> > > still get the error above. I could rearrange the structure to avoid this
> > > execution flow however that would bar me from reusing existing code that 
> > > we
> > > have for getting the metadata revision.
> > >
> > > Any suggestions?
> > >
> > > Cheers,
> > > Paul
> > >
> > 
> 




Re: [yocto] pseudo interaction issue

2012-03-14 Thread Xu, Dongxiao
Hi Mark,

When using the new Hob to build targets, I also observed the pseudo
output:

"pseudo: You must set the PSEUDO_PREFIX environment variable to run
pseudo."

Here are the steps to reproduce it:

1) source oe-init-build-env
2) hob
3) select machine and base image. Here I use qemux86 and
core-image-minimal.
4) click "Just bake". For this first build, pseudo works OK.
5) after the build finishes, return to the image configuration page and
click the "Just bake" button again. Then after the second build starts,
pseudo will print the above message.

Thanks,
Dongxiao

On Fri, 2012-02-17 at 10:50 -0800, Mark Hatle wrote:
> We're looking into this issue.  You should never get the "pseudo: You must 
> set 
> the PSEUDO_PREFIX environment variable to run pseudo." message.  This means 
> something appears to have avoided the wrappers.
> 
> I'll let you know once we figure out something.
> 
> --Mark
> 
> On 2/17/12 9:35 AM, Paul Eggleton wrote:
> > Hi all,
> >
> > I'm trying to extend buildhistory to write out the metadata revisions just
> > before it does the commit to the buildhistory repository, and I'm having 
> > some
> > pseudo-related trouble. The structure is a little unusual, in that the
> > execution flow is an event handler that calls a shell function (via
> > bb.build.exec_func()) and during parsing this function an ${@...} reference 
> > to
> > a python function is evaluated, which then calls os.popen(), at which point 
> > I
> > get the error "pseudo: You must set the PSEUDO_PREFIX environment variable 
> > to
> > run pseudo."
> >
> > I don't need pseudo at this stage. I've tried setting PSEUDO_DISABLED=1 and
> > even PSEUDO_UNLOAD=1 just prior to the os.popen() call (or within it) and
> > despite evidence that pseudo is taking notice of these being set in other
> > contexts (when the function is called from elsewhere) even when doing this I
> > still get the error above. I could rearrange the structure to avoid this
> > execution flow however that would bar me from reusing existing code that we
> > have for getting the metadata revision.
> >
> > Any suggestions?
> >
> > Cheers,
> > Paul
> >
> 




[yocto] Web Hob GUI design

2012-03-12 Thread Xu, Dongxiao
Hi Belen,

In parallel with GTK Hob development, we are also moving forward on
Web Hob development. One question: will the Web Hob GUI follow the GTK
Hob design, or do we need a new design for the web?

Thanks,
Dongxiao



[yocto] COMPATIBLE_HOST in initramfs-live-install_1.0.bb

2012-02-28 Thread Xu, Dongxiao
Hi Tom,

I saw that the initramfs-live-install_1.0.bb recipe contains a line
setting COMPATIBLE_HOST:

COMPATIBLE_HOST = "(i.86|x86_64).*-linux"

But initramfs-live-install is actually pulled in as a dependency by
core-image-minimal-initramfs.bb. Therefore, if we set the machine to
"qemuarm" or some other non-x86 architecture, and then
execute:
# bitbake core-image-minimal-initramfs
or
# bitbake universe

the system reports an error:

ERROR: Nothing RPROVIDES
'initramfs-live-install' (but 
/home/yocto-build5/poky-contrib/meta/recipes-core/images/core-image-minimal-initramfs.bb
 RDEPENDS on or otherwise requires it)
ERROR: initramfs-live-install was skipped: incompatible with host
arm-poky-linux-gnueabi (not in COMPATIBLE_HOST)
NOTE: Runtime target 'initramfs-live-install' is unbuildable,
removing...
Missing or unbuildable dependency chain was: ['initramfs-live-install']
ERROR: Required build target 'core-image-minimal-initramfs' has no
buildable providers.
Missing or unbuildable dependency chain was:
['core-image-minimal-initramfs', 'initramfs-live-install']

Summary: There was 1 WARNING message shown.
Summary: There were 2 ERROR messages shown, returning a non-zero exit
code.

Could you explain the rationale for setting COMPATIBLE_HOST in the
initramfs-live-install recipe?
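For reference, the skip in the log above is plain regex matching of the
target triplet against COMPATIBLE_HOST. A small sketch (the x86-64 triplet
"x86_64-poky-linux" is an assumed example; the ARM triplet is the one named
in the error output):

```python
import re

# COMPATIBLE_HOST value from initramfs-live-install_1.0.bb
pattern = re.compile(r"(i.86|x86_64).*-linux")

# An x86-64 host triplet (assumed example) matches, so the recipe builds:
print(bool(pattern.match("x86_64-poky-linux")))       # True

# The ARM triplet from the error above does not match, so the recipe is
# skipped as "incompatible with host":
print(bool(pattern.match("arm-poky-linux-gnueabi")))  # False
```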

Thanks,
Dongxiao



Re: [yocto] One question about taskdata and runqueue

2012-02-27 Thread Xu, Dongxiao
On Mon, 2012-02-27 at 16:19 +, Richard Purdie wrote:
> On Mon, 2012-02-27 at 21:51 +0800, Xu, Dongxiao wrote:
> > Hi list,
> > 
> > If I have two recipes, see following. Both of them provides
> > "virtual/test" and has package named "test-test", the only difference is
> > the RDEPENDS of the package "test-test".
> > 
> > test-a_1.0.bb
> > 
> > PROVIDES = "virtual/test"
> > PACKAGES = "test-test"
> > # Assume that the abcd package are provided by recipe abcd.bb
> > RDEPENDS_test-test = "abcd"
> > 
> > 
> > test-b_1.0.bb
> > 
> > PROVIDES = "virtual/test"
> > PACKAGES = "test-test"
> > 
> > In a certain configuration file, we have the PREFERRED_PROVIDER set as:
> > PREFERRED_PROVIDER_virtual/test = "test-a".
> > 
> > Then if a real recipe, for example, the 'v86d', depends on the
> > "virtual/test":
> > DEPENDS = "virtual/test"
> > 
> > Finally if I run the following command:
> > # bitbake v86d
> > 
> > We know that the recipe "abcd" will be included in the runqueue.
> > 
> > My question is, can we get the build dependency to recipe "abcd" through
> > taskdata? Or it is finalized until we create the RunQueue object?
> 
> task data should have a list of providers for "virtual/test", sorted in
> priority order. There should be two entries in that list, one for test-a
> and test-b. Since you set the preferred provider, you should have test-a
> as the first item.
> 
> Once you resolve it to a recipe file, you should be able to look at the
> recipe file's dependencies in dataCache.
> 
> The trouble is you're now resolving all the dependencies in the code I
> think you're referring to. This was in general the job of
> prepare_runqueue() and I'm starting to worry you're duplicating its
> functionality.

Hmm, previously we used taskdata to reduce the time needed to build up
the dependency tree. It seems we will have to use the runqueue to
determine the dependencies.
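As a toy model only (not bitbake's real API; actual resolution happens in
bb.providers and prepare_runqueue), the selection Richard describes could be
sketched as:

```python
# Toy model: pick a provider for a virtual target, honouring a
# PREFERRED_PROVIDER setting, then read its recipe-level dependencies
# from a dataCache-like map. Names mirror the example recipes above.
def pick_provider(providers, preferred=None):
    # Sort so the preferred provider (if any) comes first, mimicking the
    # priority-ordered provider list described above.
    return sorted(providers, key=lambda p: p != preferred)[0]

# Hypothetical dataCache-style dependency map for the two test recipes.
depends = {"test-a": ["abcd"], "test-b": []}

chosen = pick_provider(["test-b", "test-a"], preferred="test-a")
print(chosen)           # test-a
print(depends[chosen])  # ['abcd'] -> abcd joins the build
```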

Thanks,
Dongxiao

> 
> Cheers,
> 
> Richard
> 




[yocto] One question about taskdata and runqueue

2012-02-27 Thread Xu, Dongxiao
Hi list,

Suppose I have the two recipes below. Both provide "virtual/test" and
have a package named "test-test"; the only difference is the RDEPENDS of
the package "test-test".

test-a_1.0.bb

PROVIDES = "virtual/test"
PACKAGES = "test-test"
# Assume that the abcd package are provided by recipe abcd.bb
RDEPENDS_test-test = "abcd"


test-b_1.0.bb

PROVIDES = "virtual/test"
PACKAGES = "test-test"

In a certain configuration file, we have the PREFERRED_PROVIDER set as:
PREFERRED_PROVIDER_virtual/test = "test-a".

Then if a real recipe, for example, the 'v86d', depends on the
"virtual/test":
DEPENDS = "virtual/test"

Finally if I run the following command:
# bitbake v86d

We know that the recipe "abcd" will be included in the runqueue.

My question is: can we get the build dependency on recipe "abcd" through
taskdata? Or is it not finalized until we create the RunQueue object?


Thanks for help!
-- Dongxiao



Re: [yocto] RFC: Hob 1.2 design

2012-02-02 Thread Xu, Dongxiao
One more from me: Hob may work as a deploy tool, and in the new movie the 
approach is to select "My images" and then click the deploy button. I think a 
normal user may not have that knowledge. I am still wondering whether we should 
add a "deploy" button to the toolbar.

Thanks,
Dongxiao

From: Wang, Shane
Sent: Friday, February 03, 2012 8:26 AM
To: Wang, Shane; Barros Pena, Belen; Xu, Dongxiao; Lu, Lianhao
Cc: Eggleton, Paul; Purdie, Richard; Zhang, Jessica; Lock, Joshua; Liu, Song; 
Stewart, David C; yocto@yoctoproject.org
Subject: RE: RFC: Hob 1.2 design

I get one more:

In the Image details screen, after opening an image file via "My images", we 
don't allow "Edit Packages", I think, because from an image alone we can't 
regenerate the packages; you can't assume there is a build directory (tmp/). 
Is my understanding correct?

--
Shane


Re: [yocto] RPM multilib package installation issue

2011-09-02 Thread Xu, Dongxiao
Hi Mark,

> -Original Message-
> From: Mark Hatle [mailto:mark.ha...@windriver.com]
> Sent: Friday, September 02, 2011 11:03 PM
> To: Xu, Dongxiao
> Cc: Richard Purdie (richard.pur...@linuxfoundation.org);
> yocto@yoctoproject.org
> Subject: Re: RPM multilib package installation issue
> 
> On 9/2/11 2:33 AM, Xu, Dongxiao wrote:
> > Hi Mark and Richard,
> >
> > I am trying to setup a RPM multilib system that, it is a qemux86-64
> > base image with MULTILIB_IMAGE_INSTALL = "lib32-connman-gnome". With
> > several fixes, the build can pass.
> >
> > However in run time testing I met a problem that, for those libraries
> > whose base/multilib versions packages will be both built out (like
> > libgtk, it has "libgtk-2.0-2.22.1-r2.x86_64.rpm" and
> > "libgtk-2.0-2.22.1-r2.x86.rpm"), the rpm will only installs the lib32 
> > version of
> it.
> 
> During filesystem construction the system uses dependencies to decide what
> to install.  If you build a 32-bit connman-gnome and it requires other 32-bit
> libraries the dependency scanner will either pick them up and install them, or
> error due to missing dependencies.
> 
> In the manual case you would use "rpm -Uhv " manually specifying which
> one you want.  RPM will detect a multilib package and will allow installation 
> of
> both versions.  (Note always use rpm -U and not rpm -i..  rpm -i just blindly
> installs the software with no checking if an existing version exists.)

In our poky system, I saw that we generate install_solution.manifest with a 
command and then use "rpm -Uvh" to install the packages.

I attached my install_solution.manifest here. The list shows that some 
libraries are installed only once. For example db: both the 32-bit and 64-bit 
versions are needed, but only the 32-bit one is installed (libgtk is another 
example).

Thanks,
Dongxiao

> 
> > Therefore one question is, if there are two rpm packages with the same
> > PN, PV, PR, but different architecture (like our multilib case), then
> > we run command "rpm -ivh libgtk", which version of libgtk will be
> > installed? Or does rpm have any parameter to force installing them
> > both? Actually multilib requires to install them both with certain order.
> 
> No specific order should be necessary on a multilib system.  As long as the 
> end
> dependencies are satisfied the resulting filesystem will work.  (Exceptions to
> this are when there are pre and post install scripts that have their own 
> unique
> dependencies.. but those are not the normal case for OE-Core/Yocto.)
> 
> --Mark
> 
> > Thanks, Dongxiao
> >



[Attachment: install_solution.manifest]


[yocto] RPM multilib package installation issue

2011-09-02 Thread Xu, Dongxiao
Hi Mark and Richard,

I am trying to set up an RPM multilib system: a qemux86-64 base image with 
MULTILIB_IMAGE_INSTALL = "lib32-connman-gnome". With several fixes, the build 
passes.

However, in runtime testing I met a problem: for libraries where both the base 
and multilib versions of a package are built (like libgtk, which has 
"libgtk-2.0-2.22.1-r2.x86_64.rpm" and "libgtk-2.0-2.22.1-r2.x86.rpm"), rpm 
only installs the lib32 version.

Therefore one question is: if there are two rpm packages with the same PN, PV, 
and PR but different architectures (as in our multilib case), and we run 
"rpm -ivh libgtk", which version of libgtk will be installed? Or does rpm have 
a parameter to force installing both? Multilib actually requires installing 
both, in a certain order.

Thanks,
Dongxiao 



[yocto] PACKAGES_DYNAMIC handling in multilib

2011-07-11 Thread Xu, Dongxiao
Hi Richard,

Recently I have been doing some work related to multilib.

In the current code logic, I see that packages defined in the "PACKAGES" 
variable are automatically renamed with the multilib prefix (libxx-PN).

However, there seems to be no logic handling the PACKAGES_DYNAMIC case. Was it 
missed?

Besides, there is runtime package-splitting code; for example, in the perl 
recipe the perl-module-* packages are split by the following code:

python populate_packages_prepend () {
        libdir = bb.data.expand('${libdir}/perl/${PV}', d)
        do_split_packages(d, libdir, 'auto/(Encode/.[^/]*)/.*',
                'perl-module-%s', 'perl module %s', recursive=True,
                allow_dirs=False, match_path=True, prepend=False)
        do_split_packages(d, libdir, 'auto/([^/]*)/.*', 'perl-module-%s',
                'perl module %s', recursive=True, allow_dirs=False,
                match_path=True, prepend=False)
        do_split_packages(d, libdir, 'Module/([^\/]*).*', 'perl-module-%s',
                'perl module %s', recursive=True, allow_dirs=False,
                match_path=True, prepend=False)
        do_split_packages(d, libdir,
                '(^(?!(CPAN\/|CPANPLUS\/|Module\/|unicore\/|auto\/)[^\/]).*)\.(pm|pl|e2x)',
                'perl-module-%s', 'perl module %s', recursive=True,
                allow_dirs=False, match_path=True, prepend=False)
}
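To make that splitting multilib-aware, one possible direction (my assumption, 
not a settled design; it presumes a prefix variable such as MLPREFIX that 
expands to "lib32-"/"lib64-" for multilib variants and to the empty string 
otherwise) would be to expand the prefix into the split package-name format:

```
python populate_packages_prepend () {
        libdir = bb.data.expand('${libdir}/perl/${PV}', d)
        # Hypothetical: prefix split packages so lib32-perl produces
        # lib32-perl-module-* instead of clashing with perl-module-*.
        pkgfmt = bb.data.expand('${MLPREFIX}perl-module-%s', d)
        do_split_packages(d, libdir, 'auto/([^/]*)/.*', pkgfmt,
                'perl module %s', recursive=True, allow_dirs=False,
                match_path=True, prepend=False)
}
```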

To support lib32-perl/lib64-perl and similar recipes, I think we may have to 
handle such pieces of code specifically in current poky.

Do you have suggestions on the above?

Thanks,
Dongxiao 



Re: [yocto] configure optimization feature update

2011-06-29 Thread Xu, Dongxiao


> -Original Message-
> From: yocto-boun...@yoctoproject.org
> [mailto:yocto-boun...@yoctoproject.org] On Behalf Of Xu, Dongxiao
> Sent: Friday, June 17, 2011 10:19 AM
> To: Richard Purdie
> Cc: yocto@yoctoproject.org
> Subject: Re: [yocto] configure optimization feature update
> 
> Hi Richard,
> 
> > -Original Message-
> > From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org]
> > Sent: Thursday, June 16, 2011 11:01 PM
> > To: Xu, Dongxiao
> > Cc: yocto@yoctoproject.org
> > Subject: Re: configure optimization feature update
> >
> > Hi Dongxiao,
> >
> > On Thu, 2011-06-16 at 08:57 +0800, Xu, Dongxiao wrote:
> > > Recently I was doing the "configure optimization" feature and
> > > collecting data for it.
> > >
> > > The main logic of this feature is straight forward:
> > >
> > > 1. Use the diff file as autoreconf cache. (I use command: "diff -ruN
> > > SOURCE-ORIG SOURCE", here "SOURCE-ORIG" is the source directory
> > > before running autoreconf, while "SOURCE" is the directory after
> > > running autoreconf).
> > > 2. Add SRC_URI checksum for all patches of the source code.
> > > 3. Tag each autoreconf cache file with ${PN} and the SRC_URI
> > > checksum of source code and all patches.
> > > 4. If the currently SRC_URI checksum matches the cached checksum,
> > > then we can patch the cache instead of running "autoreconf" stage.
> > >
> > > I did some testings for sato build, the result is not as good as we
> > > expected:
> > >
> > > On a server build machine (Genuine Intel(R) CPU @ 2.40GHz, 2 sockets
> > > with 6
> > core each and hyperthreading, thus 24 logical CPUs in all, 66G memory):
> > >
> > > w/o the optimization:
> > > real83m40.963s
> > > user496m58.550s
> > > sys 329m1.590s
> > >
> > > w/ the optimization:
> > > real79m1.062s
> > > user460m58.600s
> > > sys 347m42.120s
> > >
> > > It has about 5% performance gain.
> >
> > Whats interesting there is the relatively large sys times compared to
> > user. Any idea why that's happening? Spinning locks?
> 
> Yes, I also noticed the inconsistent user and sys figures.
> During the build, I sometimes found it would stall for a while with the
> system busy in "kjournald".
> It happens relatively frequently on that 24-CPU server with "48" and "-j48"
> as the build parallelism parameters.
> I am not sure whether this caused the above phenomenon.
> 
> >
> > > I also tested the patch on a desktop core-i7 machine (Intel(R)
> > > Core(TM) i7
> > CPU 870 @ 2.93GHz, 4 core 8 logical CPU, 4G memory):
> > >
> > > w/o the optimization:
> > > real105m25.436s
> > > user372m48.040s
> > > sys 51m23.950s
> > >
> > > w/ the optimization:
> > > real103m38.314s
> > > user332m35.770s
> > > sys 49m4.520s
> > >
> > > It only has about 2% performance gain.
> > >
> > > The result is not encouraging.
> >
> > Agreed, this isn't as good as we'd hoped for :(.
> >
> > > There are also some other things we need to take into consideration
> > > for this feature:
> > >
> > > 1. If add this feature, the first build time should be longer than
> > > current since it needs to build the autoreconf cache.
> > > 2. Maintainers needs to maintain the SRC_URI checksums not only for
> > > source code, but also all its patches. For some recipes, it has more
> > > than 20 patches, which needs assignable maintenance effort.
> > > 3. How to distribute the caches will be a problem. The total size of
> > > such cache is about 900M (before compression) and 200M (after
> > > compression). Since the size is not small, distributing it with Poky
> > > source code doesn't make sense. On another aspect, we can use
> > > something like "sstate". But since we already have caches of sstate,
> > > I think it is not necessary for us to enable another similar cache
> > > mechanism with little improvement.
> > >
> > > Therefore my opinion is we may give up this feature. What's your
> > > comments and suggestions?
> >
> > I think we should put the patches together on a branch in contrib so
> > we keep them somewhere in case we want them.

Re: [yocto] configure optimization feature update

2011-06-16 Thread Xu, Dongxiao
Hi Richard,

> -Original Message-
> From: Richard Purdie [mailto:richard.pur...@linuxfoundation.org]
> Sent: Thursday, June 16, 2011 11:01 PM
> To: Xu, Dongxiao
> Cc: yocto@yoctoproject.org
> Subject: Re: configure optimization feature update
> 
> Hi Dongxiao,
> 
> On Thu, 2011-06-16 at 08:57 +0800, Xu, Dongxiao wrote:
> > Recently I was doing the "configure optimization" feature and
> > collecting data for it.
> >
> > The main logic of this feature is straight forward:
> >
> > 1. Use the diff file as autoreconf cache. (I use command: "diff -ruN
> > SOURCE-ORIG SOURCE", here "SOURCE-ORIG" is the source directory before
> > running autoreconf, while "SOURCE" is the directory after running
> > autoreconf).
> > 2. Add SRC_URI checksum for all patches of the source code.
> > 3. Tag each autoreconf cache file with ${PN} and the SRC_URI checksum
> > of source code and all patches.
> > 4. If the currently SRC_URI checksum matches the cached checksum, then
> > we can patch the cache instead of running "autoreconf" stage.
> >
> > I did some testings for sato build, the result is not as good as we
> > expected:
> >
> > On a server build machine (Genuine Intel(R) CPU @ 2.40GHz, 2 sockets with 6
> core each and hyperthreading, thus 24 logical CPUs in all, 66G memory):
> >
> > w/o the optimization:
> > real83m40.963s
> > user496m58.550s
> > sys 329m1.590s
> >
> > w/ the optimization:
> > real79m1.062s
> > user460m58.600s
> > sys 347m42.120s
> >
> > It has about 5% performance gain.
> 
> Whats interesting there is the relatively large sys times compared to user. 
> Any
> idea why that's happening? Spinning locks?

Yes, I also noticed the inconsistent user and sys figures.
During the build, I sometimes found it would stall for a while with the 
system busy in "kjournald".
It happens relatively frequently on that 24-CPU server with "48" and "-j48" 
as the build parallelism parameters.
I am not sure whether this caused the above phenomenon.

> 
> > I also tested the patch on a desktop core-i7 machine (Intel(R) Core(TM) i7
> CPU 870 @ 2.93GHz, 4 core 8 logical CPU, 4G memory):
> >
> > w/o the optimization:
> > real105m25.436s
> > user372m48.040s
> > sys 51m23.950s
> >
> > w/ the optimization:
> > real103m38.314s
> > user332m35.770s
> > sys 49m4.520s
> >
> > It only has about 2% performance gain.
> >
> > The result is not encouraging.
> 
> Agreed, this isn't as good as we'd hoped for :(.
> 
> > There are also some other things we need to take into consideration
> > for this feature:
> >
> > 1. If add this feature, the first build time should be longer than
> > current since it needs to build the autoreconf cache.
> > 2. Maintainers needs to maintain the SRC_URI checksums not only for
> > source code, but also all its patches. For some recipes, it has more
> > than 20 patches, which needs assignable maintenance effort.
> > 3. How to distribute the caches will be a problem. The total size of
> > such cache is about 900M (before compression) and 200M (after
> > compression). Since the size is not small, distributing it with Poky
> > source code doesn't make sense. On another aspect, we can use
> > something like "sstate". But since we already have caches of sstate, I
> > think it is not necessary for us to enable another similar cache
> > mechanism with little improvement.
> >
> > Therefore my opinion is we may give up this feature. What's your
> > comments and suggestions?
> 
> I think we should put the patches together on a branch in contrib so we keep
> them somewhere in case we want them. Certainly tracking what changes the
> autoreconf process makes may be useful in other situations in future so its
> worth keeping the patches. I think you're right and we should shelve the idea
> for now though as it doesn't look to be worth the pain it entails.

OK, I will queue my patch into a contrib tree and keep it there.

> 
> For reference, we probably do need to start tracking the file checksums for 
> the
> benefit of sstate.

Could you explain more here? Are the file checksums you mention the SRC_URI 
checksums? How can they help sstate?

Thanks,
Dongxiao

> 
> The mediocre performance improvement is likely down to the size of the cache
> data but I can't immediately think of a way to improve that :(.
> 
> Cheers,
> 
> Richard



Re: [yocto] configure optimization feature update

2011-06-15 Thread Xu, Dongxiao
> -Original Message-
> From: Khem Raj [mailto:raj.k...@gmail.com]
> Sent: Thursday, June 16, 2011 9:29 AM
> To: Xu, Dongxiao
> Cc: Richard Purdie (richard.pur...@linuxfoundation.org);
> yocto@yoctoproject.org
> Subject: Re: [yocto] configure optimization feature update
> 
> On Wed, Jun 15, 2011 at 5:57 PM, Xu, Dongxiao 
> wrote:
> > Hi Richard,
> >
> > Recently I was doing the "configure optimization" feature and collecting 
> > data
> for it.
> >
> > The main logic of this feature is straight forward:
> >
> > 1. Use the diff file as autoreconf cache. (I use command: "diff -ruN
> SOURCE-ORIG SOURCE", here "SOURCE-ORIG" is the source directory before
> running autoreconf, while "SOURCE" is the directory after running autoreconf).
> > 2. Add SRC_URI checksum for all patches of the source code.
> > 3. Tag each autoreconf cache file with ${PN} and the SRC_URI checksum of
> source code and all patches.
> > 4. If the currently SRC_URI checksum matches the cached checksum, then we
> can patch the cache instead of running "autoreconf" stage.
> >
> 
> The autoconf'ing is sort of arbitrary at the moment. Depending on what is
> staged the results may vary. So some way of creating a cache is nice  since it
> can be used to verify if package rebuild happens with same configure variables
> or not. Sometimes we have seen some packages are build differently first time
> since many packages are not staged.
> 
> 
> > I did some testings for sato build, the result is not as good as we 
> > expected:
> >
> > On a server build machine (Genuine Intel(R) CPU @ 2.40GHz, 2 sockets with 6
> core each and hyperthreading, thus 24 logical CPUs in all, 66G memory):
> >
> > w/o the optimization:
> > real    83m40.963s
> > user    496m58.550s
> > sys     329m1.590s
> >
> > w/ the optimization:
> > real    79m1.062s
> > user    460m58.600s
> > sys     347m42.120s
> >
> > It has about 5% performance gain.
> >
> > I also tested the patch on a desktop core-i7 machine (Intel(R) Core(TM) i7
> CPU 870 @ 2.93GHz, 4 core 8 logical CPU, 4G memory):
> >
> > w/o the optimization:
> > real    105m25.436s
> > user    372m48.040s
> > sys     51m23.950s
> >
> > w/ the optimization:
> > real    103m38.314s
> > user    332m35.770s
> > sys     49m4.520s
> >
> > It only has about 2% performance gain.
> >
> > The result is not encouraging.
> >
> > There are also some other things we need to take into consideration for this
> feature:
> >
> > 1. If add this feature, the first build time should be longer than current 
> > since it
> needs to build the autoreconf cache.
> > 2. Maintainers needs to maintain the SRC_URI checksums not only for source
> code, but also all its patches. For some recipes, it has more than 20 patches,
> which needs assignable maintenance effort.
> 
> Yeah thats definite pain.
> 
> > 3. How to distribute the caches will be a problem. The total size of such 
> > cache
> is about 900M (before compression) and 200M (after compression). Since the
> size is not small, distributing it with Poky source code doesn't make sense. 
> On
> another aspect, we can use something like "sstate". But since we already have
> caches of sstate, I think it is not necessary for us to enable another similar
> cache mechanism with little improvement.
> >
> 
> hmm this is a real problem and probably the perf killer. I wonder why the 
> sizes
> are so big.

I diff the source tree before and after autoreconf and use the generated 
patch as the autoreconf cache. For some source trees, for example connman, the 
diff file can be as large as 3 MB, and for a sato build about 300 recipes need 
autoreconf.

Thanks,
Dongxiao

> 
> > Therefore my opinion is we may give up this feature. What's your comments
> and suggestions?
> >
> > Thanks,
> > Dongxiao
> >


[yocto] configure optimization feature update

2011-06-15 Thread Xu, Dongxiao
Hi Richard,

Recently I was doing the "configure optimization" feature and collecting data 
for it.

The main logic of this feature is straightforward:

1. Use the diff file as autoreconf cache. (I use command: "diff -ruN 
SOURCE-ORIG SOURCE", here "SOURCE-ORIG" is the source directory before running 
autoreconf, while "SOURCE" is the directory after running autoreconf).
2. Add SRC_URI checksum for all patches of the source code.
3. Tag each autoreconf cache file with ${PN} and the SRC_URI checksum of source 
code and all patches.
4. If the current SRC_URI checksum matches the cached checksum, then we can 
apply the cached patch instead of running the "autoreconf" stage.
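The four steps can be sketched on a toy tree (the paths, the stand-in
"autoreconf" step, and the cksum-based tag are all simplifications of mine;
the real feature keys the cache on the SRC_URI checksums of the source and
its patches):

```shell
#!/bin/sh -e
# Toy version of the diff-based autoreconf cache described above.
rm -rf demo && mkdir -p demo/SOURCE
printf 'AC_INIT\n' > demo/SOURCE/configure.ac
cp -r demo/SOURCE demo/SOURCE-ORIG            # snapshot before "autoreconf"

# Stand-in for running autoreconf: it generates new files in SOURCE.
printf 'generated configure\n' > demo/SOURCE/configure

# Step 1: capture the diff as the cache (diff exits 1 when files differ).
(cd demo && diff -ruN SOURCE-ORIG SOURCE > cache.patch) || true

# Steps 2-3: tag the cache with a checksum of the inputs.
tag=$(cksum demo/SOURCE-ORIG/configure.ac | cut -d' ' -f1)
mv demo/cache.patch "demo/cache-$tag.patch"

# Step 4: a later build with a matching checksum applies the cached
# patch instead of re-running autoreconf.
rm -rf demo/SOURCE && cp -r demo/SOURCE-ORIG demo/SOURCE
(cd demo/SOURCE && patch -p1 < "../cache-$tag.patch")
cat demo/SOURCE/configure
```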

I did some testing for a sato build; the result is not as good as we expected:

On a server build machine (Genuine Intel(R) CPU @ 2.40GHz, 2 sockets with 6 
core each and hyperthreading, thus 24 logical CPUs in all, 66G memory):

w/o the optimization:
real    83m40.963s
user    496m58.550s
sys     329m1.590s

w/ the optimization:
real    79m1.062s
user    460m58.600s
sys     347m42.120s

It has about 5% performance gain.

I also tested the patch on a desktop Core i7 machine (Intel(R) Core(TM) i7 CPU 
870 @ 2.93GHz, 4 cores / 8 logical CPUs, 4 GB memory):

w/o the optimization:
real    105m25.436s
user    372m48.040s
sys     51m23.950s

w/ the optimization:
real    103m38.314s
user    332m35.770s
sys     49m4.520s

That is only about a 2% performance gain.

The result is not encouraging.

There are also some other things we need to take into consideration for this 
feature:

1. If we add this feature, the first build will take longer than it does now, 
since it needs to generate the autoreconf cache.
2. Maintainers need to maintain SRC_URI checksums not only for the source code 
but also for all of its patches. Some recipes have more than 20 patches, which 
requires considerable maintenance effort.
3. Distributing the caches will be a problem. The total size of the cache is 
about 900 MB before compression and 200 MB after compression. Since that is not 
small, distributing it with the Poky source code doesn't make sense. 
Alternatively, we could use something like sstate, but since we already have 
sstate caches, I don't think it is worth adding another, similar cache 
mechanism for such a small improvement.

Therefore my opinion is that we should give up this feature. What are your 
comments and suggestions?

Thanks,
Dongxiao 



Re: [yocto] Race condition when building a recipe and poky-image-minimal-initramfs

2011-03-14 Thread Xu, Dongxiao
Mark Hatle wrote:
> On 3/14/11 8:47 PM, Xu, Dongxiao wrote:
>> Hi Richard,
> 
> There was already a defect covering this.  The bug number is "797".
> 
> In order to fix the problem a lock was added to the RPM generation. 
> This lock should be preventing both RPM package generation and rootfs
> construction from running at the same time.  
> 
> The code was checked into Bernard on 2011-03-10.  If your image is
> from after that date, please reopen the defect and add the details
> below.  

Yes, thanks for pointing it out. The phenomenon is similar to bug 797, and I 
was going to mark mine as a duplicate.

However, I just checked, and your fix for 797 is already in my build, so I 
will re-open the bug instead.

My build is based on commit 43a2d098008eee711399b8d64594d84ae034b9bf.

On my side, the race condition happens during manifest generation, while 
executing "package_update_index_rpm()".

Thanks,
Dongxiao

> 
> --Mark
> 
>> These days I found a race condition between building a recipe and
>> poky-image-minimal-initramfs, to reproduce it, you can try as: 
>> 
>> 1) Run "bitbake poky-image-sato-live". Build the full sato-live image
>> until it is successful.
>> 2) Bump connman's PR (or some other sato recipe's PR).
>> 3) Rebuild the sato live image by "bitbake poky-image-sato-live".
>> 
>> Sometimes, I meet build error like the following, however it does
>> not happen every time. 
>> 
>> ---
>> Generating solve db for /distro/dongxiao/build-qemux86/tmp/deploy/rpm/qemux86...
>>    total:   1  0.00 MB  1.424841 secs
>>    fingerprint:  1020  0.006796 MB  0.033057 secs
>>    install:   340  0.00 MB  0.371773 secs
>>    dbadd: 340  0.00 MB  0.362746 secs
>>    dbget:2196  0.00 MB  0.004278 secs
>>    dbput: 340  1.504908 MB  0.308950 secs
>>    readhdr:  3401  2.961280 MB  0.005603 secs
>>    hdrload:  1700  4.389932 MB  0.007001 secs
>>    hdrget:  57535  0.00 MB  0.043769 secs
>> Generating solve db for
>> /distro/dongxiao/build-qemux86/tmp/deploy/rpm/i586... 
>> error: open of
>> /distro/dongxiao/build-qemux86/tmp/deploy/rpm/i586/connman-plugin-ethernet-0.65-r4.i586.rpm
>> failed: No such file or directory
>> rpm.real: ./rpmio_internal.h:190: fdGetOPath: Assertion `fd !=
>> ((void *)0) && fd->magic == 0x04463138' failed.
>> /distro/dongxiao/build-qemux86/tmp/work/qemux86-poky-linux/poky-image-minimal-initramfs-1.0-r0/temp/run.do_rootfs.468:
>> line 375:   669 Aborted rpm --dbpath /var/lib/rpm
>> --define='_openall_before_chroot 1' -i --replacepkgs --replacefiles
>> --oldpackage -D "_dbpath $pkgdir/solvedb" --justdb --noaid --nodeps
>> --noorder --noscripts --notriggers --noparentdirs --nolinktos
>> --stats --ignoresize --nosignature --nodigest -D "__dbi_txn create
>> nofsync" $pkgdir/solvedb/manifest ERROR: Function 'do_rootfs' failed
>> (see
>> /distro/dongxiao/build-qemux86/tmp/work/qemux86-poky-linux/poky-image-
>> minimal-initramfs-1.0-r0/temp/log.do_rootfs.468 for further
>> information)
>> ---
>> 
>> The root cause for this issue should be,
>> poky-image-minimal-initramfs's do_rootfs task doesn't have
>> dependency on connman, thus their tasks will be run simultaneously.
>> Poky-image-minimal-initramfs's do_rootfs will call
>> "rootfs_rpm_do_rootfs" --> "package_update_index_rpm", where it will
>> update all the packages depsolver db in ${DEPLOY_DIR_RPM}. 
>> 
>> When the package_update_index_rpm function is handling connman's rpm
>> package, and at the same time, connman is removing old rpm and
>> trying to generate a new one (e.x, from r4 to r5), then the build
>> error will occur, saying that it could not find r4 version of
>> connman-plugin-ethernet...
>> 
>> One choice may be to force poky-image-minimal-initramfs's do_rootfs
>> to depends on all recipe's do_package to ensure correctness, even
>> though it only depends on some basic recipes.  
>> 
>> However I think it is not such elegant.
>> 
>> Do you have ideas on it?
>> 
>> BTW, I will file a bug 867 to track this issue.
>> http://bugzilla.pokylinux.org/show_bug.cgi?id=867
>> 
>> Thanks,
>> Dongxiao


[yocto] Race condition when building a recipe and poky-image-minimal-initramfs

2011-03-14 Thread Xu, Dongxiao
Hi Richard,

These days I found a race condition between building a recipe and 
poky-image-minimal-initramfs. To reproduce it:

1) Run "bitbake poky-image-sato-live" and build the full sato-live image until 
it succeeds.
2) Bump connman's PR (or some other sato recipe's PR).
3) Rebuild the sato live image with "bitbake poky-image-sato-live".

Sometimes I hit a build error like the following; however, it does not happen 
every time.

---
Generating solve db for /distro/dongxiao/build-qemux86/tmp/deploy/rpm/qemux86...
   total:   1  0.00 MB  1.424841 secs
   fingerprint:  1020  0.006796 MB  0.033057 secs
   install:   340  0.00 MB  0.371773 secs
   dbadd: 340  0.00 MB  0.362746 secs
   dbget:2196  0.00 MB  0.004278 secs
   dbput: 340  1.504908 MB  0.308950 secs
   readhdr:  3401  2.961280 MB  0.005603 secs
   hdrload:  1700  4.389932 MB  0.007001 secs
   hdrget:  57535  0.00 MB  0.043769 secs
Generating solve db for /distro/dongxiao/build-qemux86/tmp/deploy/rpm/i586...
error: open of 
/distro/dongxiao/build-qemux86/tmp/deploy/rpm/i586/connman-plugin-ethernet-0.65-r4.i586.rpm
 failed: No such file or directory
rpm.real: ./rpmio_internal.h:190: fdGetOPath: Assertion `fd != ((void *)0) && 
fd->magic == 0x04463138' failed.
/distro/dongxiao/build-qemux86/tmp/work/qemux86-poky-linux/poky-image-minimal-initramfs-1.0-r0/temp/run.do_rootfs.468:
 line 375:   669 Aborted rpm --dbpath /var/lib/rpm 
--define='_openall_before_chroot 1' -i --replacepkgs --replacefiles 
--oldpackage -D "_dbpath $pkgdir/solvedb" --justdb --noaid --nodeps --noorder 
--noscripts --notriggers --noparentdirs --nolinktos --stats --ignoresize 
--nosignature --nodigest -D "__dbi_txn create nofsync" $pkgdir/solvedb/manifest
ERROR: Function 'do_rootfs' failed (see 
/distro/dongxiao/build-qemux86/tmp/work/qemux86-poky-linux/poky-image-minimal-initramfs-1.0-r0/temp/log.do_rootfs.468
 for further information)
---

The root cause should be that poky-image-minimal-initramfs's do_rootfs task 
has no dependency on connman, so their tasks can run simultaneously. 
poky-image-minimal-initramfs's do_rootfs calls "rootfs_rpm_do_rootfs" --> 
"package_update_index_rpm", which updates the depsolver db for all packages in 
${DEPLOY_DIR_RPM}.

When the package_update_index_rpm function is handling connman's rpm package 
and, at the same time, connman is removing the old rpm and generating a new 
one (e.g., going from r4 to r5), the build error occurs, saying that it could 
not find the r4 version of connman-plugin-ethernet...

One option would be to force poky-image-minimal-initramfs's do_rootfs to 
depend on every recipe's do_package to ensure correctness, even though it 
really only depends on a few basic recipes.

However, I don't think that is very elegant.
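A lighter-weight alternative might be to serialize just the index update and 
the per-recipe RPM deployment with a shared task lock. A hypothetical sketch 
using BitBake's lockfiles varflag (the task names and lock placement here are 
assumptions, untested):

```
# Let rootfs construction and per-recipe RPM deployment contend for the same
# lock, so neither can touch ${DEPLOY_DIR_RPM} while the other is using it.
do_package_write_rpm[lockfiles] += "${DEPLOY_DIR_RPM}/rpm.lock"
do_rootfs[lockfiles] += "${DEPLOY_DIR_RPM}/rpm.lock"
```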

Do you have ideas on it?

BTW, I have filed bug 867 to track this issue. 
http://bugzilla.pokylinux.org/show_bug.cgi?id=867

Thanks,
Dongxiao 


[yocto] Could you help to have a look at BUG 660

2011-03-02 Thread Xu, Dongxiao
Hi Richard and Saul,

Could you help to have a review of BUG 660?

http://bugzilla.pokylinux.org/show_bug.cgi?id=660

BUG 660 is about missing kernel firmware in poky-image-minimal.

In most of the machine configuration files in current Poky, including
igep0030.conf, "kernel-modules" and "linux-firmware-sd8686" are part of
"MACHINE_EXTRA_RRECOMMENDS", like:

MACHINE_EXTRA_RRECOMMENDS = " kernel-modules linux-firmware-sd8686"

However, poky-image-minimal only includes
"MACHINE_ESSENTIAL_EXTRA_RRECOMMENDS". If the kernel modules and firmware
files are indeed necessary for a successful boot, we could add them to the
minimal image.

Any comments on that?

Thanks,
Dongxiao 


Re: [yocto] Command line audio

2011-02-14 Thread Xu, Dongxiao
There is no command-line audio player recipe in Yocto currently.

But for playing music from the command line, does "gst-launch" help in your case?
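For a quick kernel-audio check, a pipeline along these lines might work, 
assuming the image ships gst-launch together with plugins providing an MP3 
decoder (mad) and an ALSA sink; the element names and the file path are only 
illustrative:

```
gst-launch filesrc location=/home/root/test.mp3 ! mad ! audioconvert ! alsasink
```

If elements are missing, "gst-inspect" lists what is actually available in the 
image.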

Thanks,
Dongxiao

Chris Tapp wrote:
> Is there a command-line audio player recipe in the Yocto meta?
> 
> I've searched for the usual candidates (mpg123, etc.), but I've not
> found anything. 
> 
> I'm just after something that I can use to see if the kernel
> configuration I have now gives me working audio. 
> 
> Thanks !
> 
> Chris Tapp
> 
> opensou...@keylevel.com
> www.keylevel.com
> 
> 
> 


Re: [yocto] Generation of upgrade statistics table

2011-01-19 Thread Xu, Dongxiao
Wold, Saul wrote:
> On 01/19/2011 06:23 PM, Xu, Dongxiao wrote:
>> Hi Saul,
>> 
>> Could you share with us how you generate your upgrade statistics
>> table? 
>> We want to locally have a try of it and see if there is anything we
>> missed. 
>> 
> Dongziao,
> 
> I use the checkpkg task in distrodata.bbclass, you need to ensure you
> add INHERIT += distrodata to your local.conf to enable it. 

Thanks, we will have a try.
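For reference, the quoted suggestion corresponds to something like this in 
local.conf (quoting added around the value; the exact invocation may differ by 
release):

```
INHERIT += "distrodata"
```

after which the upstream check can be run with something like 
"bitbake -c checkpkg world".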

-- Dongxiao

> 
> Sau!
> 
>> Thanks,
>> Dongxiao
>> 


[yocto] Generation of upgrade statistics table

2011-01-19 Thread Xu, Dongxiao
Hi Saul,

Could you share with us how you generate your upgrade statistics table?
We want to try it locally and see if there is anything we missed.

Thanks,
Dongxiao 



Re: [yocto] Libtool sysroot issue when testing machine specific sysroot

2011-01-11 Thread Xu, Dongxiao
Garman, Scott A wrote:
> On 01/10/2011 07:15 PM, Xu, Dongxiao wrote:
>> Hi Richard,
>> 
>> When testing the machine specific sysroot patchset for atom-pc and
>> emenlow machines, it exposed a libtool issue that, after the built of
>> "Machine A", and then try to build "Machine B" of the same
>> architecture, those "-L" paths generated by libtool still points to
>> the "Machine A" sysroot, which is definitely not correct and may have
>> issues.
> 
> Thanks Donxiao for the info. I will make sure to test this scenario
> before submitting my libtool 2.4 sysroot support to ensure it is
> resolved.  

Hi Scott,

This issue may only happen with the machine-specific sysroot implementation.

Currently, different machines of one architecture share the same sysroot path, 
so the issue is not triggered.

If you want to run the above test, you can base it on my branch:
http://git.pokylinux.org/cgit/cgit.cgi/poky-contrib/log/?h=dxu4/mach_sysroot_v3

Or you can share your branch with me if you think it is mostly stable, and 
then I can test my patchset on top of it. :-)

Thanks,
Dongxiao

> 
> Scott



[yocto] Libtool sysroot issue when testing machine specific sysroot

2011-01-10 Thread Xu, Dongxiao
Hi Richard,

When testing the machine-specific sysroot patchset for the atom-pc and emenlow 
machines, it exposed a libtool issue: after building "Machine A" and then 
building "Machine B" of the same architecture, the "-L" paths generated by 
libtool still point to the "Machine A" sysroot, which is definitely incorrect 
and may cause issues.

One example follows. After building emenlow and then building atom-pc, the 
sysroot points to the atom-pc paths, but some "-L" paths still point to 
emenlow directories.

/distro/dongxiao/build-core2/tmp/sysroots/x86_64-linux/usr/bin/core2-poky-linux/i586-poky-linux-gcc
 -m32 -march=core2 -msse3 -mtune=generic -mfpmath=sse 
--sysroot=/distro/dongxiao/build-core2/tmp/sysroots/atom-pc -DHAVE_DIX_CONFIG_H 
-Wall -Wpointer-arith -Wstrict-prototypes -Wmissing-prototypes 
-Wmissing-declarations -Wnested-externs -fno-strict-aliasing 
-Wbad-function-cast -Wformat=2 -Wold-style-definition 
-Wdeclaration-after-statement -D_BSD_SOURCE -DHAS_FCHOWN -DHAS_STICKY_DIR_BIT 
-I/distro/dongxiao/build-core2/tmp/sysroots/atom-pc/usr/include/pixman-1 
-I/distro/dongxiao/build-core2/tmp/sysroots/atom-pc/usr/include/freetype2 
-I../../include -I../../include -I../../Xext -I../../composite 
-I../../damageext -I../../xfixes -I../../Xi -I../../mi -I../../miext/shadow 
-I../../miext/damage -I../../render -I../../randr -I../../fb 
-fvisibility=hidden -DHAVE_XORG_CONFIG_H -fvisibility=hidden -DXF86PM 
-fexpensive-optimizations -fomit-frame-pointer -frename-registers -O2 -ggdb 
-feliminate-unused-debug-types -Wl,-O1 -Wl,--as-needed -o .libs/Xorg xorg.o 
-Wl,--export-dynamic  ../../dix/.libs/libmain.a ./.libs/libxorg.a 
-L/distro/dongxiao/build-core2/tmp/sysroots/emenlow/usr/lib 
/distro/dongxiao/build-core2/tmp/sysroots/atom-pc/usr/lib/libudev.so 
/distro/dongxiao/build-core2/tmp/sysroots/atom-pc/usr/lib/libgcrypt.so 
/distro/dongxiao/build-core2/tmp/sysroots/emenlow/usr/lib/libgpg-error.so -ldl 
/distro/dongxiao/build-core2/tmp/sysroots/atom-pc/usr/lib/libpciaccess.so 
-lpthread 
/distro/dongxiao/build-core2/tmp/sysroots/atom-pc/usr/lib/libpixman-1.so 
/distro/dongxiao/build-core2/tmp/sysroots/atom-pc/usr/lib/libXfont.so 
/distro/dongxiao/build-core2/tmp/sysroots/emenlow/usr/lib/libfreetype.so 
/distro/dongxiao/build-core2/tmp/sysroots/emenlow/usr/lib/libfontenc.so 
/distro/dongxiao/build-core2/tmp/sysroots/emenlow/usr/lib/libz.so 
/distro/dongxiao/build-core2/tmp/sysroots/atom-pc/usr/lib/libXau.so 
/distro/dongxiao/build-core2/tmp/sysroots/atom-pc/usr/lib/libXdmcp.so 
 -lm


This problem shows up when building atom-pc and emenlow because some of the 
libraries they use are not the same.

The issue should also exist for qemuppc and mpc8315e-rdb builds, but it wasn't 
exposed during my previous testing, maybe because those files are identical 
between the two machines.

I saw we have a plan to add libtool sysroot support and remove its workaround 
(http://bugzilla.pokylinux.org/show_bug.cgi?id=353); I think this should help 
solve the issue.

CC'ing Scott G., who owns this libtool enhancement.

Thanks,
Dongxiao 


Re: [yocto] M3 Package Update List

2011-01-03 Thread Xu, Dongxiao
Hi Saul,

Here is an update on the recipes in the list under my name.

I upgraded most of the recipes except:

1) Upstream check errors (they are actually up to date):
speex, net-tools, libid3tag, libgsmd

2) Git development tree:
mtd-utils

3) Dependencies on other recipes:
clutter-gst (depends on the latest version of clutter)

Thanks,
Dongxiao

Saul Wold wrote:
> Folks,
> 
> Please find attached the list of M3 Recipe Updates, our goal for 1.0
> M3 release is to complete the update process we started in 0.9 and
> 1.0 M2.  
> There are about 150 recipes in this list, some people may have more
> and if there is a need to request help, please let me know.  If you
> plan on helping, please let the current owner know that you will be
> updating a recipe.   
> 
> Unless I hear otherwise, I will work to update the "None" maintained
> list, but I believe some of these are owned by people and the distro
> tracking information needs to be updated.  
> 
> Thanks again to the team for the hard work during 0.9 and M2, we will
> always have updates to do, but they will get easier with our gained
> experience and expertise.  
> 
> Happy Holidays and Happy New Year to all.



[yocto] A question about libtool-cross

2010-12-23 Thread Xu, Dongxiao
Hi Richard,

When looking at the libtool-cross recipe, I found that it doesn't inherit 
cross.bbclass, so its files are populated into the target sysroot. Is this on 
purpose, or is the inherit missing?

BTW, the other cross recipes, like gcc-cross and binutils-cross, all inherit 
cross.bbclass.

Thanks,
Dongxiao 


[yocto] Some simple tests about pseudo performance

2010-12-09 Thread Xu, Dongxiao
Hi,

I did some simple tests of pseudo performance.

I wrote a simple program that repeatedly calls fopen, fflush, and fclose, 
which should be sensitive to pseudo/fakeroot since they trap these calls.

I ran the program natively, under fakeroot, and under pseudo.

#include <stdio.h>

int main(void)
{
        FILE *fp;
        int i;

        for (i = 0; i < 100; i++) {
                fp = fopen("/tmp/12321.txt", "w");
                fflush(fp);
                fclose(fp);
        }
        return 0;
}

Test results:

Native:   2.729 secs
Fakeroot: 2.752 secs
Pseudo:   51.814 secs

Pseudo takes roughly 20 times as long as native and fakeroot.

I profiled the program while it was running. From the following table we can 
see that a lot of cycles are spent in sqlite3 operations...

I am wondering whether this can be optimized. For example, would it be 
workable to cache those database operations in memory and only flush them to 
disk when pseudo exits?

This is just a first thought; any suggestions and comments are welcome.

Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask 
of 0x00 (No unit mask) count 10
samples  %image name   app name symbol name
693127   58.2980  no-vmlinux   no-vmlinux   /no-vmlinux
211108   17.7560  libsqlite3.so.0.8.6  libsqlite3.so.0.8.6  
/usr/lib/libsqlite3.so.0.8.6
1122699.4428  libc-2.10.1.so   libc-2.10.1.so   
/lib/tls/i686/cmov/libc-2.10.1.so
16792 1.4124  [vdso] (tgid:28674 range:0xfd-0xfd1000) a.out 
   [vdso] (tgid:28674 range:0xfd-0xfd1000)
15256 1.2832  libpthread-2.10.1.so libpthread-2.10.1.so 
pthread_mutex_lock
13163 1.1071  libpthread-2.10.1.so libpthread-2.10.1.so 
__pthread_mutex_unlock_usercnt
7386  0.6212  libpseudo.so libpseudo.so 
pseudo_client_op
6753  0.5680  libpseudo.so libpseudo.so 
pseudo_debug_real
6680  0.5618  pseudo   pseudo   
pseudo_server_start
6663  0.5604  [vdso] (tgid:28676 range:0x8d6000-0x8d7000) pseudo
   [vdso] (tgid:28676 range:0x8d6000-0x8d7000)
5049  0.4247  pseudo   pseudo   pseudo_op
4906  0.4126  [vdso] (tgid:28467 range:0xd32000-0xd33000) a.out 
   [vdso] (tgid:28467 range:0xd32000-0xd33000)
4427  0.3723  [vdso] (tgid:28607 range:0x5c-0x5c1000) a.out 
   [vdso] (tgid:28607 range:0x5c-0x5c1000)
4391  0.3693  [vdso] (tgid:29027 range:0x6d-0x6d1000) a.out 
   [vdso] (tgid:29027 range:0x6d-0x6d1000)
4188  0.3522  [vdso] (tgid:29172 range:0x41a000-0x41b000) a.out 
   [vdso] (tgid:29172 range:0x41a000-0x41b000)
3584  0.3014  [vdso] (tgid:2411 range:0x892000-0x893000) a.out  
  [vdso] (tgid:2411 range:0x892000-0x893000)
2985  0.2511  pseudo   pseudo   
pseudo_debug_real
2921  0.2457  postgres postgres 
/usr/lib/postgresql/8.3/bin/postgres
2905  0.2443  vim.basicvim.basic
/usr/bin/vim.basic
2449  0.2060  bash bash /bin/bash
2439  0.2051  libpseudo.so libpseudo.so fopen
2163  0.1819  [vdso] (tgid:28609 range:0x3da000-0x3db000) pseudo
   [vdso] (tgid:28609 range:0x3da000-0x3db000)
2096  0.1763  [vdso] (tgid:29029 range:0xfef000-0xff) pseudo
   [vdso] (tgid:29029 range:0xfef000-0xff)
2086  0.1755  libpseudo.so libpseudo.so 
pseudo_append_element
1917  0.1612  [vdso] (tgid:28469 range:0xe72000-0xe73000) pseudo
   [vdso] (tgid:28469 range:0xe72000-0xe73000)
1835  0.1543  libpseudo.so libpseudo.so wrap_fopen
1833  0.1542  libpseudo.so libpseudo.so 
pseudo_msg_receive
1819  0.1530  libpseudo.so libpseudo.so __lxstat64
1762  0.1482  libpseudo.so libpseudo.so 
__i686.get_pc_thunk.bx

Thanks,
Dongxiao 


[yocto] FW: [oe] BitBake parallel parsing testing

2010-11-24 Thread Xu, Dongxiao
FYI, bitbake parallel parsing is now enabled, which can greatly reduce the 
file parsing time. There are some test results in the OE mail quoted below:

Bitbake 1.10: 3m2.185s
parallel-parsing (BB_NUM_PARSE_THREADS=num_cpu): 1m48.232s
parallel-parsing (BB_NUM_PARSE_THREADS=2*num_cpu): 1m1.869s
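The variable above goes in local.conf; for example, on an 8-CPU machine the 
2*num_cpu case would look like this (the value is illustrative):

```
BB_NUM_PARSE_THREADS = "16"
```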

Thanks,
Dongxiao

Chris Larson wrote:
> Greetings all,
> This is a request for further testers for the parallel parsing branch
> of bitbake, as well as for review of the code.  I'm fairly certain
> that the last real issue with it was resolved yesterday, and I
> haven't gotten any reports of problems from those testing it since
> last week, so I'm opening it up to more public testing, so we can be
> absolutely certain that we can merge it to master in the near future
> without risking problems.  
> 
> For code review:
> Either follow the below steps, then git log master..parallel-parsing
> or git diff master..parallel-parsing, or use the web interface at
> https://github.com/kergoth/bitbake/commits/parallel-parsing.  
> 
> For testing from scratch:
> 
> git clone git://github.com/kergoth/bitbake cd bitbake git checkout -b
> parallel-parsing origin/parallel-parsing 
> 
> From an existing bitbake repository:
> 
> git remote add kergoth git://github.com/kergoth/bitbake git remote
> update git checkout -b parallel-parsing kergoth/parallel-parsing 
> 
> Thanks for your time, and do let me know what you think, either of
> the code, performance in general, or the new progress bar (shows an
> ETA), or anything else :)  



Re: [yocto] [PULL] multimedia upgrades and disk optimization, Dongxiao Xu, 2010/11/12

2010-11-12 Thread Xu, Dongxiao
Saul Wold wrote:
> On 11/12/2010 03:24 AM, Xu, Dongxiao wrote:
>> Hi Saul,
>> 
>> This pull request contains some gstreamer recipe upgrades and disk
>> space optimization, please help to review and pull. 
>> 
> Dongxiao,
> 
> Will you have a distro tracking pull request also?

Hi Saul,

Here is the pull request for distro tracking fields.

 meta/conf/distro/include/distro_tracking_fields.inc |   26 ++--
 1 file changed, 13 insertions(+), 13 deletions(-)

Dongxiao Xu (1):
  distro_tracking: Update distro tracking for gstreamer and gst-* recipes

Pull URL: http://git.pokylinux.org/cgit.cgi/poky-contrib/log/?h=dxu4/distro

> 
>> Thanks,
>> Dongxiao
>> 
>>   meta/classes/sstate.bbclass
> Minor nit, traditional the options go between the command and the
> file list, so I fixed this to be "rm -rf ${SSTATE_BUILDDIR}". 

Yes, you are right; thanks for the fix.

Thanks,
Dongxiao

> 
> Sau!
>   |3
>>   meta/recipes-multimedia/gstreamer/gst-plugins-bad_0.10.19.bb  
>>   |   24
>>   meta/recipes-multimedia/gstreamer/gst-plugins-bad_0.10.20.bb  
>>   |   24
>>   meta/recipes-multimedia/gstreamer/gst-plugins-base_0.10.29.bb 
>>   |   22
>>   meta/recipes-multimedia/gstreamer/gst-plugins-base_0.10.30.bb 
>>   |   22
>>   meta/recipes-multimedia/gstreamer/gst-plugins-good_0.10.23.bb 
>>   |   19
>>   meta/recipes-multimedia/gstreamer/gst-plugins-good_0.10.25.bb 
>>   |   19
>>   meta/recipes-multimedia/gstreamer/gst-plugins-ugly_0.10.15.bb 
>>   |   19
>>   meta/recipes-multimedia/gstreamer/gst-plugins-ugly_0.10.16.bb 
>>   |   19
>>  
>>  
>>  
>> meta/recipes-multimedia/gstreamer/gstreamer-0.10.29/check_fix.patch 
>> |   17
>> meta/recipes-multimedia/gstreamer/gstreamer-0.10.29/gst-inspect-check-error.patch
>> |   14
>> meta/recipes-multimedia/gstreamer/gstreamer-0.10.29/gstregistrybinary.c
>> |  487 --
>> meta/recipes-multimedia/gstreamer/gstreamer-0.10.29/gstregistrybinary.h
>> |  194 ---
>> meta/recipes-multimedia/gstreamer/gstreamer-0.10.30/check_fix.patch 
>> |   17
>> meta/recipes-multimedia/gstreamer/gstreamer-0.10.30/gst-inspect-check-error.patch
>> |   14
>> meta/recipes-multimedia/gstreamer/gstreamer-0.10.30/gstregistrybinary.c
>> |  487 ++
>> meta/recipes-multimedia/gstreamer/gstreamer-0.10.30/gstregistrybinary.h
>> |  194 +++ meta/recipes-multimedia/gstreamer/gstreamer_0.10.29.bb   
>> |   30 meta/recipes-multimedia/gstreamer/gstreamer_0.10.30.bb   
>> |   30 19 files changed, 829 insertions(+), 826 deletions(-)
>> 
>> Dongxiao Xu (6):
>>sstate.bbclass: Remove the temp sstate-build-* directories in
>>WORKDIR gstreamer: Upgrade to version 0.10.30
>>gst-plugins-base: Upgraded to version 0.10.30
>>gst-plugins-good: Upgraded to version 0.10.25
>>gst-plugins-bad: Upgraded to version 0.10.20
>>gst-plugins-ugly: Upgraded to version 0.10.16
>> 
>> Pull URL:
>> http://git.pokylinux.org/cgit.cgi/poky-contrib/log/?h=dxu4/distro


[yocto] [PULL] multimedia upgrades and disk optimization, Dongxiao Xu, 2010/11/12

2010-11-12 Thread Xu, Dongxiao
Hi Saul,

This pull request contains some gstreamer recipe upgrades and disk space 
optimization, please help to review and pull.

Thanks,
Dongxiao 

 meta/classes/sstate.bbclass
   |3
 meta/recipes-multimedia/gstreamer/gst-plugins-bad_0.10.19.bb   
   |   24
 meta/recipes-multimedia/gstreamer/gst-plugins-bad_0.10.20.bb   
   |   24
 meta/recipes-multimedia/gstreamer/gst-plugins-base_0.10.29.bb  
   |   22
 meta/recipes-multimedia/gstreamer/gst-plugins-base_0.10.30.bb  
   |   22
 meta/recipes-multimedia/gstreamer/gst-plugins-good_0.10.23.bb  
   |   19
 meta/recipes-multimedia/gstreamer/gst-plugins-good_0.10.25.bb  
   |   19
 meta/recipes-multimedia/gstreamer/gst-plugins-ugly_0.10.15.bb  
   |   19
 meta/recipes-multimedia/gstreamer/gst-plugins-ugly_0.10.16.bb  
   |   19
 meta/recipes-multimedia/gstreamer/gstreamer-0.10.29/check_fix.patch
   |   17
 
meta/recipes-multimedia/gstreamer/gstreamer-0.10.29/gst-inspect-check-error.patch
 |   14
 meta/recipes-multimedia/gstreamer/gstreamer-0.10.29/gstregistrybinary.c
   |  487 --
 meta/recipes-multimedia/gstreamer/gstreamer-0.10.29/gstregistrybinary.h
   |  194 ---
 meta/recipes-multimedia/gstreamer/gstreamer-0.10.30/check_fix.patch
   |   17
 
meta/recipes-multimedia/gstreamer/gstreamer-0.10.30/gst-inspect-check-error.patch
 |   14
 meta/recipes-multimedia/gstreamer/gstreamer-0.10.30/gstregistrybinary.c
   |  487 ++
 meta/recipes-multimedia/gstreamer/gstreamer-0.10.30/gstregistrybinary.h
   |  194 +++
 meta/recipes-multimedia/gstreamer/gstreamer_0.10.29.bb 
   |   30
 meta/recipes-multimedia/gstreamer/gstreamer_0.10.30.bb 
   |   30
 19 files changed, 829 insertions(+), 826 deletions(-)

Dongxiao Xu (6):
  sstate.bbclass: Remove the temp sstate-build-* directories in WORKDIR
  gstreamer: Upgrade to version 0.10.30
  gst-plugins-base: Upgraded to version 0.10.30
  gst-plugins-good: Upgraded to version 0.10.25
  gst-plugins-bad: Upgraded to version 0.10.20
  gst-plugins-ugly: Upgraded to version 0.10.16

Pull URL: http://git.pokylinux.org/cgit.cgi/poky-contrib/log/?h=dxu4/distro


[yocto] Some investigation on disk space occupation for Yocto Linux

2010-11-09 Thread Xu, Dongxiao
Hi, Richard,

I just did a quick investigation of disk space usage for Yocto Linux.
Here are some findings and thoughts.

I built poky-image-minimal with both the Poky Green release and the Yocto 0.9 
release, and within each build directory I measured the tmp dir size:

Tmp dir size:
Green: 7.4G
Yocto-0.9: 27G


The "work" dir occupies most of the space (~90%) in both releases.
Here are the details of the "work" dir.

Green:
1.5M    all-poky-linux
5.2G    i586-poky-linux
675M    i686-linux
1.1G    qemux86-poky-linux

Yocto-0.9:
1.8M    all-poky-linux
16G     i586-poky-linux
4.8G    qemux86-poky-linux
3.5G    x86_64-linux

For the "i586-poky-linux" directory, Green has 34 subdirectories while 
Yocto-0.9 has 64, roughly double Green's number. However, the size is triple.

This looks like a problem.

Take one package directory as an example: ncurses-5.4-r14.

Green:
Total: 112M
12M     image
36M     ncurses-5.4
16M     package
16M     packages-split
18M     staging-pkg
15M     sysroot-destdir
1.1M    temp

Yocto-0.9:
Total: 167M
13M     image
36M     ncurses-5.4
17M     package
17M     packages-split
16M     sysroot-destdir
1.4M    temp
2.5M    deploy-ipks
2.6M    deploy-rpms
4.0K    ncurses.requires
116K    ncurses.spec
40K     pkgdata
12M     pseudo
12K     shlibs
2.5M    sstate-build-deploy-ipk
2.6M    sstate-build-deploy-rpm
33M     sstate-build-package
16M     sstate-build-populate-sysroot

We can see that in Yocto-0.9 the size is 50% larger than in the Green release.

Some directories in the package workdir are new in Yocto-0.9, such as pseudo 
and the sstate-build-* directories. From a quick glance at sstate.bbclass, the 
current logic seems to first copy directories (deploy-rpms, deploy-ipks, 
package, packages-split, sysroot, etc.) into sstate-build-*, and then archive 
them into the sstate-cache directory.

So my first thoughts on optimization are:
1) Can we remove the sstate-build-* directories after the archive is done?
2) Or is it possible to skip the copy and archive directly from those 
directories (deploy-rpms, deploy-ipks, package, packages-split, sysroot, etc.) 
into sstate-cache?
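Option 1 could be as simple as deleting the staging copy once the archive has 
landed in sstate-cache; a hypothetical sketch against sstate.bbclass (the 
function name and hook point are assumptions):

```
# Run after the sstate archive has been written; the staged copy under
# ${WORKDIR} is no longer needed.
sstate_staging_cleanup () {
    rm -rf ${WORKDIR}/sstate-build-*
}
```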

Thanks,
Dongxiao 