Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Ian Wienand
On 01/13/2018 05:01 AM, Jeremy Stanley wrote:
> On 2018-01-12 17:54:20 +0100 (+0100), Marcin Juszkiewicz wrote:
> [...]
>> UEFI expects GPT and DIB is completely not prepared for it. I made
>> block-layout-arm64.yaml file and got it used just to see "sorry,
>> mbr expected" message.
> 
> I concur. It looks like the DIB team would welcome work toward GPT
> support based on the label entry at
> https://docs.openstack.org/diskimage-builder/latest/user_guide/building_an_image.html#module-partitioning
> and I find https://bugzilla.redhat.com/show_bug.cgi?id=1488557
> suggesting there's probably also interest within Red Hat for it as
> well.

Yes, it would be welcome.  So far it's been a bit of a "nice to have"
which has kept it low priority, but a concrete user could help our
focus here.

>> You have whole Python class to create MBR bit by bit when few
>> calls to 'sfdisk/gdisk' shell commands do the same.
> 
> Well, the comments at
> http://git.openstack.org/cgit/openstack/diskimage-builder/tree/diskimage_builder/block_device/level1/mbr.py?id=5d5fa06#n28
> make some attempt at explaining why it doesn't just do that instead
> (at least as of ~7 months ago?).

I agree with the broad sentiment: writing a binary-level GPT
implementation is out of scope for dib (and the existing MBR one is,
with hindsight, something I would have pushed back on harder).

dib-block-device being in Python is a double-edged sword -- on the one
hand it's harder to drop in a few lines like in shell, but on the
other hand it has proper data structures, unit testing, logging and
config-reading abilities -- things that are all rather ugly, or get
lost, in shell.  The code is not perfect, but doing more things like
[1,2] to enhance it and make better use of libraries will help
everyone (and notice that that makes it easier to translate directly
to parted -- no coincidence :)

The GPL linkage issue, as described in the code, prevents us from
doing the obvious thing and calling libparted directly from Python.
But I believe we will be OK just making system() calls to parted to
configure GPT, especially given the clearly modular nature of it all.

In terms of implementation, since you've already looked: I think
essentially diskimage_builder/block_device/level1.py's create() will
need some moderate refactoring to call a GPT implementation in
response to a gpt label, which could translate self.partitions into a
format for calling parted via our existing exec_sudo.
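
To make that concrete, here is a (pseudo)shell sketch of what such a
translation might emit for a gpt label -- the partition names and
sizes are purely illustrative, not a proposed default layout, and the
real code would run these through exec_sudo:

```
parted -s $IMAGE mklabel gpt
parted -s $IMAGE mkpart ESP fat32 1MiB 513MiB
parted -s $IMAGE set 1 boot on      # on GPT, parted's boot flag marks the ESP
parted -s $IMAGE mkpart root ext4 513MiB 100%
```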

This is highly amenable to a test-driven development approach, as we
have some pretty good existing unit tests for various parts of the
partitioning to template from (for example, tests/test_lvm.py).  So
bringing up a sample config and test, then working backwards from
what calls we expect to see, is probably a great way to start.  Even
if you just want to provide some (pseudo)shell examples based on your
experience, and any thoughts on the yaml config files, it would be
helpful.
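
As a strawman for the config side (entirely hypothetical -- the key
names just mirror the shape of the existing MBR examples in the
docs), something like:

```
- local_loop:
    name: image0
- partitioning:
    base: image0
    label: gpt
    partitions:
      - name: ESP
        flags: [ boot ]
        size: 512MiB
      - name: root
        size: 100%
```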

--

I try to run the meetings described in [3] if there is anything on the
agenda.  The cadence is probably not appropriate for this; we can do
much better via mail here, or in #openstack-dib on IRC.  I hope we can
collaborate in a positive way; as I mentioned, I think as a first step
we'd be best working backwards from what we expect to see in terms of
configuration, partition layout and parted calls.

Thanks,

-i

[1] https://review.openstack.org/#/c/503574/
[2] https://review.openstack.org/#/c/503572/
[3] https://wiki.openstack.org/wiki/Meetings/diskimage-builder

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Clark Boylan
On Fri, Jan 12, 2018, at 3:27 PM, Dan Radez wrote:
> fwiw
> We've been building arm images for tripleo and posting them.
> https://images.rdoproject.org/aarch64/pike/delorean/current-tripleo-rdo/
> 
> 
> This uses delorean and overcloud build:
> 
>     DIB_YUM_REPO_CONF+="/etc/yum.repos.d/delorean-deps-${OSVER}.repo
> /etc/yum.repos.d/delorean-${OSVER}.repo /etc/yum.repos.d/ceph.repo
> /etc/yum.repos.d/epel.repo /etc/yum.repos.d/radez.fedorapeople.repo" \
>     openstack --debug overcloud image build \
>     --config-file overcloud-aarch64.yaml \
>     --config-file
> /usr/share/openstack-tripleo-common/image-yaml/overcloud-images.yaml \
>     --config-file
> /usr/share/openstack-tripleo-common/image-yaml/overcloud-images-centos7.yaml
>     # --config-file overcloud-images.yaml --config-file
> overcloud-images-centos7.yaml --config-file aarch64-gumpf.yaml --image-name
>     #openstack --debug overcloud image build --type overcloud-full
> --node-arch aarch64
> 
> It's not quite an orthodox RDO build, There are still a few things in
> place that work around arm related packaging discrepancies or x86
> related configs. But we get good builds from it.
> 
> I don't know the details of what overcloud build does to the dib builds,
> Though I don't believe these are whole disk images. I think the
> overcloud and undercloud are root partition images and the kernel an
> initrd are composed into the disk for the overcloud by OOO and we direct
> boot them to launch a undercloud VM.
> 
> Happy to share details if anyone wants more.
> 
> Radez

Looking into this a bit more, `openstack overcloud image build` takes in the 
yaml config files you list and converts that into a forked diskimage-builder 
process to build an image. The centos7 dib element in particular seems to have 
aarch64 support via building on top of the upstream centos7 aarch64 image. We 
do use the centos-minimal element for our images though, as it allows us to do 
things like install glean. Chances are we still need to sort out UEFI and 
GPT for general dib use.

Just to be sure there isn't any other magic going on, can you provide the 
contents of overcloud-aarch64.yaml or point to where it can be found? It 
doesn't appear to be in tripleo-common with the other configs.

It is good to know that this is working in some cases though.

Clark


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Dan Radez
fwiw
We've been building arm images for tripleo and posting them.
https://images.rdoproject.org/aarch64/pike/delorean/current-tripleo-rdo/


This uses delorean and overcloud build:

    DIB_YUM_REPO_CONF+="/etc/yum.repos.d/delorean-deps-${OSVER}.repo
/etc/yum.repos.d/delorean-${OSVER}.repo /etc/yum.repos.d/ceph.repo
/etc/yum.repos.d/epel.repo /etc/yum.repos.d/radez.fedorapeople.repo" \
    openstack --debug overcloud image build \
    --config-file overcloud-aarch64.yaml \
    --config-file
/usr/share/openstack-tripleo-common/image-yaml/overcloud-images.yaml \
    --config-file
/usr/share/openstack-tripleo-common/image-yaml/overcloud-images-centos7.yaml
    # --config-file overcloud-images.yaml --config-file
overcloud-images-centos7.yaml --config-file aarch64-gumpf.yaml --image-name
    #openstack --debug overcloud image build --type overcloud-full
--node-arch aarch64

It's not quite an orthodox RDO build; there are still a few things in
place that work around arm-related packaging discrepancies or
x86-related configs. But we get good builds from it.

I don't know the details of what overcloud build does to the dib
builds, though I don't believe these are whole-disk images. I think
the overcloud and undercloud are root-partition images, and the
kernel and initrd are composed into the disk for the overcloud by
OOO; we direct-boot them to launch an undercloud VM.

Happy to share details if anyone wants more.

Radez



On 01/12/2018 09:59 AM, Jeremy Stanley wrote:
> On 2018-01-12 11:17:33 +0100 (+0100), Marcin Juszkiewicz wrote:
> [...]
>> I am aware that you like to build disk images on your own but have
>> you considered using virt-install with generated preseed/kickstart
>> files? It would move several arch related things (like bootloader)
>> to be handled by distribution rules instead of handling them again
>> in code.
> [...]
>
> We pre-generate and upload images via Glance because it allows us to
> upload the same image to all providers (modulo processor
> architecture in this case obviously). Once we have more than one
> arm64 deployment to integrate, being able to know that we're
> uploading identical images to all of them will be useful from a
> consistency standpoint. Honestly, getting EFI bits into DIB is
> probably no harder than writing a new nodepool builder backend to do
> remote virt-install, and would be of use to a lot more people when
> implemented.
>
> If you look in
> http://git.openstack.org/cgit/openstack/diskimage-builder/tree/diskimage_builder/elements/bootloader/finalise.d/50-bootloader
> there's support for setting up ppc64 PReP boot partitions... I don't
> expect getting correct EFI partition creation integrated would be
> much tougher? That said, it's something the DIB maintainers will
> want to weigh in on obviously.
>
>


[OpenStack-Infra] Merging feature/zuulv3 into master in Zuul and Nodepool repos

2018-01-12 Thread Clark Boylan
Hello,

I think we are very close to being ready to merge the zuulv3 feature branch 
into master in both the Zuul and Nodepool repos. In particular we merged 
https://review.openstack.org/#/c/523951/ which should prevent breakage for 
anyone using that deployment method (single_node_ci) for an all-in-one CI suite.

One thing I've noticed is that we don't have this same handling in the 
lower-level individual service manifests. For us I don't think that is a major 
issue; we'll just pin our builders to the nodepool 0.5.0 tag, do the merge, 
then update our configs and switch back to master. But do we have any idea 
whether it is common for third-party CIs to bypass single_node_ci and 
construct their own like we do?

As for the actual merging itself, a quick test locally using `git merge -s 
recursive -X theirs feature/zuulv3` on the master branch of nodepool appears to 
work. I had to delete the files that the feature branch deleted by hand, but 
otherwise the merge is automated. The resulting tree also passes `tox -e 
pep8` and `tox -e py36` testing.
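
For anyone who wants to convince themselves what `-X theirs` does
with a conflicting hunk, here is a small self-contained demo (the
repo and file names are hypothetical, obviously not the real nodepool
tree):

```shell
# Build a throwaway repo where master and feature/zuulv3 both edit the
# same line, then merge with the strategy option described above.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email dev@example.org
git config user.name dev
echo base > conflict.txt
git add conflict.txt
git commit -q -m base
git branch -M master        # normalize the branch name across git versions
git checkout -q -b feature/zuulv3
echo feature > conflict.txt
git commit -qam feature
git checkout -q master
echo master > conflict.txt
git commit -qam master
# "-X theirs" resolves the conflicting hunk in favor of feature/zuulv3,
# so no manual conflict resolution is needed.
git merge -q -s recursive -X theirs --no-edit feature/zuulv3
cat conflict.txt            # the feature branch side won
```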

We will probably want a soft freeze of both Zuul and Nodepool, then do our best 
to get both merged together so that we don't have to remember which project has 
merged and which hasn't. Once that is done we will need to repropose any open 
changes on the feature branch to the master branch, abandon the changes on the 
feature branch, then delete the feature branch. Might it be a good idea to 
merge as many feature branch changes as possible beforehand?

Am I missing anything?

Thank you,
Clark


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Jeremy Stanley
On 2018-01-12 17:54:20 +0100 (+0100), Marcin Juszkiewicz wrote:
[...]
> UEFI expects GPT and DIB is completely not prepared for it. I made
> block-layout-arm64.yaml file and got it used just to see "sorry,
> mbr expected" message.

I concur. It looks like the DIB team would welcome work toward GPT
support based on the label entry at
https://docs.openstack.org/diskimage-builder/latest/user_guide/building_an_image.html#module-partitioning
and I find https://bugzilla.redhat.com/show_bug.cgi?id=1488557
suggesting there's probably also interest within Red Hat for it as
well.

> You have whole Python class to create MBR bit by bit when few
> calls to 'sfdisk/gdisk' shell commands do the same.

Well, the comments at
http://git.openstack.org/cgit/openstack/diskimage-builder/tree/diskimage_builder/block_device/level1/mbr.py?id=5d5fa06#n28
make some attempt at explaining why it doesn't just do that instead
(at least as of ~7 months ago?). Per the subsequent discussion in
#openstack-dib, I don't know whether there is also work underway to
solve the identified deficiencies in sfdisk and gparted, but those
more directly involved in DIB development may have answers when
they're around (which they may not be at this point in the weekend).
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Marcin Juszkiewicz
On 12.01.2018 at 16:54, Jeremy Stanley wrote:
> On 2018-01-12 16:06:03 +0100 (+0100), Marcin Juszkiewicz wrote:
>> Or someone will try to target q35/uefi emulation instead of i440fx 
>> one on x86 alone.
> 
> I'm curious why we'd need emulation there...

Developers around x86 virtualisation live in a world where a VM is like
a PC from the 90s (the i440fx qemu model). You boot a BIOS which reads
the bootloader from the first sector of your storage, you have 32 PCI
slots with hotplug, etc. All you need is a VM plus a disk image with
one partition using MBR partitioning.

If you want something which resembles Arm64 (but still is x86) then
you switch to the Q35 qemu model and enable UEFI as the bootloader.
And all your existing disk images with one partition (which worked
with the previous model) are useless. Your hotplug options are limited
to the number of PCIe root ports defined in the VM (usually 2-3). All
your disk images need to be converted to GPT partitioning, and you
need an ESP (EFI System Partition) with an EFI bootloader stored on it.

But (nearly) no one in the x86 world goes for the q35 model, mostly
because it requires more work and because users will ask why they
cannot add a 6th storage device or an 11th network card. And in the
arm64 world we do not have such luck.

That's why I mentioned q35.
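
For completeness, booting such a GPT/ESP image under Q35 looks
roughly like this (pseudo-shell; the OVMF firmware path varies per
distribution, and the image name is made up):

```
qemu-system-x86_64 \
    -machine q35 \
    -bios /usr/share/ovmf/OVMF.fd \
    -drive file=image-gpt.qcow2,if=virtio
```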

>> If I disable installing grub I can build useless one partition
>> disk image on arm64. Nothing will boot it.
> 
> See my other reply on this thread with a link to the bootloader 
> element. It seems like it's got support implemented for 
> multi-partition images needed by 64-bit PowerPC systems, so not 
> terribly dissimilar.

The firmware used by 64-bit Power systems accepts MBR-partitioned storage.

UEFI expects GPT, and DIB is completely unprepared for it. I made a
block-layout-arm64.yaml file and got it used, just to see a "sorry,
mbr expected" message. You have a whole Python class to create an MBR
bit by bit when a few calls to the sfdisk/gdisk shell commands do the
same.



Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Jeremy Stanley
On 2018-01-12 16:06:03 +0100 (+0100), Marcin Juszkiewicz wrote:
> Or someone will try to target q35/uefi emulation instead of i440fx
> one on x86 alone.

I'm curious why we'd need emulation there... the expectation is that
DIB is running on a native 64-bit ARM system (under a hypervisor,
but still not cross-architecture). The reason we'll be deploying a
Nodepool builder server in the environment is so that we don't need
to worry about cross-building an arm64 rootfs and boot partition.

> I am tired of yet another disk image building projects.
> All think they are special, all have same assumptions. btdt.

While I can't disagree with regard to the proliferation of disk
image builders, this one has existed since July of 2012 and sees
extensive use throughout OpenStack. At the time it came into being,
there weren't a lot of good options for cross-distro orchestration
of image builds (e.g., one tool which could build Debian images on
CentOS and CentOS images on Debian).

> If I disable installing grub I can build useless one partition disk
> image on arm64. Nothing will boot it.

See my other reply on this thread with a link to the bootloader
element. It seems like it's got support implemented for
multi-partition images needed by 64-bit PowerPC systems, so not
terribly dissimilar.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Gema Gomez


On 12/01/18 00:28, Clark Boylan wrote:
> On Wed, Jan 10, 2018, at 1:41 AM, Gema Gomez wrote:
>> Hi all,
>>
>> Linaro would like to add a new cloud to infra so that we can run tests
>> on ARM64 going forward. This discussion has been ongoing for the good
>> part of a year, apologies that it took us so long to get to a point
>> where we feel comfortable going ahead in terms of stability of
>> infrastructure and functionality.
>>
>> My team has been taking care of making OpenStack as multiarch as
>> possible and making the experience of using an ARM64 cloud as close to
>> using a traditional amd64 one as possible. We have the Linaro Developer
>> Cloud program, which consists of a set of clouds that run on ARM64
>> hardware donated by the Linaro Enterprise Group members[1] and dedicated
>> to enablement/testing of upstream projects. Until recently our clouds
>> were run by an engineering team and were used to do enablement of
>> different projects and bug fixes of OpenStack, now we have a dedicated
>> admin and we are ready to take it a step forward. The clouds are
>> currently running OpenStack Newton but we are going to be moving to
>> Queens as soon as the release is out. Kolla has volunteered to be the
>> first project for this experiment, they have been pushing us to start
>> doing CI on our images so they also feel more comfortable accepting our
>> changes. We will welcome any other project that wants to be part of this
>> experiment, but we'll focus our engineering efforts initially on
>> enabling Kolla CI.
>>
>> After some preliminary discussion with fungi and inc0, we are going to
>> start small and grow from there. The initial plan is to add 2 projects:
>>
>> 1. Control-plane project that will host a nodepool builder with 8 vCPUs,
>> 8 GB RAM, 1TB storage on a Cinder volume for the image building scratch
>> space. A cache server with similar specs + 200GB on a cinder volume for
>> AFS and Apache proxy caches. They will have a routable IP address.
>>
>> 2. Jobs project, we'll have capacity for 6 test instances initially and
>> after initial assessment grow it as required (8 vCPUs/8 GB RAM, 80GB
>> storage, 1 routable IP each).
>>
>> Is there anything else we are missing for the initial trial? Any
>> questions/concerns before we start? I will try to have presence on the
>> infra weekly calls/IRC channel or have someone from my team on them
>> going forward.
> 
> This plan looks good to me. The one question I had on IRC (and putting it 
> here for historical reasons) is whether or not Andrew FileSystem (AFS) will 
> build and run on arm64. OpenAFS is not in the linux kernel tree so this may 
> not work. The good news is mtreinish reports that after a quick test on some 
> of his hardware AFS was working.
> 
>>
>> In practical terms, once we've created the resources, is there a guide
>> to getting the infra bits in place for it? Who to give the credentials
>> to/etc?
> 
> New clouds happen infrequently enough and require a reasonable amount of 
> communication to get going so I don't think we have written down a guide 
> beyond what we have on the donating resources page [2].
> 
> Typically what happens is we'll have an infra root act as the contact point 
> to set things up, you'll provide them with credentials via email (or whatever 
> communication system is most convenient) then they will immediately change 
> the password(s). It is also helpful if we can get a contact individual for 
> the cloud side and we'll record that in our passwords file so that any one of 
> our infra roots knows who to talk to should the need arise.
> 
> Once the initial credential exchange happens the next step is for that infra 
> root to double check quotas and get the mirror host up and running as well as 
> image builder (and images) built. Once that is done you should be ready to 
> push changes to projects that add jobs using the new nodepool labels 
> (something like ubuntu-xenial-arm64).

Sounds good. Quick update: we are working on applying a patch
(https://review.openstack.org/#/c/489951/) to our Newton deployment so
that uploaded images do not require any extra parameters. Once that is
done we can give infra the credentials. Who will be our infra
counterpart for that?

We are also happy to add engineers to work on any fixes required to make
the infra tools work as seamlessly as possible with ARM64.

Cheers!
Gema

>>
>> Thanks!
>> Gema
> 
> Thank you! this is exciting.
> 
>>
>> [1] https://www.linaro.org/groups/leg/
> [2] https://docs.openstack.org/infra/system-config/contribute-cloud.html
> 
> Clark
> 

-- 
Gema Gomez-Solano
Tech Lead, SDI
Linaro Ltd
IRC: gema@#linaro on irc.freenode.net


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Gema Gomez


On 12/01/18 15:49, Paul Belanger wrote:
> On Fri, Jan 12, 2018 at 11:17:33AM +0100, Marcin Juszkiewicz wrote:
>> On 12.01.2018 at 01:09, Ian Wienand wrote:
>>> On 01/10/2018 08:41 PM, Gema Gomez wrote:
 1. Control-plane project that will host a nodepool builder with 8 vCPUs,
 8 GB RAM, 1TB storage on a Cinder volume for the image building scratch
 space.
>>> Does this mean you're planning on using diskimage-builder to produce
>>> the images to run tests on?  I've seen occasional ARM things come by,
>>> but of course diskimage-builder doesn't have CI for it (yet :) so it's
>>> status is probably "unknown".
>>
>> I had a quick look at diskimage-builder tool.
>>
>> It looks to me that you always build MBR based image with one partition.
>> This will have to be changed as AArch64 is UEFI based platform (both
>> baremetal and VM) so disk needs to use GPT for partitioning and EFI
>> System Partition needs to be present (with grub-efi binary on it).
>>
> This is often the case when bringing new images online, that some changes to 
> DIB
> will be required to support them. I suspect somebody with access to AArch64
> hardware will first need to run build-image.sh[1] and paste the build.log. 
> That
> will build an image locally for you using our DIB elements.
> 
> [1] 
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/tools/build-image.sh

Yep, that won't be an issue. Will do that on Monday.

>> I am aware that you like to build disk images on your own but have you
>> considered using virt-install with generated preseed/kickstart files? It
>> would move several arch related things (like bootloader) to be handled
>> by distribution rules instead of handling them again in code.
>>
> I don't believe we want to look at using a new tool to build all our images,
> switching to virt-install would be a large change. There are reasons why we
> build images from scratch and don't believe switching to virt-install help
> with that
>>
>> Sent a patch to make it choose proper grub package on aarch64.
>>

-- 
Gema Gomez-Solano
Tech Lead, SDI
Linaro Ltd
IRC: gema@#linaro on irc.freenode.net


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Marcin Juszkiewicz
On 12.01.2018 at 15:49, Paul Belanger wrote:
> On Fri, Jan 12, 2018 at 11:17:33AM +0100, Marcin Juszkiewicz wrote:
>> On 12.01.2018 at 01:09, Ian Wienand wrote:
>>> On 01/10/2018 08:41 PM, Gema Gomez wrote:
 1. Control-plane project that will host a nodepool builder with 8 vCPUs,
 8 GB RAM, 1TB storage on a Cinder volume for the image building scratch
 space.
>>> Does this mean you're planning on using diskimage-builder to produce
>>> the images to run tests on?  I've seen occasional ARM things come by,
>>> but of course diskimage-builder doesn't have CI for it (yet :) so it's
>>> status is probably "unknown".
>>
>> I had a quick look at diskimage-builder tool.
>>
>> It looks to me that you always build MBR based image with one partition.
>> This will have to be changed as AArch64 is UEFI based platform (both
>> baremetal and VM) so disk needs to use GPT for partitioning and EFI
>> System Partition needs to be present (with grub-efi binary on it).
>>
> This is often the case when bringing new images online, that some changes to 
> DIB
> will be required to support them. I suspect somebody with access to AArch64
> hardware will first need to run build-image.sh[1] and paste the build.log. 
> That
> will build an image locally for you using our DIB elements.

Or someone will try to target q35/UEFI emulation instead of the i440fx
one on x86 alone. I am tired of yet more disk image building projects.
They all think they are special, and they all have the same
assumptions. BTDT.

If I disable installing grub I can build a useless one-partition disk
image on arm64. Nothing will boot it.


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Jeremy Stanley
On 2018-01-12 11:17:33 +0100 (+0100), Marcin Juszkiewicz wrote:
[...]
> I am aware that you like to build disk images on your own but have
> you considered using virt-install with generated preseed/kickstart
> files? It would move several arch related things (like bootloader)
> to be handled by distribution rules instead of handling them again
> in code.
[...]

We pre-generate and upload images via Glance because it allows us to
upload the same image to all providers (modulo processor
architecture in this case obviously). Once we have more than one
arm64 deployment to integrate, being able to know that we're
uploading identical images to all of them will be useful from a
consistency standpoint. Honestly, getting EFI bits into DIB is
probably no harder than writing a new nodepool builder backend to do
remote virt-install, and would be of use to a lot more people when
implemented.

If you look in
http://git.openstack.org/cgit/openstack/diskimage-builder/tree/diskimage_builder/elements/bootloader/finalise.d/50-bootloader
there's support for setting up ppc64 PReP boot partitions... I don't
expect getting correct EFI partition creation integrated would be
much tougher? That said, it's something the DIB maintainers will
want to weigh in on obviously.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Paul Belanger
On Fri, Jan 12, 2018 at 11:17:33AM +0100, Marcin Juszkiewicz wrote:
> On 12.01.2018 at 01:09, Ian Wienand wrote:
> > On 01/10/2018 08:41 PM, Gema Gomez wrote:
> >> 1. Control-plane project that will host a nodepool builder with 8 vCPUs,
> >> 8 GB RAM, 1TB storage on a Cinder volume for the image building scratch
> >> space.
> > Does this mean you're planning on using diskimage-builder to produce
> > the images to run tests on?  I've seen occasional ARM things come by,
> > but of course diskimage-builder doesn't have CI for it (yet :) so it's
> > status is probably "unknown".
> 
> I had a quick look at diskimage-builder tool.
> 
> It looks to me that you always build MBR based image with one partition.
> This will have to be changed as AArch64 is UEFI based platform (both
> baremetal and VM) so disk needs to use GPT for partitioning and EFI
> System Partition needs to be present (with grub-efi binary on it).
> 
It is often the case when bringing new images online that some changes to DIB
will be required to support them. I suspect somebody with access to AArch64
hardware will first need to run build-image.sh[1] and paste the build.log. That
will build an image locally for you using our DIB elements.

[1] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/tools/build-image.sh
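
For anyone following along, a hypothetical invocation would be
something like the following (the script may need environment
variables not shown here):

```
git clone https://git.openstack.org/openstack-infra/project-config
cd project-config/tools
./build-image.sh 2>&1 | tee build.log
```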
> I am aware that you like to build disk images on your own but have you
> considered using virt-install with generated preseed/kickstart files? It
> would move several arch related things (like bootloader) to be handled
> by distribution rules instead of handling them again in code.
> 
I don't believe we want to look at using a new tool to build all our images;
switching to virt-install would be a large change. There are reasons why we
build images from scratch, and I don't believe switching to virt-install would
help with that.
> 
> Sent a patch to make it choose proper grub package on aarch64.
> 


Re: [OpenStack-Infra] Adding ARM64 cloud to infra

2018-01-12 Thread Marcin Juszkiewicz
On 12.01.2018 at 01:09, Ian Wienand wrote:
> On 01/10/2018 08:41 PM, Gema Gomez wrote:
>> 1. Control-plane project that will host a nodepool builder with 8 vCPUs,
>> 8 GB RAM, 1TB storage on a Cinder volume for the image building scratch
>> space.
> Does this mean you're planning on using diskimage-builder to produce
> the images to run tests on?  I've seen occasional ARM things come by,
> but of course diskimage-builder doesn't have CI for it (yet :) so it's
> status is probably "unknown".

I had a quick look at diskimage-builder tool.

It looks to me like you always build an MBR-based image with one
partition. This will have to change, as AArch64 is a UEFI-based
platform (both baremetal and VM), so the disk needs to use GPT for
partitioning and an EFI System Partition needs to be present (with a
grub-efi binary on it).

I am aware that you like to build disk images on your own, but have
you considered using virt-install with generated preseed/kickstart
files? It would move several arch-related things (like the bootloader)
to be handled by distribution rules instead of being handled again in
code.


I sent a patch to make it choose the proper grub package on aarch64.
