Re: FBR: To update pungi on compose-x86-01.phx2.fedoraproject.org to pungi-4.1.22-10.fc27

2018-03-22 Thread Mohan Boddu
I didn't want to update it since pungi had so many issues with modular support. 
Hence I waited this long.
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org


Re: FBR: To update pungi on compose-x86-01.phx2.fedoraproject.org to pungi-4.1.22-10.fc27

2018-03-22 Thread Kevin Fenzi
+1 makes sense. We likely should have just updated it there for the
other freeze breaks.

kevin





Re: FBR: To update pungi on compose-x86-01.phx2.fedoraproject.org to pungi-4.1.22-10.fc27

2018-03-22 Thread Stephen John Smoogen
+1

On 22 March 2018 at 16:21, Mohan Boddu  wrote:
> We updated pungi on the branched and rawhide composers a lot during this
> freeze; here's the list of the FBRs and why we updated pungi:
>
> https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org/thread/TPAIGZ5WNTBLDNHBX47X5SH6S4Q4PB3O/
> https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org/thread/S33EAB7PBHMU5H6RWVYEEMV6XICFH66Q/
> https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org/thread/24NVPGBYB4CD4NXFSVLEZKZC227JMVRH/
> https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org/thread/PQUO2BPFYAOHT4DPICFSUMSSQEXO6UT3/
>
> Now that we are getting close to the RC request and all the known issues
> are fixed in pungi, I would like to update pungi on
> compose-x86-01.phx2.fedoraproject.org, from which we run the RC compose.



-- 
Stephen J Smoogen.


Re: Freeze Break Request: Change oz.cfg on power builders to just use 1 cpu for now

2018-03-22 Thread Stephen John Smoogen
+1

On 22 March 2018 at 15:20, Kevin Fenzi  wrote:
> From f5934de9f70bc1b31348091bd69279f143f7ef78 Mon Sep 17 00:00:00 2001
> From: Kevin Fenzi 
> Date: Thu, 22 Mar 2018 19:13:23 +
> Subject: [PATCH] Per https://pagure.io/releng/issue/7326 move the power
>  builders oz config to use just 1 cpu for now. There is a bug in nested virt
>  with more than 1 cpu that is causing all the images to fail to build.
> Note that all these images are non release blocking and are currently
> failing in f28 and rawhide, so there's not much downside.
>
> Signed-off-by: Kevin Fenzi 
> ---
>  roles/koji_builder/files/oz.cfg| 22 --
>  roles/koji_builder/tasks/main.yml  |  2 +-
>  roles/koji_builder/templates/oz.cfg.j2 | 26 ++
>  3 files changed, 27 insertions(+), 23 deletions(-)
>  delete mode 100644 roles/koji_builder/files/oz.cfg
>  create mode 100644 roles/koji_builder/templates/oz.cfg.j2
>
> diff --git a/roles/koji_builder/files/oz.cfg b/roles/koji_builder/files/oz.cfg
> deleted file mode 100644
> index 3d045d2..000
> --- a/roles/koji_builder/files/oz.cfg
> +++ /dev/null
> @@ -1,22 +0,0 @@
> -[paths]
> -output_dir = /var/lib/libvirt/images
> -data_dir = /var/lib/oz
> -screenshot_dir = /var/lib/oz/screenshots
> -# sshprivkey = /etc/oz/id_rsa-icicle-gen
> -
> -[libvirt]
> -uri = qemu:///system
> -image_type = raw
> -# type = kvm
> -# bridge_name = virbr0
> -cpus = 2
> -memory = 3096
> -
> -[cache]
> -original_media = yes
> -modified_media = no
> -jeos = no
> -
> -[icicle]
> -safe_generation = no
> -
> diff --git a/roles/koji_builder/tasks/main.yml b/roles/koji_builder/tasks/main.yml
> index 75c427c..c573739 100644
> --- a/roles/koji_builder/tasks/main.yml
> +++ b/roles/koji_builder/tasks/main.yml
> @@ -154,7 +154,7 @@
>
>  # oz.cfg  upstream ram and cpu definitions are not enough
>  - name: oz.cfg
> -  copy: src=oz.cfg dest=/etc/oz/oz.cfg
> +  template: src=oz.cfg.j2 dest=/etc/oz/oz.cfg
>tags:
>- koji_builder
>
> diff --git a/roles/koji_builder/templates/oz.cfg.j2 b/roles/koji_builder/templates/oz.cfg.j2
> new file mode 100644
> index 000..b3dacc8
> --- /dev/null
> +++ b/roles/koji_builder/templates/oz.cfg.j2
> @@ -0,0 +1,26 @@
> +[paths]
> +output_dir = /var/lib/libvirt/images
> +data_dir = /var/lib/oz
> +screenshot_dir = /var/lib/oz/screenshots
> +# sshprivkey = /etc/oz/id_rsa-icicle-gen
> +
> +[libvirt]
> +uri = qemu:///system
> +image_type = raw
> +# type = kvm
> +# bridge_name = virbr0
> +{% if ansible_architecture == 'ppc64' or ansible_architecture == 'ppc64le' %}
> +cpus = 1
> +{% else %}
> +cpus = 2
> +{% endif %}
> +memory = 3096
> +
> +[cache]
> +original_media = yes
> +modified_media = no
> +jeos = no
> +
> +[icicle]
> +safe_generation = no
> +
> --
> 1.8.3.1
>
>



-- 
Stephen J Smoogen.


Re: Freeze Break Request: Change oz.cfg on power builders to just use 1 cpu for now

2018-03-22 Thread Mohan Boddu
LGTM, +1


FBR: To update pungi on compose-x86-01.phx2.fedoraproject.org to pungi-4.1.22-10.fc27

2018-03-22 Thread Mohan Boddu
We updated pungi on the branched and rawhide composers a lot during this
freeze; here's the list of the FBRs and why we updated pungi:

https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org/thread/TPAIGZ5WNTBLDNHBX47X5SH6S4Q4PB3O/
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org/thread/S33EAB7PBHMU5H6RWVYEEMV6XICFH66Q/
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org/thread/24NVPGBYB4CD4NXFSVLEZKZC227JMVRH/
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org/thread/PQUO2BPFYAOHT4DPICFSUMSSQEXO6UT3/

Now that we are getting close to the RC request and all the known issues are
fixed in pungi, I would like to update pungi on
compose-x86-01.phx2.fedoraproject.org, from which we run the RC compose.


Freeze Break Request: Change oz.cfg on power builders to just use 1 cpu for now

2018-03-22 Thread Kevin Fenzi
From f5934de9f70bc1b31348091bd69279f143f7ef78 Mon Sep 17 00:00:00 2001
From: Kevin Fenzi 
Date: Thu, 22 Mar 2018 19:13:23 +
Subject: [PATCH] Per https://pagure.io/releng/issue/7326 move the power
 builders oz config to use just 1 cpu for now. There is a bug in nested virt
 with more than 1 cpu that is causing all the images to fail to build.
Note that all these images are non release blocking and are currently
failing in f28 and rawhide, so there's not much downside.

Signed-off-by: Kevin Fenzi 
---
 roles/koji_builder/files/oz.cfg| 22 --
 roles/koji_builder/tasks/main.yml  |  2 +-
 roles/koji_builder/templates/oz.cfg.j2 | 26 ++
 3 files changed, 27 insertions(+), 23 deletions(-)
 delete mode 100644 roles/koji_builder/files/oz.cfg
 create mode 100644 roles/koji_builder/templates/oz.cfg.j2

diff --git a/roles/koji_builder/files/oz.cfg b/roles/koji_builder/files/oz.cfg
deleted file mode 100644
index 3d045d2..000
--- a/roles/koji_builder/files/oz.cfg
+++ /dev/null
@@ -1,22 +0,0 @@
-[paths]
-output_dir = /var/lib/libvirt/images
-data_dir = /var/lib/oz
-screenshot_dir = /var/lib/oz/screenshots
-# sshprivkey = /etc/oz/id_rsa-icicle-gen
-
-[libvirt]
-uri = qemu:///system
-image_type = raw
-# type = kvm
-# bridge_name = virbr0
-cpus = 2
-memory = 3096
-
-[cache]
-original_media = yes
-modified_media = no
-jeos = no
-
-[icicle]
-safe_generation = no
-
diff --git a/roles/koji_builder/tasks/main.yml b/roles/koji_builder/tasks/main.yml
index 75c427c..c573739 100644
--- a/roles/koji_builder/tasks/main.yml
+++ b/roles/koji_builder/tasks/main.yml
@@ -154,7 +154,7 @@

 # oz.cfg  upstream ram and cpu definitions are not enough
 - name: oz.cfg
-  copy: src=oz.cfg dest=/etc/oz/oz.cfg
+  template: src=oz.cfg.j2 dest=/etc/oz/oz.cfg
   tags:
   - koji_builder

diff --git a/roles/koji_builder/templates/oz.cfg.j2 b/roles/koji_builder/templates/oz.cfg.j2
new file mode 100644
index 000..b3dacc8
--- /dev/null
+++ b/roles/koji_builder/templates/oz.cfg.j2
@@ -0,0 +1,26 @@
+[paths]
+output_dir = /var/lib/libvirt/images
+data_dir = /var/lib/oz
+screenshot_dir = /var/lib/oz/screenshots
+# sshprivkey = /etc/oz/id_rsa-icicle-gen
+
+[libvirt]
+uri = qemu:///system
+image_type = raw
+# type = kvm
+# bridge_name = virbr0
+{% if ansible_architecture == 'ppc64' or ansible_architecture == 'ppc64le' %}
+cpus = 1
+{% else %}
+cpus = 2
+{% endif %}
+memory = 3096
+
+[cache]
+original_media = yes
+modified_media = no
+jeos = no
+
+[icicle]
+safe_generation = no
+
-- 
1.8.3.1





Re: FBR: F28 onwards fedimg needs to parse AtomicHost and Cloud variants separately

2018-03-22 Thread Ricky Elrod
+1, seems fairly easy to revert if it fails.

-re

On Wed, Mar 21, 2018 at 1:53 PM, Kevin Fenzi  wrote:
> +1 here.
>
> kevin
>
>
>


Re: Proposed zchunk file format - V3

2018-03-22 Thread Neal Gompa
On Thu, Mar 22, 2018 at 9:45 AM, Michal Domonkos  wrote:
> On Thu, Mar 22, 2018 at 12:39 PM, Jonathan Dieter  wrote:
>> CC'ing fedora-infrastructure, as I think they got lost somewhere along
>> the way.
>
> Oh, thanks.  I screwed up (again), this time by hitting "Reply"
> instead of "Reply to all" in gmail (*facepalm*).
>
>> Just to be clear, zchunk *could* use buzhash.  There's no rule about
>> where the boundaries need to be, only that the application creating the
>> zchunk file is consistent.  I'd actually like to make the command-line
>> utility use buzhash, but I'm trying to keep the code BSD 2-clause, so I
>> can't just lift casync's buzhash code, and I haven't had time to write
>> that part myself.
>
> Makes sense, thanks for the clarification!
>
> For completeness, I'm also copying below the "git concept" idea that I
> elaborated on in the "lost" email:
>
> ---8<-
>
> The git concept is basically just a generalization of the chunking
> idea we're talking about.
>
> As long as your data semantically represents a tree, you can chunk up
> the content and get a structure like this:
>
> tree
> +-- tree
> |   +-- blob1
> |   +-- blob2
> |   +-- blob3
> +-- tree
> |   +-- blob1
> |   +-- blob2
> |   +-- blob3
> +-- ...
>
> Now, to sync this structure locally, a simple recursive algorithm is
> used:  look at the root tree to see what objects it needs (i.e. gather
> a list of hashes), then download them and do the same with those
> recursively until you have no more incomplete trees left.  In order to
> avoid having too many files, blobs could be stored in one file (maybe
> per tree) and accessed via HTTP ranges, the same way as in zchunk.
>
> The point is, you will only have to fetch those subtrees where some
> objects along the path have changed.  The effectiveness is then a
> (logarithmic) function of how deep and how well you do the chunking of
> your content.
>
> Applying this to our domain, we have:
>
> repomd (tree)
> +-- primary (tree)
> |   +-- srpm1 (tree)
> |   |   +-- rpm1 (blob)
> |   |   +-- rpm2 (blob)
> |   +-- srpm2 (tree)
> |   +-- srpm3 (tree)
> +-- filelists (tree)
> +-- ...
>
> Doing a "checkout" of such a structure would result in the traditional
> metadata files we're using now.  That's just for backward
> compatibility; we could, of course, have a different structure that's
> better suited for our use case.
>
> As you can see, this is really just zchunk, only generalized (not sure
> if compression plays a role here, I haven't considered it).
>
> ---8<-
>

One thing I'm concerned about is handling appended metadata. For
example, both Mageia and openSUSE append AppStream metadata to the
repodata, using a combination of appstream-builder[1] (or
appstream-generator[2]) and modifyrepo_c[3]. How does this scale to
handling that?

[1]: https://www.mankier.com/1/appstream-builder
[2]: https://www.mankier.com/1/appstream-generator
[3]: https://www.mankier.com/8/modifyrepo_c

-- 
真実はいつも一つ!/ Always, there's only one truth!


Re: Proposed zchunk file format - V3

2018-03-22 Thread Michal Domonkos
On Thu, Mar 22, 2018 at 12:39 PM, Jonathan Dieter  wrote:
> CC'ing fedora-infrastructure, as I think they got lost somewhere along
> the way.

Oh, thanks.  I screwed up (again), this time by hitting "Reply"
instead of "Reply to all" in gmail (*facepalm*).

> Just to be clear, zchunk *could* use buzhash.  There's no rule about
> where the boundaries need to be, only that the application creating the
> zchunk file is consistent.  I'd actually like to make the command-line
> utility use buzhash, but I'm trying to keep the code BSD 2-clause, so I
> can't just lift casync's buzhash code, and I haven't had time to write
> that part myself.

Makes sense, thanks for the clarification!

For completeness, I'm also copying below the "git concept" idea that I
elaborated on in the "lost" email:

---8<-

The git concept is basically just a generalization of the chunking
idea we're talking about.

As long as your data semantically represents a tree, you can chunk up
the content and get a structure like this:

tree
+-- tree
|   +-- blob1
|   +-- blob2
|   +-- blob3
+-- tree
|   +-- blob1
|   +-- blob2
|   +-- blob3
+-- ...

Now, to sync this structure locally, a simple recursive algorithm is
used:  look at the root tree to see what objects it needs (i.e. gather
a list of hashes), then download them and do the same with those
recursively until you have no more incomplete trees left.  In order to
avoid having too many files, blobs could be stored in one file (maybe
per tree) and accessed via HTTP ranges, the same way as in zchunk.

The point is, you will only have to fetch those subtrees where some
objects along the path have changed.  The effectiveness is then a
(logarithmic) function of how deep and how well you do the chunking of
your content.
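To make the recursion concrete, here is a small sketch in Python; the object encoding (trees as dicts of name-to-hash, blobs as bytes, hashes as plain strings) and the fetch() callback are hypothetical stand-ins, not part of any real casync or zchunk API:

```python
# A sketch of the recursive tree sync described above.  A real client
# would fetch objects by hash via HTTP range requests into a chunk store.

def sync(root_hash, store, fetch):
    """Fetch every object reachable from root_hash that store is missing."""
    missing = [root_hash]
    while missing:
        h = missing.pop()
        if h not in store:
            store[h] = fetch(h)      # only download what we don't already have
        obj = store[h]
        if isinstance(obj, dict):    # a tree: walk it for child hashes
            missing.extend(obj.values())
```

If the local store already holds a subtree's objects, only the root tree and the genuinely changed objects are downloaded, which is the logarithmic-cost behavior sketched above.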

Applying this to our domain, we have:

repomd (tree)
+-- primary (tree)
|   +-- srpm1 (tree)
|   |   +-- rpm1 (blob)
|   |   +-- rpm2 (blob)
|   +-- srpm2 (tree)
|   +-- srpm3 (tree)
+-- filelists (tree)
+-- ...

Doing a "checkout" of such a structure would result in the traditional
metadata files we're using now.  That's just for backward
compatibility; we could, of course, have a different structure that's
better suited for our use case.

As you can see, this is really just zchunk, only generalized (not sure
if compression plays a role here, I haven't considered it).

---8<-

Regards,

Michal


Re: Proposed zchunk file format - V3

2018-03-22 Thread Jonathan Dieter
CC'ing fedora-infrastructure, as I think they got lost somewhere along
the way.

On Tue, 2018-03-20 at 17:04 +0100, Michal Domonkos wrote:

> Yeah, the level doesn't really matter much.  My point was, as long as
> we chunk, some of the data that we will be downloading we will already
> have locally.  Typically (according to mdhist), it seems that package
> updates are more common than new additions, so we won't be reusing the
> unchanged parts of package tags.  But that's inevitable if we're
> chunking.

Ok, I see your point, and you're absolutely right.


> > The beauty of the zchunk format (or zsync, or any other chunked format)
> > is that we don't have to download different files based on what we
> > have, but rather, we download either fewer or more parts of the same
> > file based on what we have.  From the server side, we don't have to
> > worry about the deltas, and the clients just get what they need.
> 
> +1
> 
> Simplicity is key, I think.  Even at the cost of not having the
> perfectly efficient solution.  The whole packaging stack is already
> complicated enough.

+1000 on that last!


> While I'm not completely sure about application-specific boundaries
> being superior to buzhash (used by casync) in terms of data savings,
> it's clear that using http range requests and concatenating the
> objects together in a smart way (as you suggested previously) to
> reduce the number of HTTP requests is a good move in the right
> direction.

Just to be clear, zchunk *could* use buzhash.  There's no rule about
where the boundaries need to be, only that the application creating the
zchunk file is consistent.  I'd actually like to make the command-line
utility use buzhash, but I'm trying to keep the code BSD 2-clause, so I
can't just lift casync's buzhash code, and I haven't had time to write
that part myself.  

Currently zck.c has a really ugly if statement that, when true, chooses
chunk boundaries by string matching and, when false, falls back to a
really naive, inefficient rolling hash.  If you wanted to contribute
buzhash, I'd happily take it!

> BTW, in the original thread, you mentioned a reduction of 30-40% when
> using casync.  I'm wondering, how did you measure it?  I saw chunk
> reuse ranging from 80% to 90% per metadata update, which seemed quite
> optimistic.  What I did was:
> 
> $ casync make snap1.caidx /path/to/repodata/snap1
> $ casync make --verbose snap2.caidx /path/to/repodata/snap2
> 
> Reused chunks: X (Y%)
> 

IIRC, I went into the web server logs and measured the number of bytes
that casync actually downloaded as compared to the gzip size of the
data.

Thanks so much for your interest!

Jonathan


Re: Initial pre-alpha version of zchunk available for testing and comments

2018-03-22 Thread Jonathan Dieter
On Thu, 2018-03-22 at 11:55 +0200, Jonathan Dieter wrote:
> I've got a working zchunk library, complete with some utilities at
> https://github.com/jdieter/zchunk, but I wanted to get some feedback
> before I went much further.  Its only dependencies are libcurl and
> (optional, but very heavily recommended) libzstd.

While I'm thinking about it, it uses meson as its build system, so
you'll need that too.

Jonathan


Initial pre-alpha version of zchunk available for testing and comments

2018-03-22 Thread Jonathan Dieter
I've got a working zchunk library, complete with some utilities at
https://github.com/jdieter/zchunk, but I wanted to get some feedback
before I went much further.  Its only dependencies are libcurl and
(optional, but very heavily recommended) libzstd.

There are test files in https://www.jdieter.net/downloads/zchunk-test,
and the dictionary I used is in https://www.jdieter.net/downloads.

What works:
 * Creating zchunk files (using zck)
 * Reading zchunk files (using unzck)
 * Downloading zchunk files (using zckdl)

What doesn't:
 * Resuming zchunk downloads
 * Using any of the tools to overwrite a file
 * Automatic detection of the maximum number of ranges in a request
 * Streaming chunking in the library

The main thing I want to ask for advice on is the last item on that
last list.  Currently, every piece of data sent to zck_compress() is
treated as a new chunk.

I'd prefer to have zck_compress() just keep streaming data and have a
zck_end_chunk() function that ends the current chunk, but zstd doesn't
support streamed compression with a dict in its dynamic library.  You
have to use zstd's static library to get that function (because it's
not seen as stable yet).

Any suggestions on how to deal with this?  Should I require the static
library, write my own wrapper that buffers the streamed data until
zck_end_chunk() is called, or just require each chunk to be sent in its
entirety?
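For the second option, the buffering wrapper could be as simple as the sketch below; the compress()/end_chunk() names mirror zck_compress() and the proposed zck_end_chunk(), and zlib stands in for zstd purely so the sketch runs with only the standard library:

```python
import zlib

class ChunkWriter:
    """Buffers streamed data and emits one compressed chunk per end_chunk().

    A sketch of the wrapper idea: compress() only accumulates bytes, and
    end_chunk() compresses the whole buffered chunk in one shot, with a
    preset dictionary if one was given, then starts a fresh chunk.
    """

    def __init__(self, dictionary=b""):
        self.dictionary = dictionary
        self.buf = bytearray()
        self.chunks = []           # one compressed blob per finished chunk

    def compress(self, data):
        # Streaming side: just accumulate until the caller ends the chunk.
        self.buf.extend(data)

    def end_chunk(self):
        if self.dictionary:
            c = zlib.compressobj(zdict=self.dictionary)
        else:
            c = zlib.compressobj()
        self.chunks.append(c.compress(bytes(self.buf)) + c.flush())
        self.buf.clear()

def decompress_chunk(chunk, dictionary=b""):
    """Inflate one chunk produced by ChunkWriter."""
    d = zlib.decompressobj(zdict=dictionary) if dictionary else zlib.decompressobj()
    return d.decompress(chunk) + d.flush()
```

The trade-off is an extra copy of each chunk in memory, but it keeps the dictionary path on the stable one-shot API instead of the static-only streaming one.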

Jonathan
