Hello community,

here is the log from the commit of package openSUSE-release-tools for 
openSUSE:Factory checked in at 2017-11-12 18:10:52
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/openSUSE-release-tools (Old)
 and      /work/SRC/openSUSE:Factory/.openSUSE-release-tools.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "openSUSE-release-tools"

Sun Nov 12 18:10:52 2017 rev:19 rq:541063 version:20171112.b690943

Changes:
--------
--- /work/SRC/openSUSE:Factory/openSUSE-release-tools/openSUSE-release-tools.changes    2017-11-10 14:58:24.301891358 +0100
+++ /work/SRC/openSUSE:Factory/.openSUSE-release-tools.new/openSUSE-release-tools.changes       2017-11-12 18:11:07.341641991 +0100
@@ -1,0 +2,55 @@
+Sun Nov 12 11:04:22 UTC 2017 - opensuse-releaset...@opensuse.org
+
+- Update to version 20171112.b690943:
+  * pkglistgen: fix up coolo's code
+  * pkglistgen: Output overlapping packages as yaml
+  * pkglistgen: Allow new recommended flag to take over recommends
+  * pkglistgen: Ignore modules recursively
+  * pkglistgen: Implement UNWANTED support
+  * pkglistgen: Do not ignore recommends from other modules
+  * pkglistgen: Have update command exit 1 if it updated something
+  * pkglistgen: Create an unsorted.yml and output duplications
+
+-------------------------------------------------------------------
+Fri Nov 10 07:38:52 UTC 2017 - opensuse-releaset...@opensuse.org
+
+- Update to version 20171110.5906e5c:
+  * dist/spec: appease the exit status gods with || true.
+  * dist/spec: restart totest-manager instances properly.
+  * dist/spec: only run %systemd_postun for oneshot services.
+
+-------------------------------------------------------------------
+Thu Nov 09 22:44:53 UTC 2017 - opensuse-releaset...@opensuse.org
+
+- Update to version 20171109.f927c57:
+  * metrics: rework to store points as named tuple and write in batches.
+  * metrics: rework request pagination to provide as generator.
+  * metrics: call ET.clear() to release unneeded memory used by search result.
+
+-------------------------------------------------------------------
+Thu Nov 09 22:30:40 UTC 2017 - opensuse-releaset...@opensuse.org
+
+- Update to version 20171109.bcdea68:
+  * Don't die on delete requests
+
+-------------------------------------------------------------------
+Thu Nov 09 22:21:17 UTC 2017 - opensuse-releaset...@opensuse.org
+
+- Update to version 20171109.3e191ca:
+  * repo_checker: review failed stagings with only openQA failures.
+
+-------------------------------------------------------------------
+Thu Nov 09 22:15:05 UTC 2017 - opensuse-releaset...@opensuse.org
+
+- Update to version 20171109.1efadc5:
+  * metrics/grafana/review: include opensuse-review-team who graphs.
+  * metrics/grafana/review: default to openSUSE:Factory.
+  * metrics/grafana/review: disable annotations by default.
+  * metrics/grafana/staging: "Project stats" to "Totals"
+  * metrics/grafana/staging: remove 1s interval as it causes RAM issues.
+  * metrics/grafana: standardize title prefix with 'OSRT: '.
+  * dist/ci: grafana dir must be owned by grafana user since it writes lock.
+  * dist/spec: correct metrics postun to reference systemctl by absolute path.
+  * metrics: prefix release schedule file with source dir path.
+
+-------------------------------------------------------------------
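The changelog entry "metrics: rework request pagination to provide as generator" refers to a common pattern: fetch fixed-size pages and yield items one at a time instead of accumulating the whole result set in memory. A minimal, self-contained Python sketch of that pattern (the `fetch` callable here is a stand-in for the OBS search call, which the real code pages in sets of 1000):

```python
def paginated(fetch, limit=1000):
    """Yield items page by page instead of building one large list.

    fetch(offset, limit) must return (items, total_matches), mirroring the
    'matches' attribute on an OBS <collection> search result.
    """
    offset = 0
    yielded = 0
    while True:
        items, total = fetch(offset, limit)
        for item in items:
            yield item
            yielded += 1
        if yielded >= total:
            # Stop paging once the expected number of items has been returned.
            break
        offset += limit

# Usage against an in-memory stand-in "server" holding 2,500 fake requests:
DATA = list(range(2500))

def fake_fetch(offset, limit):
    return DATA[offset:offset + limit], len(DATA)

assert list(paginated(fake_fetch)) == DATA
```

Because the consumer pulls one item at a time, peak memory stays bounded by the page size rather than the total match count, which is the point of the rework.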

Old:
----
  openSUSE-release-tools-20171109.3d34370.obscpio

New:
----
  openSUSE-release-tools-20171112.b690943.obscpio

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ openSUSE-release-tools.spec ++++++
--- /var/tmp/diff_new_pack.4mSpa1/_old  2017-11-12 18:11:07.957619583 +0100
+++ /var/tmp/diff_new_pack.4mSpa1/_new  2017-11-12 18:11:07.957619583 +0100
@@ -20,7 +20,7 @@
 %define source_dir osc-plugin-factory
 %define announcer_filename factory-package-news
 Name:           openSUSE-release-tools
-Version:        20171109.3d34370
+Version:        20171112.b690943
 Release:        0
 Summary:        Tools to aid in staging and release work for openSUSE/SUSE
 License:        GPL-2.0+ and MIT
@@ -278,70 +278,32 @@
 # TODO Correct makefile to actually install source.
 mkdir -p %{buildroot}%{_datadir}/%{source_dir}/%{announcer_filename}
 
-%pre announcer
-%service_add_pre %{announcer_filename}.service
-
-%post announcer
-%service_add_post %{announcer_filename}.service
-
-%preun announcer
-%service_del_preun %{announcer_filename}.service
-
 %postun announcer
-%service_del_postun %{announcer_filename}.service
+%systemd_postun
 
 %pre check-source
-%service_add_pre osrt-check-source.service
 getent passwd osrt-check-source > /dev/null || \
  useradd -r -m -s /sbin/nologin -c "user for openSUSE-release-tools-check-source" osrt-check-source
 exit 0
 
-%post check-source
-%service_add_post osrt-check-source.service
-
-%preun check-source
-%service_del_preun osrt-check-source.service
-
 %postun check-source
-%service_del_postun osrt-check-source.service
+%systemd_postun
 
 %pre leaper
-%service_add_pre osrt-leaper-crawler@.service
-%service_add_pre osrt-leaper-manager@.service
-%service_add_pre osrt-leaper-review.service
 getent passwd osrt-leaper > /dev/null || \
  useradd -r -m -s /sbin/nologin -c "user for openSUSE-release-tools-leaper" osrt-leaper
 exit 0
 
-%post leaper
-%service_add_post osrt-leaper-crawler@.service
-%service_add_post osrt-leaper-manager@.service
-%service_add_post osrt-leaper-review.service
-
-%preun leaper
-%service_del_preun osrt-leaper-crawler@.service
-%service_del_preun osrt-leaper-manager@.service
-%service_del_preun osrt-leaper-review.service
-
 %postun leaper
-%service_del_postun osrt-leaper-crawler@.service
-%service_del_postun osrt-leaper-manager@.service
-%service_del_postun osrt-leaper-review.service
+%systemd_postun
 
 %pre maintenance
-%service_add_pre osrt-maintenance-incidents.service
 getent passwd osrt-maintenance > /dev/null || \
  useradd -r -m -s /sbin/nologin -c "user for openSUSE-release-tools-maintenance" osrt-maintenance
 exit 0
 
-%post maintenance
-%service_add_post osrt-maintenance-incidents.service
-
-%preun maintenance
-%service_del_preun osrt-maintenance-incidents.service
-
 %postun maintenance
-%service_del_postun osrt-maintenance-incidents.service
+%systemd_postun
 
 %pre metrics
 getent passwd osrt-metrics > /dev/null || \
@@ -351,78 +313,39 @@
 %postun metrics
 %systemd_postun
 # If grafana-server.service is enabled then restart it to load new dashboards.
-if [ -x /usr/bin/systemctl ] && systemctl is-enabled grafana-server ; then
+if [ -x /usr/bin/systemctl ] && /usr/bin/systemctl is-enabled grafana-server ; then
   /usr/bin/systemctl try-restart --no-block grafana-server
 fi
 
 %pre repo-checker
-%service_add_pre osrt-repo-checker.service
-%service_add_pre osrt-repo-checker-project_only@.service
 getent passwd osrt-repo-checker > /dev/null || \
  useradd -r -m -s /sbin/nologin -c "user for openSUSE-release-tools-repo-checker" osrt-repo-checker
 exit 0
 
-%post repo-checker
-%service_add_post osrt-repo-checker.service
-%service_add_post osrt-repo-checker-project_only@.service
-
-%preun repo-checker
-%service_del_preun osrt-repo-checker.service
-%service_del_preun osrt-repo-checker-project_only@.service
-
 %postun repo-checker
-%service_del_postun osrt-repo-checker.service
-%service_del_postun osrt-repo-checker-project_only@.service
+%systemd_postun
 
 %pre staging-bot
-%service_add_pre osrt-staging-bot-daily@.service
-%service_add_pre osrt-staging-bot-devel-list.service
-%service_add_pre osrt-staging-bot-regular@.service
-%service_add_pre osrt-staging-bot-reminder.service
-%service_add_pre osrt-staging-bot-supersede@.service
-%service_add_pre osrt-staging-bot-support-rebuild@.service
 getent passwd osrt-staging-bot > /dev/null || \
  useradd -r -m -s /sbin/nologin -c "user for openSUSE-release-tools-staging-bot" osrt-staging-bot
 exit 0
 
-%post staging-bot
-%service_add_post osrt-staging-bot-daily@.service
-%service_add_post osrt-staging-bot-devel-list.service
-%service_add_post osrt-staging-bot-regular@.service
-%service_add_post osrt-staging-bot-reminder.service
-%service_add_post osrt-staging-bot-supersede@.service
-%service_add_post osrt-staging-bot-support-rebuild@.service
-
-%preun staging-bot
-%service_del_preun osrt-staging-bot-daily@.service
-%service_del_preun osrt-staging-bot-devel-list.service
-%service_del_preun osrt-staging-bot-regular@.service
-%service_del_preun osrt-staging-bot-reminder.service
-%service_del_preun osrt-staging-bot-supersede@.service
-%service_del_preun osrt-staging-bot-support-rebuild@.service
-
 %postun staging-bot
-%service_del_postun osrt-staging-bot-daily@.service
-%service_del_postun osrt-staging-bot-devel-list.service
-%service_del_postun osrt-staging-bot-regular@.service
-%service_del_postun osrt-staging-bot-reminder.service
-%service_del_postun osrt-staging-bot-supersede@.service
-%service_del_postun osrt-staging-bot-support-rebuild@.service
+%systemd_postun
 
 %pre totest-manager
-%service_add_pre osrt-totest-manager@.service
 getent passwd osrt-totest-manager > /dev/null || \
  useradd -r -m -s /sbin/nologin -c "user for openSUSE-release-tools-totest-manager" osrt-totest-manager
 exit 0
 
-%post totest-manager
-%service_add_post osrt-totest-manager@.service
-
-%preun totest-manager
-%service_del_preun osrt-totest-manager@.service
-
 %postun totest-manager
-%service_del_postun osrt-totest-manager@.service
+%systemd_postun
+if [ -x /usr/bin/systemctl ] ; then
+  instances=($(systemctl list-units -t service --full | grep -oP osrt-totest-manager@[^.]+ || true))
+  if [ ${#instances[@]} -gt 0 ] ; then
+    systemctl try-restart --no-block ${instances[@]}
+  fi
+fi
 
 %files
 %defattr(-,root,root,-)
@@ -509,7 +432,7 @@
 %{_datadir}/%{source_dir}/metrics
 %{_datadir}/%{source_dir}/metrics.py
 # To avoid adding grafana as BuildRequires since it does not live in same repo.
-%dir %{_localstatedir}/lib/grafana
+%dir %attr(0750, grafana, grafana) %{_localstatedir}/lib/grafana
 %dir %{_localstatedir}/lib/grafana/dashboards
 %{_localstatedir}/lib/grafana/dashboards/%{name}
 %{_unitdir}/osrt-metrics@.service
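The %postun totest-manager scriptlet above discovers running template instances by grepping `systemctl list-units` output for `osrt-totest-manager@[^.]+`. The same extraction can be sketched in Python; the sample output below is fabricated for illustration and the helper name is not part of the package:

```python
import re

def instance_units(list_units_output, template='osrt-totest-manager'):
    """Extract instance unit names like template@instance from
    `systemctl list-units -t service --full` output.

    Mirrors the grep pattern in the scriptlet: [^.]+ stops at the '.' that
    begins the '.service' suffix.
    """
    return re.findall(r'%s@[^.]+' % re.escape(template), list_units_output)

# Fabricated sample output for illustration:
sample = """\
osrt-totest-manager@factory.service  loaded active running totest manager
osrt-totest-manager@leap.service     loaded active running totest manager
grafana-server.service               loaded active running Grafana
"""
assert instance_units(sample) == ['osrt-totest-manager@factory',
                                  'osrt-totest-manager@leap']
```

Note that, like the grep it mirrors, `[^.]+` would truncate an instance name that itself contains a dot (e.g. `@leap-42.3`); the sketch reproduces the scriptlet's behavior rather than correcting it.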

++++++ _servicedata ++++++
--- /var/tmp/diff_new_pack.4mSpa1/_old  2017-11-12 18:11:08.001617982 +0100
+++ /var/tmp/diff_new_pack.4mSpa1/_new  2017-11-12 18:11:08.005617836 +0100
@@ -1,6 +1,6 @@
 <servicedata>
   <service name="tar_scm">
    <param name="url">https://github.com/openSUSE/osc-plugin-factory.git</param>
-    <param name="changesrevision">ace4ae06fde903c78736f25b2b15212f0ee5f8d1</param>
+    <param name="changesrevision">b6909435e9c6f4da1d1209378797b0ae78769473</param>
   </service>
 </servicedata>

++++++ openSUSE-release-tools-20171109.3d34370.obscpio -> openSUSE-release-tools-20171112.b690943.obscpio ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/openSUSE-release-tools-20171109.3d34370/check_maintenance_incidents.py new/openSUSE-release-tools-20171112.b690943/check_maintenance_incidents.py
--- old/openSUSE-release-tools-20171109.3d34370/check_maintenance_incidents.py  2017-11-09 14:57:23.000000000 +0100
+++ new/openSUSE-release-tools-20171112.b690943/check_maintenance_incidents.py  2017-11-12 11:59:03.000000000 +0100
@@ -96,7 +96,7 @@
             pkgname = a.tgt_package
             project = a.tgt_project
 
-        if project.startswith('openSUSE:Leap:'):
+        if project.startswith('openSUSE:Leap:') and hasattr(a, 'src_project'):
             mapping = MaintenanceChecker._get_lookup_yml(self.apiurl, project)
             if mapping is None:
                self.logger.error("error loading mapping for {}".format(project))
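The guard added in the hunk above corresponds to the changelog entry "Don't die on delete requests": OBS delete actions carry only a target, so the action object has no src_project attribute and the old code raised AttributeError further down. The hasattr() check skips such actions. A standalone sketch with stand-in action objects (SimpleNamespace here is illustrative, not the osc action API):

```python
from types import SimpleNamespace

def leap_actions_with_source(actions):
    """Return Leap target projects for actions that also have a source,
    skipping delete-style actions that lack .src_project entirely."""
    result = []
    for a in actions:
        if a.tgt_project.startswith('openSUSE:Leap:') and hasattr(a, 'src_project'):
            result.append(a.tgt_project)
    return result

# A submit action has both source and target; a delete action has no source.
submit = SimpleNamespace(tgt_project='openSUSE:Leap:42.3', src_project='devel:foo')
delete = SimpleNamespace(tgt_project='openSUSE:Leap:42.3')

assert leap_actions_with_source([submit, delete]) == ['openSUSE:Leap:42.3']
```

Without the hasattr() guard, accessing `a.src_project` on the delete action would raise AttributeError, which is exactly the crash the changelog entry describes.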
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/openSUSE-release-tools-20171109.3d34370/dist/package/openSUSE-release-tools.spec new/openSUSE-release-tools-20171112.b690943/dist/package/openSUSE-release-tools.spec
--- old/openSUSE-release-tools-20171109.3d34370/dist/package/openSUSE-release-tools.spec        2017-11-09 14:57:23.000000000 +0100
+++ new/openSUSE-release-tools-20171112.b690943/dist/package/openSUSE-release-tools.spec        2017-11-12 11:59:03.000000000 +0100
@@ -278,70 +278,32 @@
 # TODO Correct makefile to actually install source.
 mkdir -p %{buildroot}%{_datadir}/%{source_dir}/%{announcer_filename}
 
-%pre announcer
-%service_add_pre %{announcer_filename}.service
-
-%post announcer
-%service_add_post %{announcer_filename}.service
-
-%preun announcer
-%service_del_preun %{announcer_filename}.service
-
 %postun announcer
-%service_del_postun %{announcer_filename}.service
+%systemd_postun
 
 %pre check-source
-%service_add_pre osrt-check-source.service
 getent passwd osrt-check-source > /dev/null || \
  useradd -r -m -s /sbin/nologin -c "user for openSUSE-release-tools-check-source" osrt-check-source
 exit 0
 
-%post check-source
-%service_add_post osrt-check-source.service
-
-%preun check-source
-%service_del_preun osrt-check-source.service
-
 %postun check-source
-%service_del_postun osrt-check-source.service
+%systemd_postun
 
 %pre leaper
-%service_add_pre osrt-leaper-crawler@.service
-%service_add_pre osrt-leaper-manager@.service
-%service_add_pre osrt-leaper-review.service
 getent passwd osrt-leaper > /dev/null || \
  useradd -r -m -s /sbin/nologin -c "user for openSUSE-release-tools-leaper" osrt-leaper
 exit 0
 
-%post leaper
-%service_add_post osrt-leaper-crawler@.service
-%service_add_post osrt-leaper-manager@.service
-%service_add_post osrt-leaper-review.service
-
-%preun leaper
-%service_del_preun osrt-leaper-crawler@.service
-%service_del_preun osrt-leaper-manager@.service
-%service_del_preun osrt-leaper-review.service
-
 %postun leaper
-%service_del_postun osrt-leaper-crawler@.service
-%service_del_postun osrt-leaper-manager@.service
-%service_del_postun osrt-leaper-review.service
+%systemd_postun
 
 %pre maintenance
-%service_add_pre osrt-maintenance-incidents.service
 getent passwd osrt-maintenance > /dev/null || \
  useradd -r -m -s /sbin/nologin -c "user for openSUSE-release-tools-maintenance" osrt-maintenance
 exit 0
 
-%post maintenance
-%service_add_post osrt-maintenance-incidents.service
-
-%preun maintenance
-%service_del_preun osrt-maintenance-incidents.service
-
 %postun maintenance
-%service_del_postun osrt-maintenance-incidents.service
+%systemd_postun
 
 %pre metrics
 getent passwd osrt-metrics > /dev/null || \
@@ -351,78 +313,39 @@
 %postun metrics
 %systemd_postun
 # If grafana-server.service is enabled then restart it to load new dashboards.
-if [ -x /usr/bin/systemctl ] && systemctl is-enabled grafana-server ; then
+if [ -x /usr/bin/systemctl ] && /usr/bin/systemctl is-enabled grafana-server ; then
   /usr/bin/systemctl try-restart --no-block grafana-server
 fi
 
 %pre repo-checker
-%service_add_pre osrt-repo-checker.service
-%service_add_pre osrt-repo-checker-project_only@.service
 getent passwd osrt-repo-checker > /dev/null || \
  useradd -r -m -s /sbin/nologin -c "user for openSUSE-release-tools-repo-checker" osrt-repo-checker
 exit 0
 
-%post repo-checker
-%service_add_post osrt-repo-checker.service
-%service_add_post osrt-repo-checker-project_only@.service
-
-%preun repo-checker
-%service_del_preun osrt-repo-checker.service
-%service_del_preun osrt-repo-checker-project_only@.service
-
 %postun repo-checker
-%service_del_postun osrt-repo-checker.service
-%service_del_postun osrt-repo-checker-project_only@.service
+%systemd_postun
 
 %pre staging-bot
-%service_add_pre osrt-staging-bot-daily@.service
-%service_add_pre osrt-staging-bot-devel-list.service
-%service_add_pre osrt-staging-bot-regular@.service
-%service_add_pre osrt-staging-bot-reminder.service
-%service_add_pre osrt-staging-bot-supersede@.service
-%service_add_pre osrt-staging-bot-support-rebuild@.service
 getent passwd osrt-staging-bot > /dev/null || \
  useradd -r -m -s /sbin/nologin -c "user for openSUSE-release-tools-staging-bot" osrt-staging-bot
 exit 0
 
-%post staging-bot
-%service_add_post osrt-staging-bot-daily@.service
-%service_add_post osrt-staging-bot-devel-list.service
-%service_add_post osrt-staging-bot-regular@.service
-%service_add_post osrt-staging-bot-reminder.service
-%service_add_post osrt-staging-bot-supersede@.service
-%service_add_post osrt-staging-bot-support-rebuild@.service
-
-%preun staging-bot
-%service_del_preun osrt-staging-bot-daily@.service
-%service_del_preun osrt-staging-bot-devel-list.service
-%service_del_preun osrt-staging-bot-regular@.service
-%service_del_preun osrt-staging-bot-reminder.service
-%service_del_preun osrt-staging-bot-supersede@.service
-%service_del_preun osrt-staging-bot-support-rebuild@.service
-
 %postun staging-bot
-%service_del_postun osrt-staging-bot-daily@.service
-%service_del_postun osrt-staging-bot-devel-list.service
-%service_del_postun osrt-staging-bot-regular@.service
-%service_del_postun osrt-staging-bot-reminder.service
-%service_del_postun osrt-staging-bot-supersede@.service
-%service_del_postun osrt-staging-bot-support-rebuild@.service
+%systemd_postun
 
 %pre totest-manager
-%service_add_pre osrt-totest-manager@.service
 getent passwd osrt-totest-manager > /dev/null || \
  useradd -r -m -s /sbin/nologin -c "user for openSUSE-release-tools-totest-manager" osrt-totest-manager
 exit 0
 
-%post totest-manager
-%service_add_post osrt-totest-manager@.service
-
-%preun totest-manager
-%service_del_preun osrt-totest-manager@.service
-
 %postun totest-manager
-%service_del_postun osrt-totest-manager@.service
+%systemd_postun
+if [ -x /usr/bin/systemctl ] ; then
+  instances=($(systemctl list-units -t service --full | grep -oP osrt-totest-manager@[^.]+ || true))
+  if [ ${#instances[@]} -gt 0 ] ; then
+    systemctl try-restart --no-block ${instances[@]}
+  fi
+fi
 
 %files
 %defattr(-,root,root,-)
@@ -509,7 +432,7 @@
 %{_datadir}/%{source_dir}/metrics
 %{_datadir}/%{source_dir}/metrics.py
 # To avoid adding grafana as BuildRequires since it does not live in same repo.
-%dir %{_localstatedir}/lib/grafana
+%dir %attr(0750, grafana, grafana) %{_localstatedir}/lib/grafana
 %dir %{_localstatedir}/lib/grafana/dashboards
 %{_localstatedir}/lib/grafana/dashboards/%{name}
 %{_unitdir}/osrt-metrics@.service
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/openSUSE-release-tools-20171109.3d34370/metrics/grafana/compare.json new/openSUSE-release-tools-20171112.b690943/metrics/grafana/compare.json
--- old/openSUSE-release-tools-20171109.3d34370/metrics/grafana/compare.json    2017-11-09 14:57:23.000000000 +0100
+++ new/openSUSE-release-tools-20171112.b690943/metrics/grafana/compare.json    2017-11-12 11:59:03.000000000 +0100
@@ -500,6 +500,6 @@
     ]
   },
   "timezone": "",
-  "title": "openSUSE Staging Comparison",
+  "title": "OSRT: Comparison",
   "version": 6
 }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/openSUSE-release-tools-20171109.3d34370/metrics/grafana/review.json new/openSUSE-release-tools-20171112.b690943/metrics/grafana/review.json
--- old/openSUSE-release-tools-20171109.3d34370/metrics/grafana/review.json     2017-11-09 14:57:23.000000000 +0100
+++ new/openSUSE-release-tools-20171112.b690943/metrics/grafana/review.json     2017-11-12 11:59:03.000000000 +0100
@@ -3,7 +3,7 @@
     "list": [
       {
         "datasource": "$project",
-        "enable": true,
+        "enable": false,
         "hide": false,
         "iconColor": "rgba(255, 96, 96, 1)",
         "limit": 100,
@@ -1996,6 +1996,271 @@
       "showTitle": false,
       "title": "Dashboard Row",
       "titleSize": "h6"
+    },
+    {
+      "collapse": false,
+      "height": 250,
+      "panels": [
+        {
+          "aliasColors": {},
+          "bars": false,
+          "dashLength": 10,
+          "dashes": false,
+          "datasource": "$project",
+          "fill": 1,
+          "id": 40,
+          "legend": {
+            "avg": false,
+            "current": false,
+            "max": false,
+            "min": false,
+            "show": true,
+            "total": false,
+            "values": false
+          },
+          "lines": true,
+          "linewidth": 1,
+          "links": [],
+          "nullPointMode": "null",
+          "percentage": false,
+          "pointradius": 5,
+          "points": false,
+          "renderer": "flot",
+          "seriesOverrides": [],
+          "spaceLength": 10,
+          "span": 12,
+          "stack": false,
+          "steppedLine": false,
+          "targets": [
+            {
+              "alias": "$tag_who_completed",
+              "dsType": "influxdb",
+              "groupBy": [
+                {
+                  "params": [
+                    "$__interval"
+                  ],
+                  "type": "time"
+                },
+                {
+                  "params": [
+                    "who_completed"
+                  ],
+                  "type": "tag"
+                },
+                {
+                  "params": [
+                    "null"
+                  ],
+                  "type": "fill"
+                }
+              ],
+              "measurement": "review",
+              "orderByTime": "ASC",
+              "policy": "default",
+              "refId": "A",
+              "resultFormat": "time_series",
+              "select": [
+                [
+                  {
+                    "params": [
+                      "open_for"
+                    ],
+                    "type": "field"
+                  },
+                  {
+                    "params": [],
+                    "type": "count"
+                  },
+                  {
+                    "params": [],
+                    "type": "cumulative_sum"
+                  }
+                ]
+              ],
+              "tags": [
+                {
+                  "key": "by_group",
+                  "operator": "=",
+                  "value": "opensuse-review-team"
+                }
+              ]
+            }
+          ],
+          "thresholds": [],
+          "timeFrom": null,
+          "timeShift": null,
+          "title": "opensuse-review-team - who",
+          "tooltip": {
+            "shared": true,
+            "sort": 0,
+            "value_type": "individual"
+          },
+          "type": "graph",
+          "xaxis": {
+            "buckets": null,
+            "mode": "time",
+            "name": null,
+            "show": true,
+            "values": []
+          },
+          "yaxes": [
+            {
+              "format": "short",
+              "label": null,
+              "logBase": 1,
+              "max": null,
+              "min": null,
+              "show": true
+            },
+            {
+              "format": "short",
+              "label": null,
+              "logBase": 1,
+              "max": null,
+              "min": null,
+              "show": true
+            }
+          ]
+        }
+      ],
+      "repeat": null,
+      "repeatIteration": null,
+      "repeatRowId": null,
+      "showTitle": false,
+      "title": "Dashboard Row",
+      "titleSize": "h6"
+    },
+    {
+      "collapse": false,
+      "height": 250,
+      "panels": [
+        {
+          "aliasColors": {},
+          "bars": true,
+          "dashLength": 10,
+          "dashes": false,
+          "datasource": "$project",
+          "fill": 1,
+          "id": 41,
+          "interval": "1d",
+          "legend": {
+            "avg": false,
+            "current": false,
+            "max": false,
+            "min": false,
+            "show": true,
+            "total": false,
+            "values": false
+          },
+          "lines": false,
+          "linewidth": 1,
+          "links": [],
+          "nullPointMode": "null",
+          "percentage": true,
+          "pointradius": 5,
+          "points": false,
+          "renderer": "flot",
+          "seriesOverrides": [],
+          "spaceLength": 10,
+          "span": 12,
+          "stack": true,
+          "steppedLine": false,
+          "targets": [
+            {
+              "alias": "$tag_who_completed",
+              "dsType": "influxdb",
+              "groupBy": [
+                {
+                  "params": [
+                    "$__interval"
+                  ],
+                  "type": "time"
+                },
+                {
+                  "params": [
+                    "who_completed"
+                  ],
+                  "type": "tag"
+                },
+                {
+                  "params": [
+                    "null"
+                  ],
+                  "type": "fill"
+                }
+              ],
+              "measurement": "review",
+              "orderByTime": "ASC",
+              "policy": "default",
+              "refId": "A",
+              "resultFormat": "time_series",
+              "select": [
+                [
+                  {
+                    "params": [
+                      "open_for"
+                    ],
+                    "type": "field"
+                  },
+                  {
+                    "params": [],
+                    "type": "count"
+                  }
+                ]
+              ],
+              "tags": [
+                {
+                  "key": "by_group",
+                  "operator": "=",
+                  "value": "opensuse-review-team"
+                }
+              ]
+            }
+          ],
+          "thresholds": [],
+          "timeFrom": null,
+          "timeShift": null,
+          "title": "opensuse-review-team - who",
+          "tooltip": {
+            "shared": true,
+            "sort": 0,
+            "value_type": "individual"
+          },
+          "type": "graph",
+          "xaxis": {
+            "buckets": null,
+            "mode": "time",
+            "name": null,
+            "show": true,
+            "values": []
+          },
+          "yaxes": [
+            {
+              "format": "short",
+              "label": null,
+              "logBase": 1,
+              "max": "100",
+              "min": null,
+              "show": true
+            },
+            {
+              "format": "short",
+              "label": null,
+              "logBase": 1,
+              "max": null,
+              "min": null,
+              "show": true
+            }
+          ]
+        }
+      ],
+      "repeat": null,
+      "repeatIteration": null,
+      "repeatRowId": null,
+      "showTitle": false,
+      "title": "Dashboard Row",
+      "titleSize": "h6"
     }
   ],
   "schemaVersion": 14,
@@ -2005,8 +2270,8 @@
     "list": [
       {
         "current": {
-          "text": "openSUSE:Leap:42.3v2",
-          "value": "openSUSE:Leap:42.3v2"
+          "text": "openSUSE:Factory",
+          "value": "openSUSE:Factory"
         },
         "hide": 0,
         "label": null,
@@ -2049,6 +2314,6 @@
     ]
   },
   "timezone": "",
-  "title": "openSUSE Staging Reviews",
+  "title": "OSRT: Reviews",
   "version": 12
 }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/openSUSE-release-tools-20171109.3d34370/metrics/grafana/staging.json new/openSUSE-release-tools-20171112.b690943/metrics/grafana/staging.json
--- old/openSUSE-release-tools-20171109.3d34370/metrics/grafana/staging.json    2017-11-09 14:57:23.000000000 +0100
+++ new/openSUSE-release-tools-20171112.b690943/metrics/grafana/staging.json    2017-11-12 11:59:03.000000000 +0100
@@ -239,7 +239,7 @@
           "datasource": "$project",
           "fill": 1,
           "id": 1,
-          "interval": "1s",
+          "interval": null,
           "legend": {
             "alignAsTable": false,
             "avg": false,
@@ -372,7 +372,7 @@
           "thresholds": [],
           "timeFrom": null,
           "timeShift": null,
-          "title": "Project stats",
+          "title": "Totals",
           "tooltip": {
             "shared": true,
             "sort": 0,
@@ -3901,6 +3901,6 @@
     ]
   },
   "timezone": "",
-  "title": "Staging",
+  "title": "OSRT: Staging",
   "version": 20
 }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/openSUSE-release-tools-20171109.3d34370/metrics.py new/openSUSE-release-tools-20171112.b690943/metrics.py
--- old/openSUSE-release-tools-20171109.3d34370/metrics.py      2017-11-09 14:57:23.000000000 +0100
+++ new/openSUSE-release-tools-20171112.b690943/metrics.py      2017-11-12 11:59:03.000000000 +0100
@@ -18,63 +18,70 @@
 from osclib.conf import Config
 from osclib.stagingapi import StagingAPI
 
+SOURCE_DIR = os.path.dirname(os.path.realpath(__file__))
+Point = namedtuple('Point', ['measurement', 'tags', 'fields', 'time', 'delta'])
+
 # Duplicate Leap config to handle 13.2 without issue.
 osclib.conf.DEFAULT[
     r'openSUSE:(?P<project>[\d.]+)'] = osclib.conf.DEFAULT[
     r'openSUSE:(?P<project>Leap:[\d.]+)']
 
-# Provide osc.core.get_request_list() that swaps out search() implementation and
-# uses lxml ET to avoid having to reparse to peform complex xpaths.
+# Provide osc.core.get_request_list() that swaps out search() implementation to
+# capture the generated query, paginate over and yield each request to avoid
+# loading all requests at the same time. Additionally, use lxml ET to avoid
+# having to re-parse to perform complex xpaths.
 def get_request_list(*args, **kwargs):
-    global _requests
-
     osc.core._search = osc.core.search
-    osc.core.search = search
+    osc.core.search = search_capture
     osc.core._ET = osc.core.ET
     osc.core.ET = ET
 
     osc.core.get_request_list(*args, **kwargs)
 
     osc.core.search = osc.core._search
+
+    query = search_capture.query
+    for request in search_paginated_generator(query[0], query[1], **query[2]):
+        # Python 3 yield from.
+        yield request
+
     osc.core.ET = osc.core._ET
 
-    return _requests
+def search_capture(apiurl, queries=None, **kwargs):
+    search_capture.query = (apiurl, queries, kwargs)
+    return {'request': ET.fromstring('<collection matches="0"></collection>')}
 
 # Provides a osc.core.search() implementation for use with get_request_list()
-# that paginates in sets of 1000.
-def search(apiurl, queries=None, **kwargs):
-    global _requests
-
+# that paginates in sets of 1000 and yields each request.
+def search_paginated_generator(apiurl, queries=None, **kwargs):
     if "submit/target/@project='openSUSE:Factory'" in kwargs['request']:
         kwargs['request'] = osc.core.xpath_join(kwargs['request'], '@id>250000', op='and')
 
-    requests = []
+    request_count = 0
     queries['request']['limit'] = 1000
     queries['request']['offset'] = 0
     while True:
-        collection = osc.core._search(apiurl, queries, **kwargs)['request']
-        requests.extend(collection.findall('request'))
+        collection = osc.core.search(apiurl, queries, **kwargs)['request']
+        if not request_count:
+            print('processing {:,} requests'.format(int(collection.get('matches'))))
+
+        for request in collection.findall('request'):
+            yield request
+            request_count += 1
 
-        if len(requests) == int(collection.get('matches')):
+        if request_count == int(collection.get('matches')):
             # Stop paging once the expected number of items has been returned.
             break
 
+        # Release memory as otherwise ET seems to hold onto it.
+        collection.clear()
         queries['request']['offset'] += queries['request']['limit']
 
-    _requests = requests
-    return {'request': ET.fromstring('<collection matches="0"></collection>')}
-
 points = []
 
 def point(measurement, fields, datetime, tags={}, delta=False):
     global points
-    points.append({
-        'measurement': measurement,
-        'tags': tags,
-        'fields': fields,
-        'time': timestamp(datetime),
-        'delta': delta,
-    })
+    points.append(Point(measurement, tags, fields, timestamp(datetime), delta))
 
 def timestamp(datetime):
     return int(datetime.strftime('%s'))
@@ -84,7 +91,6 @@
                                 req_state=('accepted', 'revoked', 'superseded'),
                                 exclude_target_projects=[project],
                                 withfullhistory=True)
-    print('processing {:,} requests'.format(len(requests)))
     for request in requests:
         if request.find('action').get('type') not in ('submit', 'delete'):
             # TODO Handle non-stageable requests via different flow.
@@ -226,9 +232,8 @@
             else:
                 print('unable to find priority history entry for {} to {}'.format(request.get('id'), priority.text))
 
-    walk_points(points, project)
-
-    return points
+    print('finalizing {:,} points'.format(len(points)))
+    return walk_points(points, project)
 
 def who_workaround(request, review, relax=False):
     # Super ugly workaround for incorrect and missing data:
@@ -255,42 +260,64 @@
 
     return who
 
+# Walk data points in order by time, adding up deltas and merging points at
+# the same time. Data is converted to dict() and written to influx in batches
+# to avoid the extra memory needed to hold all points as dict() at once and to
+# avoid influxdb allocating memory for the entire incoming data set at once.
 def walk_points(points, target):
+    global client
+    # Wait until just before writing to drop database.
+    client.drop_database(client._database)
+    client.create_database(client._database)
+
     counters = {}
     final = []
-    for point in sorted(points, key=lambda l: l['time']):
-        if not point['delta']:
-            final.append(point)
+    time_last = None
+    wrote = 0
+    for point in sorted(points, key=lambda l: l.time):
+        if point.time != time_last and len(final) >= 1000:
+            # Write final points in batches of ~1000, but avoid flushing in
+            # the middle of a run of points sharing the same time, since those
+            # may still be merged; only flush once the time has advanced.
+            client.write_points(final, 's')
+            wrote += len(final)
+            final = []
+        time_last = point.time
+
+        if not point.delta:
+            final.append(dict(point._asdict()))
             continue
 
         # A more generic method like 'key' which ended up being needed is likely better.
-        measurement = counters_tag_key = point['measurement']
+        measurement = counters_tag_key = point.measurement
         if measurement == 'staging':
-            counters_tag_key += point['tags']['id']
+            counters_tag_key += point.tags['id']
         elif measurement == 'review_count':
-            counters_tag_key += '_'.join(point['tags']['key'])
+            counters_tag_key += '_'.join(point.tags['key'])
         elif measurement == 'priority':
-            counters_tag_key += point['tags']['level']
+            counters_tag_key += point.tags['level']
         counters_tag = counters.setdefault(counters_tag_key, {'last': None, 'values': {}})
 
         values = counters_tag['values']
-        for key, value in point['fields'].items():
+        for key, value in point.fields.items():
             values[key] = values.setdefault(key, 0) + value
 
-        if counters_tag['last'] and point['time'] == counters_tag['last']['time']:
+        if counters_tag['last'] and point.time == counters_tag['last']['time']:
             point = counters_tag['last']
         else:
+            point = dict(point._asdict())
             counters_tag['last'] = point
             final.append(point)
         point['fields'].update(counters_tag['values'])
 
-    # In order to allow for merging delta entries for the same time.
-    points = final
+    # Write any remaining final points.
+    client.write_points(final, 's')
+    return wrote + len(final)
 
 def ingest_release_schedule(project):
     points = []
     release_schedule = {}
-    release_schedule_file = 'metrics/annotation/{}.yaml'.format(project)
+    release_schedule_file = os.path.join(SOURCE_DIR, 'metrics/annotation/{}.yaml'.format(project))
     if project.endswith('Factory'):
         # Extract Factory "release schedule" from Tumbleweed snapshot list.
         command = 'rsync rsync.opensuse.org::opensuse-full/opensuse/tumbleweed/iso/Changes.* | ' \
@@ -310,9 +337,13 @@
             'time': timestamp(date),
         })
 
-    return points
+    client.write_points(points, 's')
+    return len(points)
 
 def main(args):
+    global client
+    client = InfluxDBClient(args.host, args.port, args.user, args.password, args.project)
+
     osc.conf.get_config(override_apiurl=args.apiurl)
     osc.conf.config['debug'] = args.debug
 
@@ -333,16 +364,8 @@
     print('who_workaround_swap', who_workaround_swap)
     print('who_workaround_miss', who_workaround_miss)
 
-    print('writing {:,} points and {:,} annotation points to db'.format(
-        len(points_requests), len(points_schedule)))
-
-    db = args.project
-    client = InfluxDBClient(args.host, args.port, args.user, args.password, db)
-    client.drop_database(db)
-    client.create_database(db)
-
-    client.write_points(points_requests, 's')
-    client.write_points(points_schedule, 's')
+    print('wrote {:,} points and {:,} annotation points to db'.format(
+        points_requests, points_schedule))
 
 
 if __name__ == '__main__':
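The metrics.py change above replaces a list-accumulating monkeypatched search with a generator that pages through results. A minimal sketch of that pattern, assuming a made-up `fetch(offset, limit)` stand-in for `osc.core.search()` that returns `(items, total_matches)`:

```python
# Sketch of the pagination-as-generator rework: yield each item as it arrives
# and stop once the reported total has been consumed, mirroring the
# 1000-item pages used by search_paginated_generator().
def paginated(fetch, limit=1000):
    consumed = 0
    offset = 0
    while True:
        items, matches = fetch(offset, limit)
        for item in items:
            yield item
            consumed += 1
        if consumed == matches:
            # Stop paging once the expected number of items has been returned.
            break
        offset += limit

# Hypothetical backend serving 2,500 numbered "requests" in pages.
def fake_fetch(offset, limit):
    data = list(range(2500))
    return data[offset:offset + limit], len(data)

print(sum(1 for _ in paginated(fake_fetch)))  # 2500
```

Consumers can then process requests one at a time instead of holding the full result set, which is the memory win the commit message refers to.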
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/openSUSE-release-tools-20171109.3d34370/pkglistgen.py new/openSUSE-release-tools-20171112.b690943/pkglistgen.py
--- old/openSUSE-release-tools-20171109.3d34370/pkglistgen.py   2017-11-09 14:57:23.000000000 +0100
+++ new/openSUSE-release-tools-20171112.b690943/pkglistgen.py   2017-11-12 11:59:03.000000000 +0100
@@ -45,7 +45,7 @@
 
 logger = logging.getLogger()
 
-ARCHITECTURES = ('x86_64', 'ppc64le', 's390x', 'aarch64')
+ARCHITECTURES = ['x86_64', 'ppc64le', 's390x', 'aarch64']
 DEFAULT_REPOS = ("openSUSE:Factory/standard")
 
 class Group(object):
@@ -69,6 +69,8 @@
         self.srcpkgs = None
         self.develpkgs = []
         self.silents = set()
+        self.ignored = set()
+        self.recommended = set()
 
         pkglist.groups[self.safe_name] = self
 
@@ -91,13 +93,18 @@
                 continue
             name = package.keys()[0]
             for rel in package[name]:
+                arch = None
                 if rel == 'locked':
                     self.locked.add(name)
+                    continue
                 elif rel == 'silent':
-                    self._add_to_packages(name)
                     self.silents.add(name)
+                elif rel == 'recommended':
+                    self.recommended.add(name)
                 else:
-                    self._add_to_packages(name, rel)
+                    arch = rel
+
+                self._add_to_packages(name, arch)
 
     def _verify_solved(self):
         if not self.solved:
@@ -109,6 +116,7 @@
 
         self.locked.update(group.locked)
         self.silents.update(group.silents)
+        self.recommended.update(group.recommended)
 
     # do not repeat packages
     def ignore(self, without):
@@ -123,6 +131,9 @@
             self.not_found[p] -= without.not_found[p]
             if not self.not_found[p]:
                 self.not_found.pop(p)
+        for g in without.ignored:
+            self.ignore(g)
+        self.ignored.add(without)
 
     def solve(self, ignore_recommended=False):
         """ base: list of base groups or None """
@@ -140,7 +151,9 @@
             pool = self.pkglist._prepare_pool(arch)
             # pool.set_debuglevel(10)
 
-            for n, group in self.packages[arch]:
+            tosolv = self.packages[arch]
+            while tosolv:
+                n, group = tosolv.pop(0)
                 jobs = list(self.pkglist.lockjobs[arch])
                 sel = pool.select(str(n), solv.Selection.SELECTION_NAME)
                 if sel.isempty():
@@ -148,12 +161,22 @@
                     self.not_found.setdefault(n, set()).add(arch)
                     continue
                 else:
+                    if n in self.recommended:
+                        for s in sel.solvables():
+                            for dep in s.lookup_deparray(solv.SOLVABLE_RECOMMENDS):
+                                # only add recommends that exist as packages
+                                rec = pool.select(dep.str(), solv.Selection.SELECTION_NAME)
+                                if not rec.isempty():
+                                    tosolv.append([dep.str(), group + ":recommended:" + n])
+
                     jobs += sel.jobs(solv.Job.SOLVER_INSTALL)
 
-                for l in self.locked:
+                locked = self.locked | self.pkglist.unwanted
+                for l in locked:
                     sel = pool.select(str(l), solv.Selection.SELECTION_NAME)
                     if sel.isempty():
-                        logger.warn('{}.{}: locked package {} not found'.format(self.name, arch, l))
+                        # if we can't find it, it probably is not as important
+                        logger.debug('{}.{}: locked package {} not found'.format(self.name, arch, l))
                     else:
                         jobs += sel.jobs(solv.Job.SOLVER_LOCK)
 
@@ -171,7 +194,11 @@
                 problems = solver.solve(jobs)
                 if problems:
                     for problem in problems:
-                        logger.error('unresolvable: %s.%s: %s', self.name, arch, problem)
+                        msg = 'unresolvable: %s.%s: %s' % (self.name, arch, problem)
+                        if self.pkglist.ignore_broken:
+                            logger.debug(msg)
+                        else:
+                            logger.error(msg)
                         self.unresolvable[arch][n] = str(problem)
                     continue
 
@@ -182,6 +209,8 @@
 
                 if 'get_recommended' in dir(solver):
                     for s in solver.get_recommended():
+                        if s.name in locked:
+                            continue
                         self.recommends.setdefault(s.name, group + ':' + n)
                     for s in solver.get_suggested():
                         self.recommends.setdefault(s.name, group + ':' + n)
@@ -220,6 +249,23 @@
         self.solved_packages = solved
         self.solved = True
 
+    def check_dups(self, modules):
+        packages = set(self.solved_packages['*'])
+        for arch in self.architectures:
+            packages.update(self.solved_packages[arch])
+        for m in modules:
+            # do not check with ourselves and only once for the rest
+            if m.name <= self.name: continue
+            if self.name in m.conflicts or m.name in self.conflicts:
+                continue
+            mp = set(m.solved_packages['*'])
+            for arch in self.architectures:
+                mp.update(m.solved_packages[arch])
+            if len(packages & mp):
+                print 'overlap_between_' + self.name + '_and_' + m.name + ':'
+                for p in sorted(packages & mp):
+                    print '  - ' + p
+
     def collect_devel_packages(self, modules):
         develpkgs = set()
         for arch in self.architectures:
@@ -255,7 +301,6 @@
             for m in modules:
                 for arch in ['*'] + self.architectures:
                     already_present = already_present or (p in m.solved_packages[arch])
-                    already_present = already_present or (p in m.recommends)
             if not already_present:
                 self.recommends[p] = recommends[p]
 
@@ -292,12 +337,12 @@
                 name = msg
             if name in unresolvable:
                 msg = ' {} uninstallable: {}'.format(name, unresolvable[name])
-                logger.error(msg)
                 if ignore_broken:
                     c = ET.Comment(msg)
                     packagelist.append(c)
                     continue
                 else:
+                    logger.error(msg)
                     name = msg
             status = self.pkglist.supportstatus(name)
             attrs = { 'name': name }
@@ -344,6 +389,8 @@
         self.lockjobs = dict()
         self.ignore_broken = False
         self.ignore_recommended = False
+        self.unwanted = set()
+        self.output = None
 
     def _dump_supportstatus(self):
         for name in self.packages.keys():
@@ -377,23 +424,29 @@
 
     def _load_group_file(self, fn):
         output = None
+        unwanted = None
         with open(fn, 'r') as fh:
             logger.debug("reading %s", fn)
             for groupname, group in yaml.safe_load(fh).items():
                 if groupname == 'OUTPUT':
                     output = group
                     continue
+                if groupname == 'UNWANTED':
+                    unwanted = set(group)
+                    continue
                 g = Group(groupname, self)
                 g.parse_yml(group)
-        return output
+        return output, unwanted
 
     def load_all_groups(self):
-        output = None
         for fn in glob.glob(os.path.join(self.input_dir, 'group*.yml')):
-            o = self._load_group_file(fn)
-            if not output:
-                output = o
-        return output
+            o, u = self._load_group_file(fn)
+            if o:
+                if self.output is not None:
+                    raise Exception('OUTPUT defined multiple times')
+                self.output = o
+            if u:
+                self.unwanted |= u
 
     def _write_all_groups(self):
         self._check_supplements()
@@ -481,41 +534,34 @@
 
         return pool
 
-    def _collect_unsorted_packages(self):
-        return
+    def _collect_unsorted_packages(self, modules):
         packages = dict()
         for arch in self.architectures:
             pool = self._prepare_pool(arch)
             sel = pool.Selection()
             p = set([s.name for s in
                      pool.solvables_iter() if not
-                     (s.name.endswith('-debuginfo') or
+                     (s.name.endswith('-32bit') or
+                      s.name.endswith('-debuginfo') or
                       s.name.endswith('-debugsource'))])
 
-            for g in self.groups.values():
-                if g.solved:
-                    for a in ('*', arch):
-                        p -= g.solved_packages[a]
-            packages[arch] = p
-
-        common = None
-        # compute common packages across all architectures
-        for arch in packages.keys():
-            if common is None:
-                common = set(packages[arch])
-                continue
-            common &= packages[arch]
-
-        # reduce arch specific set by common ones
-        for arch in packages.keys():
-            packages[arch] -= common
-
-        packages['*'] = common
-
-        g = Group('unsorted', self)
-        g.solved_packages = packages
-        g.solved = True
-
+            p -= self.unwanted
+            for g in modules:
+                for a in ('*', arch):
+                    p -= set(g.solved_packages[a].keys())
+            for package in p:
+                packages.setdefault(package, []).append(arch)
+
+        with open(os.path.join(self.output_dir, 'unsorted.yml'), 'w') as fh:
+            fh.write("unsorted:\n")
+            for p in sorted(packages.keys()):
+                fh.write("  - ")
+                fh.write(p)
+                if len(packages[p]) != len(self.architectures):
+                    fh.write(": [")
+                    fh.write(','.join(sorted(packages[p])))
+                    fh.write("]")
+                fh.write(" \n")
 
 class CommandLineInterface(ToolBase.CommandLineInterface):
 
@@ -597,6 +643,7 @@
 
         # only there to parse the repos
         bs_mirrorfull = os.path.join(os.path.dirname(__file__), 'bs_mirrorfull')
+        global_update = False
         for prp in self.tool.repos:
             project, repo = prp.split('/')
             for arch in self.tool.architectures:
@@ -608,8 +655,18 @@
                     apiurl = 'https://api.opensuse.org/public'
                 else:
                     apiurl = 'https://api.suse.de/public'
-                subprocess.call(
-                    [bs_mirrorfull, '--nodebug', '{}/build/{}/{}/{}'.format(apiurl, project, repo, arch), d])
+                args = [bs_mirrorfull]
+                args.append('--nodebug')
+                args.append('{}/build/{}/{}/{}'.format(apiurl, project, repo, arch))
+                args.append(d)
+                p = subprocess.Popen(args, stdout=subprocess.PIPE)
+                repo_update = False
+                for line in p.stdout:
+                    print(line.rstrip())
+                    global_update = True
+                    repo_update = True
+                if not repo_update:
+                    continue
                 files = [os.path.join(d, f)
                          for f in os.listdir(d) if f.endswith('.rpm')]
                 fh = open(d + '.solv', 'w')
@@ -618,6 +675,7 @@
                 p.communicate('\0'.join(files))
                 p.wait()
                 fh.close()
+        return global_update
 
 
     @cmdln.option('--ignore-unresolvable', action='store_true', help='ignore unresolvable and missing packages')
@@ -629,8 +687,9 @@
         ${cmd_option_list}
         """
 
-        output = self.tool.load_all_groups()
-        if not output:
+        self.tool.load_all_groups()
+        if not self.tool.output:
+            logger.error('OUTPUT not defined')
             return
 
         if opts.ignore_unresolvable:
@@ -641,19 +700,22 @@
         modules = []
         # the yml parser makes an array out of everything, so
         # we loop a bit more than what we support
-        for group in output:
+        for group in self.tool.output:
             groupname = group.keys()[0]
             settings = group[groupname]
             includes = settings.get('includes', [])
             excludes = settings.get('excludes', [])
             self.tool.solve_module(groupname, includes, excludes)
-            modules.append(self.tool.groups[groupname])
+            g = self.tool.groups[groupname]
+            g.conflicts = settings.get('conflicts', [])
+            modules.append(g)
 
         for module in modules:
+            module.check_dups(modules)
             module.collect_devel_packages(modules)
             module.filter_duplicated_recommends(modules)
 
-        self.tool._collect_unsorted_packages()
+        self.tool._collect_unsorted_packages(modules)
         self.tool._write_all_groups()
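The new `check_dups()` above compares every pair of solved modules once (via the `m.name <= self.name` guard), skips pairs declared as conflicts, and prints overlapping packages. A hedged sketch of that pairwise check, using plain dicts of module name to package-name set in place of the Group objects (all names below are invented):

```python
# Sketch of the check_dups() overlap report: visit each unordered module
# pair once, skip declared conflicts, and report packages solved into both.
def report_overlaps(modules, conflicts=()):
    names = sorted(modules)
    lines = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:  # each pair exactly once
            if (a, b) in conflicts or (b, a) in conflicts:
                continue
            common = modules[a] & modules[b]
            if common:
                lines.append('overlap_between_' + a + '_and_' + b + ':')
                lines.extend('  - ' + p for p in sorted(common))
    return '\n'.join(lines)

mods = {
    'base': {'bash', 'glibc', 'zlib'},
    'desktop': {'glibc', 'firefox'},
    'server': {'nginx', 'zlib'},
}
print(report_overlaps(mods, conflicts={('base', 'server')}))
```

With the conflict between `base` and `server` declared, only the `glibc` overlap between `base` and `desktop` is reported, matching the YAML-ish output format the diff emits.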
 
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/openSUSE-release-tools-20171109.3d34370/repo_checker.py new/openSUSE-release-tools-20171112.b690943/repo_checker.py
--- old/openSUSE-release-tools-20171109.3d34370/repo_checker.py 2017-11-09 14:57:23.000000000 +0100
+++ new/openSUSE-release-tools-20171112.b690943/repo_checker.py 2017-11-12 11:59:03.000000000 +0100
@@ -125,8 +125,20 @@
             # Corrupted requests may reference non-existent projects and will
             # thus return a None status which should be considered not ready.
             if not status or str(status['overall_state']) not in ('testing', 'review', 'acceptable'):
-                self.logger.debug('{}: {} not ready'.format(request.reqid, group))
-                continue
+                # Not in a "ready" state.
+                openQA_only = False # Not relevant so set to False.
+                if str(status['overall_state']) == 'failed':
+                    # Exception to the rule is openQA only in failed state.
+                    openQA_only = True
+                    for project in api.project_status_walk(status):
+                        if len(project['broken_packages']):
+                            # Broken packages so not just openQA.
+                            openQA_only = False
+                            break
+
+                if not openQA_only:
+                    self.logger.debug('{}: {} not ready'.format(request.reqid, group))
+                    continue
 
             # Only interested if request is in consistent state.
             selected = api.project_status_requests('selected')
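The repo_checker change above adds one exception to the readiness check: a staging in the `failed` state still counts as ready when none of its projects has broken packages, i.e. the failure is assumed to come from openQA alone. A simplified sketch of that decision, with a plain dict standing in for the staging-API status objects (the field names mirror the diff, but the dict shape itself is invented):

```python
# Sketch of the "failed, but only due to openQA" exception: readiness is
# granted for failed stagings with no broken packages in any project.
def is_ready(status):
    if not status:
        # Corrupted requests may yield no status; treat as not ready.
        return False
    state = str(status['overall_state'])
    if state in ('testing', 'review', 'acceptable'):
        return True
    if state == 'failed':
        # Ready only if every project is free of broken packages.
        return all(not p['broken_packages'] for p in status['projects'])
    return False

print(is_ready({'overall_state': 'failed',
                'projects': [{'broken_packages': []}]}))  # True
```

The real code walks `api.project_status_walk(status)` instead of a `projects` list, but the short-circuit logic is the same: one broken package anywhere clears the openQA-only flag.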

++++++ openSUSE-release-tools.obsinfo ++++++
--- /var/tmp/diff_new_pack.4mSpa1/_old  2017-11-12 18:11:08.637594846 +0100
+++ /var/tmp/diff_new_pack.4mSpa1/_new  2017-11-12 18:11:08.637594846 +0100
@@ -1,5 +1,5 @@
 name: openSUSE-release-tools
-version: 20171109.3d34370
-mtime: 1510235843
-commit: 3d34370f7904971b92fe687c3d2f702587f12974
+version: 20171112.b690943
+mtime: 1510484343
+commit: b6909435e9c6f4da1d1209378797b0ae78769473
 

