Hi gang,

According to #228 <https://fedorahosted.org/autoqa/ticket/228>
and discussions with kparal, I've rewritten the koji watcher
so it can also check the -pending tags in koji.

For the -pending tags, we're using quite a different querying model,
based on parsing the tag history (e.g.
# koji list-tag-history --tag='dist-f14-updates-pending'
).
The benefit of this solution is that we can catch situations
like:

1) package Foo is built at date XYZ. It gains the dist-f14-updates-pending tag
2) koji-watcher finds out "ha, new package, let's test it"
3) tests are OK
4) package Foo gains the dist-f14-updates-testing-pending tag
5) we'd like to run tests like depcheck on it, but because the 'built at'
   date, which the current watcher checks, is not altered, we miss the change

But by parsing the tag history, we only check for new 'events' in the tag
history (i.e. 'package foo got the -pending tag a few minutes ago'), independently
of the build time -> win :)
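In pseudo-code, the event-based check boils down to something like this (a
simplified sketch with made-up tagHistory-style entries; the real watcher
talks to the koji hub through koji_utils):

```python
def new_tag_events(history, prevtime):
    """Keep only still-active tag events newer than prevtime, oldest first.
    This looks at the event's create_ts, not the build's 'built at' date."""
    events = [e for e in history if e['create_ts'] > prevtime and e['active']]
    events.sort(key=lambda e: e['create_ts'])
    return events

# made-up tagHistory-style entries, for illustration only
now = 1000000
history = [
    {'name': 'foo', 'create_ts': now - 9000, 'active': True},  # old event, skipped
    {'name': 'bar', 'create_ts': now - 100, 'active': None},   # no longer tagged
    {'name': 'foo', 'create_ts': now - 50, 'active': True},    # fresh event, caught
]
fresh = new_tag_events(history, now - 3600)  # -> just the new 'foo' event
```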

The drawback is that, at the moment, querying the tag history takes time,
because koji sends the whole tag history (a lot of data). I'm discussing with
jkeating a minor change which would allow us to specify "give me the history
newer than date XYZ" (as is currently done when fetching new builds).

Because of this, the non-pending repos are still handled the 'old' way.

....

We also found ourselves in need of 'batch' scheduling - e.g. we don't want to
run depcheck for every package built; we'd just like to inform autoqa:
"hey, there is new stuff in the dist-f14-updates-pending tag, run stuff".
So there is a new post-koji-build-batch watcher.

The thing is that it is not really a watcher as such, only the hook.py
file. The reason for this is that querying koji is a time-consuming operation,
and there is no need to do it twice to get the same data. So there are two
different 'schedule jobs' methods in the watch-koji-builds watcher: one
schedules 'per package' jobs, and the second does it as a 'batch'.
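The split can be sketched like this (the function names and simplified build
dicts are made up for illustration; the real hook assembles 'autoqa' harness
calls from koji data fetched once):

```python
def fetch_new_builds():
    # stand-in for the single (expensive) koji query
    return {'dist-f14-updates-pending': [
        {'nvr': 'foo-1.0-1.fc14', 'epoch': None},
        {'nvr': 'bar-2.0-1.fc14', 'epoch': 1},
    ]}

def envr(build):
    # prepend the epoch only when it is non-zero
    return '%s:%s' % (build['epoch'], build['nvr']) if build['epoch'] else build['nvr']

def schedule_per_package(new_builds):
    # one harness call per build
    return [['autoqa', 'post-koji-build', '--kojitag', tag, envr(b)]
            for tag, builds in sorted(new_builds.items()) for b in builds]

def schedule_batch(new_builds):
    # one harness call per tag, with all the ENVRs as separate arguments
    return [['autoqa', 'post-koji-build-batch', '--kojitag', tag] +
            [envr(b) for b in builds]
            for tag, builds in sorted(new_builds.items())]

builds = fetch_new_builds()              # query koji once...
per_pkg = schedule_per_package(builds)   # ...then schedule per-package jobs
batch = schedule_batch(builds)           # ...and the batch jobs, from the same data
```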

(Note that the batch one is not yet tested - I just wanted to post the
patch so you can see the new 'concept'. The per-package part is tested
and considered to be working.)


Comments are more than welcome - looking forward to hearing from you!

Joza
diff --git a/hooks/post-koji-build-batch/README b/hooks/post-koji-build-batch/README
new file mode 100644
index 0000000..1fa953b
--- /dev/null
+++ b/hooks/post-koji-build-batch/README
@@ -0,0 +1,15 @@
+This hook is for scripts that run after a new build appears in Koji for a specific tag.
+
+This is for tests that need to check the whole repo (e.g. depcheck).
+
+The required arguments are:
+  --kojitag: the koji tag which has 'new packages' (e.g. dist-f14-updates-candidate)
+and a list of ENVRs specifying the new packages (these might not be used by the test)
+
+AutoQA tests can expect the following variables from post-koji-build-batch hook:
+  envrs: list of package ENVRs (epoch may be skipped if 0)
+  kojitag: koji tag applied to these packages
+
+NOTE:
+The watcher is actually in the post-koji-build directory, because querying Koji
+takes quite a lot of time, and we do not need to do it twice.
diff --git a/hooks/post-koji-build-batch/hook.py b/hooks/post-koji-build-batch/hook.py
new file mode 100644
index 0000000..fa4efa2
--- /dev/null
+++ b/hooks/post-koji-build-batch/hook.py
@@ -0,0 +1,52 @@
+# post-koji-build-batch hook.py
+#
+# Copyright 2010, Red Hat, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Authors:
+#   Will Woods <[email protected]>
+#   Josef Skladanka <[email protected]>
+
+import optparse
+
+name = 'post-koji-build-batch'
+
+def extend_parser(parser):
+    '''Extend the given OptionParser object with settings for this hook.'''
+    parser.set_usage('%%prog %s [options] ENVR [ENVR ...]' % name)
+    group = optparse.OptionGroup(parser, '%s options' % name)
+    group.add_option('-k', '--kojitag', default='',
+        help='Koji tag that has just been applied to this new build')
+    parser.add_option_group(group)
+    return parser
+
+def process_testdata(parser, opts, args, **extra):
+    '''Given an optparse.Values object and a list of args (as returned from
+    OptionParser.parse_args()), return a dict containing the appropriate key=val
+    pairs required by test's control files and test object. The hook can also
+    call parser.error here if it finds out that not all options are correctly
+    populated.'''
+
+    if not args:
+        parser.error('No ENVR was specified as a test argument!')
+    if not opts.kojitag:
+        parser.error('--kojitag is a mandatory option!')
+
+    # all positional arguments are ENVRs
+    envrs = args
+
+    testdata = {'envrs': envrs, 'kojitag': opts.kojitag}
+    return testdata
diff --git a/hooks/post-koji-build/watch-koji-builds.py b/hooks/post-koji-build/watch-koji-builds.py
index 4cc2d7f..33ee8d2 100755
--- a/hooks/post-koji-build/watch-koji-builds.py
+++ b/hooks/post-koji-build/watch-koji-builds.py
@@ -1,7 +1,8 @@
 #!/usr/bin/python
 #
-# watch-koji-builds.py - script to watch a set of koji tags for new builds and
-#                        kick off tests when new builds/packages are found.
+# watch-koji-builds.py - script to watch a set of koji tags for new builds
+#                        and kick off tests when new builds/packages 
+#                        are found.
 #
 # Copyright 2010, Red Hat, Inc.
 #
@@ -20,131 +21,329 @@
 # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
 #
 # Authors:
-#     Will Woods <[email protected]>
+#     Josef Skladanka <[email protected]>
 
 import os
 import sys
-import koji
 import time
 import pickle
-import xmlrpclib
 import subprocess
 import optparse
 from autoqa import koji_utils
 from autoqa.repoinfo import repoinfo
 
-# Directory where we cache info between runs
-cachedir = '/var/cache/autoqa/watch-koji-builds'
-cachefile = os.path.join(cachedir,"untagged")
-try:
-    os.makedirs(cachedir)
-except OSError, e:
-    if e.errno != 17: # already exists
-        raise
-# XXX configparser? /etc/koji.conf, section 'koji', item 'server'
-#     alternately: read from e.g. /etc/autoqa/autoqa.conf
-kojiserver = 'http://koji.fedoraproject.org/kojihub'
-
-def get_prevtime():
-    '''Returns the prevtime to use when looking for new builds.  Value will be
-       either 1) the mtime of the cachefile, or 2) if no cachefile is found, current
-       time - 3 hours.'''
-    try:
-        return os.stat(cachefile)[8] # mtime
-    except:
-        return time.time() - 10800
 
-def load_untagged():
-    '''Load the list of untagged builds seen by previous runs'''
-    try:
-        f = open(cachefile)
-        untagged = pickle.load(f)
+class KojiWatcher(object):
+    kojiserver = 'http://koji.fedoraproject.org/kojihub'
+    kojiopts = {} # Possible items: user, password, debug_xmlrpc, debug..
+    cachedir = '/var/cache/autoqa/watch-koji-builds/'
+    cachefile = os.path.join(cachedir, 'koji_history_prevtimes.cache')
+
+    def __init__(self, verbose = False, dry_run = False, prevtime = None):
+        self.dry_run = dry_run
+        self.verbose = verbose
+
+        if self.verbose:
+            print "Connecting to koji:", self.kojiserver
+        self.session = koji_utils.SimpleKojiClientSession(self.kojiserver, self.kojiopts)
+
+        # If the user did not set the --prevtime parameter, use cached values
+        if prevtime is None:
+            if self.verbose:
+                print "Loading prevtimes from cache"
+            self.default_prevtime = time.time() - 10800
+            self.prevtimes = self._load_prevtimes_from_cache() # returns dict
+        # but if the prevtime was set, we must make sure to use it.
+        else: 
+            self.default_prevtime = prevtime
+            self.prevtimes = {} # if this is empty, default_prevtime is used (see list_new_builds)
+        if self.verbose:
+            print "Loading taglist"
+        self.taglist, self.master_repos, self.pending_repos = self._load_repoinfo() # returns tuple of sets
+
+    def _load_repoinfo(self):
+        """
+        Loads the tags from repoinfo.conf using autoqa.repoinfo.
+
+        returns (taglist, master_repos, pending_repos)
+            taglist        =   Set of all tags found in repoinfo
+            master_repos   =   Set of repos which do not have a 'parent', i.e. the 'main' or 'stable' repos.
+                                Master repos do not have a corresponding '-pending' tag in koji, because
+                                no new packages get into these repos.
+                                These are internally used for grouping & filtering of builds.
+            pending_repos  =   Set of all '-pending' tags in koji.
+
+        """
+
+        taglist = set()
+        master_repos = set()
+        pending_repos = set()
+        for repo in repoinfo.repos():
+            # Always include the tag for the development branch (aka rawhide)
+            if repoinfo.get(repo, "collection_name") == "devel":
+                taglist.add(repoinfo.get(repo, "tag"))
+
+            # for all the repos do:
+            #     if repo has no parent (i.e. is base repo):
+            #         add '$repo$-updates-candidate' to taglist
+            #     else:
+            #         add '$repo$-pending' to taglist
+            #
+            # NOTE: This assumes that updates are tagged in the format
+            # 'dist-fNN-updates-candidate'
+            else:
+                if len(repoinfo.getparents(repo)) == 0:
+                    reponame = repoinfo.get(repo, "tag") + "-updates-candidate"
+                    taglist.add(reponame)
+                    master_repos.add(repoinfo.get(repo, "tag")) #save the 'master' repo for later filtering
+                else:
+                    reponame = repoinfo.get(repo, "tag") + "-pending"
+                    taglist.add(reponame)
+                    pending_repos.add(reponame)
+
+        if self.verbose:
+            print "\ntaglist:", taglist, "\n\n", "pending_repos:", pending_repos, "\n"
+        return taglist, master_repos, pending_repos
+
+    def _load_prevtimes_from_cache(self):
+        """
+        Returns stored prevtimes - i.e. the create_ts of the last package in the appropriate tag.
+        Return format = {tag : prevtime, ...}
+        """
+        try:
+            f = open(self.cachefile)
+            prevtimes = pickle.load(f)
+            f.close()
+            return prevtimes
+        except:
+            return {}
+
+    def _save_prevtimes_to_cache(self):
+        """
+        Saves self.prevtimes to self.cachefile.
+        This cache is used to determine the time from which to continue on the next run.
+        """
+        try:
+            os.makedirs(self.cachedir)
+        except OSError, e:
+            if e.errno != 17: # already exists
+                raise
+        f = open(self.cachefile, "w")
+        pickle.dump(self.prevtimes, f)
         f.close()
-        return untagged
-    except:
-        return []
-
-def save_list(pkglist):
-    '''Save a list of builds for a subsequent run'''
-    out = open(cachefile,"w")
-    pickle.dump(pkglist, out)
-    out.close()
-
-def new_builds_since(session, taglist, prevtime):
-
-    untagged_builds = []
-    tagged_builds = {}
-    for tag in taglist:
-        tagged_builds[tag] = []
-    builds = session.list_builds_since(prevtime)
-    builds += load_untagged() # untagged builds from previous run
-    for nvr in builds:
-        tags = set(session.tag_history(nvr))
-        if tags:
-            # Only check packages with tags from the taglist
-            for t in taglist.intersection(tags):
-                tagged_builds[t].append(nvr)
-        elif not session.build_failed(nvr):
-            # No tag yet but the build is still OK - check again later
-            untagged_builds.append(nvr)
-    if opts.verbose:
-        print "untagged: %s" % " ".join(
-            sorted([nvr['nvr'] for nvr in untagged_builds]))
-        print "tagged: %s" % " ".join(
-            sorted([nvr['nvr'] for list in tagged_builds.values() for nvr in list]))
-    return (tagged_builds, untagged_builds)
 
-if __name__ == '__main__':
-    # Set up the option parser
-    # TODO: optparse for prevtime
-    parser = optparse.OptionParser(description='Script to watch a set of koji \
-tags for new builds and kick off tests when new builds/packages are found.')
-    parser.add_option('--dryrun', '--dry-run', action='store_true',
-        help='Do not actually execute commands, just show what would be done')
-    parser.add_option('-p', '--prevtime', action='store',
-        type='float', default=get_prevtime(),
-        help='How far back to look for new builds (seconds since last epoch)')
-    parser.add_option('--verbose', action='store_true',
-        help='Print extra information')
-    (opts, args) = parser.parse_args()
 
-    # Using repoinfo, establish the set of tags to look for
-    taglist = set()
-    for repo in repoinfo.repos():
-        # Always include the tag for the development branch (aka rawhide)
-        if repoinfo.get(repo, "collection_name") == "devel":
-            taglist.add(repoinfo.get(repo, "tag"))
-        # otherwise, find all *-updates-candidate tags to include
-        # NOTE: This assumes that updates are tagged in the format
-        # 'dist-fNN-updates-candidate'
-        elif len(repoinfo.getparents(repo)) == 0:
-            taglist.add(repoinfo.get(repo, "tag") + "-updates-candidate")
-
-    if opts.verbose:
-        print "Looking up builds since %s (%s)" % (opts.prevtime, time.ctime(opts.prevtime))
-        print "Looking for builds with tags %s" % " ".join(sorted(taglist))
-
-    # Set up the koji connection
-    kojiopts = {} # Possible items: user, password, debug_xmlrpc, debug..
-    session = koji_utils.SimpleKojiClientSession(kojiserver, kojiopts)
-    untagged_builds = []
-    tagged_builds = []
-    exit_status = 0
-    try:
-        session.ensure_connection()
-        (tagged_builds, untagged_builds) = new_builds_since(session,
-                                            taglist, opts.prevtime)
-        if not opts.dryrun:
-            save_list(untagged_builds)
-        for tag, builds in tagged_builds.items():
-            for b in builds:
+    def new_builds_since(self, tag, prevtime):
+        """
+        Inspects the tag history in Koji, and returns all new "active" packages for that tag
+        which have create_ts > prevtime.
+
+        Internally used only for the -pending tags, because this tends to be sloooow for
+        -updates-candidate.
+
+        Return format = [koji_build_info, ...]
+        """
+        self.session.ensure_connection()
+        updates = []
+        # get history from koji & filter everything newer than prevtime
+        if self.verbose:
+            print "    Fetching tagHistory from koji"
+        updates = filter(lambda pkg: pkg['create_ts'] > prevtime, self.session.tagHistory(tag = tag))
+        if self.verbose:
+            print "    Sorting according to create_ts"
+        updates.sort(key = lambda pkg: pkg['create_ts'])
+        # sanity_check - filter just "active" builds
+        if self.verbose:
+            print "    Filtering out non-active builds"
+        non_active = filter(lambda pkg: pkg['active'] is None, updates)
+        #TODO: print non-active builds to stderr - these skipped our attention somehow
+        if self.verbose:
+            print "    Filtering active builds"
+        updates = filter(lambda pkg: pkg['active'] is not None, updates)
+
+        # update the appropriate prevtime in self.prevtimes
+        if len(updates) > 0:
+            self.prevtimes[tag] = updates[-1]['create_ts']
+
+        # sadly, tagHistory does not contain all the information we need
+        # to run the tests, so let's fetch the 'good' info
+        if self.verbose:
+            print "    Getting real info about builds"
+        for i in range(len(updates)):
+            b = updates[i]
+            updates[i] = self.session.getBuild("%s-%s-%s" % (b['name'], b['version'], b['release']))
+            # TODO: maybe this is bug in KOJI - check it out
+            # because getBuild returns 'id' instead of 'build_id'
+            updates[i]['build_id'] = updates[i]['id']
+
+        return updates
+
+    def new_builds_in_updates_candidate(self, taglist, prevtime):
+        """
+        Because listing tag history for -updates-candidate is slooow,
+        we're internally using this method to deal with updates-candidate tags.
+        This is the 'old' implementation from post-koji hook
+
+        Hopefully koji will get an enhancement which will allow us to query
+        only a subset of the tag history, so it would be significantly faster.
+
+        This is now being consulted with jkeating.
+
+        Return format = {tag: [new_builds], ...}
+        """
+
+        untagged_builds = []
+        tagged_builds = {}
+        for tag in taglist:
+            tagged_builds[tag] = []
+
+        if self.verbose:
+            print "    Fetching builds since:", prevtime
+
+        builds = self.session.list_builds_since(prevtime)
+        try:
+            builds += self.prevtimes['untagged_builds'] # untagged builds from previous run
+            if self.verbose:
+                print "    Adding previously stored untagged builds"
+        except KeyError:
+            # no previously stored untagged builds, pass
+            if self.verbose:
+                print "    No previously stored untagged builds"
+            pass
+
+        if self.verbose:
+            print "    Sorting according to completion_ts"
+        builds.sort(key = lambda pkg: pkg['completion_ts'])
+        try:
+            self.prevtimes['updates_candidate'] = builds[-1]['completion_ts']
+        except IndexError:
+            if self.verbose:
+                print "    No new builds since last check"
+            pass
+
+        if self.verbose:
+            print "    Sorting builds by tag"
+        for nvr in builds:
+            tags = set(self.session.tag_history(nvr))
+            if tags:
+                # Only check packages with tags from the taglist
+                for t in taglist.intersection(tags):
+                    tagged_builds[t].append(nvr)
+            elif not self.session.build_failed(nvr):
+                # No tag yet but the build is still OK - check again later
+                untagged_builds.append(nvr)
+        if self.verbose:
+            print "    Sorting by tag done"
+
+#        if self.verbose:
+#            print "    untagged: %s" % " ".join(
+#                sorted([nvr['nvr'] for nvr in untagged_builds]))
+#            print "    tagged: %s" % " ".join(
+#                sorted([nvr['nvr'] for list in tagged_builds.values() for nvr in list]))
+
+        self.prevtimes['untagged_builds'] = untagged_builds
+
+        return tagged_builds
+
+    def list_new_builds(self):
+        """
+        Creates a list of new builds for all the tags specified in repoinfo.conf
+        (see _load_repoinfo for more information). It also updates self.prevtimes
+        to the 'create_ts' of the last build in the appropriate tag.
+
+        There are several tags for each distro:
+          '*-updates-candidate', which contains
+             1) all updates which have not yet been pushed to '-testing' or '-updates'
+                and which are not yet in '-pending'
+             2) all updates which have the '-pending' tag.
+
+        Because builds in '-pending' also have the '-updates-candidate' tag,
+        we remove all the builds which are in '-pending' from the '-updates-candidate'
+        list (so we do not have duplicate entries).
+
+        Return format: {tag: [list of packages], ...}
+        """
+        history = {}
+        # this will list the tag history from koji for each tag
+        # we need to care just about the still 'active' builds
+        # so let's filter these, and store the list of 'active' packages
+        # into the dictionary with the tag as key
+        # so we'll have something like:
+        #   history['dist-f14-updates-pending'] = [list of active packages in f14-updates-pending]
+        #   ...
+        taglist = list(self.taglist)
+        for tag in filter(lambda x: x.endswith("-pending"), taglist):
+            if self.verbose:
+                print "Checking", tag
+            try:
+                prevtime = self.prevtimes[tag]
+            except KeyError:
+                prevtime = self.default_prevtime
+
+            history[tag] = self.new_builds_since(tag, prevtime)
+
+        # checking the non-pending tags the old way
+        taglist = filter(lambda x: not x.endswith("-pending"), taglist)
+        if self.verbose:
+            print "Checking the old way:", taglist
+
+        try:
+            prevtime = self.prevtimes['updates_candidate']
+        except KeyError:
+            prevtime = self.default_prevtime
+
+        # fetch the new builds
+        new_builds = self.new_builds_in_updates_candidate(set(taglist), prevtime)
+        # and store them to the history dict the 'right' way
+        for tag in new_builds.keys():
+            history[tag] = new_builds[tag]
+
+        # -pending tags should be a subset of -updates-candidate, so let's remove duplicates
+        if self.verbose:
+            print "Filtering duplicates"
+        for master in self.master_repos:
+            updates_candidate_tag = master + "-updates-candidate"
+            if updates_candidate_tag not in history.keys():
+                continue
+            for pending in filter(lambda x: x.startswith(master), self.pending_repos):
+                if pending not in history.keys():
+                    continue
+                for build in history[pending]:
+                    history[updates_candidate_tag] = filter(lambda x: x['build_id'] != build['build_id'], history[updates_candidate_tag])
+
+        # now the -updates-candidate & -pending groups should not contain the same packages
+        # let's return the dictionary for further usage
+        return history
+
+
+    def schedule_jobs(self, new_builds):
+        """
+        This should actually do the appropriate scheduling - 
+        i.e. calling the harness in the required way.
+
+        This should be the only method to reimplement in all
+        the different post-koji watchers (i.e. the batch watcher etc.)
+
+        Or we can implement several 'schedule_jobs' methods in
+        this particular instance, and run them in the run() method.
+        """
+        #raise NotImplementedError("Not implemented")
+
+        # Original post-koji build implementation
+        exit_status = 0
+        self.session.ensure_connection()
+        for tag in sorted(new_builds.keys()):
+            if self.verbose:
+                print "Scheduling jobs for", tag
+            for b in new_builds[tag]:
                 # Get a list of all package arches in this build
-                arches = [r['arch'] for r in session.listRPMs(b['build_id'])]
+                arches = [r['arch'] for r in self.session.listRPMs(b['build_id'])]
                 harnesscall = ['autoqa', 'post-koji-build',
                                '--kojitag', tag]
 
                 # Invoke autoqa with an --arch arg for each arch we care about
-                repoarches = set(repoinfo.getrepo_by_tag(tag).get("arches"))
+                p_tag = tag.replace('-pending', '')
+                repoarches = set(repoinfo.getrepo_by_tag(p_tag).get("arches"))
                 testarches = set(repoarches).intersection(arches)
 
                 for arch in testarches:
@@ -154,13 +353,111 @@ tags for new builds and kick off tests when new builds/packages are found.')
                 else:
                     envr = b['nvr']
                 harnesscall.append(envr)
-                if opts.dryrun:
+
+                if self.dry_run:
                     print " ".join(harnesscall)
                     continue
+
                 retval = subprocess.call(harnesscall)
                 if retval != 0:
                     exit_status = 10
 
+        return exit_status
+
+    def schedule_jobs_batch(self, new_builds):
+        """
+        This is alternate job scheduler for the 'virtual'
+        post-koji-build-batch watcher.
+
+        We decided, that pulling the information from koji twice
+        would not be efficient, so we schedule both 'per package' and
+        'batch' jobs using the data gathered once.
+        """
+        exit_status = 0
+        self.session.ensure_connection()
+        for tag in sorted(new_builds.keys()):
+            harness_arches = set()
+            harness_envrs = []
+            p_tag = tag.replace('-pending', '')
+            repoarches = set(repoinfo.getrepo_by_tag(p_tag).get("arches"))
+
+            for b in new_builds[tag]:
+                # do not query koji for arches, if we already have rpms from all
+                # possible arches from the repoarches
+                if harness_arches != repoarches:
+                    arches = [r['arch'] for r in self.session.listRPMs(b['build_id'])]
+                    testarches = set(repoarches).intersection(arches)
+                    harness_arches.update(testarches)
+
+                if b['epoch']:
+                    envr = '%s:%s' % (b['epoch'], b['nvr'])
+                else:
+                    envr = b['nvr']
+
+                harness_envrs.append(envr)
+
+            harnesscall = ['autoqa', 'post-koji-build-batch',
+                           '--kojitag', tag]
+            harnesscall.extend(harness_envrs)
+
+            if self.dry_run:
+                print " ".join(harnesscall)
+                continue
+
+            retval = subprocess.call(harnesscall)
+            if retval != 0:
+                exit_status = 10
+
+        return exit_status
+
+
+    def run(self):
+        """
+        Call this method to run the watcher.
+
+        It calls all the methods needed to schedule the jobs in autotest.
+        (or list them, if self.dry_run is set to true)
+        """
+        if self.verbose:
+            print "Looking for new builds"
+        new_builds = self.list_new_builds() # this updates self.prevtimes
+        if not self.dry_run:
+            if self.verbose:
+                print "Saving prevtimes to cache"
+            self._save_prevtimes_to_cache()
+
+        if self.verbose:
+            print "Scheduling jobs"
+
+        exit_status = []
+        exit_status.append(self.schedule_jobs(new_builds))
+        exit_status.append(self.schedule_jobs_batch(new_builds))
+        for e in exit_status:
+            if e != 0:
+                return e
+        return 0
+
+
+
+if __name__ == '__main__':
+    # Set up the option parser
+    # TODO: optparse for prevtime
+    parser = optparse.OptionParser(description='Script to watch a set of koji \
+tags for new builds and kick off tests when new builds/packages are found.')
+    parser.add_option('--dryrun', '--dry-run', action='store_true',
+        help='Do not actually execute commands, just show what would be done')
+    parser.add_option('-p', '--prevtime', action='store',
+        type='float', default=None,
+        help='How far back to look for new builds (seconds since last epoch)')
+    parser.add_option('--verbose', action='store_true',
+        help='Print extra information')
+    (opts, args) = parser.parse_args()
+
+
+    try:
+        watcher = KojiWatcher(verbose = opts.verbose, dry_run = opts.dryrun, prevtime = opts.prevtime)
+        exit_status = watcher.run()
+
     except KeyboardInterrupt:
         print "Exiting on keyboard interrupt."
         sys.exit(1)
_______________________________________________
autoqa-devel mailing list
[email protected]
https://fedorahosted.org/mailman/listinfo/autoqa-devel
