Ok, so now I'm convinced that going the fork()/exec() route is the only sane option.
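To make the fork()/exec() idea concrete, here is a minimal sketch of how yum could pre-parse the urlgrabber-style settings (option "a" below) and ship them to an external helper in one serialized message on stdin. The helper binary, the JSON wire format, and the option names are all assumptions for illustration, not an existing yum or urlgrabber interface:

```python
import json
import subprocess

def build_request(url, dest, opts):
    """Serialize a download request the way yum could hand it to a helper.
    The JSON shape here is purely illustrative."""
    return json.dumps({"url": url, "dest": dest, "opts": opts})

def spawn_helper(helper, url, dest, opts):
    """fork()/exec() the helper and pass the pre-parsed options on stdin."""
    proc = subprocess.Popen([helper],
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)
    out, _ = proc.communicate(build_request(url, dest, opts).encode("utf-8"))
    return proc.returncode, out

# Example option dict covering the kind of settings discussed in the
# thread: reget strategy, keepalive, throttling, timeouts, proxies.
opts = {"reget": "simple", "keepalive": True, "throttle": 0,
        "timeout": 30, "proxies": {"http": "http://proxy:3128"}}
```

With this shape, option a) means yum keeps all the parsing/verification it does today and the helper is a dumb executor of one message, while option b) would move the config reading into the helper itself.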
> What setup code? The "download helper" only needs to know the
> information we are passing to urlgrabber ...

I mean stuff like:
- selecting the reget strategy
- http caching, keepalive, basic authentication
- ssl verification, client certificates
- proxies (http, ftp)
- throttling, timeouts

If the downloader is external, it has to either:
a) parse and verify in yum, then pass it to the helper, or
b) use an independent setup for the helper.

Doing a) keeps things as they used to be, but b) might make more sense,
to keep the information in only one place.

> There's drpm and metadata. For drpm as an end result we want a merged
> download path ...

> For metadata we really need good APIs from urlgrabber, and the
> experience of doing it for packages ... and a bunch of work :).

I think the async interface I just put in yumRepo.py might work well for
drpms too, but for metadata we'll likely need something different.
Putting a similar API in urlgrabber is impossible without rewriting the
stack of code above it to use some early-return + callback model. Right
now I think it's easier just to step over it, as I did.

_______________________________________________
Yum-devel mailing list
[email protected]
http://lists.baseurl.org/mailman/listinfo/yum-devel
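For reference, the "early return + callback" model mentioned above could look roughly like the toy sketch below: start() returns immediately after queuing the request, and completion is reported later through a callback driven by an event loop. All class and method names here are hypothetical stand-ins, not an existing urlgrabber API, and the loop "completes" requests without doing real I/O:

```python
import collections

class AsyncGrabber:
    """Toy illustration of an early-return + callback download API."""

    def __init__(self):
        self._queue = collections.deque()

    def start(self, url, on_done):
        # Early return: only record the request; nothing blocks here.
        self._queue.append((url, on_done))

    def run(self):
        # Event-loop stand-in: drain the queue and fire each callback.
        # A real loop would multiplex sockets and report actual status.
        while self._queue:
            url, on_done = self._queue.popleft()
            on_done(url, "OK")

results = []
g = AsyncGrabber()
g.start("repodata/repomd.xml", lambda url, st: results.append((url, st)))
g.start("repodata/primary.xml.gz", lambda url, st: results.append((url, st)))
g.run()
```

The point of the sketch is the control-flow inversion: every caller above urlgrabber would have to be rewritten to hand over a callback and give up its synchronous return value, which is why stepping over urlgrabber is the cheaper route for now.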
