Sorry for being late on this, I had some email troubles.

On Jan 4, 2011, at 7:53 PM, Toshio Kuratomi wrote:

> On Tue, Jan 4, 2011 at 3:05 AM, Diez Roggisch <[email protected]> wrote:
>>> The OS dependency management is really not what I'm trying to be
>>> illustrative of here.  I'm trying to show that deploying a single rpm of
>>> all your dependencies is problematic compared to deploying with an rpm for
>>> each of your dependencies.  This is not a comparison of system package
>>> managers vs virtualenvs.
>> 
>> You claim it is problematic. I see no attempt at proving that. But see below.
>> 
> I've already said that in the first email
> """
> A little unrelated, putting all of the eggs you depend on into a single
> deb/rpm really is a recipe for disaster as it means that you need to manage
> the builds for all of the packages should you need to make changes to just
> one.
> """

In what respect is that more disastrous, or even more tedious, than having to 
manage the builds for all the *various* deb/rpm packages?

My scenario:

 - a virtualenv
 - my own software that I develop
 - one way to wrap it all up in a deb/rpm
 - in case of an updated package:

  1) "easy_install -U <the_package>"
  2) create the deb
  3) install it on the target machine

whereas in your scenario, you need:

 - checkouts for all packages you install
 - packaging scripts for all of them
 - in case of an updated package:

   1) update the working copy of the respective package
   2) package it as a deb/rpm
   3) package your own software, with updated dependencies, the way you 
describe below
   4) install both packages

This is more work, and updating the metadata in particular is tedious and much 
more error-prone than simply not having to do it at all.

So, again: where is the disaster looming?
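The difference in rebuild effort above can be sketched as a toy model (my own 
illustration, not from the thread; all package and module names are made up):

```python
# Toy model of the two packaging schemes: which artifacts must be
# rebuilt when one dependency (here "SQLAlchemy") releases an update.

# Scheme A: the whole virtualenv is shipped as a single deb/rpm.
single_package = {"myapp.deb": ["myapp", "SQLAlchemy", "Mako", "Pylons"]}

# Scheme B: one deb/rpm per dependency, plus one for the app itself.
per_dep_packages = {
    "myapp.deb": ["myapp"],
    "python-sqlalchemy.deb": ["SQLAlchemy"],
    "python-mako.deb": ["Mako"],
    "python-pylons.deb": ["Pylons"],
}

def rebuilds(packages, updated, app_pkg="myapp.deb"):
    """Packages to rebuild: every package containing the updated module,
    plus the app package (whose metadata pins the new version)."""
    hit = {name for name, contents in packages.items() if updated in contents}
    hit.add(app_pkg)
    return sorted(hit)

print(rebuilds(single_package, "SQLAlchemy"))    # → ['myapp.deb']
print(rebuilds(per_dep_packages, "SQLAlchemy"))  # → ['myapp.deb', 'python-sqlalchemy.deb']
```

In scheme A only the one bundled package is rebuilt; in scheme B both the 
dependency's package and the app package (with its updated metadata) are.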


>>>>  - manage to install it without provoking a conflict with the system's 
>>>> version
>>> 
>>> In either case, once Step 5 above is complete, installation is:
>>> 
>>> yum install -y python-sqlalchemy0.6
>>> 
>>> There will be no conflicts because the way we installed took care to
>>> use eggs to install in parallel.
>> 
>> This is the interesting bit. You say they are installed in parallel. How so 
>> exactly?
> 
> As written above, the following two steps:
>  CFLAGS="$RPM_OPT_FLAGS" %{__python} setup.py --with-cextensions bdist_egg
>  easy_install -m --prefix %{buildroot}%{_usr} dist/*.egg
> 
> That installs the package into the
> /usr/lib64/python2.7/site-packages/SQLAlchemy-0.6.5-py2.7.egg-info/
> directory and makes it available to the egg mechanism to find.
> 
>> And how do I determine which version I get if I do
>> 
>> $ python
>>>>> import sqlalchemy
>> 
> Taking that invocation literally, you would get the version installed
> by the OS vendor.  (In my example, that's SQLAlchemy 0.3)  If you add
> the code that I mentioned in the previous email, the code that you
> write will pick up version 0.6 of sqlalchemy when you do import
> sqlalchemy.  See the next section for more on that:
> 
>> Especially, if I'm going to deploy/develop code for both respective versions?
>> 
> 
> As I wrote before, to get the specific versions that you wish, you
> need to get the dependency (indirectly is fine and what I consider
> best practice) into __requires__ before you import pkg_resources.  For
> instance, if you wanted to import the 0.6 version of sqlalchemy in my
> example from the python interpreter you could do this:
> 
> $ python
>>>> __requires__ = ['SQLAlchemy >= 0.6']
>>>> import pkg_resources
>>>> import sqlalchemy
>>>> sqlalchemy.__version__
> '0.6.5'
> 
> In what I wrote before, I made the requirement for SQLAlchemy >= 0.6
> part of the egg metadata for the application (listing it in setup.py)
> and then set __requires__ to require the app.  This indirectly
> references the required version of SQLAlchemy and I consider it a
> better practice in general as it means setup.py is the only place you
> need to update your versions should things change (for instance if
> you update your SQLAlchemy requirement or if you require a specific
> version of Mako and Pylons as well).
> 
> See my previous message for where the places I've found necessary to
> set __requires__ for deploying via paster vs deploying via mod_wsgi
> are.


Sorry, I somehow missed the __requires__ bit, no idea why. It's interesting & 
good to know that debs/rpms can hook into the setuptools mechanism for 
pre-selecting versions; I didn't know that.
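The core of that parallel-install idea can be approximated with a simplified 
stand-in (my sketch, using plain sys.path rather than the real pkg_resources 
machinery; "fakealchemy" is a made-up module name): several versions of a 
module sit in separate directories, and the one whose directory is activated 
first wins.

```python
# Simplified illustration: two versions of a module installed side by
# side in separate directories; putting one directory at the front of
# sys.path selects which version "import" picks up.
import os
import sys
import tempfile

root = tempfile.mkdtemp()
for ver in ("0.3.0", "0.6.5"):
    d = os.path.join(root, "v" + ver)
    os.mkdir(d)
    # A stand-in module that only records its own version.
    with open(os.path.join(d, "fakealchemy.py"), "w") as f:
        f.write("__version__ = %r\n" % ver)

# "Require" the 0.6 series by activating its directory first.
sys.path.insert(0, os.path.join(root, "v0.6.5"))
import fakealchemy
print(fakealchemy.__version__)  # → 0.6.5
```

pkg_resources does essentially this path activation for you, driven by the 
egg metadata and __requires__, instead of by hand-edited paths.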

So I will refrain from thinking (and saying) that packages aren't up to the 
task; apparently they are.

But IMHO the whole procedure is rather elaborate, and an explicit repetition 
of the implicit version dependencies I get if I "groom" my virtualenv. No need 
to create anything for the dependencies, no need to work with working copies, 
and so forth.
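For reference, the single place those versions would live in your scheme is 
the application's setup.py, roughly like this (a hypothetical fragment; the 
name and pins are illustrative only):

```python
# Hypothetical setup.py for the application -- the one spot where
# dependency versions are declared in the per-package deb/rpm scheme.
from setuptools import setup

setup(
    name="myapp",       # illustrative application name
    version="1.0",
    install_requires=[
        "SQLAlchemy >= 0.6",
        # further pinned dependencies (Mako, Pylons, ...) would go here
    ],
)
```

With __requires__ pointing at the app, these pins then select the parallel 
egg versions indirectly.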

I'm sorry, but I'm still not convinced that this is a road I would like to 
take. If the effort involved were for "the greater good", meaning that it's 
done with the intent to make the packages available for 3rd parties to use - 
fine.

But for my own system, especially for e.g. my own versions of standard packages 
(we run a patched version of ToscaWidgets), it's all the work with none of the 
gain for me.

> Of course.  I was going to write something about that but decided not
> to because it seemed obvious that you, yourself, always carry this
> burden once you decide not to use the vendor supplied package.  The
> burden of monitoring for security and bugfixes, incompatibility in
> potential updates, etc becomes yours to carry whether you're building
> a single rpm for the dependent modules, many rpms, or deploying in a
> virtualenv.  From this point of view, I'd say that using system
> packages as much as possible is highly to your benefit (as that means
> that you have to watch for changes in the fewest upstreams possible).
> Of course, you need to settle on a Linux distribution that maintains
> API compatibility when doing updates so that you can thoughtfully
> build your requirements on top of it rather than fighting with it when
> it upgrades to an incompatible version and you have to scramble to
> either deploy an older compatible version or update your code.

Of course I'm aware that this burden is shared in both approaches. It's 
obvious, as you say. But as long as one doesn't strive for the above-mentioned 
general availability of packages for other users as well (or, in turn, uses 
such packages), it just adds to the effort.

And of course we also use e.g. psycopg2 from the system's distro, gaining the 
benefit of any updates it receives over the lifetime of the distribution 
version in use.

But we have also been bitten numerous times by implicit installs of other 
python packages as system packages that conflicted with our needs. It has not 
yet driven us to forgo system packages entirely via --no-site-packages 
isolated venvs. But again, it tips the scales.

In summary, I'm happy to hear that things work on a technical level, and if I 
suffered from a BOFH who forced me to install every dependency as its own deb, 
it's good to know I could do it.

But I'm still thinking that a self-managed venv (potentially packaged up for 
distribution purposes) serves our needs - and most people's - better, as it 
requires less work.

Diez

-- 
You received this message because you are subscribed to the Google Groups 
"TurboGears" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/turbogears?hl=en.
