Single string vs list of strings
Hi All, I have a need to determine whether a passed variable is a single string, or a list of strings. What is the most pythonic way to do this? Thanks. -Scott -- http://mail.python.org/mailman/listinfo/python-list
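A common idiom (a sketch, not the only answer) is to test for the string type explicitly before treating the value as a sequence, because strings are themselves iterable. The helper name below is made up for illustration; this is Python 3 (on Python 2, `basestring` covered both `str` and `unicode`):

```python
def as_string_list(value):
    # Strings are iterable, so check for them first; otherwise "abc"
    # would be treated as the sequence ["a", "b", "c"].
    if isinstance(value, str):
        return [value]
    return list(value)

as_string_list("hello")      # -> ["hello"]
as_string_list(["a", "b"])   # -> ["a", "b"]
```

Normalizing to a list at the function boundary like this means the rest of the code only ever deals with one shape.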
Eggs, VirtualEnv, and Apt - best practices?
Hello all,

Our development group at work seems to be heading towards adopting python as one of our standard systems languages for internal application development (yeah!). One of the issues that's come up is the problem of apt (.deb packages) vs eggs vs virtual environments. We're probably going to end up using Pylons or TurboGears for web-based apps, and I've recommended virtualenv, but one of the other developers has hit some inconsistencies when mixing systems with python installed from apt (all our servers are debian- or ubuntu-based) vs installed under virtualenv.

I have basically recommended that we install only the python base (core language) from apt, and that everything else be installed into virtual environments. But I wanted to check how other enterprises are handling this issue: are you building python from scratch, using specific sets of .deb packages, or some other process?

Any insight into the best way to have a consistent, repeatable, controllable development and production environment would be much appreciated. Suggestions on build/rollout tools (like zc.buildout, Paver, etc.) would also be appreciated.

Thanks!!!

-Scott
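As an aside on the "everything else in a virtual environment" recommendation: the core idea of virtualenv, one isolated site-packages per project that never touches the apt-managed system install, was later absorbed into the standard library as the `venv` module. A minimal sketch of that idea (the temp-dir path is just for illustration):

```python
import os
import tempfile
import venv

# Create an isolated environment for one project; packages installed
# into it do not touch the system (apt-managed) site-packages.
project_env = os.path.join(tempfile.mkdtemp(), "env")
venv.create(project_env, with_pip=False)  # with_pip=True would also bootstrap pip

# The environment carries its own interpreter plus a marker config file.
print(os.path.exists(os.path.join(project_env, "pyvenv.cfg")))  # -> True
```

Each project then installs its own dependency versions inside its own environment, which is what makes the setup repeatable across dev and production boxes.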
Re: Eggs, VirtualEnv, and Apt - best practices?
Diez B. Roggisch wrote:
> Dmitry S. Makovey schrieb:
>> Scott Sharkey wrote:
>>> Any insight into the best way to have a consistent, repeatable,
>>> controllable development and production environment would be much
>>> appreciated.
>> you have just described OS package building ;)

Except that we do need multiple different environments on one server, and we also have cases where our servers may be Windows.

>> I can't speak for everybody, but supporting multiple platforms (PHP,
>> Perl, Python, Java) we found that the only way to stay consistent is
>> to use OS-native packaging tools (in your case apt and .deb), and if
>> you're missing something, roll your own package. After a while you
>> accumulate plenty of templates to choose from when you need
>> yet-another-library not available upstream in your preferred package
>> format. Remember that some python tools might depend on non-python
>> packages, so the only way to make sure all that is consistent across
>> environments is to use unified package management.
> That this is a desirable goal can't be argued against. Yet two big
> hurdles often make it impractical to be dogmatic about that:
>
> - different OS. I for one don't know of a package management tool for
> Windows. And while our servers use Linux (and I as a developer as
> well), all the rest of our people use Windows. No use telling them to
> apt-get install python-imaging.

Exactly!

> - keeping track of recent developments. In the Python web-framework
> world for example (which the OP seems to be working with), things move
> fast. Or extremely slowly, regarding releases. Take Django - until 2
> months ago, there hadn't been a stable release for *years*. Virtually
> everybody was working with trunk. And given the rather strict
> packaging policies of debian and consorts, you'd be cut off from
> recent developments as well as from bugfixes.

Very much the case. Most of debian's packages for python are woefully out of date, it seems. And then we're at the whim of the OS provider as to when updates happen, rather than updates being controlled by our staff.
I am very interested in the eggbasket project - that's something that's been needed for a while. And I'm aware of the setuptools fork, and the discussion on the distutils-sig mailing list. Thanks. -Scott
naive packaging question
Hello all,

I've read a number of the python books and several online tutorials about modules and packaging, but none of them addresses this issue, so I thought I'd ask here...

I am building a library for use in an internal project. This library is the client-side interface to a REST-ful service that provides access to parts of our accounting database. BUT, we are pretty sure that the accounting database, and hence the service implementation, will change in the future. So I want to design a generic (abstract) api for fetching various info from the accounting db, but isolate the specific details into a module/package that can be changed in the future (and co-exist with the old one).

I've designed a generic api class, with functions to fetch the various info into python data structures (mostly lists of dictionaries, some just single values), and an interface-specific version that follows that same api and is derived from the generic api. I'm a bit unclear on the best way to implement the module and package. Here's the directory structure that I've got so far:

    project/            top-level directory
        setup.py
        company/        eventually, we'll have other modules
            __init__.py
            error.py    some classes that will be used in all
            log.py      modules
            acctdb/     the acct db interface directory
                __init__.py
                api.py       the abstract class def (derived from object)
                specific.py  the specific implementation, derived from the api base class

For argument's sake, let's call the base class (defined in api.py) 'BaseClass', and the specific implementation 'SpecificClass(BaseClass)'. So, in acctdb/__init__.py, do I do something like this:

    if SPECIFIC_CLASS:
        from company.acctdb.specific import SpecificClass as BaseClass

with the idea that at some point in the future I'd designate a different class in some other way? Hopefully this is enough info for you to see what I'm trying to accomplish.
It's a bit like the DB interfaces, where there is a generic DB API, and then the different drivers to implement that API (MySQL, etc). Thanks for any suggestions! -scott
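A minimal sketch of the pattern described above, with the three modules collapsed into one file for illustration (the class names are hypothetical stand-ins; in the real layout each section would live in its own file under company/acctdb/):

```python
# api.py -- the abstract interface that callers program against
class BaseAcctDB:
    def fetch_accounts(self):
        """Return a list of account dicts."""
        raise NotImplementedError

# specific.py -- one concrete backend, derived from the base class
class SpecificAcctDB(BaseAcctDB):
    def fetch_accounts(self):
        # In reality this would call the REST-ful accounting service.
        return [{"id": 1, "name": "demo account"}]

# __init__.py -- choose the implementation in exactly one place, so
# callers write `from company.acctdb import AcctDB` and never name the
# backend directly.  Swapping backends later means changing this one line.
AcctDB = SpecificAcctDB
```

This mirrors the DB-API analogy: the base class plays the role of the spec, each concrete class plays the role of a driver, and the alias in `__init__.py` (rather than rebinding the imported class to the name `BaseClass`) is the single switch point.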
Persistent HTTP Connections with Python?
Hello All, I am trying to write a python script to talk to an xml-based stock feed service. They are telling me that I must connect and login, and then issue refresh requests to fetch the data. This sounds a lot (to me) like HTTP 1.1 persistent connections. Can I do that with the urllib functions, or do I need to use the httplib functions for this kind of work? Pointers and/or sample code would be much appreciated. Thanks! -scott
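httplib (renamed http.client in Python 3) is the usual layer for this: you hold on to one connection object and reuse it across requests, and the TCP socket stays open as long as the server speaks HTTP/1.1 keep-alive and you drain each response body. The urllib-level helpers of that era opened a fresh connection per request. A sketch, with hypothetical endpoint paths (a real feed service would document its own login/refresh protocol):

```python
import http.client

class FeedClient:
    """Keeps one HTTP/1.1 connection open across login/refresh calls."""

    def __init__(self, host, port=80):
        # http.client reuses the underlying TCP socket between requests,
        # provided each response body is fully read before the next request.
        self.conn = http.client.HTTPConnection(host, port)

    def _get(self, path):
        self.conn.request("GET", path)
        resp = self.conn.getresponse()
        body = resp.read()  # drain the body so the connection can be reused
        return resp.status, body

    def login(self, user, password):
        # Hypothetical endpoint for illustration only.
        return self._get("/login?user=%s&pass=%s" % (user, password))

    def refresh(self):
        return self._get("/refresh")  # hypothetical endpoint

    def close(self):
        self.conn.close()
```

Each `refresh()` then rides the same socket as the login, which is the "connect, login, poll" shape the feed provider describes.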