Re: tab completion?
On Mar 4, 8:13 am, Siddhant <[EMAIL PROTECTED]> wrote:
> Hi people.
> I was just wondering if a tab-completion feature in python command
> line interface would be helpful?
> If yes, then how can I implement it?
> Thanks,
> Siddhant

Is this what you are looking for?

http://docs.python.org/lib/module-rlcompleter.html
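That module plus readline is all it takes. A minimal sketch (assuming the
readline module is available on your platform; put it in the file named by
PYTHONSTARTUP to get it in every interactive session):

# Enable tab completion for the interactive interpreter.
import readline
import rlcompleter

readline.parse_and_bind("tab: complete")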
Re: psycopg2: connect copy_from and copy_to
On Feb 20, 9:27 am, Thomas Guettler <[EMAIL PROTECTED]> wrote:
> Up to now I am happy with psycopg2.

Yeah. psyco is good.

> Why do you develop pg_proboscis?

[Good or bad as they may be]

1. Alternate interface ("greentrunk").

2. A non-libpq implementation yields better control over the wire, which allows:
   a. Custom communication channels (not limited to libpq's worldview).
   b. Leveraging protocol features that libpq's API does not fully accommodate
      (think bulk INSERTs using prepared statements with lower round-trip costs).
   c. Custom "sub-protocols" (I use this to implement a remote Python
      command/console [pdb'ing stored Python procedures, zero-network-latency
      scripts]).

3. Makes use of binary types to reduce bandwidth usage. (I started developing
   this before libpq had the ability to describe statements to derive statement
   parameter types and cursor column types(?), so using the binary format was
   painful at best.)

4. Has potential for being used in event-driven applications without threads.

5. Better control/understanding of running queries allows for automatic
   operation interrupts in exception cases. [The last two may be possible using
   libpq's async interfaces, but I'm not entirely sure.]

6. Arguably greater (well, *easier* is likely a better word) portability. While
   I have yet to get it to work with other Pythons (PyPy, IronPython, Jython),
   the potential to work with these alternate implementations is there. The
   real impediment here is missing/inconsistent features in those
   implementations: setuptools support; a missing os module in IronPython (I
   know, I know, and I don't care. Last time I checked, it's missing from the
   default install that's broken :P); Jython is still at 2.2, IIRC.

7. Bit of a license zealot. psycopg2 is [L?]GPL; pg_proboscis is BSD [or MIT if
   you like], like PostgreSQL. (Yes, Darcy's interface is BSD licensed, IIRC,
   but it too is libpq based.)

In sum, it yields greater control over the connection, which I believe will
lead to a more flexible and higher quality interface than a libpq solution.
[The primary pain point I've had is implementing all the authentication
mechanisms supported by PG.]
Re: distutils setup - changing the location in site-packages
On Feb 21, 9:33 am, imageguy <[EMAIL PROTECTED]> wrote:
> I have the setup script working, however, when I run the install, it
> places the module in the root of site-packages.
>
> The following is the details from the script
>
> setup (
>     name = "mymodule",
>     version = "0.1",
>     description = "My modules special description",
>     author = "me",
>     author_email = "[EMAIL PROTECTED]",
>     py_modules = ["exceptionhandler"]
> )

Yeah, you need to specify the module path, ie,
``py_modules = ["mytools.exceptionhandler"]``.

However, chances are that you want to use ``packages``:

setup (
    ...
    packages = ["mytools"]
)

This should include the ``exceptionhandler`` module in the package.
Additionally, you'll need to structure the project to have a ``mytools``
directory that contains an ``__init__.py`` file (the package initialization
module) and the ``exceptionhandler.py`` file:

projectdir/
|
|- mytools/
|  |
|  |- __init__.py
|  |- exceptionhandler.py
|
|- setup.py
...

This can be somewhat undesirable if you're using cvs, as chances are you'll
want to change the package's name at some point in the future. However, I have
found that the annoyance of empty directories littering the module's tree does
not outweigh the annoyance of not being able to use setuptools' ``develop``
command. Not to mention the simplicity of just using ``packages``.

> This is for development purposes. I would like to have a development
> copy of some "tools", but when ready and tested "publish" them to the
> site-packages where they can be included in "production" code.

setuptools' ``develop`` command can be handy for this; see the sketch below.

Hope this helps.
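A small sketch of that workflow (``mytools`` is just the example name from
above; adjust the metadata to taste). With the layout in place, ``setup.py``
only needs the ``packages`` line:

# setup.py - minimal sketch for the mytools/ layout shown above
from setuptools import setup

setup(
    name = "mytools",
    version = "0.1",
    description = "My tools",
    packages = ["mytools"],
)

Then ``develop`` links your working copy into site-packages while you hack on
it, and a plain ``install`` "publishes" it when you're done:

$ python setup.py develop
$ python setup.py install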
Re: psycopg2: connect copy_from and copy_to
On Feb 19, 9:23 am, Thomas Guettler <[EMAIL PROTECTED]> wrote:
> Yes, you can use "pg_dump production ... | psql testdb", but
> this can lead to dead locks, if you call this during
> a python script which is in the middle of a transaction. The python
> script locks a table, so that psql can't write to it.

Hrm? Deadlocks where? Have you considered a cooperative user lock? Are you just
copying data, ie, no DDL or indexes? What is the script doing? Updating a table
with unique indexes?

> I don't think calling pg_dump and psql/pg_restore is faster.

Normally it will be. I've heard people cite cases of COPY loading about a
million records per second into "nicely" configured systems. However, if
psycopg2's COPY is in C, I'd imagine it could achieve similar speeds. psql and
psycopg2, both being libpq based, are bound to have similar capabilities,
assuming the avoidance of interpreted Python code in feeding the data to libpq.

> I know, but COPY is much faster.

yessir.
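For reference, a rough sketch of a table-to-table copy using psycopg2's COPY
support (the connection parameters and table name are placeholders, and the
in-memory buffer is only sensible for smallish tables; use a temporary file
for anything big):

# Sketch: stream one table out of the production database and into the
# test database via COPY, buffering the rows in memory.
from StringIO import StringIO
import psycopg2

src = psycopg2.connect(host='localhost', database='production', user='pgsql')
dst = psycopg2.connect(host='localhost', database='testdb', user='pgsql')

buf = StringIO()
src.cursor().copy_to(buf, 'some_table')     # COPY some_table TO STDOUT
buf.seek(0)
dst.cursor().copy_from(buf, 'some_table')   # COPY some_table FROM STDIN
dst.commit()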
Re: Py_Finalize ERROR!
On Feb 19, 12:11 am, zaley <[EMAIL PROTECTED]> wrote:
> Py_Finalize ERROR!
>
> In my C++ program, python is embedded. I create one win thread to run
> embedded Python code.
> So at the beginning of the thread function I call "Py_Initialize" and at
> the end of the thread function I call "Py_Finalize".
> But after I had run the thread several times, the program crashed in
> the function "Py_Finalize".
> I can see the error occurred at function "PyObject_ClearWeakRefs" when
> "Py_Finalize" called "type_dealloc".
>
> Note: the python25.dll(lib) is built by VC6(SP6)

I think I ran into this error with my pgsql PL project at some point. I think I
"fixed" it by *not* calling Py_Finalize(). =)

However, I'm sure a report would be welcome, so if you don't mind going through
some hassle, I'd suggest making a trip to the bug tracker.
Re: psycopg2: connect copy_from and copy_to
On Feb 19, 8:06 am, Thomas Guettler <[EMAIL PROTECTED]> wrote:
> Any suggestions?

If you don't mind trying out some beta quality software, you can try my
pg_proboscis driver. It has a DBAPI2 interface, but to use COPY you'll need the
GreenTrunk interface:

import postgresql.interface.proboscis.dbapi2 as db

# yeah, it doesn't use libpq, so you'll need to "spell" everything out.
# And, no dsn either, just keywords.
src = db.connect(user = 'pgsql', host = 'localhost', port = 5432, database = 'src')
dst = db.connect(user = 'pgsql', host = 'localhost', port = 5432, database = 'dst')

fromq = src.greentrunk.Query("COPY tabl TO STDOUT")
toq = dst.greentrunk.Query("COPY tabl FROM STDIN")

toq(fromq())

It's mostly pure Python, so if you don't have any indexes on the target table,
you'll probably only get about 100,000 - 150,000 records per second (of course,
it depends on how beefy your CPU is). With indexes on a large destination
table, I don't imagine the pure-Python COPY being the bottleneck.

$ easy_install pg_proboscis

Notably, it currently (version 0.9) uses the qmark paramstyle, but I plan to
make 1.0 much more psyco. =)

[python.projects.postgresql.org, some of the docs are outdated atm due to a
recent fury of development =]
Re: psycopg2: connect copy_from and copy_to
> Doesn't PostGres come with Export/Import apps ? That would be easiest
> (and faster).

Yes, PostgreSQL core has import/export apps, but they tend to target general
administration rather than transactional loading/moving of data, ie, dumping
and restoring a database or schema. There is a pgfoundry project called
pgloader that appears to be targeting the more general scenario of, well,
loading data, but I imagine most people end up writing custom ETL for data
flow.

> Else,
>
> prod_cursor.execute('select data from production')
> for each_record in cursor.fetchall():
>     dev_cursor.execute('insert into testing')
>
> that is one way of doing it

In any high-volume case you don't want to do that. The current best practice
for loading data into an existing PostgreSQL database is: create a temp table,
load into the temp table using COPY, then merge from the temp table into the
destination (the last part is actually the tricky one ;). See the sketch below.
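A rough sketch of that temp-table pattern with psycopg2 (the ``items`` table,
its columns, and the file name are made up for illustration, and the merge SQL
is only one of several ways to write it):

# Sketch: stage incoming rows in a temp table via COPY, then merge them
# into the real table, all inside one transaction.
import psycopg2

conn = psycopg2.connect(host='localhost', database='testdb', user='pgsql')
cur = conn.cursor()

cur.execute("CREATE TEMP TABLE items_load (LIKE items) ON COMMIT DROP")

f = open('items.copy')              # tab-separated rows, as COPY expects
cur.copy_from(f, 'items_load')
f.close()

# Merge step: update rows that already exist, insert the ones that don't.
cur.execute("""
    UPDATE items SET name = items_load.name
    FROM items_load
    WHERE items.id = items_load.id
""")
cur.execute("""
    INSERT INTO items (id, name)
    SELECT l.id, l.name FROM items_load AS l
    WHERE NOT EXISTS (SELECT 1 FROM items WHERE items.id = l.id)
""")
conn.commit()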
Re: isgenerator(...) - anywhere to be found?
On Jan 22, 6:20 am, "Diez B. Roggisch" <[EMAIL PROTECTED]> wrote:
> For a simple greenlet/tasklet/microthreading experiment I found myself in
> the need to ask the question
>
> isgenerator(v)
>
> but didn't find any implementation in the usual suspects - builtins or
> inspect.

types.GeneratorType exists in newer Pythons, but I'd suggest just checking for
a send method. ;) That way, you can use something that emulates the interface
without being forced to use a generator: hasattr(ob, 'send').
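A tiny sketch of both approaches (the ``isgenerator`` helper is just an
illustrative name here, not a stdlib function):

import types

def producer():
    yield 1

g = producer()

# Strict check: True only for real generator objects.
print isinstance(g, types.GeneratorType)

# Duck-typed check: accepts anything that emulates the generator interface.
def isgenerator(v):
    return hasattr(v, 'send') and hasattr(v, 'next')

print isgenerator(g)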