[sqlalchemy] test/dialect/test_sqlite.py::TypeReflectionTest::()::test_round_trip_direct_type_affinity fails

2018-06-22 Thread Hans-Peter Jansen
Hi %,

Packaging SQLAlchemy 1.2.8 (and earlier) fails for openSUSE Tumbleweed, while it
succeeds on many earlier distributions, due to a single failing test with
Python 2.7.15, sqlite 3.24, pytest 3.6.0:

FAIL test/dialect/test_sqlite.py::TypeReflectionTest::()::test_round_trip_direct_type_affinity

Full log available here:

https://build.opensuse.org/build/home:frispete:python/openSUSE_Tumbleweed/x86_64/python-SQLAlchemy/_log

Python 3.6.5 testing isn't performed due to this issue.

Packaging overview:
https://build.opensuse.org/package/show/home:frispete:python/python-SQLAlchemy
Note: we're packaging and testing for all Python variants from a single spec.

For comparison, building with a slightly older environment (openSUSE Leap 
15.0) using Python 2.7.14, Python 3.6.5, sqlite 3.23.1, pytest 3.6.0 succeeds:

https://build.opensuse.org/build/home:frispete:python/openSUSE_Leap_15.0/x86_64/python-SQLAlchemy/_log

This strongly smells like an sqlite 3.24 issue. Nevertheless, I want to mention a
significant difference between these environments: Tumbleweed's default gcc is
8.1.1, while 15.0 still uses 7.3.1, so it might boil down to a compiler issue
somewhere under the covers...

Is somebody in the audience already using sqlite 3.24 and willing to run the
SQLAlchemy test suite and report back?

Thanks in advance,
Pete



[sqlalchemy] senseless warning messages escape python logging

2014-07-02 Thread Hans-Peter Jansen
Dear SQLAlchemistas,

this is an issue that my apps choke on from time to time, and it is only
_related_ to SQLA.

Although logging is set up correctly, some operations spit out senseless
warning messages like this:

/usr/lib/python2.6/site-packages/sqlalchemy/engine/default.py:324: Warning:
Data truncated for column 'sendungref1' at row 1
  cursor.execute(statement, parameters)

FYI, this is the sole message caught by cron from code that is exercised
heavily.

Sure, I understand that this is most probably triggered by MySQL-python 1.2.3
under the covers, and I know that the value given for this column was too big,
but without any context the message doesn't help in any way.

So strictly speaking, I'm barking up the wrong tree, but the question is: have
you figured out a way to either catch or suppress those warnings? Is there a
MySQL adapter that cooperates better in this respect?

Thanks for your insights,

Pete



Re: [sqlalchemy] Performance problem of simple orm usage

2014-06-29 Thread Hans-Peter Jansen
Dear Jonathan,

thank you for your cool remix of recommendations. Much appreciated.

On Thursday, 26 June 2014 15:09:03 Jonathan Vanasco wrote:
 In case this helps...
 
 This reminds me slightly of some RFID work I did years ago.  We had a lot
 of reads coming in from different units, several reads per unit per second.

Fun.

 I found the best way to handle writing was to just access the db directly,
 but kept the ORM on the read side.

I'm doing this in other projects, but writing SQL feels so '80s, doesn't it...
I even wrote my own ORM with a small 'r' (who hasn't?) in about 150 lines of
code years ago.

 I recall a few things from that experience.  some of this might be
 applicable to you:
 
 • doing a Select else Insert pattern was faster than Insert All and
 just letting the db constraint catch errors.  there were still a lot of
 exceptions from different transactions.  this is probably dependent on the
 database/version/size/etc.  but it's worth profiling.

Yes, that's what I found, too. In the rollback part of it.
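
(In ORM terms, the select-else-insert pattern boils down to something like this
-- a sketch; Reading and its key columns are hypothetical names:)

def get_or_add(session, unit_id, ts, value):
    # look the natural key up first; insert only when it is not there yet
    rec = session.query(Reading).filter_by(unit_id=unit_id, ts=ts).first()
    if rec is None:
        rec = Reading(unit_id=unit_id, ts=ts, value=value)
        session.add(rec)
    return rec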

 • we had a low number of devices, so just cached their data into the
 app's memory.  losing the extra select helped a bit.

 * our transaction got pared down to looking like this:
 
  begin;
  select record ;
  insert record;
  commit;

 * i think someone eventually migrated the db interaction to use a stored
 procedure.

An example of that would be interesting.
 
 * we kept an in-memory array of recently seen transactions.  it was just a
 simple list that would have elements inserted at 0 and then truncated.  we
 checked that list first, then hit the database if it wasn't seen .

I'm doing that, but using a dict with the unique index members as the key.
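
(Roughly like this -- a sketch of the dict-based recently-seen cache; the key
columns are hypothetical:)

_seen = {}

def is_new(unit_id, ts):
    # key on the members of the unique index, as described above
    key = (unit_id, ts)
    if key in _seen:
        return False
    _seen[key] = True
    return True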

 our performance greatly increased, and frustrations decreased, as we moved
 more logic out of the database and into python.
 
 on another RFID project, the lead engineer i worked with had a similar
 approach... however to keep performance better, he used an in-memory
 database to record data and then did periodic flushes to the real database
 after the data was smoothed out ( to compensate for known hardware issues
 ).

Oh, that case sounds pretty pathological...

After working through Mike's notes, I faced some bottlenecks on the sending
side, which are plain Raspberry Pis, and in the socket interface. Disabling
Nagle helped a lot, as did avoiding datetime.datetime.strptime(). I've learned
that some unavoidable datetime and timezone operations are still the most
expensive ones... Oh, well.
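
(For reference, disabling Nagle is a one-liner on the sender's socket -- a
sketch with a hypothetical endpoint:)

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# TCP_NODELAY turns off Nagle's algorithm, so small records go out immediately
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sock.connect(('collector.example.org', 9000))  # hypothetical collector endpoint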

Net result of this optimization quest: runtime down to 1/10th of the starting
point, and all that without getting too dirty. ;)

Cheers,
Pete



Re: [sqlalchemy] Performance problem of simple orm usage

2014-06-26 Thread Hans-Peter Jansen
Dear Mike,

sorry for not complying with the preferred reply behavior...

On Thursday, 26 June 2014 15:26:02 Mike Bayer wrote:
 On 6/26/14, 3:07 PM, Hans-Peter Jansen wrote:
  
  Obviously, some operation triggers the flush method with about the
  same consequences..
 
 OK, turn off autoflush - either Session(autoflush=False), or within the
 critical block, do with session.no_autoflush:

Okay, that makes a difference, shaving off about 40% of total runtime,
after dealing with identical records in one session...
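
(For the archives, the two variants Mike mentions look roughly like this -- a
sketch; engine, Record and key are stand-ins for the real objects:)

from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine, autoflush=False)  # session-wide
session = Session()

# ... or keep autoflush on and disable it only around the critical block:
with session.no_autoflush:
    rec = session.query(Record).filter_by(key=key).first()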

Still, attributes.py is highest in the profiles. I guess this is what you
called overkill in your first response. A helper for dealing with this
situation sounds interesting, though.

Thu Jun 26 20:41:50 2014    srelay.pstats

         55993702 function calls (54767154 primitive calls) in 533.479 seconds

   Ordered by: internal time
   List reduced from 1277 to 30 due to restriction <30>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    42160  292.536    0.007  292.910    0.007 attributes.py:1321(from_collection)
    63209   39.185    0.001   39.294    0.001 attributes.py:1255(as_state)
       39   39.037    1.001   39.037    1.001 {time.sleep}
    45816   30.949    0.001   30.949    0.001 {method 'query' of '_mysql.connection' objects}
      392   30.039    0.077   30.039    0.077 {method 'recv' of '_socket.socket' objects}
    42240    9.870    0.000  302.804    0.007 attributes.py:871(get_history)
    21083    4.284    0.000    6.500    0.000 attributes.py:868(__copy)
    42158    2.230    0.000    2.252    0.000 collections.py:711(__len__)
    21043    1.466    0.000    1.851    0.000 topological.py:51(find_cycles)
  4159430    1.369    0.000    1.369    0.000 {isinstance}
    24683    1.330    0.000  391.984    0.016 session.py:1790(flush)
642077/24712  1.292   0.000   12.840    0.001 visitors.py:74(_compiler_dispatch)
   197570    1.285    0.000    4.215    0.000 compiler.py:389(visit_label)
    45735    1.235    0.000    3.615    0.000 default.py:391(_init_compiled)
   246847    1.145    0.000    1.637    0.000 compiler.py:421(visit_column)
    24558    1.060    0.000  458.633    0.019 srelay.py:74(store_enos_rec)
    24636    1.048    0.000   12.709    0.001 compiler.py:1136(visit_select)
   197552    1.030    0.000    6.000    0.000 compiler.py:1019(_label_select_column)
    24636    1.015    0.000    1.793    0.000 result.py:171(__init__)
    63126    0.939    0.000    3.202    0.000 persistence.py:275(_collect_update_commands)
49200/49172   0.937    0.000    3.539    0.000 db_scheme.py:91(__repr__)
   147240    0.904    0.000    1.246    0.000 topological.py:15(sort_as_subsets)
   126514    0.814    0.000  348.487    0.003 unitofwork.py:411(execute)
    63191    0.771    0.000    1.984    0.000 dependency.py:67(per_property_flush_actions)
   622925    0.751    0.000    0.973    0.000 attributes.py:308(__get__)
   699682    0.745    0.000    0.745    0.000 state.py:185(dict)
   189809    0.726    0.000  343.341    0.002 unitofwork.py:177(get_attribute_history)
    73928    0.653    0.000    0.781    0.000 expression.py:3538(__init__)
    42170    0.648    0.000    0.648    0.000 {method 'clear' of 'dict' objects}
   126252    0.639    0.000    1.795    0.000 persistence.py:802(_sort_states)


Cheers,
Pete



Re: [sqlalchemy] Modification tracking

2013-08-22 Thread Hans-Peter Jansen
Dear Michael,

thanks for the detailed response.

On Wednesday, 21 August 2013 16:55:18 Michael Bayer wrote:
 On Aug 21, 2013, at 12:40 PM, Hans-Peter Jansen h...@urpla.net wrote:
  Hi,
  
  being new to SQLAlchemy, I try to get my way through it.
  
  In an application, I have rather elaborate needs to track changes.
  
  I've defined 3 classes with declarative, where the main class has
  relationships with two auxiliary classes, that refer to the main class
  with foreign references. All pretty basic stuff.
  
  Now I need to track all modifications to all fields, including the
  relationship list objects.
  
  What is the best way to accomplish this task with SQLAlchemy?
  
  Is there some boilerplate available to support this, or do I have to carry
  around two objects and compare them item by item?
 you can intercept changes on attributes using the attribute event system: 
 http://docs.sqlalchemy.org/en/rel_0_8/orm/events.html#attribute-events
 
 otherwise you can get at the changes on an attribute after the fact (but
 before a flush) using the history interface:
 http://docs.sqlalchemy.org/en/rel_0_8/orm/session.html?highlight=history#sq
 lalchemy.orm.attributes.History

Hmm, it looks like that needs to be applied to every column attribute...

 you can get a list of all attributes mapped using mapper.attrs:
 http://docs.sqlalchemy.org/en/rel_0_8/orm/mapper_config.html?highlight=mapp
 er.attrs#sqlalchemy.orm.mapper.Mapper.attrs

from here. Since I need to track all 'net' changes, that's rather impractical.
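
(Combining the two pointers, per-object 'net' changes can be collected before a
flush roughly like this -- a sketch using the 0.8 runtime inspection API;
net_changes is a hypothetical helper:)

from sqlalchemy import inspect

def net_changes(obj):
    # collect per-attribute history (deleted/added values) for a pending object
    changes = {}
    for attr in inspect(obj).attrs:
        hist = attr.history
        if hist.has_changes():
            changes[attr.key] = (hist.deleted, hist.added)
    return changes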

I've changed my code to hopefully follow best practices (from what I can
extract from the overwhelming docs).

That is: just change instance attributes in place, and check for dirtiness
later on with session.is_modified(obj, passive = True). I've chosen this
interface because the fact of *really* being modified is essential here.

This scheme would do just fine, but doesn't work as expected.

See attached code. Depending on the modification state of the parent (line 51),
the modification state of the child is detected (parent changed) or not (parent
unchanged).

In my real code, it's the other way around: modifications to relations are
detected (as I iterate through all of them), but changes to the parent aren't,
although the correct SQL UPDATE is executed after commit(). Since the change
isn't detected correctly, my app falls flat on its nose...

Do you have any idea on this one?

Cheers,
Pete

from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import relationship, backref, sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

ERR = 1

class Parent(Base):
    __tablename__ = 'parent'
    id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable = False, index = True)
    children = relationship('Child', backref = 'parent',
                            single_parent = True, # lazy = 'joined',
                            cascade = 'all, delete-orphan')


    def __repr__(self):
        cl = [repr(c) for c in self.children]
        return 'Parent(%r, children: %s)' % (self.name, ', '.join(cl))

class Child(Base):
    __tablename__ = 'child'
    id = Column(Integer, primary_key=True)
    name = Column(String(100), nullable = False, index = True)
    parent_id = Column(Integer, ForeignKey('parent.id'), nullable = False)

    def __repr__(self):
        return 'Child(%r)' % (self.name)

if __name__ == '__main__':
    engine = create_engine('sqlite://', echo = True)
    Base.metadata.create_all(engine)
    session = sessionmaker(engine, expire_on_commit=False)()

    def pr(obj, exp, msg):
        res = session.is_modified(obj, passive = True)
        print msg + ',', 'expected: %s,' % exp, 'outcome: %s,' % res, exp == res and 'okay' or 'FAILED'

    p = Parent(name = 'pa')
    c1 = Child(name = 'li')
    p.children.append(c1)

    print 'Starting with:', p

    session.add(p)
    session.commit()

    pr(p, False, 'initial session committed')

    if ERR:
        pr(p, False, 'parent not renamed')
    else:
        p.name = 'po'
        pr(p, True, 'parent renamed to po')

    c1.name = 'lo'
    pr(c1, True, 'child renamed to lo, testing child')
    pr(p, True, 'child renamed to lo, testing parent')

    session.commit()




Re: [sqlalchemy] Modification tracking

2013-08-22 Thread Hans-Peter Jansen
On Thursday, 22 August 2013 23:58:17 Hans-Peter Jansen wrote:
 Dear Michael,
 
Pardon, I'm using 0.8.2 ATM.
 
 Cheers,
 Pete



[sqlalchemy] Modification tracking

2013-08-21 Thread Hans-Peter Jansen
Hi,

being new to SQLAlchemy, I'm trying to find my way through it.

In an application, I have rather elaborate needs to track changes.

I've defined 3 classes with declarative, where the main class has relationships
with two auxiliary classes that refer to the main class with foreign references.
All pretty basic stuff.

Now I need to track all modifications to all fields, including the relationship
list objects.

What is the best way to accomplish this task with SQLAlchemy?

Is there some boilerplate available to support this, or do I have to carry
around two objects and compare them item by item?

TIA,
Pete



Re: [sqlalchemy] 0.7.10 build test failures

2013-07-15 Thread Hans-Peter Jansen
On Monday, 15 July 2013 13:19:55 Michael Bayer wrote:
 issue http://www.sqlalchemy.org/trac/ticket/2782 is added to merge this to
 all three branches

patch for 0.7 attached to the ticket.

tested-by: Hans-Peter Jansen h...@urpla.net

Thanks, Mike. This nice experience will encourage me to report some more 
issues, and tackle upgrading the package to 0.8.xx.

Cheers,
Pete

 On Jul 15, 2013, at 1:14 PM, Michael Bayer mike...@zzzcomputing.com wrote:
  oh I know, you're running python setup.py test. I'd strongly
  recommend running with the sqla_nose.py runner directly, however this
  issue is a setuptools one, adding this to setup.py will fix:
  
  
  # Hack to prevent TypeError: 'NoneType' object is not callable error
  # in multiprocessing/util.py _exit_function when running `python
  # setup.py test` (see
  # http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)
  
  try:
 import multiprocessing
  
  except ImportError:
 pass
  
  Seems like this might not be in the current master so this can be added.
  
  On Jul 15, 2013, at 1:08 PM, Michael Bayer mike...@zzzcomputing.com 
wrote:
  im trying to recall if I've seen that one before.   how are tests being
  run?   this error does not occur in any environment for me. 
  On Jul 14, 2013, at 6:18 PM, Hans-Peter Jansen h...@urpla.net wrote:
  On Sonntag, 14. Juli 2013 17:38:12 Michael Bayer wrote:
  you can get that right here:
  
  www.sqlalchemy.org/changelog/CHANGES_0_7_11
  
  Thanks, great. That fixed all tests, but this  is left still:
  
  [  156s]
  --
  [  156s] Ran 4075 tests in 148.973s
  [  156s]
  [  156s] OK (SKIP=140)
  [  156s] Error in atexit._run_exitfuncs:
  [  156s] Traceback (most recent call last):
  [  156s]   File /usr/lib64/python2.7/atexit.py, line 24, in
  _run_exitfuncs [  156s] func(*targs, **kargs)
  [  156s]   File /usr/lib64/python2.7/multiprocessing/util.py, line
  284, in _exit_function [  156s] info('process shutting down')
  [  156s] TypeError: 'NoneType' object is not callable
  [  156s] Error in sys.exitfunc:
  [  156s] Traceback (most recent call last):
  [  156s]   File /usr/lib64/python2.7/atexit.py, line 24, in
  _run_exitfuncs [  156s] func(*targs, **kargs)
  [  156s]   File /usr/lib64/python2.7/multiprocessing/util.py, line
  284, in _exit_function [  156s] info('process shutting down')
  [  156s] TypeError: 'NoneType' object is not callable
  
  Pete





[sqlalchemy] 0.7.10 build test failures

2013-07-14 Thread Hans-Peter Jansen
Dear Mike,

while building version 0.7.10 on the openSUSE Build Service, I noticed that
there are a bunch of test failures to care about.
My SQLAlchemy project:

https://build.opensuse.org/package/show/home:frispete:python/python-SQLAlchemy

The build logs are located here:

https://build.opensuse.org/package/live_build_log/home:frispete:python/python-SQLAlchemy/openSUSE_12.2/x86_64

Looking from the 248s mark downwards, you will find this:

[  248s] ==
[  248s] ERROR: test.engine.test_execute.EchoTest.test_echo_flag_independence
[  248s] --
[  248s] Traceback (most recent call last):
[  248s]   File /usr/lib/python2.7/site-packages/nose/case.py, line 381, in 
setUp
[  248s] try_run(self.inst, ('setup', 'setUp'))
[  248s]   File /usr/lib/python2.7/site-packages/nose/util.py, line 469, in 
try_run
[  248s] return func()
[  248s]   File 
/home/abuild/rpmbuild/BUILD/SQLAlchemy-0.7.10/./test/engine/test_execute.py, 
line 708, in setup
[  248s] self.buf = logging.handlers.BufferingHandler(100)
[  248s] AttributeError: 'module' object has no attribute 'handlers'
[  248s] 
[  248s] ==
[  248s] ERROR: test.engine.test_execute.EchoTest.test_levels
[  248s] --
[  248s] Traceback (most recent call last):
[  248s]   File /usr/lib/python2.7/site-packages/nose/case.py, line 381, in 
setUp
[  248s] try_run(self.inst, ('setup', 'setUp'))
[  248s]   File /usr/lib/python2.7/site-packages/nose/util.py, line 469, in 
try_run
[  248s] return func()
[  248s]   File 
/home/abuild/rpmbuild/BUILD/SQLAlchemy-0.7.10/./test/engine/test_execute.py, 
line 708, in setup
[  248s] self.buf = logging.handlers.BufferingHandler(100)
[  248s] AttributeError: 'module' object has no attribute 'handlers'
[  248s] 
[  248s] ==
[  248s] ERROR: test.engine.test_execute.EchoTest.testing_engine
[  248s] --
[  248s] Traceback (most recent call last):
[  248s]   File /usr/lib/python2.7/site-packages/nose/case.py, line 381, in 
setUp
[  248s] try_run(self.inst, ('setup', 'setUp'))
[  248s]   File /usr/lib/python2.7/site-packages/nose/util.py, line 469, in 
try_run
[  248s] return func()
[  248s]   File 
/home/abuild/rpmbuild/BUILD/SQLAlchemy-0.7.10/./test/engine/test_execute.py, 
line 708, in setup
[  248s] self.buf = logging.handlers.BufferingHandler(100)
[  248s] AttributeError: 'module' object has no attribute 'handlers'
[  248s] 
[  248s] ==
[  248s] ERROR: test.engine.test_execute.LogParamsTest.test_error_large_dict
[  248s] --
[  248s] Traceback (most recent call last):
[  248s]   File /usr/lib/python2.7/site-packages/nose/case.py, line 381, in 
setUp
[  248s] try_run(self.inst, ('setup', 'setUp'))
[  248s]   File /usr/lib/python2.7/site-packages/nose/util.py, line 469, in 
try_run
[  248s] return func()
[  248s]   File 
/home/abuild/rpmbuild/BUILD/SQLAlchemy-0.7.10/./test/engine/test_execute.py, 
line 545, in setup
[  248s] self.buf = logging.handlers.BufferingHandler(100)
[  248s] AttributeError: 'module' object has no attribute 'handlers'


Given that many of those seem to suffer from the same issue, there might
be an easy fix. For the time being, I don't fail the build on test errors, but
it would be nice to enable those tests and bail out on failures in order to
raise confidence in this package/project.

Some advice is appreciated.

Cheers,
Pete 





Re: [sqlalchemy] 0.7.10 build test failures

2013-07-14 Thread Hans-Peter Jansen
Hi Mike,

On Sunday, 14 July 2013 17:03:14 Michael Bayer wrote:
 hi Hans -
 
 this issue, a missing import that only triggers on certain platforms, has
 been fixed in all branches since 0.7.  But there's no 0.7.11 release
 planned at this time.

Thanks for the quick answer. Mind pointing me to the changeset of 0.7?

Cheers,
Pete

 
 - mike





Re: [sqlalchemy] 0.7.10 build test failures

2013-07-14 Thread Hans-Peter Jansen
On Sunday, 14 July 2013 17:38:12 Michael Bayer wrote:
 you can get that right here:
 
 www.sqlalchemy.org/changelog/CHANGES_0_7_11

Thanks, great. That fixed all tests, but this is still left:

[  156s] --
[  156s] Ran 4075 tests in 148.973s
[  156s] 
[  156s] OK (SKIP=140)
[  156s] Error in atexit._run_exitfuncs:
[  156s] Traceback (most recent call last):
[  156s]   File /usr/lib64/python2.7/atexit.py, line 24, in _run_exitfuncs
[  156s] func(*targs, **kargs)
[  156s]   File /usr/lib64/python2.7/multiprocessing/util.py, line 284, in 
_exit_function
[  156s] info('process shutting down')
[  156s] TypeError: 'NoneType' object is not callable
[  156s] Error in sys.exitfunc:
[  156s] Traceback (most recent call last):
[  156s]   File /usr/lib64/python2.7/atexit.py, line 24, in _run_exitfuncs
[  156s] func(*targs, **kargs)
[  156s]   File /usr/lib64/python2.7/multiprocessing/util.py, line 284, in 
_exit_function
[  156s] info('process shutting down')
[  156s] TypeError: 'NoneType' object is not callable

Pete





Re: [sqlalchemy] Problems with retroactive changes to Query objects

2013-01-30 Thread Hans Meine
Hi Michael!

On Tuesday, 29 January 2013, 12:23:01, Michael Bayer wrote:
 … it's usually going to complicate
 things if you mix implicit and explicit joins together.   The issue should
 be solved if you create a clean string of joins:
 
 s.query(results1.id, results2.id).select_from(results1).join(results2,
 results1.foo == results2.bar).join(...).join(...)

Thank you for the quick and helpful answer and the hint to select_from(); you 
are right that this solves my main problem (it does suffer from the same bad 
performance, but that's most likely a problem with my ERM).

 that way when you add another join to OutputParameterValue the query will
 not have any confusion in how to string this additional JOIN.

I am still not sure if this confusion isn't a bug or weakness in SA that 
should be either fixed or handled with a more explicit error message.

E.g. the join() could raise an error instead of silently throwing the first 
part of the FROM clause away.  (OTOH, I don't see why the latter has to happen 
in the first place.)

Thanks again, and have a nice day,
  Hans





Re: [sqlalchemy] Array with custom types in SqlAlchemy 0.8b2

2013-01-07 Thread Hans Lellelid
I am looking to adapt this code for a related array/type issue.  The code
from https://gist.github.com/4433940 works just fine for me (as expected)
when building/executing the stmt directly, but not when using the ORM.

When a row is created using the ORM, like this:

# snip
s = Session(bind=engine)
e = Example()
e.timestamp=datetime.datetime.utcnow(),
e.num=2,
e.guids = [uuid.uuid4(), uuid.uuid4()]
s.add(e)
s.commit()

I get an error like this:

snip
return getter(visitor)(self, **kw)
  File 
/home/hans/workspace/providence/env/lib/python2.7/site-packages/SQLAlchemy-0.8.0b2-py2.7-linux-x86_64.egg/sqlalchemy/sql/compiler.py,
 
line 760, in visit_bindparam
bind_expression = bindparam.type.bind_expression(bindparam)
  File test_array.py, line 38, in bind_expression
for uuid_val in val
TypeError: 'object' object is not iterable

(I can dump the full stack if that would be helpful.)

Indeed, inspecting that value reveals that it is simply an object() instance.
I'm not sure where that is being set or whether there is an obvious
workaround here.  I'm sure I'm simply missing something obvious when it comes
to dealing with native array types and ORM entity instances.

Thanks,
Hans 




Re: [sqlalchemy] Array with custom types in SqlAlchemy 0.8b2

2013-01-07 Thread Hans Lellelid
Thanks for the follow-up.

Upon further reading/reflection, I discovered that wrapping my string IP
address values in the psycopg2.extras.Inet object and then passing that in
-- and executing psycopg2.extras.register_inet() as with the UUID example
-- seems to work fine for both ORM and non-ORM use.

So it sounds like I can get the data into the right form without too many
hoops.  I will probably create a custom TypeDecorator that simply wraps
incoming values in the Inet type (or converts from python netaddr objects
into pg's Inet), but that seems like a relatively straightforward
customization.
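
(Such a TypeDecorator could look roughly like this -- a sketch; InetArray and
the exact impl type are assumptions, and it presumes register_inet() has been
called as described above:)

from psycopg2.extras import Inet
from sqlalchemy.types import TypeDecorator
from sqlalchemy.dialects.postgresql import ARRAY, INET

class InetArray(TypeDecorator):
    impl = ARRAY(INET)

    def process_bind_param(self, value, dialect):
        # wrap plain strings (or netaddr objects) in psycopg2's Inet on the way in
        if value is None:
            return value
        return [Inet(str(v)) for v in value]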

Thanks again!
Hans


On Monday, January 7, 2013 10:09:53 AM UTC-5, Michael Bayer wrote:

 change again, that doesn't work.

 Upon reflection, I think the case here is that there's no alternative but 
 to make sure psycopg2 can properly format the contents of the ARRAY itself. 
  This is because SQLAlchemy is producing a completed INSERT statement for 
 preparation, without the parameters actually filled in:

 INSERT INTO table (x) VALUES (%(x)s)


 At that point, SQLAlchemy is done producing strings, and sends it off to 
 psycopg2 along with an array value for x.   So there is no opportunity 
 here for a user-defined bind expression generator to further modify the 
 expression above.More fundamentally, the above statement can be called 
 using DBAPI executemany(); each value for x can be an array of a 
 *different* length. So it's totally up to psycopg2 here to handle this 
 case, and it even suggests that prepared statements can't be used very 
 effectively with Postgresql ARRAY types (psycopg2 doesn't use prepared 
 statements AFAIK).



 On Jan 7, 2013, at 9:58 AM, Michael Bayer wrote:

 correction, this seems to work, though will try to improve:

 class UUID_ARRAY(TypeDecorator):
     impl = ARRAY(UUID, dimensions=1)

     def bind_expression(self, bindvalue):
         if bindvalue.callable:
             val = bindvalue.callable()
         else:
             val = bindvalue.value
         if val is None:
             val = []
         elif not hasattr(val, '__iter__'):
             return bindvalue
         return array(
             cast(literal(str(uuid_val)), UUID())
             for uuid_val in val
         )




 On Jan 7, 2013, at 9:55 AM, Michael Bayer wrote:

 this is ticket http://www.sqlalchemy.org/trac/ticket/2648 and cannot be 
 worked around at this time.   If you're working with arrays of UUID I'd 
 recommend using psycopg2 type processors, as the previous poster has had 
 success with.



 On Jan 7, 2013, at 9:39 AM, Hans Lellelid wrote:

 I am looking to adapt this code for a related array/type issue.  The code 
 from https://gist.github.com/4433940 works just fine for me (as expected) 
 when building/executing the stmt directly, but not when using the ORM.

 When row is created using ORM, like this:

 # snip
 s = Session(bind=engine)
 e = Example()
 e.timestamp=datetime.datetime.utcnow(),
 e.num=2,
 e.guids = [uuid.uuid4(), uuid.uuid4()]
 s.add(e)
 s.commit()

 I get an error like this:

 snip
 return getter(visitor)(self, **kw)
   File 
 /home/hans/workspace/providence/env/lib/python2.7/site-packages/SQLAlchemy-0.8.0b2-py2.7-linux-x86_64.egg/sqlalchemy/sql/compiler.py,
  
 line 760, in visit_bindparam
 bind_expression = bindparam.type.bind_expression(bindparam)
   File test_array.py, line 38, in bind_expression
 for uuid_val in val
 TypeError: 'object' object is not iterable

 (I can dump in full stack if that would be helpful.) 

 Indeed, inspecting that reveals that it is simply an object() instance. 
  I'm not sure where that is being set or whether there is an obvious 
 workaround here.  I'm sure I'm simply missing obvious when it comes to 
 dealing with native array types and ORM entity instances.

 Thanks,
 Hans 


[sqlalchemy] Re: Column order with declarative base and @declared_attr

2011-07-21 Thread Hans-Martin
 The order is determined by the order in which the actual Column constructor 
 is called, and an ordering token is applied.    When a mixin is used, the 
 Column is copied from the mixin, but it's likely that the ordering token is 
 preserved.   You'd declare them on Foo directly without using a 
 @declared_attr function.  Columns that have no ForeignKey can be created in 
 this way.

Yip, did that first -- but then these always appeared before any columns
declared via @declared_attr (no surprise, if my very limited understanding of
SQLAlchemy's internals is correct). Unfortunately, I'd like to have the columns
with foreign keys up front :-(
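
(For context, the mixin arrangement under discussion looks roughly like this --
Owner, RefMixin, Foo and the column names are purely illustrative:)

from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.ext.declarative import declarative_base, declared_attr

Base = declarative_base()

class Owner(Base):
    __tablename__ = 'owner'
    id = Column(Integer, primary_key=True)

class RefMixin(object):
    # a column with a ForeignKey must be generated per subclass, hence @declared_attr
    @declared_attr
    def owner_id(cls):
        return Column(Integer, ForeignKey('owner.id'))

class Foo(RefMixin, Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    # plain columns can be declared directly on the class, but they then end up
    # ahead of the mixin-provided @declared_attr columns in the table definition
    name = Column(String(50))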

 There's no other way to approach this as the dictionary created by a new 
 class is unordered in Python (unless perhaps we added ordering tokens to the 
 usage of @declared_attr also, that could work).

Would be wonderful... Although it's more of a nuisance than a real
problem, it only really affects browsing through the tables after
all.

Thanks,
Hans-Martin




[sqlalchemy] Re: Python 3.1 + SQLAlchemy 0.6 or 0.7

2011-03-30 Thread Hans-Martin
 On Mar 21, 3:31 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  Yeah psycopg2 with python 3 / sqlalchemy is not supported yet.  The python 
  3 supported version of psycopg2 came out like, in the past two weeks.    It 
  will be 0.7 where its supported at all, hopefully soon as its a matter of 
  setting up some flags in the dialects.     In the meantime if you want to 
  start working with pg + SQLA + py3k you can try the python 3 version of 
  pg8000 - its not as nice or anywhere near as fast as psycopg2 but it works 
  in at least a rudimentary fashion.

  Also Python 3.2 is the latest version of py3k and actually has some fairly 
  dramatic behavioral differences vs. 3.1.   If you want to work with 3.2 and 
  let us know what quirks you find...

Below is one (Python 3.2, sqlalchemy 0.7b4 (downloaded snapshot 10min
ago), Debian). Best wishes, Hans-Martin

Traceback (most recent call last):
  File src/library/db_connect.py, line 60, in existing_schemata
tmp = session.execute(SELECT schema_name from
information_schema.schemata;)
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/orm/session.py, line 758, in execute
return self._connection_for_bind(bind,
close_with_result=True).execute(
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/orm/session.py, line 694, in _connection_for_bind
return self.transaction._connection_for_bind(engine)
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/orm/session.py, line 246, in _connection_for_bind
conn = bind.contextual_connect()
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/engine/base.py, line 2062, in contextual_connect
self.pool.connect(),
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/pool.py, line 208, in connect
return _ConnectionFairy(self).checkout()
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/pool.py, line 369, in __init__
rec = self._connection_record = pool._do_get()
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/pool.py, line 695, in _do_get
con = self._create_connection()
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/pool.py, line 173, in _create_connection
return _ConnectionRecord(self)
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/pool.py, line 257, in __init__
pool.dispatch.first_connect.exec_once(self.connection, self)
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/event.py, line 227, in exec_once
self(*args, **kw)
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/event.py, line 236, in __call__
fn(*args, **kw)
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/engine/strategies.py, line 162, in first_connect
dialect.initialize(c)
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/dialects/postgresql/base.py, line 793, in initialize
super(PGDialect, self).initialize(connection)
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/engine/default.py, line 171, in initialize
self._get_server_version_info(connection)
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/dialects/postgresql/base.py, line 964, in
_get_server_version_info
v = connection.execute(select version()).scalar()
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/engine/base.py, line 1259, in execute
params)
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/engine/base.py, line 1436, in _execute_text
statement, parameters
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/engine/base.py, line 1519, in _execute_context
result = context.get_result_proxy()
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/dialects/postgresql/psycopg2.py, line 198, in
get_result_proxy
return base.ResultProxy(self)
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/engine/base.py, line 2420, in __init__
self._init_metadata()
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/engine/base.py, line 2427, in _init_metadata
self._metadata = ResultMetaData(self, metadata)
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/engine/base.py, line 2283, in __init__
colname = dialect._description_decoder(colname)
  File lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/processors.py, line 69, in process
return decoder(value, errors)[0]
  File lib/python3.2/encodings/utf_8.py, line 16, in decode
return codecs.utf_8_decode(input, errors, True)
TypeError: 'str' does not support the buffer interface


[sqlalchemy] Re: Python 3.1 + SQLAlchemy 0.6 or 0.7

2011-03-30 Thread Hans-Martin
Sorry, I was not expecting it to work, just trying to follow your
suggestion.

SQLAlchemy Dev snapshot + pg8000 from 
http://pybrary.net/pg8000/dist/pg8000-py3-1.08.tar.gz
throws this error:

Traceback (most recent call last):
  File set_meta_info_levels_0_1.py, line 60, in module
set_up_schemata()
  File  src/library/db_connect.py, line 64, in set_up_schemata
if s not in existing_schemata():
  File  src/library/db_connect.py, line 60, in existing_schemata
tmp = session.execute(SELECT schema_name FROM
information_schema.schemata;)
  File  lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/orm/session.py, line 758, in execute
return self._connection_for_bind(bind,
close_with_result=True).execute(
  File  lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/orm/session.py, line 694, in _connection_for_bind
return self.transaction._connection_for_bind(engine)
  File  lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/orm/session.py, line 246, in _connection_for_bind
conn = bind.contextual_connect()
  File  lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/engine/base.py, line 2062, in contextual_connect
self.pool.connect(),
  File  lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/pool.py, line 208, in connect
return _ConnectionFairy(self).checkout()
  File  lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/pool.py, line 369, in __init__
rec = self._connection_record = pool._do_get()
  File  lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/pool.py, line 695, in _do_get
con = self._create_connection()
  File  lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/pool.py, line 173, in _create_connection
return _ConnectionRecord(self)
  File  lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/pool.py, line 254, in __init__
self.connection = self.__connect()
  File  lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/pool.py, line 314, in __connect
connection = self.__pool._creator()
  File  lib/python3.2/site-packages/SQLAlchemy-0.7b4dev-py3.2.egg/
sqlalchemy/engine/strategies.py, line 84, in connect
e, dialect.dbapi.Error) from e
sqlalchemy.exc.DBAPIError: (TypeError) connect() takes at least 1
argument (1 given) None None


I have a working setup with 2.7 + 0.6.6, but would like to port it to
3.2 relatively soon -- if I can help trying out things, just let me
know.

Best wishes,
Hans-Martin




[sqlalchemy] Re: Python 3.1 + SQLAlchemy 0.6 or 0.7

2011-03-30 Thread Hans-Martin
On Mar 30, 5:43 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 That's not an error I'm familiar with how to reproduce; it suggests an 
 incorrect string passed to create_engine().   Working code with Python 3.2 
 plus pg8000 looks like:

 from sqlalchemy import create_engine

 e = create_engine("postgresql+pg8000://scott:tiger@localhost/test")
 print(e.execute("select 1").scalar())

That might be it -- I have the Postgres installation set up to work
with 'ident sameuser' authentication, so after

e = create_engine("postgresql+pg8000:///test")

I get the same error as before.




[sqlalchemy] Re: Python 3.1 + SQLAlchemy 0.6 or 0.7

2011-03-30 Thread Hans-Martin
 you'd need an @ sign in there perhaps:

 db://@/test

Same error. But no big deal, I can wait for psycopg2 support in Python
3.2.




[sqlalchemy] Re: Python 3.1 + SQLAlchemy 0.6 or 0.7

2011-03-30 Thread Hans-Martin
On Mar 30, 7:13 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 well its not going to work there either most likely since we don't support 
 connecting without a hostname, probably.  You can't put localhost in there ?

Doesn't work either, same error. Adding my system username (and
password) then throws errors from the Postgres backend.

I also installed pg8000 for the 0.6.6 + Py 2.7 setup and this gives
the same error. With psycopg2, it works. Is there a reason that it
would stop doing so in 3.2?




[sqlalchemy] Inactive transactions and commit/rollback from Session

2009-11-12 Thread Hans Lellelid

Hi -

I'm using SA in conjunction w/ Pylons, in their default (0.9.7)
approach -- i.e. using scoped session classes created by calling
something like:

sm = orm.sessionmaker(autoflush=True, autocommit=False, bind=engine)
meta.Session = orm.scoped_session(sm)

I have a base controller that is designed to handle any session
cleanup -- i.e. do any pending commits and rollback and then remove
the session.

This is mostly working; however, I'm getting errors about "The transaction
is invalid" when attempting to perform commits.  I'm not sure whether this
actually causes a real problem -- and while I think a refactor is in order here
anyway, I would like to better understand what I'm doing wrong. :)

Here's my base controller:

class CommitDisposeBaseController(WSGIController):
    """A base controller class that automatically commits any
    outstanding SA changes, closes (removes) the SA Session,
    and disposes of the engine (or pool, if applicable)."""
    def __call__(self, environ, start_response):
        try:
            result = WSGIController.__call__(self, environ, start_response)
            if meta.Session.is_active:
                meta.Session.commit()
            return result
        except Exception, e:
            if meta.Session.is_active:
                meta.Session.rollback()
            _log.exception(e)
            raise
        finally:
            meta.Session.remove()
            meta.engine.dispose()

The errors ("The transaction is inactive.") are coming from my
meta.Session.commit() line.  Two questions:

1) Isn't this what meta.Session.is_active should be testing for?

2) Is there something special about using these methods as  class
methods instead of instance methods?  My assumption is that this
transparently grabs the current/active Session instance and executes
on that.   I do know, though, that some methods such as remove() are
not available on instances -- only on the class.  This is a bit
confusing.

Any help would be appreciated.

Thanks,
Hans



[sqlalchemy] Re: Inactive transactions and commit/rollback from Session

2009-11-12 Thread Hans Lellelid

Hi Michael -

Thanks for the response!

  1) Isn't this what meta.Session.is_active should be testing for?

 it is.   I'm not familiar with a codepath which can cause that to happen,
 and in fact even if the session isn't active, it still should be totally
 fine to call commit().  Try this as an example:
 snip

Ok -- I'm glad that this is the case; this does conform with what I'd
seen in other cases in the past.  Indeed -- I also noticed that
is_active seemed to be unnecessary.

 It seems like some unhandled exception on the connection itself might have
 occurred, i.e. after acquiring it via session.connection(), and the
 session wasn't notified of this failure (i.e. via rollback()).

Thank you; that's a helpful direction.  Indeed, in this particular
case, I had passed the connection (from session.connection()) into a
class which performed a number of operations.  While that was supposed
to be an atomic unit, perhaps an exception is not being properly
handled in there.

  I do know, though, that some methods such as remove() are
  not available on instances -- only on the class.  This is a bit
  confusing.

 remove() is a method on scoped_session() which is not a session.  It is a
 thread local container object with proxying behavior.   If it makes it
 less confusing, you can use scoped_session like this:

 # create session registry
 my_scoped_session = scoped_session(sessionmaker())

 # acquire the session from the registry
 session = my_scoped_session()

 # use the session
 session.commit()

 # remove session from the current scope
 my_scoped_session.remove()

Yeah, actually, I think that is a clearer convention.  Since we
typically fetch instances and work with them (rather than using
meta.Session.query(), for example), I think I will adopt this naming
convention.

Thanks again!
Hans



[sqlalchemy] Filtering lazy='dynamic' collection on class type (single-table inheritance)

2009-01-30 Thread Hans Lellelid

Hi,

I did a quick search but couldn't find the right way to do this in
SA.

For the sake of example, I have a many-to-many relationship between
Book and Reader.  The 'books' property of Reader is loaded using
lazy='dynamic'.  And, finally, Book is mapped with single-table
inheritance and has subclasses for various genres -- this is just an
example :)

This works:

reader = session.query(Reader).first()
mysterybooks = reader.books.filter(Book.genre==MysteryBook.DESCRIMINATOR_KEY)

However, this doesn't feel like the *right* way.  I experimented with
various forms of of_type() but didn't get it to work.  I'm sure I'm
just not fully understanding how that is supposed to work.

Thanks in advance!

Hans



[sqlalchemy] Re: Filtering lazy='dynamic' collection on class type (single-table inheritance)

2009-01-30 Thread Hans Lellelid



On Jan 30, 2:37 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jan 30, 2009, at 1:39 PM, Hans Lellelid wrote:





  Hi,

  I did a quick search but couldn't find the right way to do this in
  SA.

  For the sake of example, I have a many-to-many relationshp between
  Book and Reader.  The 'books' property of reader is loaded using
  lazy='dynamic'.  And, finally, Book is mapped with single-table
  inheritance and has subclasses for various genres.  -- this is just an
  example :)

  This works:
  reader = session.query(Reader).first()
  mysterybooks = reader.books.filter
  (Book.genre==MysteryBook.DESCRIMINATOR_KEY)

  However, this doesn't feel like the *right* way.  I experimented with
  various forms of of_type() but didn't get it to work.  I'm sure I'm
  just not fully understanding how that is supposed to work.

 of_type() is currently only implemented for a RelationProperty and  
 takes effect for EXISTS subqueries generated by has()/any() and  
 query.join().    So for the dynamic loader's list of books, what  
 you're doing is the best option we have at the moment.   it seems like  
 of_type() existing on Query as a generative method would be the way to  
 go here...until someone says query(A, B, C).of_type(A).
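
(For the archives, the forms where of_type() does take effect today look
roughly like this -- a sketch reusing the Reader/MysteryBook names from above:)

# narrow the join, or the EXISTS produced by any(), to the subclass
q = session.query(Reader).join(Reader.books.of_type(MysteryBook))
q = session.query(Reader).filter(Reader.books.of_type(MysteryBook).any())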

:) -- Ok, if what I'm doing is right, I'll leave it there.  It does
work fine after all, but obviously it is a little less resilient to me
changing my inheritance mapping strategy down the road.  (Not that I
plan to...)

Thanks!
Hans



[sqlalchemy] Using database functions in UPDATE statement

2008-09-19 Thread Hans Lellelid

Hi,

I'm using SA 0.4.6 and I'm having trouble using the result of a
database function / stored procedure in an UPDATE statement (being
constructed with the SQL expression language).  This happens to be for
using PostGIS columns; however, that is not relevant to the problem here.
I have tried doing some searching on this in the manual and in the user
group, but have not found an answer.

A simplified version of my python code looks like this:

mytable.update(mytable.c.id==idvar, {'geocolumn':
func.GeomFromText(wkt, 4326)})

The function is being quoted and some other weird stuff is happening,
such that I have result SQL that looks like:

UPDATE mytable SET geocolumn =
E'GeomFromText(''GeomFromText(:GeomFromText_1, :GeomFromText_2)'',
4326)' WHERE id = 1

So, without having looked into this more deeply (within SA), it would
appear that I need to tell SA to treat my parameters to the func as
literals (?) and somehow tell it not to escape the function itself (?).
Is there a better / correct way to do this?  I'd be happy to write up
some instructions for the "Functions" section of the manual if there is
a solution or best practice for this (I will need some guidance on how
to submit that documentation).

Thanks in advance!

Hans



[sqlalchemy] Re: Using database functions in UPDATE statement

2008-09-19 Thread Hans Lellelid

In response to myself ... I discovered that it was only when I was
specifying the function clauses as bind params in the conn.execute()
method that it was failing.  The code I presented should actually be
working correctly.

To be explicit, this seems to be working:

mytable.update(mytable.c.id==idvar, {'geocolumn':
func.GeomFromText(wkt, 4326)})

while this does not:

up = mytable.update(mytable.c.id==idvar)
conn.execute(up, {'geocolumn': func.GeomFromText(wkt, 4326)})

It wasn't obvious to me that these were not equivalent.

Sorry for the confusion!

Hans

On Sep 19, 12:52 pm, Hans Lellelid [EMAIL PROTECTED] wrote:
 Hi,

 I'm using SA 0.4.6 and I'm having trouble using the result of a
 database function / stored procedure in an UPDATE statement (being
 constructed with SQL expression lang).   This happens to be for using
 PostGIS columns; however, that is not relevant to the problem here.  I
 have tried doing some searching on this in the manual and in user
 group, but have not found an answer.

 A simplified version of my python code looks like this:

 mytable.update(mytable.c.id==idvar, {'geocolumn':
 func.GeomFromText(wkt, 4326)})

 The function is being quoted and some other weird stuff is happening,
 such that I have result SQL that looks like:

 UPDATE mytable SET geocolumn =
 E'GeomFromText(''GeomFromText(:GeomFromText_1, :GeomFromText_2)'',
 4326)' WHERE id = 1

 So, without having looked into this more deeply (within SA), it would
 appear that I need to tell SA to treat my parameters to the func as
 literals (?) and somehow tell it to not escape the function itself
 (?).,  Is there a better / correct way to do this?  I'd be happy to
 write up some instructions for the Functions section of the manual
 if there is a solution or best-practice for this (will need some
 guidance on how to submit that documentation).

 Thanks in advance!

 Hans



[sqlalchemy] Re: Using database functions in UPDATE statement

2008-09-19 Thread Hans Lellelid

 ah.   right, the parameter argument of execute() does not handle SQL  
 expressions as values - the keys are used to generate a corresponding  
 list of bind parameter clauses.   Earlier versions of SQLA were more  
 liberal in this regard but the current behavior was based around  
 simplifying and improving the performance of the execute/compile  
 chain.   I usually use the values() method for the VALUES clause -  
 table.update().where(whereclause).values(key=value, key=value).
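
Applied to the example from the earlier message, that generative form
would look roughly like this (same names as above, untested):

up = (mytable.update()
      .where(mytable.c.id == idvar)
      .values(geocolumn=func.GeomFromText(wkt, 4326)))
conn.execute(up)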

Thanks for the replies!  I like the method you propose (the values()
clause); it's much clearer to me.  Is that documented?  In general I do
find that there are often many ways to do the same thing in SQLAlchemy
and it's not immediately clear from reading the docs (though when you
start seeing variety in the examples it does help).   As I spend more
time with the tool (and figure some of these things out through trial/
error and list responses), I'd be happy to help fill in some of the
sparse areas of the documentation.  Is there a standard procedure for
submitting documentation patches?

Thanks,
Hans



[sqlalchemy] Re: custom type for PostGIS geometry

2008-07-23 Thread Hans

I just wanted to follow up on this thread to report success using the
patch provided below.

 You might want to try experimenting with a simple monkeypatch approach
 just to see what kind of results you get - this is just a guess based
 on a particular compiler hook we have that specifically processes
 columns being rendered within a select:

 from sqlalchemy.databases.postgres import PGCompiler

 def label_select_column(self, select, column, asfrom):
 if isinstance(col.type, MyGISType):
 return func.some_kml_function(col).label(column.name)
 else:
 return super(self, PGCompiler).label_select_column(select, 
 column,
 asfrom)

 PGCompiler.label_select_column = label_select_column

 if the above leads to decent results we can attempt to add a hook onto
 TypeEngine perhaps.

Here's the exact patch that I ended up applying (with only trivial
modifications from the suggestion provided above).

from sqlalchemy.databases.postgres import PGCompiler
from sqlalchemy import func

def label_select_column(self, select, column, asfrom):
    if isinstance(column.type, GeometryType):
        return func.ST_AsEWKT(column).label(column.name)
    else:
        return super(PGCompiler, self).label_select_column(select,
                                                           column, asfrom)

PGCompiler.label_select_column = label_select_column
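
For illustration only (not part of the patch itself; the table and
column names here are made up), a plain select against such a column
then picks up the conversion automatically:

from sqlalchemy import select

# assuming mytable has a GeometryType column named geo_col
stmt = select([mytable.c.geo_col], mytable.c.id == 1)
# renders roughly as:
#   SELECT ST_AsEWKT(mytable.geo_col) AS geo_col
#   FROM mytable WHERE mytable.id = ...
row = engine.execute(stmt).fetchone()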

I assume that I should probably be doing something to handle the
asfrom parameter; however, this hasn't been a concern for me yet in
my codebase so I've put off investigating how the original
label_select_column does that.

Thanks again for the help.

Hans



[sqlalchemy] custom type for PostGIS geometry

2008-06-30 Thread Hans

Hi,

Based on some posts I was able to find scouring Google, I was trying
to implement a custom type (GeometryType) for the PostGIS GEOMETRY
column.

Now, I should point out that I am not using sqlalchemy to build the
DDL, since the geometry columns are indirectly created by invoking
stored procedures (and I couldn't really figure out a clean way to
handle that).  I am, however, using these Geometry types in my Table
instances [mapping to already-created tables].

The problem is that I'd like to have sqlalchemy return KML for all
geometry types by default, but I don't know how to setup my type to
specify a SQL function that needs to be applied to the column in the
select clause.  Is this possible?  Alternatively, it would be
*perfect* to have additional *_kml properties added to the result
objects (one for each GEOMETRY column), but this seems even more
complex (?).

Here is my simple type class:

class GeometryType(sa.types.TypeEngine):

    def __init__(self, SRID, typeName, dimension):
        super(GeometryType, self).__init__()
        self.mSrid = SRID
        self.mType = typeName.upper()
        self.mDim = dimension

    def __repr__(self):
        return "%s:%s-%s(%s)" % (self.__class__.__name__, self.mType,
                                 self.mDim, self.mSrid)

    def get_col_spec(self):
        return "GEOMETRY"

    def convert_bind_param(self, value, engine):
        if not value:
            return None
        else:
            return 'GeomFromText(\'%s\',%s)' % (value, self.mSrid)

    def convert_result_value(self, value, engine):
        return 'AsKml(%s)' % value

I was expecting my convert_result_value method to operate on the SQL,
but I am assuming from looking at the generated SQL that this method
is actually going to operate on the raw data that was returned from
SQL.  So, is there a way to accomplish what I want?

Thanks in advance!

Hans



[sqlalchemy] Re: custom type for PostGIS geometry

2008-06-30 Thread Hans



On Jun 30, 1:24 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Jun 30, 2008, at 1:04 PM, Hans wrote:



  The problem is that I'd like to have sqlalchemy return KML for all
  geometry types by default, but I don't know how to setup my type to
  specify a SQL function that needs to be applied to the column in the
  select clause.  Is this possible?  Alternatively, it would be
  *perfect* to have additional *_kml properties added to the result
  objects (one for each GEOMETRY column), but this seems even more
  complex (?).

  I was expecting my convert_result_value method to operate on the SQL,
  but I am assuming from looking at the generated SQL that this method
  is actually going to operate on the raw data that was returned from
  SQL.  So, is there a way to accomplish what I want?

 TypeEngine currently has no hook that's consulted during SQL
 rendering, which is where you're looking for this to happen.Its
 not impossible to add such a feature but its not yet clear to me that
 TypeEngine is the appropriate place for this decision to be made.
 Its also not apparent yet how specifically within the SQL it would be
 appropriate to render the function, and how labeling would work -
 since SQLA, when it generates nested select statements, uses labels to
 track each column named against a table outwards...such as select
 foo, bar from (select a as foo, b as bar from table).  In the case of
 a SQL function, the column name becomes somewhat undefined until a
 label is applied.  SQLA nests SELECT statements very often.

 You might want to try experimenting with a simple monkeypatch approach
 just to see what kind of results you get - this is just a guess based
 on a particular compiler hook we have that specifically processes
 columns being rendered within a select:

 from sqlalchemy.databases.postgres import PGCompiler

 def label_select_column(self, select, column, asfrom):
 if isinstance(col.type, MyGISType):
 return func.some_kml_function(col).label(column.name)
 else:
 return super(self, PGCompiler).label_select_column(select, 
 column,
 asfrom)

 PGCompiler.label_select_column = label_select_column

 if the above leads to decent results we can attempt to add a hook onto
 TypeEngine perhaps.

Thank you -- I will give that a shot!

Hans



[sqlalchemy] understanding thread-safe connections, engines and sessions

2008-06-18 Thread Hans

Hi -

This is a general clarification question.  I have read the docs, but
am still a little unclear on the exact relationship between sessions,
engines, and connections -- at least as this concerns thread safety.

Background:  I'm using sqlalchemy in a web application context
(cherrypy) and hence figure that I should make sure that I'm doing
things in a thread-safe way :)

Currently, I am just creating a normal engine and then using that to
initialize a scoped session.  (I am setting Session =
scoped_session(sessionmaker(..., bind=engine)).)  I understand that
this is the way to ensure that Session instances are thread-local.
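
Concretely, my setup looks roughly like this (the URL is just a
placeholder):

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine('postgres://user:pass@localhost/mydb')

# scoped_session hands each thread its own Session instance
Session = scoped_session(sessionmaker(bind=engine))

# inside a request handler:
session = Session()
# ... ORM work ...
Session.remove()   # discard the thread-local session at request end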

Now, if I want to do some stuff with the engine or connection
directly, should I create my engine with context=threadlocal
additionally?  (And then pass that threadlocal engine off to my
scoped_session ?)

And when I get a connection from such an engine, do I need to always
specify that I want a contextual connection?  i.e. conn =
engine.contextual_connect() ?  (What happens if I just run
engine.connect() as normal with a threadlocal connection?)

I assume all of this is necessary in a multi-threaded environment?

Thanks for the help!

Hans



[sqlalchemy] Re: understanding thread-safe connections, engines and sessions

2008-06-18 Thread Hans

On Jun 18, 12:12 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Jun 18, 2008, at 11:43 AM, Hans wrote:



  Now, if I want to do some stuff with the engine or connection
  directly, should I create my engine with context=threadlocal
  additionally?  (And then pass that threadlocal engine off to my
  scoped_session ?)

 if you deal with the engine, it uses connection resources on an as
 needed basis, such as when you say engine.execute(somestatement).  the
 threadlocal context doesn't really grant any additional thread
 safety, its just a convenience method so that operations common to a
 thread can all implicitly share a transaction without the need to pass
 an explicit Connection object.  It is typically useful in a non-ORM
 oriented application.  In an ORM oriented application, the Session
 tends to be the home base for transactional/database context.

Ok, I think I understand that.  So threadlocal basically lets the
Engine reuse connection objects and is really for implicit
connections rather than creating and using connections explicitly.

 Im not entirely sure what you mean by pass that engine to my
 scoped_session.   You typically bind the underlying sessionmaker() to
 an engine when its first created, threadlocal or not, and you're done.

Yeah, that's what I meant -- I would bind the threadlocal engine to
the sessionmaker in the app setup phase.

  And when I get a connection from such an engine, do I need to always
  specify that I want a contextual connection?  i.e. conn =
  engine.contextual_connect() ?  (What happens if I just run
  engine.connect() as normal with a threadlocal connection?)

 the difference between connect() and contextual_connect() is only
 significant if you're using the threadlocal context.  I would advise
 not using threadlocal for now as you should familiarize with the
 basics first.   If you are using Connections, its best to stick with
 connect().  contextual_connect() is only for applications that are
 trying to interact with the engine in such a way that they participate
 within the threadlocal context if used that way.  But normally you
 wouldn't use explicit Connections with threadlocal since the whole
 point of threadlocal is to remove the need to pass around Connection
 objects.

Ok, this makes sense.  I guess I don't need to worry about the
threadlocal stuff if I'm always creating and disposing of connections.

For example, if I have a DAO method (classmethod) that performs some
custom SQL query, it sounds like the clean and simple way to do this
would be to just create and dispose of the connection within that
method.  Then I don't need to worry about thread safety issues
(right?).

def get_some_data(cls, params):
  conn = engine.connect()
  rs = conn.execute('stored proc or complex SQL here')
  return rs

(And then close connection via rs.close() in calling code, for
example.)

On a follow-up note, if I am creating a new connection (calling
engine.connect()) for every method, will that actually be creating a
new connection (assuming I'm not using strategy=threadlocal)?  If
so, is there a better convention?  -- I assume that something like a
singleton pattern would be right out, since that would imply sharing
between threads.

 I would comment that if you are working with ORM transactions and wish
 to have direct execute() access within the context of that
 transaction, you're best sticking with the execute() and connection()
 methods off of Session itself; this will ensure that those operations
 are participating with whatever transaction the current Session may
 have in progress.

Yeah, for the ORM stuff, I definitely plan to stick with the
ScopedSession and let that handle it all for me.  We have a lot of
legacy SQL at this point and for the sake of simplicity (and also
because rewriting some of this SQL is not going to be any prettier
with the sql builder api) it will be necessary to use connections
directly for some components of the application.

Thanks again for the help in understanding this.

Hans



[sqlalchemy] Re: understanding thread-safe connections, engines and sessions

2008-06-18 Thread Hans



On Jun 18, 1:18 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 On Jun 18, 2008, at 12:41 PM, Hans wrote:



  Ok, this makes sense.  I guess I don't need to worry about the
  threadlocal stuff if I'm always creating and disposing of connections.

  For example, if I have a DAO method (classmethod) that performs some
  custom SQL query, it sounds like the clean and simple way to do this
  would be to just create and dispose of the connection within that
  method.  Then I don't need to worry about thread safety issues
  (right?).

  def get_some_data(cls, params):
   conn = engine.connect()
   rs = conn.execute('stored proc or complex SQL here')
   return rs

 you could just as easily use engine.execute() for a case like the
 above, which does basically the same thing.  theres no thread safety
 issues inherent.

Ok, that's good to know.  I thought that somehow that was for more ORM-
related queries (or, rather, using the metadata Tables).  So I suppose
the only reason I'd need to work with a Connection is if I wanted the
transaction control.  For most of my single-statement cases, I don't
care about the transactions (or rather, I want each statement to run
in its own transaction).
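
For the single-statement case that means something as simple as this
(table name and SQL are placeholders):

from sqlalchemy import text

# no explicit Connection: the engine checks a connection out of the
# pool, runs the statement, and returns it once the result is closed
rs = engine.execute(text("SELECT * FROM mytable WHERE id = :id"), id=1)
rows = rs.fetchall()
rs.close()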

  On a follow-up note, if I am creating a new connection (calling
  engine.connect()) for every method, will that actually be creating a
  new connection (assuming I'm not using strategy=threadlocal)?  If
  so, is there a better convention?  -- I assume that something like a
  singleton pattern would be right out, since that would imply sharing
  between threads.

 when you say engine.connect() it draws the connection from the
 connection pool.  So whether or not a new connection is created at
 that point depends on the configuration and current state of the pool.

Ok, that's what I suspected, but wanted to make sure I understood that
correctly.  I read the section on connection pooling, but that seemed
to involve wrapping the low-level driver (e.g. psycopg) with a proxy
object.  Not sure if that's a completely different beast.  In any
event, it sounds like I can worry about the pooling (and general
resource performance) at a later point in the development -- and make
those changes without having to change the API my app uses to get
connections at all.
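
(For reference, the pool is tuned entirely through create_engine()
keyword arguments, so it can be adjusted later without touching the
calling code; the values below are only examples:)

from sqlalchemy import create_engine

engine = create_engine(
    'postgres://user:pass@localhost/mydb',
    pool_size=10,        # connections kept open in the pool
    max_overflow=5,      # extra connections allowed under load
    pool_recycle=3600,   # recycle connections older than an hour
)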

Thanks again -
Hans



[sqlalchemy] Re: Executing post-create_all() SQL causing rollback?

2008-06-17 Thread Hans

On Jun 17, 11:12 am, Michael Bayer [EMAIL PROTECTED] wrote:
 On Jun 17, 10:36 am, Hans [EMAIL PROTECTED] wrote:



  I'm not sure I completely understand what's going on, so I apologize
  if this is an obvious usage error :)

  I'm trying to add to the create_all() method (by wrapping in my own
  method) for some custom SQL to be executed after the basic
  metadata.create_all() method is finished.

  More specifically, I have something like this:

  def create_all(metadata, engine):
  metadata.create_all(engine)
  s = Session()
  s.begin()
  engine.func.AddGeometryColumn('mytable', 'geo_col', 4326, 'POINT',
  2).execute()
  s.commit()

 what's going on here is that the Session's transaction has no
 interaction with the call to engine.func.XXX().execute().  There is no
 thread-local context in progress by default.  Additionally, the
 specific call you are making, AddGeometryColumn, is not recognized
 by SQLA as an autocommit phrase so even the engine.execute() does
 not issue its own COMMIT.

 Your above app would work if you instead bound the Session to the
 engine (Session(bind=engine)) and used Session.execute().  Its a
 little weird to use a Session here at all since no ORM functionality
 is needed and I'd opt instead to use a Connection/Transaction combo,
 but even better in this specific case is to use a DDL() construct,
 which will execute automatically after a table create:

 http://www.sqlalchemy.org/docs/04/sqlalchemy_schema.html#docstrings_s...

Thanks so much for the reply!  -- I think I understand and I agree
that an ORM transaction is probably not necessary here at all.  I
*just* finished reading the Transaction page and now I understand that
distinction.  I'll use a connection directly.
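
Something along these lines, presumably (a sketch based on the quoted
code above; not tested):

from sqlalchemy import select, func

def create_all(metadata, engine):
    metadata.create_all(engine)
    conn = engine.connect()
    try:
        trans = conn.begin()
        try:
            # run the PostGIS procedure inside an explicit transaction
            # so the COMMIT actually happens
            conn.execute(select([func.AddGeometryColumn(
                'mytable', 'geo_col', 4326, 'POINT', 2)]))
            trans.commit()
        except:
            trans.rollback()
            raise
    finally:
        conn.close()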

Thanks again -
Hans



[sqlalchemy] Re: Executing post-create_all() SQL causing rollback?

2008-06-17 Thread Hans



On Jun 17, 11:15 am, Hans [EMAIL PROTECTED] wrote:
 On Jun 17, 11:12 am, Michael Bayer [EMAIL PROTECTED] wrote:



  On Jun 17, 10:36 am, Hans [EMAIL PROTECTED] wrote:

   I'm not sure I completely understand what's going on, so I apologize
   if this is an obvious usage error :)

   I'm trying to add to the create_all() method (by wrapping in my own
   method) for some custom SQL to be executed after the basic
   metadata.create_all() method is finished.

   More specifically, I have something like this:

   def create_all(metadata, engine):
   metadata.create_all(engine)
   s = Session()
   s.begin()
   engine.func.AddGeometryColumn('mytable', 'geo_col', 4326, 'POINT',
   2).execute()
   s.commit()

  what's going on here is that the Session's transaction has no
  interaction with the call to engine.func.XXX().execute().  There is no
  thread-local context in progress by default.  Additionally, the
  specific call you are making, AddGeometryColumn, is not recognized
  by SQLA as an autocommit phrase so even the engine.execute() does
  not issue its own COMMIT.

  Your above app would work if you instead bound the Session to the
  engine (Session(bind=engine)) and used Session.execute().  Its a
  little weird to use a Session here at all since no ORM functionality
  is needed and I'd opt instead to use a Connection/Transaction combo,
  but even better in this specific case is to use a DDL() construct,
  which will execute automatically after a table create:

 http://www.sqlalchemy.org/docs/04/sqlalchemy_schema.html#docstrings_s...

 Thanks so much for the reply!  -- I think I understand and I agree
 that an ORM transaction is probably not necessary here at all.  I
 *just* finished reading the Transaction page and now I understand that
 distinction.  I'll use a connection directly.

... and actually, after doing some more reading on DDL() I think that
will indeed prove to be the easiest and cleanest.
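
A sketch of that approach (names from the earlier example; not tested):

from sqlalchemy.schema import DDL

# attach the PostGIS call to the table so it runs right after its
# CREATE TABLE statement
DDL("SELECT AddGeometryColumn('mytable', 'geo_col', 4326, 'POINT', 2)"
    ).execute_at('after-create', mytable)

metadata.create_all(engine)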

Hans