[sqlalchemy] Re: isolation level not supported on MySQL 3.23

2012-09-18 Thread Ids
Mmm. The traceback I got last (TypeError) only occurs on a client platform 
(Red Hat 8.0) using the MySQL 3.23 client software. When run on a platform with 
MySQL 5.0 client software (SUSE 10.1), your solution of raising the 
NotImplementedError exception works.

Red Hat 8.0 + MySQL 3.23 client + python-mysql 1.2.2 + SQLAlchemy 0.7.8 + 
MySQL server 3.23.31 (on another host): fails
SUSE 10.1 + MySQL 5.0 client + python-mysql 1.2.2 + SQLAlchemy 0.7.8 + 
MySQL server 3.23.54 (on another host): works

Thanks 

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/RXAaI5QEo48J.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] joinedload option changes outcome of recursive query

2012-09-18 Thread Ladislav Lenart
Ok, thank you.

I know about

 SELECT a, b FROM x ORDER BY c

I just don't know how to write it, because I (think I) need to define the literal
column as one of the CTE columns and increment it in each iteration. I will play
with it for a while and see where it leads me.

Thank you again for your time,

Ladislav Lenart


On 17.9.2012 16:05, Michael Bayer wrote:
 
 On Sep 17, 2012, at 9:23 AM, Ladislav Lenart wrote:
 

 Hm, I see. My train of thought: This particular query returns the path from a
 selected node to the root. Each iteration adds one row to the result set (one
 new parent). I thought that UNION ALL keeps the order intact and thus no
 order_by clause is needed. I guess I was wrong.
 
 nothing except ORDER BY orders rows in SQL.  All the rest is an artifact of 
 how the query executes.
 

 I can add a depth/iteration column (via literal_column) and order_by it. 
 However
 I still want the results to be a list of Node instances (i.e. without the 
 added
 depth column). How can I write such a query?
 
 you can order_by() any expression, and that expression does not need to be in 
 the columns clause.   That is:
 
 SELECT a, b FROM x ORDER BY c
 
 
 is just as valid as
 
 SELECT a, b, c FROM x ORDER BY c
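
The depth-column idea from earlier in the thread combines with this as follows. A minimal sketch, using plain SQL through the stdlib sqlite3 module (the node table, its columns, and the data are made up for illustration): the recursive CTE carries a depth column that is incremented on each iteration and appears only in ORDER BY, not in the outer SELECT's column list.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER);
    INSERT INTO node VALUES (1, NULL), (2, 1), (3, 2);
""")

rows = conn.execute("""
    WITH RECURSIVE path(id, parent_id, depth) AS (
        SELECT id, parent_id, 0 FROM node WHERE id = 3
        UNION ALL
        SELECT n.id, n.parent_id, p.depth + 1
        FROM node n JOIN path p ON n.id = p.parent_id
    )
    SELECT id FROM path ORDER BY depth  -- depth is not a result column
""").fetchall()

# path from the selected node up to the root, in iteration order
print([r[0] for r in rows])
```

The same shape carries over to the ORM version: order_by() the literal depth expression while keeping only the entity columns in the query.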




[sqlalchemy] [Q] Lack of isolation in unit tests

2012-09-18 Thread Ladislav Lenart
Hello.

I have isolation issues with unit testing with SQLAlchemy. My DbTestCase looks
like this (I use nose test framework):


from sqlalchemy import create_engine
from sqlalchemy.orm import Session


class DbTestCase(object):
    """Db-aware test case superclass."""
    __engine_name__ = 'postgres'
    __db_name__ = 'unit_tests'
    __echo__ = True

    @property
    def engine(self):
        return self.Base.metadata.bind

    def setUp(self):
        self.Base = create_base()
        self._create_db()
        self.connection = self.engine.connect()
        self.trans = self.connection.begin()
        self.session = Session(bind=self.connection)

    def _create_db(self):
        # construct conn_string from __engine_name__ and __db_name__
        engine = create_engine(conn_string, echo=self.__echo__)
        self.Base.metadata.bind = engine

    def tearDown(self):
        # Rollback: Everything that happened with the Session above
        # (including calls to commit()) is rolled back.
        self.trans.rollback()
        self.session.close()

        # Return the connection to the engine.
        self.connection.close()

        # Remove all tables while we know what we have defined.
        self._drop_all()

    def _recreate_all(self):
        self._drop_all()
        self._create_all()

    def _drop_all(self):
        self.Base.metadata.drop_all()

    def _create_all(self):
        self.Base.metadata.create_all()

A complete usage looks like this:

import re
from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base, declared_attr


class _Base(object):
    @declared_attr
    def __tablename__(cls):  #@NoSelf
        return camel_case_to_underscore(cls.__name__)


def camel_case_to_underscore(name):
    """Convert CamelCaseForm to camel_case_form.

    Taken from:
    http://stackoverflow.com/questions/1175208/elegant-python-function-to-convert-camelcase-to-camel-case
    """
    s1 = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', name)
    return re.sub('([a-z0-9])([A-Z])', r'\1_\2', s1).lower()


def create_base():
    return declarative_base(cls=_Base)


class Id(object):
    """Add an id column as the primary key."""

    @declared_attr
    def id(cls):  #@NoSelf
        return Column(Integer(), primary_key=True)


class TestFooBar(DbTestCase):
    def test_foo(self):
        class Foo(self.Base, Id):
            qux = relationship('Qux')  # <-- intentional bug

        self._recreate_all()

        foo = Foo()
        self.session.add(foo)
        self.session.flush()
        # ...

    def test_foo_bar(self):
        class Foo(self.Base, Id):
            bars = relationship('Bar', back_populates='foo')

        class Bar(self.Base, Id):
            foo_id = Column(Integer(), ForeignKey('foo.id'), nullable=False)
            foo = relationship('Foo', back_populates='bars')

        self._recreate_all()

        foo = Foo()
        bar = Bar(foo=foo)
        self.session.add(foo)
        self.session.add(bar)
        self.session.flush()
        # ...

When I run the above two tests, BOTH fail because of the bug in test_foo. The
test test_foo_bar fails with:

InvalidRequestError: One or more mappers failed to initialize - can't proceed
with initialization of other mappers.  Original exception was: When initializing
mapper Mapper|Foo|foo, expression 'Qux' failed to locate a name (name 'Qux' is
not defined). If this is a class name, consider adding this relationship() to
the class 'zfp.tests.model.test_foo.Foo' class after both dependent classes
have been defined.

If I comment out the 'qux = ...' line, BOTH tests pass.

I thought that using separate declarative bases, each with its own metadata, is
enough to ensure isolation among tests. Was my assumption wrong? Can I achieve
proper isolation of unit tests somehow?


Thank you,

Ladislav Lenart





Re: [sqlalchemy] [Q] Lack of isolation in unit tests

2012-09-18 Thread Ladislav Lenart
Hello again.

The problem is in sqlalchemy.orm.mapper. There are globals _mapper_registry and
a boolean _new_mappers that triggers recompilation of the mappers. Is there a
safe way to clear them in each test case's tearDown?

Thank you in advance,

Ladislav Lenart


On 18.9.2012 16:07, Ladislav Lenart wrote:
 [...]





Re: [sqlalchemy] [Q] Lack of isolation in unit tests

2012-09-18 Thread Michael Bayer

the _mapper_registry is weak referencing, so doesn't have any impact on mappers 
hanging around or not.   Ultimately, the mapper is associated with your mapped 
class.   The clear_mappers() API call will de-associate mappers from classes 
and remove instrumentation that was affixed by the mapper() call.

However, the vast majority of applications have no need for such a phase, and 
there is no benefit to calling clear_mappers().   This is discussed in the docs 
for clear_mappers(): 
http://docs.sqlalchemy.org/en/rel_0_7/orm/mapper_config.html?highlight=clear_mappers#sqlalchemy.orm.clear_mappers

If you're using declarative, it is especially useless, since the directives 
placed on a declarative class are destroyed by the mapping process in the first 
place, and those classes can't be re-mapped unless you defined the mappings 
again non-declaratively, thus making the usage of declarative in the first 
place completely redundant.

The mapped class is best considered as a single, composed unit.Just like 
when you create a class in your application that has, for example, an 
__init__() method and a get_stuff() method, there's no need when tearing down 
tests to remove the __init__() and get_stuff() methods from those classes; 
these are part of that structure.

A frequent misunderstanding is that mappers have some connection to the 
database, either through engines or connections, and that as long as mappers 
exist, this means we are connected to the database.  This is not at all true, 
any more than a SQL string you might pass to cursor.execute() in DBAPI has any 
relationship to database connections.The mapper() only represents in-memory 
information about how your classes are structured in terms of hypothetical 
database tables.   It has no connection whatsoever to actual database 
connections.

When unit testing, the things that need to be torn down are the transaction, 
connection, and Session state, that is, those things which represent per-use 
resources that need to be closed down.   An example of how to associate a 
Session with connection resources for the lifespan of a test is here: 
http://docs.sqlalchemy.org/en/rel_0_7/orm/session.html#joining-a-session-into-an-external-transaction
It seems like you're already working with this code based on what I see 
below.
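
A minimal sketch of that recipe, using an in-memory SQLite database and a made-up User model (written against a recent SQLAlchemy; 0.7 differs mainly in import paths):

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = sa.Column(sa.Integer, primary_key=True)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)

# setUp: one connection, one outer transaction, a Session bound to them
connection = engine.connect()
trans = connection.begin()
session = Session(bind=connection)

# the test body works with the Session as usual
session.add(User())
session.flush()
count_inside = session.query(User).count()

# tearDown: rolling back the outer transaction discards everything the
# test did; the connection then goes back to the pool
session.close()
trans.rollback()
connection.close()

with engine.connect() as conn:
    count_after = conn.execute(sa.text("SELECT COUNT(*) FROM users")).scalar()

print(count_inside, count_after)
```

The row is visible inside the test's transaction but gone after the rollback, which is exactly the per-test isolation the recipe is after.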



On Sep 18, 2012, at 10:20 AM, Ladislav Lenart wrote:

 Hello again.
 
 The problem is in sqlalchemy.orm.mapper. There are globals _mapper_registry 
 and
 a boolean _new_mappers that triggers recompilation of the mappers. Is there a
 safe way to clear them in each test's case tearDown?
 
 Thank you in advance,
 
 Ladislav Lenart
 
 
 On 18.9.2012 16:07, Ladislav Lenart wrote:
 [...]

Re: [sqlalchemy] [Q] Lack of isolation in unit tests

2012-09-18 Thread Claudio Freire
On Tue, Sep 18, 2012 at 11:07 AM, Ladislav Lenart lenart...@volny.cz wrote:
 def create_base():
 return declarative_base(cls=_Base)

Move the declaration of _Base to within create_base, and I think that
should fix your problem.

(I've had a similar one, not with test cases, but with a replica schema)
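
In code, the suggestion amounts to something like this (a sketch reusing the helper names from the original post; the imports assume a recent SQLAlchemy where declarative_base and declared_attr live in sqlalchemy.orm):

```python
import re
from sqlalchemy.orm import declarative_base, declared_attr


def camel_case_to_underscore(name):
    s1 = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', name)
    return re.sub('([a-z0-9])([A-Z])', r'\1_\2', s1).lower()


def create_base():
    # The mixin is created inside the factory, so every call returns a
    # Base built on a brand-new class object -- nothing is shared between
    # one test's declarative classes and the next test's.
    class _Base(object):
        @declared_attr
        def __tablename__(cls):
            return camel_case_to_underscore(cls.__name__)
    return declarative_base(cls=_Base)


BaseA = create_base()
BaseB = create_base()
print(BaseA is BaseB, BaseA.__mro__[1] is BaseB.__mro__[1])
```

Each call now yields a Base whose mixin class is distinct, instead of all bases sharing the one module-level _Base.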




Re: [sqlalchemy] [Q] Lack of isolation in unit tests

2012-09-18 Thread Ladislav Lenart
Hello.

Adding a call to clear_mappers() to tearDown fixed my problem.

For the rest of the discussion I am not sure I completely follow.

I use declarative exclusively but each test defines its own Base and its own set
of ORM classes. However, when one test has a bug in an ORM class definition, ALL
the following tests fail because of it. I tried to delete all ORM classes
defined in the test but this did not help. Perhaps there are lingering
references to them somewhere. I am not sure how Python's GC works.

Anyway, thank you again for your quick help,

Ladislav Lenart


On 18.9.2012 16:39, Michael Bayer wrote:
 [...]

Re: [sqlalchemy] [Q] Lack of isolation in unit tests

2012-09-18 Thread Michael Bayer

On Sep 18, 2012, at 11:36 AM, Ladislav Lenart wrote:

 Hello.
 
 Adding call to clear_mappers() to tearDown fixed my problem.
 
 For the rest of the discussion I am not sure I completely follow.
 
 I use declarative exclusively but each test defines its own Base and its own 
 set
 of ORM classes.


OK, that's unusual but that is actually the case where clear_mappers() is fine 
to use.  SQLAlchemy's own unit tests create new classes for each test because 
we're testing SQLAlchemy itself.

 However when one test has a bug in an ORM class definition, ALL
 the following tests fail because of this.

OK, yes, this is correct, as the compilation step tries to get at all mappers.  
Sorry, I didn't realize this; it's very strange for end-user unit tests to be 
testing the creation of ad-hoc mapper configurations.
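
For reference, the effect of the clear_mappers() teardown that resolved this thread can be seen in a few lines (made-up Thing model, recent SQLAlchemy):

```python
import sqlalchemy as sa
from sqlalchemy.orm import clear_mappers, declarative_base

# Stand-in for a test's ad-hoc declarative class.
Base = declarative_base()

class Thing(Base):
    __tablename__ = "thing"
    id = sa.Column(sa.Integer, primary_key=True)

assert sa.inspect(Thing, raiseerr=False) is not None  # Thing is mapped

# What the tearDown adds: de-associate mappers and their instrumentation
# from every mapped class, so a broken mapping defined in one test cannot
# poison the mapper compilation step in later tests.
clear_mappers()

print(sa.inspect(Thing, raiseerr=False))  # Thing is no longer mapped
```

As discussed above, this is only appropriate when every test builds its own classes; ordinary applications never need it.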





[sqlalchemy] Double quoted name column problem (Sqlite)

2012-09-18 Thread Massi
Hi everyone, I'm using SQLAlchemy 0.7.8 with SQLite and I'm encountering a 
problem trying to retrieve data from a table having a column named 
"input_1" (with the double quotes as part of the name). If I run this simple code:

from sqlalchemy import *

db = create_engine('sqlite:///test.db')
db.echo = False 
metadata = MetaData(db)
t = Table('my_table', metadata, autoload=True)
print t.select().execute()

I get the following error: 
OperationalError: (OperationalError) no such column: my_table.input_1 
u'SELECT my_table.id, my_table.input_1 FROM my_table'. It seems that the 
first double quotes are truncated by the query function. Of course I can 
arrange things so that no double quotes are present in column names, but 
I would like to know if SA can somehow handle this situation automatically 
and in a platform-independent way. 
Thanks in advance for your help!




[sqlalchemy] Using unlisted dialect to connect with sqlanydb

2012-09-18 Thread lehmann
Hello,

the Python DB-API 2.0 driver 'sqlanydb' for SQL Anywhere works fine and I'd 
like to use the SQLAlchemy connection pool within Pyramid. Unfortunately, 
SQLAlchemy doesn't let me use sqlanydb.

Actually, I don't need any ORM stuff for the start, but if it's required, 
the mssql dialect should be pretty close to SQL Anywhere. But using 
create_engine() with special connect_args for sqlanydb doesn't work, and 
using the creator=callback parameter doesn't work either. I always get 
"No module named pyodbc" because the mssql and sybase dialects seem to 
default to pyodbc.

Why isn't it possible to use just any DB-API compliant database driver?

I also read a post which claimed, that until SQLAlchemy 0.5 there was a 
driver for sqlanydb included, but I couldn't find one in that release. It 
would be very bad if I had to resign from using SQLAlchemy, but I have no 
choice in the database driver. I'm also not that much into SQLAlchemy to 
write an own driver and dialect, but I'm happy to run a test suite on a 
current SQL Anywhere database if someone else does.

Kind regards
Marten




Re: [sqlalchemy] Using unlisted dialect to connect with sqlanydb

2012-09-18 Thread Michael Bayer

On Sep 18, 2012, at 1:14 PM, lehm...@cnm.de wrote:

 Hello,
 
 the Python DB-API 2.0 driver 'sqlanydb' for SQL Anywhere works fine and I'd 
 like to use the SQLAlchemy connection pool within Pyramid. Unfortunately, 
 SQLAlchemy doesn't let me use sqlanydb.

if you want pooling of a DBAPI and nothing else, create_engine() is not 
required.   use pool.manage:

http://docs.sqlalchemy.org/en/rel_0_7/core/pooling.html?highlight=manage#pooling-plain-db-api-connections
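
As a sketch of the idea, with sqlite3 standing in for sqlanydb (any DB-API module works the same way, and the 0.7-era pool.manage() helper wraps a pool like this around a DBAPI's connect()):

```python
import sqlite3
from sqlalchemy import pool

# Pool plain DB-API connections without create_engine(); in the real
# setup the creator callable would call sqlanydb.connect(...) instead.
p = pool.QueuePool(lambda: sqlite3.connect(":memory:"), pool_size=5)

conn = p.connect()            # proxied DBAPI connection from the pool
cur = conn.cursor()
cur.execute("SELECT 1 + 1")
result = cur.fetchone()[0]
conn.close()                  # returns the connection to the pool
print(result)
```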

 
 Actually, I don't need any ORM stuff for the start, but if its required, the 
 mssql dialect should be pretty close to SQL Anywhere.

in fact it's closer to Sybase, and we used to have a sybase dialect someone 
provided which was actually SQL Anywhere, though it was in disrepair.   At some 
point we were tasked with creating a real Sybase dialect, which replaced the 
SQL Anywhere dialect.  

 But using create_engine() with special connect_args for sqlanydb doesn't work 
 and using the creator=callback parameter doesn't work either. I always get 
 No module named pyodbc because the dialects mssql and sybase seem to 
 default to pyodbc.

The pyodbc usage there is a default, which you can actually change by passing 
in the sqlanydb module as an argument to create_engine(), called dbapi:

engine = create_engine("sybase+pyodbc://...", dbapi=sqlanydb)

However this can still have issues as DBAPIs all have non-standard behaviors 
(see below).if you wanted to adapt the sybase or mssql dialects to SQL 
Anywhere fully, you can create a sqlanydb dialect of your own with a dozen 
lines of code or so.   The dialect can then be installed using a setuptools 
entrypoint.Short DBAPI stubs like this don't really require programming 
as much as a little bit of cut and paste.   Examples of super-short DBAPI stubs 
include dialects/informix/informix.py and dialects/sybase/pysybase.py. 


 
 Why isn't it possible to use just any DB-API compliant database driver?

within the create_engine()/dialect system, all DBAPIs have tons of 
idiosyncrasies that make them all effectively incompatible.   Right from 
dbapi.connect(), the format of arguments accepted by connect() is entirely 
unspecified and changes dramatically with all DBAPIs.   From there, there are 
dozens of other areas where non-standard or inconsistent behaviors must be 
normalized.   This is why you see in the dialect system not just a Python 
module for every database we support, but beyond that an extra module for every 
possible DBAPI running against that target DB.

 
 I also read a post which claimed, that until SQLAlchemy 0.5 there was a 
 driver for sqlanydb included, but I couldn't find one in that release. It 
 would be very bad if I had to resign from using SQLAlchemy, but I have no 
 choice in the database driver. I'm also not that much into SQLAlchemy to 
 write an own driver and dialect, but I'm happy to run a test suite on a 
 current SQL Anywhere database if someone else does.
 
 Kind regards
 Marten
 



[sqlalchemy] aborting insert/update from within before_insert/update events?

2012-09-18 Thread Gerald Thibault
I am working with two models, a Location model and a Phone model. There 
is a one-to-many relationship between them.

When a phone number is submitted, I need to format it using the 
phonenumbers module, which requires a country code, which exists on the 
Location object. So the formatting can only happen after the flush(), as I 
need to have the location_id populated, so I can grab the country code from 
the parent Location. If the formatting of the phone number fails, I want 
the entire object eliminated and not written to the db.

This is the current (nonworking) code I am using.

@event.listens_for(Phone, 'before_insert', raw=True)
@event.listens_for(Phone, 'before_update', raw=True)
def save_phone(mapper, connection, target):
    phone = target.obj()
    country = object_session(phone) \
        .query(Location) \
        .get(phone.location_id) \
        .country
    try:
        number = phonenumbers.parse(phone.number, country)
        phone.number = phonenumbers.format_number(
            number, phonenumbers.PhoneNumberFormat.E164)
    except:
        print 'failed on phone number %s' % phone.number

I'm not sure how to abort the insertion/update of the object. Is this even 
possible? Is there something I can put in the except section to cancel the 
pending commit of the object in question?




Re: [sqlalchemy] aborting insert/update from within before_insert/update events?

2012-09-18 Thread Michael Bayer

On Sep 18, 2012, at 6:28 PM, Gerald Thibault wrote:

 I am working with 2 models, a Location model, and a Phone model. There is 
 a one-to-many relationship between them.
 
 When a phone number is submitted, I need to format it using the phonenumbers 
 modules, which requires a country code, which exists on the Location object. 
 So the formatting can only happen after the flush(), as I need to have the 
 location_id populated, so I can grab the country code from the parent 
 Location. If the formatting of the phone number fails, I want the entire 
 object eliminated and not written to the db.

At some point, the Phone is being associated with a Location object in memory, 
and this is independent of whether or not location_id is present.   Assuming 
relationship() mechanics, location_id can only be set once that association 
exists.   So you shouldn't need a flush() for this to happen, and you can 
perform this validation before a flush plan is established.

Otherwise if location_id is populated by some other means, that would point to 
an area where you'd want to get Location objects present in memory ahead of 
time, rather than relying upon primary keys alone.

This might not be enough to solve your issue so feel free to add some detail 
how location_id is coming into being here, such that the Location isn't nearby.





[sqlalchemy] Re: Double quoted name column problem (Sqlite)

2012-09-18 Thread Massi
After a little search, I found that the problem is due to line 684 of the 
file C:\Python27\Lib\site-packages\sqlalchemy\dialects\sqlite\base.py. The 
column names of the table are processed with this regular expression 
command:

name = re.sub(r'^"|"$', '', name)

which strips the leading and trailing double quotes from the name. Is this 
really necessary? Is there any workaround to overcome this problem?

On Tuesday, September 18, 2012 at 18:21:36 UTC+2, Massi wrote:

 Hi everyone, I'm using sqlalchemy 0.7.8 with sqlite and I'm encountering a 
 problem trying to retrieve data from a table having a column literally named 
 "input_1", double quotes included. If I run this simple code:

 from sqlalchemy import *

 db = create_engine('sqlite:///test.db')
 db.echo = False 
 metadata = MetaData(db)
 t = Table('my_table', metadata, autoload=True)
 print t.select().execute()

 I get the following error: 
 OperationalError: (OperationalError) no such column: my_table.input_1 
 u'SELECT my_table.id, my_table.input_1 FROM my_table'. It seems that the 
 first double quotes are truncated by the query function. Of course I can 
 arrange things such that no double quotes are present in column names, but 
 I would like to know if SA can somehow handle this situation automatically 
 and in a platform-independent way. 
 Thanks in advance for your help!

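The situation can be reproduced with the stdlib sqlite3 module alone (hypothetical table, mirroring the one in the post): SQLite accepts a double quote inside a quoted identifier when it is doubled, so the stored column name can literally contain quote characters.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The stored column name literally contains double quotes: "input_1"
# (inside a quoted identifier, a double quote is escaped by doubling it)
conn.execute('CREATE TABLE my_table (id INTEGER, """input_1""" TEXT)')
cols = [row[1] for row in conn.execute("PRAGMA table_info(my_table)")]
print(cols)  # ['id', '"input_1"'] -- the quotes are part of the name
conn.execute("INSERT INTO my_table VALUES (1, 'hello')")
rows = conn.execute('SELECT """input_1""" FROM my_table').fetchall()
print(rows)  # [('hello',)]
```

Any SELECT that refers to the column must apply the same doubled-quote escaping, which is what breaks when a leading quote is stripped from the reflected name.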




Re: [sqlalchemy] aborting insert/update from within before_insert/update events?

2012-09-18 Thread Gerald Thibault
We're using a modified version of tastypie, with all the django-specific 
stuff modified to work with sqlalchemy. One of the things this offers is 
the ability to submit nested resources to API endpoints, and have it 
recursively build them by creating the parents, then appending the children 
to the relationship as they are created. In this case, Location is built 
before the phones, and location_id is set (i think) when the Phone is added 
to the Location.phones relation.

It is also possible to submit directly to the Phones endpoint and create a 
phone number for an existing location which one already has the id for, in 
which case the Location does not exist until the event listener (from my 
first post) uses the location_id to pull up the Location, from which it 
extracts the country code.

Currently, the country code extraction is working perfectly, and I am able 
to see which records fail. I am unable to stop those records from being 
written to the db, that is where my troubles are. How can I stop a record 
from being written to the db from within the before_insert event listener?

On Tuesday, September 18, 2012 4:29:43 PM UTC-7, Michael Bayer wrote:


 On Sep 18, 2012, at 6:28 PM, Gerald Thibault wrote: 

  I am working with 2 models, a Location model, and a Phone model. 
 There is a one-to-many relationship between them. 
  
  When a phone number is submitted, I need to format it using the 
 phonenumbers modules, which requires a country code, which exists on the 
 Location object. So the formatting can only happen after the flush(), as I 
 need to have the location_id populated, so I can grab the country code from 
 the parent Location. If the formatting of the phone number fails, I want 
 the entire object eliminated and not written to the db. 

 At some point, the Phone is being associated with a Location object in 
 memory, and this would be independent of whether or not location_id is 
 present. The location_id can only be set, assuming this is 
 relationship() mechanics, if this is the case.   So you shouldn't need a 
 flush() for this to happen, and you can perform this validation before a 
 flush plan is established. 

 Otherwise if location_id is populated by some other means, that would 
 point to an area where you'd want to get Location objects present in memory 
 ahead of time, rather than relying upon primary keys alone. 

 This might not be enough to solve your issue so feel free to add some 
 detail how location_id is coming into being here, such that the Location 
 isn't nearby. 







Re: [sqlalchemy] aborting insert/update from within before_insert/update events?

2012-09-18 Thread Michael Bayer

On Sep 18, 2012, at 8:04 PM, Gerald Thibault wrote:

 We're using a modified version of tastypie, with all the django-specific 
 stuff modified to work with sqlalchemy. One of the things this offers is the 
 ability to submit nested resources to API endpoints, and have it recursively 
 build them by creating the parents, then appending the children to the 
 relationship as they are created. In this case, Location is built before the 
 phones, and location_id is set (I think) when the Phone is added to the 
 Location.phones relation.
 
 It is also possible to submit directly to the Phones endpoint and create a 
 phone number for an existing location which one already has the id for, in 
 which case the Location does not exist until the event listener (from my 
 first post) uses the location_id to pull up the Location, from which it 
 extracts the country code.

Each of those descriptions seems to indicate that a Location object is 
available in memory and that there's nothing special about a flush(): there are 
no database-side triggers or defaults you're waiting on. So this issue should be 
addressed before the flush proceeds; the latest point for that is the 
before_flush() event.

 Currently, the country code extraction is working perfectly, and I am able to 
 see which records fail. I am unable to stop those records from being written 
 to the db, that is where my troubles are. How can I stop a record from being 
 written to the db from within the before_insert event listener?

You cannot, except by raising an exception, which would abort the whole 
transaction; before_insert() occurs within the core of the flush process, 
well after the flush plan has been finalized. before_flush() is provided for 
pre-flush modifications to the flush plan.

before_insert() is not generally necessary; people frequently want to take 
advantage of the fact that it provides a built-in iteration of the objects to be 
inserted, but this iteration can be done ahead of time, for the purpose of 
affecting the flush plan, within the before_flush() event by iterating 
session.new.  An object detected at this stage can be omitted from the INSERT 
by calling session.expunge(obj).
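The expunge-in-before_flush recipe can be sketched as a runnable example. The model and the digits-only validation rule are stand-ins, not code from the thread:

```python
import re
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import sessionmaker
try:
    from sqlalchemy.orm import declarative_base
except ImportError:  # SQLAlchemy < 1.4
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Phone(Base):
    __tablename__ = "phone"
    id = Column(Integer, primary_key=True)
    number = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

@event.listens_for(Session, "before_flush")
def drop_bad_phones(session, flush_context, instances):
    # runs before the flush plan is built; an expunged pending object
    # is simply omitted from the INSERT batch, no exception is raised
    for obj in list(session.new):
        if isinstance(obj, Phone) and not re.match(r"^\+?\d{7,15}$",
                                                   obj.number or ""):
            session.expunge(obj)

session = Session()
session.add(Phone(number="+14155550100"))
session.add(Phone(number="not-a-number"))
session.commit()
print(session.query(Phone).count())  # 1 -- the bad row was never inserted
```

Unlike raising from before_insert(), this lets the rest of the transaction commit normally while the offending object quietly reverts to transient.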


 




Re: [sqlalchemy] Re: Double quoted name column problem (Sqlite)

2012-09-18 Thread Michael Bayer
I'm surprised that I can get you a really good chronology for why this is here, 
as it is some very old stuff.

Here's the SQLAlchemy changeset which added that logic, including the test, 
which was failing at that time:

http://hg.sqlalchemy.org/sqlalchemy/rev/cf5fbf20da45

The issue at that time was specifically that PRAGMA foreign_key_list(), for a 
table setup like the one in that test, would come back like this (I'm able to 
reproduce on a very old sqlite I was lucky enough to find):

sqlite> pragma foreign_key_list(django_admin_log);
0|0|"django_content_type"|content_type_id|id

on a modern sqlite, we get the correct result without the quotes:

sqlite> pragma foreign_key_list(django_admin_log);
0|0|django_content_type|content_type_id|id|NO ACTION|NO ACTION|NONE

and, the bug can also be traced to sqlite, where it was fixed in April of 
2009, so this would be fixed as of sqlite 3.6.14:

http://www.sqlite.org/src/info/600482d161

So one thing to note is, the change we made was overly defensive; the quoting 
issue, per my testing on that old sqlite version just now, is limited to just 
the foreign_key_list().

I've summed this up in http://www.sqlalchemy.org/trac/ticket/2568 and the 
adjustment is short: only apply the regexp to the table name in 
foreign_key_list(), and only if we are on SQLite 3.6.13 or earlier.

  I would like to know if SA can somehow handle this situation automatically 
 and in a platform-independent way. 

I don't think any other dialects have something like this going on, though using 
the column quote character *in* the column name is sure to not be supported by 
many other databases.  While I can adjust SQLA for this particular bit of 
history, it's generally poor form to have the quotes in the name like that.


On Sep 18, 2012, at 7:52 PM, Massi wrote:

 After a little search, I found that the problem is due to line 684 of the 
 file C:\Python27\Lib\site-packages\sqlalchemy\dialects\sqlite\base.py. The 
 column names of the table are processed with this regular expression command:
 
 name = re.sub(r'^"|"$', '', name)
 
 which strips the leading and trailing double quotes from the name. Is this 
 really necessary? Is there any workaround to overcome this problem?
 
 On Tuesday, September 18, 2012 at 18:21:36 UTC+2, Massi wrote:
 Hi everyone, I'm using sqlalchemy 0.7.8 with sqlite and I'm encountering a 
 problem trying to retrieve data from a table having a column literally named 
 "input_1", double quotes included. If I run this simple code:
 
 from sqlalchemy import *
 
 db = create_engine('sqlite:///test.db')
 db.echo = False 
 metadata = MetaData(db)
 t = Table('my_table', metadata, autoload=True)
 print t.select().execute()
 
 I get the following error: 
 OperationalError: (OperationalError) no such column: my_table.input_1 
 u'SELECT my_table.id, my_table.input_1 FROM my_table'. It seems that the 
 first double quotes are truncated by the query function. Of course I can 
 arrange things such that no double quotes are present in column names, but I 
 would like to know if SA can somehow handle this situation automatically and 
 in a platform-independent way. 
 Thanks in advance for your help!
 
