[sqlalchemy] Seemingly inconsistent results with transaction isolation

2011-09-08 Thread Russ
I was getting some strange transaction isolation behavior with
SQLAlchemy (0.7.2), psycopg2 (2.4.2), and PostgreSQL 8.4.  In order to
investigate I wrote up a usage sequence that does this:

1. starts a transaction with a session (s1)
2. starts another transaction with session (s2)
3. updates a value in s1 and commits it
4. reads the value back in s2 using the ORM...
  - and it does not get the updated value, although with READ COMMITTED
    it should
5. reads the value back in s2 using a direct s2.execute statement...
  - and it DOES get the updated value (??)

I don't understand why the ORM-triggered read (which does emit SQL) is
not getting the updated value, while the direct statement is getting the
update.
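
A condensed sketch of the sequence (model, column, and engine URL are
illustrative; the complete script is at the link below):

from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Thing(Base):                      # illustrative model
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)
    value = Column(Integer)

engine = create_engine("postgresql+psycopg2://user:pass@localhost/test")
Session = sessionmaker(bind=engine)
s1, s2 = Session(), Session()

# steps 1-2: each session begins its transaction with a SELECT
row1 = s1.query(Thing).filter_by(id=1).first()
row2 = s2.query(Thing).filter_by(id=1).first()   # row2 now sits in s2's identity map

# step 3: update and commit in s1
row1.value = 42
s1.commit()

# step 4: ORM read in s2 - SQL is emitted, but the stale value comes back
print(s2.query(Thing).filter_by(id=1).first().value)

# step 5: direct statement on the same s2 transaction - sees the new value
print(s2.execute("SELECT value FROM thing WHERE id = 1").scalar())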

When the logger shows SQL being emitted, I assumed that SQL was actually
sent to the database.  Is that a correct assumption?

Here is my complete (and somewhat verbose) test code that shows the
behaviour...
http://static.inky.ws/syn/325

The postgres engine string obviously needs to be changed
appropriately.  And WATCH OUT for the drop tables code in there for
anyone who tries this, if nutty enough to point at a live database.

What is going on?  I expect/hope it is something dumb on my end, but I
just don't see it.

Thanks!




Re: [sqlalchemy] Implementing a "radio button" behavior?

2011-09-08 Thread Vlad K.


Yes, I said earlier that it was merge() that took effect before update() 
because that's what it looked like (I didn't know about autoflush). Putting 
a sleep() between update() and merge() showed that merge() issued no SQL, 
because the autoflush triggered by the update() (as you say) had already 
synced the session with the database.


The update() does technically affect a row in the database that is 
already selected into the session and dirtied, but it does not target it 
via the primary key. What I'm doing is this:


1. select a row into session
2. assign some data to it (dirties it)
3. if this row's "flag" property is set to true, first set flag=false on 
all rows in the same group (the update()), this one included

4. now merge this row

Or via plain SQL:

1. SELECT ...
2. UPDATE tablename SET flag=false WHERE group_id=123;
3. UPDATE tablename SET flag=true, ... WHERE primary_key=456;


The end result is that only one row in the group can have the flag set 
to true. The blanket set flag=false is imho faster and cleaner than 
finding out which row in the group has the flag and then updating just 
that one row, before our main model row. No?


I thought the session was only tracking changes via primary key, so it 
never occurred to me that the session would realize it is holding a row 
that's about to be updated and issue a flush first... Or am I 
misunderstanding what is going on here?


Turning autoflush off did the trick and the updates are now in order. 
Many thanks for your help!
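
For reference, a minimal sketch of the resulting pattern with autoflush
disabled around the bulk UPDATE (Model, flag, group_id and pkey are the
illustrative names from this thread; Session.no_autoflush assumes a release
that provides it - the recipe linked in the quoted reply below covers other
ways to do the same thing):

row = session.query(Model).filter_by(pkey=pkey_value).first() or Model()
row.some_field = 123
row.flag = True      # assumed newly changed here, so it is included in the final flush

with session.no_autoflush:
    # the bulk UPDATE goes out immediately and no longer triggers a
    # premature flush of the dirtied "row" instance
    session.query(Model).filter_by(group_id=row.group_id).update(
        {"flag": False}, synchronize_session=False)

session.merge(row)
session.flush()      # emits UPDATE ... SET flag=true, ... WHERE pkey = ...
session.commit()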




.oO V Oo.


On 09/09/2011 12:17 AM, Michael Bayer wrote:

On Sep 8, 2011, at 6:00 PM, Vlad K. wrote:


Yes that's how I know the order of events. I just checked the logs again and put some 
sleep() between update() and merge(). It appears that the update() does some kind of 
implicit flush because that "commits" the dirtied properties of the row 
instance BEFORE the update is issued, so that when merge() comes, everything appears in 
sync to the session.

that's autoflush, which is part of the update() (earlier you said the merge() 
was taking effect before the update()).  It's a little strange to change an 
attribute on an object so that it's dirtied for UPDATE, then manually do an 
update() that affects the same object - is it the same attribute you're trying 
to update there?

Anyway, turn off autoflush.   Here are some recipes to do that as needed:

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DisableAutoflush









Re: [sqlalchemy] Implementing a "radio button" behavior?

2011-09-08 Thread Michael Bayer

On Sep 8, 2011, at 6:00 PM, Vlad K. wrote:

> 
> Yes that's how I know the order of events. I just checked the logs again and 
> put some sleep() between update() and merge(). It appears that the update() 
> does some kind of implicit flush because that "commits" the dirtied 
> properties of the row instance BEFORE the update is issued, so that when 
> merge() comes, everything appears in sync to the session.

that's autoflush, which is part of the update() (earlier you said the merge() 
was taking effect before the update()).  It's a little strange to change an 
attribute on an object so that it's dirtied for UPDATE, then manually do an 
update() that affects the same object - is it the same attribute you're trying 
to update there?

Anyway, turn off autoflush.   Here are some recipes to do that as needed:

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DisableAutoflush







Re: [sqlalchemy] Implementing a "radio button" behavior?

2011-09-08 Thread Vlad K.


Yes that's how I know the order of events. I just checked the logs again 
and put some sleep() between update() and merge(). It appears that the 
update() does some kind of implicit flush because that "commits" the 
dirtied properties of the row instance BEFORE the update is issued, so 
that when merge() comes, everything appears in sync to the session.


So, in short, I'm doing this:

1. select row or new
2. assign some values to it
3. issue an update to a group of other rows (this should happen FIRST)
4. merge the row with db, flush, commit. (this should happen SECOND)

What I'm getting:

1. SELECTed row data
2. UPDATE on changed values (implicit flush of dirtied session data)
3. UPDATE as expected in step 3 above
4. COMMIT



.oO V Oo.


On 09/08/2011 10:04 PM, Michael Bayer wrote:

On Sep 8, 2011, at 3:32 PM, Vlad K. wrote:


For example the following:

row = session.query(Model).filter_by(pkey=pkey_value).first() or Model()
row.some_field = 123
...


session.query(Model).filter_by(nonprimary_key=some_value).update({...}, False)
session.merge(row)
session.flush()

When flush() gets called, the merge() is executed (query sent to DB) before the 
update called above it, in this particular example.

That isn't correct, query.update() emits UPDATE immediately.   Do you have a 
SQL log illustrating what is being emitted ?





Re: [sqlalchemy] order_by on model property

2011-09-08 Thread Michael Bayer

On Sep 8, 2011, at 5:32 PM, Tim Black wrote:

> Ok, because the totalDue and totalPaid attributes are also SQLAlchemy
> declarative model object properties, I converted them (and all other
> similar property dependencies) to hybrid_properties and created the
> associated @[property].expression methods for every hybrid_property
> (though I think those @[property].expression methods are not needed
> where SQLAlchemy is not converting a Python method like float() into an
> SQL function--I'll test for that later).  Now I get this error:
> 
> Module projects.model.main:189 in totalDue
>>> return sum([w.totalDue for w in self.workDone]) +
> cast(self.itemsPurchasedTotal, Float)
> TypeError: 'InstrumentedAttribute' object is not iterable
> 
> By writing line 189 on several lines, it's apparent self.workDone causes
> the error.  self.workDone is a relation:
> 
> class Project(DeclarativeBase):
># ...
>workDone = relation('WorkDone') # one-to-many
># ...
>@totalDue.expression
>def totalDue(self):
>'''Allow this property to be accessed at the class level'''
>return sum([w.totalDue for w in self.workDone]) +
> cast(self.itemsPurchasedTotal, Float)
> 
> Do I need to convert that relation into a hybrid_property, or do
> something else in order to use it in this order_by query?  I'm beginning
> to wonder if it's easier to deal with sorting by @properties by sorting
> in Python after running the query--is that the case?



When you're inside of @expression, everything you're doing is towards the goal 
of producing a SQL expression construct - something that renders into a string, 
which is sent over the wire to the database and evaluated as SQL.   So you 
can't use any Python constructs that don't have a SQL equivalent, which 
includes list comprehensions and the sum() function.   In this specific case 
your query likely needs to join() out to a subquery that calculates the sum 
using the SQL SUM function.   The doc at 
http://www.sqlalchemy.org/docs/orm/tutorial.html#using-subqueries shows the 
general idea.
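
A rough sketch of that shape - a correlated scalar subquery using SQL's SUM()
on the expression side, plain Python sum() kept for the instance side.
WorkDone.project_id is an assumed foreign key name, and WorkDone.totalDue is
assumed to be a column (or another hybrid with a SQL expression):

from sqlalchemy import select, func, cast, Float
from sqlalchemy.ext.hybrid import hybrid_property

class Project(DeclarativeBase):
    # ... columns and the workDone relation as before ...

    @hybrid_property
    def totalDue(self):
        # instance level: plain Python over the loaded collection
        return sum(w.totalDue for w in self.workDone) + float(self.itemsPurchasedTotal)

    @totalDue.expression
    def totalDue(cls):
        # class level: a SQL construct - SUM() inside a correlated subquery
        return (
            select([func.sum(WorkDone.totalDue)])
            .where(WorkDone.project_id == cls.id)
            .as_scalar()
        ) + cast(cls.itemsPurchasedTotal, Float)

# which then allows the original query:
# DBSession.query(Project).order_by(desc(Project.totalDue))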
 

> I'm beginning
> to wonder if it's easier to deal with sorting by @properties by sorting
> in Python after running the query--is that the case?

it depends very much on the specifics.  If these are records that are already 
going to be loaded into memory and you're dealing with small collections, then 
sure.   If it's a huge report across thousands of rows, then maybe not.







> 
> Tim
> 
> On 09/07/2011 03:19 PM, Michael Bayer wrote:
>> You'd use a hybrid for this case, and due to the usage of float() you'd 
>> probably want to produce a separate @expression that doesn't rely on a 
>> Python function.
>> 
>> Docs and examples for hybrid are at  
>> http://www.sqlalchemy.org/docs/orm/extensions/hybrid.html
>> 
>> Separate @expression:
>> 
>> http://www.sqlalchemy.org/docs/orm/extensions/hybrid.html#defining-expression-behavior-distinct-from-attribute-behavior
>> 
>> The "float()" call in SQL would likely be using CAST, so take a look at 
>> http://www.sqlalchemy.org/docs/core/expression_api.html?highlight=cast#sqlalchemy.sql.expression.cast
>>  for that.
>> 
>> 
>> 
>> On Sep 7, 2011, at 2:27 PM, Tim Black wrote:
>> 
>>> What is the right way to use .order_by() to order by the values returned by 
>>> a model object property?  My model object is like this:
>>> 
>>> class Project(DeclarativeBase):
>>>__tablename__ = 'project'
>>>id = Column(Integer, primary_key=True)
>>>...
>>>@property
>>>def remainderDue(self):
>>>return self.totalDue - float(self.totalPaid)
>>> 
>>> The query I'm trying to run is:
>>> 
>>> projects = 
>>> DBSession.query(model.Project).order_by(desc(model.Project.remainderDue))
>>> 
>>> This returns the following error:
>>> 
>>> Module sqlalchemy.sql.expression:1279 in _literal_as_text
>>> ArgumentError: SQL expression object or string expected.
>>> 
>>> Tim
>>> 
>>> 
>>> 


Re: [sqlalchemy] order_by on model property

2011-09-08 Thread Tim Black
Ok, because the totalDue and totalPaid attributes are also SQLAlchemy
declarative model object properties, I converted them (and all other
similar property dependencies) to hybrid_properties and created the
associated @[property].expression methods for every hybrid_property
(though I think those @[property].expression methods are not needed
where SQLAlchemy is not converting a Python method like float() into an
SQL function--I'll test for that later).  Now I get this error:

Module projects.model.main:189 in totalDue
>>  return sum([w.totalDue for w in self.workDone]) +
cast(self.itemsPurchasedTotal, Float)
TypeError: 'InstrumentedAttribute' object is not iterable

By splitting line 189 across several lines, it became apparent that
self.workDone causes the error.  self.workDone is a relation:

class Project(DeclarativeBase):
# ...
workDone = relation('WorkDone') # one-to-many
# ...
@totalDue.expression
def totalDue(self):
'''Allow this property to be accessed at the class level'''
return sum([w.totalDue for w in self.workDone]) +
cast(self.itemsPurchasedTotal, Float)

Do I need to convert that relation into a hybrid_property, or do
something else in order to use it in this order_by query?  I'm beginning
to wonder if it's easier to deal with sorting by @properties by sorting
in Python after running the query--is that the case?

Tim

On 09/07/2011 03:19 PM, Michael Bayer wrote:
> You'd use a hybrid for this case, and due to the usage of float() you'd 
> probably want to produce a separate @expression that doesn't rely on a Python 
> function.
>
> Docs and examples for hybrid are at  
> http://www.sqlalchemy.org/docs/orm/extensions/hybrid.html
>
> Separate @expression:
>
> http://www.sqlalchemy.org/docs/orm/extensions/hybrid.html#defining-expression-behavior-distinct-from-attribute-behavior
>
> The "float()" call in SQL would likely be using CAST, so take a look at 
> http://www.sqlalchemy.org/docs/core/expression_api.html?highlight=cast#sqlalchemy.sql.expression.cast
>  for that.
>
>
>
> On Sep 7, 2011, at 2:27 PM, Tim Black wrote:
>
>> What is the right way to use .order_by() to order by the values returned by 
>> a model object property?  My model object is like this:
>>
>> class Project(DeclarativeBase):
>> __tablename__ = 'project'
>> id = Column(Integer, primary_key=True)
>> ...
>> @property
>> def remainderDue(self):
>> return self.totalDue - float(self.totalPaid)
>>
>> The query I'm trying to run is:
>>
>> projects = 
>> DBSession.query(model.Project).order_by(desc(model.Project.remainderDue))
>>
>> This returns the following error:
>>
>> Module sqlalchemy.sql.expression:1279 in _literal_as_text
>> ArgumentError: SQL expression object or string expected.
>>
>> Tim
>>
>>
>>




Re: [sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-08 Thread Michael Bayer
Well, I'm visibly upset about this one because I really *don't* have the 
motivation to go down another FreeTDS hole  :)


On Sep 8, 2011, at 4:13 PM, Victor Olex wrote:

> You're welcome. As for the lack of response from pyodbc, that is indeed sloppy,
> as is the fact that the PyPI package does not work. Hats off to you for
> always being responsive (afaik). I often wonder what keeps you so
> motivated, but that's off topic.
> 
> On Sep 8, 4:07 pm, Michael Bayer  wrote:
>> thanks, the two test failures would be expected in this case, will see what 
>> response I get on the FreeTDS list if any.
>> 
>> On Sep 8, 2011, at 3:07 PM, Victor Olex wrote:
>> 
>>> Unfortunately I don't have access to a blank database and I took the
>>> chance and ran your tests on a non-empty database. Tests are mostly
>>> good: 5/7 pass. You should know that I used current trunk and simply
>>> commented out the line, which resets the supports_unicode_binds but it
>>> should be equivalent in effect to your patch. Without the patch all
>>> tests fail. Below is pretty verbose result for you.
>> 
>>> I hope this helps,
>> 
>>> Victor
>> 
>>> ./sqla_nose.py -v test.sql.test_types:UnicodeTest --dburi="mssql
>>> +pyodbc://:xxx@xxx:2431/X?
>>> driver=SQLServer&port=2431&TDS_Version=8.0" --log-
>>> debug=sqlalchemy.orm.mapper --log-debug=sqlalchemy.pool --log-
>>> debug=sqlalchemy.engine
>>> DEBUG:sqlalchemy.pool.QueuePool:Created new connection
>>> 
>>> INFO:sqlalchemy.engine.base.Engine:SELECT user_name() as user_name;
>>> INFO:sqlalchemy.engine.base.Engine:()
>>> DEBUG:sqlalchemy.engine.base.Engine:Col ('user_name',)
>>> DEBUG:sqlalchemy.engine.base.Engine:Row (u'SPEED_IT', )
>>> INFO:sqlalchemy.engine.base.Engine:
>>>SELECT default_schema_name FROM
>>>sys.database_principals
>>>WHERE name = ?
>>>AND type = 'S'
>> 
>>> INFO:sqlalchemy.engine.base.Engine:(u'SPEED_IT',)
>>> DEBUG:sqlalchemy.engine.base.Engine:Col ('default_schema_name',)
>>> DEBUG:sqlalchemy.engine.base.Engine:Row (u'dbo', )
>>> DEBUG:sqlalchemy.pool.QueuePool:Connection >> at 0xa3fdfa0> checked out from pool
>>> INFO:sqlalchemy.engine.base.Engine:SELECT [COLUMNS_1].[TABLE_SCHEMA],
>>> [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].[COLUMN_NAME], [COLUMNS_1].
>>> [IS_NULLABLE], [COLUMNS_1].[DATA_TYPE], [COLUMNS_1].
>>> [ORDINAL_POSITION], [COLUMNS_1].[CHARACTER_MAXIMUM_LENGTH],
>>> [COLUMNS_1].[NUMERIC_PRECISION], [COLUMNS_1].[NUMERIC_SCALE],
>>> [COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].[COLLATION_NAME]
>>> FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
>>> WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
>>> INFO:sqlalchemy.engine.base.Engine:(u'unicode_table', u'dbo')
>>> DEBUG:sqlalchemy.engine.base.Engine:Col ('TABLE_SCHEMA', 'TABLE_NAME',
>>> 'COLUMN_NAME', 'IS_NULLABLE', 'DATA_TYPE', 'ORDINAL_POSITION',
>>> 'CHARACTER_MAXIMUM_LENGTH', 'NUMERIC_PRECISION', 'NUMERIC_SCALE',
>>> 'COLUMN_DEFAULT', 'COLLATION_NAME')
>>> INFO:sqlalchemy.engine.base.Engine:
>>> CREATE TABLE unicode_table (
>>>id INTEGER NOT NULL IDENTITY(1,1),
>>>unicode_varchar NVARCHAR(250) NULL,
>>>unicode_text NTEXT NULL,
>>>PRIMARY KEY (id)
>>> )
>> 
>>> INFO:sqlalchemy.engine.base.Engine:()
>>> INFO:sqlalchemy.engine.base.Engine:COMMIT
>>> DEBUG:sqlalchemy.pool.QueuePool:Connection >> at 0xa3fdfa0> being returned to pool
>>> test.sql.test_types.UnicodeTest.test_blank_strings ...
>>> DEBUG:sqlalchemy.pool.QueuePool:Connection >> at 0xa3fdfa0> checked out from pool
>>> INFO:sqlalchemy.engine.base.Engine:INSERT INTO unicode_table
>>> (unicode_varchar) OUTPUT inserted.id VALUES (?)
>>> INFO:sqlalchemy.engine.base.Engine:(u'',)
>>> DEBUG:sqlalchemy.engine.base.Engine:Col ('id',)
>>> DEBUG:sqlalchemy.engine.base.Engine:Row (1, )
>>> INFO:sqlalchemy.engine.base.Engine:COMMIT
>>> DEBUG:sqlalchemy.pool.QueuePool:Connection >> at 0xa3fdfa0> being returned to pool
>>> DEBUG:sqlalchemy.pool.QueuePool:Connection >> at 0xa3fdfa0> checked out from pool
>>> INFO:sqlalchemy.engine.base.Engine:SELECT
>>> unicode_table.unicode_varchar
>>> FROM unicode_table
>>> INFO:sqlalchemy.engine.base.Engine:()
>>> DEBUG:sqlalchemy.engine.base.Engine:Col ('unicode_varchar',)
>>> DEBUG:sqlalchemy.engine.base.Engine:Row (u'', )
>>> DEBUG:sqlalchemy.pool.QueuePool:Connection >> at 0xa3fdfa0> being returned to pool
>>> DEBUG:sqlalchemy.pool.QueuePool:Connection >> at 0xa3fdfa0> checked out from pool
>>> INFO:sqlalchemy.engine.base.Engine:DELETE FROM unicode_table
>>> INFO:sqlalchemy.engine.base.Engine:()
>>> INFO:sqlalchemy.engine.base.Engine:COMMIT
>>> DEBUG:sqlalchemy.pool.QueuePool:Connection >> at 0xa3fdfa0> being returned to pool
>>> ok
>>> test.sql.test_types.UnicodeTest.test_ignoring_unicode_error ...
>>> DEBUG:sqlalchemy.pool.QueuePool:Created new connection
>>> 
>>> INFO:sqlalchemy.engine.base.Engine:SELECT user_name() as user_name;
>>> INFO:sqlalchemy.engine.base.Engine:()
>>> DEBUG:sqlalchemy.engine.base.Engine:Col ('user_name',)
>>> DEB

[sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-08 Thread Victor Olex
You're welcome. As for the lack of response from pyodbc, that is indeed sloppy,
as is the fact that the PyPI package does not work. Hats off to you for
always being responsive (afaik). I often wonder what keeps you so
motivated, but that's off topic.

On Sep 8, 4:07 pm, Michael Bayer  wrote:
> thanks, the two test failures would be expected in this case, will see what 
> response I get on the FreeTDS list if any.
>
> On Sep 8, 2011, at 3:07 PM, Victor Olex wrote:
>
> > Unfortunately I don't have access to a blank database and I took the
> > chance and ran your tests on a non-empty database. Tests are mostly
> > good: 5/7 pass. You should know that I used current trunk and simply
> > commented out the line, which resets the supports_unicode_binds but it
> > should be equivalent in effect to your patch. Without the patch all
> > tests fail. Below is pretty verbose result for you.
>
> > I hope this helps,
>
> > Victor
>
> > ./sqla_nose.py -v test.sql.test_types:UnicodeTest --dburi="mssql
> > +pyodbc://:xxx@xxx:2431/X?
> > driver=SQLServer&port=2431&TDS_Version=8.0" --log-
> > debug=sqlalchemy.orm.mapper --log-debug=sqlalchemy.pool --log-
> > debug=sqlalchemy.engine
> > DEBUG:sqlalchemy.pool.QueuePool:Created new connection
> > 
> > INFO:sqlalchemy.engine.base.Engine:SELECT user_name() as user_name;
> > INFO:sqlalchemy.engine.base.Engine:()
> > DEBUG:sqlalchemy.engine.base.Engine:Col ('user_name',)
> > DEBUG:sqlalchemy.engine.base.Engine:Row (u'SPEED_IT', )
> > INFO:sqlalchemy.engine.base.Engine:
> >            SELECT default_schema_name FROM
> >            sys.database_principals
> >            WHERE name = ?
> >            AND type = 'S'
>
> > INFO:sqlalchemy.engine.base.Engine:(u'SPEED_IT',)
> > DEBUG:sqlalchemy.engine.base.Engine:Col ('default_schema_name',)
> > DEBUG:sqlalchemy.engine.base.Engine:Row (u'dbo', )
> > DEBUG:sqlalchemy.pool.QueuePool:Connection  > at 0xa3fdfa0> checked out from pool
> > INFO:sqlalchemy.engine.base.Engine:SELECT [COLUMNS_1].[TABLE_SCHEMA],
> > [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].[COLUMN_NAME], [COLUMNS_1].
> > [IS_NULLABLE], [COLUMNS_1].[DATA_TYPE], [COLUMNS_1].
> > [ORDINAL_POSITION], [COLUMNS_1].[CHARACTER_MAXIMUM_LENGTH],
> > [COLUMNS_1].[NUMERIC_PRECISION], [COLUMNS_1].[NUMERIC_SCALE],
> > [COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].[COLLATION_NAME]
> > FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
> > WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
> > INFO:sqlalchemy.engine.base.Engine:(u'unicode_table', u'dbo')
> > DEBUG:sqlalchemy.engine.base.Engine:Col ('TABLE_SCHEMA', 'TABLE_NAME',
> > 'COLUMN_NAME', 'IS_NULLABLE', 'DATA_TYPE', 'ORDINAL_POSITION',
> > 'CHARACTER_MAXIMUM_LENGTH', 'NUMERIC_PRECISION', 'NUMERIC_SCALE',
> > 'COLUMN_DEFAULT', 'COLLATION_NAME')
> > INFO:sqlalchemy.engine.base.Engine:
> > CREATE TABLE unicode_table (
> >        id INTEGER NOT NULL IDENTITY(1,1),
> >        unicode_varchar NVARCHAR(250) NULL,
> >        unicode_text NTEXT NULL,
> >        PRIMARY KEY (id)
> > )
>
> > INFO:sqlalchemy.engine.base.Engine:()
> > INFO:sqlalchemy.engine.base.Engine:COMMIT
> > DEBUG:sqlalchemy.pool.QueuePool:Connection  > at 0xa3fdfa0> being returned to pool
> > test.sql.test_types.UnicodeTest.test_blank_strings ...
> > DEBUG:sqlalchemy.pool.QueuePool:Connection  > at 0xa3fdfa0> checked out from pool
> > INFO:sqlalchemy.engine.base.Engine:INSERT INTO unicode_table
> > (unicode_varchar) OUTPUT inserted.id VALUES (?)
> > INFO:sqlalchemy.engine.base.Engine:(u'',)
> > DEBUG:sqlalchemy.engine.base.Engine:Col ('id',)
> > DEBUG:sqlalchemy.engine.base.Engine:Row (1, )
> > INFO:sqlalchemy.engine.base.Engine:COMMIT
> > DEBUG:sqlalchemy.pool.QueuePool:Connection  > at 0xa3fdfa0> being returned to pool
> > DEBUG:sqlalchemy.pool.QueuePool:Connection  > at 0xa3fdfa0> checked out from pool
> > INFO:sqlalchemy.engine.base.Engine:SELECT
> > unicode_table.unicode_varchar
> > FROM unicode_table
> > INFO:sqlalchemy.engine.base.Engine:()
> > DEBUG:sqlalchemy.engine.base.Engine:Col ('unicode_varchar',)
> > DEBUG:sqlalchemy.engine.base.Engine:Row (u'', )
> > DEBUG:sqlalchemy.pool.QueuePool:Connection  > at 0xa3fdfa0> being returned to pool
> > DEBUG:sqlalchemy.pool.QueuePool:Connection  > at 0xa3fdfa0> checked out from pool
> > INFO:sqlalchemy.engine.base.Engine:DELETE FROM unicode_table
> > INFO:sqlalchemy.engine.base.Engine:()
> > INFO:sqlalchemy.engine.base.Engine:COMMIT
> > DEBUG:sqlalchemy.pool.QueuePool:Connection  > at 0xa3fdfa0> being returned to pool
> > ok
> > test.sql.test_types.UnicodeTest.test_ignoring_unicode_error ...
> > DEBUG:sqlalchemy.pool.QueuePool:Created new connection
> > 
> > INFO:sqlalchemy.engine.base.Engine:SELECT user_name() as user_name;
> > INFO:sqlalchemy.engine.base.Engine:()
> > DEBUG:sqlalchemy.engine.base.Engine:Col ('user_name',)
> > DEBUG:sqlalchemy.engine.base.Engine:Row (u'SPEED_IT', )
> > INFO:sqlalchemy.engine.base.Engine:
> >            SELECT default_schema_name FROM
> >            sys.database_principals
> >            WHERE nam

Re: [sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-08 Thread Michael Bayer
thanks, the two test failures would be expected in this case, will see what 
response I get on the FreeTDS list if any.


On Sep 8, 2011, at 3:07 PM, Victor Olex wrote:

> Unfortunately I don't have access to a blank database and I took the
> chance and ran your tests on a non-empty database. Tests are mostly
> good: 5/7 pass. You should know that I used current trunk and simply
> commented out the line, which resets the supports_unicode_binds but it
> should be equivalent in effect to your patch. Without the patch all
> tests fail. Below is pretty verbose result for you.
> 
> I hope this helps,
> 
> Victor
> 
> ./sqla_nose.py -v test.sql.test_types:UnicodeTest --dburi="mssql
> +pyodbc://:xxx@xxx:2431/X?
> driver=SQLServer&port=2431&TDS_Version=8.0" --log-
> debug=sqlalchemy.orm.mapper --log-debug=sqlalchemy.pool --log-
> debug=sqlalchemy.engine
> DEBUG:sqlalchemy.pool.QueuePool:Created new connection
> 
> INFO:sqlalchemy.engine.base.Engine:SELECT user_name() as user_name;
> INFO:sqlalchemy.engine.base.Engine:()
> DEBUG:sqlalchemy.engine.base.Engine:Col ('user_name',)
> DEBUG:sqlalchemy.engine.base.Engine:Row (u'SPEED_IT', )
> INFO:sqlalchemy.engine.base.Engine:
>SELECT default_schema_name FROM
>sys.database_principals
>WHERE name = ?
>AND type = 'S'
> 
> INFO:sqlalchemy.engine.base.Engine:(u'SPEED_IT',)
> DEBUG:sqlalchemy.engine.base.Engine:Col ('default_schema_name',)
> DEBUG:sqlalchemy.engine.base.Engine:Row (u'dbo', )
> DEBUG:sqlalchemy.pool.QueuePool:Connection  at 0xa3fdfa0> checked out from pool
> INFO:sqlalchemy.engine.base.Engine:SELECT [COLUMNS_1].[TABLE_SCHEMA],
> [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].[COLUMN_NAME], [COLUMNS_1].
> [IS_NULLABLE], [COLUMNS_1].[DATA_TYPE], [COLUMNS_1].
> [ORDINAL_POSITION], [COLUMNS_1].[CHARACTER_MAXIMUM_LENGTH],
> [COLUMNS_1].[NUMERIC_PRECISION], [COLUMNS_1].[NUMERIC_SCALE],
> [COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].[COLLATION_NAME]
> FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
> WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
> INFO:sqlalchemy.engine.base.Engine:(u'unicode_table', u'dbo')
> DEBUG:sqlalchemy.engine.base.Engine:Col ('TABLE_SCHEMA', 'TABLE_NAME',
> 'COLUMN_NAME', 'IS_NULLABLE', 'DATA_TYPE', 'ORDINAL_POSITION',
> 'CHARACTER_MAXIMUM_LENGTH', 'NUMERIC_PRECISION', 'NUMERIC_SCALE',
> 'COLUMN_DEFAULT', 'COLLATION_NAME')
> INFO:sqlalchemy.engine.base.Engine:
> CREATE TABLE unicode_table (
>id INTEGER NOT NULL IDENTITY(1,1),
>unicode_varchar NVARCHAR(250) NULL,
>unicode_text NTEXT NULL,
>PRIMARY KEY (id)
> )
> 
> 
> INFO:sqlalchemy.engine.base.Engine:()
> INFO:sqlalchemy.engine.base.Engine:COMMIT
> DEBUG:sqlalchemy.pool.QueuePool:Connection  at 0xa3fdfa0> being returned to pool
> test.sql.test_types.UnicodeTest.test_blank_strings ...
> DEBUG:sqlalchemy.pool.QueuePool:Connection  at 0xa3fdfa0> checked out from pool
> INFO:sqlalchemy.engine.base.Engine:INSERT INTO unicode_table
> (unicode_varchar) OUTPUT inserted.id VALUES (?)
> INFO:sqlalchemy.engine.base.Engine:(u'',)
> DEBUG:sqlalchemy.engine.base.Engine:Col ('id',)
> DEBUG:sqlalchemy.engine.base.Engine:Row (1, )
> INFO:sqlalchemy.engine.base.Engine:COMMIT
> DEBUG:sqlalchemy.pool.QueuePool:Connection  at 0xa3fdfa0> being returned to pool
> DEBUG:sqlalchemy.pool.QueuePool:Connection  at 0xa3fdfa0> checked out from pool
> INFO:sqlalchemy.engine.base.Engine:SELECT
> unicode_table.unicode_varchar
> FROM unicode_table
> INFO:sqlalchemy.engine.base.Engine:()
> DEBUG:sqlalchemy.engine.base.Engine:Col ('unicode_varchar',)
> DEBUG:sqlalchemy.engine.base.Engine:Row (u'', )
> DEBUG:sqlalchemy.pool.QueuePool:Connection  at 0xa3fdfa0> being returned to pool
> DEBUG:sqlalchemy.pool.QueuePool:Connection  at 0xa3fdfa0> checked out from pool
> INFO:sqlalchemy.engine.base.Engine:DELETE FROM unicode_table
> INFO:sqlalchemy.engine.base.Engine:()
> INFO:sqlalchemy.engine.base.Engine:COMMIT
> DEBUG:sqlalchemy.pool.QueuePool:Connection  at 0xa3fdfa0> being returned to pool
> ok
> test.sql.test_types.UnicodeTest.test_ignoring_unicode_error ...
> DEBUG:sqlalchemy.pool.QueuePool:Created new connection
> 
> INFO:sqlalchemy.engine.base.Engine:SELECT user_name() as user_name;
> INFO:sqlalchemy.engine.base.Engine:()
> DEBUG:sqlalchemy.engine.base.Engine:Col ('user_name',)
> DEBUG:sqlalchemy.engine.base.Engine:Row (u'SPEED_IT', )
> INFO:sqlalchemy.engine.base.Engine:
>SELECT default_schema_name FROM
>sys.database_principals
>WHERE name = ?
>AND type = 'S'
> 
> INFO:sqlalchemy.engine.base.Engine:(u'SPEED_IT',)
> DEBUG:sqlalchemy.engine.base.Engine:Col ('default_schema_name',)
> DEBUG:sqlalchemy.engine.base.Engine:Row (u'dbo', )
> DEBUG:sqlalchemy.pool.QueuePool:Connection  at 0xa4490e0> checked out from pool
> INFO:sqlalchemy.engine.base.Engine:SELECT [COLUMNS_1].[TABLE_SCHEMA],
> [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].[COLUMN_NAME], [COLUMNS_1].
> [IS_NULLABLE], [COLUMNS_1].[DA

Re: [sqlalchemy] Implementing a "radio button" behavior?

2011-09-08 Thread Michael Bayer

On Sep 8, 2011, at 3:32 PM, Vlad K. wrote:

> 
> For example the following:
> 
> row = session.query(Model).filter_by(pkey=pkey_value).first() or Model()
> row.some_field = 123;
> ...
> 
> 
> session.query(Model).filter_by(nonprimary_key=some_value).update({...}, False)
> session.merge(row)
> session.flush()
> 
> When flush() gets called, the merge() is executed (query sent to DB) before 
> the update called above it, in this particular example.

That isn't correct, query.update() emits UPDATE immediately.   Do you have a 
SQL log illustrating what is being emitted ?



> 
> 
> 
> .oO V Oo.
> 
> 
> On 09/08/2011 04:37 PM, Michael Bayer wrote:
>> On Sep 8, 2011, at 9:32 AM, Vlad K. wrote:
>> 
>>> 
>>> As a "by the way" to this question, I've noticed that the order of queries 
>>> given before flush() is not preserved for the flush(). Any way to enforce 
>>> the order?
>> Trying to parse what this means.   Suppose you did a single SELECT, loaded 
>> five objects.  Then changed them and did a flush.   What is the "order of 
>> queries" to be preserved?
>> 
>> Guessing, perhaps you mean, the order in which a particular object became 
>> present in the Session, that's the order in which UPDATE statements should 
>> be emitted.UPDATE statements are in fact ordered in terms of the primary 
>> key of the row.   The reason for this is to minimize the chance of 
>> deadlocks.Process A and process B both need to update primary key 1 and 
>> 2 in a table.   If process A starts with 1 and process B starts with 2, you 
>> have a deadlock.So an ordering that is deterministic across 
>> transactions, where PK ordering is a pretty good assumption in most cases, 
>> is the best behavior here.
>> 
>> If you need UPDATE statements in a specific order, you can A. emit flush() 
>> specifically against a Session in which you're controlling what's "dirty", 
>> B. use query.update(), C. use the Core SQL language instead of the ORM for 
>> this particular series of operations (though query.update() likely a happy 
>> medium).
>> 
>> 
>> 
>> 
> 




Re: [sqlalchemy] Implementing a "radio button" behavior?

2011-09-08 Thread Vlad K.


For example the following:

row = session.query(Model).filter_by(pkey=pkey_value).first() or Model()
row.some_field = 123
...


session.query(Model).filter_by(nonprimary_key=some_value).update({...}, False)

session.merge(row)
session.flush()

When flush() gets called, the merge() is executed (query sent to DB) 
before the update called above it, in this particular example.




.oO V Oo.


On 09/08/2011 04:37 PM, Michael Bayer wrote:

On Sep 8, 2011, at 9:32 AM, Vlad K. wrote:



As a "by the way" to this question, I've noticed that the order of queries 
given before flush() is not preserved for the flush(). Any way to enforce the order?

Trying to parse what this means.   Suppose you did a single SELECT, loaded five objects.  
Then changed them and did a flush.   What is the "order of queries" to be 
preserved?

Guessing, perhaps you mean, the order in which a particular object became 
present in the Session, that's the order in which UPDATE statements should be 
emitted.UPDATE statements are in fact ordered in terms of the primary key 
of the row.   The reason for this is to minimize the chance of deadlocks.
Process A and process B both need to update primary key 1 and 2 in a table.   
If process A starts with 1 and process B starts with 2, you have a deadlock.
So an ordering that is deterministic across transactions, where PK ordering is 
a pretty good assumption in most cases, is the best behavior here.

If you need UPDATE statements in a specific order, you can A. emit flush() specifically 
against a Session in which you're controlling what's "dirty", B. use 
query.update(), C. use the Core SQL language instead of the ORM for this particular 
series of operations (though query.update() likely a happy medium).
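
A tiny illustration of that ordering (Model and pkey as used earlier in the
thread; values made up) - whichever object was modified first, the flush
orders the UPDATEs by primary key, so concurrent transactions take their row
locks in the same sequence:

a = session.query(Model).get(2)
b = session.query(Model).get(1)
a.some_field = "x"          # modified first
b.some_field = "y"          # modified second
session.flush()
# per the ordering described above, the flush emits:
#   UPDATE ... SET some_field='y' WHERE pkey = 1
#   UPDATE ... SET some_field='x' WHERE pkey = 2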









Re: [sqlalchemy] Re: Question on session.expunge.all()

2011-09-08 Thread Vlad K.


No, I can't truncate the table for other reasons, as I mentioned in my 
original question. :)


The issue here was not how to sync the data, but whether processed rows 
stay in the session even though the objects (model instances) are discarded 
at the end of each iteration (each csv row) - in other words, whether I 
have to call expunge_all() or not.


It seems I don't have to (SQLAlchemy 0.7, PostgreSQL backend).


Thanks!

.oO V Oo.


On 09/08/2011 06:47 PM, Victor Olex wrote:

Since you are effectively overwriting the table with new file
contents, the fastest may well be to truncate the table then insert
all contents. If you were to just append and update then
session.merge() is convenient way to do this though I am not sure if
the fastest.





[sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-08 Thread Victor Olex
Unfortunately I don't have access to a blank database, so I took the
chance and ran your tests on a non-empty database. The tests are mostly
good: 5/7 pass. You should know that I used current trunk and simply
commented out the line which resets supports_unicode_binds, but that
should be equivalent in effect to your patch. Without the patch all
tests fail. Below is a pretty verbose result for you.

I hope this helps,

Victor

./sqla_nose.py -v test.sql.test_types:UnicodeTest --dburi="mssql
+pyodbc://:xxx@xxx:2431/X?
driver=SQLServer&port=2431&TDS_Version=8.0" --log-
debug=sqlalchemy.orm.mapper --log-debug=sqlalchemy.pool --log-
debug=sqlalchemy.engine
DEBUG:sqlalchemy.pool.QueuePool:Created new connection

INFO:sqlalchemy.engine.base.Engine:SELECT user_name() as user_name;
INFO:sqlalchemy.engine.base.Engine:()
DEBUG:sqlalchemy.engine.base.Engine:Col ('user_name',)
DEBUG:sqlalchemy.engine.base.Engine:Row (u'SPEED_IT', )
INFO:sqlalchemy.engine.base.Engine:
SELECT default_schema_name FROM
sys.database_principals
WHERE name = ?
AND type = 'S'

INFO:sqlalchemy.engine.base.Engine:(u'SPEED_IT',)
DEBUG:sqlalchemy.engine.base.Engine:Col ('default_schema_name',)
DEBUG:sqlalchemy.engine.base.Engine:Row (u'dbo', )
DEBUG:sqlalchemy.pool.QueuePool:Connection  checked out from pool
INFO:sqlalchemy.engine.base.Engine:SELECT [COLUMNS_1].[TABLE_SCHEMA],
[COLUMNS_1].[TABLE_NAME], [COLUMNS_1].[COLUMN_NAME], [COLUMNS_1].
[IS_NULLABLE], [COLUMNS_1].[DATA_TYPE], [COLUMNS_1].
[ORDINAL_POSITION], [COLUMNS_1].[CHARACTER_MAXIMUM_LENGTH],
[COLUMNS_1].[NUMERIC_PRECISION], [COLUMNS_1].[NUMERIC_SCALE],
[COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].[COLLATION_NAME]
FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
INFO:sqlalchemy.engine.base.Engine:(u'unicode_table', u'dbo')
DEBUG:sqlalchemy.engine.base.Engine:Col ('TABLE_SCHEMA', 'TABLE_NAME',
'COLUMN_NAME', 'IS_NULLABLE', 'DATA_TYPE', 'ORDINAL_POSITION',
'CHARACTER_MAXIMUM_LENGTH', 'NUMERIC_PRECISION', 'NUMERIC_SCALE',
'COLUMN_DEFAULT', 'COLLATION_NAME')
INFO:sqlalchemy.engine.base.Engine:
CREATE TABLE unicode_table (
id INTEGER NOT NULL IDENTITY(1,1),
unicode_varchar NVARCHAR(250) NULL,
unicode_text NTEXT NULL,
PRIMARY KEY (id)
)


INFO:sqlalchemy.engine.base.Engine:()
INFO:sqlalchemy.engine.base.Engine:COMMIT
DEBUG:sqlalchemy.pool.QueuePool:Connection  being returned to pool
test.sql.test_types.UnicodeTest.test_blank_strings ...
DEBUG:sqlalchemy.pool.QueuePool:Connection  checked out from pool
INFO:sqlalchemy.engine.base.Engine:INSERT INTO unicode_table
(unicode_varchar) OUTPUT inserted.id VALUES (?)
INFO:sqlalchemy.engine.base.Engine:(u'',)
DEBUG:sqlalchemy.engine.base.Engine:Col ('id',)
DEBUG:sqlalchemy.engine.base.Engine:Row (1, )
INFO:sqlalchemy.engine.base.Engine:COMMIT
DEBUG:sqlalchemy.pool.QueuePool:Connection  being returned to pool
DEBUG:sqlalchemy.pool.QueuePool:Connection  checked out from pool
INFO:sqlalchemy.engine.base.Engine:SELECT
unicode_table.unicode_varchar
FROM unicode_table
INFO:sqlalchemy.engine.base.Engine:()
DEBUG:sqlalchemy.engine.base.Engine:Col ('unicode_varchar',)
DEBUG:sqlalchemy.engine.base.Engine:Row (u'', )
DEBUG:sqlalchemy.pool.QueuePool:Connection  being returned to pool
DEBUG:sqlalchemy.pool.QueuePool:Connection  checked out from pool
INFO:sqlalchemy.engine.base.Engine:DELETE FROM unicode_table
INFO:sqlalchemy.engine.base.Engine:()
INFO:sqlalchemy.engine.base.Engine:COMMIT
DEBUG:sqlalchemy.pool.QueuePool:Connection  being returned to pool
ok
test.sql.test_types.UnicodeTest.test_ignoring_unicode_error ...
DEBUG:sqlalchemy.pool.QueuePool:Created new connection

INFO:sqlalchemy.engine.base.Engine:SELECT user_name() as user_name;
INFO:sqlalchemy.engine.base.Engine:()
DEBUG:sqlalchemy.engine.base.Engine:Col ('user_name',)
DEBUG:sqlalchemy.engine.base.Engine:Row (u'SPEED_IT', )
INFO:sqlalchemy.engine.base.Engine:
SELECT default_schema_name FROM
sys.database_principals
WHERE name = ?
AND type = 'S'

INFO:sqlalchemy.engine.base.Engine:(u'SPEED_IT',)
DEBUG:sqlalchemy.engine.base.Engine:Col ('default_schema_name',)
DEBUG:sqlalchemy.engine.base.Engine:Row (u'dbo', )
DEBUG:sqlalchemy.pool.QueuePool:Connection  checked out from pool
INFO:sqlalchemy.engine.base.Engine:SELECT [COLUMNS_1].[TABLE_SCHEMA],
[COLUMNS_1].[TABLE_NAME], [COLUMNS_1].[COLUMN_NAME], [COLUMNS_1].
[IS_NULLABLE], [COLUMNS_1].[DATA_TYPE], [COLUMNS_1].
[ORDINAL_POSITION], [COLUMNS_1].[CHARACTER_MAXIMUM_LENGTH],
[COLUMNS_1].[NUMERIC_PRECISION], [COLUMNS_1].[NUMERIC_SCALE],
[COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].[COLLATION_NAME]
FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
INFO:sqlalchemy.engine.base.Engine:(u'unicode_err_table', u'dbo')
DEBUG:sqlalchemy.engine.base.Engine:Col ('TABLE_SCHEMA', 'TABLE_NAME',
'COLUMN_NAME', 'IS_NUL

Re: [sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-08 Thread Michael Bayer
Hi Victor  -

Since you're there, have you had any luck actually running the unit tests?   The 
test in particular here is:

./sqla_nose.py  -v test.sql.test_types:UnicodeTest 
--dburi=mssql+pyodbc://user:pass@dsn

Also, on the Mac, iODBC is the standard ODBC product.  unixODBC can be built, 
though in the past I've found this to have problems - I'll see if I can look 
into it again if I have time.

I have no doubt that this works on linux so the path to my getting this patch 
committed is to get everything installed on a linux VM, and getting the tests 
to run as well as they did before at least on that platform.

For OSX I'm not sure where that will lead - if we need to put a doc on the site 
saying, "you need to replace OSX's standard ODBC install", then that will be 
that, but would need to check that all out first.

Also, sorry I missed that pyodbc ticket 192 was yours.   Disturbing that 
there's no response from the maintainer?



On Sep 8, 2011, at 12:41 PM, Victor Olex wrote:

> I know of those issues with pyodbc package. Michael, please read my
> first response where I wrote how to build the unixODBC, FreeTDS and
> pyodbc stack. I gave this detail for a reason - i.e. that you can
> replicate my build.
> 
> By the way I did sqlalchemy level testing as promised. Predictably,
> the DDL using both patched and non-patched PyODBCConnector executes
> correctly. The only difference is that parameters are bound to unicode
> strings in the former. This actually works exactly as I would expect
> it i.e. accepts plain string for Unicode fields. Most fields in
> COLUMNS table are sysname, which generally equates to nvarchar(128).
> Here's complete screenshot for you.
> 
> ### Without patch:
> 
> In [29]: speed.metadata.create_all()
> 2011-09-08 12:17:01,094 INFO sqlalchemy.engine.base.Engine SELECT
> [COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].
> [COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE], [COLUMNS_1].[DATA_TYPE],
> [COLUMNS_1].[ORDINAL_POSITION], [COLUMNS_1].
> [CHARACTER_MAXIMUM_LENGTH], [COLUMNS_1].[NUMERIC_PRECISION],
> [COLUMNS_1].[NUMERIC_SCALE], [COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].
> [COLLATION_NAME]
> FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
> WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
> 2011-09-08 12:17:01,094 INFO sqlalchemy.engine.base.Engine ('A',
> 'dbo')
> 2011-09-08 12:17:01,099 INFO sqlalchemy.engine.base.Engine SELECT
> [COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].
> [COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE], [COLUMNS_1].[DATA_TYPE],
> [COLUMNS_1].[ORDINAL_POSITION], [COLUMNS_1].
> [CHARACTER_MAXIMUM_LENGTH], [COLUMNS_1].[NUMERIC_PRECISION],
> [COLUMNS_1].[NUMERIC_SCALE], [COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].
> [COLLATION_NAME]
> FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
> WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
> 2011-09-08 12:17:01,099 INFO sqlalchemy.engine.base.Engine ('B',
> 'CMBS')
> 2011-09-08 12:17:01,102 INFO sqlalchemy.engine.base.Engine
> CREATE TABLE [A] (
>[ID] INTEGER NOT NULL IDENTITY(1,1),
>col1 NTEXT NULL,
>col2 VARCHAR(255) NULL,
>PRIMARY KEY ([ID])
> )
> 
> 
> 2011-09-08 12:17:01,102 INFO sqlalchemy.engine.base.Engine ()
> 2011-09-08 12:17:01,152 INFO sqlalchemy.engine.base.Engine COMMIT
> 2011-09-08 12:17:01,154 INFO sqlalchemy.engine.base.Engine
> CREATE TABLE [CMBS].[B] (
>[ID] INTEGER NOT NULL IDENTITY(1,1),
>col1 NTEXT NULL,
>PRIMARY KEY ([ID])
> )
> 
> 
> 2011-09-08 12:17:01,154 INFO sqlalchemy.engine.base.Engine ()
> 2011-09-08 12:17:01,184 INFO sqlalchemy.engine.base.Engine COMMIT
> 
> In [30]: speed.metadata.drop_all()
> 2011-09-08 12:17:04,729 INFO sqlalchemy.engine.base.Engine SELECT
> [COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].
> [COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE], [COLUMNS_1].[DATA_TYPE],
> [COLUMNS_1].[ORDINAL_POSITION], [COLUMNS_1].
> [CHARACTER_MAXIMUM_LENGTH], [COLUMNS_1].[NUMERIC_PRECISION],
> [COLUMNS_1].[NUMERIC_SCALE], [COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].
> [COLLATION_NAME]
> FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
> WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
> 2011-09-08 12:17:04,729 INFO sqlalchemy.engine.base.Engine ('B',
> 'CMBS')
> 2011-09-08 12:17:04,734 INFO sqlalchemy.engine.base.Engine SELECT
> [COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].
> [COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE], [COLUMNS_1].[DATA_TYPE],
> [COLUMNS_1].[ORDINAL_POSITION], [COLUMNS_1].
> [CHARACTER_MAXIMUM_LENGTH], [COLUMNS_1].[NUMERIC_PRECISION],
> [COLUMNS_1].[NUMERIC_SCALE], [COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].
> [COLLATION_NAME]
> FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
> WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
> 2011-09-08 12:17:04,734 INFO sqlalchemy.engine.base.Engine ('A',
> 'dbo')
> 2011-09-08 12:17:04,737 INFO sqlalchemy.engine.base.Engine
> DROP TABLE [

Re: [sqlalchemy] Unregister an event listener

2011-09-08 Thread Michael Bayer
There's a remove() method that isn't documented, since there are event-specific 
implementations that need to be built out, and there is also no test coverage.

Ticket 2268 is the placeholder for the task:

http://www.sqlalchemy.org/trac/ticket/2268
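
For illustration, registration and the (undocumented, per the above) removal
would look roughly like this; SomeModel and the listener body are made up, and
remove() should be treated as provisional until that ticket is resolved:

from sqlalchemy import event

def my_listener(mapper, connection, target):
    # complex logic here
    pass

event.listen(SomeModel, "before_insert", my_listener)
# ... do the work that needs the listener ...
event.remove(SomeModel, "before_insert", my_listener)
# ... continue on without it ...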

On Sep 8, 2011, at 12:49 PM, Mike Conley wrote:

> Is there a method available to unregister an event listener? Can't think of a 
> specific use case right now, but it would go something like this.
> 
> define a listener with some complex logic
> do stuff that uses that listener
> unregister the listener because the complex stuff is done
> continue on without the listener
> 
> -- 
> Mike Conley
> 
> 




[sqlalchemy] Unregister an event listener

2011-09-08 Thread Mike Conley
Is there a method available to unregister an event listener? Can't think of
a specific use case right now, but it would go something like this.

define a listener with some complex logic
do stuff that uses that listener
unregister the listener because the complex stuff is done
continue on without the listener

-- 
Mike Conley




[sqlalchemy] Re: Question on session.expunge.all()

2011-09-08 Thread Victor Olex
Since you are effectively overwriting the table with the new file
contents, the fastest may well be to truncate the table and then insert
all contents. If you were to just append and update, then
session.merge() is a convenient way to do this, though I am not sure it
is the fastest.
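
A sketch of the merge() variant, assuming a Product model keyed by product_id
and csv_rows holding the parsed file (both names are illustrative):

for rec in csv_rows:
    obj = Product(product_id=rec["id"], price=rec["price"])
    # known primary key: the existing row is updated; unknown key: a new row is inserted
    session.merge(obj)
session.commit()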

On Sep 7, 5:53 pm, "Vlad K."  wrote:
> Great, thanks!
>
> .oO V Oo.
>
> On 09/06/2011 04:48 PM, Michael Bayer wrote:
>
> > On Sep 6, 2011, at 10:40 AM, Vlad K. wrote:
>
> >> I have a products database which is daily syncronized with an external 
> >> source via a csv file. There are several thousand rows in question. The 
> >> synchronization does two things:
>
> >> 1. Update only price if changed for existing products
> >> 2. Insert new products if they don't exist with all fields from csv
>
> >> But basically, for each row in the csv, after the row is processed (one of 
> >> the above two things is done), I don't need the object in session anymore. 
> >> Memory and performance are of course an issue, and I can't find a way to 
> >> test memory consumption with or without expunge_all() so my questions are:
>
> >> 1. Do I need to session.expunge_all() after each csv row is processed, or 
> >> are they automatically garbage collected?
> >> 2. Is there any significant overhead inherent in expunge_all() that I'm 
> >> not seeing right now?
>
> >> Performance-wise, it seems the task is complete in more or less same time 
> >> with or without expunge_all()
> > In modern SQLAlchemy, the Session maintains only weak references to objects 
> > that are "clean", that is, are persistent in the database and have no 
> > pending changes to be flushed.    As all references to them are lost, they 
> > are garbage collected by the Python interpreter.    Note that objects are 
> > strongly referenced when they are present in the collection or attribute of 
> > a parent object, until that parent is also garbage collected.    There is 
> > an overhead to process which occurs when the object is dereferenced and 
> > removed from the session (weakref callbacks handle the accounting).  But 
> > calling expunge_all() probably isn't doing much here as the objects are 
> > likely being cleaned out in the same way regardless.
>
> >> While I'm at it, I also need to delete rows in the database that do not 
> >> have corresponding row in the csv file (say linked by csv_key field), the 
> >> first solution that comes to mind is building a list of keys in the csv 
> >> file (few thousand keys) and then doing:
>
> >> session.query(Product).filter(not_(Product.product_id.in_(csv_keys))).delete()
>
> >> I believe there is less overhead in sending such a large (but single!) 
> >> query to the database and leaving it to determine what to delete by 
> >> itself, than selecting each row in the database and checking if its 
> >> csv_key exists in the csv_keys list on the application side and then 
> >> issuing delete statements for rows that matched the criteria. Am I wrong?
> > That's definitely a dramatically faster way to do things, rather than to 
> > load each record individually and mark as deleted - it's the primary reason 
> > delete() and update() are there.   You'll probably want to send "False" as 
> > the value of synchronize_session to the delete() call so that it doesn't go 
> > through the effort of locating local records that were affected (unless you 
> > need that feature).
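
Putting those two pieces together (Product and csv_keys as in the discussion
above), the cleanup step would look roughly like:

from sqlalchemy import not_

session.query(Product).filter(
    not_(Product.product_id.in_(csv_keys))
).delete(synchronize_session=False)
session.commit()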




[sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-08 Thread Victor Olex
I know of those issues with the pyodbc package. Michael, please read my
first response where I wrote how to build the unixODBC, FreeTDS and
pyodbc stack. I gave this detail for a reason - i.e. so that you can
replicate my build.

By the way, I did the sqlalchemy-level testing as promised. Predictably,
the DDL using both the patched and non-patched PyODBCConnector executes
correctly. The only difference is that parameters are bound to unicode
strings in the former. This actually works exactly as I would expect,
i.e. it accepts plain strings for Unicode fields. Most fields in the
COLUMNS table are sysname, which generally equates to nvarchar(128).
Here's the complete output for you.

### Without patch:

In [29]: speed.metadata.create_all()
2011-09-08 12:17:01,094 INFO sqlalchemy.engine.base.Engine SELECT
[COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].
[COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE], [COLUMNS_1].[DATA_TYPE],
[COLUMNS_1].[ORDINAL_POSITION], [COLUMNS_1].
[CHARACTER_MAXIMUM_LENGTH], [COLUMNS_1].[NUMERIC_PRECISION],
[COLUMNS_1].[NUMERIC_SCALE], [COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].
[COLLATION_NAME]
FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
2011-09-08 12:17:01,094 INFO sqlalchemy.engine.base.Engine ('A',
'dbo')
2011-09-08 12:17:01,099 INFO sqlalchemy.engine.base.Engine SELECT
[COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].
[COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE], [COLUMNS_1].[DATA_TYPE],
[COLUMNS_1].[ORDINAL_POSITION], [COLUMNS_1].
[CHARACTER_MAXIMUM_LENGTH], [COLUMNS_1].[NUMERIC_PRECISION],
[COLUMNS_1].[NUMERIC_SCALE], [COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].
[COLLATION_NAME]
FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
2011-09-08 12:17:01,099 INFO sqlalchemy.engine.base.Engine ('B',
'CMBS')
2011-09-08 12:17:01,102 INFO sqlalchemy.engine.base.Engine
CREATE TABLE [A] (
[ID] INTEGER NOT NULL IDENTITY(1,1),
col1 NTEXT NULL,
col2 VARCHAR(255) NULL,
PRIMARY KEY ([ID])
)


2011-09-08 12:17:01,102 INFO sqlalchemy.engine.base.Engine ()
2011-09-08 12:17:01,152 INFO sqlalchemy.engine.base.Engine COMMIT
2011-09-08 12:17:01,154 INFO sqlalchemy.engine.base.Engine
CREATE TABLE [CMBS].[B] (
[ID] INTEGER NOT NULL IDENTITY(1,1),
col1 NTEXT NULL,
PRIMARY KEY ([ID])
)


2011-09-08 12:17:01,154 INFO sqlalchemy.engine.base.Engine ()
2011-09-08 12:17:01,184 INFO sqlalchemy.engine.base.Engine COMMIT

In [30]: speed.metadata.drop_all()
2011-09-08 12:17:04,729 INFO sqlalchemy.engine.base.Engine SELECT
[COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].
[COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE], [COLUMNS_1].[DATA_TYPE],
[COLUMNS_1].[ORDINAL_POSITION], [COLUMNS_1].
[CHARACTER_MAXIMUM_LENGTH], [COLUMNS_1].[NUMERIC_PRECISION],
[COLUMNS_1].[NUMERIC_SCALE], [COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].
[COLLATION_NAME]
FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
2011-09-08 12:17:04,729 INFO sqlalchemy.engine.base.Engine ('B',
'CMBS')
2011-09-08 12:17:04,734 INFO sqlalchemy.engine.base.Engine SELECT
[COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].
[COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE], [COLUMNS_1].[DATA_TYPE],
[COLUMNS_1].[ORDINAL_POSITION], [COLUMNS_1].
[CHARACTER_MAXIMUM_LENGTH], [COLUMNS_1].[NUMERIC_PRECISION],
[COLUMNS_1].[NUMERIC_SCALE], [COLUMNS_1].[COLUMN_DEFAULT], [COLUMNS_1].
[COLLATION_NAME]
FROM [INFORMATION_SCHEMA].[COLUMNS] AS [COLUMNS_1]
WHERE [COLUMNS_1].[TABLE_NAME] = ? AND [COLUMNS_1].[TABLE_SCHEMA] = ?
2011-09-08 12:17:04,734 INFO sqlalchemy.engine.base.Engine ('A',
'dbo')
2011-09-08 12:17:04,737 INFO sqlalchemy.engine.base.Engine
DROP TABLE [CMBS].[B]
2011-09-08 12:17:04,737 INFO sqlalchemy.engine.base.Engine ()
2011-09-08 12:17:04,825 INFO sqlalchemy.engine.base.Engine COMMIT
2011-09-08 12:17:04,827 INFO sqlalchemy.engine.base.Engine
DROP TABLE [A]
2011-09-08 12:17:04,827 INFO sqlalchemy.engine.base.Engine ()
2011-09-08 12:17:04,850 INFO sqlalchemy.engine.base.Engine COMMIT

### With patch:

In [5]: speed.metadata.drop_all()
2011-09-08 12:20:35,596 INFO sqlalchemy.engine.base.Engine SELECT
user_name() as user_name;
2011-09-08 12:20:35,597 INFO sqlalchemy.engine.base.Engine ()
2011-09-08 12:20:35,599 INFO sqlalchemy.engine.base.Engine
SELECT default_schema_name FROM
sys.database_principals
WHERE name = ?
AND type = 'S'

2011-09-08 12:20:35,599 INFO sqlalchemy.engine.base.Engine (u'',)
2011-09-08 12:20:35,608 INFO sqlalchemy.engine.base.Engine SELECT
[COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME], [COLUMNS_1].
[COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE], [COLUMNS_1].[DATA_TYPE],
[COLUMNS_1].[ORDINAL_POSITION], [COLUMNS_1].
[CHARACTER_MAXIMUM_LENGTH], [COLUMNS_1].[NUMERIC_PRECISION],
[COLUMNS_1].[NUMERIC_SCALE], [COLUMNS_1].[C

Re: [sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-08 Thread Michael Bayer

On Sep 8, 2011, at 11:37 AM, Victor Olex wrote:

> Pyodbc issue 209 works fine in my setup.

that is very strange ?   There are files missing from the .zip.  If you 
installed from the zip I don't see how it built for you.  Here's the original 
issue:

http://code.google.com/p/pyodbc/issues/detail?id=192

> I think the key thing is
> matching SQL Server version with the correct TDS protocol version and
> correct FreeTDS version.

I use tds version 8.0 for this particular DSN and that is working fine in 
production with FreeTDS 0.82, on both linux and OSX platforms.   These issues 
have all been introduced with FreeTDS 0.91.   Here, I tried 7.2 and got 
slightly better results, though unicode round trips still fail when Python 
unicodes are passed.  PyODBC still dies with "MemoryError" if I attempt to 
query for a table that already exists.


> Also with regards to your Mac testing, check
> if you have the libiconv installed and that FreeTDS is built with it.
> http://www.freetds.org/userguide/config.htm

yup that's in my configure:

checking for iconv... yes
checking how to link with libiconv... -liconv
checking for iconv declaration... extern size_t iconv (iconv_t cd, 
char * *inbuf, size_t *inbytesleft, char * *outbuf, size_t *outbytesleft);




> 
> On Sep 8, 10:32 am, Michael Bayer  wrote:
>> On Sep 8, 2011, at 9:37 AM, Victor Olex wrote:
>> 
>>> I never for a moment thought that your change was thoughtless. To the
>>> contrary, I have huge respect for SQLAlchemy. I will try to test the
>>> drop_all and your pyodbc issue with my setup and to report here later
>>> today.
>> 
>> thanks ! Unfortunately I've tested this some more and things are looking 
>> very, very bad.For us to support 0.91, we'd need to figure out how to 
>> get all of our "table exists" functions to work.   If you look 
>> at http://www.sqlalchemy.org/trac/ticket/2273, I've now added a patch that 
>> detects 0.82 vs. 0.91 and sets the flag, but you can see that we can't send 
>> u'' strings when we query INFORMATION_SCHEMA still - literally, the number 
>> of characters present in one of the bind parameters changes the behavior.   
>> So there is something very strange and arbitrary (seems basically like it's 
>> just making guesses about datatypes) going on with the internals of FreeTDS, 
>> and I'm not optimistic about being able to get clear answers from their 
>> list.
>> 
>> Would you have any resources to evaluate the test cases on that ticket , 
>> both are now against pure PyODBC 2.1.9?Without being able to query 
>> information schema, none of our unit tests can run period with 0.91 - I need 
>> a reliable way to do so, hopefully without losing support for table names 
>> that contain non-ascii characters.A lot of adjustments to the MSSQL 
>> dialect and testing will be needed.
> 




[sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-08 Thread Victor Olex
Pyodbc issue 209 works fine in my setup. I think the key thing is
matching SQL Server version with the correct TDS protocol version and
correct FreeTDS version. Also with regards to your Mac testing, check
if you have the libiconv installed and that FreeTDS is built with it.
http://www.freetds.org/userguide/config.htm

On Sep 8, 10:32 am, Michael Bayer  wrote:
> On Sep 8, 2011, at 9:37 AM, Victor Olex wrote:
>
> > I never for a moment thought that your change was thoughtless. To the
> > contrary, I have huge respect for SQLAlchemy. I will try to test the
> > drop_all and your pyodbc issue with my setup and to report here later
> > today.
>
> thanks !     Unfortunately I've tested this some more and things are looking 
> very, very bad.    For us to support 0.91, we'd need to figure out how to get 
> all of our "table exists" functions to work.   If you look 
> at http://www.sqlalchemy.org/trac/ticket/2273, I've now added a patch that 
> detects 0.82 vs. 0.91 and sets the flag, but you can see that we can't send 
> u'' strings when we query INFORMATION_SCHEMA still - literally, the number of 
> characters present in one of the bind parameters changes the behavior.   So 
> there is something very strange and arbitrary (seems basically like it's just 
> making guesses about datatypes) going on with the internals of FreeTDS, and 
> I'm not optimistic about being able to get clear answers from their list.    
>
> Would you have any resources to evaluate the test cases on that ticket , both 
> are now against pure PyODBC 2.1.9?    Without being able to query information 
> schema, none of our unit tests can run period with 0.91 - I need a reliable 
> way to do so, hopefully without losing support for table names that contain 
> non-ascii characters.    A lot of adjustments to the MSSQL dialect and 
> testing will be needed.    




Re: [sqlalchemy] Implementing a "radio button" behavior?

2011-09-08 Thread Michael Bayer

On Sep 8, 2011, at 9:32 AM, Vlad K. wrote:

> 
> 
> As a "by the way" to this question, I've noticed that the order of queries 
> given before flush() is not preserved for the flush(). Any way to enforce the 
> order?

Trying to parse what this means.   Suppose you did a single SELECT, loaded five 
objects.  Then changed them and did a flush.   What is the "order of queries" 
to be preserved?

Guessing, perhaps you mean, the order in which a particular object became 
present in the Session, that's the order in which UPDATE statements should be 
emitted.  UPDATE statements are in fact ordered in terms of the primary key 
of the row.   The reason for this is to minimize the chance of deadlocks.
Process A and process B both need to update primary key 1 and 2 in a table.   
If process A starts with 1 and process B starts with 2, you have a deadlock.
So an ordering that is deterministic across transactions, where PK ordering is 
a pretty good assumption in most cases, is the best behavior here.

If you need UPDATE statements in a specific order, you can A. emit flush() 
specifically against a Session in which you're controlling what's "dirty", B. 
use query.update(), C. use the Core SQL language instead of the ORM for this 
particular series of operations (though query.update() is likely a happy medium).
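
A minimal sketch of option B for the "radio button" case discussed in this 
thread, assuming a hypothetical Item model with group_id and flag columns 
(the model, table and values here are illustrative only, not from the 
original code):

from sqlalchemy import create_engine, Column, Integer, Boolean
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    group_id = Column(Integer, nullable=False)
    flag = Column(Boolean, nullable=False, default=False)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add_all([Item(id=1, group_id=123, flag=True),
                 Item(id=2, group_id=123, flag=False)])
session.commit()

# Blanket-clear the flag for the whole group with one bulk UPDATE;
# synchronize_session=False means in-memory objects are not synchronized.
session.query(Item).filter(Item.group_id == 123).update(
    {"flag": False}, synchronize_session=False)

# Then set the flag on the single row that should "win" and commit.
session.query(Item).get(2).flag = True
session.commit()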







Re: [sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-08 Thread Michael Bayer

On Sep 8, 2011, at 9:37 AM, Victor Olex wrote:

> I never for a moment thought that your change was thoughtless. To the
> contrary, I have huge respect for SQLAlchemy. I will try to test the
> drop_all and your pyodbc issue with my setup and to report here later
> today.

thanks ! Unfortunately I've tested this some more and things are looking 
very, very bad.  For us to support 0.91, we'd need to figure out how to get 
all of our "table exists" functions to work.   If you look at 
http://www.sqlalchemy.org/trac/ticket/2273, I've now added a patch that detects 
0.82 vs. 0.91 and sets the flag, but you can see that we can't send u'' strings 
when we query INFORMATION_SCHEMA still - literally, the number of characters 
present in one of the bind parameters changes the behavior.   So there is 
something very strange and arbitrary (seems basically like it's just making 
guesses about datatypes) going on with the internals of FreeTDS, and I'm not 
optimistic about being able to get clear answers from their list.

Would you have any resources to evaluate the test cases on that ticket, both 
are now against pure PyODBC 2.1.9?  Without being able to query information 
schema, none of our unit tests can run period with 0.91 - I need a reliable way 
to do so, hopefully without losing support for table names that contain 
non-ascii characters.  A lot of adjustments to the MSSQL dialect and testing 
will be needed.
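
For anyone who wants to poke at this outside of SQLAlchemy, here is a minimal 
sketch of the kind of standalone check being discussed, going straight through 
pyodbc with unicode bind parameters; the DSN, credentials and table/schema 
values are placeholders, not taken from the actual test cases on the ticket:

# -*- coding: utf-8 -*-
# Standalone check: bind unicode parameters against INFORMATION_SCHEMA
# through pyodbc/FreeTDS.  DSN, UID and PWD are placeholders.
import pyodbc

conn = pyodbc.connect("DSN=server_name;UID=username;PWD=password")
cursor = conn.cursor()
cursor.execute(
    "SELECT [TABLE_NAME] FROM [INFORMATION_SCHEMA].[TABLES] "
    "WHERE [TABLE_NAME] = ? AND [TABLE_SCHEMA] = ?",
    (u'A', u'dbo'))
print cursor.fetchall()
conn.close()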





[sqlalchemy] Re: PyODBCConnector, possible wrong assumption re Unicode in bind parameters

2011-09-08 Thread Victor Olex
I never for a moment thought that your change was thoughtless. To the
contrary, I have huge respect for SQLAlchemy. I will try to test the
drop_all and your pyodbc issue with my setup and to report here later
today.

Meanwhile, I can tell you how to build the stack because it is a bit
tricky given certain package issues. I chose to build unixODBC,
FreeTDS and pyodbc from source into /usr/local.

1. unixODBC

First I removed system packages unixODBC and unixODBC-dev (names may
vary by Linux distro). Then

wget ftp://ftp.unixodbc.org/pub/unixODBC/unixODBC-2.3.0.tar.gz
tar zxvf unixODBC-2.3.0.tar.gz
cd unixODBC-2.3.0
./configure
make
sudo make install

As root ensure that /usr/local/etc/odbcinst.ini has the following
entries:

[FreeTDS]
Driver  = /usr/local/lib/libtdsodbc.so

[SQLServer]
Driver  = /usr/local/lib/libtdsodbc.so

The second entry is for aesthetics only - so you can use "SQLServer"
in your connection strings. The odbc.ini file is not important if you use
fully qualified host names in your connection string, but at your
option you may configure DSNs there, for example as sketched below.
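
A rough sketch of such a DSN entry in /usr/local/etc/odbc.ini might look like
this (the DSN name and Servername are placeholders and should match the
freetds.conf server entry from step 2 below):

[server_name]
Description = SQL Server via FreeTDS
Driver      = FreeTDS
Servername  = server_name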

2. FreeTDS

Also uninstall any system packages that you may have for this, then:

wget http://www.ibiblio.org/pub/Linux/ALPHA/freetds/stable/freetds-stable.tgz
tar xvf freetds-stable.tgz
cd freetds-0.91
./configure --with-unixodbc=/usr/local  --enable-msdblib --with-
tdsver=8.0
make
sudo make install

Remove or otherwise disable /etc/freetds.conf if any. Modify /usr/
local/etc/freetds.conf to read as follows:

[global]
# TDS protocol version
tds version = 8.0
client charset = UTF-8
# This is probably not needed...
text size = 64512

# Typical SQLServer
[server_name]
host = hostname.somedomain
port = 2431
tds version = 8.0

At this point you should be able to login successfully using:

tsql -S server_name -U username -P password

3. pyodbc

The pyodbc package on PyPI is currently broken in that it is missing the
utils and web folders from the sources and does not build. There is a bug
for this (with my comments too) at 
http://code.google.com/p/pyodbc/issues/detail?id=192.
There are two ways around it but before you start (and at runtime)
make sure that /usr/local/lib is in LD_LIBRARY_PATH (or add it
permanently system-wide using ldconfig).

export LD_LIBRARY_PATH=/usr/local/lib/:$LD_LIBRARY_PATH

3.1 Quick way

pip install https://github.com/mkleehammer/pyodbc/zipball/2.1.9

This will result in pyodbc-2.1.0-unsupported being installed out of the
2.1.9 code base, which looks ugly and may potentially confuse other
packages that check its version. But it works fine. The reason for
this is that the github code version does not have a PKG-INFO file.

3.2 Longer way, which I followed

3.2.1 Get official package and git sources

wget http://pyodbc.googlecode.com/files/pyodbc-2.1.9.zip
unzip pyodbc-2.1.9.zip
wget --no-check-certificate -O git-pyodbc-2.1.9.zip
https://github.com/mkleehammer/pyodbc/zipball/2.1.9
unzip git-pyodbc-2.1.9.zip

3.2.2 Copy missing utils and web folders to the packaged version

cp -R mkleehammer-pyodbc-e3c95dc/utils/ pyodbc-2.1.9/
cp -R mkleehammer-pyodbc-e3c95dc/web/ pyodbc-2.1.9/

3.2.3 Build and install

python setup.py bdist_egg
cd dist
easy_install pyodbc-2.1.9-py2.6-linux-i686.egg # filename may vary
based on architecture

At run time be sure to have the LD_LIBRARY_PATH and TDSVER=8.0
variables set. The latter is not needed if you put the same into the
connection string as I have in the example above.
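
As a final smoke test of the whole stack from SQLAlchemy, a rough sketch along
these lines can help; the DSN and credentials are placeholders and assume a
DSN was configured in odbc.ini as above:

# Quick end-to-end check: SQLAlchemy -> pyodbc -> unixODBC -> FreeTDS.
# "server_name" is an ODBC DSN; username/password are placeholders.
from sqlalchemy import create_engine

engine = create_engine("mssql+pyodbc://username:password@server_name")
print engine.execute("SELECT 1").scalar()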

I hope this helps.

Victor Olex
http://linkedin.com/in/victorolex
http://twitter.com/agilevic




On Sep 7, 11:53 pm, Michael Bayer  wrote:
> The ticket for SQLAlchemy is:
>
> http://www.sqlalchemy.org/trac/ticket/2273
>
> For Pyodbc I've opened:
>
> http://code.google.com/p/pyodbc/issues/detail?id=209
> http://code.google.com/p/pyodbc/issues/detail?id=210
>
> as you can see, issue 210 is quite serious.    Would be curious what results 
> you get for the script there.
>
> On Sep 7, 2011, at 11:25 PM, Michael Bayer wrote:
>
> > I can't actually make that string work at all with FreeTDS, but I am on 
> > 0.82.   If I turn on Python unicodes with FreeTDS 0.82, which until 
> > recently was the FreeTDS release for years, everything breaks immediately - 
> > the CREATE TABLE statements won't even work, as you can see below just the 
> > strings u'A', u'dbo' blow it up:
>
> > sqlalchemy.exc.DBAPIError: (Error) ('HY004', '[HY004] [FreeTDS][SQL 
> > Server]Invalid data type (0) (SQLBindParameter)') 'SELECT 
> > [COLUMNS_1].[TABLE_SCHEMA], [COLUMNS_1].[TABLE_NAME], 
> > [COLUMNS_1].[COLUMN_NAME], [COLUMNS_1].[IS_NULLABLE], 
> > [COLUMNS_1].[DATA_TYPE], [COLUMNS_1].[ORDINAL_POSITION], 
> > [COLUMNS_1].[CHARACTER_MAXIMUM_LENGTH], [COLUMNS_1].[NUMERIC_PRECISION], 
> > [COLUMNS_1].[NUMERIC_SCALE], [COLUMNS_1].[COLUMN_DEFAULT], 
> > [COLUMNS_1].[COLLATION_NAME] \nFROM [INFORMATION_SCHEMA].[COLUMNS] AS 
> > [COLUMNS_1] \nWHERE [COLUMNS_1].[TABLE_NAME] = ? AND 
> > [COLUMNS_1].[TABLE_SCHEM

[sqlalchemy] Implementing a "radio button" behavior?

2011-09-08 Thread Vlad K.


Hi all!

I'm looking for a programming pattern to deal with a "radio button" like 
behavior, that is, in a group of rows all with the same group_id, only one 
can have a flag column set to true. Without using table triggers that 
make all rows in the group lose the flag if the updating row carries it 
(or doing it in the application), is there any pattern in SQLAlchemy 
for what I'm looking for here?


As a "by the way" to this question, I've noticed that the order of 
queries given before flush() is not preserved for the flush(). Any way 
to enforce the order?




Thanks!

--

.oO V Oo.
