[sqlalchemy] Detect runaway query

2011-03-30 Thread Rick Morrison
Hi list:

I've recently been plagued by a runaway query somewhere in one of my apps
that mistakenly loads 10s of 1000's of rows, swamping the working set of the
Python process and eventually invoking the OOM killer.

Unfortunately, the database backend I'm using (MSSQL 2005) doesn't provide a
lot in the way of throttling or detection tools for this, and so I'd like to
inject some detection code into SQLA to track this thing down. Is there an
existing listener interface (or an appropriate injection location for some
code) in the (0.6.6) Engine or ResultProxy where it's possible to watch the
number of rows retrieved?

Thanks,
Rick

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Detect runaway query

2011-03-30 Thread Rick Morrison
Beautiful - there's plenty to work with here. Thanks much, Mike!

On Wed, Mar 30, 2011 at 11:51 AM, Michael Bayer mike...@zzzcomputing.com wrote:

 For quick hits I usually turn debug logging on, or echo='debug' with
 create_engine(), that shows the rows coming in in the log.

 There's no per-row event interface, but you can of course intercept execute
 events at the engine and cursor level with ConnectionProxy.

 A blunt approach may be to intercept cursor executes, ensure the statement
 is a SELECT, then run the statement distinctly (i.e. a second time) in the
 handler, count the rows, or at least do a fetchmany() of N number of rows
 and alert if over a certain threshold.  Or wrap the query inside of
 SELECT COUNT(*) FROM (query) and get a count that way.
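
The COUNT(*) wrapping can be sketched against any DBAPI driver; here is a minimal sketch using the stdlib sqlite3 in place of pyodbc/MSSQL. The `guarded_select` helper and its `max_rows` parameter are illustrative names, not part of any SQLAlchemy or pyodbc API:

```python
import sqlite3

def guarded_select(conn, select_sql, max_rows=1000):
    # Run SELECT COUNT(*) over the statement first; refuse to execute it
    # if the result set would exceed max_rows (illustrative threshold).
    (n,) = conn.execute("SELECT COUNT(*) FROM (%s)" % select_sql).fetchone()
    if n > max_rows:
        raise RuntimeError("runaway query: %d rows (limit %d)" % (n, max_rows))
    return conn.execute(select_sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(50)])
rows = guarded_select(conn, "SELECT v FROM t", max_rows=100)  # fits: 50 rows
```

Note this runs the query's COUNT twice per statement, so it is only suitable as a temporary diagnostic, not a production guard.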

 Another depends on your DBAPI - if it's observed that rows are pre-fetched
 within the execute, then you could just apply timing to a ConnectionProxy
 and look for slow queries (or large jumps in the size of gc.get_objects()
 maybe).

 Still another tack, subclass Query, override __iter__, pull out the result
 from super().__iter__(), count it, then return iter(result).  Query by
 default buffers everything anyway.   This would depend though on the fact
 that your query is returning distinct primary keys - if you have a basic
 cartesian product occurring (which is likely), Query's uniquifying of
 results might conceal that.
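
The cursor-level interception idea, independent of SQLAlchemy's hook API, amounts to wrapping a DBAPI cursor so fetches are counted. A hedged sketch with stdlib sqlite3 standing in for pyodbc (the `CountingCursor` class and threshold are invented for illustration):

```python
import sqlite3
import warnings

class CountingCursor(object):
    """Illustrative wrapper (not a SQLAlchemy API): count fetched rows
    and warn once a single statement exceeds a threshold."""
    def __init__(self, cursor, threshold):
        self._cursor = cursor
        self._threshold = threshold
        self._seen = 0

    def execute(self, sql, params=()):
        self._seen = 0                      # reset the count per statement
        self._cursor.execute(sql, params)
        return self

    def fetchmany(self, size):
        rows = self._cursor.fetchmany(size)
        self._count(rows)
        return rows

    def fetchall(self):
        rows = self._cursor.fetchall()
        self._count(rows)
        return rows

    def _count(self, rows):
        self._seen += len(rows)
        if self._seen > self._threshold:
            warnings.warn("possible runaway query: %d rows" % self._seen)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
cur = CountingCursor(conn.cursor(), threshold=100)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    rows = cur.execute("SELECT v FROM t").fetchall()
# 1000 rows fetched, one warning recorded
```

In 0.6 the natural place to install such a wrapper would be a ConnectionProxy's cursor-execute hook; the sketch only shows the counting mechanics.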







[sqlalchemy] pyodbc, unixodbc and multiple connections

2010-09-29 Thread Rick Morrison
Hi Everybody,

Crossposting this from the pyodbc ML, in the hopes that someone here has run
into this.. Are there any SQLA unit tests that exercise multiple DB
connections over unixODBC?

Hi all,

Having trouble getting multiple MSSQL connections to work with unixODBC on
CentOS. Confirmed that isql and a single pyodbc connection work fine,
but a 2nd connection fails as follows:

>>> import pyodbc
>>> pyodbc.pooling = False
>>> cs = 'DRIVER={FreeTDS}; SERVER=server; UID=user; PWD=pwd; DATABASE=db'
>>> # note that DSN connections fail the same way
>>> c1 = pyodbc.connect(cs)
>>> c2 = pyodbc.connect(cs)
>>> cur1 = c1.cursor()
>>> cur2 = c2.cursor()
>>> cur1.execute("select * from table")
<pyodbc.Cursor object at 0x98339c0>
>>> cur2.execute("select * from table")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
pyodbc.Error: ('IM001', '[IM001] [unixODBC][Driver Manager]Driver does
not support this function (0) (SQLColAttribute)')

Has anybody seen this?

Thanks,
Rick
Richard A. Morrison
Driveazon
38 Washington Square
Newport, RI 02840-2946
V: (203) 254-2054 F:(203) 220-2255




[sqlalchemy] Re: Problems with pyodbc on RHEL 64

2009-05-12 Thread Rick Morrison
AFAIK, there's nothing in SQLA that will address this -- the issue sounds
new to me, and it seems to me that it's pretty clearly some kind of
pyodbc/FreeTDS issue. Check your character encoding settings; there are quite
a few reported issues with MSSQL + pyodbc + unicode statements. You may want
to browse the FreeTDS lists as well. Please report back anything pertinent
that you find.



On Tue, May 12, 2009 at 4:13 PM, Yannick Gingras yging...@ygingras.net wrote:



 Hi, I also reported this problem on the pyodbc mailing list but maybe
 one of you know a workaround.

 I'm trying to use pyodbc on RHEL 5.3 64 bit but all my strings are
 filled with garbage after position 1024.  Here is an example:

 >>> import pyodbc
 >>> conn = pyodbc.connect('DRIVER={SQL
 Server};UID=foo;PWD=bar;DATABASE=qux;SERVER=quux;TDS_Version=8.0')
 >>> conn.execute("select %r" % ("=" * 1030))

 This is what I get back:


 [('===\x00\x01\x00i;S+',
 )]

 This is the content of my /etc/odbcinst.ini:
 --
 [SQL Server]
 Description = FreeTDS Driver
 Driver = /usr/lib64/libtdsodbc.so.0
 UsageCount = 1
 --

 When I try to do the same on Ubuntu 8.10, both 32 bit and 64 bit, I
 get the expected result, that is, a string of "=" 1030 characters long.

 On RHEL 5.3, unixodbc is 2.2.11-7.1; on Ubuntu it's 2.2.11-16build2,
 whatever that means.  I'm running Pyodbc 2.1.5 on Python 2.5.

 I get the same error with SQLAlchemy 0.5.3 with the following:

 >>> from sqlalchemy import create_engine
 >>> eng = create_engine("mssql://foo:b...@qux/quux?DRIVER={SQL
 Server}&TDS_Version=7.0")
 >>> conn = eng.connect()
 >>> conn.execute(...).fetchall()

 Anyone has an idea on what can cause this and how it can be solved?

 My idea was to fall back on pymssql but Alchemy 0.5.3 does not seem to
 like pymssql 1.0.2 and I find 0.8 has documented problems on 64 bit
 systems.  What do you guys recommend?  Running the experimental 0.6
 Alchemy branch?

 --
 Yannick Gingras
 http://ygingras.net/

 





[sqlalchemy] Re: accepts_scalar_loader attribute of AttributeImpl() object - BUG?

2009-04-06 Thread Rick Morrison
 I think its just a flag on the class.  you can probably monkeypatch it to
get your app working for now (and it would be False).

That works fine, many thanks.


(for the archive, here's what I'm doing right after the initial sqlalchemy
import:)

# remove for SQLA 0.5.3+
from sqlalchemy.orm.dynamic import DynamicAttributeImpl
DynamicAttributeImpl.accepts_scalar_loader = False
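
The idiom generalizes: a class-level flag patched once is seen by every instance that does not shadow it. A stand-in sketch (these classes are simplified stand-ins, not the real SQLAlchemy internals):

```python
# Stand-in classes to illustrate the class-attribute monkeypatch idiom.
class AttributeImpl(object):
    accepts_scalar_loader = True      # default on the base class

class DynamicAttributeImpl(AttributeImpl):
    pass

# Patch once, right after import; all instances see the new value.
DynamicAttributeImpl.accepts_scalar_loader = False

assert DynamicAttributeImpl().accepts_scalar_loader is False
assert AttributeImpl().accepts_scalar_loader is True   # base class untouched
```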




[sqlalchemy] Re: accepts_scalar_loader attribute of AttributeImpl() object - BUG?

2009-04-04 Thread Rick Morrison
I'm running into this one now as well, though it's with Session.expire(),
not Session.merge().
The attached script will reproduce the issue against the 0.4 trunk




On Mon, Dec 1, 2008 at 11:05 AM, Michael Bayer mike...@zzzcomputing.com wrote:


 this could be a bug, depending on if we have test coverage for
 session.merge() in conjunction with dynamic relations.  can you
 reproduce this against the latest 0.5.0rc4 or 0.4.8 release, and if so
 can you make a short test case ?


 On Dec 1, 2008, at 6:43 AM, Nabla wrote:

 
  Hi,
  can anybody please check this:
  I have some script using SA and somewhere in the script I call
  sess.merge(obj) and got this error:
 
  'DynamicAttributeImpl' object has no attribute 'accepts_scalar_loader'
 
  I don't understand fully what 'accepts_scalar_loader' is, but, I tried
  to add this to attributes.py:
 
  class AttributeImpl(object):
      """internal implementation for instrumented attributes."""
 
      accepts_scalar_loader = True
 
  ...
  And it works I hope.
  I don't know if this is my misunderstanding something or SA bug?
  







expire_bug.py
Description: Binary data


[sqlalchemy] Re: accepts_scalar_loader attribute of AttributeImpl() object - BUG?

2009-04-04 Thread Rick Morrison
Yep, current trunk works ok.

Any chance of getting a 0.4 backport of the fix? I'm stuck on 0.4 for the
next couple of months.



On Sat, Apr 4, 2009 at 4:44 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 I think this may have been fixed in 0.5.3 or trunk, see if you see a diff.





[sqlalchemy] Re: 0.5.2 seems to always roll back with mssql

2009-03-25 Thread Rick Morrison
Yep, same here.

..on my mssql 2005, I tried this query batch:


set implicit_transactions on
go
select 'After implicit ON', @@trancount

exec sp_datatype_info
go
select 'After query w/implicit', @@trancount

begin transaction
go
select 'After BEGIN', @@trancount


Here's the output:

After implicit ON       0
After query w/implicit  1
After BEGIN             2


Our support team also found that calling commit() after the connect also
 worked. I guess this will
 close the outer transaction.


It's a bit of a hack, but it sounds like a simple 2-cent solution. We could
issue something like:

IF @@TRANCOUNT > 0
    COMMIT

on connection establishment.

Rick




[sqlalchemy] Re: 0.5.2 seems to always roll back with mssql

2009-03-24 Thread Rick Morrison
Jeez, yet another wonky mssql behavior switch?  That thing's got more flags
than the U.N.

I believe that the MSSQL ODBC driver on Windows automatically sets
IMPLICIT_TRANSACTION
off on connect, whereas FreeTDS likely does not, which is perhaps the source
of the problem.

Here's what I think the problem might be:
   - The code the OP posted has no initial .begin(), so the program starts
with a random IMPLICIT_TRANSACTION state depending upon ODBC driver + server
settings.
   - For FreeTDS and his server, let's assume IMPLICIT_TRANSACTION is ON
   - The session.execute() then issues a BEGIN (since the session is not in
autocommit mode)
   - If the server started in IMPLICIT_TRANSACTION mode, the @@TRANCOUNT
would now be TWO, not one (one implicit, one explicit)
   - The next .commit() closes the second tx, now it's @@TRANCOUNT == 1
   - Another BEGIN and SELECT finds the inserted data, and returns it (hence
the rowcount==1)
   - Another explicit COMMIT, @@TRANCOUNT again goes from 2 to 1
   - Program ends (with a rollback of the starting implicit transaction)
   - Data is now gone
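
The walkthrough above can be checked with trivial bookkeeping. This is a pure-Python model of what @@TRANCOUNT would read at each step, not real MSSQL behavior:

```python
# Model of the @@TRANCOUNT walkthrough: with IMPLICIT_TRANSACTIONS ON,
# the first statement silently opens a transaction that nothing commits.
class TranCount(object):
    def __init__(self, implicit_on):
        self.count = 1 if implicit_on else 0   # implicit tx opened up front
    def begin(self):
        self.count += 1
    def commit(self):
        self.count -= 1

tc = TranCount(implicit_on=True)
tc.begin(); tc.commit()   # session BEGIN ... INSERT ... COMMIT
tc.begin(); tc.commit()   # session BEGIN ... SELECT ... COMMIT
assert tc.count == 1      # implicit tx still open; rolled back at exit, data gone
```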

It seems to me that the only time you'd want IMPLICIT_TRANSACTION on is in
autocommit mode, where savepoints don't make any sense anyway.  Otherwise,
since SQLA defaults to explicit transaction semantics everywhere, you'd want
IMPLICIT_TRANSACTION turned OFF on initial connect and left off for the
duration of the connection.

Is there a hook to execute setup SQL on connection establishment, or when a
session claims a connection? If so, those might be the places to set that
damned flag OFF. It could be tricky getting session autocommit vs.
non-session SQL to work right on shared connections.
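
Whatever hook SQLA exposes, the shape of a connect-time setup step is just a connection factory. A hedged sketch, with sqlite3 and a PRAGMA standing in for pyodbc and SET IMPLICIT_TRANSACTIONS OFF (the function name is invented):

```python
import sqlite3

SETUP_SQL = "PRAGMA foreign_keys = ON"  # stand-in for SET IMPLICIT_TRANSACTIONS OFF

def connect_with_setup(dsn=":memory:"):
    # Every connection handed out has already run the setup statement,
    # so per-connection server state is consistent for its whole lifetime.
    conn = sqlite3.connect(dsn)
    conn.execute(SETUP_SQL)
    return conn

conn = connect_with_setup()
assert conn.execute("PRAGMA foreign_keys").fetchone()[0] == 1
```

Running the statement inside the factory (rather than per-session) sidesteps the shared-connection problem described above, since the flag is set exactly once per physical connection.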

I would even be +1 on disallowing autocommit mode on MSSQL, since it
complicates the connection setup vs. session connection claim logic so much.

Rick




[sqlalchemy] Re: How to use a subquery to emulate an auxilliary numbers table

2009-03-03 Thread Rick Morrison
Seems to me the issued SQL would work if the innermost query (the UNION
query) was phrased as a subquery. Have you tried simply wrapping the literal
SQL text in parenthesis to force it into a subquery like this?

nums = sql.select(['n'], from_obj=sql.text(r"""(SELECT 1 as n
  UNION ALL SELECT 2
  UNION ALL SELECT 3
  UNION ALL SELECT 4
  UNION ALL SELECT 5
  UNION ALL SELECT 6
  UNION ALL SELECT 7
  UNION ALL SELECT 8
  UNION ALL SELECT 9
  UNION ALL SELECT 10
  UNION ALL SELECT 11
  UNION ALL SELECT 12
  UNION ALL SELECT 13
  UNION ALL SELECT 14
  UNION ALL SELECT 15
  UNION ALL SELECT 16)"""))).alias('nums')


Alternatively, (if SQLA supports aliasing of sql text literals, I haven't
tried this), you could alias the innermost query, and use that as virtual
table:

uq = sql.text(r"""SELECT 1 as n
  UNION ALL SELECT 2
  UNION ALL SELECT 3
  UNION ALL SELECT 4
  UNION ALL SELECT 5
  UNION ALL SELECT 6
  UNION ALL SELECT 7
  UNION ALL SELECT 8
  UNION ALL SELECT 9
  UNION ALL SELECT 10
  UNION ALL SELECT 11
  UNION ALL SELECT 12
  UNION ALL SELECT 13
  UNION ALL SELECT 14
  UNION ALL SELECT 15
  UNION ALL SELECT 16""").alias('uq')

nums = sql.select(['n'], from_obj=uq).alias('nums')
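
The parenthesized-derived-table idea is easy to verify with plain SQL on the stdlib sqlite3 driver (building the 16-row UNION programmatically rather than by hand; sqlite3 here is only a stand-in for the target backend):

```python
import sqlite3

# Build "SELECT 1 AS n UNION ALL SELECT 2 AS n ... SELECT 16 AS n" and
# select from it as a parenthesized, aliased derived table.
union_sql = " UNION ALL ".join("SELECT %d AS n" % i for i in range(1, 17))
conn = sqlite3.connect(":memory:")
rows = conn.execute("SELECT n FROM (%s) AS nums" % union_sql).fetchall()
assert [r[0] for r in rows] == list(range(1, 17))
```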




[sqlalchemy] Re: How to use a subquery to emulate an auxilliary numbers table

2009-03-03 Thread Rick Morrison
On Tue, Mar 3, 2009 at 5:31 PM, phrrn...@googlemail.com wrote:


 Thanks. I wrapped it as ' (original_sql) as foo' as Sybase needs a
 name for the derived table. You have helped to get primary key and
 index introspection working on Sybase!


Huh, I thought you were using mssql, as the odbc error in your posted
traceback seems to indicate a SQL Server ODBC driver. I'm surprised that
even the system tables in Sybase and Mssql have identical names as well,
although knowing a bit about mssql's history, I guess I shouldn't be.




 One interesting and useful bit of information is that one can't use
 placeholders in the WHERE clause for anything other than column
 values: Pyodbc over FreeTDS barfs with an unknown type error as it
 does not know the datatype of functions.


I think there's a way to annotate database functions in SQLA with a return
type, although I can't remember the syntax right now, or if it would work
with pyodbc. Might be worth of bit of investigation, though.




[sqlalchemy] Re: SQLAlchemy and scrollable cursors

2009-02-19 Thread Rick Morrison


 Unfortunately, AFAICT, MS-SQL does not have an OFFSET clause (it uses
 TOP instead of LIMIT). How does SQLA handle this situation?


For mssql2005 and higher (those versions of mssql that support window
functions using OVER, row_number(), rank(), etc.), we simulate an OFFSET by
wrapping the query with an outer query, and using WHERE row_number() > offset
AND row_number() <= offset + limit.

For mssql 2000 and lower, you'll be limited to use of LIMIT, and the mssql
dialect will know to use TOP instead of the LIMIT syntax.

For this to work in SA 0.4 and 0.5, you'll need to add the engine keyword
has_window_funcs=1 to your connection string. From what I understand, SA
0.6+ will sniff out the mssql version and automatically toggle the behavior.
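
The wrapped-query shape of that emulation can be sketched on any backend with window functions; here with sqlite3 (SQLite 3.25+ supports ROW_NUMBER() OVER) standing in for MSSQL 2005:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])

offset, limit = 10, 5
# Outer query filters on the row number assigned by the inner query,
# emulating LIMIT/OFFSET on a backend that only has TOP.
rows = conn.execute("""
    SELECT v FROM (
        SELECT v, ROW_NUMBER() OVER (ORDER BY v) AS rn FROM t
    ) WHERE rn > ? AND rn <= ?
""", (offset, offset + limit)).fetchall()
assert [r[0] for r in rows] == [10, 11, 12, 13, 14]
```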

Rick




[sqlalchemy] Re: referencing table in other databases inside a MSSQL server

2009-02-11 Thread Rick Morrison
The dotted schema notation discussed in that ticket should fix the issue,
yes.

Meantime a short-term workaround might be to define a view in the local
database that does the cross-database reference and use the view in your
query.




[sqlalchemy] Re: cpython jdbc

2009-02-05 Thread Rick Morrison
 I was wondering if anyone was aware of a JDBC DBAPI module for
 cpython.

Interesting idea, and could be a killer feature for SA 0.6+ if it could be
made to work.

Jpype could perhaps do the job:
  http://jpype.sourceforge.net/

There's been at least some activity with accessing JDBC drivers from CPython
using it, though I don't know of anything recent:
   http://mail.python.org/pipermail/python-list/2006-April/377434.html

Good luck and report back what you find!




[sqlalchemy] Re: Full connection pool close

2009-01-29 Thread Rick Morrison
I worked up a test case that simulates your usage, and checks the number of
open MSSQL connections using a call to the system stored proc sp_who, so
it can run in a more automated fashion.

I originally got mixed results on this, it would pass about 50% of the time
and fail about 50% of the time.

So I then added some options that would force a GC collection (the idea
being to force any finalizers for the pyodbc sockets to close them), which
increased the percentage of the time the test would pass, but not eliminate
the failures.

I then added a wait option which simply sleeps for brief period after
closing the SA connections, and then does the connection count check. With a
1/2 second delay between the closing of the SA connection pool and the check
for all connections closed, I get pretty reliable results for closing all
connections.

Please try the attached test on your machine and see if you get similar
results.

Rick


# -*- encoding: utf-8 -*-

"""
   This test checks that SQLAlchemy fully closes all pooled pyodbc connections when we dispose() the engine.
   For this test to be meaningful, this test should be the only process opening and closing connections on the server.
"""

# Import pyodbc first and force off ODBC connection pooling
import pyodbc
pyodbc.pooling = False

import sqlalchemy as sa
import sqlalchemy.orm

user = ''
pwd = ''
dsn = ''
dbname = 'satest'
server = 'localhost'
wait_time = .5
force_gc = False


# establish a baseline # of connections
c_pyodbc = pyodbc.connect('DRIVER={Sql Server}; SERVER=%s; DATABASE=%s; UID=%s; PWD=%s' %(server, dbname, user, pwd))
initial_connections = c_pyodbc.execute('sp_who %s' % user).fetchall()

# open the pooled connections from SA
dburi = 'mssql://%s:%s@%s' % (user, pwd, dsn)
sa_engine = sa.create_engine(dburi, echo=False, strategy='threadlocal')
sa_Session = sa.orm.scoped_session( sa.orm.sessionmaker( bind=sa_engine ) )
metadata = sa.MetaData(sa_engine)
sa_session = sa_Session()

t = sa.Table('closetest', metadata,
             sa.Column('id', sa.INT, primary_key=True),
             sa.Column('nm', sa.VARCHAR(20))
             )
# TODO: try with autoload table

# exercise the connection
metadata.create_all()
t.insert().execute(nm='test')
tlist = t.select().execute().fetchall()
assert len(tlist) == 1
metadata.drop_all()

# close the connection
sa_session.close()
sa_Session.close_all()
sa_engine.dispose()
del sa_session
del sa_Session
del sa_engine

if wait_time:
    import time
    time.sleep(wait_time)

if force_gc:
    import gc
    gc.collect()

# ensure the number of connections has not grown
post_test_connections = c_pyodbc.execute('sp_who %s' % user).fetchall()
try:
    assert len(post_test_connections) == len(initial_connections)
    print 'Passed'
except AssertionError:
    print 'Open connections!: ', len(post_test_connections), len(initial_connections)



[sqlalchemy] Re: Full connection pool close

2009-01-24 Thread Rick Morrison


 Hey, seems that you've got the problem. conn = self._pool.get( False )
 is the problem
 It raises an Empty error...:


It's supposed to; that's the exit condition for the while True loop.  It
does make it at least once through the loop, though right? Enough to close
any connections you may have open?




[sqlalchemy] Re: Full connection pool close

2009-01-24 Thread Rick Morrison
 Oh... i didn't explain myself... I mean that it's already empty at the
 first cycle of the loop...

It would be normal to not enter the loop if you haven't yet opened any
connections, as connections are opened on demand. Make sure your program
issues at least one query during this test. If you are already issuing
queries, then bundle up this as a simple test case as you can make, and
we'll have a look at it.




[sqlalchemy] Re: Full connection pool close

2009-01-23 Thread Rick Morrison

  The way to make pyodbc close the connection is setting pyodbc.pooling = False.


Whoa, I didn't know pyodbc automatically used ODBC connection pooling. Seems
like we should be turning that off if the user is using SQLA pooling.




[sqlalchemy] Re: MSSQL Reflection Error

2009-01-23 Thread Rick Morrison
Uh, did you guys not see my last message in this thread?




[sqlalchemy] Re: Full connection pool close

2009-01-23 Thread Rick Morrison
Good question, I don't know the answer.

But even if it were a DSN option, it's likely to be an optional one. In the
absence of an explicit setting, shouldn't we default to having the setting
off, not on? It sounds as if the pyodbc default is 'on'.

I would argue for forcing it off anyway, even if set on: this potential
double-layered pooling would make trying to do any cohesive state management
strategy on the connections just about impossible, and would also
effectively render any SQLA pool size settings rather meaningless.


On Fri, Jan 23, 2009 at 12:51 PM, Michael Bayer mike...@zzzcomputing.com wrote:


 I believe this is a setting you establish when you create the DSN
 yourself, no ?

 On Jan 23, 2009, at 12:27 PM, Rick Morrison wrote:

   To make pyodbc close the connection is settine pyodbc.pooling =
  False.
 
  Whoa, I didn't know pyodbc automatically used ODBC connection
  pooling. Seems like we should be turning that off if the user is
  using SQLA pooling.
 
 
  


 





[sqlalchemy] Re: Full connection pool close

2009-01-23 Thread Rick Morrison

  his answer here:
 http://groups.google.com/group/pyodbc/browse_thread/thread/320eab14f6f8830f


You say in that thread that you're already turning off the setting by
issuing:

   import pyodbc
   pyodbc.pooling = False

before you ever open an SQLAlchemy connection.

Is that still the case? That would imply that the connection is being held
open by SQLAlchemy, not the ODBC connection pooling. So Mike's original
advice about using the NullPool should close the connections when you're
done with them -- did that work for you?

Mike / Jason: Wasn't there also some sort of verify connection feature
that was added in the 0.5 series that would issue a do-nothing query on a
connection when it was checked out from a pool just to make sure the
connection was still working?
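
That verify-on-checkout idea can be sketched without any pool framework: issue a do-nothing query when a connection is handed out, and discard ones that fail. A minimal sketch with sqlite3; the `checked_out` function is invented for illustration and is not SQLAlchemy's actual mechanism:

```python
import sqlite3

def checked_out(pool):
    # Issue a do-nothing query on checkout; discard dead connections.
    while pool:
        conn = pool.pop()
        try:
            conn.execute("SELECT 1")   # ping: raises if the connection died
            return conn
        except sqlite3.Error:
            conn.close()               # stale connection, try the next one
    return sqlite3.connect(":memory:") # pool exhausted: open a fresh one

dead = sqlite3.connect(":memory:"); dead.close()   # simulate a dead connection
live = sqlite3.connect(":memory:")
conn = checked_out([live, dead])  # dead popped first, discarded; live returned
assert conn is live
```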




[sqlalchemy] Re: Full connection pool close

2009-01-23 Thread Rick Morrison
On Fri, Jan 23, 2009 at 3:05 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 pyodbc has the pooling implemented in Python? That seems weird?



How did you get that idea from this thread? My read on it is that it uses
ODBC connection pooling.




[sqlalchemy] Re: Full connection pool close

2009-01-23 Thread Rick Morrison

 OK, it should use whatever is set on the ODBC DSN then.   im not sure that
 pyodbc should have an opinion about it.


Eh?


 is there a way to set pyodbc.pooling = None or some equivalent ?


It's pyodbc.pooling = False, as appears many times upthread.

From the OP's description, it sounds like SA is somehow not forcefully
closing the DBAPI connection (perhaps not disposing of the connection
using 'del').




[sqlalchemy] Re: Full connection pool close

2009-01-23 Thread Rick Morrison
From your earlier post:

 a_session.close()
 sa_Session.close_all()
 sa_engine.dispose()
 
 del sa_engine

 but it does not close the connection!


Here's Engine.dispose (line 1152, engine/base.py)

def dispose(self):
self.pool.dispose()
self.pool = self.pool.recreate()

..and here's QueuePool.dispose (the default pool, line 646, pool.py)

def dispose(self):
while True:
try:
conn = self._pool.get(False)
conn.close()
except Queue.Empty:
break

self._overflow = 0 - self.size()
if self._should_log_info:
            self.log("Pool disposed. " + self.status())

So the normal path would be to indeed close the connection (but not
necessarily to delete the connection itself, it just falls out of scope).
Can you trace into the dispose() call and verify that these are being run?




[sqlalchemy] Re: MSSQL Reflection Error

2009-01-22 Thread Rick Morrison
Please try r5718, it contains an updated method of column construction that
should fix this issue.

Rick




[sqlalchemy] Re: MSSQL Reflection Error

2009-01-21 Thread Rick Morrison
Hey Greg, please set the output format to text (if you're in MSSQL 2005,
there's a button over the query window with a tooltip that should say
"Results to text") and re-run the query. The text output will be a lot
easier to read.

Thanks




[sqlalchemy] Re: MSSQL Reflection Error

2009-01-21 Thread Rick Morrison
On Wed, Jan 21, 2009 at 12:16 PM, Michael Bayer mike...@zzzcomputing.com wrote:



 I think we might need to just change the *args approach in mssql
 reflecttable to do everything based on keyword arguments, and add in
 some isinstance(String) / isinstance(Numeric) to determine what args
 get sent where.   I can see the "if a is not None:" on line 1138 not
 being reliable.


Yeah, that sounds like a good approach. I'll have a look later today.




[sqlalchemy] Re: MSSQL Reflection Error

2009-01-21 Thread Rick Morrison
 I think we might need to just change the *args approach in mssql
 reflecttable to do everything based on keyword arguments


 Yeah, that sounds like a good approach. I'll have a look later today.


Attached is an untested patch against trunk that uses only kwargs to build
out the tabledef. I won't be able to test this until late tomorrow, but feel
free to apply and try it out.

Rick




relect_via_kwargs.patch
Description: Binary data


[sqlalchemy] Re: MSSQL Reflection Error

2009-01-20 Thread Rick Morrison
 I'm just trying to introspect an existing production database, not
 create any new tables.

The structure of the table is read when reflecting the table: it's likely
that an unusual column definition would trigger an error like this, and it
would be helpful to someone diagnosing the problem to get a better clue as
to what might be happening.




[sqlalchemy] Re: Error Connecting to MSSQL using pyodbc and SqlAlchemy 0.5.0

2009-01-09 Thread Rick Morrison
The MSSQL connection string changed for the 0.5 final release. In
particular, the dsn keyword is removed, and the pyodbc connection string
now expects the DSN to be named where the host was previously placed, so
the new connection URL would be:

   mssql://username:passw...@mydbodbc

For fine-grained control over pyodbc connections, a new odbc_connect
keyword allows a full ODBC connection string to be specified.

See the mssql module doc comments for details.

Rick




[sqlalchemy] Re: MSSQL default_schema

2008-12-23 Thread Rick Morrison
I've got a few concerns with the just-committed get_default_schema_name()
approach:

1) What about pre-2005 users?  The query used in this patch to fetch the
schema name won't work. There was not even a real hard concept of 'schema'
pre-MSSQL-2005.

2) Prior to MSSQL 2005, MSSQL conflated user name and schema name in a
fashion similar to that of Oracle. People use this to override some
tables, because the server search order for locating a table in Oracle and
MSSQL is to check the user schema first, then look through a schema search
path for a table matching the given identifier.
So if you have a table named [mylogin.tablename] and there is also a table
[public.tablename], then issuing "SELECT * FROM tablename" returns the
contents of [public.tablename] UNLESS you're logged in as 'mylogin', in
which case you'll get the contents of [mylogin.tablename].

This is IMO a kind of questionable practice, but there are existing uses
like this out there, and making the default schema always explicit breaks
them. Note this is not only a pre-MSSQL-2005 issue; I think 2005+ will still
recognize user-style schemas.

3) Isn't Michael's concern below still valid?



 just FTR, the current expected behavior of default schemas is that if
 your tables are known to exist in the default schema configured on the
 database connection, you leave the schema attribute on Table blank.
 otherwise, you set it.

 this is actually a convention that we've had to choose.  if the two
 conventions are mixed, table reflection gets confused over whether to
 look for/place tables with the key "tablename" or
 "schemaname.tablename" into the MetaData object.  While we could try
 building an enhanced key object which hashes both "tablename" and
 "schema.tablename" as the same key depending on the default schema
 of the DB, this requires a live connection to the database in order to
 function, and as we all know a MetaData can be bound to a DB later
 on...so for now we have the convention.






[sqlalchemy] Re: MSSQL default_schema

2008-12-23 Thread Rick Morrison
 I'm curious, is the MSSQL dialect rendering tables as
schemaname.tablename in all cases?

No, I don't think so: the module uses non-overridden calls to
compiler.IdentifierPreparer.format_table() and format_column().

So then the only usage of get_default_schema_name() is for table
existence checks and to fetch schema metadata for table reflection? Then
I'll withdraw my concerns; that's no big deal.




[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread Rick Morrison
This has been bandied back and forth for months, and I think it's becoming
clear that having sqla map dburl's to ODBC connection strings is a losing
battle. Yet another connection argument is not sounding very attractive to
me.

Perhaps a simple reductionist policy for ODBC connections would be best. The
user would either specify a DSN, which would be passed to the driver, or
give a full ODBC connection string (presumably via a keyword arg) which
would be used verbatim. Anything else seems to degrade to the kind of
error-prone heuristics we see here.

It is a big change from the current behavior of trying to fit in with the
way that the dburl works for other dialects, though. Jason's dialect
refactor is going to confront this problem head-on as well. Any thoughts
regarding this from that perspective?

Rick




[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread Rick Morrison

  Whats the status of 0.5, is DSN the default in trunk now ?

 DSN is the first choice in MSSQLDialect_pyodbc.make_connect_string
 right now.


That's not what I see. I just pulled the 0.5 trunk, which I haven't been
tracking lately. It still uses the 'dsn' keyword to build a connection string
with DSN, otherwise defaulting to the former DSN-less connection string
behavior.

What Mike is talking about is an idea initially advanced by Marc-Andre
Lemburg to make the 'host' portion of the dburl represent a DSN for
pyodbc connections (any supplied database name would be ignored) for the 0.5
trunk, and have the existing DSN-less connection behavior become a
keyword-only kind of thing.

I made a patch for that many moons ago, but never committed it; it makes a
lot of sense, and the 0.5 release would be an appropriate time to introduce
this kind of compatibility-breaking behavior. But there's been a rather
surprising lack of fans for the idea, and I would expect to hear some
grumbling from users if/when we do it.




[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread Rick Morrison
Something like this:

As of 0.5 for pyodbc connections:

a) If the keyword argument 'odbc_connect' is given, it is assumed to be a
full ODBC connection string, which is used for the connection (perhaps we
can include a facility for Python string interpolation into this string from
the dburi components).

b) otherwise, the host portion of the dburl represents a ODBC DSN. A simple
connection string is constructed using the user name and password and DSN
(host) from the dburl. Any given database name is ignored.

Finally, if present, the contents of the keyword argument 'odbc_options'
(assumed to be a string) are concatenated to the connection string generated
in either (a) or (b).
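
A sketch of what option (a) might look like from the user's side (the URL shape and names here are hypothetical illustrations, not an existing API; the point is that the full ODBC string must be URL-encoded to survive inside the dburi):

```python
from urllib.parse import quote_plus

# Hypothetical illustration of passing a verbatim ODBC connection string
# through a single URL parameter. quote_plus() escapes the ';' and '='
# characters so the ODBC string survives inside the dburl unmangled.
odbc_str = "DRIVER={SQL Server};SERVER=myhost;DATABASE=mydb;UID=user;PWD=secret"
url = "mssql://?odbc_connect=" + quote_plus(odbc_str)
```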




[sqlalchemy] Re: MSSQL pyodbc connection string - broken for me!

2008-12-10 Thread Rick Morrison

  mssql://user:[EMAIL PROTECTED]/database?
  connect_type=TDS7&other=args&that=are&needed=foo

 using connect_type, or some better name, we can map the URL scheme
 to an unlimited number of vendor specific connect strings on the back.


Yeah, it's exactly that kind of mapping that has so far been a fussy pain in
the neck to use and maintain. I'm for nuking the idea and just going with a
straight-up ODBC connection string as in my last last posting.




[sqlalchemy] Re: BINARY Columns in MS SQL

2008-09-23 Thread Rick Morrison
So I'm working a bit on this.


 If base64 encoding within the bind_processor() can fix
 MS-SQL for now, I'd say that would be the approach for the time
 being.

It turns out base64 encoding is problematic: it requires a corresponding
decode on retrieval, which makes it effectively useless for legacy
databases with extant values already stored.

I think an out-only encoding as follows would be all that's needed for now:


class MSBinary(sqltypes.Binary):
def bind_processor(self, dialect):
def process(value):
return '0x' + value.encode('hex')
return process

...the issue is that the emitted value is getting quoted somewhere after the
type conversion:

 INSERT INTO binary_table VALUES(, '0x6df02da', ...)

but MSSQL's parser can recognize the '0x' prefix and wants the value
unquoted:

 INSERT INTO binary_table VALUES(, 0x6df02da, ...)

So how do I get the Dialect to drop the quoting behavior for the return from
bind_processor() ?
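
For what it's worth, the encoding step itself is straightforward on its own; a standalone sketch (binascii.hexlify is the portable spelling of Python 2's str.encode('hex')):

```python
import binascii

def mssql_binary_literal(value):
    # Build an MSSQL-style unquoted hex literal, e.g. 0x6df02da9.
    # The literal must reach the server WITHOUT surrounding quotes,
    # which is exactly the quoting problem described above.
    return "0x" + binascii.hexlify(value).decode("ascii")
```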



 I guess you've never gotten testtypes.py BinaryTest to run with MS-SQL?

Nope. I run so far only with pymssql, which has never had a chance of
passing with binary data.

 I would definitely want some test coverage in test/dialect/mssql.py for
this, I'm pretty shocked nobody has had this problem before.

If this works, the regular BinaryTest unit test should pass. There's a few
other basic tests in typetest.py that still fail for MSSQL. Someday in a
dream world where I have time to look at them and fix them, I will.




[sqlalchemy] Re: Trouble inserting Binary into MSSQL DB's

2008-09-19 Thread Rick Morrison
There's no doubt that pyodbc is the better-supported option; the pymssql
module hasn't been updated in two years, and it relies on the Microsoft DB
lib, which has been deprecated for a long, long time and is no longer
supported, and may not even work with MSSQL 2008. Here's the deprecation
notice from MSSQL 2000 docs, and I remember earlier warnings from MS way
back in the MSSQL 7.0 days

 (http://msdn.microsoft.com/en-us/library/aa936940.aspx).

That said, pymssql does have some things going in its favor: it's simpler to
set up on *nix OS's than an iODBC/FreeTDS setup, and it's arguably the best
performing option for MSSQL.

 appears to have poor support for non-ASCII streams.

Pretty much, yeah. It is ignorant of the high-order bit on 8-bit characters
and just passes it through, much like a C memcpy() might, but there is
certainly no support at all for unicode: The MS-DBlib is essentially an MS
riff of the old Sybase DBlib, which was floating around well before the
first published Unicode standard in 1991.

Anyway, back to the issue at hand - the original SQLalchemy MSSQL module was
written against a pretty old version of pymssql (prior to .7.2) and that old
version had pretty weak support for Binary columns. The SQLA MSSQL module
dealt with that by monkeypatching a handler that replaces a Binary column
with a String column, which is almost certainly the cause of the problem at
hand here.

It may be possible to fix that issue, but I'm not sure I want to invest any
more effort into pymssql as it's clearly a dead duck moving forward.  Should
we? I'm not sure why we continue to see list users trying out pymssql - is
there still interest in that path for a reason, or was it just the first hit
on a Google search for python+mssql?

Rick




[sqlalchemy] Re: BINARY Columns in MS SQL

2008-09-19 Thread Rick Morrison
Hi John,

Is the column you're having issues with really a VARCHAR, or is the message
misleading? How did you create the table, pre-existing or via SQLAlchemy?
Can you show the schema and the code you're trying to access it with?

Thanks,
Rick



On Fri, Sep 19, 2008 at 2:54 PM, John Hampton [EMAIL PROTECTED] wrote:


 I have a similar problem to Sam as shown in the following thread:


 http://groups.google.com/group/sqlalchemy/browse_thread/thread/3be1df474de602d0

 I get the same error regarding conversion of VARCHAR to BINARY.  The
 difference is that I am using pyodbc and unixODBC instead of pymssql.

 Is anyone else using BINARY columns in MS SQL successfully?  If so,
 could you share some sample code so that I can figure out what I'm doing
 wrong? Thanks.

 -John

 





[sqlalchemy] Re: BINARY Columns in MS SQL

2008-09-19 Thread Rick Morrison
Hmmm, looks to me as if SQLA is generating the query correctly, but that the
DBAPI passes along the Binary() value encoded in a Python binary string,
which MSSQL then interprets as Varchar, and then complains that it can't do
an implicit conversion. That's a surprise to me; I had thought that this had
been working.

Googling for the appropriate syntax to keep MSSQL happy, most users end up
using the CONVERT() syntax, but I think that might be clumsy to wedge into
the Dialect compiler. Here's a ticket for a similar issue from the ROR
folks, with a interesting and different solution:
 http://dev.rubyonrails.org/ticket/7699

Mike, any perspective on how difficult it might be to get the MSSQL Dialect
to emit the CONVERT syntax for binary fields, or would converting the value
itself using base64 encoding be the better path?




[sqlalchemy] Re: Unicode not getting translated to SQL Server?

2008-08-29 Thread Rick Morrison
 and then I do the following where someValue happens to be a unicode
 string:

 pref.pref_value = someValue
 session.commit()


That's not going to work with pymssql as your DBAPI. Pymssql is based on
Microsoft's DBLib (circa 1991 or so), which does not support unicode and
never will: it's been unmaintained for something like 10 years now. You'll
either need to convert everything to str() before it hits the DBAPI (which
is what I do, I'm still using pymssql), or switch to pyodbc.

 I am using Python 2.5.2 on Windows XP.

on windows, it should be easy to switch to pyodbc. Just install pyodbc,
SQLAlchemy will look for it first, in preference over pymssql.


Rick




[sqlalchemy] Re: SystemError: 'finally' pops bad exception with MS SQL Server

2008-07-21 Thread Rick Morrison
There have been other reports of this issue, all specific to pyodbc. It
doesn't appear to be an issue with SA, as other MSSQL DBAPI modules don't
exhibit the problem.

Please raise the issue on the pyodbc list, I'll work with you if needed to
help resolve it.




[sqlalchemy] Re: SystemError: 'finally' pops bad exception with MS SQL Server

2008-07-21 Thread Rick Morrison
I read through the pyodbc forum quickly, and it looks as though this issue,
first reported back in April, cannot be reproduced by the maintainer. Pick
up the forum thread entitled "pyodbc pops bad exception since 2.0.39"
(http://sourceforge.net/forum/forum.php?thread_id=1942313&forum_id=550700)

and add your simple pyodbc-only testcase - see if you can get a working
patch from the maintainer. A patch that fixes your testcase might prompt a
new release (last release was back in April).




[sqlalchemy] Re: Uncharted waters? attributes with 30 chars in FreeTDS, unixODBC, PyODBC and SA 0.5beta2

2008-07-18 Thread Rick Morrison
pymssql has a 30 char identifier limit, but pyodbc should work with
identifiers up to 128 chars. I can't tell from your message if you're
running pymssql or pyodbc, but that may be the issue.




[sqlalchemy] Re: Injecting business objects into constructor

2008-07-16 Thread Rick Morrison


 Are you aware of any IoC frameworks which have been adapted to
 inject/autowire things into SQLAlchemy transient business objects?


There was some talk a few months ago about integration of SA with the
Trellis component of PEAK, which I think was one of the primary motivators
for the "user defined state" branch that eventually became SA 0.5.0.  ...But
since then I haven't heard much on the lists about it, and I don't know if
that came to fruition or not.

Mike can answer this best.




[sqlalchemy] Re: Injecting business objects into constructor

2008-07-15 Thread Rick Morrison
I'm not sure where this is going with the 0.5 version, but I believe that
MappedClass.__init__ is still not called when objects are loaded from the DB.

If that's the case, and there isn't some alternate hook that SA provides like
MappedClass.__onload__, you can look into mapper extensions to provide this.
Check out the 'create_instance' method.




[sqlalchemy] Re: mysql innodb table insert problem

2008-07-12 Thread Rick Morrison
 Insert into myisam table worked because
 it does not support transactions?

Yes, to my knowledge MySQL with MyISAM tables will accept, but ignore, any
'begin transaction' or 'commit transaction' statements: they are no-ops.




[sqlalchemy] Re: mysql innodb table insert problem

2008-07-11 Thread Rick Morrison


 I have shard session set to transactional.  Does this conflict with
 innodb transaction?


No, but it means your inner sess.begin() and sess.commit() are now within
the scope of an outer transaction, so your inner sess.commit() has no
effect. Since you immediately issue a sess.clear() after your ineffective
sess.commit(), when the outer transaction finally gets a chance to commit,
the changes are now gone. If you're going to be handling transaction state
yourself, then don't use a transactional session.




[sqlalchemy] Re: Connecting to MySQL

2008-07-10 Thread Rick Morrison
Session.add is a version 0.5 method, you're maybe running 0.4.6?

In the 0.4.x series, it's going to be:

Session.save() for objects that are to be newly added to the session
Session.update() for objects that are already in the session, or
Session.save_or_update() to have the library figure it out as it does for
Session.add in v0.5.x




[sqlalchemy] Re: Connecting to MySQL

2008-07-10 Thread Rick Morrison

 That's exactly what the problem was :-) Is there any reason I should avoid
 using 0.5? I'm running python 2.4 at the moment, are they compatible?


0.5 is still in beta, and I don't have much experience with it myself, but
if I were just starting out, I would probably be using that; otherwise you'll
need to migrate later. It's easier to just start out with the new API.




 Next quick question: I have a habit of using 'created' and 'modified'
 columns on my tables, is there any way in which I can have the ORM update
 the dates for me when creating and modifying rows?


Yes, check out mapper extensions in the docs, you're going to want
after_insert and after_update extensions.
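
A mapper extension works; as a hedged alternative sketch in modern declarative spelling (class and column names here are illustrative), column-level default/onupdate hooks also cover this common case:

```python
from datetime import datetime, timezone

from sqlalchemy import Column, DateTime, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

def _now():
    return datetime.now(timezone.utc)

class Record(Base):
    __tablename__ = "record"
    id = Column(Integer, primary_key=True)
    # Core applies `default` on INSERT and `onupdate` on UPDATE, so these
    # columns are maintained without any mapper-level hooks.
    created = Column(DateTime, default=_now)
    modified = Column(DateTime, default=_now, onupdate=_now)
```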




[sqlalchemy] Re: do not load large binary column but make available on demand

2008-07-09 Thread Rick Morrison
Sounds like you want deferred loading for the column:

http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_deferred
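
A quick sketch of what deferral looks like in declarative form (modern spelling; the class and column names are illustrative):

```python
from sqlalchemy import Column, Integer, LargeBinary
from sqlalchemy.orm import declarative_base, deferred

Base = declarative_base()

class Document(Base):
    __tablename__ = "document"
    id = Column(Integer, primary_key=True)
    # The large column is skipped in the initial row load; a separate
    # SELECT fetches it on first attribute access.
    payload = deferred(Column(LargeBinary))
```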




[sqlalchemy] Re: do not load large binary column but make available on demand

2008-07-09 Thread Rick Morrison
I started off with using only the SQL-API part of SA myself, but the ORM is
way too good to ignore, and I've since converted piles of code over to using
the ORM, typically with a 50% loss in lines of code and while getting much
better code reuse.

The "query as a mapper-aware select" orientation of the library in the 0.5
version promises the ability to do even more from the ORM side of the
street.

Anyway, welcome aboard!




[sqlalchemy] Re: Moving On

2008-06-24 Thread Rick Morrison
Hey I'll miss you Paul; thanks for all of your help with MSSQL and for being
the pyodbc trailblazer.

Good luck with whatever your new direction in life brings -- turning off the
computer and taking a bit of travel time sounds pretty appealing!



On Tue, Jun 24, 2008 at 3:37 PM, Paul Johnston [EMAIL PROTECTED] wrote:


 Hi,

 I've had fun over the last 18 months doing odd bits of work on
 SQLAlchemy. It works pretty damn well on MSSQL now, although I never did
 quite get all the unit tests nailed. It's been great seeing the library
 continue to evolve, and particularly satisfying to see things I've
 started (e.g. AutoCode) being taken forward.

 Just of late, I've been reassessing priorities in my life, and open
 source development isn't going to be a big one going forward. In fact, I
 may even be giving up the computer completely for a year or two and
 going travelling.

 I'll be unsubscribing from the mailing list in a couple of days,
 although I'm happy to receive SA related emails at my personal address,
 for the next couple of months at least.

 Thanks for the interesting times,

 Paul


 





[sqlalchemy] Re: Latest updates in SQL server 2008 tools and techniques

2008-06-10 Thread Rick Morrison
sorry, bad mod.




[sqlalchemy] Create polymorphic mapped instance from type discriminator

2008-06-06 Thread Rick Morrison
Is there an API-stable way to create a polymorphic instance from only the
type discriminator key? I've got a case where I need to create a mapped
instance from some JSON data that contains the type discriminator, but I'd
rather get the (key -- mapped class) from the sqla map and not maintain my
own, which may fall out of sync as the data model changes over time.
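One way to get the (discriminator -> class) mapping from SQLAlchemy itself is the mapper's `polymorphic_map` dict, reachable via `inspect()` in modern (1.4+) versions. A sketch with an invented `Node`/`LeafNode` hierarchy:

```python
from sqlalchemy import Column, Integer, String, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Node(Base):
    __tablename__ = "nodes"
    id = Column(Integer, primary_key=True)
    type = Column(String(20))
    __mapper_args__ = {"polymorphic_on": type, "polymorphic_identity": "node"}

class LeafNode(Node):
    __mapper_args__ = {"polymorphic_identity": "leaf"}

def class_for_discriminator(base, key):
    # The mapper keeps a (discriminator -> Mapper) dict; .class_ gives
    # back the mapped class, so no hand-maintained registry is needed
    # and it stays in sync as the model changes.
    return inspect(base).polymorphic_map[key].class_

resolved = class_for_discriminator(Node, "leaf")
assert resolved is LeafNode
```

The resolved class can then be instantiated and populated from the JSON payload.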




[sqlalchemy] Re: Multiprocess issues

2008-06-04 Thread Rick Morrison
 never built a bunch of those job engines in other languages, is
 it something you could post as a recipe / example ?

sure, I'll put up something in the next few days - it will let me see if it
works with sqlite as well




[sqlalchemy] Re: mssql and offset support

2008-06-04 Thread Rick Morrison
There's offset support in the current sqla mssql driver. It's implemented
using the ansi ROW_NUMBER() OVER construct, which is supported only in mssql
2005 and higher. To turn it on, add the has_window_funcs keyword in the
dburi, or as an engine constructor keyword.

the broken traceback is some as-of-yet unknown issue with pyodbc that
reports the wrong stack trace.
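The ROW_NUMBER() OVER construct the driver uses can be sketched directly with SQLAlchemy's window-function support (modern 1.4+ API; the `orders` table is invented). A dialect without native OFFSET wraps a statement like this in a subquery and filters on a row-number range:

```python
from sqlalchemy import column, func, select, table

t = table("orders", column("id"), column("total"))

# Number the rows by total; offset emulation filters on a range of rn,
# roughly what the mssql dialect emits for 2005+.
rn = func.row_number().over(order_by=t.c.total).label("rn")
stmt = select(t.c.id, rn)

sql = str(stmt)
assert "OVER (ORDER BY" in sql
```

Because ROW_NUMBER() is a window function introduced in SQL Server 2005, this technique cannot work on 2000.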




[sqlalchemy] Re: mssql and offset support

2008-06-04 Thread Rick Morrison
er, that's, has_window_funcs=1




[sqlalchemy] Re: mssql and offset support

2008-06-04 Thread Rick Morrison

 Since I am using 2000 I don't think its going to work for me do?!?


nope




[sqlalchemy] Multiprocess issues

2008-06-03 Thread Rick Morrison
I'm looking at using the pyprocessing module to set up a (dispatcher --
workqueue -- slave job) kind of environment.

The dispatcher will be my app server, and I've decided to just use a table
in the app database for the persistent job queue, since everything is going
to have to connect to that database anyway.

Here's the question: Assuming the dispatcher is reading the job queue using
SQLA, and therefore has a connection pool open, and the children of the
dispatcher are created using fork(), they will also have an open connection
pool to start -- is there any way to assure that I don't have
multiple-reader socket issues, maybe by assuring that the children are using
either a fully closed connection pool, or that the next DB request will get
a fresh connection, distinct from that of the parent dispatcher?

Thanks,
Rick




[sqlalchemy] Re: Multiprocess issues

2008-06-03 Thread Rick Morrison
Yeah, I wouldn't expect one to survive a fork, in fact I was looking for a
way to ensure that there were **no** open connections before a fork. Looking
over the pool module, I guess when I said fully closed pool, what I'm
really looking for is a fully empty pool.

I'm going to try a call to engine.dispose() in the child after the fork,
which should invalidate all the connections in the pool and force the pool
to start making new ones and see how that works out.

thanks
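The fork-then-dispose pattern can be sketched like this (the `run_in_child` helper is invented; `engine.dispose()` is the real API call). Disposing in the child discards the pool connections inherited across the fork, so the next checkout opens a fresh socket:

```python
import os

from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

def run_in_child(job_fn):
    """Fork a worker that discards the inherited connection pool before
    touching the database, so parent and child never share a DBAPI socket."""
    pid = os.fork()
    if pid == 0:                 # child process
        engine.dispose()         # drop inherited connections; next checkout is fresh
        job_fn()
        os._exit(0)
    os.waitpid(pid, 0)           # parent: wait for the worker to finish
```

The parent's pool is untouched; only the child pays the reconnect cost.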




[sqlalchemy] Re: Multiprocess issues

2008-06-03 Thread Rick Morrison

 I'm going to try a call to engine.dispose() in the child after the fork,
 which should invalidate all the connections in the pool and force the pool
 to start making new ones and see how that works out.


Update: seems to work.

The combination of polymorphic mapping and the processing module actually
makes a pretty nifty job dispatch engine. Each type of job gets put into the
persistent queue table as a subclass of a Job class, each with their own
respective .run() methods. When the child wakes up, it can reconstitute the
job object using session.load(Job, idjob), and via polymorphic loading, you
get the right class of job. Just call .run() on the new instance and it's
off and running. I get about 20 child processes/sec here on a 256MB VMware
guest running Ubuntu, including the fork(), child database reconnect time,
the job object fetch, and two sql update operations per child.
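A minimal sketch of that dispatch pattern, using the modern declarative API and `session.get()` (the 1.4+ replacement for the 0.4-era `session.load()`); the `Job`/`EmailJob` classes are invented:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Job(Base):
    __tablename__ = "jobs"
    id = Column(Integer, primary_key=True)
    kind = Column(String(20))
    __mapper_args__ = {"polymorphic_on": kind, "polymorphic_identity": "job"}

    def run(self):
        raise NotImplementedError

class EmailJob(Job):
    __mapper_args__ = {"polymorphic_identity": "email"}

    def run(self):
        return "sent"

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(EmailJob(id=1))
    session.commit()

# The worker knows only the job id; loading through the base class
# hands back the correct subclass, whose run() does the real work.
with Session(engine) as session:
    job = session.get(Job, 1)
    assert isinstance(job, EmailJob)
    outcome = job.run()
```

Each job type only has to implement its own `run()`; the queue table and dispatcher stay completely generic.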




[sqlalchemy] Re: Using a non-primary key as ORM identifier

2008-05-22 Thread Rick Morrison
Have you considered using a discriminator column, an additional integer that
identifies the shard and is part of a two-integer primary key?

You could then use concrete polymorphic inheritance to set up mappers for
both tables that would automatically set the discriminator column to the
appropriate shard id for saves.

I'm not familiar with the current shard support in SQLA, but it may already
provide something along these lines
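The two-integer primary key idea looks like this in declarative form (a sketch with an invented `Customer` model; marking two columns `primary_key=True` yields a composite key):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customers"
    # Composite primary key: the shard discriminator plus the local id
    shard_id = Column(Integer, primary_key=True)
    local_id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    # The same local id can exist in two shards without colliding
    session.add_all([
        Customer(shard_id=1, local_id=7, name="from shard 1"),
        Customer(shard_id=2, local_id=7, name="from shard 2"),
    ])
    session.commit()

pk = [c.name for c in Customer.__table__.primary_key.columns]
assert pk == ["shard_id", "local_id"]
```

Identity-map lookups then take a tuple in primary-key column order, e.g. `session.get(Customer, (1, 7))`.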




[sqlalchemy] Re: sqlalchemy decides to pull entire table

2008-05-21 Thread Rick Morrison
The and_ function is expecting two arguments, not a series of *args. It
works when you remove the third argument because you then have the expected
two arguments.

Either use nested calls to and_, or multiple calls to filter(), and build
up the query in a generative fashion, like this:

Event.query.filter(Event.id < id_under).filter(Event.feed ==
True).filter(Event.ns_id.in_(ns_list)).limit(.
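A runnable sketch of the generative style with the modern `select()` API (the thread's advice targets 0.4's `Event.query`; the `Event` model here is invented). Each chained `.where()` call ANDs one more criterion onto the statement:

```python
from sqlalchemy import Boolean, Column, Integer, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Event(Base):
    __tablename__ = "events"
    id = Column(Integer, primary_key=True)
    feed = Column(Boolean)
    ns_id = Column(Integer)

# Build the query generatively: one criterion per .where() call
stmt = (
    select(Event)
    .where(Event.id < 100)
    .where(Event.feed == True)  # noqa: E712 -- SQL expression, not a Python truth test
    .where(Event.ns_id.in_([1, 2, 3]))
    .limit(10)
)

sql = str(stmt)
assert sql.count("AND") >= 2   # all three criteria survive, joined by AND
```

This sidesteps the and_() arity question entirely, and the LIMIT keeps the result set bounded.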




[sqlalchemy] Re: SqlAlchemy and Stored Procedures with variables

2008-05-21 Thread Rick Morrison

 You're right, that was the original motivation. I tried just changing
 @@identity for scope_identity(), which worked just fine on pymssql, but not
 on the other adapters. Did eventually get it working, but it involved pyodbc
 changes, that I was unable to do. Fortunately someone on the list
 volunteered, which was most appreciated.


Ah I missed that, thanks. Do you recall the nature of the changes? Could
they be related to the mangled stack trace we recently saw on a different
list thread?




[sqlalchemy] Re: SqlAlchemy and Stored Procedures with variables

2008-05-20 Thread Rick Morrison

 I think we're using pymssql from a Linux box. Is there a way to tell
 which Python module SQLAlchemy is using? We tried running it with
 straight pymssql instead and it works in there:


The MSSQL module does an auto-detect of the supported DB-API modules and
uses the first one that imports without error. The sequence for the 0.4
series is [pyodbc, pymssql, adodbapi]. You can force module selection by
using a 'module=' keyword argument to the create_engine call.


Crumb. Thanks.


Here's another:
   http://rcrumb.com/




[sqlalchemy] Re: SqlAlchemy and Stored Procedures with variables

2008-05-20 Thread Rick Morrison
We should really be using the ODBC sanctioned syntax for procedure call,
which is still unsupported by pyodbc, AFAIK.  ODBC on *nix is over 10 years
old at this point, you'd think we'd have a better story to tell by now,
jeez.


 Dunno if this is related, but pyodbc and adodbapi execute each statement in
 a separate context. This caused a problem with scope_identity, as in the
 original implementation with pyodbc, scope_identity always returned null.


I thought the original impetus for scope_identity was not multiple execution
contexts, but rather things being fouled up for some users where they had
nested INSERTs being done via a trigger on the mapped table, and the
brain-dead SELECT @@identity_insert wasn't able to pluck out the correct PK.
Was there another reason I'm missing?

Jason, how are you coming on the dialect refactor? Things are heating up out
here.




[sqlalchemy] Re: MSSQL dislikes schema= on table.create()

2008-05-19 Thread Rick Morrison
That error would be thrown by an insert, not a table create, and I believe
there are other users using pyodbc with schema-specified tables without
problems.

I won't have a chance to look at this under pyodbc until tomorrow. In the
meantime, if you could try with pymssql to see if you get the same error,
that will help in debugging this.

BTW, the dbo schema is implicit if you don't specify an explicit schema.

Rick




[sqlalchemy] Re: MSSQL dislikes schema= on table.create()

2008-05-19 Thread Rick Morrison
 So now I'm of two minds about which module to use and if I should use
 a schema or not for these porposes.

There's a few arcane limitations when using the pymssql module, pyodbc will
be better-supported going into the future.
As for using schema vs. other namespace tricks, that's up to you. I would
stick with schemas - SA includes good schema support.

 had a broken exception message

Does anyone know what the story is with pyodbc and mangled tracebacks? That
sounds kind of disturbing, like a corrupted stack or otherwise scrambled
frame data inside of the Python interpreter.




[sqlalchemy] Re: SqlAlchemy and Stored Procedures with variables

2008-05-19 Thread Rick Morrison
Does the same statement work in an interactive query window, complete with
the embedded semicolon you're using?

Also, you should be able to use positional parameters instead of named
parameters in your call:
  cur.execute("execute stored_proc 'gra%'")

Note that as of yet there is no SQLAlchemy support for OUT or IN/OUT
parameters; currently you can return a single set of results via a SELECT in
the stored procedure.

Rick
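Procedure-call syntax is backend-specific, but the parameter-binding pattern is the same as any `text()` statement. A sketch demonstrated against SQLite with a plain SELECT (on MSSQL the statement would be e.g. `"EXEC stored_proc :pattern"`, with `stored_proc` a hypothetical procedure name):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

with engine.connect() as conn:
    # Named parameters in text() are translated to whatever bind style
    # the DBAPI expects (qmark, pyformat, ...), so the same Python code
    # works across drivers.
    echoed = conn.execute(
        text("SELECT :pattern AS pattern"),
        {"pattern": "gra%"},
    ).scalar()
    assert echoed == "gra%"
```

Any result rows SELECTed inside the procedure come back through the normal result interface; OUT parameters do not.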




[sqlalchemy] Re: How to specify NOLOCK queries in SA (mssql)

2008-05-15 Thread Rick Morrison
Hi Bruce,

I'm considering a switch from pymssql to pyodbc myself in the
not-too-distance future, and this thread has me a bit curious about what's
going on. This is a subject that may affect SQL more in the future when ODBC
and JDBC drivers get more use.

I think there's two distinct questions that need to be answered to get to
the bottom of this. The first question is why are these queries being
issued at all, and from where? Like Mike says, SQLA is playing no part in
constructing or issuing these queries.

From the bit of googling that I've done so far, it seems that the FMTONLY
queries are issued behind the scenes by the data connector to fetch metadata
regarding the query. While there's a lot of reasons a data connector might
need to have metadata, there's two that seem especially likely when SQLA
comes into play:

   a) There are un-typed bind parameters in the query, and the connector
needs to know the data types for some reason.

   b) There is going to be a client-side cursor constructed, and result
metadata is needed to allocate the cursor. From the description you give, I
would bet that this is your main issue.

If the cause is (a), a fix might be problematic, as SQLA issues all of its
queries using bind parameters, and I'm not sure if type information is used
for each. But if you're using explicit bind parameters, you may want to
specify the type on those.

As for the more likely cause (b) I would think this could be gotten around
by making sure you specify firehose (read-only, forward-processing,
non-scrollable) cursors for retrieval, but I'm not sure what the pyodbc
settings for this might be. As a bonus, you'll probably see a bit of a
performance boost using these types of cursors as well.


The second question is more of a mystery to me: OK, so the data connector
issues a FMTONLY query; if it's just fetching metadata, why would that
cause database locks?

This one I can't figure out. Unless you're calling stored procedures or
UDF's that have locking side effects, It's got to be a bug in the data
connector.  From what I read a FMTONLY query should be pretty fast (other
than the round-trip network time), and should lock nothing.

Are you running on Windows, or on Unix? What's your ODBC connector?

Please post to the list as you work through this and let us know what you
find...

Rick




[sqlalchemy] Re: Default collection class for unordered relations

2008-05-15 Thread Rick Morrison
I think Jason hits the nail on the head with his response - my first
reaction on the initial post was that was splitting hairs to enforce the
difference between an ordered list and an (allegedly) unordered list, but I
thought it was going to be a non-starter until I read Mike's reply. It seems
like overhead and unneeded complexity for what is really just a reminder.
And as his iterative example points out, not a very effective reminder at
that.




[sqlalchemy] Re: Connecting to an MS SQL server ?

2008-05-15 Thread Rick Morrison
The DSN method should work with Integrated Security as well. Here's a short
writeup of the DSN configuration:

   http://support.microsoft.com/kb/176378




[sqlalchemy] Re: Connecting to an MS SQL server ?

2008-05-15 Thread Rick Morrison
You really should reconsider. DSN is a much easier setup method than trying
to specify a myriad of ODBC options in a connection string. There's a GUI
user interface for setting up DSN's etc. It's the simpler and better
supported method.

If you really are dead-set against it, you'll need to use the 'odbc_options'
keyword in the dburi or as a keyword argument to create_engine() and specify
the ODBC connection options as a string.
For the details of the ODBC connection string contents, you'll need to
consult the ODBC documentation.




[sqlalchemy] Re: trunk is now on 0.5

2008-05-12 Thread Rick Morrison
-1.

It's confusing, and there's already an extant or_ function that's documented
and not confusing. The proposal is no more cooked than it was five months
ago.



On Mon, May 12, 2008 at 11:58 AM, [EMAIL PROTECTED] wrote:


 On Monday 12 May 2008 17:01:23 Michael Bayer wrote:
  what does
  q.filter(x==5).filter_or(x==9).filter(y=17).filter_or(x==27) do ?
  (x=5 or x=9) and (y=17 or x=27) ?  ((x=5 or x=9) and y=17) or x=27
  ?  etc ..

 what pythons/C x==5 or x==9 and y==17 or x==27 does?

 i know... the parenthesises. cant we invent something? it's not for
 tomorrow...
 the resetjoinpoint is one possibility, and some
 left_bracket()/right_bracket() is another.
 another way is to be able to do boolean arithmetics over whole
 queries, maybe thats even better?

 query.or_(
   query.filter(this).join(that), query.filter(that).join(this)
 )

  On May 12, 2008, at 8:06 AM, [EMAIL PROTECTED] wrote:
   one thing that might go in a wishlist - query.filter_or()
   http://groups.google.com/group/sqlalchemy/browse_thread/thread/f6798eb5ef2c0bfe
   should i make it into a ticket?
  
   as you might have noticed we've merged 0.5 into the trunk.
 

 





[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column

2008-05-06 Thread Rick Morrison


 I was thinking of a user-level option for liveliness checking on pool
 checkout, with dialect-specific implementations (e.g. execute a 'SELECT
 1', or something more efficient if the driver allows).  Is that in line
 with what you were thinking?


I had in mind something more of a optimistic / reactive nature, like a retry
on a cursor failure. But this could work equally well and could be much
simpler, albeit at some round-trip time on every pool checkout.

What's the recovery strategy if the connection is found to be dead? An
auto-reconnect with some retry count limit, or would it just notify the
dialect and that's it?




[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column

2008-05-02 Thread Rick Morrison
There's a Dialect refactor underway for 0.5.0 that will likely change the
way that options are fed to db engines:


http://groups.google.com/group/sqlalchemy/browse_thread/thread/36fd2e935b165d70

Part of that work will probably have some influence on the dburi and
create_engine(**kwargs) option specification, as both ODBC and JDBC have a
lot of options that can be specified.

So the odbc_options= keyword is likely to be short-lived anyway.

For now, I've applied your patch verbatim, since you're the only one using
it. r4621

-rick




[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column

2008-05-02 Thread Rick Morrison
Sounds nice, thanks for the heads-up.


 There'll be opportunities for dialects to set up pool events as well.

One of the things I'm looking to see is better reconnect support for dead
database connections from network partitions, sql server restarts, etc.

Is that going to be fully Dialect controlled, or is there some coming
support for auto-reconnect as well?




[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?

2008-04-30 Thread Rick Morrison
 then queried the db *directly using sql*.  It looks like the change
 hasn't made it to the DB yet

Also possible is that you're using an MVCC DB such as Postgres or Oracle,
and you're looking at an old, pre-update version of the data, as your direct
SQL would be in a separate transaction




[sqlalchemy] Re: is MapperExtension.after_update() called before the UPDATE is issued to the DB?

2008-04-30 Thread Rick Morrison
I suppose that depends on the behavior of the DB-API interface, in this case
I guess that's psycopg.

Anyway, I'm certainly not sure if an MVCC snapshot that's causing your
problem, but it does seem like at least a possibility. The certain way is to
check the update status inside the same transaction that did the update --
in general you can't count on transaction B to see transaction A changes
unless B starts after A commits. Whether an open pyscopg cursor implies an
open transaction -- beats me.

Rick




[sqlalchemy] Session.merge vs. mapper extensions

2008-04-30 Thread Rick Morrison
I just started using session.merge, and I noticed that on session.flush(),
the before_update mapper extension for the objects that have been merged
into the session are not called.

These are new instances, not previously persisted.

Is there something I need to do to trigger this, or eesss a bug?

Thx,
Rick




[sqlalchemy] Re: Session.merge vs. mapper extensions

2008-04-30 Thread Rick Morrison
right, sorry, before_insert


On Wed, Apr 30, 2008 at 11:18 PM, Michael Bayer [EMAIL PROTECTED]
wrote:



 On Apr 30, 2008, at 10:47 PM, Rick Morrison wrote:

  I just started using session.merge, and I noticed that on
  session.flush(), the before_update mapper extension for the objects
  that have been merged into the session are not called.
 
  These are new instances, not previously persisted.

 so..before_insert() would be called in that case no ?


 





[sqlalchemy] Re: Session.merge vs. mapper extensions

2008-04-30 Thread Rick Morrison
is not being called. As you can tell, I'm having trouble with the
keyboard tonight; the first message was a typo.




[sqlalchemy] Re: ODBC Connection is busy error

2008-04-27 Thread Rick Morrison
 What I'm not sure of at this point is if theres some cursor usage
 specific to the MS-SQL dialect that might be external to the
 ResultProxyif Rick could comb through that for me that would be
 helpful.

The cursor is used pre_exec() if an INSERT statement tries to set a literal
PK on a sequence-like column (called IDENTITY in MSSQL). MSSQL needs to have
a special mode turned on to do those.

It's used in do_exec() to turn that literal insert mode back off.

Finally it's also used post_exec() after INSERT statements on tables with
IDENTITY columns to fetch the newly-inserted PK value.


that's about it for cursor usage -- nothing on SELECT, only inserts.




[sqlalchemy] Re: ODBC Connection is busy error

2008-04-27 Thread Rick Morrison


 It is possible we could re-introduce check for open cursors as a
 pool events extension.  It would raise an error if any connection is
 returned with associated cursors still opened and could track down
 issues like these.



That would be a great diagnostic tool for this: It's hard to track down the
issue now, as the problem doesn't show itself until the cursor is re-used on
some later, unrelated query.




[sqlalchemy] Re: ODBC Connection is busy error

2008-04-24 Thread Rick Morrison
perhaps we could simply reset the pyodbc cursor before issuing a new SQL
operation?


class MSSQLExecutionContext(default.DefaultExecutionContext):
    def pre_exec(self):
        if self.dialect.clear_previous_results:
            self.cursor.clear_previous_results_somehow_idunnohow()





[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column

2008-04-23 Thread Rick Morrison
Look, relax:

No one is suggesting that we *eliminate* DSN-less connections, only to come
up with a reasonable *default* for ODBC connection specifications. A
mechanism for non-DSN connections will certainly be provided.



 Well,
 Based on :
 http://www.4guysfromrolla.com/webtech/070399-1.shtml

 These tests showed that DSN-less connections were slightly faster
 than System DSN connections. The increase in performance was nothing
 monumental; the greatest performance boost was a mere 13% faster with
 64 concurrent requests. For one, two, or four concurrent requests,
 there was virtually no performance improvement. In fact, no noticeable
 improvement is seen in a DSN-less connection over a System DSN until
 there are 10 or more concurrent connections.


 Also, I don't know how things work in hosted environments but if they
 charge for setting up system dsn then that might be another reason to
 use dsn-less connection.

 Also, porting an application that uses a dsn-less connection is
 easier than porting an application that requires a system dsn to be
 set up.

 Lucas

 





[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column

2008-04-18 Thread Rick Morrison

 Reading this thread, I keep wondering why you are trying to put
 all that connection setup configuration into the connection string...

 Such setting are normally configured in the odbc.ini file and then
 you just reference data source name in the connection string.

 That's the standard way of using ODBC and the reason why you
 have ODBC managers with nice setup GUIs. A DSN-less setup
 like the one created by SA bypasses the ODBC manager
 configuration.


Only via an option: DSN connections have been supported for some time via
the 'dsn' keyword, the OP seems to either not want that or can't get it to
work.



 SA should really adapt to the ODBC standard of using data source
 names, as it moves the connection configuration where it should be:
 Into the scope of the ODBC manager you are using to configure your
 ODBC drivers.


SA already has a de-facto standard using a db-uri scheme that works with
non-ODBC datasources as well. It makes sense for ODBC-compliant SA database
drivers to conform to that form, not to displace it and force a
config-file style of setup.




[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column

2008-04-18 Thread Rick Morrison
hey thanks Jason, that's a nice shortcut.

Lukasz, can you please give that a try?

On Fri, Apr 18, 2008 at 12:07 PM, jason kirtland [EMAIL PROTECTED]
wrote:


 Rick Morrison wrote:
  Yeah, I was under the impression that config args passed in via
  create_engine() ctor and via dburi were treated the same, but looking
  over engine/strategies.py, it looks as if they have two separate
  injection points. I'll see if I can get it to allow either, stay tuned.

 create_engine('mssql://h/db', connect_args=dict(odbc_options='bar'))
 create_engine('mssql://h/db?odbc_options=bar')
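The second form above works because query-string arguments in the db-uri are handed along as keyword options; the extraction step itself can be sketched with the standard library (assuming simple single-valued parameters, and a hypothetical helper name):

```python
from urllib.parse import urlparse, parse_qs

def extract_engine_kwargs(dburi):
    """Pull ?key=value pairs out of a db-uri, the way query-string
    options reach the dialect as keyword arguments."""
    query = urlparse(dburi).query
    return {k: v[0] for k, v in parse_qs(query).items()}
```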

 





[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column

2008-04-18 Thread Rick Morrison
Yeah, I was under the impression that config args passed in via
create_engine() ctor and via dburi were treated the same, but looking over
engine/strategies.py, it looks as if they have two separate injection
points. I'll see if I can get it to allow either, stay tuned.



On Thu, Apr 17, 2008 at 4:24 PM, Lukasz Szybalski [EMAIL PROTECTED]
wrote:


 On Thu, Apr 17, 2008 at 3:04 PM, Rick Morrison [EMAIL PROTECTED]
 wrote:
  It's a two-line change that pops the new keyword out of the config dict
 just
  like the others that were added.
 
  Mike, can you take a quick look at mssql.py line 804 and see why this
 might
  be complaining? I've got to run out.
 

 Well, I don't know if that is the right place to add that?

 from the code, this is what it would expect:
 e = sqlalchemy.create_engine(mssql://xxx:[EMAIL PROTECTED]:1433/xxx?odbc_options=Driver=TDS;TDS_Version=8.0)
 vs what you said you wanted:
 sqlalchemy.create_engine('mssql://xxx:[EMAIL PROTECTED]:1433/',odbc_options='DRIVER={TDS};TDS_Version=8.0')

 to create

 Server=xx;Database=xx;Port=1433;UID=xx;PWD=xxx;Driver={TDS};TDS_Version=8.0


 Lucas

 





[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column

2008-04-17 Thread Rick Morrison
 Here are the options as specified by free TDS. What you are talking
 about is setting it in conf file which is used only for dsn
 connection.


No, I meant as the *default* TDS version here. See here:

   http://www.freetds.org/userguide/freetdsconf.htm

I'm talking about the [global] setting, which is the default used unless
overridden in a different [dataserver] config.
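That [global] default lives in freetds.conf and would look roughly like this (a hedged example; "8.0" is FreeTDS's pre-0.91 name for the SQL Server 2000/2005 wire protocol):

```ini
[global]
    # default wire protocol for all dataservers unless overridden
    tds version = 8.0
```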


 here is what freetds said:
 to use TDS 7.0 in a DSN-less connection, your options are:

 1.  Rebuild FreeTDS --with-tdsver=7.0, or
 2.  Set the environment variable TDSVER=7.0 before starting Python, or
 3.  Add TDS_Version=7.0; to your connection string.
 4.  Use Servername in your connection string.


or 5. Change the default version to the desired version.

Option 3 seems the easiest to me.


OK. I'm just suggesting that if you'd rather specify the version in the
dburi instead of changing it on the server, that we allow the specification
of the ODBC connect string directly, rather than provide a bunch of separate
parameters that are in turn only used to build an ODBC connection string
anyway.

Finally, as I mentioned in an earlier thread, you should most likely be
using TDS version 8.0, not 7.0.

7.0 is for SQL Server 7,
8.0 is for SQL Server 2000 and 2005

Rick




[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column

2008-04-17 Thread Rick Morrison
ok, ok, assuming that dsn-less connections actually do ignore the .conf file
and require all that stuff to be specified.

here's the question that I'm trying to ask:

instead of  something like this:
  create_engine('mssql://user:[EMAIL PROTECTED]/database',
odbc_driver='TDS', odbc_autotranslate='No', odbc_tds_ver='8.0')

   how about this:
  create_engine('mssql://user:[EMAIL PROTECTED]/database',
odbc_connect='DRIVER=TDS; TDS_Version=8.0; OtherODBCStuff=goes here')


do you see the difference? Instead of providing a hundred and one
ODBC-specific keyword options, we just allow you to specify part of the ODBC
connect string directly.
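The difference can be sketched as one helper that appends a caller-supplied fragment to the generated ODBC string, instead of one keyword per ODBC attribute. The function name and defaults here are illustrative, not SQLAlchemy's API:

```python
def build_odbc_connect_string(server, database, uid, pwd, port=1433, odbc_connect=""):
    """Assemble the ODBC connection string, letting the caller tack on
    arbitrary ODBC attributes (Driver, TDS_Version, ...) verbatim."""
    parts = [
        "Server=%s" % server,
        "Database=%s" % database,
        "Port=%d" % port,
        "UID=%s" % uid,
        "PWD=%s" % pwd,
    ]
    if odbc_connect:
        # strip any trailing separator so we don't emit "a;;b"
        parts.append(odbc_connect.strip().rstrip(";"))
    return ";".join(parts)
```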




[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column

2008-04-17 Thread Rick Morrison
It's in trunk r4518. Take 'er for a spin and let me know how it works out.


On Thu, Apr 17, 2008 at 2:54 PM, Lukasz Szybalski [EMAIL PROTECTED]
wrote:


 On Thu, Apr 17, 2008 at 1:22 PM, Rick Morrison [EMAIL PROTECTED]
 wrote:
  ok, ok, assuming that dsn-less connections actually do ignore the .conf
 file
  and require all that stuff to be specified.
 
  here's the question that I'm trying to ask:
 
  instead of  something like this:
 create_engine('mssql://user:[EMAIL PROTECTED]/database',
  odbc_driver='TDS', odbc_autotranslate='No', odbc_tds_ver='8.0')
 
 how about this:
create_engine('mssql://user:[EMAIL PROTECTED]/database',
  odbc_connect='DRIVER=TDS; TDS_Version=8.0; OtherODBCStuff=goes here')
 

 how about:
 odbc_options='DRIVER=TDS; TDS_Version=8.0; OtherODBCStuff=goes here'

 
  do you see the difference? Instead of providing a hundred and one
  ODBC-specific keyword options, we just allow you to specify part of the
 ODBC
  connect string directly.

 This would be perfect, since who knows what other options might need
 to be specified as things progress.
 Let me know when I would be able to try this out.
 Lucas

 





[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column

2008-04-17 Thread Rick Morrison

 Does it matter what case are the parameters? DRIVER in pyodbc, we used
 'driver' in previous connection strings etc...


No, the parameters are a straight pass-through; that traceback is complaining
about the 'odbc_options' keyword itself. Are you sure you're running the
current trunk?




[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column

2008-04-17 Thread Rick Morrison
It's a two-line change that pops the new keyword out of the config dict just
like the others that were added.

Mike, can you take a quick look at mssql.py line 804 and see why this might
be complaining? I've got to run out.



On Thu, Apr 17, 2008 at 3:58 PM, Lukasz Szybalski [EMAIL PROTECTED]
wrote:


 On Thu, Apr 17, 2008 at 2:35 PM, Rick Morrison [EMAIL PROTECTED]
 wrote:
 
   Does it matter what case are the parameters? DRIVER in pyodbc, we used
   'driver' in previous connection strings etc...
  
  
  
  
 
  No the parameters are a straight pass-through, that traceback is
 complaining
  about the 'odbc_options' keyword itself. Are you sure you're running the
  current trunk?

 svn update
 At revision 4518.
 [EMAIL PROTECTED]:~/tmp/sqlalchemy/sqlalchemy/lib$ python
 Python 2.4.4 (#2, Apr  5 2007, 20:11:18)
 [GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)] on linux2
 Type help, copyright, credits or license for more information.
  import sqlalchemy
  sqlalchemy.__version__
 'svn'
  e = sqlalchemy.create_engine('mssql://xxx:[EMAIL PROTECTED]:1433/xxx',odbc_options='DRIVER={TDS};TDS_Version=8.0')
 Traceback (most recent call last):
  File stdin, line 1, in ?
  File sqlalchemy/engine/__init__.py, line 160, in create_engine
return strategy.create(*args, **kwargs)
  File sqlalchemy/engine/strategies.py, line 114, in create
raise TypeError(
 TypeError: Invalid argument(s) 'odbc_options' sent to create_engine(),
 using configuration MSSQLDialect_pyodbc/QueuePool/Engine.  Please
 check that the keyword arguments are appropriate for this combination
 of components.

 not sure why strategies.py would complain?

 Are you converting:
 'mssql://xxx:[EMAIL PROTECTED]:1433/xxx',odbc_options='DRIVER={TDS};TDS_Version=8.0'
 to
 SERVER=xxx;UID=xx;PWD=xxx;DRIVER={TDS};TDS_Version=7.0


 I have added the print line that we had used before that outputs the
 connection string:

  import sqlalchemy
  e = sqlalchemy.create_engine('mssql://xx:[EMAIL PROTECTED]:1433/xxx',odbc_options='Driver=TDS;TDS_Version=8.0')
 DRIVER={SQL Server};Server=xxx;Database=xxx;Port=1433;UID=xxx;PWD=xxx

 The driver is not changed and tds_version is not added
 Lucas

 





[sqlalchemy] Re: group_by, count, and name count field as ?

2008-04-11 Thread Rick Morrison


    File sqlalchemy/databases/mssql.py, line 499, in do_execute
      cursor.execute(SET IDENTITY_INSERT %s OFF %
          self.identifier_preparer.format_table(context.compiled.statement.table))
  SystemError: 'finally' pops bad exception

 This seems to be some weird error either with pyodbc or with the MS-
 SQL dialect, MS-SQL people we've seen this before, correct ?


That statement should only ever be issued on an INSERT into a table with an
auto-generated PK when the row contains an explicit value for the PK, never on
any select() or query().

Even then it should be fine, unless pyodbc is having trouble with SQL
statement sequence.

A small testcase would be needed to track this one down.




[sqlalchemy] Re: Sqlalchemy Stored Procedures

2008-04-08 Thread Rick Morrison
Would you please post the traceback that you're getting with this?

Note that you don't need the session.begin() and session.flush() with a
transactional session; the .begin() is implicit and the .flush() will be
issued by the .commit().

On Tue, Apr 8, 2008 at 1:51 AM, Madhu Alagu [EMAIL PROTECTED] wrote:


 Hi,


 I would like to take advantage of SQLAlchemy stored procedures.


 engine = create_engine('postgres://postgres:[EMAIL PROTECTED]:5432/neithal', echo=True)
 session = scoped_session(sessionmaker(bind=engine, transaction=True, autoflush=False))
 trans = session.begin()
 sql = select([func.add_user_f(108, 'kk', 'kk')])
 result = session.execute(sql)
 result.close()
 session.flush()
 session.commit()





 Thanks

 Madhu Alagu
 





[sqlalchemy] Re: patch for mssql odbc

2008-04-08 Thread Rick Morrison
I'll reply here rather than on the ticket as I'm unable to stay logged into
Trac from here (dual TCP/IP address problem).

 I have just posted a patch for the MSSQL_odbc dialect. The ticket
 number is #1005.

The patch is certainly simple enough, but any objection if we call the key
odbc_autotranslate instead of simply autotranslate?

Thanks for the patch,
Rick




[sqlalchemy] Re: Getting error while using MapperExtension

2008-04-08 Thread Rick Morrison
For your call to mapper.populate_instance, you have the arguments for 'row'
and 'instance' reversed.

On Tue, Apr 8, 2008 at 6:56 AM, Sanjay [EMAIL PROTECTED] wrote:


 Hi,

 In a project, I am using MapperExtension this way:

 class TaskExtension(MapperExtension):
     def populate_instance(self, mapper, selectcontext, row, instance, **flags):
         mapper.populate_instance(selectcontext, row, instance, **flags)
         instance.on_load()  # does some custom initialization

 It gives me the following exception:

 TypeError: ('Task' object is unsubscriptable...

 Needing help.

 thanks
 Sanjay
 





[sqlalchemy] Re: patch for mssql odbc

2008-04-08 Thread Rick Morrison
r4479 has the new 'odbc_autotranslate' flag

Currently documented only in the CHANGES file, more docs will follow later.




  1   2   3   4   >