On 29 Jan, 21:03, Rick Morrison rickmorri...@gmail.com wrote:
I then added a wait option which simply sleeps for a brief period after
closing the SA connections, and then does the connection count check. With a
1/2 second delay between the closing of the SA connection pool and the check
for
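The wait-then-check idea above (close the pool, pause briefly, then verify nothing is left open) can be sketched roughly like this. `FakePool` and `closed_after_wait` are hypothetical stand-ins for illustration, not SQLAlchemy API:

```python
import time

# Hypothetical sketch of the wait-then-check pattern: after disposing
# the pool, wait briefly so the driver/server can finish tearing the
# connections down, then check the open-connection count.
class FakePool:
    """Stands in for an SA connection pool (illustration only)."""
    def __init__(self, n):
        self.open_count = n

    def dispose(self):
        self.open_count = 0  # closes all pooled connections

def closed_after_wait(pool, delay=0.5):
    pool.dispose()
    time.sleep(delay)        # give the teardown time to complete
    return pool.open_count == 0

pool = FakePool(3)
print(closed_after_wait(pool, delay=0.01))  # → True
```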
On 24 Jan, 23:31, Rick Morrison rickmorri...@gmail.com wrote:
Oh... I didn't explain myself... I mean that it's already empty at the
first cycle of the loop...
It would be normal to not enter the loop if you haven't yet opened any
connections, as connections are opened on demand. Make
On 23 Jan, 23:43, Rick Morrison rickmorri...@gmail.com wrote:
From your earlier post:
a_session.close()
sa_Session.close_all()
sa_engine.dispose()
del sa_engine
but it does not close the connection!
Here's Engine.dispose (line 1152, engine/base.py)
def
On 24 Jan, 21:27, Rick Morrison rickmorri...@gmail.com wrote:
Hey, seems that you've got the problem. conn = self._pool.get( False )
is the problem
It raises an Empty error...:
It's supposed to; that's the exit condition for the while True loop. It
does make it at least once
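A minimal sketch of the pattern Rick describes: the non-blocking `get(False)` on the pool's internal queue raises `Empty` once the pool is drained, and that exception is the drain loop's normal exit condition, not an error (stdlib `queue` used here for illustration):

```python
import queue

# pool.get(False) raising Empty is the exit condition of the drain
# loop, not a bug: it signals that every pooled connection has been
# retrieved and closed.
pool = queue.Queue()
for i in range(3):
    pool.put(f"conn{i}")

closed = []
while True:
    try:
        conn = pool.get(False)   # non-blocking get
    except queue.Empty:
        break                    # pool drained: normal exit
    closed.append(conn)

print(closed)  # → ['conn0', 'conn1', 'conn2']
```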
On 21 Jan, 16:18, Michael Bayer mike...@zzzcomputing.com wrote:
On Jan 21, 2009, at 5:22 AM, Smoke wrote:
Hi,
I'm not a SQLAlchemy expert ( just an average user... ). I have an
application that's causing me some problems... It's a monitoring
application that connects to a MS Sql Server
On 23 Jan, 19:21, Rick Morrison rickmorri...@gmail.com wrote:
Good question, I don't know the answer.
But even if it were a DSN option, it's likely to be an optional one. In the
absence of an explicit setting, shouldn't we default to having the setting
off, not on? It sounds as if the
On 23 Jan, 20:46, Rick Morrison rickmorri...@gmail.com wrote:
his answer here:
http://groups.google.com/group/pyodbc/browse_thread/thread/320eab14f6...
You say in that thread that you're already turning off the setting by
issuing:
import pyodbc
pyodbc.pooling = False
before
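One detail worth making explicit: `pyodbc.pooling` only takes effect if it is set before the first connection is opened, since the ODBC environment is initialized lazily on first use. A guarded sketch (the `find_spec` guard is only so the snippet runs where pyodbc isn't installed):

```python
import importlib.util

# pyodbc.pooling controls ODBC connection pooling; as noted in the
# thread, it must be set to False BEFORE the first pyodbc.connect()
# call -- flipping it afterwards has no effect on the already
# initialized ODBC environment.
pooling_disabled = False
if importlib.util.find_spec("pyodbc") is not None:
    import pyodbc
    pyodbc.pooling = False   # set before any pyodbc.connect(...) call
    pooling_disabled = not pyodbc.pooling
else:
    pooling_disabled = True  # nothing to disable in this environment

print(pooling_disabled)
```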
On 23 Jan, 21:24, Rick Morrison rickmorri...@gmail.com wrote:
OK, it should use whatever is set on the ODBC DSN then. I'm not sure that
pyodbc should have an opinion about it.
Eh?
is there a way to set pyodbc.pooling = None or some equivalent ?
It's pyodbc.pooling = False, as
Hi,
I'm not a SQLAlchemy expert ( just an average user... ). I have an
application that's causing me some problems... It's a monitoring
application that connects to a MS Sql Server, so it's always on.
Sometimes it happens that, seemingly at random, I get a DBAPIError from pyodbc. The
error is something like
On 19 Dec, 01:37, Rick Morrison [EMAIL PROTECTED] wrote:
Same here on pymssql.
I tried it with 'start' as the only PK, and with both 'identifier' and
'start' as PK. Both work fine.
Are you sure your in-database tabledef matches your declared schema?
I've attached a script that works here.
Sorry I'm a bit late ( work deadlines are eating up my
time! :) ).
I've tried some different configurations and schema definitions... and
I've noticed that it never updates a row if I set the datetime field
as PK ( never! even if I set it as the only PK .. ). If I set
composite PKs
I'm not on my PC right now so I can send you the non-working copy only
tomorrow.
I've tried several schema changes to see if the problem
always occurs or if there are cases where it works, not necessarily because I
need all those schemas. In the former table schema, as I said, I've
On 10 Dec, 03:11, Michael Bayer [EMAIL PROTECTED] wrote:
I can't reproduce your problem, although I don't have access to MSSQL
here and there may be some issue on that end. Attached is your script
using an in-memory sqlite database, with the update inside of a while
loop, and it updates
Hi,
These days I'm playing with sqlalchemy to see if it can fit my
needs... I'm having some troubles with this ( maybe it's a really dumb
question.. or maybe an unsupported feature.. :) ):
I have a database (mssql) with some tables with composite primary
keys... something like this:
t_jobs =
On 9 Dec, 21:37, Michael Bayer [EMAIL PROTECTED] wrote:
They're entirely supported. Try to provide a fully working example
illustrating the problem you're having.
Here's a small example just to simulate the problem. The last part of
this code is there just to trigger it... normally
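For reference, an update keyed on a composite primary key is unproblematic at the plain-SQL level; here is an illustration using stdlib sqlite3 (as in Michael's in-memory repro), with hypothetical table and column names echoing the thread's `t_jobs`:

```python
import sqlite3

# Composite PK including a datetime-ish column: locating and updating a
# row by the full key works fine in plain SQL. Names are made up for
# illustration; timestamps kept as ISO strings for portability.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE jobs (
        identifier TEXT NOT NULL,
        start      TEXT NOT NULL,
        status     TEXT,
        PRIMARY KEY (identifier, start)   -- composite PK
    )
""")
start = "2007-12-09 21:37:00"
db.execute("INSERT INTO jobs VALUES (?, ?, ?)", ("job1", start, "queued"))

# UPDATE the row located by the full composite key
db.execute(
    "UPDATE jobs SET status = ? WHERE identifier = ? AND start = ?",
    ("running", "job1", start),
)
row = db.execute("SELECT status FROM jobs").fetchone()
print(row[0])  # → running
```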
implications, I'm sure you can find more if you look:
http://www.sql-server-performance.com/articles/per/guid_performance_p...
Thanks much! :)
FP
Rick
On 9/12/07, Smoke [EMAIL PROTECTED] wrote:
Still have problems...
if I change:
t = sa.Table('Machines', metadata, autoload=True
from
sqlalchemy.databases.mssql
On Sep 11, 9:01 am, Smoke [EMAIL PROTECTED] wrote:
Hi All,
I'm new to sqlalchemy and was checking if I can introduce it into some
of my projects... I have some database ( sql server ) tables with
MSUniqueidentifier columns set as PK and with newid
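The `uniqueidentifier`-plus-`newid()` setup can be emulated client-side for illustration: generate a GUID per row with `uuid4` and use it as the primary key. sqlite3 stands in for SQL Server here; the table and helper names are made up:

```python
import sqlite3
import uuid

# GUID-as-PK sketch: SQL Server's newid() generates a fresh GUID per
# row server-side; uuid.uuid4() plays that role client-side here.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE machines (id TEXT PRIMARY KEY, name TEXT)")

def insert_machine(name):
    rowid = str(uuid.uuid4())         # stands in for newid()
    db.execute("INSERT INTO machines VALUES (?, ?)", (rowid, name))
    return rowid

pk = insert_machine("press-01")
found = db.execute("SELECT name FROM machines WHERE id = ?", (pk,)).fetchone()
print(found[0])  # → press-01
```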