[sqlalchemy] Re: Full connection pool close

2009-02-02 Thread Smoke



On 29 Jan, 21:03, Rick Morrison rickmorri...@gmail.com wrote:
 I then added a wait option which simply sleeps for a brief period after
 closing the SA connections, and then does the connection count check. With a
 1/2 second delay between the closing of the SA connection pool and the check
 for all connections closed, I get pretty reliable results for closing all
 connections.

 Please try the attached test on your machine and see if you get similar
 results.

Great, Rick! Thanks!
I've run the test a large number ( tons ) of times and I've had
some bad closing results ( roughly 30% ) with the 1/2 second
delay. With 0.9 seconds I had no problems at all!
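
A possible refinement, to avoid hand-tuning the sleep: poll the server until
the connections are gone, with a timeout as the upper bound. A rough sketch
( the monitor DSN and the sysprocesses filter are assumptions to adapt to
your own server ):

import time
import pyodbc

# Separate one-off connection used only for the check itself (assumed DSN).
MONITOR = "DRIVER={SQL Server};SERVER=localhost;DATABASE=master;UID=sa;PWD=pass"

def count_connections():
    con = pyodbc.connect(MONITOR, autocommit=True)
    try:
        # Count sessions coming from this machine; adapt the WHERE clause
        # (hostname, loginame, program_name) to single out your process.
        cur = con.cursor()
        cur.execute("SELECT COUNT(*) FROM master..sysprocesses "
                    "WHERE hostname = HOST_NAME()")
        return cur.fetchone()[0]
    finally:
        con.close()

def wait_until_closed(sa_engine, timeout=2.0, interval=0.1):
    sa_engine.dispose()
    deadline = time.time() + timeout
    while time.time() < deadline:
        if count_connections() <= 1:  # only the monitor connection is left
            return True
        time.sleep(interval)
    return False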

Cheers!

Fabio



[sqlalchemy] Re: Full connection pool close

2009-01-27 Thread Smoke



On 24 Jan, 23:31, Rick Morrison rickmorri...@gmail.com wrote:
  Oh... I didn't explain myself... I mean that it's already empty on the
  first cycle of the loop...

 It would be normal to not enter the loop if you haven't yet opened any
 connections, as connections are opened on demand. Make sure your program
 issues at least one query during this test. If you are already issuing
 queries, then bundle this up into as simple a test case as you can make, and
 we'll have a look at it.

I was already issuing some queries... ( that's why SQL Server Profiler
tells me that there's an open connection ). Here's a more complete
example:

import pyodbc
pyodbc.pooling = False
import sqlalchemy as sa
from sqlalchemy.orm import sessionmaker

sa_engine = sa.create_engine(dbUri, echo=False, strategy='threadlocal')
sa_Session = sa.orm.scoped_session( sessionmaker( bind=sa_engine ) )
metadata = sa.MetaData(sa_engine)
sa_session = sa_Session()

# The table definition... could be any table anyway..
#stations = sa.Table("Stations", metadata,
#    sa.Column("name", sa.VARCHAR(20), primary_key=True),
#    sa.Column("description", sa.String(200)),
#    sa.Column("priority", sa.SmallInteger()),
#    autoload=aload)

stations.select().execute().fetchall()

# SQL Server Profiler tells me that a connection is opened
sa_session.close()
sa_Session.close_all()
sa_engine.dispose()
del sa_session
del sa_Session
del sa_engine



PS: Is there any method, function, or class in SQLAlchemy to get at
all of the pool's open connections to the DB? In this case my SQLAlchemy
connection is closed but the pooled connection is still alive on the DB side
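
As far as I can tell there's no public API in 0.4 that enumerates every live
DBAPI connection, but the pool can at least report its own bookkeeping. A
minimal sketch, assuming the default QueuePool -- status() is the same method
the dispose() source quoted later in this thread logs with:

print sa_engine.pool.status()  # pool size / checked-in / overflow counters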



[sqlalchemy] Re: Full connection pool close

2009-01-24 Thread Smoke



On 23 Jan, 23:43, Rick Morrison rickmorri...@gmail.com wrote:
 From your earlier post:



  sa_session.close()
  sa_Session.close_all()
  sa_engine.dispose()
  del sa_session
  del sa_Session
  del sa_engine
  but it does not close the connection!

 Here's Engine.dispose (line 1152, engine/base.py)

     def dispose(self):
         self.pool.dispose()
         self.pool = self.pool.recreate()

 ..and here's QueuePool.dispose (the default pool, line 646, pool.py)

     def dispose(self):
         while True:
             try:
                 conn = self._pool.get(False)
                 conn.close()
             except Queue.Empty:
                 break

         self._overflow = 0 - self.size()
         if self._should_log_info:
             self.log("Pool disposed. " + self.status())

 So the normal path would be to indeed close the connection (but not
 necessarily to delete the connection itself, it just falls out of scope).
 Can you trace into the dispose() call and verify that these are being run?

Hey, seems that you've got the problem: conn = self._pool.get( False )
is the problem.
It raises an Empty error:

File "E:\test.py", line 175, in <module>
  Data.sa_engine.dispose()
File "c:\Python25\Lib\site-packages\sqlalchemy-0.4.8-py2.5.egg\sqlalchemy\engine\base.py", line 1133, in dispose
  self.pool.dispose()
File "C:\Python25\Lib\site-packages\sqlalchemy-0.4.8-py2.5.egg\sqlalchemy\pool.py", line 626, in dispose
  conn = self._pool.get(False)
File "c:\Python25\Lib\site-packages\sqlalchemy-0.4.8-py2.5.egg\sqlalchemy\queue.py", line 140, in get
  raise Empty
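
One hedged note on why the queue can be empty on the very first get():
QueuePool's internal queue holds only checked-in connections, so anything
still checked out never appears in the dispose() loop. A sketch of that
reading, with names from the snippets above:

conn = sa_engine.connect()  # checked out: not in self._pool
sa_engine.dispose()         # the loop sees Queue.Empty immediately
conn.close()                # this connection was never touched by dispose()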



[sqlalchemy] Re: Full connection pool close

2009-01-24 Thread Smoke



On 24 Jan, 21:27, Rick Morrison rickmorri...@gmail.com wrote:
  Hey, seems that you've got the problem: conn = self._pool.get( False )
  is the problem.
  It raises an Empty error:

 It's supposed to; that's the exit condition for the while True loop.  It
 does make it at least once through the loop, though, right? Enough to close
 any connections you may have open?

Oh... I didn't explain myself... I mean that it's already empty on the
first cycle of the loop...



[sqlalchemy] Re: Full connection pool close

2009-01-23 Thread Smoke



On 21 Jan, 16:18, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jan 21, 2009, at 5:22 AM, Smoke wrote:
  Hi,

  I'm not a SQLAlchemy expert ( just an average user... ). I have an
  application that's causing me some problems... It's a monitoring
  application that connects to a MS Sql Server, so it's always on.
  Sometimes it happens that I randomly get a DBAPIError with pyodbc. The
  error is something like "[Microsoft][ODBC Driver Manager] Driver's
  SQLAllocHandle on SQL_HANDLE_DBC failed". After the first time I
  have this error, every other DB operation generates this error.
  So.. what I would like to do is completely close ( kill ) the active
  connection pool and recreate it. My code is something like this:

  sa_engine=sa.create_engine(dbUri, echo=False, strategy='threadlocal')
  sa_Session = sa.orm.scoped_session( sessionmaker(bind=sa_engine,
  autoflush=False) )
  metadata = sa.MetaData(sa_engine)
  sa_session = sa_Session()

  and then:

  sa_session.close()
  sa_Session.close_all()
  sa_engine.dispose()
  del sa_session
  del sa_Session
  del sa_engine

  But after executing this Sql Server Profiler tells me that the
  connection is still opened. The only way is to kill the application.
  My sqlalchemy.__version__ is 0.4.8

 assuming no other connections are checked out, that would close all
 connections.  It's possible the DBAPI is just not able to close the
 connection fully.   Try with a raw pyodbc application to see if this
 is the case.   Other things to try are to use the NullPool with
 create_engine(), which doesn't pool connections.   It's also extremely
 unlikely that you should be using strategy='threadlocal' unless you
 are calling begin()/commit() from your Engine directly, so you might
 want to take that out.

So... I've tried raw pyodbc and it keeps the connection alive too if
you just import pyodbc and create a connection instance. What makes
pyodbc close the connection is setting pyodbc.pooling = False. The
pyodbc people told me they will look into this behavior.
Using NullPool opens and immediately closes the connection after doing
the SQL work, so it should be OK. I'll try it and check that it doesn't
affect my app's behavior.
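
For reference, a minimal sketch of the NullPool variant Mike suggested:

import sqlalchemy as sa
from sqlalchemy.pool import NullPool

# No pooling at all: each checkout opens a fresh DBAPI connection and
# checking it back in closes it immediately. dbUri as elsewhere in the thread.
sa_engine = sa.create_engine(dbUri, poolclass=NullPool)
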
About threadlocal... yep... my fault! :P
Thanks very much for the answer.

Fabio



[sqlalchemy] Re: Full connection pool close

2009-01-23 Thread Smoke



On 23 Jan, 19:21, Rick Morrison rickmorri...@gmail.com wrote:
 Good question, I don't know the answer.

 But even if it were a DSN option, it's likely to be an optional one. In the
 absence of an explicit setting, shouldn't we default to having the setting
 off, not on? It sounds as if the pyodbc default is 'on'.

Well... to me it seems the default is on. mkleehammer, the pyodbc
maintainer, told me that, anyway, even if it's on it should still close
the connection when you do con.close() and then del con ( to me this
behavior seems the most correct... it's not nice to have dead
connections held open until the process dies ). You can read
his answer here:
http://groups.google.com/group/pyodbc/browse_thread/thread/320eab14f6f8830f
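
One detail worth repeating here: as I understand it, the flag only takes
effect if it is set before the very first connect, because ODBC pooling is
enabled on the driver-manager environment at startup:

import pyodbc
pyodbc.pooling = False         # must run before any pyodbc.connect() call
c = pyodbc.connect('DSN=...')  # hypothetical connection string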





 I would argue for forcing it off anyway, even if set on: this potential
 double-layered pooling would make trying to do any cohesive state management
 strategy on the connections just about impossible, and would also
 effectively render any SQLA pool size settings rather meaningless.

 On Fri, Jan 23, 2009 at 12:51 PM, Michael Bayer
 mike...@zzzcomputing.com wrote:



  I believe this is a setting you establish when you create the DSN
  yourself, no ?

  On Jan 23, 2009, at 12:27 PM, Rick Morrison wrote:

    What makes pyodbc close the connection is setting pyodbc.pooling =
   False.

   Whoa, I didn't know pyodbc automatically used ODBC connection
   pooling. Seems like we should be turning that off if the user is
   using SQLA pooling.



[sqlalchemy] Re: Full connection pool close

2009-01-23 Thread Smoke



On 23 Jan, 20:46, Rick Morrison rickmorri...@gmail.com wrote:
   his answer here:
 http://groups.google.com/group/pyodbc/browse_thread/thread/320eab14f6...

 You say in that thread that you're already turning off the setting by
 issuing:

    import pyodbc
    pyodbc.pooling = False

 before you ever open an SQLAlchemy connection.

 Is that still the case?

Yes. I've tried with raw pyodbc, and the code:

import pyodbc
pyodbc.pooling = False
# This opens a real connection
c = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;DATABASE=DB_TEST;UID=sa;PWD=pass')
c.close()
del c

closes the connection.

So I've tried another session like this:

import pyodbc
pyodbc.pooling = False
import sqlalchemy as sa
from sqlalchemy.orm import sessionmaker

sa_engine = sa.create_engine(dbUri, echo=False, strategy='threadlocal')
sa_Session = sa.orm.scoped_session( sessionmaker(bind=sa_engine,
autoflush=False) )
metadata = sa.MetaData(sa_engine)
sa_session = sa_Session()

# Some queries in here ( and it's only when I fire the first query that
# SQL Server Profiler tells me that a connection is opened )

sa_session.close()
sa_Session.close_all()
sa_engine.dispose()
del sa_session
del sa_Session
del sa_engine


but it does not close the connection!

 That would imply that the connection is being held
 open by SQLAlchemy, not the ODBC connection pooling.

mmm.. Yes... that's what I thought at first too. But is it just a
coincidence that pyodbc with pooling = True shows the same behavior?

 So Mike's original
 advice about using the NullPool should close the connections when you're
 done with them -- did that work for you?

Yes. But its behavior is a little bit different. With NullPool every
database action opens and closes a connection, so basically the
connection is created every time I need it.
With pyodbc.pooling = False ( in raw pyodbc, and that's what I'd
expect with SA too... but maybe I'm wrong ) it opens a connection ( c
= pyodbc.connect('bla bla') ) and keeps it alive until I do c.close()


 Mike / Jason: Wasn't there also some sort of "verify connection" feature
 that was added in the 0.5 series that would issue a do-nothing query on a
 connection when it was checked out from a pool, just to make sure the
 connection was still working?
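
If the feature Rick means is the pool listener hook, here is a hedged sketch
of a verify-on-checkout listener -- 0.5-style imports; the SELECT 1 ping and
the listeners= argument are assumptions to verify against your version:

from sqlalchemy import exc
from sqlalchemy.interfaces import PoolListener

class PingConnection(PoolListener):
    def checkout(self, dbapi_con, con_record, con_proxy):
        try:
            dbapi_con.cursor().execute("SELECT 1")
        except Exception:
            # Tells the pool to discard this connection and retry with another.
            raise exc.DisconnectionError()

# sa_engine = sa.create_engine(dbUri, listeners=[PingConnection()])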



[sqlalchemy] Re: Full connection pool close

2009-01-23 Thread Smoke



On 23 Jan, 21:24, Rick Morrison rickmorri...@gmail.com wrote:
  OK, it should use whatever is set on the ODBC DSN then.   I'm not sure that
  pyodbc should have an opinion about it.

 Eh?

  is there a way to set pyodbc.pooling = None or some equivalent ?

 It's pyodbc.pooling = False, as appears many times upthread

 From the OP's description, it sounds like SA is somehow not forcefully
 closing the DBAPI connection (perhaps not disposing of the connection using
 del).

It seems that it's not using close() at all, because my connection
was closed before doing del con...



[sqlalchemy] Full connection pool close

2009-01-21 Thread Smoke

Hi,

I'm not a SQLAlchemy expert ( just an average user... ). I have an
application that's causing me some problems... It's a monitoring
application that connects to a MS Sql Server, so it's always on.
Sometimes it happens that I randomly get a DBAPIError with pyodbc. The
error is something like "[Microsoft][ODBC Driver Manager] Driver's
SQLAllocHandle on SQL_HANDLE_DBC failed". After the first time I
have this error, every other DB operation generates this error.
So.. what I would like to do is completely close ( kill ) the active
connection pool and recreate it. My code is something like this:

sa_engine=sa.create_engine(dbUri, echo=False, strategy='threadlocal')
sa_Session = sa.orm.scoped_session( sessionmaker(bind=sa_engine,
autoflush=False) )
metadata = sa.MetaData(sa_engine)
sa_session = sa_Session()

and then:

sa_session.close()
sa_Session.close_all()
sa_engine.dispose()
del sa_session
del sa_Session
del sa_engine

But after executing this Sql Server Profiler tells me that the
connection is still opened. The only way is to kill the application.
My sqlalchemy.__version__ is 0.4.8
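
A rough sketch of the dispose-and-recreate cycle described above, with the
same 0.4-era calls -- whether dispose() really closes everything is exactly
what this thread is about:

import sqlalchemy as sa
from sqlalchemy.orm import sessionmaker

def rebuild(dbUri, old_engine=None, old_Session=None):
    # Tear down whatever is left of the old pool, then start fresh.
    if old_Session is not None:
        old_Session.close_all()
    if old_engine is not None:
        old_engine.dispose()
    engine = sa.create_engine(dbUri, echo=False)
    Session = sa.orm.scoped_session(sessionmaker(bind=engine, autoflush=False))
    return engine, Session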

cheers

Fabio





[sqlalchemy] Re: save_or_update and composite Primary Keys... MSSQL / pyodbc issue?

2007-12-19 Thread Smoke

On 19 Dec, 01:37, Rick Morrison [EMAIL PROTECTED] wrote:
 Same here on pymssql.

 I tried it with 'start' as the only PK, and with both 'identifier' and
 'start' as PK. Both work fine.

 Are you sure your in-database tabledef matches your declared schema?

 I've attached a script that works here. This one has both 'identifier' and
 'start' set as PK.

   ***---WARNING ---***:
 I've added a table.drop() to the script to simplify testing and make
 sure the schemas match

I understand it could seem impossible, Rick, but if I run your script
it doesn't update the row!!! ( I swear!! ). I'm really confused about
what's going on... maybe pyodbc?

Here's the log:

2007-12-19 10:18:29,421 INFO sqlalchemy.engine.base.Engine.0x..d0 DROP TABLE jobs
2007-12-19 10:18:29,421 INFO sqlalchemy.engine.base.Engine.0x..d0 {}
2007-12-19 10:18:29,421 INFO sqlalchemy.engine.base.Engine.0x..d0 COMMIT
2007-12-19 10:18:29,421 INFO sqlalchemy.engine.base.Engine.0x..d0 CREATE TABLE jobs (
identifier NUMERIC(18, 2) NOT NULL,
section VARCHAR(20),
start DATETIME NOT NULL,
stop DATETIME,
station VARCHAR(20),
PRIMARY KEY (identifier, start)
)

2007-12-19 10:18:29,421 INFO sqlalchemy.engine.base.Engine.0x..d0 {}
2007-12-19 10:18:29,437 INFO sqlalchemy.engine.base.Engine.0x..d0 COMMIT
2007-12-19 10:18:29,437 INFO sqlalchemy.engine.base.Engine.0x..d0 BEGIN
2007-12-19 10:18:29,437 INFO sqlalchemy.engine.base.Engine.0x..d0 SET nocount ON
2007-12-19 10:18:29,437 INFO sqlalchemy.engine.base.Engine.0x..d0 {}
2007-12-19 10:18:29,437 INFO sqlalchemy.engine.base.Engine.0x..d0 INSERT INTO jobs (identifier, section, start, stop, station) VALUES (?, ?, ?, ?, ?)
2007-12-19 10:18:29,437 INFO sqlalchemy.engine.base.Engine.0x..d0 ['22', None, datetime.datetime(2007, 12, 19, 10, 18, 29, 437000), None, 'TCHUKI']
2007-12-19 10:18:29,437 INFO sqlalchemy.engine.base.Engine.0x..d0 COMMIT
2007-12-19 10:18:30,437 INFO sqlalchemy.engine.base.Engine.0x..d0 BEGIN
2007-12-19 10:18:30,437 INFO sqlalchemy.engine.base.Engine.0x..d0 SET nocount ON
2007-12-19 10:18:30,437 INFO sqlalchemy.engine.base.Engine.0x..d0 {}
2007-12-19 10:18:30,437 INFO sqlalchemy.engine.base.Engine.0x..d0 SELECT jobs.identifier AS jobs_identifier, jobs.section AS jobs_section, jobs.start AS jobs_start, jobs.stop AS jobs_stop, jobs.station AS jobs_station
FROM jobs ORDER BY jobs.identifier
2007-12-19 10:18:30,437 INFO sqlalchemy.engine.base.Engine.0x..d0 []
2007-12-19 10:18:30,453 INFO sqlalchemy.engine.base.Engine.0x..d0 SET nocount ON
2007-12-19 10:18:30,453 INFO sqlalchemy.engine.base.Engine.0x..d0 {}
2007-12-19 10:18:30,453 INFO sqlalchemy.engine.base.Engine.0x..d0 UPDATE jobs SET stop=? WHERE jobs.identifier = ? AND jobs.start = ?
2007-12-19 10:18:30,453 INFO sqlalchemy.engine.base.Engine.0x..d0 [datetime.datetime(2007, 12, 19, 10, 18, 30, 453000), '22.00', datetime.datetime(2007, 12, 19, 10, 18, 29)]
2007-12-19 10:18:30,467 INFO sqlalchemy.engine.base.Engine.0x..d0 COMMIT





[sqlalchemy] Re: save_or_update and composite Primary Keys... MSSQL / pyodbc issue?

2007-12-18 Thread Smoke

Sorry, I'm a bit late ( work deadlines are eating up my time! :) ).
I've tried some different configurations and schema definitions... and
I've noticed that it never updates a row if I set the datetime field
as a PK ( never! even if I set it as the only PK ). If I set
composite PKs excluding any datetime column, everything works fine ( it
also works if I set a datetime as PK on the database side and
exclude it from the schema on the sqlalchemy side ). Sorry about the
mess with the PKs between the earlier examples, but I was only trying to
show that changing the schema makes everything work fine...

Here's a little piece of code, just as an example to start playing
with... ( it works for me.., but if I include start as a PK it crashes )

import sqlalchemy as sa
import datetime, time
from sqlalchemy.orm import sessionmaker

sa_engine = sa.create_engine("mssql://sa:[EMAIL PROTECTED]/siaDB_BR",
                             echo=True)
metadata = sa.MetaData(sa_engine)
Session = sessionmaker(bind=sa_engine, autoflush=True,
                       transactional=True)
sa_session = Session()


jobs = sa.Table('jobs', metadata,
                sa.Column('identifier', sa.Numeric(18), primary_key=True),
                sa.Column('section', sa.VARCHAR(20)),
                sa.Column("start", sa.DateTime),
                sa.Column("stop", sa.DateTime),
                sa.Column("station", sa.VARCHAR(20), primary_key=True),
                autoload=False)  # ok

class Job(object):
    def __init__(self, identifier, start, station="TCHUKI"):
        self.identifier, self.start, self.station = identifier, start, station

sa.orm.mapper(Job, jobs)

j = Job(22, datetime.datetime.now())
sa_session.save(j)
sa_session.commit()
sa_session.clear()
time.sleep(1)
j1 = sa_session.query(Job).all()[0]
j1.stop = datetime.datetime.now()
sa_session.save_or_update(j1)
sa_session.commit()







On 12 Dec, 17:54, Rick Morrison [EMAIL PROTECTED] wrote:
 Hey Fabio, would you please post a full non-working copy with the new schema
 and all the PKs that you want set up? There are a few too many variants in
 this thread to see what's going on now. Your earlier versions didn't include
 'station' as a PK, but did include 'start', while this one's the opposite.



[sqlalchemy] Re: save_or_update and composite Primary Keys... MSSQL / pyodbc issue?

2007-12-12 Thread Smoke

I'm not at my PC right now, so I can send you the non-working copy only
tomorrow.
I've tried several schema changes to check whether the problem
always occurs or whether there are cases where it works, not necessarily
because I need all those schemas. In the former table schema, as I said, I
included only identifier and station as PKs, but the table in the
database is set with PKs on identifier, START and station... and
save_or_update works fine...
I'll post some code tomorrow...

On 12 Dec, 17:54, Rick Morrison [EMAIL PROTECTED] wrote:
 Hey Fabio, would you please post a full non-working copy with the new schema
 and all the PKs that you want set up? There are a few too many variants in
 this thread to see what's going on now. Your earlier versions didn't include
 'station' as a PK, but did include 'start', while this one's the opposite.



[sqlalchemy] Re: save_or_update and composite Primary Keys... MSSQL / pyodbc issue?

2007-12-10 Thread Smoke



On 10 Dec, 03:11, Michael Bayer [EMAIL PROTECTED] wrote:
 I can't reproduce your problem, although I don't have access to MSSQL
 here and there may be some issue on that end.  Attached is your script
 using an in-memory sqlite database, with the update inside of a while
 loop, and it updates regularly.    A few things to try on the MSSQL
 side, if the issue is due to some typing issue: try not using
 autoload=True, try using generic types instead of the MSSQL-specific
 ones, etc., in an effort to narrow down what might be the problem.

I've redefined the table using only generic types:

jobs = sa.Table('jobs', metadata,
                sa.Column('identifier', sa.VARCHAR(20), primary_key=True),
                sa.Column('section', sa.VARCHAR(20)),
                sa.Column("start", sa.DATETIME, primary_key=True),
                sa.Column("stop", sa.DATETIME),
                sa.Column("station", sa.VARCHAR(20)),
                autoload=False)

and autoload=False also made no difference. I'll try changing
something else...


 also I've added MSSQL/pyodbc to the subject line here in case any of
 the MSSQL crew wants to try out your script with pyodbc.

Thanks.



[sqlalchemy] save_or_update and composite Primary Keys...

2007-12-09 Thread Smoke

Hi,

These days I'm playing with sqlalchemy to see if it can fit my
needs... I'm having some trouble with this ( maybe it's a really dumb
question.. or maybe a non-supported feature.. :) ):
I have a database (mssql) with some tables with composite primary
keys... something like this:

t_jobs = sa.Table('jobs', metadata,
                  sa.Column('identifier', sa.VARCHAR(20)),  #, primary_key=True),
                  sa.Column('job_batch', sa.databases.mssql.MSUniqueIdentifier),
                  sa.Column("start", _sql.MSDateTime_pyodbc),  #, primary_key=True),
                  sa.Column("stop", _sql.MSDateTime_pyodbc),
                  sa.Column("station", sa.VARCHAR(20)),
                  sa.PrimaryKeyConstraint('identifier', 'inizio'),
                  autoload=True)

and it's mapped to a class... like this:
class Job(object):
...

sa.orm.mapper(Job, t_jobs)

When I create and save a j = Job(identifier, start), I have no
problems and it saves the new record in the table, but when I want to
update ( update or save_or_update ) the record with the stop time, it
just doesn't update the record... It does not throw any
exception. I've also tried changing the table definition, putting
primary_key on both column definitions instead of using
PrimaryKeyConstraint ( as you can see from the comments.. ), but the
result is the same...
Am I missing something? Or maybe tables with composite primary keys are not
supported for updating through the session?

Thanks,

Fabio



[sqlalchemy] Re: save_or_update and composite Primary Keys...

2007-12-09 Thread Smoke

On 9 Dec, 21:37, Michael Bayer [EMAIL PROTECTED] wrote:
 they're entirely supported.  Try to provide a fully working example
 illustrating the problem you're having.



Here's a small example just to simulate the problem. The last part of
this code is there just to simulate it... normally I would
just keep using j and update it... and that updates the record in
the db. But if I get an instance of the Job class from a query on the
db and try to update ( or save_or_update ) it, the record is not updated
in the db.

Here's the sample code:


import sqlalchemy as sa
import datetime, time
from sqlalchemy.orm import sessionmaker

sa_engine = sa.create_engine("mssql://user:[EMAIL PROTECTED]/myDB",
                             echo=True)
metadata = sa.MetaData(sa_engine)
Session = sessionmaker(bind=sa_engine, autoflush=True,
                       transactional=True)
sa_session = Session()


jobs = sa.Table('jobs', metadata,
                sa.Column('identifier', sa.databases.mssql.MSUniqueIdentifier,
                          primary_key=True),
                sa.Column('section', sa.databases.mssql.MSUniqueIdentifier),
                sa.Column("start", sa.databases.mssql.MSDateTime_pyodbc,
                          primary_key=True),
                sa.Column("stop", sa.databases.mssql.MSDateTime_pyodbc),
                sa.Column("station", sa.VARCHAR(20)),
                autoload=True)

class Job(object):
    def __init__(self, identifier, start):
        self.identifier, self.start = identifier, start

sa.orm.mapper(Job, jobs)

j = Job("TEST1", datetime.datetime.now())
sa_session.save(j)
sa_session.commit()
# The following part is here just to simulate my problem... if I keep
# using j instead of getting j1 from a query, the record is updated fine
sa_session.clear()
time.sleep(1)
j1 = sa_session.query(Job).all()[0]
j1.stop = datetime.datetime.now()
sa_session.save_or_update(j1)
sa_session.commit()



[sqlalchemy] Re: db autogenerated pks?

2007-09-14 Thread Smoke



On 12 Sep, 19:31, Rick Morrison [EMAIL PROTECTED] wrote:
 SQL Server provides no facilities for retrieving a GUID key after an insert
 -- it's not a true autoincrementing key. The MSSQL driver for SA uses either
 @@IDENTITY or SCOPE_IDENTITY() to retrieve the most-recently inserted
 autoincrement value, but there is no such facility for getting GUID keys.

 SA provides a mechanism called passive default to handle these kinds of
 things. What it does under the covers, or what you can do explicitly without
 it is:

  a) first call newid() to get the new GUID
  b) then do the insert using the GUID value as a normal attribute

 My personal opinion is that GUID keys are over-utilized, and there are
 usually better alternatives that will perform better overall, and will not
 make you swim upstream with SA.

Ok... In the meantime I've tried to play around a little and found that
if I just change the PK from the GUID to another column (or columns),
everything works great. Probably this is the right occasion for me to
clean up and refactor this application's database design... :)
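
For the archives, the explicit a/b sequence Rick describes might look like
this -- table and column names are taken from my earlier snippets, not a
tested recipe:

guid = sa_engine.execute("SELECT newid()").scalar()    # a) fetch the GUID first
t.insert().execute(uId=guid, nome="MARIO", nodo="FI")  # b) insert it as a plain value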

 Here's a link to an article about an
 alternate scheme to get rid of GUID keys that talks about performance
 implications; I'm sure you can find more if you look:

 http://www.sql-server-performance.com/articles/per/guid_performance_p...

Thanks much!  :)

FP


 Rick

 On 9/12/07, Smoke [EMAIL PROTECTED] wrote:



  Still have problems...

  if I change:

  t = sa.Table("Machines", metadata, autoload=True)

  to:

  t = sa.Table("Rpm_MacchineConfig", metadata,
   sa.Column('uId', sa.databases.mssql.MSUniqueIdentifier,
  primary_key=True),
   autoload=False)

  I have:

  sqlalchemy.exceptions.DBAPIError: (ProgrammingError) ('42000', [42000]
  [Microsoft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near
  ')'. (102)) u'INSERT INTO [Rpm_MacchineConfig] () VALUES ()' []

  and if I put autoload=True, like this:

  t = sa.Table("Rpm_MacchineConfig", metadata,
   sa.Column('uId', sa.databases.mssql.MSUniqueIdentifier,
  primary_key=True),
   autoload=True)

  I have this:

  sqlalchemy.exceptions.DBAPIError: (IntegrityError) ('23000', [23000]
  [Microsoft][ODBC SQL Server Driver][SQL Server]Impossible to insert NULL
  value into column 'uId' on table 'dbtests.dbo.Rpm_MachineConfig'. The
  column does not support NULL values (515); [01000] [Microsoft][ODBC SQL
  Server Driver][SQL Server]Instruction interrupted. (3621)) u'INSERT INTO
  [Rpm_MacchineConfig] (nome, nodo, descrizione) VALUES (?, ?, ?)'
  ['MARIO', 'FI', None]

  Any hints before I start changing my table design? I'm a newbie with
  sqlalchemy, so I'm probably missing something...

  thanks

  FP

  On 12 Set, 02:18, KyleJ [EMAIL PROTECTED] wrote:
   You probably need to override the autoloaded primary key column:
 http://www.sqlalchemy.org/docs/04/metadata.html#metadata_tables_refle...

   Specify the type with MSUniqueIdentifier from
   sqlalchemy.databases.mssql

   On Sep 11, 9:01 am, Smoke [EMAIL PROTECTED] wrote:

Hi All,

I'm new to sqlalchemy and was checking if I can introduce it into some
of my projects... I have some database ( sql server ) tables with
MSUniqueIdentifier columns set as PK and with newid() as default
values...
So, if I try to map this table into a class, save a new record and
flush, then I have errors because it says that the column doesn't support
NULL values..
Is there any option I'm missing that can make me exclude this PK from
the INSERT query, or somehow tell sqlalchemy that this PK column value
is autogenerated by the database?

thanks,
FP

P.S.:

My code is something very simple... like this:

t = sa.Table("Machines", metadata, autoload=True)
Session = sessionmaker(bind=db, autoflush=False, transactional=False)
class Machine(object):
pass

session = Session()
sa.orm.mapper(Machine, t)#, exclude_properties=['uId'])
m = Machine()

nm = Machine()
nm.name, nm.node = "Mac1", "P"
session.save(nm)
session.flush()





[sqlalchemy] Re: db autogenerated pks?

2007-09-12 Thread Smoke

Still have problems...

if I change:

t = sa.Table("Machines", metadata, autoload=True)

to:

t = sa.Table("Rpm_MacchineConfig", metadata,
 sa.Column('uId', sa.databases.mssql.MSUniqueIdentifier,
primary_key=True),
 autoload=False)

I have:

sqlalchemy.exceptions.DBAPIError: (ProgrammingError) ('42000', [42000]
[Microsoft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near
')'. (102)) u'INSERT INTO [Rpm_MacchineConfig] () VALUES ()' []

and if I put autoload=True, like this:

t = sa.Table("Rpm_MacchineConfig", metadata,
 sa.Column('uId', sa.databases.mssql.MSUniqueIdentifier,
primary_key=True),
 autoload=True)

I have this:

sqlalchemy.exceptions.DBAPIError: (IntegrityError) ('23000', [23000]
[Microsoft][ODBC SQL Server Driver][SQL Server]Impossible to insert NULL
value into column 'uId' on table 'dbtests.dbo.Rpm_MachineConfig'. The
column does not support NULL values (515); [01000] [Microsoft][ODBC SQL
Server Driver][SQL Server]Instruction interrupted. (3621)) u'INSERT INTO
[Rpm_MacchineConfig] (nome, nodo, descrizione) VALUES (?, ?, ?)'
['MARIO', 'FI', None]

Any hints before I start changing my table design? I'm a newbie with
sqlalchemy, so I'm probably missing something...

thanks

FP

On 12 Sep, 02:18, KyleJ [EMAIL PROTECTED] wrote:
 You probably need to override the autoloaded primary key column:
 http://www.sqlalchemy.org/docs/04/metadata.html#metadata_tables_refle...

 Specify the type with MSUniqueIdentifier from
 sqlalchemy.databases.mssql

 On Sep 11, 9:01 am, Smoke [EMAIL PROTECTED] wrote:

  Hi All,

  I'm new to sqlalchemy and was checking if I can introduce it into some
  of my projects... I have some database ( sql server ) tables with
  MSUniqueIdentifier columns set as PK and with newid() as default
  values...
  So, if I try to map this table into a class, save a new record and
  flush, then I have errors because it says that the column doesn't support
  NULL values..
  Is there any option I'm missing that can make me exclude this PK from
  the INSERT query, or somehow tell sqlalchemy that this PK column value
  is autogenerated by the database?

  thanks,
  FP

  P.S.:

  My code is something very simple... like this:

  t = sa.Table("Machines", metadata, autoload=True)
  Session = sessionmaker(bind=db, autoflush=False, transactional=False)
  class Machine(object):
  pass

  session = Session()
  sa.orm.mapper(Machine, t)#, exclude_properties=['uId'])
  m = Machine()

  nm = Machine()
  nm.name, nm.node = "Mac1", "P"
  session.save(nm)
  session.flush()

