[sqlalchemy] Re: SQL execution order in the unit of work

2007-11-12 Thread che

Hi,

On 11 Nov, 04:02, Michael Bayer [EMAIL PROTECTED] wrote:
 On Nov 10, 2007, at 4:54 PM, Manlio Perillo wrote:

  Isn't it possible to just use the order used by the programmer?
  If I call
  save(A)
  save(B)

 the order of save() is significant for instances of one class:

 save(A1)
 save(A2)

 will insert A1 and A2 in that order.

 but beyond that, the order is determined by the topological sort of
 all mappers. if you save objects of type A, B, C and D, B is
 dependent on A, D is dependent on C, and by dependent I mean they
 have relation()s set up; it might say for the ordering:  A B C D.
 But you saved the objects in this order:  save(C1) save(D1) save(B1)
 save(A1) save(C2).   now the order of your save()'s is in conflict
 with what the topological sort requires - it *cannot* save C2 where
 it's being saved if D1 is dependent on it - if it put D1 at the end,
 now D1 is being saved after A1, etc. and your ordering is out the
 window.   Also, by default the topological sort is only sorting at the
 level of tables, not rows - when row-based dependencies are detected,
 complexity goes up and the efficiency of the flush() goes down.  so
 no, it's not at all workable for save()'s to determine the order across
 classes - in any realistic scenario they will conflict with the
 topological sort.  you're basically suggesting that SA would do half
 of a topological sort and you'd do the other half manually, but it
 doesn't work that way.

I had a similar need to order things last week. I thought that maybe in
the future it will be possible to define some artificial dependency
(similar to a relation) that has no counterpart in the database, in
order to meet such requirements - this would add just one or two
dependencies to the topological sort and would not slow down the
commit process much. What is your opinion, Michael?

Such a requirement arose for me when trying to fill a cache table (CT)
whenever some other table (OT) is changed. Before I made the relation OT-CT,
it sometimes tried to update CT before the change in OT was done.
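
For illustration, a minimal self-contained sketch of the relation()-based
workaround described above (the table, column and class names are invented,
not the real schema). The relation() gives the unit of work the dependency
edge it needs, so OT rows are flushed before the CT rows that refer to them,
regardless of the save() order:

from sqlalchemy import MetaData, Table, Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import mapper, relation, sessionmaker

metadata = MetaData()

ot_table = Table('other_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('value', String(40)))

ct_table = Table('cache_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('ot_id', Integer, ForeignKey('other_table.id')),
    Column('cached_value', String(40)))

class OT(object): pass
class CT(object): pass

# the relation() is what adds the dependency to the topological sort:
# OT rows now always go in before the CT rows that reference them
mapper(OT, ot_table, properties={'cache_entries': relation(CT)})
mapper(CT, ct_table)

engine = create_engine('sqlite://')
metadata.create_all(engine)
Session = sessionmaker(bind=engine, autoflush=False, transactional=False)
session = Session()

ct = CT(); ct.cached_value = 'cached'
ot = OT(); ot.value = 'original'
ot.cache_entries = [ct]

# save order is CT first, OT second, but the flush still inserts OT first
session.save(ct)
session.save(ot)
session.flush()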

regards,
stefan





[sqlalchemy] Firebird engine - support for embedded version

2007-11-12 Thread Werner F. Bruhin

I would like to change SA to support the embedded version of Firebird SQL.

I am close but the process does not terminate correctly (i.e. I have to 
kill it), so I am obviously missing something.

I made the following change to firebird.py (SA 0.4.0 final):

    def create_connect_args(self, url):
        opts = url.translate_connect_args(username='user')
        if opts.get('port'):
            opts['host'] = "%s/%s" % (opts['host'], opts['port'])
            del opts['port']
        opts.update(url.query)
        print opts
        print opts['host']
        if opts['host'] == 'embedded':
            del opts['host']
        print opts

        type_conv = opts.pop('type_conv', self.type_conv)


Then running the following:
import sqlalchemy as sa
import sqlalchemy.orm as sao

import model as db
import utils

database = u'C:\\Dev\\twcb\\Data\\twcb3.fdb'
host = 'embedded'
fileurl = str(database.replace('\\', '/'))
url = 'firebird://USERNAME:pw@%s/%s' % (host, fileurl)
dburl = sa.engine.url.make_url(url)

engine = sa.create_engine(dburl, encoding='utf8', echo=False)
Session = sao.sessionmaker(autoflush=True, transactional=True)
Session.configure(bind=engine)
session = Session()

dbItemConsumption = session.query(db.Consumption).load(63)
print dbItemConsumption.consumptionid
print dbItemConsumption.consumedvalue
print dbItemConsumption.updated
session.close()

Gives me the correct output, i.e.:

{'host': 'embedded', 'password': 'pw', 'user': 'USERNAME', 'database': 
'C:/Dev/twcb/Data/twcb3.fdb'}
embedded
{'password': 'pw', 'user': 'USERNAME', 'database': 
'C:/Dev/twcb/Data/twcb3.fdb'}
63
7.5
2007-11-09

However the process hangs and I have to manually kill it.


Can anyone help me pinpoint what else needs to be changed?

If this works correctly would a patch be accepted?

Best regards
Werner




[sqlalchemy] threadlocal transactions, engine, and the Session

2007-11-12 Thread Huy Do

Hi,

I've just had a heck of a time getting transactions to behave correctly 
after upgrading to 0.4dev from 0.3.6, and I just wanted to make sure 
that I am doing things correctly.

I've found that to totally control transactions myself, and get ORM 
sessions (i.e. Session.flush()) to interact with SQL transactions (i.e. 
table.insert().execute()), I had to do the following.

engine = create_engine(appconfig.dburi, strategy='threadlocal', echo=False)
Session = scoped_session(sessionmaker(bind=engine, autoflush=False, 
transactional=False))
metadata = MetaData(engine)

then.

engine.begin()
try:
    # Session.flush()
    # mytable.insert().execute() stuff
    engine.commit()
except:
    engine.rollback()

Does this seem correct ?

Previously I used autoflush=True, transactional=True together with 
Session.begin(), Session.commit(), Session.rollback(), and I ran into all 
sorts of issues, e.g. a transaction was started but never committed, etc.

Thanks

Huy






[sqlalchemy] Re: Firebird engine - support for embedded version

2007-11-12 Thread Werner F. Bruhin

Werner F. Bruhin wrote:
 I would like to change SA to support the embedded version of Firebird SQL.

 I am close but the process does not terminate correctly (i.e. I have to 
 kill it), so I am obviously missing something.
   
Just noticed that I can also use this:

dburl = sa.engine.url.URL('firebird', username='USERNAME', 
password='pw', database=fileurl)

Which means firebird.py does not need to be patched; however, I still see 
the same problem that the process hangs.

Best regards
Werner

P.S.
To use the embedded engine one has to install the FB files into the 
kinterbasdb folder. I can provide more details, or anyone wanting to try 
it can follow the FB embedded install instructions provided in the FB 
release guides.




[sqlalchemy] Re: Firebird engine - support for embedded version

2007-11-12 Thread Werner F. Bruhin

Werner F. Bruhin wrote:
 Werner F. Bruhin wrote:
   
 I would like to change SA to support the embedded version of Firebird SQL.

 I am close but the process does not terminate correctly (i.e. I have to 
 kill it), so I am obviously missing something.
   
 
 Just noticed that I can also use this:

 dburl = sa.engine.url.URL('firebird', username='USERNAME', 
 password='pw', database=fileurl)

 Which means firebird.py does not need to be patched, however I still see 
 the same problem that the process hangs.

   
As the version of FB is 2.1 beta, I wondered whether there might be an issue 
with it, so I just did a test with kinterbasdb directly (no SA); that 
works correctly and the program terminates/closes correctly too.
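
For reference, a minimal sketch of such a direct kinterbasdb check (the
table and column names are assumptions; the path and credentials follow the
earlier example):

import kinterbasdb

# plain DBAPI connection; with the embedded client library installed in the
# kinterbasdb folder, a local file path is enough - no server host needed
con = kinterbasdb.connect(dsn='C:/Dev/twcb/Data/twcb3.fdb',
                          user='USERNAME', password='pw')
cur = con.cursor()
cur.execute('select consumptionid, consumedvalue from consumption')
print cur.fetchone()
cur.close()
con.close()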

Best regards
Werner




[sqlalchemy] Re: Firebird engine - support for embedded version

2007-11-12 Thread Werner F. Bruhin

Hi Florent,

Florent Aide wrote:
 On Nov 12, 2007 12:57 PM, Werner F. Bruhin [EMAIL PROTECTED] wrote:
   
 Which means firebird.py does not need to be patched, however I still see
 the same problem that the process hangs.


   
 As the version of FB is 2.1beta I wondered if maybe there is an issue
 with it, so I just did a test with kinterbasdb directly (no SA), but
 that works correctly and the program terminates/closes correctly too.
 

 I use FB + SA 0.4 on a daily basis without problems.
 For this I use kinterbasdb-3.2 in embedded mode on Windows with Python 2.5.
   
My setup is just about the same.

- Windows Vista
- kinterbasdb 3.2 with a patch from Pavel Cisar to __init__.py to make 
it work with FB 2.1beta
- Python 2.5 (r25:51908, Sep 19 2006, 09:52:17)
 If you need to check anything specific you can ask me I'll look into
 my setup. For me it just works.
   
If I connect to the db with just kinterbasdb I see no problem.

If I use SA 0.4 final, I connect without a problem, but when the script 
finishes it hangs.

How do you build the connection string when you want to connect to an 
embedded engine?  Do you have a sample?

Werner




[sqlalchemy] Re: Save the default function into the database

2007-11-12 Thread luyang.han

Thank you for the advice.

I just came across an interesting database application, kexi
(www.kexi-project.org), where one can store the designs, queries and
scripts in the database itself. And the database is just SQLite, which
does not support complicated user-defined functions. So basically all
the information needed to reproduce the full functionality of the
database is in one file (as it is SQLite based), which is quite
portable. Does anyone know how they achieved this?

Rick Morrison wrote:
 I guess you know that storing the actual bytecodes (or the source) of a
 Python function in the database itself is not going to buy you much:

 Since the function bytecodes or source
 would be in Python, only a Python interpreter could run it to produce
 the function result, and if you know you're going to be accessing
 the database via a Python interpreter (most likely via SA), then you may as
 well just bundle the function in a module, import it and use it there like
 the rest of us do.

 If you're searching for some more portable database logic that can be run
 from both a Python interpreter and other types of tools, you want database
 user-defined functions (UDFs) and stored procedures. SA supports several
 databases that sport one or both of these tools:

 Oracle       both
 SQL Server   both
 MySQL        both
 PostgreSQL   functions only, but they can modify data like stored procedures

 All of these will run the UDF or the stored procedure in the process of the
 database server, not in the context of your Python program. That means you
 won't be able to access variables in your Python program from the UDF or
 procedure, you'll need to supply them as parameters.

 If you really have to have the function be a Python function, the PLPython
 language for Postgresql allows you to run a real Python interpreter in the
 context of the server itself. There was a guy named James Pye who was
 working on a full-blown Python programming environment for Postgresql in
 which Python would be a full stored procedure language and could share
 variables with SQL and lots of other interesting stuff. Might be worth
 checking into -- check PGforge or Postgres contrib.

 Good luck with this,

 Rick
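
As an illustration of the UDF route described above, a minimal sketch of
creating and calling a server-side function through SA (the function name
add_tax and the connection URL are invented; the CREATE FUNCTION syntax is
PostgreSQL's):

from sqlalchemy import create_engine, select, func

engine = create_engine('postgres://user:pw@localhost/test')

# the function lives in the database server, not in the Python process
engine.execute("""
    CREATE OR REPLACE FUNCTION add_tax(numeric) RETURNS numeric AS
    'SELECT $1 * 1.07;' LANGUAGE SQL
""")

# call it like any other SQL function; values are passed in as parameters
print engine.execute(select([func.add_tax(100)])).scalar()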





[sqlalchemy] Re: threadlocal transactions, engine, and the Session

2007-11-12 Thread Michael Bayer


On Nov 12, 2007, at 5:37 AM, Huy Do wrote:


 Hi,

 I've just had  a heck of a time getting transactions to behave  
 correctly
 after upgrading to 0.4dev from 0.3.6, and I just wanted to make sure
 that I am doing things correctly.

 I've found that to totally control transactions myself, and get ORM
 sessions (i.e Session.flush()) to interact with SQL transactions i.e
 table.insert().execute(), I had to do the following.

 engine = create_engine(appconfig.dburi, strategy='threadlocal',  
 echo=False)
 Session = scoped_session(sessionmaker(bind=engine, autoflush=False,
 transactional=False))
 metadata = MetaData(engine)

 then.

  engine.begin()
  try:
      # Session.flush()
      # mytable.insert().execute() stuff
      engine.commit()
  except:
      engine.rollback()

 Does this seem correct ?

 Previously i used autoflush=True, transactional=True together with
 Session.begin(), Session.commit(), Session.rollback() and I ran into  
 all
 sorts of issues e.g transaction was started but never committed etc.


The reason you have to use threadlocal in that case is because you  
are relying upon implicit execution of things like mytable.insert(),  
and you also want the transaction to propagate from engine to session  
without explicitly passing the connection to it.  So what you're doing  
above is OK.   It might actually be the easiest way to do it and is  
the only way the implicit execution style can participate in the  
transaction.

the two other variants are:

1. use the Session to manage the transaction (below, we have  
transactional=False, which basically means we call begin ourselves):

Session.begin()

# execute with Session
Session.execute(mytable.insert(), stuff)

# get the current connection from Session, use that
conn = Session.connection()
conn.execute(mytable.insert(), stuff)

# commit
Session.commit()

2. use the engine to manage the transaction but don't use threadlocal:

conn = engine.connect()
trans = conn.begin()
Session(bind=conn)
try:
    # 
    conn.execute(mytable.insert(), stuff)
    trans.commit()
except:
    trans.rollback()
finally:
    Session.close()









[sqlalchemy] Re: Firebird engine - support for embedded version

2007-11-12 Thread Lele Gaifax

On Mon, 12 Nov 2007 13:22:25 +0100
Werner F. Bruhin [EMAIL PROTECTED] wrote:

 - kinterbasdb 3.2 with a patch from Pavel Cisar to __init__.py to
 make it work with FB 2.1beta

Do you have an URL for that patch? 

thank you,
ciao, lele.
-- 
nickname: Lele Gaifax| When I start living off what I thought yesterday,
real: Emanuele Gaifas| I will begin to fear those who copy me.
[EMAIL PROTECTED] | -- Fortunato Depero, 1929.




[sqlalchemy] Oracle date/datetime oddities

2007-11-12 Thread Michael Schlenker

Hi all,

I'm not sure if it's a bug or an intended feature, but the default behaviour
of sqlalchemy when reading Oracle DATE fields is annoying. cx_Oracle
rightfully returns datetime.datetime objects, but SQLAlchemy truncates these
to datetime.date objects.

Why is it done like this (in lib/sqlalchemy/databases/oracle.py:34-60)?
Wouldn't it be a better choice to default to OracleDateTime instead of OracleDate
for queries without bound metadata?

It's not a (major) problem when querying via a table object, where I can override
the column type with a sane version (OracleDateTime), but for queries directly
using conn.execute() it's ugly.

Basically this throws up:

import sqlalchemy as sa
import datetime
engine = sa.create_engine('oracle://scott:[EMAIL PROTECTED]')
conn = engine.connect()
conn.execute('create table dtest (a DATE)')
# insert a row with date and time
now = datetime.datetime(2007,12,11,13,11,00)
conn.execute('insert into dtest values (:dt)', {'dt':now})
# check its there
rows = conn.execute('select a from dtest where a=:dt',{'dt':now})
for r in rows:
    if rows[0]==now:
        print "Found"
    else:
        print "Not Found"

This prints 'Not Found' even though the row is there and is returned correctly
by cx_Oracle.

I would expect to get at least identity for this.

So is this a bug that I should report, or is it a 'feature' of some kind
that will not change even if I report it?

Michael

-- 
Michael Schlenker
Software Engineer

CONTACT Software GmbH   Tel.:   +49 (421) 20153-80
Wiener Straße 1-3   Fax:+49 (421) 20153-41
28359 Bremen
http://www.contact.de/  E-Mail: [EMAIL PROTECTED]

Sitz der Gesellschaft: Bremen | Geschäftsführer: Karl Heinz Zachries
Eingetragen im Handelsregister des Amtsgerichts Bremen unter HRB 13215





[sqlalchemy] Re: Oracle date/datetime oddities

2007-11-12 Thread Rick Morrison
The issue is that in essence, both answers are right. ANSI SQL specifies
different data types for DATE fields and DATETIME fields, where DATE fields
do not hold the time portion. Oracle, SQL Server and other database engines
have their own ideas about how best to handle dates / datetimes.

SA splits the difference sometimes. All the various dialects inherit from an
ANSI dialect which treats the two types as distinct. On the other hand, most
dialects take the philosophy of "just hand whatever value you got off to the
DBAPI and what happens, happens."

At least one of SA's purported benefits is that it helps to abstract the
various behaviors of its supported database engines and thereby make SA
code at least theoretically a bit easier to run on multiple database
engines. This is kind of in direct conflict with the laissez-faire "let the
DBAPI decide" philosophy, which is why you sometimes see some of this
arguably schizophrenic behavior. If we give you DBAPI-neutral types today,
tomorrow somebody on the list will be complaining that it doesn't work the
other way around.

For the short term, you should look into the OracleDateTime type. But in
general, we need to know more about what users are looking for. I'm guessing
we can put you in the DBAPI-neutral camp?


On 11/12/07, Michael Schlenker [EMAIL PROTECTED] wrote:


 Hi all,

 I'm not sure if its a bug or an intended feature, but the default
 behaviour
 of sqlalchemy when reading Oracle DATE fields is annoying. cx_Oracle
 rightfully returns datetime.datetime objects, but Sqlalchemy truncates
 this
 to datetime.date objects.

 Why is it done like this (in lib/sqlalchemy/databases/oracle.py:34-60)?
 Wouldn't it be a better choice to default to OracleDateTime instead of
 OracleDate
 for queries without bound metadata?

 Its not a (major) problem when querying via a table object, where i can
 override
 the column type with a sane version (OracleDateTime), but for queries
 directly
 using conn.execute() its ugly.

 Basically this throws up:

 import sqlalchemy as sa
 import datetime
 engine = sa.create_engine('oracle://scott:[EMAIL PROTECTED]')
 conn = engine.connect()
 conn.execute('create table dtest (a DATE)')
 # insert a row with date and time
 now = datetime.datetime(2007,12,11,13,11,00)
 conn.execute('insert into dtest values (:dt)', {'dt':now})
 # check its there
 rows = conn.execute('select a from dtest where a=:dt',{'dt':now})
 for r in rows:
     if rows[0]==now:
         print "Found"
     else:
         print "Not Found"

 This prints 'Not Found' even though the row is there and is returned
 correctly
 by cx_Oracle.

 I would expect to get at least identity for this.

 So is this a bug and should i add a report or is it a 'feature' of some
 kind and will
 not change even if i report a bug?

 Michael

 --
 Michael Schlenker
 Software Engineer

 CONTACT Software GmbH   Tel.:   +49 (421) 20153-80
 Wiener Straße 1-3   Fax:+49 (421) 20153-41
 28359 Bremen
 http://www.contact.de/  E-Mail: [EMAIL PROTECTED]

 Sitz der Gesellschaft: Bremen | Geschäftsführer: Karl Heinz Zachries
 Eingetragen im Handelsregister des Amtsgerichts Bremen unter HRB 13215


 





[sqlalchemy] Re: Oracle date/datetime oddities

2007-11-12 Thread Michael Bayer


On Nov 12, 2007, at 11:16 AM, Michael Schlenker wrote:


 Hi all,

 I'm not sure if its a bug or an intended feature, but the default  
 behaviour
 of sqlalchemy when reading Oracle DATE fields is annoying. cx_Oracle
 rightfully returns datetime.datetime objects, but Sqlalchemy  
 truncates this
 to datetime.date objects.

 Why is it done like this (in lib/sqlalchemy/databases/oracle.py: 
 34-60)?
 Wouldn't it be a better choice to default to OracleDateTime instead  
 of OracleDate
 for queries without bound metadata?

 Its not a (major) problem when querying via a table object, where i  
 can override
 the column type with a sane version (OracleDateTime), but for  
 queries directly
 using conn.execute() its ugly.

 Basically this throws up:

 import sqlalchemy as sa
 import datetime
 engine = sa.create_engine('oracle://scott:[EMAIL PROTECTED]')
 conn = engine.connect()
 conn.execute('create table dtest (a DATE)')
 # insert a row with date and time
 now = datetime.datetime(2007,12,11,13,11,00)
 conn.execute('insert into dtest values (:dt)', {'dt':now})
 # check its there
 rows = conn.execute('select a from dtest where a=:dt',{'dt':now})
 for r in rows:
     if rows[0]==now:
         print "Found"
     else:
         print "Not Found"

 This prints 'Not Found' even though the row is there and is returned  
 correctly
 by cx_Oracle.

 I would expect to get at least identity for this.

 So is this a bug and should i add a report or is it a 'feature' of  
 some kind and will
 not change even if i report a bug?


your above test means to say "if r[0]==now:", else you get a runtime  
error.  When I run it with that fix, the row matches and it prints  
"Found".

it's only when executing result-typed ClauseElement subclasses that any  
SQLAlchemy typing behavior, including the lines 34-60 of oracle.py, is  
applied, so your above test is not using any SA typing behavior at  
all, you're getting cx_Oracle's results directly.  In the case of  
Oracle, the Binary types break this rule right now but otherwise  
that's it.

Also, OracleDate and OracleDateTime don't do anything at all to bind  
parameters; the only processing that occurs is OracleDate converts  
incoming result datetimes to dates.  OracleDate and other types only  
get used for expressions that are typed ClauseElements (either  
table.select() or text() with a typemap parameter).

The only way that an OracleDate is getting used in *those* cases is if  
you specified it explicitly or the table was reflected using  
autoload=True, which currently matches DATE to OracleDate, so I'm  
guessing that is the default behavior you are referring to.  We  
might look into changing that to an OracleDateTime for autoload=True  
on the next release.  In the meantime, override your reflected DATE  
column with an OracleDateTime type.
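
For completeness, a short sketch of those two overrides, using the dtest
table from the example above (engine is assumed to be the Oracle engine
created earlier in the thread):

from sqlalchemy import MetaData, Table, Column, select, text
from sqlalchemy.databases.oracle import OracleDateTime

meta = MetaData(engine)

# reflect the table, but override the DATE column so result rows come back
# as full datetimes instead of truncated dates
dtest = Table('dtest', meta,
              Column('a', OracleDateTime),
              autoload=True)

print select([dtest.c.a]).execute().fetchall()

# for a raw SQL string, a typemap on text() attaches the same result typing
print engine.execute(text("select a from dtest",
                          typemap={'a': OracleDateTime})).fetchall()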







[sqlalchemy] Re: Oracle date/datetime oddities

2007-11-12 Thread Michael Schlenker

Michael Bayer schrieb:
 
 On Nov 12, 2007, at 11:16 AM, Michael Schlenker wrote:
 
 Hi all,

 your above test means to say "if r[0]==now:", else you get a runtime
 error.
Right, I should copy and paste instead of retyping...

 When I run it with that fix, the row matches and it prints
 "Found".
Sorry. Just retried with SQLAlchemy SVN HEAD and there it works;
with 0.4.0b6 it failed.

So it looks much better that way.

Michael









[sqlalchemy] 2 questions

2007-11-12 Thread sdobrev

hi
1st one: I am saving some object; the MapperExtension of the object 
fires additional atomic updates of other things elsewhere (aggregator). 
These things have to be expired/refreshed... if only I knew which they were.
For certain cases, the object knows exactly which these target 
things are. What is the best way (and time) to expire these instances, i.e. 
ensure that next time they are used they will be re-fetched?
 a) in the mapper extension - this would be before the flush?
 b) later, after the flush, marking them somehow?

And why do the atomic updates also have a commit after them? Or is this 
sqlite-specific?

2nd one: how to compare with a Decimal, i.e. tableA.c.column == Decimal('7')?
Ah, forget it, I found that: passing asdecimal=True to the column's Numeric() 
definition.

By the way, the precision/length arguments of Numeric seem misplaced??
Isn't a 10.2 format length 10, precision 2?
Or do length and precision mean something different here?
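
For reference, a tiny sketch of the asdecimal usage (the table and column
names are invented; precision/length are the keyword names as they currently
stand in 0.4):

from decimal import Decimal
from sqlalchemy import MetaData, Table, Column, Integer, Numeric

metadata = MetaData()
prices = Table('prices', metadata,
    Column('id', Integer, primary_key=True),
    # asdecimal=True makes result values come back as decimal.Decimal,
    # so comparisons against Decimal work naturally
    Column('amount', Numeric(precision=10, length=2, asdecimal=True)))

# comparing against a Decimal produces a normal column expression
criterion = prices.c.amount == Decimal('7')
print criterion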





[sqlalchemy] Re: 2 questions

2007-11-12 Thread Michael Bayer


On Nov 12, 2007, at 2:07 PM, [EMAIL PROTECTED] wrote:


 hi
 1st one:  i am saving some object; the mapperExtension of the object
 fires additional atomic updates of other things elsewhere  
 (aggregator).
 These things has to be expired/refreshed... if i only knew them.
 For certain cases, the object knows exactly which are these target
 things. How (when) is best to expire these instances, i.e. assure that
 nexttime they are used they will be re-fetched?
 a) in the mapperext - this would be before the flush?
 b) later, after flush, marking them somehow ?

the public way to mark an instance as expired is  
session.expire(instance).  If you wanted to do this inside the mapper  
extension, I think it's OK as long as you do the expire *after* the  
object has been inserted/updated (i.e. in after_insert() or  
after_update()).
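
For example, a rough sketch of doing the expire in after_update(); this
assumes an application-wide scoped_session named Session, and
aggregate_targets is a hypothetical attribute listing the instances the
atomic updates touched:

from sqlalchemy.orm import MapperExtension, EXT_CONTINUE, scoped_session, sessionmaker

Session = scoped_session(sessionmaker())   # the app's session registry (assumed)

class AggregateExpirer(MapperExtension):
    def after_update(self, mapper, connection, instance):
        # expire only after the row itself has been written; the expired
        # instances are re-fetched the next time they are accessed
        for target in instance.aggregate_targets:
            Session.expire(target)
        return EXT_CONTINUE

    after_insert = after_update

# attached when mapping the class, e.g.:
# mapper(OT, ot_table, extension=AggregateExpirer())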

We also have the capability to mark any group of attributes as  
expired; however, I haven't gotten around to building a nice public API  
for that, though I can show you the non-public way if you want to play  
with it.  This API is used by the mapper after it saves your instance  
to mark attributes which require a post-fetch.

What I'd like to do, and this has been frustrating me a bit, is to  
bring the "expire the whole instance", "expire a group of attributes",  
"reload a group of attributes", and "reload an instance" operations under one  
implementation umbrella - right now there's some repetition in there  
among Mapper, Query._get() and DeferredColumnLoader.  I've sort of  
wanted to address the whole thing at once.



 and, why atomic updates also have with commit after them? or is this
 sqlite-specific?

every CRUD operation requires a commit.  DBAPI is always inside of a  
transaction.



 2nd one: how to compare with a Decimal? i.e. tableA.c.column ==  
 Decimal('7')
 ah forget,  i found that, passing asdecimal =True to column's  
 Numeric()
 definition.

 btw, the precision/length arguments of Numeric seems misplaced??
 isnt format 10.2 meaning length 10 precision 2?
 or the meaning of length and precision is different here?

I think someone posted a ticket to change our terminology to  
precision/scale.  I've *no* idea how the word "length" got in there;  
it was probably me very early on, but I can't find any document anywhere  
that might have suggested to me that it's called "length".






[sqlalchemy] Re: threadlocal transactions, engine, and the Session

2007-11-12 Thread Huy Do

Michael Bayer wrote:
 On Nov 12, 2007, at 5:37 AM, Huy Do wrote:

   
 Hi,

 I've just had  a heck of a time getting transactions to behave  
 correctly
 after upgrading to 0.4dev from 0.3.6, and I just wanted to make sure
 that I am doing things correctly.

 I've found that to totally control transactions myself, and get ORM
 sessions (i.e Session.flush()) to interact with SQL transactions i.e
 table.insert().execute(), I had to do the following.

 engine = create_engine(appconfig.dburi, strategy='threadlocal',  
 echo=False)
 Session = scoped_session(sessionmaker(bind=engine, autoflush=False,
 transactional=False))
 metadata = MetaData(engine)

 then.

  engine.begin()
  try:
      # Session.flush()
      # mytable.insert().execute() stuff
      engine.commit()
  except:
      engine.rollback()

 Does this seem correct ?

 Previously i used autoflush=True, transactional=True together with
 Session.begin(), Session.commit(), Session.rollback() and I ran into  
 all
 sorts of issues e.g transaction was started but never committed etc.

 

 The reason you have to use threadlocal in that case is because you  
 are relying upon implicit execution of things like mytable.insert(),  
 and you also want the transaction to propigate from engine to session  
 without explicitly passing the connection to it.  so what youre doing  
 above is OK.   it might actually be the easiest way to do it and is  
 the only way the implicit execution style can participate in the  
 transaction.
   
Great. Thanks for the confirmation.
 2. use the engine to manage the transaction but dont use threadlocal:

 conn = engine.connect()
 trans = conn.begin()
 Session(bind=conn)
 try:
   # 
   conn.execute(mytable.insert(), stuff)
   trans.commit()
 except:
   trans.rollback()
 finally:
   Session.close()
   
There's no way I'm giving up threadlocal :-) I love it (at least in my 
web programs).

I have this transaction decorator which I wrap all my data access code 
in, and with SA's cool transaction support, I don't have to worry about 
transaction commit/rollback handling again.

def transaction(func):
    '''
    This is a decorator for wrapping methods in a db transaction
    '''
    def trans_func(*args, **kws):
        engine.begin()
        try:
            f = func(*args, **kws)
            engine.commit()
            return f
        except:
            engine.rollback()
            raise
    return trans_func
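
A usage sketch of how a decorated data-access function then looks (the
function name, column name and body are invented; mytable and Session refer
to the table and scoped session set up earlier in the thread):

@transaction
def record_value(value):
    # both the literal insert and the ORM flush join the same engine-level
    # transaction started by the decorator, thanks to the threadlocal strategy
    mytable.insert().execute(value=value)
    Session.flush()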




[sqlalchemy] Re: 2 questions

2007-11-12 Thread sdobrev


 hi
 1st one:  i am saving some object; the mapperExtension of the object
 fires additional atomic updates of other things elsewhere  
 (aggregator).
 These things has to be expired/refreshed... if i only knew them.
 For certain cases, the object knows exactly which are these target
 things. How (when) is best to expire these instances, i.e. assure that
 nexttime they are used they will be re-fetched?
 a) in the mapperext - this would be before the flush?
 b) later, after flush, marking them somehow ?
 

 the public way to mark an instance as expired is  
 session.expire(instance).  if you wanted to do this inside the mapper  
 extension, i think its OK as long as you do the expire *after* the  
 object has been inserted/updated (i.e. in after_insert() or  
 after_update()).
   
In the after_*() hooks there are (mapper, connection, instance) arguments, 
but there's no session. Is there any way to get to it? mapext.get_session() 
does not look like the right one.





[sqlalchemy] SA 0.4 weird behaviour

2007-11-12 Thread fw

Hi,

I am having a weird problem. I am dealing with some legacy database,
so I subclass TypeDecorator to help clean things up a bit.

This worked fine in 0.3, but I am now trying to use 0.4 and things
break in the strangest of ways.

When I run the file below, Python complains about

AttributeError: 'String' object has no attribute 'padding'

Now, padding is a parameter of the constructor of PaddedIntString.
So it is not an attribute.

If I make padding an attribute of PaddedIntString, the error
disappears.

Am I doing something wrong or is this a feature of 0.4???

TIA
François


Here is a file that triggers the problem

PaddedIntString allows me to use integers in Python but to have strings
like '02', '10' in the DB.

#!/usr/bin/env python
###
#
#
###


from sqlalchemy.types import TypeDecorator
from sqlalchemy import *
from sqlalchemy import exceptions
from sqlalchemy.orm import *


class Enum(Unicode):
    """An Enum is simply a field where the value can only be
    chosen from a limited list of values"""

    def __init__(self, values):
        '''
        construct an Enum type

        values : a list of values that are valid for this column
        '''
        if values is None or len(values) == 0:
            raise exceptions.AssertionError('Enum requires a list of values')
        self.values = values
        # the length of the string/unicode column should be the longest
        # string in values
        super(Enum, self).__init__(max(map(len, values)))

    def convert_bind_param(self, value, engine):
        if value is None or value == '':
            value = None
        elif value not in self.values:
            raise exceptions.AssertionError('%s not in Enum.values' % value)
        return super(Enum, self).convert_bind_param(value, engine)

    def convert_result_value(self, value, engine):
        if value is not None and value not in self.values:
            raise exceptions.AssertionError('%s not in Enum.values' % value)
        return super(Enum, self).convert_result_value(value, engine)


class CP874String(TypeDecorator):
    """A string type converted between unicode and cp874"""
    impl = String

    def convert_bind_param(self, value, engine):
        """Convert from unicode to cp874"""
        if value is None:
            return None
        return value.encode('cp874')

    def convert_result_value(self, value, engine):
        """Convert from cp874 to unicode"""
        #return unicode(value, "utf8")
        if value is None:
            return None
        return value.decode('cp874')


class IntString(TypeDecorator):
    """A string type converted between unicode and integer"""
    impl = String

    def convert_bind_param(self, value, engine):
        """Convert from int to string"""
        if value is None:
            return None
        return str(value)

    def convert_result_value(self, value, engine):
        """Convert from string to int"""
        #return unicode(value, "utf8")
        if value is None:
            return None
        return int(value.strip())


class PaddedIntString(IntString):
    """A string type converted between a padded string and an integer"""

    def __init__(self, length=None, convert_unicode=False, padding='0'):
        if length is None:
            raise Exception("Use IntString instead")

        self.pat = "%%%s%dd" % (padding, length)
        IntString.__init__(self, length, convert_unicode)

    def convert_bind_param(self, value, engine):
        """Convert from int to padded string"""
        if value is None:
            return None
        return self.pat % (value)

    def convert_result_value(self, value, engine):
        """Convert from string to int"""
        #return unicode(value, "utf8")
        if value is None:
            return None
        return int(value.strip())


class myBoolean(TypeDecorator):
    """A boolean type stored as an integer"""
    impl = Integer

    def convert_bind_param(self, value, engine):
        """Convert from bool to int"""
        if value is None or not value:
            return 0
        return 1

    def convert_result_value(self, value, engine):
        """Convert from int to bool"""
        if value is None or value == 0 or value == '0':
            return False
        return True
class