[sqlalchemy] Concurrency control using version_id_col

2007-06-21 Thread Sanjay

Hi,

In order to achieve concurrency control using optimistic locking, we
are doing the following:

1. Adding a column in the table like this:

page_tbl = Table('page', metadata,
    Column('page_id', Integer, primary_key=True, autoincrement=True),
    Column('link_name', Unicode(30), nullable=False),
    ...
    Column('version', Integer, default=0))

2. Mapping like this:

assign_mapper(session.context, Page, page_tbl,
    version_id_col=page_tbl.c.version, properties={...})


But it does not seem to be working. Are we missing a step?

thanks
Sanjay
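For the record, the check that version_id_col is meant to perform can be illustrated in plain DBAPI terms. This is a hypothetical, standalone sketch with stdlib sqlite3 (the table and function names are illustrative, and this is not the SQLAlchemy codepath): the UPDATE is qualified by the version the row had when it was loaded, and zero matched rows means a concurrent writer got there first.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE page ("
    " page_id INTEGER PRIMARY KEY AUTOINCREMENT,"
    " link_name TEXT NOT NULL,"
    " version INTEGER DEFAULT 0)")
conn.execute("INSERT INTO page (link_name, version) VALUES ('home', 0)")

def update_page(conn, page_id, new_name, expected_version):
    # qualify the UPDATE by the version we loaded; bump it on success
    cur = conn.execute(
        "UPDATE page SET link_name = ?, version = version + 1 "
        "WHERE page_id = ? AND version = ?",
        (new_name, page_id, expected_version))
    if cur.rowcount == 0:
        # the row's version no longer matches: concurrent modification
        raise RuntimeError("stale data: row was updated by someone else")

update_page(conn, 1, "start", expected_version=0)  # succeeds; version is now 1
```

A second call with expected_version=0 would now raise, which is roughly the condition SQLAlchemy reports as a concurrent-modification error when the version check fails.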


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: session.clear() not clearing cascaded items?

2007-06-21 Thread svilen

> That is, in my case, the lifetime of objects is much longer than the
> lifetime of all database-and-related stuff - and it seems this is not
> the expected pattern of usage.

more questions on the theme.
What is the expected sequence / lifetime / pattern of usage for 
engine, metadata, mappers, and finally, my objects? 
This is also related to multiplicity, e.g. can I have several 
metadatas and switch between them for the same objects?

The only obvious rule I see is that
 a) mappers must be made after metadata; I'm not sure whether the 
metadata being unbound is allowed.
The other rule, inferred from the above problem, is
 b) objects' lifetime should be shorter than the mappers' (ORM); else one 
needs to delete o._instance_key and o.db_id to be able to reuse them.

Are there other rules? I do remember that making mappers over unbound 
metadata and binding later did break certain cases.

e.g. the default is:
 1. make engine
 2. make metadata + .create_all()
 3. make mappers
 4. objects

Can I do:
 1. make metadata /unbound
 2. mappers
 3. make engine
 4. bind metadata + .create_all()
 5. objects
then unbind metadata/drop engine, then repeat 3,4,5?

or 
 1. engine
 2. metadata
 3. mappers 
 4. objects
 5. metadata.drop_all() + metadata.create_all()
 6. more objects
 repeat 5,6

or (current links/nodes case)
 1. objects
 2. engine
 3. metadata + create_all()
 4. mappers
 5. drop everything
 repeat 2,3,4,5
 ...

or (probably the better way of doing it, make type-info stuff once)
 1. engine
 2. metadata + create_all()
 3. mappers
 4. objects
 5. metadata.drop_all() + metadata.create_all()
 6. save same objects
 repeat 5,6

svil




[sqlalchemy] Re: Concurrency control using version_id_col

2007-06-21 Thread Michael Bayer


On Jun 21, 2007, at 4:50 AM, Sanjay wrote:


> Hi,
>
> In order to achieve concurrency control using optimistic locking, we
> are doing the following:
>
> 1. Adding a column in the table like this:
>
> page_tbl = Table('page', metadata,
>     Column('page_id', Integer, primary_key=True, autoincrement=True),
>     Column('link_name', Unicode(30), nullable=False),
>     ...
>     Column('version', Integer, default=0))
>
> 2. Mapping like this:
>
> assign_mapper(session.context, Page, page_tbl,
>     version_id_col=page_tbl.c.version, properties={...})
>
> But it does not seem to be working. Are we missing a step?

looks fine to me.  the default=0 shouldn't break it, although we don't  
have that in our unit tests; you could try with it removed.




[sqlalchemy] Re: session.clear() not clearing cascaded items?

2007-06-21 Thread Michael Bayer


On Jun 21, 2007, at 6:59 AM, svilen wrote:

> more questions on the theme.
> What is the expected sequence / lifetime / pattern of usage for
> engine, metadata, mappers, and finally, my objects?

mappers need tables and therefore metadata; objects have mapped  
properties and therefore mappers.  no relation to engines.

> This is also related to multiplicity, e.g. can I have several
> metadatas and switch between them for the same objects.

the mapper is bound to a single metadata.  the mapper defines a  
mapping between a class and a table.  therefore, to have multiple  
metadata, you need multiple mappers, and therefore you need to use  
entity_name.  there are ways to merge() objects from one entity_name  
to the other.

> The only obvious rule I see is that
>  a) mappers must be made after metadata; not sure if the metadata
> being unbound is allowed.

the metadata need not be bound to anything; it can rebind to  
different engines, etc.  the mapper only needs a conceptual Table in  
order to proceed.

> The other rule, inferred from the above problem, is
>  b) objects' lifetime should be shorter than the mappers' (ORM); else one
> needs to delete o._instance_key and o.db_id to be able to reuse them

do you mean, when you say clear_mappers()?  I'm not sure what the use  
case for that is.

> Are there other rules? I do remember that making mappers over unbound
> metadata and binding later did break certain cases.

not at all.  if you reuse the same Session over two different engines  
bound to the same set of tables without refreshing/clearing it, yes,  
that will break.



> Can I do:
>  1. make metadata /unbound
>  2. mappers
>  3. make engine
>  4. bind metadata + .create_all()
>  5. objects
> then unbind metadata/drop engine, then repeat 3,4,5?

yes

> or
>  1. engine
>  2. metadata
>  3. mappers
>  4. objects
>  5. metadata.drop_all() + metadata.create_all()
>  6. more objects
>  repeat 5,6

yes

> or (current links/nodes case)
>  1. objects
>  2. engine
>  3. metadata + create_all()
>  4. mappers
>  5. drop everything
>  repeat 2,3,4,5

if by objects you mean instances and not classes, and you mean  
creating new mappers each time: tricky.  different mappers define  
different attributes for the objects.  what happens to the state of  
the old attributes?  does it just get thrown away?  if you can ignore  
that issue, just remove the _instance_key from the objects and save()  
them into a new session.

>  ...

> or (probably the better way of doing it, make type-info stuff once)
>  1. engine
>  2. metadata + create_all()
>  3. mappers
>  4. objects
>  5. metadata.drop_all() + metadata.create_all()
>  6. save same objects
>  repeat 5,6

yes, if you clear out the _instance_key on the objects and save()  
them into a new session.

if it's different databases you're concerned about: only the Session  
has any attachment to the contents of a specific database.  mappers  
and metadata are completely agnostic to any of that, and only define  
*how* a class is mapped to a particular table structure...they don't  
care about *where*.






[sqlalchemy] Re: Unit Of work seems to be calling save/delete twice

2007-06-21 Thread Eric Ongerth

Thank you.  Glad it worked out easily.





[sqlalchemy] Re: session.clear() not clearing cascaded items?

2007-06-21 Thread sdobrev

thanks a lot, that cleared some mist.
btw you could make an entry in the FAQ from this... e.g. lifetimes and 
usage patterns - or similar.

>> The other rule, inferred from the above problem, is
>>  b) objects' lifetime should be shorter than the mappers' (ORM); else
>> one needs to delete o._instance_key and o.db_id to be able to
>> reuse them

> do you mean, when you say clear_mappers()?  I'm not sure what the
> use case for that is.

It is that the objects exist for their own sake; then at a certain time some 
db mapping is made, they are saved to a database, then all that 
db-related stuff (+mappers) is dropped, and the objects keep living.
Next time it might be a different db/mapping/etc - but eventually the same 
objects.
As I understand it, the ORMapping makes some irreversible changes to 
the objects/classes (e.g. instrumented descriptors, __init__ etc), so 
the above scenario will never be without side effects, and it would be 
preferable to make some deep copy of the structure (class- and 
instance-wise) and do all the ORMapping over the copies.

And as I understand, I can delay the metadata engine-binding 
and .create_all() until really required (the 1st db-operation). Right?


Now I meant this one:
>> or (current links/nodes case)
>>  1. objects (instances)
>>  2. engine
>>  3. metadata + create_all()
>>  4. mappers
>>  5. drop everything
>>  repeat 2,3,4,5

> if by objects you mean instances and not classes, and you
> mean creating new mappers each time: tricky.  different mappers
> define different attributes for the objects.  what happens to the
> state of the old attributes?  does it just get thrown away?  if you can
> ignore that issue, just remove the _instance_key from the objects
> and save() them into a new session.

I needed this just for testing, so the mappers are all the same over and over, 
and the attributes get overwritten anyway.
Yeah, if the mappers are different every time it becomes a mess. But if 
the mapping is the same every time, just remade anew, then it's a 
non-optimized variant of the following case: 

>> or (probably the better way of doing it, make type-info stuff once)
>>  1. engine
>>  2. metadata + create_all()
>>  3. mappers
>>  4. objects
>>  5. metadata.drop_all() + metadata.create_all()
>>  6. save same objects
>>  repeat 5,6

> yes, if you clear out the _instance_key on the objects and save()
> them into a new session.

OK, that works. And I may keep track of all these objects in some 
weakref list and wash them each time.
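That weakref-registry idea can be sketched in a few lines of stdlib Python. This is a hedged illustration, not SQLAlchemy API: the Node class and the `("Node", name)` key are stand-ins for a mapped class and SA's internal identity key.

```python
import weakref

class Node:
    # registry of live instances; does not keep them alive
    _instances = weakref.WeakSet()

    def __init__(self, name):
        self.name = name
        self._instance_key = ("Node", name)  # stand-in for SA's identity key
        Node._instances.add(self)

def wash():
    # strip the stale identity key from every still-living instance,
    # so they can be save()d into a fresh session
    for obj in list(Node._instances):
        obj.__dict__.pop("_instance_key", None)

a, b = Node("a"), Node("b")
wash()
print(hasattr(a, "_instance_key"))  # False
```

Instances that have been garbage-collected simply drop out of the WeakSet, so the wash only ever touches objects that are actually still in use.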

ciao
svil




[sqlalchemy] new sqlalchemy-devel google group

2007-06-21 Thread Michael Bayer

Hi -

those of you involved with the internal tinkerings of SQLAlchemy,
please join over on the sqlalchemy-devel group, which is also at
google groups:

http://groups.google.com/group/sqlalchemy-devel

- mike





[sqlalchemy] Re: help with IronPython

2007-06-21 Thread d_henderson

On Jun 20, 2:09 pm, Rick Morrison [EMAIL PROTECTED] wrote:
> The SA MSSQL module currently supports the three DBAPI modules mentioned
> above (pyodbc, pymssql, adodbapi).
>
> You'll need to either get one of those three to import under IronPython, or
> add support for the IPCE adaptor.

pyodbc, pymssql, and adodbapi each use binary modules, so they would
require some (unknown amount of) conversion to work with IronPython.

I wrapped a module around the IPCE adaptor module to meet (more of)
the DBAPI 2.0 module level requirements, and duplicated the pyodbc
code in databases/mssql.py to use my module.

That got me an error on the Table('users', metadata, ... line from the
tutorial:

Traceback (most recent call last):
  File "D:\Software\IronPython\dbTest\test_sa.py", line 83, in Initialize
  File "D:\Software\IronPython\dbTest\test_sa.py", line 80, in main
  File "D:\Software\IronPython\dbTest\test_sa.py", line 52, in test_users_table
  File "schema", line unknown, in __call__
  File "D:\Downloads\Python\SQLAlchemy-0.3.8\Lib\sqlalchemy\schema.py", line 156, in __call__
  File , line 0, in Make##135
TypeError: Cannot convert _TableSingleton(<class 'sqlalchemy.schema.Table'>) to String


This error looks like it might be due to a difference in the type
systems between C-Python and IronPython, previously reported as
http://www.codeplex.com/IronPython/WorkItem/View.aspx?WorkItemId=7594

That IronPython work item is currently scheduled for the 2.0 alpha 2
release. Note that the community can vote for problems to be fixed on
the Codeplex site.

Here is my wrapper module, which I called ipodbc.py:

import dbapi
import odbc as db
dbapi._load_type(db.assembly, db.typename)

apilevel = '2.0'
threadsafety = 0
paramstyle = 'qmark'

def connect(connectStr):
    return db.connect(connectStr)
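As a sanity check for a wrapper like ipodbc.py, the module-level attributes PEP 249 (DB-API 2.0) requires can be compared against a known-conforming stdlib module; sqlite3 is one:

```python
import sqlite3

# sqlite3 is itself a DB-API 2.0 module, so it exposes the
# module-level names the spec requires
for mod_attr in ("apilevel", "threadsafety", "paramstyle", "connect"):
    assert hasattr(sqlite3, mod_attr), mod_attr

print(sqlite3.apilevel)    # '2.0'
print(sqlite3.paramstyle)  # 'qmark'
```

A wrapper exposing the same names (plus the standard exception classes, which the spec also asks for) is what SQLAlchemy's dialect code expects to find.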





[sqlalchemy] lock tables in python way?

2007-06-21 Thread Can Xue
I'm using MySQL and when I need to lock a table, I do:

conn = engine.connect()
conn._autocommit = dummyFunc
conn.execute("LOCK TABLES tablename WRITE")
try:
    ...
    conn.execute("COMMIT")
except:
    conn.execute("ROLLBACK")
    ...
finally:
    conn.execute("UNLOCK TABLES")
    conn.close()

Are there any more Pythonic ways to do such tasks?


--

XUE Can




[sqlalchemy] Re: lock tables in python way?

2007-06-21 Thread Michael Bayer

conn = engine.connect()
trans = conn.begin()
# ...etc
trans.commit()

table locking is usually either implicit to the SQL operations performed
within the transaction, or done with a construct like SELECT..FOR UPDATE,
but you can still issue your straight text if you like (though LOCK
TABLES isn't portable).
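The begin/commit-or-rollback shape Michael sketches can be shown as a self-contained stdlib example. This is a hedged illustration with sqlite3 standing in for the MySQL connection (table and column names are made up); the transaction itself, rather than LOCK TABLES, provides the isolation here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, value INTEGER)")
conn.execute("INSERT INTO counters VALUES ('hits', 0)")
conn.commit()

try:
    # work happens inside the transaction; commit on success
    conn.execute("UPDATE counters SET value = value + 1 WHERE name = 'hits'")
    conn.commit()
except Exception:
    # undo everything since the last commit on failure
    conn.rollback()
    raise

value = conn.execute(
    "SELECT value FROM counters WHERE name = 'hits'").fetchone()[0]
print(value)  # 1
```

The point of the pattern is that no raw "COMMIT"/"ROLLBACK" strings are issued by hand; the connection (or, in SQLAlchemy, the Transaction object returned by conn.begin()) manages them.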


On Jun 21, 5:41 pm, Can Xue [EMAIL PROTECTED] wrote:
> I'm using MySQL and when I need to lock a table, I do:
>
> conn = engine.connect()
> conn._autocommit = dummyFunc
> conn.execute("LOCK TABLES tablename WRITE")
> try:
>     ...
>     conn.execute("COMMIT")
> except:
>     conn.execute("ROLLBACK")
>     ...
> finally:
>     conn.execute("UNLOCK TABLES")
>     conn.close()
>
> Are there any more Pythonic ways to do such tasks?
>
> --
>
> XUE Can

