[sqlalchemy] Re: session.clear() not clearing cascaded items?

2007-06-21 Thread svilen

 That is, in my case, the lifetime of objects is much longer than the
 lifetime of all the database-and-related stuff - and it seems this is
 not the expected pattern of usage.

more questions on the theme.
What is the expected sequence / lifetime / pattern of usage for 
engine, metadata, mappers, and finally, my objects? 
This is also related to multiplicity, e.g. can I have several 
metadatas and switch between them for the same objects?

The only obvious rule I see is that
 a) mappers must be made after metadata; I'm not sure whether the 
metadata being unbound is allowed.
The other rule, inferred from the above problem, is
 b) the objects' lifetime should be shorter than the mappers' (ORM); 
else one needs to delete o._instance_key and o.db_id to be able to 
reuse them.

Are there other rules? I do remember that making mappers over unbound 
metadata and binding it later did break certain cases.

e.g. the default is:
 1. make engine
 2. make metadata + .create_all()
 3. make mappers
 4. objects
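That default sequence can be sketched as follows. This is a hedged illustration in modern SQLAlchemy (1.4+), which replaced the old mapper() call with registry.map_imperatively(); the Node class and table are illustrative, not from this thread:

```python
# Sketch of the default sequence: engine, metadata, mappers, then objects.
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
from sqlalchemy.orm import registry, Session

class Node:                                        # plain class, instrumented later
    def __init__(self, name):
        self.name = name

engine = create_engine("sqlite://")                # 1. make engine
metadata = MetaData()                              # 2. make metadata + .create_all()
nodes = Table("nodes", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String(50)))
metadata.create_all(engine)
registry().map_imperatively(Node, nodes)           # 3. make mappers

with Session(engine) as session:                   # 4. objects
    session.add(Node("n1"))
    session.commit()
    saved = session.query(Node).count()            # -> 1
```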

can I do:
 1. make metadata /unbound
 2. mappers
 3. make engine
 4. bind metadata + .create_all()
 5. objects
then unbind the metadata / drop the engine, and repeat 3,4,5?
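A minimal sketch of that order in modern SQLAlchemy (1.4+), with an illustrative Node class: the MetaData is never bound, and each round gets a throwaway engine:

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
from sqlalchemy.orm import registry, Session

class Node:
    def __init__(self, name):
        self.name = name

metadata = MetaData()                              # 1. metadata, bound to nothing
nodes = Table("nodes", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String(50)))
registry().map_imperatively(Node, nodes)           # 2. mappers, before any engine

counts = []
for _ in range(2):                                 # repeat 3,4,5
    engine = create_engine("sqlite://")            # 3. make engine
    metadata.create_all(engine)                    # 4. create tables on this engine
    with Session(engine) as session:               # 5. objects
        session.add(Node("n"))
        session.commit()
        counts.append(session.query(Node).count())
    engine.dispose()                               # drop the engine; metadata survives
```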

or 
 1. engine
 2. metadata
 3. mappers 
 4. objects
 5. metadata.drop_all() + metadata.create_all()
 6. more objects
 repeat 5,6

or (current links/nodes case)
 1. objects
 2. engine
 3. metadata + create_all()
 4. mappers
 5. drop everything
 repeat 2,3,4,5
 ...
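The full teardown loop (drop everything, repeat) can be sketched in modern SQLAlchemy (1.4+) with clear_mappers() between rounds; to keep the sketch runnable, the illustrative Link instances are created fresh after each remap rather than reused across rounds:

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
from sqlalchemy.orm import registry, Session, clear_mappers

class Link:
    def __init__(self, name):
        self.name = name

results = []
for _ in range(2):                                 # repeat 2,3,4,5
    engine = create_engine("sqlite://")            # 2. engine
    metadata = MetaData()                          # 3. metadata + create_all()
    links = Table("links", metadata,
                  Column("id", Integer, primary_key=True),
                  Column("name", String(50)))
    metadata.create_all(engine)
    registry().map_imperatively(Link, links)       # 4. mappers
    with Session(engine) as session:
        session.add(Link("l1"))
        session.commit()
        results.append(session.query(Link).count())
    clear_mappers()                                # 5. drop everything:
    metadata.drop_all(engine)                      #    mappers, tables, engine
    engine.dispose()
```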

or (probably a better way of doing it: make the type-info stuff once)
 1. engine
 2. metadata + create_all()
 3. mappers
 4. objects
 5. metadata.drop_all() + metadata.create_all()
 6. save same objects
 repeat 5,6
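A runnable sketch of this last variant in modern SQLAlchemy (1.4+), where make_transient() stands in for manually deleting _instance_key; the Node class is illustrative:

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
from sqlalchemy.orm import registry, Session, make_transient

class Node:
    def __init__(self, name):
        self.name = name

engine = create_engine("sqlite://")                # 1-3. one-time setup
metadata = MetaData()
nodes = Table("nodes", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String(50)))
metadata.create_all(engine)
registry().map_imperatively(Node, nodes)

objs = [Node("a"), Node("b")]                      # 4. objects, created once
with Session(engine, expire_on_commit=False) as session:
    session.add_all(objs)                          # keep attributes loaded on detach
    session.commit()

counts = []
for _ in range(2):                                 # repeat 5,6
    metadata.drop_all(engine)                      # 5. wipe and recreate tables
    metadata.create_all(engine)
    for o in objs:
        make_transient(o)                          # forget the old identity key
    with Session(engine, expire_on_commit=False) as session:
        session.add_all(objs)                      # 6. save the same objects again
        session.commit()
        counts.append(session.query(Node).count())
```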

svil

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: session.clear() not clearing cascaded items?

2007-06-21 Thread Michael Bayer


On Jun 21, 2007, at 6:59 AM, svilen wrote:

 more questions on the theme.
 What is the expected sequence / lifetime / pattern-of-usage for
 engine, metadata, mappers, and finally, my objects?

mappers need tables and therefore metadata; objects have mapped 
properties and therefore mappers.  no relation to engines.

 This is also related to multiplicity, e.g. can i have several
 metadatas, and switch between them for same objects.

the mapper is bound to a single metadata.  the mapper defines a 
mapping between a class and a table.  therefore to have multiple 
metadata, you need multiple mappers, and therefore you need to use 
entity_name.  there are ways to merge() objects from one entity_name 
to the other.
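(entity_name is long gone from modern SQLAlchemy, but the merge() idea survives: a detached object can be merged into a session pointing at an entirely different database. A hedged sketch, 1.4+, with an illustrative Node class:)

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
from sqlalchemy.orm import registry, Session

class Node:
    def __init__(self, name):
        self.name = name

metadata = MetaData()
nodes = Table("nodes", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String(50)))
registry().map_imperatively(Node, nodes)

db1 = create_engine("sqlite://")                   # two independent databases,
db2 = create_engine("sqlite://")                   # one shared mapping
metadata.create_all(db1)
metadata.create_all(db2)

n = Node("shared")
with Session(db1, expire_on_commit=False) as s1:   # persist in the first database
    s1.add(n)
    s1.commit()

with Session(db2, expire_on_commit=False) as s2:
    copied = s2.merge(n)                           # no such row in db2 -> INSERT
    s2.commit()
    in_db2 = s2.query(Node).count()
```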


 The only obvious rule i see is that
  a) mappers must be made after metadata; not sure if that being
 unbound is allowed.

the metadata need not be bound to anything, it can rebind to  
different engines, etc.  the mapper only needs a conceptual Table in  
order to proceed.


 The other rule inferred from above problem is
  b) objects lifetime should be shorter than mappers (ORM); else one
 needs to delete o._instance_key and o.db_id to be able to reuse them

do you mean, when you say clear_mappers()?  i'm not sure what the use 
case for that is.


 Are there other rules? i do remember about doing mappers over unbound
 metadata and binding later did break certain cases.

not at all.  if you reuse the same Session over two different engines  
bound to the same set of tables without refreshing/clearing it, yes  
that will break.



 can i do:
  1. make metadata /unbound
  2. mappers
  3. make engine
  4. bind metadata + .create_all()
  5. objects
 then unbind metadata/drop engine, then repeat 3,4,5 ?

yes


 or
  1. engine
  2. metadata
  3. mappers
  4. objects
  5 metadata.drop_all() + metadata.create_all()
  6 more objects
  repeat 5,6

yes


 or (current links/nodes case)
  1. objects
  2. engine
  3. metadata + createall()
  4. mappers
  5. drop everything
  repeat 2,3,4,5

if by objects you mean instances and not classes, and you mean 
create new mappers each time, that's tricky.  different mappers define 
different attributes for the objects.  what happens to the state of 
the old attributes?  does it just get thrown away?  if you can ignore 
that issue, just remove the _instance_key from the objects and save() 
them into a new session.

  ...

 or (probably better way of doing it, make type-info stuff once)
  1. engine
  2. metadata + createall()
  3. mappers
  4. objects
  5. metadata.drop_all() + metadata.create_all()
  6. save same objects
  repeat 5,6

yes, if you clear out the _instance_key on the objects and save()  
them into a new session.

if it's different databases you're concerned about: only the Session 
has any attachment to the contents of a specific database.  mappers 
and metadata are completely agnostic to any of that, and only define 
*how* a class is mapped to a particular table structure... they don't 
care about *where*.
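In other words, one mapping can serve any number of databases at once; only the sessions know *where* they point. A small sketch of that separation (modern SQLAlchemy 1.4+, illustrative names):

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
from sqlalchemy.orm import registry, Session

class Node:
    def __init__(self, name):
        self.name = name

metadata = MetaData()                              # the *how*: schema + mapping
nodes = Table("nodes", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String(50)))
registry().map_imperatively(Node, nodes)

engine_a = create_engine("sqlite://")              # the *where*: two databases
engine_b = create_engine("sqlite://")
metadata.create_all(engine_a)
metadata.create_all(engine_b)

with Session(engine_a) as sess_a, Session(engine_b) as sess_b:
    sess_a.add_all([Node("a1"), Node("a2")])
    sess_b.add(Node("b1"))
    sess_a.commit()
    sess_b.commit()
    count_a = sess_a.query(Node).count()           # rows in the first db only
    count_b = sess_b.query(Node).count()           # rows in the second db only
```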






[sqlalchemy] Re: session.clear() not clearing cascaded items?

2007-06-21 Thread sdobrev

thanks a lot, that cleared some of the mist.
btw, you could make an FAQ entry out of this... e.g. lifetimes and 
usage patterns - or similar.

  The other rule inferred from above problem is
   b) objects lifetime should be shorter than mappers (ORM); else
  one needs to delete o._instance_key and o.db_id to be able to
  reuse them

 do you mean, when you say clear_mappers() ?  im not sure what the
 use case for that is.
It is that the objects exist for their own sake; then at a certain time 
some db mapping is made, they are saved to a database, then all that 
db-related stuff (+mappers) is dropped, and the objects keep living.
Next time it might be a different db/mapping/etc. - but eventually the 
same objects.
As I understand it, the ORMapping makes some irreversible changes to 
the objects/classes (e.g. instrumented descriptors, __init__, etc.), so 
the above scenario will never be without side effects, and it would be 
preferable to make some deep copy of the structure (class- and 
instance-wise) and do all the ORMapping over the copies.

And as I understand it, I can delay the metadata engine-binding 
and .create_all() until really required (1st db operation). Right?
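(For what it's worth, nothing in modern SQLAlchemy (1.4+) forces the engine to exist before the first real database call. A hedged sketch of such lazy binding; get_session() is a hypothetical helper, not an SQLAlchemy API:)

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
from sqlalchemy.orm import registry, Session

class Node:
    def __init__(self, name):
        self.name = name

metadata = MetaData()                              # schema + mapping exist up front
nodes = Table("nodes", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String(50)))
registry().map_imperatively(Node, nodes)

_engine = None

def get_session():
    """Create the engine and the tables lazily, on the first real db operation."""
    global _engine
    if _engine is None:
        _engine = create_engine("sqlite://")
        metadata.create_all(_engine)
    return Session(_engine)

before_first_use = _engine                         # still None: no db touched yet
with get_session() as session:
    session.add(Node("lazy"))
    session.commit()
    saved = session.query(Node).count()
```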


now i meant this one:
  or (current links/nodes case)
   1. objects (instances)
   2. engine
   3. metadata + createall()
   4. mappers
   5. drop everything
   repeat 2,3,4,5

 if by objects you mean instances and not classes, and you
 mean create new mappers each time, tricky.  different mappers
 define different attributes for the objects.  what happens to the
 state of the old attributes ?  just gets thrown away ?  if you can
 ignore that issue, just remove the _instance_key from the objects
 and save() them into a new session.

I needed this just for testing, so the mappers are all the same over and 
over, and the attributes get overwritten anyway.
Yeah, if the mappers are different every time it becomes a mess. But if 
the mapping is the same every time, just remade anew, then it's a 
non-optimized variant of the following case: 

  or (probably better way of doing it, make type-info stuff once)
   1. engine
   2. metadata + createall()
   3. mappers
   4. objects
   5. metadata.drop_all() + metadata.create_all()
   6. save same objects
   repeat 5,6

 yes, if you clear out the _instance_key on the objects and save()
 them into a new session.

ok, that works. And I may keep track of all these objects in some 
weakref list, and wash them each time.
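That tracking idea might look like this in modern SQLAlchemy (1.4+), where a weakref.WeakSet tracks instances without keeping them alive and make_transient() does the "washing"; the Node class is illustrative:

```python
import gc
import weakref
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
from sqlalchemy.orm import registry, Session, make_transient

class Node:
    def __init__(self, name):
        self.name = name

engine = create_engine("sqlite://")
metadata = MetaData()
nodes = Table("nodes", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String(50)))
metadata.create_all(engine)
registry().map_imperatively(Node, nodes)

tracked = weakref.WeakSet()                        # tracks objects without pinning them
objs = [Node("a"), Node("b")]
tracked.update(objs)

tmp = Node("short-lived")
tracked.add(tmp)
del tmp                                            # last reference gone...
gc.collect()
survivors = len(tracked)                           # ...so only the two remain

with Session(engine, expire_on_commit=False) as session:
    session.add_all(objs)
    session.commit()

metadata.drop_all(engine)                          # wipe the database...
metadata.create_all(engine)
for o in list(tracked):
    make_transient(o)                              # ..."wash" each tracked object
with Session(engine, expire_on_commit=False) as session:
    session.add_all(list(tracked))
    session.commit()
    resaved = session.query(Node).count()
```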

ciao
svil




[sqlalchemy] Re: session.clear() not clearing cascaded items?

2007-06-20 Thread svilen

i think i found something...

I am doing testing with these links and nodes.
For each testcase I have one constant set of nodes, and then try 
different scenarios with various combinations of links. 
Each scenario creates all the database stuff anew - db, metadata, 
mappers, session - and after testing destroys it all (session.close, 
clear_mappers, drop_all, dispose).

Because I am using the same set of objects for many such scenarios, it 
happened that they keep their _instance_keys, as no one deletes 
them. Hence the first scenario will save Link1 with Node1 and Node2, 
but a second scenario with Link2 and Node1 / Node2 will not save 
Node1 / Node2, as they already have an _instance_key on them (from a 
previous scenario / database) and as far as session.py is concerned 
they are considered saved...

And sooner or later the mistakes accumulate, and even _session_ids are 
not cleaned up, because certain objects are considered already saved, 
i.e. not subject to the current session.

All this is OK if I make the db/metadata only once for a set of objects, 
but I would still like to be able to drop everything and start anew 
without side effects / leftovers.

Is there any SA way to remove these _instance_key identity-map keys 
from the objects?

That is, in my case, the lifetime of objects is much longer than the 
lifetime of all the database-and-related stuff - and it seems this is 
not the expected pattern of usage.

svil

  g'day
 
  i have Links, pointing to Nodes.
 
  Adding just Links to session + flush() works as expected - both
  Links and Nodes got saved.
 
  Doing session.close(), though, does not always detach all related
  objects, i.e. sometimes some Nodes stay with a _session_id on them
  after the session is long gone.
 
  "for x in mysession: print x" does not print those objects, i.e.
  they are not explicitly in the session.
 
  Is this some intended behaviour, and is there some way to clear
  these cascaded objects too?

 i think you should create a test case.  also start poking around
 the source code on this one; Session.clear()/close() are pretty
 simple, and they shouldn't have to deal with cascading anything.

 

