Re: [sqlalchemy] [Q] Move elements in one-to-many relationship to a new owner

2012-11-13 Thread Ladislav Lenart
Hello.

I was unable to reproduce the problem with your test example. I THINK it does
exactly what my problematic code does. It seems that either the problem is more
involved (but I don't see it) or I have a bug somewhere in my code. However,
my simple workaround works, so I consider the problem solved, at least for now.
I will try to dig deeper when I have more time.


Ladislav Lenart


On 9.11.2012 23:22, Michael Bayer wrote:
> There's no mapping or code example provided here, which makes a condition
> like this extremely difficult to diagnose. However, I've prepared a test
> script that takes its best guess as to configuration, that is, a
> client_products collection with a "client" backref and a cascade of "all,
> delete-orphan" on the one-to-many, which maximizes the conditions under which
> a ClientProduct might be deleted. The test case below does not exhibit this
> behavior; please alter it so that it reproduces the condition you are seeing,
> so that we can diagnose this fully and ensure you aren't coming across any
> new SQLAlchemy bugs:
> 
> 
> from sqlalchemy import *
> from sqlalchemy.orm import *
> from sqlalchemy.ext.declarative import declarative_base
> 
> Base = declarative_base()
> 
> class Client(Base):
>     __tablename__ = "a"
> 
>     id = Column(Integer, primary_key=True)
>     data = Column(String)
>     client_products = relationship("ClientProduct",
>                                    backref='client',
>                                    cascade="all, delete-orphan")
> 
> 
> class ClientProduct(Base):
>     __tablename__ = "b"
> 
>     id = Column(Integer, primary_key=True)
>     data = Column(String)
>     a_id = Column(Integer, ForeignKey('a.id'))
> 
> e = create_engine("sqlite://", echo=True)
> 
> Base.metadata.create_all(e)
> 
> s = Session(e)
> 
> 
> s.add_all([
>     Client(data='c1', client_products=[
>         ClientProduct(data='cp1'),
>         ClientProduct(data='cp2'),
>         ClientProduct(data='cp3'),
>     ]),
>     Client(data='c2', client_products=[
>         ClientProduct(data='cp4'),
>         ClientProduct(data='cp5'),
>         ClientProduct(data='cp6'),
>     ]),
> ])
> s.commit()
> 
> 
> c1 = s.query(Client).filter_by(data='c1').one()
> c2 = s.query(Client).filter_by(data='c2').one()
> 
> cp1, cp2 = c1.client_products[0:2]
> 
> cp1.client = c2
> cp2.client = c2
> s.delete(c1)
> s.commit()
> 
> assert s.query(Client.data).all() == [('c2',)]
> assert s.query(ClientProduct.data).all() == [('cp1',), ('cp2',), ('cp4',), ('cp5',), ('cp6',)]
> 
> 
> 
> 
> On Nov 8, 2012, at 11:30 AM, Ladislav Lenart wrote:
> 
>> Hello.
>>
>> I have a client which has a collection of ClientProduct-s (ClientProduct has a
>> FK to Client). The following code:
>>
>> # Move some client products from a duplicate to the original.
>> # Remove duplicate clients afterwards (in cascade).
>> #
>> # Note that client_map is a dict from a duplicate to its original.
>> for each_duplicate, each_client in client_map.iteritems():
>>     for each_cp in each_duplicate.client_products:
>>         if some_condition(each_cp):
>>             each_cp.client = each_client
>>     session.delete(each_duplicate)
>> session.flush()
>>
>> deletes a client product that was moved from each_duplicate to each_client in
>> the inner loop. Why? What can I do to prevent it?
>>
>>
>> Thank you in advance,
>>
>> Ladislav Lenart
>>



Re: [sqlalchemy] [Q] Move elements in one-to-many relationship to a new owner

2012-11-09 Thread Michael Bayer
There's no mapping or code example provided here, which makes a condition
like this extremely difficult to diagnose. However, I've prepared a test
script that takes its best guess as to configuration, that is, a
client_products collection with a "client" backref and a cascade of "all,
delete-orphan" on the one-to-many, which maximizes the conditions under which
a ClientProduct might be deleted. The test case below does not exhibit this
behavior; please alter it so that it reproduces the condition you are seeing,
so that we can diagnose this fully and ensure you aren't coming across any
new SQLAlchemy bugs:


from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Client(Base):
    __tablename__ = "a"

    id = Column(Integer, primary_key=True)
    data = Column(String)
    client_products = relationship("ClientProduct",
                                   backref='client',
                                   cascade="all, delete-orphan")


class ClientProduct(Base):
    __tablename__ = "b"

    id = Column(Integer, primary_key=True)
    data = Column(String)
    a_id = Column(Integer, ForeignKey('a.id'))

e = create_engine("sqlite://", echo=True)

Base.metadata.create_all(e)

s = Session(e)


s.add_all([
    Client(data='c1', client_products=[
        ClientProduct(data='cp1'),
        ClientProduct(data='cp2'),
        ClientProduct(data='cp3'),
    ]),
    Client(data='c2', client_products=[
        ClientProduct(data='cp4'),
        ClientProduct(data='cp5'),
        ClientProduct(data='cp6'),
    ]),
])
s.commit()


c1 = s.query(Client).filter_by(data='c1').one()
c2 = s.query(Client).filter_by(data='c2').one()

cp1, cp2 = c1.client_products[0:2]

cp1.client = c2
cp2.client = c2
s.delete(c1)
s.commit()

assert s.query(Client.data).all() == [('c2',)]
assert s.query(ClientProduct.data).all() == [('cp1',), ('cp2',), ('cp4',), ('cp5',), ('cp6',)]




On Nov 8, 2012, at 11:30 AM, Ladislav Lenart wrote:

> Hello.
> 
> I have a client which has a collection of ClientProduct-s (ClientProduct has a
> FK to Client). The following code:
> 
> # Move some client products from a duplicate to the original.
> # Remove duplicate clients afterwards (in cascade).
> #
> # Note that client_map is a dict from a duplicate to its original.
> for each_duplicate, each_client in client_map.iteritems():
>     for each_cp in each_duplicate.client_products:
>         if some_condition(each_cp):
>             each_cp.client = each_client
>     session.delete(each_duplicate)
> session.flush()
> 
> deletes a client product that was moved from each_duplicate to each_client in
> the inner loop. Why? What can I do to prevent it?
> 
> 
> Thank you in advance,
> 
> Ladislav Lenart
> 



Re: [sqlalchemy] [Q] Move elements in one-to-many relationship to a new owner

2012-11-09 Thread Ladislav Lenart
Hello.

I managed to solve my problem: I now use session.execute(...) instead of
session.delete(), because session.delete() deletes more things than it should.
The working code is:

# Move some client products from a duplicate to the original.
# Remove duplicate clients afterwards (in cascade).
#
# Note that client_map is a dict from a duplicate to its original.
for each_duplicate, each_client in client_map.iteritems():
    for each_cp in each_duplicate.client_products:
        if some_condition(each_cp):
            each_cp.client = each_client
session.flush()
table = Client.__table__
duplicate_ids = [each.id for each in client_map.iterkeys()]
q = table.delete().where(table.c.id.in_(duplicate_ids))
session.execute(q)

I still want to know whether this is expected behaviour and, if so, why:
 1. ClientProduct has a FK to Client. Client has a client_products relationship
    with ON DELETE CASCADE, so when a client is deleted, all its client_products
    are deleted too.
 2. Suppose I have a client with two client_products.
 3. I move one of them to a different client:
    client.client_products[0].client = other_client.
 4. I delete the client: session.delete(client).
 5. session.deleted now contains TWO ClientProduct instances instead of ONE, even
    though I moved one of them to a completely different client.
 6. Why?! Can I do anything to prevent this, e.g. insert a call to
    session.flush(), session.expunge(), session.refresh() or some such somewhere?
    All my attempts with session.flush() failed (had no effect). (A minimal
    sketch of these steps follows below.)
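
Here is a minimal sketch of steps 2-5, reusing the Client / ClientProduct
mapping and the session "s" from the test script shown earlier in this thread
(so the "all, delete-orphan" cascade is assumed). It prints session.deleted
right after the delete and before the flush, to see what the unit of work
plans to delete; note that in that test script the reassigned product does not
end up deleted:

# Sketch only -- assumes the Client / ClientProduct mapping and the
# session "s" from the test script earlier in the thread.
c1 = s.query(Client).filter_by(data='c1').one()
c2 = s.query(Client).filter_by(data='c2').one()

moved = c1.client_products[0]
moved.client = c2     # step 3: reassign the product to the surviving client
s.delete(c1)          # step 4: delete the old client (delete-orphan cascade)

# step 5: session.deleted lists objects currently marked for deletion; it is
# only meaningful before the flush/commit that emits the actual DELETEs.
print(s.deleted)
s.commit()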


Thank you,

Ladislav Lenart


On 8.11.2012 17:30, Ladislav Lenart wrote:
> Hello.
> 
> I have a client which has a collection of ClientProduct-s (ClientProduct has a
> FK to Client). The following code:
> 
> # Move some client products from a duplicate to the original.
> # Remove duplicate clients afterwards (in cascade).
> #
> # Note that client_map is a dict from a duplicate to its original.
> for each_duplicate, each_client in client_map.iteritems():
>     for each_cp in each_duplicate.client_products:
>         if some_condition(each_cp):
>             each_cp.client = each_client
>     session.delete(each_duplicate)
> session.flush()
> 
> deletes a client product that was moved from each_duplicate to each_client in
> the inner loop. Why? What can I do to prevent it?
> 
> 
> Thank you in advance,
> 
> Ladislav Lenart




[sqlalchemy] [Q] Move elements in one-to-many relationship to a new owner

2012-11-08 Thread Ladislav Lenart
Hello.

I have a client which has a collection of ClientProduct-s (ClientProduct has a
FK to Client). The following code:

# Move some client products from a duplicate to the original.
# Remove duplicate clients afterwards (in cascade).
#
# Note that client_map is a dict from a duplicate to its original.
for each_duplicate, each_client in client_map.iteritems():
    for each_cp in each_duplicate.client_products:
        if some_condition(each_cp):
            each_cp.client = each_client
    session.delete(each_duplicate)
session.flush()

deletes a client product that was moved from each_duplicate to each_client in
the inner loop. Why? What can I do to prevent it?


Thank you in advance,

Ladislav Lenart
