[sqlalchemy] foreign key support in SQLite

2008-01-28 Thread Manlio Perillo

Hi.

In this wiki page:
http://www.sqlite.org/cvstrac/wiki?p=ForeignKeyTriggers

there is a recipe on how to add basic foreign key support to SQLite
using triggers.

Is it possible to implement this recipe in SQLAlchemy?
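
For reference, this is roughly the kind of trigger the recipe generates; the
sketch below is mine rather than the wiki's, uses only the stdlib sqlite3
module, and the parent/child table and trigger names are made up for
illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE parent (id INTEGER PRIMARY KEY);
CREATE TABLE child  (id INTEGER PRIMARY KEY, parent_id INTEGER);

-- reject INSERTs on child that reference a missing parent row
CREATE TRIGGER fki_child_parent_id
BEFORE INSERT ON child
FOR EACH ROW BEGIN
    SELECT RAISE(ABORT, 'insert on child violates foreign key on parent_id')
    WHERE NEW.parent_id IS NOT NULL
      AND (SELECT id FROM parent WHERE id = NEW.parent_id) IS NULL;
END;
""")

conn.execute("INSERT INTO parent (id) VALUES (1)")
conn.execute("INSERT INTO child (id, parent_id) VALUES (1, 1)")     # ok
try:
    conn.execute("INSERT INTO child (id, parent_id) VALUES (2, 99)")
except sqlite3.IntegrityError as exc:                               # no parent 99
    print("rejected:", exc)

The full recipe also needs matching triggers for UPDATEs on the child table
and for DELETE/UPDATE on the parent table.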

Some time ago I tried to implement it by adding support for
triggers in SQLAlchemy (so that they can be created/dropped
automatically), but I abandoned the project because the internals
of SQLAlchemy are unstable.



Thanks   Manlio Perillo





[sqlalchemy] Re: self-referential table question

2008-01-28 Thread Steve Zatz

Michael,

Works perfectly. Thanks much.

Steve




[sqlalchemy] Re: self-referential table question

2008-01-28 Thread jason kirtland

Steve Zatz wrote:
 I realize this is actually an SQL question but I haven't been able to
 figure out the answer.
 
 In a simple self-referential table, the following produces all the
 Nodes that are parents to some child node(s):
 
 node_table_alias = node_table.alias()
 parents = session.query(Node).filter(Node.id == node_table_alias.c.parent_id)
 
 I can't figure out the analogous query that produces all the Nodes
 that are not parents to another node.  It is clear that:
 
 non_parents = session.query(Node).filter(Node.id !=
 node_table_alias.c.parent_id)
 
 doesn't work but I can't figure out what the right query is.  Any help
 would be appreciated.

another option is:

.query(Node).filter(not_(Node.id.in_(select([Node.parent_id]))))
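
For a self-contained version of the same idea, here is a sketch against a
current SQLAlchemy release rather than the 0.4-style API used in this thread;
the minimal Node model and the sample rows are assumptions made just so the
snippet runs on its own:

from sqlalchemy import Column, ForeignKey, Integer, create_engine, not_, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Node(Base):
    __tablename__ = "node"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("node.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    root, leaf = Node(), Node()
    session.add_all([root, leaf])
    session.flush()
    leaf.parent_id = root.id
    session.flush()

    # ids that never appear as anyone's parent_id; NULL parent_ids must be
    # excluded, otherwise the NOT IN comparison matches no rows at all
    child_ids = select(Node.parent_id).where(Node.parent_id != None)
    non_parents = session.query(Node).filter(not_(Node.id.in_(child_ids))).all()
    print([n.id for n in non_parents])    # only the leaf node's id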





[sqlalchemy] Re: foreign key support in SQLite

2008-01-28 Thread Manlio Perillo

Koen Bok wrote:
 Hey Manlio,
 
 My feeling is that this is out of the scope of the SQLAlchemy project.
 It should support the official workings for each database package, not
 extend it. 

Right, this should not be part of the SQLAlchemy core, but maybe it
could live in a contrib package?


Has nobody else written a small function for this simple job?


 It is pretty easy to implement this yourself and use
 SQLAlchemy on top of that (I use a lot of triggers in my app). 
 There
 may be something to say for pythonizing triggers so they become
 database agnostic, but I am not even sure if that is possible given
 all the different triggers different databases can have.
 

I'm not sure, but the generic CREATE TRIGGER syntax is quite standard
across databases. The only real problem is the trigger action (the body).


 What do you mean by 'unstable internals' in SQLAlchemy?
 

Unstable internal API.

 Kindest regards,
 
 Koen Bok
 


Manlio Perillo




[sqlalchemy] Re: foreign key support in SQLite

2008-01-28 Thread Koen Bok

Hey Manlio,

My feeling is that this is out of the scope of the SQLAlchemy project.
It should support the official workings for each database package, not
extend it. It is pretty easy to implement this yourself and use
SQLAlchemy on top of that (I use a lot of triggers in my app). There
may be something to be said for pythonizing triggers so they become
database agnostic, but I am not even sure if that is possible given
all the different triggers different databases can have.

What do you mean by 'unstable internals' in SQLAlchemy?

Kindest regards,

Koen Bok

On Jan 28, 12:03 pm, Manlio Perillo [EMAIL PROTECTED] wrote:
 Hi.

 In this wiki page:http://www.sqlite.org/cvstrac/wiki?p=ForeignKeyTriggers

 there is a recipe on how to add basic foreign key support to SQLite
 using triggers.

 Is it possible to implement this recipe in SQLAlchemy?

 Some time ago I have tried to implement it, by adding support for
 triggers in SQLAlchemy (so that they can be automatically
 created/dropped) but I have abandoned the project because the internals
 of SQLAlchemy are unstable.

 Thanks   Manlio Perillo



[sqlalchemy] Re: ANN: sqlalchemy-validations 0.1

2008-01-28 Thread Leandro Lameiro

Hi Nebur.

On Jan 27, 2008 8:28 AM, Nebur [EMAIL PROTECTED] wrote:

 Code hardening is always good ... I like your idea.
 Did you already think about easy validation of a set of depending
 fields, too?

Yes. You just need to inherit from the Validation class and write your
own validation rule. This is quite easy to do; you can take a look at
how the FieldFormatValidation and FieldRangeValidation classes are
implemented.

  Ruben


-- 
Best regards
Leandro Lameiro

Blog: http://lameiro.wordpress.com




[sqlalchemy] Re: self-referential table question

2008-01-28 Thread Steve Zatz

 another option is:
 .query(Node).filter(not_(Node.id.in_(select([Node.parent_id]))))

jason, thanks for the alternative method. Steve




[sqlalchemy] Re: foreign key support in SQLite

2008-01-28 Thread Michael Bayer


On Jan 28, 2008, at 6:03 AM, Manlio Perillo wrote:


 Hi.

 In this wiki page:
 http://www.sqlite.org/cvstrac/wiki?p=ForeignKeyTriggers

 there is a recipe on how to add basic foreign key support to SQLite
 using triggers.

 Is it possible to implement this recipe in SQLAlchemy?

 Some time ago I have tried to implement it, by adding support for
 triggers in SQLAlchemy (so that they can be automatically
 created/dropped) but I have abandoned the project because the  
 internals
 of SQLAlchemy are unstable.


we have ticket 903 http://www.sqlalchemy.org/trac/ticket/903  with a  
suggested way we'll be accomplishing this, a generic DDL() construct  
that associates with table/metadata creates/drops.  we just haven't  
decided on exact API details but comments are welcome.




[sqlalchemy] No attribute '_instance_key': Defining primary keys and surrogate primary keys

2008-01-28 Thread Phil Coombs

Hi

I'm a Python and SA newbie investigating SA for a project using
Postgres. I have a database schema that I want to point SA at and then
be able to write my Python code.

I'm working with a version of the basic_association example to keep
things simple. I want to have unique auto-generated ids on my tables
(surrogate primary keys) but also want to have primary keys defined
over the entities' actual attributes. I want to have surrogate keys on
my tables so that I can use them instead of the full primary keys
when writing joins, to keep the SQL brief. In some cases (e.g. the
association table) I'm defining the primary key in terms of surrogate
primary keys rather than copying over the full primary key from the
associated tables.

When I run the code (below) I get AttributeError: 'Order' object has
no attribute '_instance_key'.

Please can someone help me setup SA to work in this example?

Thanks in advance

Phil

Postgres DDL
--
create table orders (
 order_id          Serial,
 o_customer_name   text           not null,
 o_order_date      timestamp      not null,
 primary key (o_customer_name, o_order_date) using index,
 unique (order_id));

create table items (
 item_id           Serial,
 it_description    text           not null,
 it_price          Numeric(8,2)   not null,
 primary key (it_description) using index,
 unique (item_id) using index);

create table order_items (
 oi_order_id   Integer   not null,
 oi_item_idInteger   not null,
 oi_price  Numeric(8,2)  not null,
primary key (oi_order_id, oi_item_id) using index);

Python
--
[SNIP] setup and connect string

engine = create_engine(connectString, echo=False)
metadata = MetaData(engine)

orders_table = Table('orders', metadata, autoload=True)
items_table = Table('items', metadata, autoload=True)
orderitems_table = Table('order_items', metadata,
                         Column('oi_order_id', Integer,
                                ForeignKey('orders.order_id'),
                                primary_key=True),
                         Column('oi_item_id', Integer,
                                ForeignKey('items.item_id'),
                                primary_key=True),
                         autoload=True)

class Order(object):
def __init__(self, customer_name, order_date):
self.customer_name = customer_name
self.order_date = order_date

class Item(object):
def __init__(self, description, price):
self.description = description
self.price = price

class OrderItem(object):
def __init__(self, item, price=None):
self.item = item
self.price = price or item.price

mapper(Order, orders_table, properties = {
'id': orders_table.c.order_id,
'customer_name' : orders_table.c.o_customer_name,
'order_date': orders_table.c.o_order_date,
'order_items'   : relation(OrderItem)
})

mapper(Item, items_table, properties = {
'id' : items_table.c.item_id,
'description': items_table.c.it_description,
'price'  : items_table.c.it_price
})

mapper(OrderItem, orderitems_table, properties = {
'order_id' : orderitems_table.c.oi_order_id,
'item_id'  : orderitems_table.c.oi_item_id,
'price': orderitems_table.c.oi_price,
'item' : relation(Item)
})

Session = sessionmaker(bind=engine, autoflush=True,
transactional=True)
session = Session()

session.save(Item('Item A', 10.99))
session.save(Item('Item B', 8))
session.save(Item('Item C', 4))
session.save(Item('Item D', 1))
session.flush()

def item(name):
return session.query(Item).filter_by(description=name).one()

order = Order("cust1", datetime.now())
order.order_items.append(OrderItem(item('Item A')))
order.order_items.append(OrderItem(item('Item B'),10))
order.order_items.append(OrderItem(item('Item C')))

session.save(order)
session.commit()




[sqlalchemy] Re: No attribute '_instance_key': Defining primary keys and surrogate primary keys

2008-01-28 Thread Michael Bayer

Hi Phil -

I'm going to run this locally to see what's up - my immediate intuition
is that it has something to do with the Column objects you are using
to override those loaded from the autoload=True, since that's the only
thing here that is at all unusual...maybe try not using autoload=True
for that particular table.  But I'll run it here regardless and let you
know what I find.

On Jan 28, 2008, at 4:03 PM, Phil Coombs wrote:

 [SNIP] quoted code




[sqlalchemy] Re: Question: mapping a complex SQL instruction as a relation

2008-01-28 Thread Michael Bayer


On Jan 28, 2008, at 12:03 PM, Stefano Bartaletti wrote:


 Hello,

 I have two tables defined this way:

 tabItems = sqa.Table("items", meta,
   sqa.Column("id", sqa.Integer, primary_key=True),
   )
 tabTracking = sqa.Table("tracking", meta,
   sqa.Column("id", sqa.Integer, primary_key=True),
   sqa.Column("item_id", sqa.Integer, sqa.ForeignKey("items.id")),
   sqa.Column("date_start", sqa.DateTime, default="now"),
   sqa.Column("date_end", sqa.DateTime, default="infinity"),
   )

 Now I would like the Item mapper to automagically get the last  
 Tracking
 record (through MAX() on date_start field)

 In SQL should be:

 select items.*, tracking.*
 from items
 left join tracking on
   tracking.item_id=items.id
   and tracking.date_start = (
   select max(date_start)
   from tracking t1
   where t1.item_id=tracking.item_id
   )

 Is it possible to write a relation() to perform such a task at  
 mapper level?


sure, just use a custom primaryjoin condition.  We have some trac
tickets related to improving the behavior of the correlation you're
doing there (correlating tracking to the parent items table in the
subquery) in conjunction with a relation(); however, if you use
lazy=False it is doable right now, as in (replace "users" with
"items", "stuff" with "tracking"):

salias = stuff.alias()

stuff_view = select([func.max(salias.c.date).label('max_date')]
                    ).where(salias.c.user_id == users.c.id)

mapper(User, users, properties={
    'stuff': relation(Stuff, lazy=False,
        primaryjoin=and_(users.c.id == stuff.c.user_id,
                         stuff.c.date == stuff_view.label('foo')))
})

this example is the working version of what's in ticket #948.  a few  
other combinations of the above are not yet working, namely if you  
tried using lazy=True, or if you put an explicit correlate(users) on  
the stuff_view selectable.

there are other ways to do this too, like not using the correlation to
items and instead joining to a view of all the max(date_start) values,
though that tends to produce less efficient queries (but that also works
better with SQLAlchemy at the moment).
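
For completeness, here is a sketch of that second approach at the SQL-expression
level: join items to a derived table of per-item max(date_start) values, then
join tracking on that pair. It is written against a current SQLAlchemy release
rather than the 0.4 select([...]) spelling used above, and it rebuilds the two
tables from the original question so the snippet stands alone:

from sqlalchemy import (Column, DateTime, ForeignKey, Integer, MetaData,
                        Table, func, select)

meta = MetaData()
tabItems = Table("items", meta, Column("id", Integer, primary_key=True))
tabTracking = Table("tracking", meta,
                    Column("id", Integer, primary_key=True),
                    Column("item_id", Integer, ForeignKey("items.id")),
                    Column("date_start", DateTime))

# one row per item_id carrying its latest date_start
latest = (select(tabTracking.c.item_id,
                 func.max(tabTracking.c.date_start).label("max_start"))
          .group_by(tabTracking.c.item_id)
          .subquery("latest"))

# items left-joined to only the newest tracking row for each item
last_tracking = select(tabItems, tabTracking).select_from(
    tabItems.outerjoin(latest, latest.c.item_id == tabItems.c.id)
            .outerjoin(tabTracking,
                       (tabTracking.c.item_id == tabItems.c.id)
                       & (tabTracking.c.date_start == latest.c.max_start)))

print(last_tracking)    # inspect the rendered SELECT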




[sqlalchemy] Re: foreign key support in SQLite

2008-01-28 Thread Manlio Perillo

Michael Bayer wrote:
 
 [SNIP]
 we have ticket 903 http://www.sqlalchemy.org/trac/ticket/903  with a  
 suggested way we'll be accomplishing this, a generic DDL() construct  
 that associates with table/metadata creates/drops.  we just haven't  
 decided on exact API details but comments are welcome.
 

Thanks.
I like the idea of custom events.

However, instead of having:
table.events['after-create'].append(run)

IMHO a higher-level API would be better:
table.add_event(phase, event_type, callable, *args, **kwargs)

where phase is "create" or "drop" and event_type is "before" or "after".


(http://twistedmatrix.com/trac/browser/trunk/twisted/internet/interfaces.py#L624)
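
As a rough sketch of the idea (none of this is actual SQLAlchemy API, just an
illustration of the proposal), add_event() could sit on top of the events dict
from the ticket and validate phase/event_type up front:

PHASES = ("create", "drop")
EVENT_TYPES = ("before", "after")

class EventedTable(object):
    """Toy stand-in for a Table carrying the events dict from the ticket."""
    def __init__(self, name):
        self.name = name
        self.events = dict(("%s-%s" % (t, p), [])
                           for t in EVENT_TYPES for p in PHASES)

    def add_event(self, phase, event_type, callable_, *args, **kwargs):
        # the higher-level entry point proposed above, validating its inputs
        if phase not in PHASES or event_type not in EVENT_TYPES:
            raise ValueError("unknown phase/event_type: %r/%r"
                             % (phase, event_type))
        self.events["%s-%s" % (event_type, phase)].append(
            (callable_, args, kwargs))

    def fire(self, key):
        for callable_, args, kwargs in self.events[key]:
            callable_(self, *args, **kwargs)

# usage: attach something that would emit the SQLite FK triggers after CREATE TABLE
table = EventedTable("child")
table.add_event("create", "after",
                lambda tbl: print("emit FK triggers for", tbl.name))
table.fire("after-create")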


Manlio Perillo




[sqlalchemy] Re: No attribute '_instance_key': Defining primary keys and surrogate primary keys

2008-01-28 Thread Michael Bayer

hey Phil -

reflection bug.  Here's a workaround for now:

orders_table = Table('orders', metadata,
                     Column('order_id', Integer, Sequence('orders_order_id_seq'),
                            primary_key=True),
                     autoload=True)


On Jan 28, 2008, at 4:03 PM, Phil Coombs wrote:


 [SNIP] quoted message





[sqlalchemy] Question: mapping a complex SQL instruction as a relation

2008-01-28 Thread Stefano Bartaletti

Hello,

I have two tables defined this way:

tabItems = sqa.Table("items", meta,
    sqa.Column("id", sqa.Integer, primary_key=True),
    )
tabTracking = sqa.Table("tracking", meta,
    sqa.Column("id", sqa.Integer, primary_key=True),
    sqa.Column("item_id", sqa.Integer, sqa.ForeignKey("items.id")),
    sqa.Column("date_start", sqa.DateTime, default="now"),
    sqa.Column("date_end", sqa.DateTime, default="infinity"),
    )

Now I would like the Item mapper to automagically get the last Tracking 
record (through MAX() on date_start field)

In SQL it should be:

select items.*, tracking.*
from items
left join tracking on
tracking.item_id=items.id
and tracking.date_start = (
select max(date_start) 
from tracking t1 
where t1.item_id=tracking.item_id
)

Is it possible to write a relation() to perform such a task at mapper level?

-- 
Regards

Stefano Bartaletti
Software Manager

G.Tosi Spa Tintoria

Skype account: stefano.bartaletti
ICQ contact  : 1271960

Viale dell'Industria 61
21052 Busto Arsizio (VA)

Tel. +39 0331 34 48 11
Fax  +39 0331 35 21 23 




[sqlalchemy] Re: No attribute '_instance_key': Defining primary keys and surrogate primary keys

2008-01-28 Thread Michael Bayer

OK, I missed that order_id is not a PK and is kind of a surrogate
non-primary-key column.  So, yes, you've touched upon something here
we've never been asked to support explicitly before.  The bug is
actually within the mapper, in that it wants to post-fetch the value of
that column in order to get the foreign key value for order_items,
but the Order object hasn't been totally prepared yet for that...so
that's the bug I'll fix here.

So here is an uglier workaround, similar to the one below, which works
around the primary key issue: a function which will force the
sequence to be pre-executed (normally non-primary-key defaults are
executed inline and post-fetched):

def get_seq(ctx):
    return ctx.connection.execute(Sequence('orders_order_id_seq'))

orders_table = Table('orders', metadata,
                     Column('order_id', Integer, default=get_seq),
                     autoload=True)



On Jan 28, 2008, at 4:35 PM, Michael Bayer wrote:


 hey Phil -

 reflection bug.  Heres a workaround for now:

 orders_table = Table('orders', metadata,
 Column('order_id', Integer, Sequence('orders_order_id_seq'),
 primary_key=True),
 autoload=True)


 [SNIP] quoted message

[sqlalchemy] SQLAlchemy advanced tutorial at PyCon

2008-01-28 Thread Jonathan Ellis

Hi all,

Michael and I will be presenting beginning and advanced SQLAlchemy
tutorials at PyCon in March.  The beginning session will cover similar
ground to my OSCON tutorial last year:
http://spyced.blogspot.com/2007/07/final-version-of-oscon-sqlalchemy.html

What would you like to see covered in an advanced SQLAlchemy session?

Here is what I threw together for the tutorial proposal, as a starting point.


SQL layer
---------
 - building a select
 - updates
 - deletes


Advanced queries
----------------
 - Selects, subselects
 - Labels and aliases
 - Self-referential
 - add_entity, resultset queries


Advanced mapping
----------------
 - adjacency list relationship
 - ON DELETE CASCADE
 - composite types


Inheritance
-----------
 - Inheritance patterns
 - Joined table inheritance
 - Optimization


MapperExtensions
----------------
 - __init__ and create_instance
 - writing a second-level cache as an Extension


-Jonathan




[sqlalchemy] Re: foreign key support in SQLite

2008-01-28 Thread jason kirtland

Manlio Perillo wrote:
 [SNIP]
 
 Thanks.
 I like the idea of custom events.
 
 However, instead of having:
 table.events['after-create'].append(run)
 
 IMHO it is better an higher level:
 table.add_event(phase, event_type, callable, *args, **kwargs)
 
 Where phase is create or drop and event_type is before or after.

An updated implementation is up on the trac, give it a whirl if you
like.  It includes table.append_ddl_listener(event, listener).

Depending on how regular your schema is, it may be possible to
generically implement the FK-like triggers with the simple templating
support built into the DDL object in the patch.  If not, it should be a
very simple task with the listener events.  They get called with the
Table, so you can introspect it and do conditional generation based on
the columns in each individual table...
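
For example, a listener that generates the SQLite FK triggers from a table's
ForeignKey objects might look roughly like this; the (event, table, bind)
listener signature is an assumption based on this thread, not necessarily the
final API in the patch:

def fk_triggers(event, table, bind):
    # one "reject orphan INSERTs" trigger per foreign key found on the table
    for column in table.columns:
        for fk in column.foreign_keys:
            ref = fk.column            # the referenced parent column
            bind.execute("""
                CREATE TRIGGER fki_%(tab)s_%(col)s
                BEFORE INSERT ON %(tab)s
                FOR EACH ROW BEGIN
                    SELECT RAISE(ABORT, 'foreign key violation on %(tab)s.%(col)s')
                    WHERE NEW.%(col)s IS NOT NULL AND
                          (SELECT %(refcol)s FROM %(reftab)s
                           WHERE %(refcol)s = NEW.%(col)s) IS NULL;
                END;
            """ % {"tab": table.name, "col": column.name,
                   "reftab": ref.table.name, "refcol": ref.name})

# hypothetical registration, using the event name style mentioned earlier:
# child_table.append_ddl_listener('after-create', fk_triggers)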

-j




[sqlalchemy] Can orm Query return an empty instance when no results are found?

2008-01-28 Thread Jim Musil

Hi, this is probably a basic question, but I'm a little fuzzy on the
best approach.

Is there a way to retrieve OR create an object with just one call? It
seems like I'm always doing the same pattern:

1. Look in the table for an existing record
2. Return the object if it already exists
3. Create a new object if it doesn't exist

It'd be nice if Query() could look for an object and return it, or
return a new instance if it doesn't find it.

Perhaps this is too much of a short cut...
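
One common way to fold that pattern into a single helper (the get_or_create
name and the defaults argument are just conventions borrowed from other ORMs,
not anything built into Query):

from sqlalchemy.orm.exc import NoResultFound

def get_or_create(session, model, defaults=None, **filters):
    """Return (instance, created) for the row matching **filters."""
    try:
        return session.query(model).filter_by(**filters).one(), False
    except NoResultFound:
        instance = model(**dict(filters, **(defaults or {})))
        session.add(instance)     # session.save() in the 0.4-era API of this thread
        session.flush()           # so the new row gets its id right away
        return instance, True

# usage (Item as in the other thread today):
# item, created = get_or_create(session, Item, description='Item E',
#                               defaults={'price': 0})

Note that without a unique constraint and a retry on IntegrityError this is
not safe if two sessions race to create the same row.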

Jim



[sqlalchemy] Re: No attribute '_instance_key': Defining primary keys and surrogate primary keys

2008-01-28 Thread Michael Bayer

hi there -

the bug is fixed in r4103, the test below will work as stated.  Or you  
can stick with the workaround for now.


On Jan 28, 2008, at 4:03 PM, Phil Coombs wrote:


 [SNIP] quoted message





[sqlalchemy] Re: foreign key support in SQLite

2008-01-28 Thread Michael Bayer


On Jan 28, 2008, at 3:09 PM, Manlio Perillo wrote:


 [SNIP]
 Thanks.
 I like the idea of custom events.

 However, instead of having:
 table.events['after-create'].append(run)

 IMHO it is better an higher level:
 table.add_event(phase, event_type, callable, *args, **kwargs)

 Where phase is create or drop and event_type is before or  
 after.


what I like about *my* version, i.e. not the events[] version but the
previous DDL(postcreate=..), is that the keyword argument interface
provides Python-based validation of the keywords you're using, and the
keywords also show up nicely in docstrings.  Using strings like
'after-create' feels icky, as we have to reimplement validation to
catch an unknown keyword like 'after-createee'.
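
A toy illustration of the point (attach() and the events dict below are
stand-ins for this discussion, not APIs from the ticket): a misspelled keyword
argument fails immediately and loudly, while a misspelled string key has to be
caught by validation the library writes itself.

def attach(table, precreate=None, postcreate=None, predrop=None, postdrop=None):
    """Keyword-style registration: Python rejects unknown hook names for us."""
    return {"precreate": precreate, "postcreate": postcreate,
            "predrop": predrop, "postdrop": postdrop}

attach("some_table", postcreate="CREATE TRIGGER ...")         # fine
try:
    attach("some_table", postcreatee="CREATE TRIGGER ...")    # typo
except TypeError as err:
    print(err)    # unexpected keyword argument 'postcreatee'

# string-keyed events: the typo is only caught if the library validates keys itself
events = {"before-create": [], "after-create": [],
          "before-drop": [], "after-drop": []}
try:
    events["after-createee"].append("CREATE TRIGGER ...")
except KeyError as err:
    print("unknown event:", err)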


