[sqlalchemy] Re: SQLAlchemy 0.4.1 released

2007-11-19 Thread Nebur

I very much appreciate the API hardening. It immediately broke some
tests of mine, exposing a year-old misuse of save().
Thank you!
 Ruben
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Unnecessary selects when cascade="all, delete-orphan" and composite keys

2007-11-19 Thread Anton V. Belyaev

  I am building a grading system for students and got unexpected
  performance problems. I am using a composite key for marks, which
  refer to students and subjects.

  When I create a mark (for student_1 and subject_1), unnecessary
  select operations are performed (select all marks for student_1 and
  select all marks for subject_1).

  Why are these selects generated, and how can I avoid them?

 The SQL issue looks like the "marks" collections on Student and
 Subject issuing a lazy load of their full collection of Mark items
 before the backref event appends the Mark object to each of them, i.e.
 the event that occurs when you issue mark.student = student_1.
 Ordinary collections currently don't handle being present in a
 partial state, so in order for an append to occur, they load their
 contents.

 As a workaround, you can use lazy="dynamic" relations for the
 collections; the dynamic relation is a special relation that can
 handle append operations without the full collection being available.
 lazy="noload" would work as well, but then you couldn't read from your
 collections.
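The idea behind that workaround can be sketched in plain Python (a hypothetical stand-in for illustration, not SQLAlchemy's actual implementation): a collection that records appends without triggering a load of its existing contents, and only fires the load when it is actually read.

```python
class AppendableLazyCollection:
    """Sketch: accept appends without loading existing contents.

    load_func stands in for the lazy load an ordinary collection would
    fire on first access (e.g. a SELECT of all existing Mark rows).
    """
    def __init__(self, load_func):
        self._load = load_func
        self._pending = []   # appends recorded before any load
        self._loaded = None  # existing contents, fetched lazily

    def append(self, item):
        self._pending.append(item)  # no load needed for an append

    def all(self):
        if self._loaded is None:
            self._loaded = self._load()  # load only when reading
        return self._loaded + self._pending

loads = []
marks = AppendableLazyCollection(lambda: loads.append("select") or ["mark1"])
marks.append("mark2")   # no SELECT issued for the append
print(loads)            # -> []
print(marks.all())      # -> ['mark1', 'mark2'] (the SELECT happens here)
```

An ordinary collection, by contrast, would call `load_func` inside `append()` as well, which is exactly the extra SELECT seen in the question.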

 A future release may look into merging some of the dynamic relation
 behavior into an ordinary un-loaded collection so that this workaround
 would not be needed. Actually this might not be a bad idea for 0.4.2,
 so I've added ticket #871.

Thanks a lot for your reply!

Setting the relation to lazy="dynamic" really eliminated the
unnecessary selects when creating a Mark. Making the default relation a
bit dynamic is a great idea!

There is a problem when the relation is both lazy="dynamic" and
cascade="all, delete-orphan":
when a parent (Subject or Student) object is deleted, its children
(Marks) are not deleted. When the relation is not dynamic, the children
are deleted correctly.

Is this the proper behavior?

Thanks for the attention.

P.S. Congratulations to the developers and users of SA on version 0.4.1!



[sqlalchemy] Re: Generative queries in SA0.3 ?

2007-11-19 Thread Glauco

exhuma.twn wrote:
 I have seen that in SA 0.4 one can do something like:

 q = table.select()
 q = q.where(x=1)
 q = q.where(z=2)
 ...

 Is this also possible in SA 0.3?

 I want to build a web-page where a user can refine filters on the go,
 to perform a drill-down in a data set. Doing this with plain-text SQL
 is fairly easy, but doing it with SA is not as straight-forward.

 Considering this scenario, would it be possible to *remove* one such
 filter from a query? Say, remove the x=1 from the above query.
   

You must postpone rendering of the query's WHERE clause...

where_clause = []
where_clause.append( cond1 )
where_clause.append( cond2 )
where_clause.append( cond3 )


At this point in the program you can remove a condition positionally or
by content:

where_clause.remove( cond2 )
or
del where_clause[1]



Finally you can render your query:


your_base_qry = session.query( bla bla )
if where_clause:
   your_removed_qry = your_base_qry.filter( and_( *where_clause ) )
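The same compose-then-render pattern can be shown end to end with plain Python predicates (a stdlib stand-in for the SQLAlchemy conditions, so the add/remove steps are concrete; all names here are made up for illustration). Keying the conditions by name avoids the positional bookkeeping of `del where_clause[1]`:

```python
# Stand-in for SA column conditions: named predicates over dict rows.
rows = [{"x": 1, "z": 2}, {"x": 1, "z": 9}, {"x": 5, "z": 2}]

where_clause = {
    "cond_x": lambda r: r["x"] == 1,
    "cond_z": lambda r: r["z"] == 2,
}

def run_query(rows, conditions):
    """Render the 'query' only now, AND-ing whatever conditions remain."""
    return [r for r in rows if all(cond(r) for cond in conditions.values())]

print(run_query(rows, where_clause))  # both filters applied
del where_clause["cond_z"]            # the user removes one refinement
print(run_query(rows, where_clause))  # re-render without it
```

In the SA version, `run_query` corresponds to building `and_( *where_clause )` and calling `.filter()` once, at the end.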





Glauco

-- 
++
  Glauco Uri - Programmer
glauco(at)allevatori.com 
   
  Sfera Carta Software®  [EMAIL PROTECTED]
  Via Bazzanese,69  Casalecchio di Reno(BO) - Tel. 051591054 
++






[sqlalchemy] pyodbc and inserts... again

2007-11-19 Thread polaar

I've just noticed a remaining problem with the pyodbc inserts with
triggers/scope_identity()/SET NOCOUNT ON/nextset() thing ;-) (it's
still a workaround, if I understand correctly?)
If NOCOUNT is off (e.g. turned off again by the trigger, as it seems to
be in my case), MSSQLDialect_pyodbc.do_execute jumps over one rowcount
to get to the result set, but it is possible that there are more
rowcounts than foreseen (multiple inserts done by the trigger).
I've tried the following change in MSSQLDialect_pyodbc.do_execute()
with good results:

old version, which jumps over one set in case of an exception:

    try:
        row = cursor.fetchone()
    except pyodbc.Error, e:
        # if NOCOUNT is OFF, fetchone throws an exception and we have
        # to jump over the rowcount to the result set
        cursor.nextset()
        row = cursor.fetchone()

new version, which keeps jumping as long as there are errors:

    while True:
        try:
            row = cursor.fetchone()
            break
        except pyodbc.Error, e:
            # if NOCOUNT is OFF, fetchone throws an exception and we
            # have to jump over the rowcount to the result set
            cursor.nextset()

Probably still not ideal, but it seems better than the current one...
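One way the loop above could be hardened further: pyodbc's Cursor.nextset() returns a false value once no more result sets remain, so the loop can bail out instead of spinning forever if the result set never appears. A sketch of that bounded variant, using a hypothetical stand-in cursor (FakeCursor and FakeError are made up here so the logic can run without a database):

```python
class FakeError(Exception):
    """Stand-in for pyodbc.Error in this sketch."""

class FakeCursor:
    """Simulates a cursor where several rowcounts precede the result set."""
    def __init__(self, rowcounts_before_resultset, row):
        self._skips = rowcounts_before_resultset
        self._row = row

    def fetchone(self):
        if self._skips > 0:
            raise FakeError("no result set yet")  # like fetchone() on a rowcount
        return self._row

    def nextset(self):
        if self._skips == 0:
            return False  # no more sets remain
        self._skips -= 1
        return True

def skip_to_resultset(cursor, error=FakeError):
    """Keep skipping rowcounts until a fetchable result set appears."""
    while True:
        try:
            return cursor.fetchone()
        except error:
            if not cursor.nextset():  # out of sets: re-raise, don't spin
                raise

print(skip_to_resultset(FakeCursor(2, ("id", 42))))  # -> ('id', 42)
```

The only change from the posted version is checking nextset()'s return value, so a trigger that produces no result set at all surfaces the original error instead of an infinite loop.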

greetings,

Steven




[sqlalchemy] Re: Unnecessary selects when cascade="all, delete-orphan" and composite keys

2007-11-19 Thread Michael Bayer


On Nov 19, 2007, at 6:22 AM, Anton V. Belyaev wrote:



 Setting the relation to lazy="dynamic" really eliminated the
 unnecessary selects when creating a Mark. Making the default relation a
 bit dynamic is a great idea!

 There is a problem when the relation is both lazy="dynamic" and
 cascade="all, delete-orphan":
 when a parent (Subject or Student) object is deleted, its children
 (Marks) are not deleted. When the relation is not dynamic, the children
 are deleted correctly.

 Is this the proper behavior?

 Thanks for the attention.


yes, this is the caveat of the dynamic relation: on the read side,
all it's doing is issuing a query to the database.  Therefore, any data
which hasn't been flushed will not show up at that end.  For this
reason, the dynamic relation was intended to produce a smoother
experience in conjunction with a session that has autoflush=True, so
that whenever you read from the relation, a flush occurs first and you
get the latest state back from the database.

So you can see that the new behavior I've proposed in trac would
produce more specifically the behavior you're looking for in this
case.  I hope you can wait for it!

- mike






[sqlalchemy] Using count on a relation without loading all related entities?

2007-11-19 Thread Thomas Wittek

If you want to count the children of a parent entity you can do it
like this:

parent.children.count(Child.id)

Generally, this is fine. But it loads all children into the session
and then counts them in Python.
For large sets this will become very slow.

Wouldn't it be smarter to do the count in the database, as it would be
done with the following query?

session.query(Child).filter(Parent.id==Child.parent_id).count(Child.id)

Currently I have to implement a children_count() method in the parent
to avoid loading all the children from the database.



[sqlalchemy] Re: Using count on a relation without loading all related entities?

2007-11-19 Thread Michael Bayer


On Nov 19, 2007, at 9:49 AM, Thomas Wittek wrote:


 If you want to count the children of a parent entity you can do it
 like that:

parent.children.count(Child.id)

 Generally, this is fine. But it loads all children into the session
 and then manually counts them.
 For large sets this will become very slow.

 Wouldn't it be smarter to do the count in the database, as it would be
 done with the following query?


 session.query(Child).filter(Parent.id==Child.parent_id).count(Child.id)

 Currently I have to implement a children_count() method in the parent
 to avoid loading all the children from the database.

The children collection is nothing more than a plain Python list,  
which loads from the database the first time you access the  
attribute.  By the time the count method is accessed, the list is  
already loaded; this is currently how attribute instrumentation works.

To have it work differently, accessing parent.children when it's not  
yet loaded would need to return some kind of proxy object, which  
decides whether or not to populate only when you actually do something  
with it.  The populate itself would then need to switch the attribute  
around, or perhaps remain in its proxied state.  We have done some  
experiments with this and it's possibly something we might offer as an  
option in the future.  It might even be as easy as using a custom  
collection_class option, i.e. collection_class=ProxyCollection(list)  
or similar.  I'd be hesitant to make this the default, as our current  
collection behavior features the ability to deal directly with a  
native Python data structure, with a minimal amount of magic going on.

Currently, there is the dynamic relation option, where the collection  
on the read side is actually just a Query object set up with the join  
you have there, so that would produce this behavior right now, but  
then you aren't dealing with a true collection.
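The proxy idea described above can be sketched in plain Python (a hypothetical illustration, not SA's actual instrumentation): a list-like object that answers len() through a cheap count callback, and only materializes its items when something genuinely needs them.

```python
class ProxyCollection:
    """Hypothetical sketch of a lazily-populated collection.

    count_func answers COUNT-style questions cheaply (think SELECT COUNT(*));
    load_func fetches the full collection only when items are really needed.
    """
    def __init__(self, count_func, load_func):
        self._count = count_func
        self._load = load_func
        self._items = None  # not loaded yet

    def __len__(self):
        if self._items is not None:
            return len(self._items)
        return self._count()  # answer without loading the items

    def __iter__(self):
        if self._items is None:
            self._items = self._load()  # populate on first real access
        return iter(self._items)

loads = []
children = ProxyCollection(
    count_func=lambda: 3,
    load_func=lambda: loads.append("loaded") or [1, 2, 3],
)
print(len(children))   # -> 3, no load performed
print(loads)           # -> []
print(list(children))  # -> [1, 2, 3], the load happens here
```

This is exactly the trade-off named in the reply: len() becomes cheap, but the object is a proxy rather than a native Python list.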




[sqlalchemy] Concrete Inheritance problem

2007-11-19 Thread Partha

I have 2 tables, Person (id, name) and Employee (id, salary), and every
Employee 'isa' Person, so employee.id == person.id. I am trying to use
the concrete inheritance (i.e. pjoin) example provided in the
documentation. My mapping looks as follows.

person_table = Table("persons", __meta__,
                     Column("id", Integer, primary_key=True),
                     Column("name", String(80))
                     )

employee_table = Table("empys", __meta__,
                       Column("id", Integer, ForeignKey("persons.id"),
                              primary_key=True),
                       Column("salary", Integer),
                       )

class Person (object): pass
class Employee(Person): pass

pjoin = polymorphic_union({
    'person': person_table,
    'employee': employee_table
}, 'type', 'pjoin')


person_mapper = mapper(Person, person_table, select_table=pjoin,
                       polymorphic_on=pjoin.c.type,
                       polymorphic_identity='person')
emp_mapper = mapper(Employee, employee_table, inherits=person_mapper,
                    concrete=True, polymorphic_identity='employee')



I now want to add a new Employee to the system with the following
snippet.

e = Employee()
e.name = 'TestEmpl'
e.salary = 100
session.save(e)
session.commit()

The problem is that when it tries to save the employee, SQLAlchemy
raises the following error:

sqlalchemy.exceptions.ProgrammingError: (ProgrammingError) relation
"empys_id_seq" does not exist
'select nextval(\'empys_id_seq\')' None

Since all ids in the employees table map directly to the persons table
(which has the correct sequence), I would have thought SQLAlchemy would
first store the Person part and then store the Employee part. Any clues
on how to correct this?
I also tried adding a Sequence to the employee table, forcing it to use
the same one as the persons table...

employee_table = Table("empys", __meta__,
                       Column("id", Integer, ForeignKey("persons.id"),
                              Sequence("person_id_seq"), primary_key=True),
                       Column("salary", Integer),
                       )


But now it raises a constraint violation, because it is trying to
insert a row into the employees table without inserting anything into
persons first...

Any ideas how I can fix this? I would greatly appreciate any help in
this regard. Thanks

Partha


Here is the full program. Thanks.

from sqlalchemy import create_engine, MetaData, Table, Column, types, \
    Sequence
from sqlalchemy import Table, Column, Integer, String, DECIMAL, \
    Numeric, ForeignKey, DateTime, Boolean, CHAR
from sqlalchemy.orm import mapper, sessionmaker, relation, \
    polymorphic_union

__meta__ = MetaData()

person_table = Table("persons", __meta__,
                     Column("id", Integer, primary_key=True),
                     Column("name", String(80))
                     )

employee_table = Table("empys", __meta__,
                       Column("id", Integer, ForeignKey("persons.id"),
                              primary_key=True),
                       Column("salary", Integer),
                       )

class Person(object): pass

class Employee(Person): pass

pjoin = polymorphic_union({
    'person': person_table,
    'employee': employee_table
}, 'type', 'pjoin')


person_mapper = mapper(Person, person_table, select_table=pjoin,
                       polymorphic_on=pjoin.c.type,
                       polymorphic_identity='person')
emp_mapper = mapper(Employee, employee_table, inherits=person_mapper,
                    concrete=True, polymorphic_identity='employee')


engine = create_engine('postgres:///test?user=postgres&password=foo',
                       echo=True)
session = sessionmaker(autoflush=True, bind=engine,
                       transactional=True)()

def refresh(engine=engine):
    __meta__.drop_all(engine)
    __meta__.create_all(engine)

def main():
    refresh()
    e = Employee()
    e.name = 'TestEmpl'
    e.salary = 100
    session.save(e)
    session.commit()

if __name__ == '__main__':
    main()




[sqlalchemy] Re: Concrete Inheritance problem

2007-11-19 Thread sdobrev

If it's about concrete inheritance, then Employee contains ALL the info
it needs, that is, a full copy of Person plus whatever else is there,
and it is completely independent of the persons table.
So for that case,
  a) the foreign key is not needed
  b) inserting into employee_tbl will never insert anything into
person_tbl; they are independent.

Now, what you're expecting (chaining of ids and rows etc.) will come
from joined-table inheritance. See the docs about that, and just remove
concrete=True from your mapper.
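For reference, the joined-table shape of the mappers from this thread might look like this (a minimal, non-polymorphic sketch in the 0.4-era API of this thread, not tested here): keep the ForeignKey on empys.id, and drop concrete=True together with the polymorphic_union.

```python
# Sketch only (SQLAlchemy 0.4-era API): joined-table inheritance.
# The ForeignKey("persons.id") on employee_table stays; no pjoin needed.
person_mapper = mapper(Person, person_table)
emp_mapper = mapper(Employee, employee_table, inherits=person_mapper)
```

With this shape, saving an Employee should insert the persons row first (drawing its id from the persons sequence) and then the empys row with the same id, which is the behavior the original post expected.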


Partha wrote:
 I have 2 tables Person (id, name) and Employee (id, salary) and every
 Employee 'isa' Person, so employee.id == person.id. [...]





[sqlalchemy] Re: SQLAlchemy 0.4.1 released

2007-11-19 Thread Chris M

I hear Super Mario Galaxy is one hell of a game!

On Nov 18, 7:26 pm, Michael Bayer [EMAIL PROTECTED] wrote:
 Hello alchemers -

 This is an awesome release.  I'm excited about this one. With our new
 shiny clean 0.4 codebase, internals are starting to look a lot more
 intelligent, and new things are becoming possible.  Call counts are
 going down like a rock.   Intents and behaviors are clarifying and
 sharpening... plus Super Mario Galaxy arrives tomorrow, so it's time
 for a break.

 Some highlights of this release:

 - you might notice that some eager load operations are suddenly a lot
 faster, particularly on MySQL.  This is because we've improved the
 queries that are issued when you use eager loading with LIMIT and/or
 OFFSET; whereas we previously would wrap the LIMITed query in a
 subquery, join back to the mapped table, and then add the eager
 criterion outer joined against the mapped table, a trick we've been
 doing since 0.1.0, we now outer join the eager criterion directly
 against the subquery, and the main mapper pulls rows straight from the
 subquery columns.  Improved SQL expression functionality has allowed
 this to be possible.  What it means is, an eager load with LIMIT/
 OFFSET uses one less JOIN in all cases.   This is an example of SA's
 very rich expression constructs paying off - since a query that is
 much more efficient on the database side trumps the hundred or so
 method calls spent compiling the query any day.

 - session.refresh() and session.expire() can now operate on individual
 instance attributes.   Just say session.expire(myobject, ['items',
 'description', 'name']), and all three of those attributes, whether
 they're just columns or relations to other objects, will go blank
 until you next access them on the instance, at which point they are
 refreshed from the DB.  Column attributes will be grouped together in
 a single select() statement and related tables will be lazy loaded
 individually right now.  Also, the internal mechanisms used by
 deferred() columns, refresh/expire operations, and polymorphically
 deferred columns have all been merged into one system, which means
 less internal complexity and more consistent behavior.

 - the API of the session has been hardened.  This means it's going to
 check more closely that operations make sense (and it also no longer
 raises some errors that did not make sense in certain circumstances).
 The biggest gotcha we've observed so far from people using trunk is
 that session.save() is used *only* for entities that have not been
 saved to the database yet.  If you put an already-stored instance in
 save(), you'll get an error.  This has always been the contract, it
 just hasn't complained previously. If you want to put things in the
 session without caring if they've already been saved or not, use
 session.save_or_update(myinstance).  We've also fixed things regarding
 entities that have been de-pickled and placed back into the session -
 some annoying errors that used to occur have been fixed.

 - still in the session category, the merge() method gets a
 dont_load=True argument.  Everyone using caches like memcached can
 now place copies of their cached objects back in the session using
 myinstance = merge(mycachedinstance, dont_load=True), and the
 instance will be fully copied as though it were loaded from the
 database, *without* a load operation proceeding; it will trust that
 you want that instance state in the session.

 - query.options() are way more intelligent.  Suppose you have a large
 bidirectional chain of relations.   If you say something like
 query.options(eagerload('orders.items.keywords.items.orders')), it
 will accurately target the 'orders' relation at the end of that chain
 and nothing else.  On a similar topic, self-referential eagerloads can
 be set up on the fly, such as
 query.options(eagerload_all('children.children.children')) without
 needing to set the join_depth flag on relation().

 - method call overhead continues to be cut down.  Many expensive calls
 in statement compilation, clauseelement construction, and statement
 execution have been whacked away completely and replaced with simpler
 and more direct behaviors, and results are more accurate and correct.
 This continues along from all that we've done in 0.4 and at this point
 most call counts should be half of what they were in 0.3.   I invite
 everyone to take a tour around expression.py, compiler.py, and
 critique; we've had a huge amount of housecleaning in these modules
 (and others), and further suggestions/ideas/flames are entirely
 welcome (though not too early in the morning) on sqlalchemy-devel.

 - in the fringe category, you can now define methods like __hash__(),
 __nonzero__(), and __eq__() on your mapped instances and the ORM won't
 get confused; we've rearranged things so that those methods are not
 accessed by the ORM.

 - a new, experimental MaxDB dialect, lots of typing fixes for MySQL
 and Oracle, and lots more.

 As 

[sqlalchemy] Re: Concrete Inheritance problem

2007-11-19 Thread Partha

Thank you! That worked great.

Partha

On Nov 19, 2:41 pm, [EMAIL PROTECTED] wrote:
 if it's about concrete inheritance, then employee contains ALL info it
 needs, that is, a full copy of person + whatever else is there,
 and is completely independent from person table.
 so for that case,
   a) foregn key is not needed
   b) inserting in employee_tbl will never insert stuff in person_tbl - they
 are independent.

 now, what u're expecting (chaining of id's and rows etc) will come from
 joined_table inheritance. See about that in docs; and just remove
 concrete=True from your mapper.

  Partha wrote:
   I have 2 tables Person (id, name) and Employee (id, salary) and every
   Employee 'isa' Person, so employee.id == person.id. [...]




[sqlalchemy] Re: Generative queries in SA0.3 ?

2007-11-19 Thread exhuma.twn

On Nov 19, 12:41 pm, Glauco [EMAIL PROTECTED] wrote:
 exhuma.twn wrote:



  I have seen that in SA 0.4 one can do something like:

  q = table.select()
  q = q.where(x=1)
  q = q.where(z=2)
  ...

  Is this also possible in SA 0.3?

  I want to build a web-page where a user can refine filters on the go,
  to perform a drill-down in a data set. Doing this with plain-text SQL
  is fairly easy, but doing it with SA is not as straight-forward.

  Considering this scenario, would it be possible to *remove* one such
  filter from a query? Say, remove the x=1 from the above query.

 You must postpone rendering of the query's WHERE clause...

 where_clause = []
 where_clause.append( cond1 )
 where_clause.append( cond2 )
 where_clause.append( cond3 )

 At this point in the program you can remove a condition positionally or
 by content:

 where_clause.remove( cond2 )
 or
 del where_clause[1]

 Finally you can render your query:

 your_base_qry = session.query( bla bla )
 if where_clause:
your_removed_qry = your_base_qry.filter( and_( *where_clause ) )

 Glauco


Thanks. This is nearly the way I am doing it right now. I didn't know
and_ could take a list. This will simplify things. I suppose I will
stick to this method then, as refining filters (adding and removing
them) is something that is done quite often.