[sqlalchemy] Re: Info needed regarding the use of cascade

2008-11-18 Thread --- [EMAIL PROTECTED] ---


Thank you Michael,

> you only need a single relation() + backref(), books->stock.

did you mean like this?


class Stock(declarative_base):
    __tablename__ = 'tbl_stock'

    # ... column definitions ...
    pass

class Book(declarative_base):
    __tablename__ = 'tbl_books'

    # ... column definitions ...
    stock = relation('Stock', backref=backref('tbl_books', order_by=id))


if so, how can I retrieve all the books in a particular stock?
in my case I could have done it by

>>> ins_stock = session.query(Stock).filter_by(id=100).one()
>>> print ins_stock.books
[<book1 object>, <book2 object>, ...]



[sqlalchemy] Re: Info needed regarding the use of cascade

2008-11-18 Thread King Simon-NFHD78

 -Original Message-
 From: sqlalchemy@googlegroups.com 
 [mailto:[EMAIL PROTECTED] On Behalf Of 
 ---[EMAIL PROTECTED]@il06exr02.mot.com
 Sent: 18 November 2008 10:04
 To: sqlalchemy
 Subject: [sqlalchemy] Re: Info needed regarding the use of cascade
 
 
 
 Thank you Michael ,
 
  you only need a single relation() + backref(), books-stock.
 
 did you mean like this ?
 
 
 class Stock(declarative_base):
   __tablename__ = 'tbl_stock'
   
   
   
   pass
 
 class Book(declarative_base):
   __tablename__ = 'tbl_books'
   
   
   stock = relation('Stock', backref=backref
 ('tbl_books',order_by=id))
 
 
 if so how can i retrieve all the books in a particular stock  ??
 in my case i could have done it by
 
  ins_stock = session.querry(Stock).filter(id=100).one()
  print ins.stock.books
 [book1 objectbook2 object ...]


The 'backref' of a relation is the name of a property that gets placed
on the 'other end' of the relation, pointing back at the original
object. So with the configuration that you had above, you should be able
to say:

>>> ins_stock = session.query(Stock).filter_by(id=100).one()
>>> print ins_stock.tbl_books
[<book1 object>, <book2 object>, ...]

If you name your backref 'books', then you can use your Stock objects in
exactly the same way as you did before.
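
For illustration, a minimal sketch of that setup (the column names here are
assumed for the example, not taken from the thread):

from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relation, backref

Base = declarative_base()

class Stock(Base):
    __tablename__ = 'tbl_stock'
    id = Column(Integer, primary_key=True)

class Book(Base):
    __tablename__ = 'tbl_books'
    id = Column(Integer, primary_key=True)
    title = Column(String(100))
    stock_id = Column(Integer, ForeignKey('tbl_stock.id'))

    # backref='books' places a 'books' collection on Stock instances
    stock = relation(Stock, backref=backref('books', order_by=id))

# usage:
#   stock = session.query(Stock).filter_by(id=100).one()
#   print stock.books      # -> [<Book ...>, <Book ...>, ...]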

Hope that helps,

Simon




[sqlalchemy] flush session.is_modified()

2008-11-18 Thread sandro dentella

Hi,

  I have code like this:

if self.session.dirty:
for dirty in self.session.dirty:
if self.session.is_modified(dirty):
return True
if self.session.new:
for obj in self.session.new:
if self.session.is_modified(obj):
return True

I realized that if I have a session with autoflush=True, checking
is_modified(dirty_obj) triggers a flush(), so that the next loop on
'session.new' would never find anything.

While I personally have no reason to keep autoflush=True, I didn't
want to force this on everybody using my library (sqlkit). Is there a
way to force is_modified (with collections) not to flush(), the same
way a session.query can be issued without flushing?

TIA
sandro
*:-)

--
http://sqlkit.argolinux.org



[sqlalchemy] Re: flush session.is_modified()

2008-11-18 Thread Michael Bayer


is_modified() shouldn't trigger a flush.  Is that with rc4?

sandro dentella wrote:

 Hi,

   I have a code like this:

 if self.session.dirty:
 for dirty in self.session.dirty:
 if self.session.is_modified(dirty):
 return True
 if self.session.new:
 for obj in self.session.new:
 if self.session.is_modified(obj):
 return True

 I realized that if I have a session with autoflush=True, checking
 is_modified(dirty_obj), triggers a flush(), so that next loop on
 'session.new' would never find anything.

 While I personally have non reason to keep autoflush=True, I didn't
 want to force this for everybody using my library (sqlkit). Is there a
 way to force is_modified (with collection) not to flush(), the same
 way as session.query  can be issued without flushing?

 TIA
 sandro
 *:-)

 --
 http://sqlkit.argolinux.org
 






[sqlalchemy] Re: flush session.is_modified()

2008-11-18 Thread Alessandro Dentella

On Tue, Nov 18, 2008 at 10:27:32AM -0500, Michael Bayer wrote:
 
 
 is_modified() shouldn't trigger a flush.  is that with rc4 ?

No, it was svn rev. 4935. It's still the same with rev. 5311.
I verified again: with autoflush=False it just works; with autoflush=True
it shows that:

## here session.new is not empty

if self.session.dirty:
for dirty in self.session.dirty:
if self.session.is_modified(dirty):
return True

## session.dirty is empty

Do you need more details?


sandro
*:-)


--
http://sqlkit.argolinux.org   PyGTK + python + sqlalchemy




[sqlalchemy] Trouble with Strings getting converted to Unicode types

2008-11-18 Thread Harish Vishwanath
Hello All,

I am using Elixir 0.6.1 over SQLA 0.5rc2. Consider the below:

>>> from elixir import *
>>> class A(Entity):
...     name = Field(String(40))
...
>>> class B(A):
...     address = Field(String(40))
...
>>> engine = 'sqlite:///c:\\temp\\2.sqlite'
>>> metadata.bind = engine
>>> setup_all(True)
>>> a = A(name="A")
>>> type(a.name)
<type 'str'>
>>> b = B(name="B", address="B'sAddress")
>>> type(b.name)
<type 'str'>
>>> type(b.address)
<type 'str'>
>>> type(a.row_type)
<type 'NoneType'>
>>> session.flush()
>>> session.commit()
>>> d = session.query(A)[0]
>>> d.name
u'A'
>>> type(d.name)
<type 'unicode'>
>>> d.row_type
u'a'
>>> from elixir import options
>>> options.DEFAULT_POLYMORPHIC_COL_NAME
'row_type'
>>> options.POLYMORPHIC_COL_TYPE
String(length=40, convert_unicode=False, assert_unicode=None)
>>> options.POLYMORPHIC_COL_SIZE
40

Though I declare the fields of class A and B as strings, when I retrieve the
objects back from the database, they are automatically converted to
Unicode. Also, the default polymorphic column type is String, but row_type
after retrieval comes back as unicode. When I query SQLite, I see normal
strings:

sqlite> select * from __main___a;
1|A|a|
2|B|b|B'sAddress

I do not want my strings to get converted to Unicode, since I am having
issues with marshalling and transmitting unicode objects over the network.
Is there any option I am missing? Kindly let me know.


-- 
Regards,
Harish




[sqlalchemy] Re: flush session.is_modified()

2008-11-18 Thread Michael Bayer


is_modified() has a flag passive which when set to True, disables lazy
loaders from executing.  As it turns out the flag was also not hooked up
so you'd have to use the latest trunk to get this.

the reason it wants to lazily load things is because the modified
calculation compares the current value to that which is present in the
database, so when things are not loaded it can't make that determination.
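
A short sketch of that flag against the loop from the original post (assuming
a version where the flag is actually hooked up, per the note above):

def has_pending_changes(session):
    for obj in session.dirty:
        # passive=True: don't run lazy loaders while checking
        if session.is_modified(obj, passive=True):
            return True
    for obj in session.new:
        if session.is_modified(obj, passive=True):
            return True
    return False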


Alessandro Dentella wrote:

 On Tue, Nov 18, 2008 at 10:27:32AM -0500, Michael Bayer wrote:


 is_modified() shouldn't trigger a flush.  is that with rc4 ?

 no it was svn rev. 4935. It's still the same with rev. 5311.
 I verified again: autoflush=False, just works. Autoflush=True
 shows that

 ## here session.new is not empty

 if self.session.dirty:
 for dirty in self.session.dirty:
 if self.session.is_modified(dirty):
 return True

 ## session.dirty is empty

 Do you need more details?


 sandro
 *:-)


 --
 http://sqlkit.argolinux.org   PyGTK + python + sqlalchemy

 






[sqlalchemy] Mapper Extensions, after_update and old data

2008-11-18 Thread Adam

I have a project where I need to monitor the changes to a record -
when using after_update, is it possible to see what the data was
before the update?



[sqlalchemy] Re: adding objects to table only once

2008-11-18 Thread Faheem Mitha



On Mon, 17 Nov 2008, Faheem Mitha wrote:

 Hi,

 I've written a session transcript to init db tables and add objects 
 (well, rows) to the tables. The issue I'm currently facing is how to 
 make the creating and populating the tables section of the script a 
 no-op when the objects exist. If the tables already exist sqlalchemy 
 does nothing, which is fine. However, this script currently does try to 
 add the objects that are already there, and so throws an exception. I 
 suppose the thing to do would be to check for each object whether it 
 already exists in the db, and do nothing if so. What would be the 
 simplest/cleanest way to do so? I've been fiddling with this for a while 
 without finding an obviously good solution. Is it possible to check 
 whether an object is already in a specific table?

[following up to my own message]

The following approach works, but is kinda kludgy. In particular, I'd like 
to genericise it. The main obstacle in doing so is finding a generic 
expression for the primary key. There is always a primary key, and by 
definition it is unique, right? So, I think it makes sense to use that for 
comparison, but the actual name of the primary key can differ and it can 
also be composite. So, is there a way to access it in a generic way? 
Alternatively, is there a better approach to this?

   Thanks, Faheem.

def add_patient_obj(session, patient_obj):
    """Check if the object's primary key exists in the db. If so, exit; else add."""
    pid = patient_obj.id
    if session.query(Patient).filter_by(id=pid).count() > 0:
        print "Patient object with id %s is already in db." % pid
        return
    else:
        session.save(patient_obj)
        session.commit()
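
A generic sketch of the same check (the helper name is made up; it leans on
object_mapper() and Query.get(), and assumes the object's primary key is
already populated before the check):

from sqlalchemy.orm import object_mapper

def add_if_absent(session, obj):
    # Derive the primary key generically from the object's mapper;
    # primary_key_from_instance() also handles composite keys.
    mapper = object_mapper(obj)
    pk = tuple(mapper.primary_key_from_instance(obj))
    if session.query(type(obj)).get(pk) is not None:
        print "%s with primary key %s is already in db." % (type(obj).__name__, pk)
        return
    session.save(obj)
    session.commit()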




[sqlalchemy] Re: flush session.is_modified()

2008-11-18 Thread Alessandro Dentella

On Tue, Nov 18, 2008 at 11:31:38AM -0500, Michael Bayer wrote:
 
 
 is_modified() has a flag passive which when set to True, disables lazy
 loaders from executing.  As it turns out the flag was also not hooked up
 so you'd have to use the latest trunk to get this.
 
 the reason it wants to lazily load things is because the modified
 calculation compares the current value to that which is present in the
 database, so when things are not loaded it can't make that determination.


perfect!  thanks a lot

sandro
*;-)

-- 

Sandro Dentella  *:-)
http://sqlkit.argolinux.orgSQLkit home page - PyGTK/python/sqlalchemy




[sqlalchemy] Re: Trouble with Strings getting converted to Unicode types

2008-11-18 Thread Michael Bayer

pysqlite always returns Python unicode objects and this is outside of the
realm of SQLAlchemy.   I'm not familiar with a pysqlite option to change
this but you should consult their site for any options related to it.
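
For what it's worth, a sketch of the sort of pysqlite-level knob involved
(sqlite3's per-connection text_factory; whether it suits the marshalling
requirements here is an assumption, and the setting belongs to pysqlite, not
SQLAlchemy):

import sqlite3
from sqlalchemy import create_engine

def connect():
    conn = sqlite3.connect('c:\\temp\\2.sqlite')
    conn.text_factory = str   # return bytestrings instead of unicode for TEXT
    return conn

engine = create_engine('sqlite://', creator=connect)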

Harish Vishwanath wrote:
 Hello All,

 I am using Elixir 0.6.1 over SQLA 0.5rc2. Consider the below :

 from elixir import *
 class A(Entity):
 *... name = Field(String(40))*
 ...
 class B(A):
 *... address = Field(String(40))*
 ...
 engine = sqlite:///c:\\temp\\2.sqlite
 metadata.bind = engine
 setup_all(True)
 a = A(name=A)
 type(a.name)
 type 'str'
 b = B(name=B,address=B'sAddress)
 type(b.name)
 type 'str'
 type(b.address)
 type 'str'
 type(a.row_type)
 type 'NoneType'
 session.flush()
 session.commit()
 * d = session.query(A)[0]
 d.name
 u'A'
 type(d.name)
 type 'unicode'
 d.row_type
 u'a'
 * from elixir import options
 * options.DEFAULT_POLYMORPHIC_COL_NAME
 'row_type'
 options.POLYMORPHIC_COL_TYPE
 String(length=40, convert_unicode=False, assert_unicode=None)
 options.POLYMORPHIC_COL_SIZE
 40*

 Though I declare fields of Class A and B as strings, when I retrieve the
 objects back from database, the are automatically getting converted to
 Unicode. Also, the default polymorphic column type is String, but row_type
 after retrieval is coming back as unicode. When I query SQLite, I see
 normal
 strings :

 sqlite select * from __main___a;
 1|A|a|
 2|B|b|B'sAddress

 I donot want my strings to get converted to Unicode, since I am having
 issues with marshalling and transmitting unicode objects over the network.
 Is there any option I am missing? Kindly let me know.


 --
 Regards,
 Harish

 






[sqlalchemy] Re: Mapper Extensions, after_update and old data

2008-11-18 Thread Michael Bayer


the API to retrieve history for an individual attribute looks like:

from sqlalchemy.orm import attributes
a, u, d = attributes.get_history(Class.attribute, obj)

where a, u, d are lists of added items, unchanged items, and deleted
items, respectively.   For a scalar attribute, a, u and d are lists of at
most length one.

Example:


user = Session.query(User).get(1)
a, u, d = attributes.get_history(User.name, user)

a, u, d would be along the lines of:

['jack'], [], ['ed']

to indicate the name field was changed from 'ed' to 'jack'.
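
A sketch of hooking that into a MapperExtension (the extension and mapping
names are made up; before_update is used here, where the prior values are
still present, and the get_history() call follows the form shown above):

from sqlalchemy.orm import MapperExtension, EXT_CONTINUE, mapper
from sqlalchemy.orm import attributes

class AuditExtension(MapperExtension):
    def before_update(self, mapper, connection, instance):
        # added / unchanged / deleted values for the 'name' attribute
        a, u, d = attributes.get_history(User.name, instance)
        if a or d:
            print "name changing from %r to %r" % (d, a)
        return EXT_CONTINUE

mapper(User, users_table, extension=AuditExtension())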


Adam wrote:

 I have a project where I need to monitor the changes to a record -
 when using after_update, is it possible to see what the data was
 before the update?
 






[sqlalchemy] Re: adding objects to table only once

2008-11-18 Thread az

my approach to this is to fetch all objects that look like my objects
and then add the ones that are missing, e.g.

q = query(A).filter( A.name.in_( allnames_that_should_be) )
missingnames = set( allnames_that_should_be) - set( a.name for a in q)
for name in missingnames: ...

it would probably be even faster if you use
q = query(A.name).filter(...) directly.

at least this gives one query as opposed to hundreds of
query.filter_by( some_user_primary_key) calls - but it depends on the
situation, numbers and filtering criteria.
Note the some_user_primary_key above: checking if dbid=5 exists is
meaningless - any 5th object will have it; you have to check by
meaningful user or application-field primary keys.

also, the in_() operator does not scale well (YMMV), so you may have to
slice into portions if the list is way too big.
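
A slightly more complete sketch of that approach (the class/column names and
the use of constructor keyword arguments are assumptions for the example):

names_that_should_be = ['alice', 'bob', 'carol']

# one IN query to learn which names already exist ...
existing = set(name for (name,) in
               session.query(A.name).filter(A.name.in_(names_that_should_be)))

# ... then add only the missing ones
for name in set(names_that_should_be) - existing:
    session.save(A(name=name))
session.commit()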

ciao
svilen

   I've written a session transcript to init db tables and add
   objects (well, rows) to the tables. The issue I'm currently
   facing is how to make the creating and populating the tables
   section of the script a no-op when the objects exist. If the
   tables already exist sqlalchemy does nothing, which is fine.
   However, this script currently does try to add the objects that
   are already there, and so throws an exception. I suppose the
   thing to do would be to check for each object whether it
   already exists in the db, and do nothing if so. What would be
   the simplest/cleanest way to do so? I've been fiddling with
   this for a while without finding an obviously good solution. Is
   it possible to check whether an object is already in a
   specific table?
 
  [following up to my own message]
 
  The following approach works, but is kinda kludgy. In particular,
  I'd like to genericise it. The main obstacle in doing so is
  finding a generic expression for the primary key. There is always
  a primary key, and by definition it is unique, right? So, I think
  it makes sense to use that for comparison, but the actual name of
  the primary key can differ and is can also be composite. So, is
  there a way to access it in a generic way? Alternatively, is
  there a better approach to this?
 
                                                           
   Thanks, Faheem.
 
  def add_patient_obj(session, patient_obj):
        Check if object primary key exists in db. If so,exit,
  else add. pid = patient_obj.id
       if session.query(Patient).filter_by(id=pid).count()  0:
           print Patient object with id %s is already in db.%pid
           exit
       else:
           session.save(patient_obj)
           session.commit()





[sqlalchemy] dynamic columns

2008-11-18 Thread g00fy

hi
so I have a list of languages (suffixes):
en
de
pl
ru
 etc...

now I have my article_table, where normally I would have the columns:
id, title, content

but now I want to have:
id, title_en, title_de, title_pl, title_ru, ..., content_en, ..., content_ru

how can I create the table definition dynamically according to the languages I
have?
[I am aware that I will have to alter my table when I add or
remove a language]
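
A sketch of building the per-language columns in a loop (the column types and
sizes are assumptions; the language list would come from your own config):

from sqlalchemy import MetaData, Table, Column, Integer, String, Text

languages = ['en', 'de', 'pl', 'ru']
metadata = MetaData()

columns = [Column('id', Integer, primary_key=True)]
for lang in languages:
    columns.append(Column('title_%s' % lang, String(255)))
    columns.append(Column('content_%s' % lang, Text))

article_table = Table('article', metadata, *columns)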



[sqlalchemy] Re: filter on backrelation again

2008-11-18 Thread Michael Bayer

technically you'd say query(X).join(X.all_A.of_type(C)) but I don't know
offhand if that use case has been implemented for join() as of yet.  You
can of course just say query(X).join((A, A.id==x.aid), (C, C.id==A.id)),
assuming you're on joined table inheritance.  All bets are off for
concrete.



[EMAIL PROTECTED] wrote:

 hi

 i have A, B, C,... inheritance, where A points to some X.
 how i query all Xs which are pointed by, say Cs ?

 if i put a backref on the reference,
  query(X).join( X.all_A) will give me As Bs Cs everything...

 or should that go down to joins?

 ciao
 svil

 






[sqlalchemy] filter on backrelation again

2008-11-18 Thread az

hi

i have A, B, C, ... inheritance, where A points to some X.
how do i query all Xs which are pointed to by, say, Cs?

if i put a backref on the reference,
 query(X).join( X.all_A) will give me As, Bs, Cs, everything...

or should that go down to joins?

ciao
svil




[sqlalchemy] Re: filter on backrelation again

2008-11-18 Thread az

query(X).join(X.all_A.of_type(C)) works but produces a subselect 
(select all-them-columns ...) which is huge/hard to read.
i found that query(X).filter( X.all_A.of_type(C).any()) works too, and 
the subselect is shorter (select 1 ...).

hmm, might be useful to somehow differentiate between a join that is there
for usage of its columns and a join that is just for filtering.

thanks
svil

On Wednesday 19 November 2008 01:22:22 Michael Bayer wrote:
 technically you'd say query(X).join(X.all_A.as_type(C)) but I don't
 know offhand if that use case has been implemented for join() as of
 yet.  you can of course just say query(X).join((A, A.id==x.aid),
 (C, C.id==A.id)) assuming you're on joined table inheritance.  all
 bets are off for concrete.

 [EMAIL PROTECTED] wrote:
  hi
 
  i have A, B, C,... inheritance, where A points to some X.
  how i query all Xs which are pointed by, say Cs ?
 
  if i put a backref on the reference,
   query(X).join( X.all_A) will give me As Bs Cs everything...
 
  or should that go down to joins?
 
  ciao
  svil




[sqlalchemy] JOIN to subquery in mapper

2008-11-18 Thread indigophone

zipcode_meta_join_subquery = session.query(us_zipcode_table.c.city_id,
    us_zipcode_table.c.zipcode_population,
    us_zipcode_table.c.average_house_value
    ).group_by(us_zipcode_table.c.city_id).subquery()

mapper(USCity, us_city_table, properties={
'state':relation(USState, backref=backref('cities')),
'zipcode_meta':relation(USZipCode, primaryjoin=
(zipcode_meta_join_subquery,
zipcode_meta_join_subquery.c.city_id==us_city_table.c.city_id))
})

The above code obviously doesn't work. How do I add a join to the
above subquery in my mapper?





[sqlalchemy] Vanilla parameterized query and results

2008-11-18 Thread TheShadow

All I need to be able to do is the following. (I don't need, can't
use, and/or don't want ORM)

query = 'SELECT col FROM table where col = :col'
params = {'col':1} # and/or params = [{'col':1},{'col':2}]
OR
query = 'INSERT INTO table(col1,col2) VALUES(:col1,:col2)'
params = {'col1':1,'col2':2} # and/or params [{'col1':1,'col2':2},
{'col1':3,'col2':4}]

etc..

I've tried to figure out how to do this but every attempt has failed.
If anyone wants to just point me to the part of the documentation
that covers such use, I would be greatly appreciative.




[sqlalchemy] Re: Vanilla parameterized query and results

2008-11-18 Thread Empty
Hi,

On Tue, Nov 18, 2008 at 9:39 PM, TheShadow [EMAIL PROTECTED] wrote:


 All I need to be able to do is the following. (I don't need, can't
 use, and/or don't want ORM)

 query = 'SELECT col FROM table where col = :col'
 params = {'col':1} # and/or params = [{'col':1},{'col':2}]
 OR
 query = 'INSERT INTO table(col1,col2) VALUES(:col1,col2)'
 params = {'col1':1,'col2':2} # and/or params [{'col1':1,'col2':2},
 {'col1':3,'col2':4}]


The SQL Expression Tutorial has all this laid out:

http://www.sqlalchemy.org/docs/05/sqlexpression.html

I encourage you to make your way through that. It covers simple inserting as
well as selects.
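
For reference, a minimal sketch of plain textual SQL with bound parameters
(the table/column names and database URL are made up; no ORM involved):

from sqlalchemy import create_engine, text

engine = create_engine('sqlite:///example.db')

# single statement with named bind parameters
result = engine.execute(text("SELECT col FROM some_table WHERE col = :col"), col=1)
for row in result:
    print row.col

# executemany: pass a list of parameter dicts
engine.execute(
    text("INSERT INTO some_table (col1, col2) VALUES (:col1, :col2)"),
    [{'col1': 1, 'col2': 2}, {'col1': 3, 'col2': 4}])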

Michael




[sqlalchemy] Re: JOIN to subquery in mapper

2008-11-18 Thread Michael Bayer

joining to a subquery is better accomplished outside of relation()
using query, such as query(USZipCode).join((subquery,
subquery.c.col==USZipCode.somecol)).

Now you want it as an attribute on your class.  Do it like this:

class USCity(object):
    ...

    @property
    def zipcode_meta(self):
        return object_session(self).query(USZipCode).join(
            ...join criterion...).params(...whatever...)

The advantage to this is that you can formulate the query and its
relation to the parent in exactly the way you need.
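
A hedged sketch of that pattern, reusing the names from the original post
(the join/filter criteria on city_id are assumptions about the intended
relationship, not something spelled out in the thread):

from sqlalchemy.orm import object_session

class USCity(object):
    # ... other mapped attributes as before ...

    @property
    def zipcode_meta(self):
        subq = zipcode_meta_join_subquery   # the subquery from the original post
        return object_session(self).query(USZipCode).join(
            (subq, subq.c.city_id == USZipCode.city_id)
        ).filter(subq.c.city_id == self.city_id).all()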


On Nov 18, 2008, at 9:34 PM, indigophone wrote:


 zipcode_meta_join_subquery = session.query(us_zipcode_table.c.city_id,
 us_zipcode_table.c.zipcode_population,
 us_zipcode_table.c.average_house_value).group_by(us_zipc
 ode_table.c.city_id).subquery()

 mapper(USCity, us_city_table, properties={
'state':relation(USState, backref=backref('cities')),
'zipcode_meta':relation(USZipCode, primaryjoin=
 (zipcode_meta_join_subquery,
 zipcode_meta_join_subquery.c.city_id==us_city_table.c.city_id))
 })

 The above code obviously doesn't work. How do I add a join to the
 above subquery in my mapper?


 





[sqlalchemy] Re: Vanilla parameterized query and results

2008-11-18 Thread Michael Bayer

Providing great utility with no ORM whatsoever is one of our core
values.  That's why if you don't import sqlalchemy.orm, there's no ORM.

A full overview of SQLAlchemy components is at:  
http://www.sqlalchemy.org/docs/05/intro.html


On Nov 18, 2008, at 9:39 PM, TheShadow wrote:


 All I need to be able to do is the following. (I don't need, can't
 use, and/or don't want ORM)

 query = 'SELECT col FROM table where col = :col'
 params = {'col':1} # and/or params = [{'col':1},{'col':2}]
 OR
 query = 'INSERT INTO table(col1,col2) VALUES(:col1,col2)'
 params = {'col1':1,'col2':2} # and/or params [{'col1':1,'col2':2},
 {'col1':3,'col2':4}]

 etc..

 I've tried to figure out how to do this but every attempt has failed.
 If anyone wants to just point me to the point in the documentation
 that covers such use I would be greatly appreciative.

 





[sqlalchemy] Re: dynamic columns

2008-11-18 Thread Eric Ongerth

A way to normalize this:

article_table:
id

title_table:
article_id = primary key, also = foreign key to article_table.id
language
title

content_table:
article_id = primary key, also = foreign key to article_table.id
language
content

mapper(Article, article_table, properties={
'titles':relation(Title),
'contents':relation(Content) } )

mapper(Title, title_table)
mapper(Content, content_table)

Now you can assign an article as many titles in as many languages as
you wish, and as many content fields in as many languages as you wish.
You could extend this further in many ways.  It is even possible to
use a dictionary-like class as the collection class for the relations
in the Article mapper, so that you could work like this:

a=Article()
a.title['en']='Confusion in Ten Easy Steps'
a.title['fr']='...'
a.title['ru']='...'   (etc.)

So, this approach means no need to alter your schema, change any
classes, or alter any tables when you add or remove a language.  You
could keep a table of currently accepted languages and use that (via
foreign key) to check the integrity of the 'language' column on the
title and content tables, and so forth.
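
A sketch of the layout described above (the names are assumed; the composite
(article_id, language) primary key is an assumption made here so that each
article can carry one title/content row per language):

from sqlalchemy import MetaData, Table, Column, Integer, String, Text, ForeignKey
from sqlalchemy.orm import mapper, relation
from sqlalchemy.orm.collections import column_mapped_collection

metadata = MetaData()

article_table = Table('article', metadata,
    Column('id', Integer, primary_key=True))

title_table = Table('title', metadata,
    Column('article_id', Integer, ForeignKey('article.id'), primary_key=True),
    Column('language', String(5), primary_key=True),
    Column('title', String(255)))

content_table = Table('content', metadata,
    Column('article_id', Integer, ForeignKey('article.id'), primary_key=True),
    Column('language', String(5), primary_key=True),
    Column('content', Text))

class Article(object): pass
class Title(object): pass
class Content(object): pass

mapper(Title, title_table)
mapper(Content, content_table)
mapper(Article, article_table, properties={
    # dict-like collections keyed by each row's 'language' value, so that
    # article.titles['en'] refers to the Title row whose language == 'en'
    'titles': relation(Title,
        collection_class=column_mapped_collection(title_table.c.language)),
    'contents': relation(Content,
        collection_class=column_mapped_collection(content_table.c.language)),
})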

This technique, generally speaking, is known as Vertical Partitioning
or a 'Vertical Tables' approach, because it results in tall, skinny
tables as opposed to tables that are many columns wide.  Each
separable, changeable attribute that you might wish to add to an
article can have its own table instead of a field in the articles
table, and foreign keys from these tables back to the main article
table give sqlalchemy the information it needs to make articles
respond with integrity as a coherent class despite the normalization/
separation of data.  Of course, to do this you have to use a database
that is good with foreign keys.

On Nov 18, 2:31 pm, g00fy [EMAIL PROTECTED] wrote:
 hi
 so i have list of languages (suffixes)
 en
 de
 pl
 ru
  etc...

 now i have my article_table, when normaly i would have columns:
 id, title, content

 but now i want to have:
 id, title_en, title_de, title_pl,
 title_ru,,content_en,..,content_ru

 how can i create table definition dynamicly according to languages i
 have ?
 [I am aware that i will have to alter my table when I will add or
 remove a language]