[sqlalchemy] Re: InstrumentedList in SA 0.4.3

2008-03-05 Thread svilen

see sqlalchemy.orm.collections.py. 
The collections are reworked in 0.4.x, so check if your usage is still 
valid or has to be changed too.
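for example, a rough sketch - assuming 0.4.x keeps the default list
collection as InstrumentedList inside orm/collections.py (check your
installed copy; Parent/children/session below are placeholders):

from sqlalchemy.orm.collections import InstrumentedList   # 0.4 location (assumption)

p = session.query(Parent).first()
print type(p.children)               # the instrumented collection built by the mapper
print isinstance(p.children, list)   # the default list collection still subclasses list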

On Wednesday 05 March 2008 18:06:23 Acm wrote:
> With SA version 0.3.11 I used to import InstrumentedList as
> follows: from sqlalchemy.orm.attributes import InstrumentedList
>
> Now I upgraded to SA 0.4.3 and cannot import InstrumentedList from
> the same file.
>
> I looked into the SQLAlchemy file attribute.py and noticed that
> this file uses the import as in version 0.3.11
>
> Should I use class InstrumentedAttribute instead of
> InstrumentedList? Or else, how can I import InstrumentedList to my
> files?




[sqlalchemy] Re: Intersect of ORM queries

2008-03-05 Thread svilen

pardon my sql-ignorance, but cant u express this in just one 
expression? it should be possible, it is graph/set arithmetic 
after all... 
mmh, (could be very wrong!) something like 
 - get all rows that have some b_id from the sought list
 - group(?) somehow by a_id, and then pick the a_id whose collection 
of b's matches the sought list.
on 2nd thought, maybe no, this is a procedural way, not a 
set-arithmetic way...
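fwiw, a sketch of that group-by idea as one statement on the plain 
select() layer - the table/column names are the ones from the question 
below; like the intersect, it finds the A's having at least all the 
sought B's:

from sqlalchemy import select, func

wanted = [b.id for b in b_being_sought]

stmt = select([secondary_table.c.a_id],
              secondary_table.c.b_id.in_(wanted),
              group_by=[secondary_table.c.a_id],
              having=func.count(secondary_table.c.b_id) == len(wanted))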

On Wednesday 05 March 2008 17:23:52 Eric Ongerth wrote:
> I know how to issue an Intersect of multiple Selects, but I am
> wondering if there is a simple way to seek the intersection of
> multiple ORM queries.  I already know how to get the results I want
> in pure SQL or in SA-assisted non-ORM statements.  I just wonder if
> I could use sqlalchemy even more powerfully in the case of a
> particular M:M relationship in my model.
>
> Here's the situation, very simple.  Table a is mapped to class A,
> and table b mapped to class B.  I have a many:many relationship
> between class A and class B, via a secondary table whose only
> columns are a_id and b_id.  What I'm looking for is, I have a list
> of Bs and I want to find the (single) A that has exactly those same
> Bs in its Bs collection.  (I know the results will either be a
> single A or None, because of certain uniqueness constraints that
> don't matter to this question).
>
> So to do this with selects, this should work fine: (pardon the
> pseudo- SQL)
> intersect(
> select a_id from secondary_table where b_id ==
> b_being_sought[0].id,
> select a_id from secondary_table where b_id ==
> b_being_sought[1].id,
> ...
> select a_id from secondary_able where b_id ==
> b_being_sought[n].id
> )
>
>  -- Easily generated with a Python loop over the list of B's that
> I'm searching for.
>
> So each of those selects returns a number of rows from the
> secondary table, all linked to ONE of the B's in the list; and the
> intersect returns the single (or none) row in the secondary table
> which refers to the A which has *all* of those B's in its B
> collection.
>
> Fine.  But it would be so syntactically smooth if I could just do
> something like:
> intersect(
> query(A).filter(b=b_being_sought[0]),
> query(A).filter(b=b_being_sought[1]),
> ...
> query(A).filter(b=b_being_sought[n])
> )
>
> Is this possible in some way?  I haven't found a way to make this
> work ORM-style, because intersect() only wants select statements. 
> Am I correct in thinking that I could build each ORM query, steal
> its where_clause and use those where_clauses as my set of selects
> for the intersect()?  But that is enough extra steps, and enough
> exposition of internals, to clearly make it a confusing and
> backwards way of getting the results I want.  Definitely not a path
> to more readable code -- if that were the only way, then I would
> just do it the non-ORM way above.
>
> All comments appreciated.
> 





[sqlalchemy] Re: AuditLog my first attempt

2008-03-04 Thread svilen

On Tuesday 04 March 2008 12:34:11 Marco De Felice wrote:
> So after some coding and thanks to sdobrev previous reply I came up
> with the following mapperextension that allows for a client side
> update log to a different table (logtable name = table_prefix +
> original table name) with a logoperation field added.
>
> Any comment is welcome as I don't really know SA internals, it
> seems to work against a simple mapped table.
for multitable mappers... 
 a) log as mapper-access and not table-access
 b) or, while walking mapper.columns, each column knows which table it 
comes from, so the changed ones can be logged per table separately
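a rough sketch of b) - bucket the column properties by the table each 
column belongs to (iterate_properties is the 0.4 api; the import 
location of ColumnProperty is from memory, double-check it):

from sqlalchemy.orm.properties import ColumnProperty

def columns_by_table(mapper, instance):
    # (property name, current value) pairs, grouped per underlying table
    per_table = {}
    for prop in mapper.iterate_properties:
        if not isinstance(prop, ColumnProperty):
            continue
        col = prop.columns[0]
        per_table.setdefault(col.table, []).append(
            (prop.key, getattr(instance, prop.key)))
    return per_table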




[sqlalchemy] Re: Has anyone implemented a "dict of lists" collection_class?

2008-02-18 Thread svilen

> Actually, I don't need or want the ability to append or remove
> entire subsets.  There will only be a single element appended or
> removed at a time.  It's just that when a Bar is added to one of
> these "children" collections, I want it to be filed under a dict
> slot whose key happens to be the Bar's "parent" attribute.  
is all this a sort of maintaining some classification over a 
(otherwise simple) set of elements?




[sqlalchemy] Re: Joined Table Inheritance

2008-02-18 Thread svilen

AFAIK, the polymorphic_identity value there is used:
 a) for queries, to discriminate/switch between object-types
 b) for inserts, to put _that value when inserting that object-type.

i.e. to me these values ought to be some consts, different for each 
subtype in the hierarchy. As for sharing that functionality with 
something else (why is it a FK?), i have no idea.
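i.e. something like this sketch - the identity values (1, 2) are 
made-up constants standing for rows of your content_types table, and 
News/news are the names from your script:

mapper(Content, contents,
    polymorphic_on=contents.c.content_type_id,
    polymorphic_identity=1)      # hypothetical id of the generic/base content type

mapper(News, news, inherits=Content,
    polymorphic_identity=2)      # hypothetical id of the 'news' content type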

> I'm busy to play with (joinded) inheritance in SQLAlchemy. I have
> to design and implement a CMS-like app and my idea was to do
> something like (it's the SQL script) :
> http://pastebin.com/m35f5227d
>
> As you can see it's a classic inheritance situation: a base Content
> (abstract) which is of a certain ContentType (the polymorphic_on),
> and more specialized objects (News, Events, ...) inherits from this
> base Content class.
>
> In the manual I read that :
> "The table also has a column called type. It is strongly advised in
> both single- and joined- table inheritance scenarios that the root
> table contains a column whose sole purpose is that of the
> discriminator; it stores a value which indicates the type of object
> represented within the row. The column may be of any desired
> datatype"
>
> In my situation the so-called "type" column is the
> "content_type_id", which is foreign key, so for the News / Content
> relation in this case I would have something like :
>
> mapper(Content, contents,
> polymorphic_on=contents.c.content_type_id,
> polymorphic_identity='?')
>
> mapper(Engineer, engineers, inherits=Employee,
> polymorphic_identity='?')
>
> My question is: can I have a select() object which return a scalar
> for the polymorphic_identity parameters ? I guess it should be
> possible to put anything else than the FK value ?
>
> Thanks,
> Julien






[sqlalchemy] Re: building mappers for an existing database

2008-02-11 Thread svilen

On Monday 11 February 2008 16:07:03 Chris Withers wrote:
> svilen wrote:
> > search the group for things related to migrate (i call it migrene
> > :); there are 2 approaches:
> >  - make the db match the py-model
> >  - make the model match the db
>
> It's this 2nd one I'm asking about. Is sqlautocode the standard way
> of doing this?
probably something like it. Reverse-engineer the db, and mimic it 
accordingly, IF possible. See the autoload=True flag to metadata and 
tables, it does most of the job. 
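e.g. a minimal sketch (the url and the table name are placeholders):

from sqlalchemy import create_engine, MetaData, Table
from sqlalchemy.orm import mapper

engine = create_engine('sqlite:///existing.db')
meta = MetaData(engine)

users = Table('users', meta, autoload=True)   # columns/types come from the live db

class User(object):
    pass

mapper(User, users)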
i have something quick-and-dirty here:
http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/misc/metadata/

svilen




[sqlalchemy] some error at v4070

2008-02-11 Thread svilen
hi.
running the dbcook tests, from v4070 onwards i get the following 
error:
Traceback (most recent call last):
  File "rr.py", line 68, in 
for q in session.query(A).all(): print q
  File "sqlalchemy/orm/query.py", line 746, in all
return list(self)
  File "sqlalchemy/orm/query.py", line 887, in iterate_instances
rows.append(main(context, row))
  File "sqlalchemy/orm/query.py", line 831, in main
extension=context.extension, 
only_load_props=context.only_load_props, 
refresh_instance=context.refresh_instance
  File "sqlalchemy/orm/mapper.py", line 1291, in _instance
row = self.translate_row(mapper, row)
  File "sqlalchemy/orm/mapper.py", line 1388, in translate_row
translator = create_row_adapter(self.mapped_table, 
tomapper.mapped_table, equivalent_columns=self._equivalent_columns)
  File "sqlalchemy/orm/util.py", line 202, in create_row_adapter
corr = from_.corresponding_column(c)
  File "sqlalchemy/sql/expression.py", line 1628, in 
corresponding_column
i = c.proxy_set.intersection(target_set)
AttributeError: 'NoneType' object has no attribute 'proxy_set'

The case is somewhat strange but was working before that - a 
polymorphic union of concrete inheritances where the root-table is 
not included in the union (i.e. leaves only). If it gets included, the 
error goes away. 
i dont know if it's the case that is too weird; i could possibly work 
around it.

svilen




rr.py
Description: application/python


[sqlalchemy] Re: building mappers for an existing database

2008-02-08 Thread svilen

On Friday 08 February 2008 16:09:28 Chris Withers wrote:
> Hi All,
>
> Almost similar to my last question, how do you go about building
> mappers for an existing database schema?
>
> What happens if you don't get it quite right? :-S
>
> cheers,
>
> Chris

search the group for things related to migrate (i call it migrene :); 
there are 2 approaches:
 - make the db match the py-model
 - make the model match the db 




[sqlalchemy] Re: Search in object list by field value

2008-02-08 Thread svilen

On Friday 08 February 2008 14:26:04 maxi wrote:
a) let SQL do it 
  p1 = session.query(Person).filter_by(id=123).first()
  # filter_by() takes keyword arguments; filter() takes expressions, e.g. .filter(Person.id == 123)
b) get all people, then plain python:
  for p in people:
      if p.id == 123: break
  else:
      p = None




[sqlalchemy] Re: How to accomplish setup/run-app/teardown with mapped classes and sessions?

2008-01-23 Thread svilen

apart of all runtime issues u've hit - session etc - on declaration 
level u may do these:
 - destroy all your refs to the mappers/tables etc
 - sqlalchemy.orm.clear_mappers()
 - yourmetadata.drop_all()
 - yourengine.dispose()
 - destroy refs to yourmetadata/yourengine
(dbcook.usage.sa_manager.py/destroy())
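roughly, in code (clear_mappers/drop_all/dispose are the real calls; 
session/meta/engine are whatever your test fixture created):

import sqlalchemy.orm

def teardown():
    session.clear()                   # forget pending/loaded instances
    sqlalchemy.orm.clear_mappers()    # strip instrumentation from the classes
    meta.drop_all()                   # drop the test tables
    engine.dispose()                  # close pooled connections
    # ...and drop your own refs to session/meta/engine here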
if your classes are made-on-the-fly inside some function, then every 
time they are _different_ classes. Make sure u do not hold them 
somewhere - or they will keep all the stuff associated with them 
(mappers and even session). The reason is the replaced __init__ 
method is stored as class.__init (in orm.attributes.register_class) 
and is not cleared in clear_mappers().

for easy check, u can run your test 100 times and watch the memory 
used; if it grows then _something_ of all those above is not cleared.

ciao
svilen

On Thursday 24 January 2008 00:00:52 Kumar McMillan wrote:
> Hello, I have not been able to figure this out from the docs.
>
> I would like to setup and teardown test data using mapped classes.
> The problem is that those same mapped classes need to be used by
> the application under test and in case there is an error, the
> teardown still needs to run so that subsequent tests can setup more
> data.  




[sqlalchemy] Re: eagerloading polymorphic mapper

2008-01-15 Thread svilen

On Tuesday 15 January 2008 17:19:49 Michael Bayer wrote:
> On Jan 15, 2008, at 4:33 AM, svilen wrote:
> > also, i dont see a reason for it not to work if the (A jon B join
> > C) is a polymunion - all the same, all columns will be present
> > there, having None where missing.
> >
> > 0.4.3?
>
> unlikely, I dont see how it could work from a generic standpoint.
> the query generates SQL based on the attributes attached to A.  if
> it had to also loop through all the attributes of B, C, D, E, F,
> etc. and attempt to have all of those add their clauses to the SQL,
> theyd all have to assume that the selectable for "A" even supports
> receiving their joins, etc.

hmmm, specify explicitly? 
e.g. query(A).eagerload( B.address)

joined-inh via left-outer-join is enough, no need for polymunion. IMO 
this would be a big plus for the ORM - eagerloading polymorphic child 
attributes - moving further away from SQL-like-looking stuff.
i dont know how the current machinery for eagerload works, but imo, 
knowing your level of lookahead-design, it should not be hard to 
apply that machinery over a polymorphic mapper/query?
ciao
svilen




[sqlalchemy] Re: eagerloading polymorphic mapper

2008-01-15 Thread svilen

well, i tried it manually and it works (sqlite).

here is the eagerloading query:

model-description (joined_inheritance):
class A:
  name = Text()
class B(A):
  address = reference( Address)
  nom = reference( Nomerator)
class C(A): pass


select * from A 
 left outer join B on A.db_id = B.db_id 
 left outer join C on A.db_id = C.db_id
 left outer join 
   Address as a1 on B.address_id = a1.db_id
 left outer join 
   Nomerator as a2 on B.nom_id = a2.db_id 
;

or with subselects: 

select * from A 
 left outer join B on A.db_id = B.db_id 
 left outer join C on A.db_id = C.db_id
 left outer join 
   ( select * from Address ) as a1 on B.address_id = a1.db_id
 left outer join 
   ( select * from Nomerator ) as a2 on B.nom_id = a2.db_id 
;



also, i dont see a reason for it not to work if the (A join B join C) 
is a polymunion - all the same, all columns will be present there, 
having None where missing.

0.4.3?

On Monday 14 January 2008 18:56:16 svilen wrote:
> On Monday 14 January 2008 18:35:40 Michael Bayer wrote:
> > On Jan 14, 2008, at 11:29 AM, svilen wrote:
> > > On Monday 14 January 2008 17:19:14 Michael Bayer wrote:
> > >> On Jan 14, 2008, at 8:41 AM, svilen wrote:
> > >>> i have, say, base class A, inherited by two children B and C.
> > >>> B has an attribute/relation 'address', A and C do not have
> > >>> it. So i had a query(A).eagerload( 'address') and that did
> > >>> work before r3912. But later it gives an error - "mapper|A
> > >>> has no property 'address'".
> > >>> Any hint how to do it now?
> > >>
> > >> what kind of inheritance/mapping  from A->B ?  i cant really
> > >> imagine any way that kind of eager load could have worked
> > >> since the "address" property of "B" does not (and has never)
> > >> get consulted in that case.
> > >
> > > plain joined?... hmm.
> > > maybe it did not really work (eagerly) but lazy-load has fired
> > > instead... seems that's the case.
> > > anyway.
> > > some way to accomplish such thing?
> >
> > no !  this the same issue with the Channel->CatalogChannel thing,
>
> yes i guessed it..
>
> > your query is against "A"...attributes that are only on "B" don't
> > enter into the equation here.
>
> this is somewhat different, my query/filter is on attributes that
> do exist in A; i only want the ORM to postprocess certain things...
> there will be 'address' column in the result-set anyway (empty or
> not), why it cannot be eagerloaded via B.address?
>
> > But also, if youre using
> > select_table, we dont yet support eager loads from a
> > polymorphic-unioned mapper in any case (though we are close).
>
> it is not polymunion, joined_inh works via left-outer-join.
>
> well, no is no.
>
> 





[sqlalchemy] Re: eagerloading polymorphic mapper

2008-01-14 Thread svilen

On Monday 14 January 2008 18:35:40 Michael Bayer wrote:
> On Jan 14, 2008, at 11:29 AM, svilen wrote:
> > On Monday 14 January 2008 17:19:14 Michael Bayer wrote:
> >> On Jan 14, 2008, at 8:41 AM, svilen wrote:
> >>> i have, say, base class A, inherited by two children B and C. B
> >>> has an attribute/relation 'address', A and C do not have it.
> >>> So i had a query(A).eagerload( 'address') and that did work
> >>> before r3912. But later it gives an error - "mapper|A has no
> >>> property 'address'".
> >>> Any hint how to do it now?
> >>
> >> what kind of inheritance/mapping  from A->B ?  i cant really
> >> imagine any way that kind of eager load could have worked since
> >> the "address" property of "B" does not (and has never) get
> >> consulted in that case.
> >
> > plain joined?... hmm.
> > maybe it did not really work (eagerly) but lazy-load has fired
> > instead... seems that's the case.
> > anyway.
> > some way to accomplish such thing?
>
> no !  this the same issue with the Channel->CatalogChannel thing,
yes i guessed it..
> your query is against "A"...attributes that are only on "B" don't
> enter into the equation here. 
this is somewhat different, my query/filter is on attributes that do 
exist in A; i only want the ORM to postprocess certain things... 
there will be 'address' column in the result-set anyway (empty or 
not), why it cannot be eagerloaded via B.address?

> But also, if youre using 
> select_table, we dont yet support eager loads from a
> polymorphic-unioned mapper in any case (though we are close).
it is not polymunion, joined_inh works via left-outer-join.

well, no is no.




[sqlalchemy] Re: eagerloading polymorphic mapper

2008-01-14 Thread svilen

On Monday 14 January 2008 17:19:14 Michael Bayer wrote:
> On Jan 14, 2008, at 8:41 AM, svilen wrote:
> > i have, say, base class A, inherited by two children B and C. B
> > has an attribute/relation 'address', A and C do not have it.
> > So i had a query(A).eagerload( 'address') and that did work
> > before r3912. But later it gives an error - "mapper|A has no
> > property 'address'".
> > Any hint how to do it now?
>
> what kind of inheritance/mapping  from A->B ?  i cant really
> imagine any way that kind of eager load could have worked since the
> "address" property of "B" does not (and has never) get consulted in
> that case.

plain joined?... hmm. 
maybe it did not really work (eagerly) but lazy-load has fired 
instead... seems that's the case. 
anyway. 
some way to accomplish such thing?




[sqlalchemy] Re: Query object behavior for methods all() and one()

2008-01-14 Thread svilen

all() returns whatever is there: 0, 1 or n results
first() returns the first one if any, else None
one() asserts there's exactly 1
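e.g. (Person/session are placeholders):

q = session.query(Person).filter_by(name='nobody')

print q.all()     # []   - empty list, never raises
print q.first()   # None - first row, or None
q.one()           # raises InvalidRequestError unless exactly one row matches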

On Monday 14 January 2008 18:23:28 Adrian wrote:
> I am a bit confused by the behavior for the methods all() and one()
> if the Query would return an empty result set. In the case of all()
> it returns an empty list whereas one() will throw an exception
> (sqlalchemy.exceptions.InvalidRequestError). I am sure there was a
> reason to implement as it is now but wouldn't it be more convenient
> to return simply None (or an empty String) and throw an exception
> only if more than one row would be returned? An empty result set as
> such is valid and shouldn't be treated as an error.




[sqlalchemy] eagerloading polymorphic mapper

2008-01-14 Thread svilen

i have, say, base class A, inherited by two children B and C. B has an 
attribute/relation 'address', A and C do not have it.
So i had a query(A).eagerload( 'address') and that did work before 
r3912. But later it gives an error - "mapper|A has no 
property 'address'".
Any hint how to do it now?




[sqlalchemy] Re: filter() on inherited class doesn't point to the correct table

2008-01-11 Thread svilen

On Friday 11 January 2008 17:03:06 Alexandre Conrad wrote:
> svilen wrote:
> > On Friday 11 January 2008 16:12:08 Alexandre Conrad wrote:
> >>Channel -> Playlist -> Media
> >>Channel -> CatalogChannel(Catalog) -> Media
> >>
> >>(Media has a fk to Catalog, not CatalogChannel)
> >>The only element I have, is "playlist" (instance of Playlist). At
> >>this point, I need to find out the available Media of the
> >>Playlist's Channel's catalog so I can attach them to the
> >> Playlist.
> >>
> >>At first, I tryied out:
> >>
> >>Media.query.join(["catalog",
> >>"channel"]).filter(Channel.c.id==playlist.id_channel).all()
> >>
> >>But then it complains that "channel" is not part of the Catalog
> >>mapper. Catalog ? I want it to be looking at CatalogChannel, this
> >>is the one having the "channel" relation, not Catalog.
> >
> > i see what u want, but formally (computer languages are formal,
> > SA is a language) u are contradicting yourself. u said above that
> > media points to catalog and not to catalogchannel. How u expect
> > it to find a .channel there?
>
> I was expecting that SA would know that from the polymorphic "type"
> flag. I have a "catalog" relation on media. When I do
> media.catalog, it doesn't just return a Catalog object, but really
> a CatalogChannel object
hey, polymorphic means ANY subtype, u could have 5 other CatalogRivers 
that have no .channel in them... how to guess which one? Find the one 
that has .channel? the root-most one or some of its children-klasses?

> (which is the whole point of polymorphic 
> inheritance). And I thought it could figure out channel from that.
> But Mike said no. :) That's why he talked about having some extra
> API query methods:
>
> Media.query.join_to_subclass(CatalogChannel).join("channel").filter
>(Channel.c.id==playlist.id_channel).all()
>
> We could even join classes only directly (isn't this ORM after
> all?):
>
> Media.query.join([CatalogChannel, Channel])
this is a completely different beast. it might be useful... although the 
whole idea of the join(list) is a list of attribute-names and not 
klasses/tables - to have a.b.c.d.e.f, i.e. be specific and avoid 
thinking in diagram ways (klasA referencing klasB means nothing if it 
happens via 5 diff.attributes)

when i needed a similar thing, a) i moved the .channel into the root or 
b) changed media to reference CatalogChannel and not the base one.

ciao
svilen




[sqlalchemy] Re: filter() on inherited class doesn't point to the correct table

2008-01-11 Thread svilen

On Friday 11 January 2008 16:12:08 Alexandre Conrad wrote:
> svilen wrote:
> >>Here is the syntax followed by the generated query:
> >>
> >>   query.filter(Catalog.c.id==CatalogChannel.c.id)
> >>   WHERE catalogs.id = catalogs.id
> >
> > why u need such a query?
> > that's exactly what (inheritance) join does, and automaticaly -
> > just query( CatalogChannel).all() would give u the above query.
>
> I have hidden the full concept I'm working on and only focused my
> problem. Here's my full setup the query is involved with:
>
> Channel -> Playlist -> Media
> Channel -> CatalogChannel(Catalog) -> Media
>
> (Media has a fk to Catalog, not CatalogChannel)
> The only element I have, is "playlist" (instance of Playlist). At
> this point, I need to find out the available Media of the
> Playlist's Channel's catalog so I can attach them to the Playlist.
>
> At first, I tryied out:
>
> Media.query.join(["catalog",
> "channel"]).filter(Channel.c.id==playlist.id_channel).all()
>
> But then it complains that "channel" is not part of the Catalog
> mapper. Catalog ? I want it to be looking at CatalogChannel, this
> is the one having the "channel" relation, not Catalog.
i see what u want, but formally (computer languages are formal, SA is 
a language) u are contradicting yourself. u said above that
media points to catalog and not to catalogchannel. How do u expect it to 
find a .channel there? Forget DB; think plain objects. u point to a 
base class but expect it always to have an attribute that belongs to 
one of the children classes... which implicitly says: point to the 
base-class, AND take only those instances that are really the 
child-class, AND then check whatever the attribute is (i.e. 
isinstance(x, CatalogChannel) and x.channel == ...).

your query above is missing the isinstance-filter specifying that u 
need catalogchannels and not just any catalogs. i'm not sure how this 
would be expressed in SA but it has to be explicit - and probably 
somewhere on the level of tables. 
have u tried 
 Media.query.join( ["catalog", "id", "channel"])... ???




[sqlalchemy] Re: filter() on inherited class doesn't point to the correct table

2008-01-11 Thread svilen

On Friday 11 January 2008 13:58:34 Alexandre Conrad wrote:
> Hi,
>
> playing with inheritance, I figured out that an inherited mapped
> class passed to filter doesn't point to the correct table.
>
> I have 2 classes, Catalog and CatalogChannel(Catalog).
>
> Here is the syntax followed by the generated query:
>
>query.filter(Catalog.c.id==CatalogChannel.c.id)
>WHERE catalogs.id = catalogs.id
why u need such a query? 
that's exactly what the (inheritance) join does, and automatically - 
just query( CatalogChannel).all() would give u the above query.

as for the relations, they are quite automatic BUT the magic does not 
always work, so u have to explicitly specify some things manually.

> Normaly, I would join(["A", "B"]) the tables between each other.
> But if a "channel" relation only exists on the CatalogChannel
> class, join("channel") wouldn't work as SA looks at superclass
> Catalog. I thought it would naturally find the relationship by
> looking at the polymorphic "type" column from Catalog, but it
> doesn't. Mike suggested we would need to extend the API with a new
> method like
> join_to_subclass() or so... Even though, I still think SA should
> figure out which relation I'm looking at...
i'm not sure i can follow u here... i do have tests about referencing 
to baseclass / subclasses, self or not, and they work ok. 
dbcook/tests/sa/ref_*.py for plain sa (160 combinations), and 
dbcook/tests/mapper/test_ABC_inh_ref_all.py (1 combinations)
IMO u are missing some explicit argument




[sqlalchemy] String -> Text type deprecation

2008-01-09 Thread svilen

what is it about? 
i'm not much into sql types... isn't Varchar enough for an 
unsized/anysized String?

btw, 'count * from ...' produces the warning for sqlite; a bindparam 
autoguesses a type_ of VARCHAR and for some reason the VARCHAR also 
needs a length ??




[sqlalchemy] Re: get related table object via mapped class

2008-01-04 Thread svilen

On Friday 04 January 2008 18:54:29 Alexandre da Silva wrote:
> Hello all,
>
> is there any way to access class related by an relationship?
>
> sample:
>
> # Table definition ommited
>
> class Person(object):
> pass
>
> class Address(object):
> pass
>
> mapper(Person, person_table,
> properties=dict(
> address=relation(Address, uselist=False)
> )
> )
>
>
>
> now I want to access the Address class (and/or address_table)
> through a Person object,
>
> any sugestion?

Person.address._get_target_class()
Person.address.mapper.class_
and eventually the table from the above mapper.local_table; see what 
kinds of tables the mapper keeps.
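or, going through the mapper configuration explicitly (class_mapper 
and get_property are 0.4 apis; 'address' is the relation name from 
your mapper above):

from sqlalchemy.orm import class_mapper

rel = class_mapper(Person).get_property('address')
target_class = rel.mapper.class_        # -> Address
target_table = rel.mapper.local_table   # -> address_table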




[sqlalchemy] Re: Emptying out the session of new objects

2008-01-04 Thread svilen

On Friday 04 January 2008 14:44:32 Dave Harrison wrote:
> On Friday 04 January 2008 23:32:21 Alexandre da Silva wrote:
> > > Is there an easy way of flushing all objects that are
> > > categorised as new in the session ??
> >
> > I think you can use session.clear(), it will remove objects from
> > session, and persistent objects will stay on database.
> >
> > Att
> >
> > Alexandre
>
> I had been using session.clear up until now, but as of 0.4.2 I've
> found that I get this error for alot of my tests,
>
> object has no attribute '_sa_session_id'
>
> As background, the tests I have use a sqlite memory database, and
> on setUp do a meta.create_all and on tearDown do a meta.drop_all
>
> I've assumed that this error is due to me leaving objects around in
> the session, so my current theory is that I need to manually
> expunge the objects I create at the end of the test.

the above means that the object has been removed from the session 
(which is what clear() does) BUT later someone (inside it) needs that 
session for something, e.g. to read some lazy-attribute or whatever.
u may try to do clear_mappers() too - just a suggestion.




[sqlalchemy] Re: query.filter_or() ?

2007-12-20 Thread svilen

On Thursday 20 December 2007 16:06:53 Rick Morrison wrote:
> How would you specify the logical "parenthesis" to control
> evaluation order in complex expressions?
something like joinpoint? or maybe, a semi-explicit ".parenthesis()" 
of sorts... 
dunnow, but this is a thing that's missing; one has to build those 
complex expressions by hand and shove them to .filter() - or worse, 
to .from_statement(). And the whole nice query machinery (joins, 
grouping etc) goes away...

no, i'm not asking for an OQL; it's just a wish...
(while digging in the dbcook filter-expressions translator i found that i 
am duplicating code that is already there in the Query, doing the same 
thing from a slightly different angle)
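i.e. today one builds the OR by hand and feeds it to filter(), 
something like (or_ is the real helper; Person and the criteria are 
placeholders):

from sqlalchemy import or_

q = session.query(Person).filter(
        or_(Person.name == 'foo', Person.name == 'bar'))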

> On Dec 20, 2007 5:14 AM, svilen <[EMAIL PROTECTED]> wrote:
> > query.filter() does criterion = criterion & new
> > why not having one that does criterion = criterion | new ?
> > its useful to have some
> > query.this.that.filter.whatever.filter_or(...)




[sqlalchemy] query.filter_or() ?

2007-12-20 Thread svilen

query.filter() does criterion = criterion & new
why not having one that does criterion = criterion | new ?
it's useful to have some query.this.that.filter.whatever.filter_or(...)




[sqlalchemy] Re: Conventions for creating SQL alchemy apps

2007-12-20 Thread svilen

some may-be-stupid answers:
 - watch the number of lines per file
 - split it into app-domain-related parts, not SA-architectural parts
 - hell, do whatever is easier - start as one file, once u hit some limit 
of your nerve(r)s, split... but do keep one single file as the main 
entrance point

On Thursday 20 December 2007 09:37:26 Morgan wrote:
> Hi Guys,
>
> This may be a stupid question so flame away I don't care, but I
> have been wondering. Is there a better way to layout your SQL
> alchemy files that my random method? Does anyone have a convention
> that works well for them.
>
> I'm only asking this because I cannot decide how I want to lay out
> the SQLAlchemy component of my application.
>
> I'm thinking of putting it all in files like engines.py,
> mapping.py, metadata.py etc or should I just shove this all in one
> file.
>
> Let me know if I have had too much coffee or not.
> Morgan




[sqlalchemy] some error around trunk

2007-12-15 Thread svilen

seems something about .type vs .type_ or similar:

Traceback (most recent call last):
  File "tests/convertertest.py", line 152, in 
test4_balance_trans_via_prev_balance_date_subselect
trans.c.date > func.coalesce( sprev,0 )
  File "sqlalchemy/sql/expression.py", line 777, in __call__
return func(*c, **o)
  File "sqlalchemy/sql/functions.py", line 12, in __call__
return type.__call__(self, *args, **kwargs)
  File "sqlalchemy/sql/functions.py", line 35, in __init__
kwargs.setdefault('type_', _type_from_args(args))
  File "sqlalchemy/sql/functions.py", line 75, in _type_from_args
if not isinstance(a.type, sqltypes.NullType):
AttributeError: 'Select' object has no attribute 'type'





[sqlalchemy] Re: Column defaults in MapperExtension.after_insert

2007-12-11 Thread svilen

> This works nicely for attributes that I set directly. However, it
> breaks when it comes across a column that is defined as:
>
> sa.Column('date_created', sa.DateTime,
>   default=sa.func.current_timestamp(type=sa.DateTime))
>
> The attached script should show the problem. The traceback is:
>
> Traceback (most recent call last):
>   File "satest.py", line 32, in after_insert
> print "  %s = %r" % (prop.key, getattr(instance, prop.key))
>   File
> "/home/warranty/python/development/lib/python2.5/site-packages/SQLA
>lchem y-0.4.1-py2.5.egg/sqlalchemy/orm/attributes.py", line 40, in
> __get__ return self.impl.get(obj._state)
>   File
> "/home/warranty/python/development/lib/python2.5/site-packages/SQLA
>lchem y-0.4.1-py2.5.egg/sqlalchemy/orm/attributes.py", line 215, in
> get value = callable_()
>   File
> "/home/warranty/python/development/lib/python2.5/site-packages/SQLA
>lchem y-0.4.1-py2.5.egg/sqlalchemy/orm/attributes.py", line 628, in
> __fire_trigger
> self.trigger(instance, [k for k in self.expired_attributes if k
> not in self.dict])
>   File
> "/home/warranty/python/development/lib/python2.5/site-packages/SQLA
>lchem y-0.4.1-py2.5.egg/sqlalchemy/orm/session.py", line 1112, in
> load_attributes
> if
> object_session(instance).query(instance.__class__)._get(instance._i
>nstan ce_key, refresh_instance=instance,
> only_load_props=attribute_names) is None:
> AttributeError: 'User' object has no attribute '_instance_key'
>
> I assume the problem is that my date_created column isn't
> immediately available at the 'after_insert' stage, because it is
> generated in the SQL INSERT statement, but hasn't been read back
> from the database yet. Is there a more suitable hook-point than
> after_insert, where I can safely read values like this?
>
i was going to suggest u do a session.expire( that_attribute) but 
as i see from the traceback it is going after expired attrs and 
failing there, so that's for Michael... eventually the after_insert is 
happening too early and the _instance_key etc SA-system stuff is not 
set on the object yet.





[sqlalchemy] Re: Iterating over mapped properties

2007-12-11 Thread svilen

On Tuesday 11 December 2007 13:13:37 King Simon-NFHD78 wrote:
> Hi,
>
> I used to be able to iterate over mapper.properties.items() to get
> the name of each mapped property along with the object that
> implements it. However, in 0.4.1, trying to do this results in a
> NotImplementedError telling me to use iterate_properties and
> get_property instead, but I can't see a way to get the name of the
> property through either of these methods.
>
> Is there a generic way that, given a mapped class, I can get a list
> of the mapped properties with their names?
iterate_properties yields Property instances; maybe p.key is the name u 
want.
here is a generic wrapper simulating dict.items():

import sqlalchemy.orm

def props_iter( mapr, klas=sqlalchemy.orm.PropertyLoader ):
    try:
        i = mapr.properties       # pre-r3740 mappers expose the dict directly
    except NotImplementedError:   # about r3740: must use the new accessors
        for p in mapr.iterate_properties:
            if isinstance( p, klas):
                yield p.key, p
    else:
        for k, p in i.iteritems():
            yield k, p





[sqlalchemy] Re: Design: mapped objects everywhere?

2007-12-10 Thread svilen

On Monday 10 December 2007 12:12:19 Paul-Michael Agapow wrote:
> Yowser. Thanks to both of you - that's exactly what I mean. Any
> pointers on where I can find an example of a class that is
> "unaware" if it is in the db? Or is there a good example of the
> second solution,  of "a single class that does the what and why,
> and an interchangeable layer/context that does load/saving"? I'm
> digging through dbcook.sf.net but haven't found anything just yet.
well... good example - no.
there is a bad example:
dbcook/dbcook/usage/example/example1.py
The classes are plain classes (.Base can be anything/object), with 
some DB-related declarations/metainfo in them.
they do not have to know that they are DB-related.
if u dont give them to dbcook.builder.Builder, they will not become 
such. If u give them, they will become SA-instrumented etc, but u 
still do not have to change anything - as long as your methods do not 
rely (too much) on being (or not being) DB. 
see dbcook.expression as attempt to wrap some queries in independent 
manner.

more, if u redefine the Reflector u can have different syntax for 
db-metainfo - or get it from different place, not at all inside the 
class. So u can plug that in and out whenever u decide to (no example 
on this, its theoretical ;-).

Still, the final class (or object) will always be aware of being in 
the db or not; it is _you_ who should know when u do not care (95%) 
and when u do (5%).

All this is a "proper design and then self-discipline" issue: 
u have to keep the things separate (and i tell u, it is NOT easy);
if u start putting any db-stuff in the classes, no framework will 
help u.

complete opaque separation is probably possible, but will probably 
mean having 2 parallel class hierarchies instead of one.

> On 2007 Dec 7, at 22:07, [EMAIL PROTECTED] wrote:
> > Paul Johnston wrote:
> >>> "A Sample may be created by the web application or fetched from
> >>> the database. Later on, it may be disposed of, edited or
> >>> checked back into
> >>> the db."
> >>
> >> Sounds like you want your app to be mostly unaware of whether a
> >> class is
> >> saved in the db or not (i.e. persistent)? If so, I'd use a
> >> single class,
> >> design the properties so they work in non-persistent mode, and
> >> then they'll work in persistent mode as well.
> >
> > or like a single class that does the what and why, and an
> > interchangeable
> > layer/context that does load/saving (and the relations!).
> > in such situations declarative programming helps a lot, so u dont
> > bind your
> > self to (the) db (or whatever persistency). Check dbcook.sf.net.
> > My own
> > latest experience is about turning a project that was thought for
> > db/using
> > dbcook into non-db simple-file-based persistency. The change was
> > relatively
> >   small, like 5-10 lines per class - as long as there are
> > Collections etc
> > similar notions so Obj side of ORM looks same.
>
> --
> Dr Paul-Michael Agapow: VieDigitale / Inst. for Animal Health
> [EMAIL PROTECTED] / [EMAIL PROTECTED]
>
>
>
>
> 





[sqlalchemy] Re: delete children of object w/o delete of object?

2007-12-06 Thread svilen

On Wednesday 05 December 2007 23:36:12 kris wrote:
> with sqlalchemy 0.4.1,
>
> Is there an idiom for delete the children of the object
> without actually deleting the object itself?
>
> I tried
> session.delete (obj)
> session.flush()
> # add new children
> session.save (obj)
> session.flush()
>
> But it gave me the error
>InvalidRequestError: Instance '[EMAIL PROTECTED]' is already
> persistent
>
> which does not appear correct either.

u'll probably have to first detach the children from the 
parent.relation somehow?




[sqlalchemy] clause with crossproduct of joined-inheritance

2007-11-26 Thread svilen

hi
i have the following scenario (excuse the very brief "syntax"):

class BaseAddress:
  street = ... text
class Office( BaseAddress):
  some_fields
class Home( BaseAddress):
  other_fields

class Person:
 home = reference-to-HomeAddress
 office = reference-to-MainAddress
 otherstuff

now, i need something like 
query(Person).filter( 
(Person.home.street == 'foo') 
& (Person.office.street == 'bar') 
)

should that be something with .join() and then restoring joinpoint, or 
what?

if i write the joins explicitly, like
  query(Person).filter( and_(
Person.home_id == Home.id,
Home.street == 'foo', 
Person.main_id == Office.id,
Office.street == 'bar' )
)

this does not work, the BaseAddress is used only once instead of being 
aliased...

i know the above query is stupid, can be "optimized"; it boils down to 
  query(Person).filter( and_(
Person.home_id == BaseAddress.id,
BaseAddress.street == 'foo', 
Person.main_id == BaseAddress.id,
BaseAddress.street == 'bar' )
)
and then the need of aliasing is glaring... but this requires a human 
to do it.
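doing it by hand today looks roughly like this (alias() and and_ are 
real apis; base_address_table and the column names just follow the 
sketch above, so take them as an assumption):

from sqlalchemy import and_

home_addr = base_address_table.alias('home_addr')
office_addr = base_address_table.alias('office_addr')

q = session.query(Person).filter(and_(
        Person.home_id == home_addr.c.id,
        home_addr.c.street == 'foo',
        Person.main_id == office_addr.c.id,
        office_addr.c.street == 'bar'))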

can these aliases be somehow made automatically? e.g. multiple accesses 
to the same base table coming from different "subclasses" -> alias 

ciao
svilen




[sqlalchemy] Re: filter_by VS python properties/descriptors VS composite properties

2007-11-20 Thread svilen

On Tuesday 20 November 2007 11:37:29 Gaetan de Menten wrote:
> Hi people,
>
> I have some classes with "standard python" properties which target
> another python object and also uses several columns in the
> database. I also got a global factory function to create an
> instance of that target object out of the value of the columns (the
> class of that target object can vary).
>
> Now, I'd like to use those properties in filter criteria, as so:
>
> session.query(MyClass).filter(MyClass.my_prop == value)...
> session.query(MyClass).filter_by(my_prop_name=value)...
>
> I've tried using composite properties for that (passing the factory
> function instead of a composite class), and it actually works, but
> I'm a little nervous about it: can I have some bad side effect
> provided that in *some* cases (but not always) the target object is
> loaded from the database.
>
> I also dislike the fact I have to provide a __composite_values__ on
> all the possible target classes, while in my case, I would prefer
> to put that logic on the property side. I'd prefer if I could
> provide a callable which'd take an instance and output the tuple of
> values, instead of the method. Would that be considered a valid use
> case for composite properties or am I abusing the system?
>
> I've also tried to simply change those properties to descriptors so
> that I can override __eq__ on the object returned by accessing the
> property on the class. This worked fine for "filter". But I also
> want to be able to use filter_by. So, I'd wish that
> query(Class).filter_by(name=value) would be somehow equal to
> query(Class).filter(Class.name == value), but it's not. filter_by
> only accepts MapperProperties and not my custom property.
what would query(Class).filter(Class.name == value) mean in the case 
of a plain property? the Class.name == value would self-convert to an 
SA expression?

then what if the filter_by(**kargs) is simply doing 
 and_( *( getattr( Class, k) == v for k, v in kargs.items() ) )

i dont understand the _composite bit, why u have to bother with that?




[sqlalchemy] Re: 2 questions

2007-11-13 Thread svilen

On Monday 12 November 2007 23:11:25 Michael Bayer wrote:
> On Nov 12, 2007, at 2:07 PM, [EMAIL PROTECTED] wrote:
> > hi
> > 1st one:  i am saving some object; the mapperExtension of the
> > object fires additional atomic updates of other things elsewhere
> > (aggregator).
> > These things has to be expired/refreshed... if i only knew them.
> > For certain cases, the object knows exactly which are these
> > target things. How (when) is best to expire these instances, i.e.
> > assure that nexttime they are used they will be re-fetched?
> > a) in the mapperext - this would be before the flush?
> > b) later, after flush, marking them somehow ?
>
> the "public" way to mark an instance as expired is
> session.expire(instance).  if you wanted to do this inside the
> mapper extension, i think its OK as long as you do the expire
> *after* the object has been inserted/updated (i.e. in
> after_insert() or after_update()).
lets say A points to B and A.price is accumulated in the B, and that B 
needs to be expired.
i've done the expire in the A.mapExt.after_insert(). 
but the trouble comes when creating both A and B: they are not yet 
persistent and the expire() (as well as the refresh()) hits this:
session.refresh( g)
  File "sqlalchemy/orm/session.py", line 725, in refresh
if self.query(obj.__class__)._get(obj._instance_key, reload=True) 
is None:
AttributeError: _instance_key

i'm not sure if i am doing the proper thing at all.
Something like: i need an intermediate flush() to get B in the database 
first; then the insert of A will update that B and expire it 
eventually.
i.e. i need to have the B record in the DB in order to have the 
aggregation/update working; else i am updating nothing...
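i.e. roughly (save/flush are the normal 0.4 session calls; A/B as in 
the example above):

b = B()
session.save(b)
session.flush()        # B is persistent now and has its primary key

a = A()
a.price = 10
a.b = b
session.save(a)
session.flush()        # A's after_insert can now update/expire the existing B row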




[sqlalchemy] ticket 819/r3762

2007-11-13 Thread svilen

what a coincidence, 2 days ago we stepped on this bindparam-types 
thing; table.some_decimal_column == decimal.Decimal(5) did not always 
work.

now it mostly works, i think there is one more case that breaks for 
me: when the column itself is hidden in a function.
e.g.
table_A = Table( 'Nalichnost', meta,
    Column( 'db_id',   primary_key= True,   type_= Integer, ),
    Column( 'price', Numeric(precision=10,length=2,asdecimal=True), ),
)

print session.query( A).filter(
  (func.coalesce( A.price, 0) ==Decimal( 1))
).all()

Traceback (most recent call last):
  File "aa.py", line 40, in 
(func.coalesce( Nalichnost.cena, 0) ==Decimal( 1))
  File "sqlalchemy/orm/query.py", line 586, in all
return list(self)
  File "sqlalchemy/orm/query.py", line 634, in __iter__
return self._execute_and_instances(context)
  File "sqlalchemy/orm/query.py", line 637, in _execute_and_instances
result = self.session.execute(querycontext.statement, 
params=self._params, mapper=self.mapper)
  File "sqlalchemy/orm/session.py", line 527, in execute
return self.__connection(engine, 
close_with_result=True).execute(clause, params or {}, **kwargs)
  File "sqlalchemy/engine/base.py", line 784, in execute
return Connection.executors[c](self, object, multiparams, params)
  File "sqlalchemy/engine/base.py", line 835, in execute_clauseelement
return self._execute_compiled(elem.compile(dialect=self.dialect, 
column_keys=keys, inline=len(params) > 1), distilled_params=params)
  File "sqlalchemy/engine/base.py", line 847, in _execute_compiled
self.__execute_raw(context)
  File "sqlalchemy/engine/base.py", line 859, in __execute_raw
self._cursor_execute(context.cursor, context.statement, 
context.parameters[0], context=context)
  File "sqlalchemy/engine/base.py", line 875, in _cursor_execute
raise exceptions.DBAPIError.instance(statement, parameters, e)
sqlalchemy.exceptions.InterfaceError: (InterfaceError) 
Error binding parameter 1 - probably unsupported type. 
u'SELECT "Nalichnost".db_id AS "Nalichnost_db_id", "Nalichnost".cena 
AS "Nalichnost_cena" FROM "Nalichnost" 
WHERE coalesce("Nalichnost".cena, ?) = ? ORDER BY "Nalichnost".oid' 
[0, Decimal("1")]

which is the same as what was before r3762 for half of the 
Decimal-passing queries.

i guess the funcs could return anything, so their return type is 
probably unknown generally. For this one, i am generating the query, 
so i could hint the bindparam something about the type of the 
column - but i dont know what.
any idea?
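one guess - wrap the literals in explicitly typed ones so the dialect 
knows how to bind the Decimal (literal() and type_ are real apis; 
whether this is the intended fix for #819, i dont know):

from sqlalchemy import literal, Numeric

num = Numeric(precision=10, length=2, asdecimal=True)
crit = (func.coalesce(A.price, literal(0, type_=num))
        == literal(Decimal("1"), type_=num))
print session.query(A).filter(crit).all()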




[sqlalchemy] Re: 2 questions

2007-11-13 Thread svilen

clarification for this below: i have non-ORM updates happening inside an 
ORM transaction (in after_insert() etc). How do i make them use the 
parent transaction? i have a connection there.
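
what i mean by 'i have a connection there', roughly (a sketch; 
counter_table / counter_id are hypothetical names - the point is to 
execute on the connection that the hook receives, so the statement joins 
the flush's transaction instead of autocommitting on the engine):

from sqlalchemy.orm import MapperExtension, EXT_CONTINUE

class CounterExt( MapperExtension):
    def after_insert( self, mapper, connection, instance):
        # 'connection' is the one the flush() is using, i.e. the parent transaction
        connection.execute( counter_table.update(
            counter_table.c.db_id == instance.counter_id,
            values={ 'curNum': counter_table.c.curNum + 1}))
        return EXT_CONTINUE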

> >> and, why atomic updates also have with commit after them? or is
> >> this sqlite-specific?
> >
> > every CRUD operation requires a commit.  DBAPI is always inside
> > of a transaction.
>
> mmm i mean i have bunch of inserts and updates without a commit,
> and then several atomic updates come with their own commits. and
> then one more commit at the end.
> e.g.
> * SA: INFO BEGIN
> * SA: INFO UPDATE "SequenceCounter" SET "curNum"=? WHERE
> "SequenceCounter".db_id = ?
> * SA: INFO [2, 8]
> * SA: INFO INSERT INTO "Nalichnost" (kolvo_kym, cena, obj_id,
> data_godnost, sklad_id, disabled, time_valid, kolvo_ot, stoka_id,
> time_trans) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
> * SA: INFO ['0', '978', None, None, 3, 0, None, '0', 1, None]
> * SA: INFO INSERT INTO "Nalichnost" (kolvo_kym, cena, obj_id,
> data_godnost, sklad_id, disabled, time_valid, kolvo_ot, stoka_id,
> time_trans) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
> * SA: INFO ['0', '94', None, None, 3, 0, None, '0', 1, None]
> ...
> * SA: INFO INSERT INTO "DocItem" (kolichestvo, data_godnost,
> sk_ot_id, number, sk_kym_id, opisanie, cena, stoka_id, doc_id)
> VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
> * SA: INFO ['702', None, None, 3, 5, None, '668', 1, 2]
> * SA: INFO INSERT INTO "DocItem" (kolichestvo, data_godnost,
> sk_ot_id, number, sk_kym_id, opisanie, cena, stoka_id, doc_id)
> VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
> * SA: INFO ['422', None, None, 4, 6, None, '17', 1, 2]
>
> * SA: INFO UPDATE "Nalichnost" SET
> kolvo_ot=(coalesce("Nalichnost".kolvo_ot, ?) + ?) WHERE
> "Nalichnost".db_id = ?
> * SA: INFO [0, 643, 4]
> * SA: INFO COMMIT
> * SA: INFO UPDATE "Nalichnost" SET
> kolvo_kym=(coalesce("Nalichnost".kolvo_kym, ?) + ?) WHERE
> "Nalichnost".db_id = ?
> * SA: INFO [0, 643, None]
> * SA: INFO COMMIT
> * SA: INFO UPDATE "Nalichnost" SET
> kolvo_ot=(coalesce("Nalichnost".kolvo_ot, ?) + ?) WHERE
> "Nalichnost".db_id = ?
> * SA: INFO [0, 702, 5]
> * SA: INFO COMMIT
> * SA: INFO UPDATE "Nalichnost" SET
> kolvo_kym=(coalesce("Nalichnost".kolvo_kym, ?) + ?) WHERE
> "Nalichnost".db_id = ?
> * SA: INFO [0, 702, None]
> * SA: INFO COMMIT
> * SA: INFO UPDATE "Nalichnost" SET
> kolvo_ot=(coalesce("Nalichnost".kolvo_ot, ?) + ?) WHERE
> "Nalichnost".db_id = ?
> * SA: INFO [0, 9, 3]
> * SA: INFO COMMIT
> * SA: INFO COMMIT
>
>
>
> 





[sqlalchemy] Re: r3727 / AbstractClauseProcessor problem

2007-11-08 Thread svilen

> > i dont really understand why u need the ACP being so different to
> > plain
> > visitor; i mean cant they share some skeleton part of traversing,
> > while
> > putting all the choices (visit* vs convert; onentry/onexit;
> > stop/dont) in their own parts.
> > After all, visitor pattern is twofold,  a) Guide + b) Visitor;
> > the Guide
> > doing traversing, the Visitor noting things; choice where to go
> > might be
> > in visitor and/or in guide. some times (one extreme) the visitor
> > is just
> > one dumb functor; other cases (other extreme end)  the visitor is
> > very sofisticated and even does guiding/traversing.
> > Here it looks more like second case, u have most of both sides
> > put in the Visitor, and only small part (specific visit_* /
> > copy_internals) left to the actual nodes.
> > And to me, the skeleton is still same between ACP and
> > ClauseVisitor.
>
> you cant use plain visitor beacuse you are copying the whole
> structure in place at the same time, and a method is deciding
> arbitrarily to not copy certain elements, and instead returns an
> element that was set from the outside; that element cannot be
> mutated since its not from the original structure, therefore it
> cannot be traversed.
well this is a behavior that can be controlled - to traverse the 
originals that are to be replaced, or not; and to traverse the 
replacement AFTER it has been replaced or not.

> as it turns out, this visitor still has lots of problems which will
> continue to prevent more sophisticated copy-and-replace
> operations...some of the ways that a Select() just works from the
> ground up just get in the way here.

mmmh. u can think of splitting the Visitor into 3: a Guide (who 
traverses _everything_ given), a Visitor (who does things), and an 
intermediate Decisor, who decides where to go / what to do. But this 
can get complicated (slow), although it would be quite clear who does 
what.
Also, do have both onEntry and onExit hooks for each node; i am sure some 
operations will require both: first to gather info, second to make a 
decision about what to do with it while still at that level.

i've done quite a lot of tree/expression traversers, and while 
read-only walking does not care much whether it happens on entry or on exit 
(except if u want depth- or breadth-first), replacements-in-place and 
copy+replace sometimes needed both entry and exit hooks, and they were 
conditional, e.g. in leafs.
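
to illustrate outside of SA, a pure-python sketch (hypothetical node 
objects with a .children list):

class Guide( object):
    'walks everything; the visitor observes and decides'
    def traverse( self, node, visitor):
        visitor.on_entry( node)                  # gather info on the way down
        for i, child in enumerate( getattr( node, 'children', ())):
            replacement = self.traverse( child, visitor)
            if replacement is not None:          # Decisor-style choice
                node.children[i] = replacement
        return visitor.on_exit( node)            # may return a replacement

class Visitor( object):
    def on_entry( self, node): pass
    def on_exit( self, node): return None        # None = keep the node as is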




[sqlalchemy] Re: mapper/mapperExtension initialization order / mapper.properties

2007-11-08 Thread svilen

forget the instrument_class(). 

i do a separate MapExt, and append it to the mapper.extensions 
manually. So if this post-mapper() appending does not screw things 
up, all else is ok.

> i have another option to forget the above auto-approach and add the
> extension separately, after the mapper(...) is ready - like
> mapr.extension.append(e).
> Is there anything wrong or that needs be done over it?





[sqlalchemy] mapper/mapperExtension initialization order / mapper.properties

2007-11-08 Thread svilen

g'day.
in the Aggregator i have a mapper extension that needs info from the 
mapper, like the local table and the mapping/naming of column properties.
It used to hook on instrument_class() for that, but now 
mapper.properties, in its new get()/iterate() form, is not 
yet available when mapper extensions are set up (e.g. at 
extension.instrument_class).

Mapper.__init__(): 
...
    self._compile_class()
    self._compile_extensions()
    self._compile_inheritance()
    self._compile_tables()
    self._compile_properties()
    self._compile_selectable()

is there any specific reason for the compile_extensions to be that 
early in this order? any other hook that i can use?

i have another option: forget the above auto-approach and add the 
extension separately, after the mapper(...) is ready - like 
mapr.extension.append(e). 
Is there anything wrong with that, or anything that needs to be done around it?
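
i.e. something like this (a sketch; AggregatorExt stands for my extension 
class):

m = mapper( MyClass, my_table)
m.extension.append( AggregatorExt())
# by now _compile_properties() has already run (see the __init__ order above),
# so the extension can look at m.local_table / m.get_property(...) itself
# instead of waiting for instrument_class()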





[sqlalchemy] Re: r3727 / AbstractClauseProcessor problem

2007-11-07 Thread svilen

ahha. so i am replacing one whole subexpr with something, and the 
original subexpr is not traversed inside.
if i comment out the stop_on.add(), it attempts to traverse the result 
subexpr, not the original one. 
i want the original to be traversed. Something like doing onExit 
instead of the current onEntry.
if it's too hard, i can probably traverse it twice, once just marking, 
a 2nd time replacing things? i'll try

On Wednesday 07 November 2007 20:02:05 svilen wrote:
> On Wednesday 07 November 2007 19:33:22 Michael Bayer wrote:
> > ohyoure *extending* abstractclauseprocessor ???
> >
> > well yes, thats
> > going to change things quite a bit.  I think you should study ACP
> > in its current form; what its doing now is faithfully calling
> > convert_element() for *every* element in the expression, and also
> > is not copying any elements before calling convert_element() -
> > convert_element() always gets components from the original clause
> > only.   if convert_element() returns non-None, the resulting
> > element is assembled into the output, and traversal *stops* for
> > the remainder of that element.  this is different behavior than
> > it was before.  the reason it stops for a replaced element is
> > because its assumed that the replacement value is not part of the
> > expression which is being copied, and therefore should not be
> > copied or processed itself.  if its that second part of the
> > behavior thats breaking it for you, we can add an option to
> > switch it off (comment out line 156, stop_on.add(newelem) to
> > produce this).
>
> this did not change things, the column is still not traversed.
> maybe something else also has to be changed.
>
> i want a copy of original expression, where certain things are
> replaced by my things, and no need to go inside them - so this
> stop_on as u describe is okay... unless: what u mean remainder?
> that the returned element is not further traversed (thats ok), or
> the parent of that element is not traversed anymore (not ok)?
>
> > this new version of ACP can locate things besides just plain
> > Table, Alias and Column objects; it can locate things like Joins
> > embedded in a clause which match the target selectable.
> >




[sqlalchemy] Re: r3727 / AbstractClauseProcessor problem

2007-11-07 Thread svilen

On Wednesday 07 November 2007 19:33:22 Michael Bayer wrote:
> ohyoure *extending* abstractclauseprocessor ??? 
 
> well yes, thats 
> going to change things quite a bit.  I think you should study ACP
> in its current form; what its doing now is faithfully calling
> convert_element() for *every* element in the expression, and also
> is not copying any elements before calling convert_element() -
> convert_element() always gets components from the original clause
> only.   if convert_element() returns non-None, the resulting
> element is assembled into the output, and traversal *stops* for the
> remainder of that element.  this is different behavior than it was
> before.  the reason it stops for a replaced element is because its
> assumed that the replacement value is not part of the expression
> which is being copied, and therefore should not be copied or
> processed itself.  if its that second part of the behavior thats
> breaking it for you, we can add an option to switch it off (comment
> out line 156, stop_on.add(newelem) to produce this).
this did not change things, the column is still not traversed. 
maybe something else also has to be changed.

i want a copy of the original expression, where certain things are 
replaced by my things, with no need to go inside them - so this 
stop_on as u describe is okay... unless: what do u mean by remainder? that 
the returned element is not further traversed (thats ok), or that the 
parent of that element is not traversed anymore (not ok)?


> this new version of ACP can locate things besides just plain Table,
> Alias and Column objects; it can locate things like Joins embedded
> in a clause which match the target selectable.
>
> On Nov 7, 2007, at 10:45 AM, svilen wrote:
> > On Wednesday 07 November 2007 16:57:08 Michael Bayer wrote:
> >> On Nov 7, 2007, at 2:03 AM, [EMAIL PROTECTED] wrote:
> >>> - something changed in the traversing (AbstractClauseProcessor
> >>> - r3727)
> >>> and it does not find proper things...
> >>
> >> ACP has been entirely rewritten.   if you can provide simple
> >> tests in the form that theyre present in test/sql/generative.py
> >> and/or test/sql/ selectable.py that would be helpful.  I have a
> >> feeling its not "missing" things, its just doing it slightly
> >> differently.
> >
> > http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/misc
> >/aggregator/ (no it does not need dbcook)
> > $ cd dbcook/misc/aggregator/tests
> > $ PYTHONPATH=$PYTHONPATH:../.. python convertertest.py
> >
> > ...
> > 
> > FAIL: count tags per movie
> >  File "tests/convertertest.py", line 73, in
> > test1_count_tags_per_movie['oid']) ...
> > AssertionError: ['oid'] != ['tabl', 'oid']
> > 
> > FAIL: count tags per movie
> >  File "tests/convertertest.py", line 73, in
> > test1_count_tags_per_movie['oid']) ...
> > AssertionError: ['oid'] != ['tabl', 'oid']
> >
> > 
> > i did print the interesting elements in my
> > Converter.convert_element(), and the result is that
> > a) order is slightly different - which i dont care
> > b) 1 item is not traversed in r3727
> > e.g.
> >
> > r3626:
> >> Column tags.tabl
> >> Column tags.oid
> >> Column movies.id
> >> Column tags.tabl
> >> Column tags.oid
> >> Column movies.id
> >> Column users.id
> >> Column userpics.uid
> >> Column userpics.state
> >
> > 
> >
> > r3627:
> >> Column tags.tabl
> >> Column tags.oid
> >> Column movies.id
> >> Column tags.oid
> >> Column movies.id
> >> Column users.id
> >> Column userpics.uid
> >> Column userpics.state
> >
> > the 2nd tags.tabl is missing, hence the assertFails
> >
> > ciao
> > svilen
>
> 





[sqlalchemy] Re: r3727 / AbstractClauseProcessor problem

2007-11-07 Thread svilen

also, i put a 

class ClauseVisitor( sql_util.AbstractClauseProcessor):
    def convert_element( me, e): return None

at the beginning of test/sql/generative.py, and after ignoring this 
or that error, here is a similar thing:

==
FAIL: test_correlated_select (__main__.ClauseTest)
--
Traceback (most recent call last):
  File "sql/generative.py", line 235, in test_correlated_select
self.assert_compile(Vis().traverse(s, clone=True), "SELECT * FROM 
table1 WHERE table1.col1 = table2.col1 AND table1.col2 
= :table1_col2")
  File "/home/az/src/ver/sqlalchemy-trunk/test/testlib/testing.py", 
line 262, in assert_compile
self.assert_(cc == result, "\n'" + cc + "'\n does not match \n'" + 
result + "'")
AssertionError: 
'SELECT * FROM table1 WHERE table1.col1 = table2.col1'
 does not match 
'SELECT * FROM table1 WHERE table1.col1 = table2.col1 AND table1.col2 
= :table1_col2'

here a whole subexpr is gone


On Wednesday 07 November 2007 17:45:04 svilen wrote:
> On Wednesday 07 November 2007 16:57:08 Michael Bayer wrote:
> > On Nov 7, 2007, at 2:03 AM, [EMAIL PROTECTED] wrote:
> > > - something changed in the traversing (AbstractClauseProcessor
> > > - r3727)
> > > and it does not find proper things...
> >
> > ACP has been entirely rewritten.   if you can provide simple
> > tests in the form that theyre present in test/sql/generative.py
> > and/or test/sql/ selectable.py that would be helpful.  I have a
> > feeling its not "missing" things, its just doing it slightly
> > differently.
>
> i did print the interesting elements in my
> Converter.convert_element(), and the result is that
>  a) order is slightly different - which i dont care
>  b) 1 item is not traversed in r3727
> e.g.
>
> r3626:
>  > Column tags.tabl
>  > Column tags.oid
>  > Column movies.id
>  > Column tags.tabl
>  > Column tags.oid
>  > Column movies.id
>  > Column users.id
>  > Column userpics.uid
>  > Column userpics.state
>
> 
>
> r3627:
>  > Column tags.tabl
>  > Column tags.oid
>  > Column movies.id
>  > Column tags.oid
>  > Column movies.id
>  > Column users.id
>  > Column userpics.uid
>  > Column userpics.state
>
> the 2nd tags.tabl is missing, hence the assertFails
>
> ciao
> svilen
>
> 





[sqlalchemy] Re: r3727 / AbstractClauseProcessor problem

2007-11-07 Thread svilen

On Wednesday 07 November 2007 16:57:08 Michael Bayer wrote:
> On Nov 7, 2007, at 2:03 AM, [EMAIL PROTECTED] wrote:
> > - something changed in the traversing (AbstractClauseProcessor -
> > r3727)
> > and it does not find proper things...
>
> ACP has been entirely rewritten.   if you can provide simple tests
> in the form that theyre present in test/sql/generative.py and/or
> test/sql/ selectable.py that would be helpful.  I have a feeling
> its not "missing" things, its just doing it slightly differently.

http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/misc/aggregator/
(no it does not need dbcook)
$ cd dbcook/misc/aggregator/tests
$ PYTHONPATH=$PYTHONPATH:../.. python convertertest.py

...

FAIL: count tags per movie
  File "tests/convertertest.py", line 73, in 
test1_count_tags_per_movie['oid']) ...
AssertionError: ['oid'] != ['tabl', 'oid']

FAIL: count tags per movie
  File "tests/convertertest.py", line 73, in 
test1_count_tags_per_movie['oid']) ...
AssertionError: ['oid'] != ['tabl', 'oid']


i did print the interesting elements in my 
Converter.convert_element(), and the result is that 
 a) the order is slightly different - which i dont care about
 b) 1 item is not traversed in r3727
e.g.
r3626:
 > Column tags.tabl
 > Column tags.oid
 > Column movies.id
 > Column tags.tabl
 > Column tags.oid
 > Column movies.id
 > Column users.id
 > Column userpics.uid
 > Column userpics.state

r3627:
 > Column tags.tabl
 > Column tags.oid
 > Column movies.id
 > Column tags.oid
 > Column movies.id
 > Column users.id
 > Column userpics.uid
 > Column userpics.state
the 2nd tags.tabl is missing, hence the assert fails

ciao
svilen




[sqlalchemy] Re: Code Organisation

2007-11-07 Thread svilen

u can use the timephase separation, i.e. declare-time vs runtime;
i.e. import A at global scope in B, but import B at runtime (function) scope in A:

modB.py:
 import A
 ...

modA.py:
  def somefunc_or_method():
      import B
      ...


another solution is to have sort-of forward text-declarations that at a 
certain time are all translated into real things by someone else. But 
this has more overhead and is more useful for larger-scale 
dependencies; e.g. all business-obj klasses.

> Hi there,
>
> We have a pretty large project by now and we run into import loops.
> So I decided to restructure the code, and I hoped some people with
> more experience can comment on this.
>
> The basic problem is this:
>
> We have the database object code, mappers and tables neatly
> organized in one module (db). The controller code imports this
> module to get access to these objects. All fine.
>
> But we have another object called Connection which is a singleton
> class that actually manages the connection to our database. It is
> basically a wrapper for create_engine and contextual_session. But
> next to that it keeps info about the current login state like the
> employee, location etc. The mapped database objects need this info
> on their turn to add the current user to a new object etc. So the
> Connection object depends on the Mapped Database Objects, but the
> Mapped Database Object depend on the Connection object too.
>
> Anyone got a good tip to solve this? Or designed something similar?
>
> Thanks, Koen
>
>
> 





[sqlalchemy] Re: r3681/ session.save( smth_persistent) became error?

2007-10-31 Thread svilen

On Wednesday 31 October 2007 17:51:09 Michael Bayer wrote:
> also am considering taking save()/update()/save_or_update(), which
> are hibernate terms,  into just "add()".  maybe ill put that in
> 0.4.1.
why not save() - having the 'save_or_update' meaning?
would anyone need the new explicit save() - which actually has the 
meaning of 'add'? 
maybe rename the explicit one as add() and rename save_or_update as 
save... these are more talkative names to me




[sqlalchemy] r3681/ session.save( smth_persistent) became error?

2007-10-31 Thread svilen

why is this so?
i have a bunch of objects, and i make them all persistent.
then i have another bunch; some of them are new, others are from above, 
and i want this bunch to also become persistent. (If something there 
IS already persistent - so what, do nothing.)
how do i do it now?
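
a sketch of one way around it with the 0.4 api, as i understand it - 
save_or_update() still accepts both new and already-persistent instances:

for obj in second_bunch:
    session.save_or_update( obj)   # does the right thing for the already-persistent ones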




[sqlalchemy] couple of errors

2007-10-26 Thread svilen
hi
i found 2 issues with latest SA:

 * r3449 introduces a memory leak if SA is used and then everything is 
cleaned up (clear_mappers() etc) - ArgSingleton is not cleared anymore and 
grows. The same applies to orm.session._sessions, but i dont know what's 
kept there. 
 $ python _test_leak.py repeat=10 leak
or
 $ python _test_leak.py repeat=1000
and watch the memory usage


 * r3646 introduces an infinite recursion, which goes like:

  File "other/expression.py", line 355, in 
    p2 = session.query( Person).filter_by( name= 'pesho').first()
  File "sqlalchemy/orm/query.py", line 595, in first
    ret = list(self[0:1])
  File "sqlalchemy/orm/query.py", line 620, in __iter__
    context = self._compile_context()
  File "sqlalchemy/orm/query.py", line 873, in _compile_context
    value.setup(context)
  File "sqlalchemy/orm/interfaces.py", line 483, in setup
    self._get_context_strategy(querycontext).setup_query(querycontext, 
**kwargs)
  File "sqlalchemy/orm/strategies.py", line 553, in setup_query
    value.setup(context, parentclauses=clauses, 
parentmapper=self.select_mapper)
  File "sqlalchemy/orm/interfaces.py", line 483, in setup
    self._get_context_strategy(querycontext).setup_query(querycontext, 
**kwargs)
  File "sqlalchemy/orm/strategies.py", line 553, in setup_query
    value.setup(context, parentclauses=clauses, 
parentmapper=self.select_mapper)

... last two repeated ...


ciao
svilen





[attachments: _test_leak.py, _test_recursion.py, sa_gentestbase.py (application/python)]


[sqlalchemy] weird case of bindparam confilct

2007-09-18 Thread svilen

g'day
i have a case where one bindparam's shortname conflicts with another 
bindparam's key. 

The sql.Column._bindparam assigns the column's name as the bindparam's 
shortname, while having the full calculated _label as its key, and 
the bindparam is requested unique. later, in 
Compiler.visit_bindparam, both key and shortname, that is, col._label 
and col.name, are associated with this bindparam, but: the shortname goes 
in as is, while the key is made unique by appending some _%d count to 
it.

And i have another plain bindparam using that same column name as 
its key, and this conflicts with the above.

i know i can rename all my own bindparams, or prefix them or something, 
but why is the (longer) key being mangled with a count, while the (shorter) 
shortname is put in directly as is and not mangled too?

ciao
svilen




[sqlalchemy] Re: Feature suggestion: Description attribute in Tables/Columns

2007-09-17 Thread svilen

suggestions:

on the descriptions... maybe right now u can subclass Column and 
Table and whatever with your own classes, adding the .description and 
passing all else down.

on the table-gathering, instead of python-parsing, u could just use 
the metadata and walk it - it will have all your tables, and they 
already know their (DB) names (see the sketch below).

Thus u can have a different layout for your documentation than what is in 
the source code.
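
e.g. a small sketch of such a walk (plain metadata api; any description 
attributes u add via your own subclasses can be printed the same way):

def document_tables( metadata):
    # walk the MetaData registry instead of parsing the source files
    for table in metadata.tables.values():
        print table.name
        for col in table.columns:
            print '   ', col.name, col.type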

svilen

On Monday 17 September 2007 16:07:15 Hermann Himmelbauer wrote:
> Hi,
> I am creating my database via SQLAlchemy, therefore I use several
> "Column()" objects in a Table() clause.
>
> For my documentation, it is very handy to gather the table
> definitions directly from my SQLAlchemy-based python code - that's
> relatively easy, I simply import my table-definitions file and look
> for module attributes that end with "_table" (by convention, I
> always name my tables like that).
>
> Then I can read the table/columns, format them nicely and put them
> in my documentation.
>
> What's missing, however, are description fields: It would be
> perfect, if I could specify a description attribute for the table
> and the column directly, e.g. like that:
>
> myperson_table = Table('person', metadata,
>   Column('id', Integer, primary_key=True, description =
> 'Identification Number'),
>   Column('name', String(20), description = 'Name of a person'),
>   description = 'Table for my Persons')
>
> I could, also create my own Table/Column classes that hold this
> information, but maybe this could be implemented in SA as a
> standard feature?
>
> Best Regards,
> Hermann






[sqlalchemy] Re: Automatically loading data into objects

2007-09-14 Thread svilen

On Friday 14 September 2007 14:41:14 Hermann Himmelbauer wrote:
> Hi,
> In one of my database tables I have a varchar that is mapped to an
> object with a string attribute. This specific varchar should
> however be represented by a certain Python object, therefore it
> would be very handy, if there would be a way to automatically
> load/represent this data. Is there support from SQLAlchemy?
>
> I thought about writing a property for my attribute but I'm unsure
> if this is compatible with SQLAlchemy, as SA sets up its own
> properties for attributes when the object is mapped, so probably
> mine would be overwritten?
>
> Best Regards,
> Hermann
i guess the property for xx would store the data into some extra attr, 
say _xx, which u can map DB-wise. So xx will be a facade to 
the real DB column. u can also have an additional mapping of 
attr-names: mapper-name vs column-name vs column-db-name.
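
a sketch of that facade idea (MyRichType / person_table are hypothetical 
names; 0.4-style mapper with an explicit properties entry for the 
underlying column):

from sqlalchemy.orm import mapper

class Person( object):
    def _get_xx( self):
        return MyRichType( self._xx)    # build the rich object from the varchar
    def _set_xx( self, value):
        self._xx = str( value)          # store the raw string back
    xx = property( _get_xx, _set_xx)

mapper( Person, person_table, properties={
    '_xx': person_table.c.xx,           # map the DB column onto the _xx attr
})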




[sqlalchemy] Re: Why is explicit 'and_' required for filter but not filter_by?

2007-09-14 Thread svilen

On Thursday 13 September 2007 22:54:25 Ryan Fugger wrote:
> In 0.4: Is there any reason that I (as far as i can tell) need an
> explicit 'and_' to use multiple parameters with Query.filter, as
> opposed to filter_by, where I don't need an explicit 'and_'?  I
> think the latter is cleaner.
because filter_by implies and_-ing its keyword criteria ( x=v1, y=v2, etc...), 
while filter( expr) takes any expression; and_() is just one tiny (albeit 
very common) part of the possibilities.
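
i.e., roughly (a sketch, assuming a mapped User class with name and age 
columns; spell it User.c.name if your version doesnt expose the 
instrumented class attributes):

from sqlalchemy import and_, or_

session.query( User).filter_by( name='ed', age=30)                 # implicit and_
session.query( User).filter( and_( User.name=='ed', User.age==30)) # the explicit equivalent
session.query( User).filter( or_( User.name=='ed', User.age>65))   # anything else filter can take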




[sqlalchemy] Re: Many-to-Many, Column not available

2007-09-11 Thread svilen

wild guess: do u need relations_table.id? rename/remove it

On Tuesday 11 September 2007 12:34:47 KyleJ wrote:
> I get the same result with this in 0.3.10 and 0.4beta5
>
> Basic idea: I have two tables which hold various data and a third
> table which let's different rows in each table be related to
> another (many-to-many relationship).
>
> Table/ORM code:
> base_table = Table('base_type', metadata,
> Column('id', types.Integer, primary_key=True,
> autoincrement=True), Column('name', types.String(255),
> nullable=False, default=''), Column('pickledData',
> types.PickleType)
> )
>
> people_table = Table('people', sac.metadata,
> Column('id', types.Integer, primary_key=True,
> autoincrement=True), Column('name', types.String(255),
> nullable=False, default=''), Column('email', types.String(255),
> nullable=False, default=''), Column('pickledData',
> types.PickleType)
> )
>
> relations_table = Table('relations', sac.metadata,
> Column('id', types.Integer, primary_key=True,
> autoincrement=True), Column('toObj', types.Integer, default=0),
> Column('fromObj', types.Integer, default=0),
> Column('toType', types.Integer, default=0),
> Column('fromType', types.Integer, default=0)
> )
>
> class BaseType(object): # type 0
> pass
> class Person(object): # type 1
> pass
>
> mapper(Person, people_table) # <-- not complete yet, just trying to
> get basic many-to-many working below
> mapper(BaseType, base_table, properties={
> 'people': relation(Person, secondary=relations_table,
> viewonly=True,
> primaryjoin=and_(people_table.c.id ==
> relations_table.c.fromObj, relations_table.c.fromType == 1),
> secondaryjoin=and_(base_table.c.id ==
> relations_table.c.toObj, relations_table.c.toType == 0),
> foreign_keys=[relations_table.c.fromObj]),
> 'base': relation(BaseType, secondary=relations_table,
> viewonly=True,
> primaryjoin=and_(base_table.c.id ==
> relations_table.c.fromObj, relations_table.c.fromType == 0),
> secondaryjoin=and_(base_table.c.id ==
> relations_table.c.toObj, relations_table.c.toType == 0),
> foreign_keys=[relations_table.c.fromObj])
> })
>
>
> So, I do a query on BaseType, and take the first item and then
> iterate through the people property:
> base_q = Session.query(BaseType)
> item = base_q.get_by(id=0)
> if item:
> for person in item.people:
> print person.name
>
> Which raises an exception:
> sqlalchemy.exceptions.InvalidRequestError: Column 'people.id' is
> not available, due to conflicting property
> 'id': 0x2aac3910>
>
>
> My guess is it's something to do with foreign_keys (from my
> searching here, it would appear foreign_keys isn't a way to
> "pretend" that foreign keys exist in the schema).
>
> Any help would be greatly appreciated. :)
>
>
> 





[sqlalchemy] Re: How to safely increasing a numeric value via SQLAlchemy

2007-09-11 Thread svilen

On Tuesday 11 September 2007 13:35:18 Hermann Himmelbauer wrote:
> Am Dienstag, 11. September 2007 10:54 schrieb svilen:
> > in 0.4 there is atomic update, e.g. update set a=expression
> >
> > syntax is something like
> > table.update( values=dict-of-name-expression ).execute(
> > **bindings-if-any)
> > expressions is whatever sa sql expression
>
> Ah, that's interesting. Is it possible to specify an if construct
> there, too?
CASE? or whatever the sql for it is.
> I need to select the value, subtract something from it, check that
> the resulting value is > 0 and if yes, write it.
maybe this 'if' has to go into the external filter (the WHERE clause), thus 
changing only certain rows.
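
a rough sketch of pushing that check into the statement itself 
(hypothetical mytable / column names; 0.4-style table.update(), assuming 
the metadata is bound to an engine):

amount = 123
result = mytable.update(
    (mytable.c.id == some_id) & (mytable.c.myvalue - amount > 0),
    values={ 'myvalue': mytable.c.myvalue - amount}
).execute()
if result.rowcount == 0:
    # nothing matched: row missing, or the value would have gone below 0
    pass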




[sqlalchemy] Re: How to safely increasing a numeric value via SQLAlchemy

2007-09-11 Thread svilen

in 0.4 there is atomic update, e.g. UPDATE ... SET a=expression

the syntax is something like 
table.update( values=dict-of-name-to-expression ).execute( 
**bindings-if-any)
where the expressions are whatever sa sql expressions
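
e.g. a concrete instance of that (hypothetical table/column names; the 
kwargs to execute() fill the bindparams):

from sqlalchemy import bindparam

mytable.update(
    mytable.c.id == bindparam( 'row_id'),
    values={ 'myvalue': mytable.c.myvalue + bindparam( 'delta')}
).execute( row_id=4, delta=123)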


On Tuesday 11 September 2007 09:25:46 Hermann Himmelbauer wrote:
> Hi,
> I need to safely increase a numeric value via SQLAlchemy:
> I have a table that has a column with numeric data. For increasing
> the value, I need to read the data, add some value and store it, so
> it would look like that:
>
> - select myvalue from mytable
> - myvalue += 123
> - update mytable and set myvalue to 123
>
> However, problems arise when two of this operations are done
> simultaneously, e.g. operation A and B:
>
> A - select
> B - select
> A - increase
> B - increase
> A - update
> B - update
>
> In this case, operation A is overwritten by B.
>
> One viable solution would be to make the operation atomic, by e.g.
> locking the database row. Is this possible with SQLAlchemy? If yes,
> how?
>
> Or is there a better way?
>
> Best Regards,
> Hermann






[sqlalchemy] Re: add condition over a classmethod

2007-09-10 Thread svilen

something's missing here.. what's the link between classmethod - select - etc?
do explain again/more...
u mean the classmethod generates the filter-expression?
what's the difference between classmethod and plain method here?
all the same, just call it: self.myclassmethod(..)

On Monday 10 September 2007 14:40:57 Glauco wrote:
> Yes this is strange..  but i want (if is possible) to implement in
> select process,  a condition that is evaluated as a result of
> classmethod.
>
> In my case i must select all person from my database that are
> inside a geografical  circle with coordinate = x,y... and radius =
> z.
>
> i must use a callable (wich is called for each item) because the
> inside or outside is calculated for each item.
> i cannot iterate over result of the qry, because after this
> particular condition i must add more other filter  condition .
>
>
> My workflow data is something like
>
> my_base_qry = select .
>
> if condition1:
>my_base_qry = my_base_qry.filter( ...)
>
> if coordinate:
>my_base_qry = my_base_qry.filter(   my_method_inside_circle (
> latitude, longitude,radius)   )
>
> if condition3:
>my_base_qry = my_base_qry.filter( ...)
>
> 
> if condition_n:
>my_base_qry = my_base_qry.filter( ...)
>
>
>
>
>
> sorry for my poor english
> any idea?
> Glauco






[sqlalchemy] _state and related

2007-09-10 Thread svilen

before r3463, _state was a property and was set up on the fly whenever 
used.
now it is being set up in the __init__-replacement of the object.

Thus, with the property it was possible, as a side-effect, to have an 
object instance _before_ having any sqlalchemy around, then 
declare/build mappers/whatever, and then save/use that instance as 
db-persistent.
Now this wont work, as the instance has no _state attribute, and no one 
to set it up.

i guess this usage case - of instances having a wider lifetime than the 
orm-mapping itself - is rare. i use it for tests, running many 
db-tests over the same instances. So i'll probably put a check in my save() 
method to set up that missing ._state. Not sure about 
mapext.init_instance(), and why it is called before the original 
oldinit, and is given that oldinit as an argument.

Anyway it would be nice if these lifetime-related 
expectations/limitations were documented somewhere.
Another one is the ._instance_key that stays on the instance after the orm 
is gone (the ._state will also stay).

ciao
svilen




[sqlalchemy] Re: Many tables in eagerloading

2007-09-04 Thread svilen

On Tuesday 04 September 2007 17:12:26 Arun Kumar PG wrote:
> Good work svilan!
>
> couple questions from what you suggested:
> >> skipping creation of objects - only using the data, if time of
> >> creation gets critical.
> In my query wherein the eagerloading is being done on 8 tables if I
> manually run the join query being generated by SA ORM I get a
> result set of over 70,000+ records because this is contains
> duplicate things. I guess SA ORM traverses the returned result and
> creates the right number of objects discarding the duplicate or
> unwanted rows.
probably, but unless u discard 50% of the rows, it makes not much 
difference.

> How much time is generally spent by SA if let's say we have a
> 1,000,000 rows returned and this resultset will form the whole
> heirarchy i.e. A (main entity) A.b, A.c [] (a list) , A.c.d []
> (again a list) etc. ? Is that a considerable time ?
well, u just sum how many constructors have to be executed per row, 
and multiply by a million... IMO anything in python multiplied by a million 
is considerable time. Maybe in 10 years it will not be. Thats why i want 
to keep my model separate from the denormalization scheme.

> What is the best optimization technique followed (recommended by SA
> ORM) when it comes to aggregation of data from multiple tables with
> lots of rows in it (and continuously growing) ?

SA 0.4 can do polymorphic queries via multiple per-subtype queries, but 
i have no idea how that works or how much better it is than one huge 
union/leftouterjoin + the type-switching over it, + u'll miss the overall 
ordering.
Mike, can this mechanism be used somehow for my "vertical" loading, 
i.e. loading some "column" from some related table that is not in the 
polymorphic hierarchy?




[sqlalchemy] Re: Many tables in eagerloading

2007-09-04 Thread svilen

we have similar data and reports, so here is what we have invented so 
far.
beware: its all pure theory.

 -1 (horizontal) (eager) loading ONLY of the needed row attributes, 
also hierarchically (a.b.c.d)
 -2 (vertical) simultaneous loading of columns - e.g. the lazy 
attributes - wholly, or in portions/slices (depending on UI 
visibility or other slice-size)
 -3 skipping creation of objects - only using the data, if the time of 
creation gets critical. For example, in a simple report of a name.alias 
and age of person, the creation of 100,000 Persons can be omitted. 
To be able to do drill-down, the person.db_id would be needed+stored 
too.
 -4 caching of some aggregations/calculations in special 
columns/tables, so they're not re-invented every time
 -5 translating the whole report - calculations, aggregations, grouping 
etc. - into sql and using the result as is (with the same thing about 
db_id's)

and combinations of the above, in whatever subset.

i've done something about aggregation (-4) with Paul Colomiets, see the 
latest development at 
http://www.mr-pc.kiev.ua/en/projects/SQLAlchemyAggregator/
But there's more to it, as i want it completely transparent and 
switchable (on/off).

i'd be most interested in the last one, but as i see the trend, very 
very few people look a level higher than plain sql expressions (and 
_want_ all the sql-dependencies that follow from that), let alone 
translations of meta-info...

further on the way, we'll probably have more of these invented, unless 
someone does it first, which would be very welcome...

ciao
svilen

On Tuesday 04 September 2007 09:56:49 Arun Kumar PG wrote:
> Guys,
>
> Was wondering if we have 10 tables or so which are related to each
> other and are required during  let's say report generation then if
> I specify eagerloading for all those attributes which are related
> to these tables then down the line as the records in the table
> grows the temp tables generated in the join (left outer in
> eagerloading) will be massive before appying the where clause. So I
> guess we should worry about this or is that fine as long as the
> tables are getting join on primary/foreign key as the query plan
> looks decent ?
>
> I am doing this for 7-8 tables out of which data is growing
> continuously in couple tables with a factor of 2XN every month. I
> am worried if eagerloading may be a problem in the sense if it will
> bring the db server down to knees some day considering the joins
> happening ? FYI: the eager loading is specified at the Query level
> so that I can optimize where I really need.
>
> But currently it's faster as compared to executing individual
> query. And in my case if I go with individual queries due to lazy
> load it takes forever. And in many cases the request times out when
> using a web browser. So eargerloading is better but just worried
> about speed. Any good guidelines on how we should use eagerloading,
> best practises, any limitation etc ?




[sqlalchemy] Re: How to construct a dialect-aware user type

2007-08-30 Thread svilen

IMO, the current way as of the src (sorry, i havent read the docs at user 
level): u'll need a two-sided implementation - one abstract SA part, and one 
dialect-dependent part. In each dialect there are 2 mappings: one 
abstract-SA-type -> specific-dialect-type (look for something named 
colspecs), and another one used for reflection - named 'pragma_names' 
or 'ischema_names' or similar.

so u have to create your custom abstract type and custom dialect 
type(s) and add them to those mappings - eventually substituting 
some available type where inapplicable.

see in some dialect (in databases/*) how a dialect-specific type is 
made, subclassing the abstract one.

of course there might be an easier "official" way...
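
a very rough sketch of the two sides (dict/module names as i remember them 
from the 0.3/0.4 dialect sources - check your version's databases/mysql.py 
before relying on this):

from sqlalchemy import types
from sqlalchemy.databases import mysql

class Double( types.Numeric):
    'abstract type, used in table definitions'

class MyMSDouble( Double):
    def get_col_spec( self):
        return 'DOUBLE'

# DDL side: abstract-type -> dialect-type
mysql.colspecs[ Double] = MyMSDouble
# reflection side: db typename -> dialect-type
mysql.ischema_names[ 'double'] = MyMSDouble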

> Hello,
>
> apologies if I'm the obvious or important things in the
> documentation - and I'm obviously very new to SQLAlchemy...
>
> I'd like to create my own types which are aware of different
> database dialects. For example, I need a type 'Double' which holds
> a representation of a double precision floating point number. With
> MySQL, I would  robably use the native DOUBLE type, with Oracle
> BINARY_DOUBLE. I would like to encapsulate the data base dependency
> with a properly defined custom type.
>
> Is there a straigtforward way to achieve this? I have managed to
> build custom types, of course, but I haven't found a way on how to
> deduce the current data base engine / dialect from the
> types.TypeEngine...
>




[sqlalchemy] Re: interface error with Decimal("0") in where clause

2007-08-24 Thread svilen

i am storing only accounting amounts so i do care...
a long time ago there were no decimals easily available, so we used 
fixed-point arithmetic (over integers) instead of decimals.
either way, u cant store 1/3, and i dont think any db supports 
fractions.

On Friday 24 August 2007 12:17:39 Florent Aide wrote:
> How would you do something like this then:
>
> session.query.(LedgerLine).query(LedgerLine.base_amount.between(dec
>imal1, decimal2))
>
> the between() won't work since sqlite won't be able to compare your
> pickled amounts.
>
> Pickling cannot be an option in all cases particularly when you are
> storing amounts for accounting books...
>
> Florent.
>
> On 8/24/07, svilen <[EMAIL PROTECTED]> wrote:
> > decimals.. u can use pickling? slower, yes.




[sqlalchemy] Re: interface error with Decimal("0") in where clause

2007-08-24 Thread svilen

decimals.. u can use pickling? slower, yes.

On Friday 24 August 2007 10:37:53 Florent Aide wrote:
> Hi,
>
> As far as I know, sqlite does not allow you to store decimal
> objects, only floats. Which really is not the same. If you really
> need decimals (ie: accounting books anyone ?) then you should
> consider using firebird which is the only other database engine
> supported by SA that is embeddable in a python application without
> the need of and external server.
>
> If someone has a way to accurately manipulate floats with the same
> precision as decimals I would gladly hear from it because for the
> moment I just banned sqlite from my dbengine choices for this
> particular reason :(
>
> Regards,
> Florent.




[sqlalchemy] Re: echo-es

2007-08-22 Thread svilen

> Another thing, the dots that are produced by unittests magically
> disappear if meta.bind.echo = True, very interesting..
shoot me, thats my problem




[sqlalchemy] echo-es

2007-08-22 Thread svilen

in 0.3, one could do 
 meta = MetaData( whatever, echo=True)
later, early 0.4, the echo kwarg was gone, so it got less convenient, 
adding another line:
 meta.bind.echo = True
As of latest trunk, neither works, one has to explicitly do
 meta = MetaData( create_engine(whatever, echo=True))
which is probably fine for explicitness with lots of args, but is not 
very useful for a simple echo. 

IMO it is important to get the simplest use case (minimum explicit 
typing):
 meta = MetaData( dburl, echo=True)
working.

Another thing, the dots that are produced by unittests magically 
disappear if meta.bind.echo = True, very interesting..




[sqlalchemy] dbcook updated for SA 0.4

2007-08-21 Thread svilen

hi
The automatic DB-declaration layer, dbcook.sf.net/, is now working 
with either sqlalchemy 0.3 or 0.4. Other changes:

 - All DB_* parametrisation class attributes become DBCOOK_*.
 - DB-recreate works for postgres/msqql
 - misc/metadata/: autoload, diff, copydata are ok
 - Some work has been done on declaring collections (one2many), but as 
i have no use cases for that (everything around is many2many because 
of bitemporal), it may not happen soon. If anyone can give dbcook a 
go and send some complaints/questions/suggestions, that would be very 
welcome.



For those interested in doing multiple SA-version compatibility:
 
 * The auto-guessing of the version is done by 
v03 = hasattr( sqlalchemy, 'mapper')

 * a useful function for finding some attribute within the quickly 
shifting module-structure is:

#in dbcook.util.attr:
def find_valid_fullname_import( paths, last_non_modules =1):
    'search for a valid attribute path, importing them if needed'
    ...

with usage like (e.g. in expression.py)
ClauseAdapter = find_valid_fullname_import( '''
sqlalchemy.sql.util.ClauseAdapter
sqlalchemy.sql_util.ClauseAdapter
''',1 )

_COMPOUNDexpr = find_valid_fullname_import( '''
sqlalchemy.sql.expression.ClauseList
sqlalchemy.sql.ClauseList
sqlalchemy.sql._CompoundClause
''',1 )

_BinaryExpression = find_valid_fullname_import( '''
sqlalchemy.sql.expression._BinaryExpression
sqlalchemy.sql._BinaryExpression
''',1 )

(some of my usages date back to 0.3.6... i still keep them all)

The migration was not a very easy thing, as dbcook uses a lot of 
under-cover internals of SA.


svn co 
https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/

ciao
svilen




[sqlalchemy] Re: overriding collection methods

2007-08-20 Thread svilen

On Monday 20 August 2007 18:01:49 jason kirtland wrote:
> svilen wrote:
> > a suggestion about _list_decorators() and similar.
> > they can be easily made into classes, i.e. non dynamic (and
> > overloadable/patchable :-).
>
> The stdlib decorators end up in a static, module-level dictionary
> that can be manipulated if you want to.  Wouldn't this be replacing
> a dict with some_cls.__dict__?

well, more or less...
i use similar function-made locals() namespaces a _lot_, and maybe 
thats why i avoid using them whenever i can - their contents are not 
easily changeable/inheritable/splittable... programmatically, 
piece-by-piece.
whatever.




[sqlalchemy] Re: overriding collection methods

2007-08-20 Thread svilen

On Monday 20 August 2007 17:29:52 jason kirtland wrote:
> [EMAIL PROTECTED] wrote:
> > hi
> > i need to have a list collection with list.appender (in SA 0.4
> > terms) that accepts either one positional arg as the value, or
> > keyword args which it uses to create the value. Each collection
> > instance knows what type of values to create.
> >
>  > [...]
>  > Any idea to fix/enhance this, letting **kwargs through to my
>  > function? The dynamic wrapper() can do this, while these preset
>  > ones cannot... while they should be equaly powerful.
>
> Hi Svil,
>
> @collections.appender
> @collections.internally_instrumented
> def append(self, obj=_NOTSET, **kargs):
>...
>
> > There are 2 (different) uses of an appender, one is the SA
> > itself, but the other is the programmer. SA will always use
> > single
> > arg/positionals, while i could use this or that or combination.
>
> SQLAlchemy's appender doesn't have to be the programmer's appender.
>  You can add a method solely for the orm's use if you like.  That's
> one of the points of the decorator syntax, divorcing function names
> from the interface.  If you want to keep 'append' for your own use,
> just tag another method as the @appender.
thanks for the suggestion. 

This would work if the default decorators were not force-wrapped 
anyway, in that "ABC decoration" part. i'm looking now to see why 
it is so. 

And anyway i need to first create the object and only then append it 
(the decorators will first fire the event on the object and only then 
append(), that is, call me), so i may have to look further/deeper. 
Maybe i can make my append create objects first and then call the 
actual appender - so yes, this is the way.
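
fwiw, the shape i'm heading for, following jason's hint (a sketch only; 
the kwargs-creating part is my own, and i still have to check how the 
default list instrumentation and an explicit @appender interact):

from sqlalchemy.orm import collections

class TypedList( list):
    # used as collection_class=lambda: TypedList( SomeChildClass)
    def __init__( self, child_type):
        self.child_type = child_type

    def add( self, obj=None, **kwargs):
        # programmer-facing: build the child from kwargs if no instance given
        if obj is None:
            obj = self.child_type( **kwargs)
        self.append( obj)          # plain list.append, instrumented by SA
        return obj

    @collections.appender
    def _appender( self, obj):
        # ORM-facing appender: always gets a ready instance
        self.append( obj)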





[sqlalchemy] Re: overriding collection methods

2007-08-20 Thread svilen

and no need for that __new__ replacement either - just use 
_list_decorators._funcs instead of _list_decorators()

On Monday 20 August 2007 17:05:32 svilen wrote:
> a patch, it got even tidier ;-) -
> no more _tidy() calls, all automated.
>
> On Monday 20 August 2007 16:41:30 svilen wrote:
> > a suggestion about _list_decorators() and similar.
> > they can be easily made into classes, i.e. non dynamic (and
> > overloadable/patchable :-).
> >
> > class _list_decorators( object):




[sqlalchemy] Re: overriding collection methods

2007-08-20 Thread svilen
a patch, it got even tidier ;-) - 
no more _tidy() calls, all automated.

On Monday 20 August 2007 16:41:30 svilen wrote:
> a suggestion about _list_decorators() and similar.
> they can be easily made into classes, i.e. non dynamic (and
> overloadable/patchable :-).
>
> class _list_decorators( object):  


Index: sqlalchemy/orm/collections.py
===================================================================
--- sqlalchemy/orm/collections.py	(revision 3375)
+++ sqlalchemy/orm/collections.py	(working copy)
@@ -746,7 +749,7 @@
 pass
 return wrapper
 
-def __set(collection, item, _sa_initiator=None):
+def _set(collection, item, _sa_initiator=None):
 """Run set events, may eventually be inlined into decorators."""
 
 if _sa_initiator is not False and item is not None:
@@ -754,7 +757,7 @@
 if executor:
 getattr(executor, 'fire_append_event')(item, _sa_initiator)
   
-def __del(collection, item, _sa_initiator=None):
+def _del(collection, item, _sa_initiator=None):
 """Run del events, may eventually be inlined into decorators."""
 
 if _sa_initiator is not False and item is not None:
@@ -762,17 +765,28 @@
 if executor:
 getattr(executor, 'fire_remove_event')(item, _sa_initiator)
 
-def _list_decorators():
+def _tidy_(fn, base):
+fn._sa_instrumented = True
+fn.__doc__ = getattr( base, fn.__name__).__doc__
+return fn
+
+def _tider( func, base):
+def f( fn): return _tidy_( func(fn), base)
+return f
+
+def _get_decorators( d, base):
+skip = '__module__', '__doc__',
+return dict( (k,_tider(v,base)) for k,v in d.iteritems() if k not in skip )
+
+
+class _list_decorators( object):#def _list_decorators():
 """Hand-turned instrumentation wrappers that can decorate any list-like
 class."""
 
-def _tidy(fn):
-setattr(fn, '_sa_instrumented', True)
-fn.__doc__ = getattr(getattr(list, fn.__name__), '__doc__')
 
 def append(fn):
 def append(self, item, _sa_initiator=None):
-# FIXME: example of fully inlining __set and adapter.fire
+# FIXME: example of fully inlining _set and adapter.fire
 # for critical path
 if _sa_initiator is not False and item is not None:
 executor = getattr(self, '_sa_adapter', None)
@@ -780,21 +794,18 @@
 executor.attr.fire_append_event(executor._owner(),
 item, _sa_initiator)
 fn(self, item)
-_tidy(append)
 return append
 
 def remove(fn):
 def remove(self, value, _sa_initiator=None):
 fn(self, value)
-__del(self, value, _sa_initiator)
-_tidy(remove)
+_del(self, value, _sa_initiator)
 return remove
 
 def insert(fn):
 def insert(self, index, value):
-__set(self, value)
+_set(self, value)
 fn(self, index, value)
-_tidy(insert)
 return insert
 
 def __setitem__(fn):
@@ -802,8 +813,8 @@
 if not isinstance(index, slice):
 existing = self[index]
 if existing is not None:
-__del(self, existing)
-__set(self, value)
+_del(self, existing)
+_set(self, value)
 fn(self, index, value)
 else:
 # slice assignment requires __delitem__, insert, __len__
@@ -830,114 +841,101 @@
len(rng)))
 for i, item in zip(rng, value):
 self.__setitem__(i, item)
-_tidy(__setitem__)
 return __setitem__
 
 def __delitem__(fn):
 def __delitem__(self, index):
 if not isinstance(index, slice):
 item = self[index]
-__del(self, item)
+_del(self, item)
 fn(self, index)
 else:
 # slice deletion requires __getslice__ and a slice-groking
 # __getitem__ for stepped deletion
 # note: not breaking this into atomic dels
 for item in self[index]:
-__del(self, item)
+_del(self, item)

[sqlalchemy] Re: overriding collection methods

2007-08-20 Thread svilen

a suggestion about _list_decorators() and similar.
they can be easily made into classes, i.e. non dynamic (and 
overloadable/patchable :-).

# (assuming _get_decorators and the global _tidy below are defined first)
class _list_decorators( object):   # instead of def _list_decorators()
    # ... all contents/decorators stay the same ...

    _funcs = _get_decorators( locals() )

    def __new__( klas):
        return klas._funcs

def _get_decorators( d):
    skip = '__module__', '__doc__'
    # whichever way preferred:
    r = dict( (k,v) for k,v in d.iteritems() if k not in skip)
    # or:
    #   r = d.copy()
    #   for s in skip: r.pop(s)
    return r

# def _tidy(fn): ...   becomes global

the only prerequisite for this is to rename __del() and __set() into 
_del/_set, or else they get looked up as private-named identifiers 
(that's Python's name mangling of double-underscore names inside a 
class body).
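
a generic, standalone illustration of that pattern (nothing here is 
SQLAlchemy's actual code; the names are made up):

class _demo_decorators( object):
    # decorators defined as plain functions in the class body, so a
    # subclass can override or patch individual ones
    def append(fn):
        def append(self, item):
            fn(self, item)
        return append

    def remove(fn):
        def remove(self, item):
            fn(self, item)
        return remove

    # harvest the class body into a plain dict, like the proposed _funcs
    _funcs = dict( (k, v) for k, v in locals().items()
                   if not k.startswith('_') )

print( sorted(_demo_decorators._funcs) )   # ['append', 'remove']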

ciao
svilen





[sqlalchemy] Re: overriding collection methods

2007-08-20 Thread svilen

another thing noted: the collections instrumentation fails over old-style 
python classes (not inheriting object), e.g. 
class myX: ...whatever...

it fails at _instrument_class(), because type(myX()) being 
<type 'instance'> is recognized as builtin, and apart from that 
util.duck_type_collection() may fail because issubclass does not work 
straightforwardly there, e.g. it must be 
import types
isa = (isinstance(specimen, (type, types.ClassType)) and issubclass 
   or isinstance)

neither types.ClassType (<type 'classobj'>) nor the above-mentioned 
<type 'instance'> are otherwise accessible as some predefined python 
type - only as type(someclass) or type(someclass()).
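
for reference, a tiny python 2 snippet showing the types involved:

import types

class OldStyle: pass              # classic class, no object base
class NewStyle(object): pass

print( type(OldStyle()) )         # <type 'instance'>
print( type(NewStyle()) )         # <class '__main__.NewStyle'>
print( isinstance(OldStyle, types.ClassType) )   # True  (<type 'classobj'>)
print( isinstance(NewStyle, type) )              # True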

ciao
svil




[sqlalchemy] query.select vs .filter + .from_statement

2007-08-16 Thread svilen

i have a premade filtering clause and give it to a query.select at 
runtime. Sometimes it's a simple x == 13 expression, another time it 
is a full sql construct like a polymorphic_union().

in 0.3 both went into .select(), but in 0.4 the 2 kinds seem split 
between .from_statement and .filter.
so i either have to check at runtime which one to call, or turn the 
simple expression into a full sql construct.

why doesn't filter() accept everything as before? 
IMO the types are distinct enough to check/switch on...

or, leave filter() as is - why not have another single method that accepts 
everything? All of it is a filter anyway.
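
(for the record, a minimal sketch of the runtime check mentioned above, 
against a 0.4-style query; the exact import path of Select is an assumption 
and may differ between versions)

from sqlalchemy.sql import expression

def apply_premade(query, premade):
    if isinstance(premade, expression.Select):
        # a full sql construct, e.g. built around a polymorphic_union()
        return query.from_statement(premade)
    # a plain criterion like sometable.c.x == 13
    return query.filter(premade)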




[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-14 Thread svilen


performance-wise - do u have any test/target for profiling? otherwise i can 
repeat some tests i did sometime in february (if i remember them..)

=

while looking to replace all {} with dict/Dict(), i found some things. 
Here's the list, quite random; probably some can just be ignored if not 
an actual issue - i may have misunderstood things; have a look.
(btw the overall candidates for the replacement are like 60-70 lines, 
all else are kwargs or lookup-tables.)

---
database/informix.py:
  ischema_names = { ... }  has duplicate keys/entries

---
database/*
  get_col_spec(self) etc:
these string-formats may be better without the artificial dict,
 eg. return self._extend("CHAR(%(length)s)" % {'length': self.length})
 -> return self._extend("CHAR(%s)" % self.length )
  or
 -> return self._extend("CHAR(%(length)s)" % self.__dict__ )
no idea if get_col_spec() is used that much to have crucial impact on 
speed though, at least it looks simpler.

---
orm.util.AliasedClauses._create_row_adapter()
class AliasedRowAdapter( object):
 1. can't this be made a standalone class, returning an instance 
initialized with the map, which is then __call__()ed ?
 2. this can be faster if: 
a) has_key = __contains__  # instead of yet another function call
b) __getitem__ uses try/except instead of a double lookup of the key in the map
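
(a tiny standalone sketch of (1) and (2); purely illustrative, not the 
actual SA class, and the fallback behaviour in __getitem__ is an assumption)

class RowAdapter(object):
    def __init__(self, mapping):
        self.mapping = mapping
    def __getitem__(self, key):
        try:
            return self.mapping[key]      # single lookup on the happy path
        except KeyError:
            return key                    # assumed fallback, for illustration
    def __contains__(self, key):
        return key in self.mapping
    has_key = __contains__                # alias, no extra function call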

---
orm.mapper
 Mapper._instance():
what is the point of the **{'key':value, ... } calls?
eg. if extension.populate_instance(self, context, row, instance, 
**{'instancekey':identitykey, 'isnew':isnew}) ...
(the same as passing instancekey=identitykey, isnew=isnew directly);
the same thing is done with a separate variable a page later;
btw there are several of these **{} in the file

 also, Mapper._options is redundant (leftover?), never used


---
orm.attribute
  AttributeManager.init_attr():
the saving this one eventually gives is too small, compared to the 
cost of a property call of ._state.

  AttributeManager.register_attribute():
the def _get_state(self) that is made into the property _state can 
eventually be made faster with try/except instead of 'if'.

 btw: can't that ._state property be removed altogether (i.e. made a 
plain attribute)? then init_attr() MUST be there, setting it up as a plain 
dict.

---
orm/unitofwork
UOWTask._sort_circular_dependencies():
def get_dependency_task(obj, depprocessor):
try:
dp = dependencies[obj]
except KeyError:
dp = dependencies.setdefault(obj, {})
isn't the setdefault() alone enough?

---
engine.url.
  def translate_connect_args(self, names):
this assumes the order of the passed names matches the order of 
attribute_names inside... very fragile. Why not use a set of kwargs 
like (attr_name=replacement_name, defaulting to None), then just use 
the non-empty ones?
  def _parse_rfc1738_args():
the 'opts' dict is redundant; it would be cleaner if the args were just 
passed to URL( name=value)

---
topological.py
   QueueDependencySorter.sort():
  'cycles' is a redundant, never-used variable

---
util:
  ThreadLocal:
   - wouldn't it be faster if the key in the _tdict were tuple(id,key) and 
not some formatted string made off these? or is the key non-hashable?
   - the engine/threadlocal.TLEngine._session() issues a hasattr() on 
such an object. how does that actually work? IMO it always fails

==
hey, thanks for the MetaData.reflect()!

svil




[sqlalchemy] Re: a renaming proposal

2007-07-27 Thread svilen

On Friday 27 July 2007 18:14:48 Michael Bayer wrote:
> On Jul 27, 2007, at 6:29 AM, avdd wrote:
> > On Jul 27, 9:45 am, jason kirtland <[EMAIL PROTECTED]> wrote:
> >> This is the last opportunity
> >> for terminology changes for a while, so I offer this up for
> >> discussion.
> >
> > Does anyone else think "orm.relation" is wrong?  Perhaps
> > "relationship" if you must have a noun, or "relates_to", etc, but
> > "relation" could cement the popular misunderstanding of
> > "relational database".
>
> "relation" is wrongish because it conflicts with Codd's term
> "relation".
>
> However, I have pored over SQL and relational articles to see what
> word they use when they want to describe a table A that joins to
> table B.
>
> and they always say:  "relationship".
>
> If you look up "relation" and "relationship" in the dictionary,
> *they mean the same thing*.
>
> somehow naming it "relationship" feels weird (though we can go with
> that if we want).  "relates_to", probably more accurate..but the
> kinds of words we've been using in "properties" (and generally) are
> nouns.  Elixir is the one using the "verb/preposition" style.

well, there are relations (interdependencies if u want) between the 
tables in SQL, AND there are relations (interdependencies) between 
objects in OOP, AND there are relations (interdependencies) between 
things in the real world. Call them whatever, they mean the same thing, 
but in a different context/"world".

so i think "relation" is okay, as long as the context of usage is 
obvious: sql.relation vs orm.relation vs whatever.relation. Also, the 
verb-ish way (relates_to) adds some directional scent, which may or 
may not be intended/needed...

i have a different trouble with orm.relation, mostly that i'm mixing it 
up with a collection in some cases, when it is just a way to implement a 
collection.




[sqlalchemy] Re: SQLAlchemy 0.4 MERGED TO TRUNK

2007-07-27 Thread svilen

here are the changes i needed to get dbcook (an abstraction layer over SA) 
and its tests going to some extent (70% - relations and expressions 
are broken):
 - BoundMetaData -> MetaData - lots (15)
 - metadata.engine.echo=True - lots (14)
   (a short before/after sketch of these two follows the list)
What's the difference between create_engine's echo=boolean and 
MetaData's echo=boolean ??
 - import sqlalchemy.orm - i need mostly 4 things off that, but on 
lots of places (35): create_session, mapper, relation, clear_mappers. 
several more things like polymorphic_union, class_mapper etc on 
single occasions (2-3)
 *** above 3 are very common - but easy - about each SA-using file
 - query.get_by_whatever - geee, i didnt know i use it (3)
 - binaryExpression.operator - operator.xx-builtin and not text (1)

 - type_ vs type - too bad, now i need a key-faking dict. (3+2)
why, u're afraid type() will become reserved word or what?

 - mapper.polymorphic_identity without polymorphic_on - i have these, 
assuming that having no polymorphic_on ignores anything about 
polymorphism anyway... now it becomes very tricky: for D(C(B(A))), 
when making D, it does not really know whether A was polymorphic or not... 
it has to get the base_mapper and check its polymorphic_on... (3)

 - Select (and i guess Clause) lost its accept_visitor

 - sql.compile traversing takes more recursion levels than 
before - not really a problem

 - InstrumentedList gone = my append(key=value) hack gone.. must find 
the new way (1)
 - expression translator - simply forget it; it relies on way too many 
internal hooks, and should be rethought in terms of the new Query and stuff.
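
(a before/after sketch of the two mechanical renames at the top of the 
list; the sqlite URL is just for illustration)

# 0.3:
#   md = BoundMetaData('sqlite:///a.db')
#   md.engine.echo = True

# 0.4:
from sqlalchemy import MetaData
md = MetaData('sqlite:///a.db')
md.bind.echo = True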
 
The hack4repeatability stopped working :-) which was expected. 
Same about the 'nicer echo of select' hack. ticket/497 - u forgot me..

So: the bulk of the changes, line-count-wise, is cosmetic; but anything 
depending on internal hooks/hacks is ... near-dead.

i'm not sure whether to go 0.4, stay 0.3, or try being backward 
compatible. or add a branch. i did backward compatibility before, but 
there are way too many changes now.

ciao
svil

>On Friday 27 July 2007 07:36:50 Michael Bayer wrote:
> Note that this version has some backwards incompatibilities with 0.3
>
> When we get an idea as to how easily people are upgrading to 0.4,
> we'll figure out how aggressively we want to patch bug fixes from
> 0.4 back into 0.3 and continue releasing on that series.  Currently
> we plan to keep 0.3 going as long as people need it.




[sqlalchemy] Re: not updated relation one-to-many

2007-07-27 Thread svilen

i think u should not make 2 separate relations, but one relation on 
one of the mappers, with a backref to the other.
i.e. just 
 mapper( T1, t1, properties={"t2s": relation(T2, lazy=False, 
backref='t1')})
do check the docs, just in case.
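
a minimal sketch of that, reusing the setup (tables t1/t2, classes T1/T2, 
session s) from the quoted script below; the key point is to assign the 
related object instead of setting the raw t1_id column:

mapper(T1, t1, properties={"t2s": relation(T2, lazy=False, backref="t1")})
mapper(T2, t2)

a1 = T1(); s.save(a1); s.flush()

a2 = T2(); a2.name = "AAA"
a2.t1 = a1           # or: a1.t2s.append(a2) - the backref keeps both sides in sync
s.save(a2); s.flush()

print( [a.name for a in a1.t2s] )   # the in-memory collection already has the new row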

On Friday 27 July 2007 15:07:39 Michal Nowikowski wrote:
> Hello
>
> I've following problem. I've two tables (t1, t2) and relation t1
> (one) - t2 (many):
>   mapper(T1, t1, properties={"t2s": relation(T2, lazy=False)})
>   mapper(T2, t2, properties={"t1": relation(T1, lazy=False)})
>
> When I add row to t1, then to t2, and then run query for first row
> in t1, I see one
> element in collection t2s - it is ok.
> Then when I add second row to t2, the collection in t1 object is
> not updated.
> It still contains only one element.
>
> Example below.
>
> Could you tell me how to refresh collection in one-to-many
> relation???
>
> Regards
> Michal Nowikowski
>
>
> from sqlalchemy import *
>
> md = MetaData('sqlite:///a.db', echo=False)
> t1 = Table("t1", md,
>Column("id", Integer, primary_key=True),
>Column("version", Integer))
>
> t2 = Table("t2", md,
>Column("id", Integer, primary_key=True),
>Column("name", String),
>Column("t1_id", Integer, ForeignKey("t1.id")))
>
> md.create_all()
> s = create_session()
>
> class T1(object):
> pass
> class T2(object):
> pass
>
> mapper(T1, t1, properties={"t2s": relation(T2, lazy=False)})
> mapper(T2, t2, properties={"t1": relation(T1, lazy=False)})
>
> a1 = T1()
> s.save(a1)
> s.flush()
>
> a2 = T2()
> a2.t1_id = a1.id
> a2.name = "AAA"
> s.save(a2)
> s.flush()
>
> ra1 = s.query(T1).first()
> print [ a.name for a in ra1.t2s ]
>
> a22 = T2()
> a22.t1_id = a1.id
> a22.name = "BBB"
> s.save(a22)
> s.flush()
>
> rra1 = s.query(T1).first()
> print [ a.name for a in rra1.t2s ]
>
>
> 





[sqlalchemy] Re: SQLAlchemy 0.4 MERGED TO TRUNK

2007-07-27 Thread svilen

one suggestion / request.

As you're changing everything anyway, can u replace all important {} and 
dict() with some util.Dict, and set() with util.Set? 
The util.Ones can just point to dict/set by default.
The reason is so they can later be globally replaced by the user with 
Ordered ones, for example to achieve repeatability - for tests etc.

example ones are: 
 MetaData.tables
 unitofwork.UOWTransaction.dependencies
 unitofwork.UOWTask.objects
 mapperlib.Mapper.save_obj():  table_to_mapper = {}
 mapperlib.Mapper._compile_properties(): self.__props = {}
 sqlalchemy.topological.QueueDependencySorter.sort(): nodes = {}
these give 90% repeatability, but i could not make it 100%. )-:

On Friday 27 July 2007 07:36:50 Michael Bayer wrote:
> Hey ho -
>
> after around 400 revisions the 0.4 branch is merged to trunk:
>
> http://svn.sqlalchemy.org/sqlalchemy/trunk




[sqlalchemy] Re: autoload'ing metadata

2007-07-27 Thread svilen

On Friday 27 July 2007 12:44:49 Christophe de VIENNE wrote:
> 2007/7/26, [EMAIL PROTECTED] <[EMAIL PROTECTED]>:
> > noone wanting to try autoload'ing nor metadatadiff? i am
> > surprised.. Christophe, u can at least try how much autoload.py
> > works like your autocode2 - i got lost with 'schema' vs 'dbname'
> > - and/or add mysql support (;-)
>
> I tried to run it on a mssql db (Although I'd prefer to test it on
> a mysql db to see the actual differences from autocode2), but I got
> some errors :
well, u have put only mssql in your autocode2 - which is what i ripped. 
i've no idea about mysql - nor about mssql either.
it's up to you if u wanna hack it further, and find out what the right 
way is there.

> Traceback (most recent call last):
> ...
> 'sqlalchemy.databases.mssql.MSSQLDialect_pymssql'>): None 'SELECT
> tables_77bf.table_name, tables_77bf.table_schema \nFROM
> information_schema.tables AS tables_77bf \nWHERE
> tables_77bf.table_schema = %(tables_table_schema)s'
> {'tables_table_schema':
>  0xb78a7f0c>}
hmmm
replace line 91
 schema = engine.dialect
with
 schema = engine.url.database
this might be equivalent to your old code (if it works at all).
i'm not sure what should be there anyway... and i have nowhere to test 
it now.

ciao
svil




[sqlalchemy] Re: a renaming proposal

2007-07-27 Thread svilen

On Friday 27 July 2007 11:44:43 Gaetan de Menten wrote:
> On 7/27/07, svilen <[EMAIL PROTECTED]> wrote:
> > On Friday 27 July 2007 02:45:12 jason kirtland wrote:
> >
> >  - Catalog:
> > what is a sqlalchemy's metadata?
> >
> > >jason> "a catalog of tables available in the database."
> >
> > to me it holds everything about the "subset of database
> > structure", used in the app.
> >
> > as i have seen, sql-wise the term is metadata. going away from
> > sql? To me it is important. sure, it is not The Metadata of the
> > server. Why not just Table_collection? And, is it _just_ table
> > collection, or there's more to it? Catalog... of what? make it
> > TableCatalog then, or just TableSet? elements are uniq and not
> > ordered...
> > what about DBSchema/Schema/TableSchema - it does match one
> > schema, or no? can u have one metadata over tables from 2+
> > schemas?
>
> As I was reminded on IRC, metadata can hold more than Tables:
>
>  IF you go to the trouble to change that, I'd say simply
> "TableCollection" 
>  yah except indexes and sequences can be 
> in it too 
>  and possibly functions 
>  ok, bad idea then
>  MetaData is based off of fowler's usage of the word
>  and domains
>  and lots of other things
>  yeah it coujld have functions and domains someday too
>  it doesnt really right now

i guess triggers too?
in that case, TableWhatever isn't proper thing. 
Catalog... some Apple][ memories... Directory? DBSchema? DBStructure? 
DBDescription? DBMetaData (;-l)?




[sqlalchemy] Re: a renaming proposal

2007-07-27 Thread svilen

On Friday 27 July 2007 02:45:12 jason kirtland wrote:
> So there you have it.  I'm not married to this proposal by *any*
> means. The ideas gelled in my brain during the SQLAlchemy tutorial
> at OSCON, and this seems like the last opportunity to deeply
> question and reconsider what we have before a new generation of
> users takes on SQLAlchemy.
>
> -j

> Engines would be DataSources, and MetaData would be Catalogs.
i'm trying to clarify the meanings of these; let's do a semantic 
analysis of sorts.

 - Datasource: 
what is an sqlalchemy engine?
>jason> "What database is that bound to?"
to me engine mostly meant "the driver that does it".

so yes, it's not really an engine. But datasource is way too general. Are 
u going http? although... And it's not readonly - so it's not 
just a "source"; source implies "pulling" and never pushing. How about 
db_driver? data_driver? just driver? or something around it...

 - Catalog: 
what is a sqlalchemy's metadata?
>jason> "a catalog of tables available in the database."
to me it holds everything about the "subset of database structure", 
used in the app. 

as i have seen, sql-wise the term is metadata. going away from sql? To 
me it is important. sure, it is not The Metadata of the server.
Why not just Table_collection? And, is it _just_ a table collection, or is 
there more to it? Catalog... of what? make it TableCatalog then, or 
just TableSet? elements are unique and not ordered... 
what about DBSchema/Schema/TableSchema - it does match one schema, or 
no? can u have one metadata over tables from 2+ schemas? 

Anyway it may depend on which audience u are targeting with these names - 
those who have never seen an API, or those for whom names are important 
only to associate them with a library/version/use-case... both 
extremes are equally uninteresting imo.

i'm not "opposing for the sake of it", i'm just throwing in other 
view-points/ideas..
maybe "catalog" is a closer hit than "datasource". And, the names should 
automatically associate to _what_ they really are - datasource talks 
about data (and not about database) and source (readonly); catalog 
does not talk about tables either.

so my picks would be something like DataBaseDriver/DataDriver and 
TableSet - or TableCatalog. To me these better represent the meaning 
of those notions, and are not twofold/ambiguous.

ciao
svil




[sqlalchemy] Re: autoload'ing metadata

2007-07-26 Thread svilen

On Thursday 26 July 2007 11:37:08 Marco Mariani wrote:
> [EMAIL PROTECTED] ha scritto:
> > here some theory on comparing data trees, in order to produce the
> > changeset edit scripts.
> > http://www.pri.univie.ac.at/Publications/2005/Eder_DAWAK2005_A_Tr
> >ee_Comparison_Approach_to_Detect.pdf
>
> The complete title of the paper is "A Tree Comparison Approach To
> Detect Changes in Data Warehouse Structures".
>
> "data warehouse" is the key concept.
>
> > of course full automation is not possible and not needed - but
> > why not do maximum effect/help with minimum resources?
>
> I've not read it, but what is working for data warehouse could fail
> miserably in a normalized database.
sure. there are graphs there, not just trees. 
Apart from that, it's the same thing: nodes and edges.
u can try the metadatadiff.py - there are lots of node-types still to 
add/describe, but IMO the idea is there.
or u can keep doing it by hand. the choice is yours.

Actually, i'm the worst one to develop this - i don't have enough 
experience with sql and db-admining in general, nor do i know _all_ the 
internals of SA. 
But hey...




[sqlalchemy] Re: autoload'ing metadata

2007-07-25 Thread svilen

another version, separated autoload from code-generation, 
which is now the __main__ test.

http://dbcook.svn.sourceforge.net/viewvc/*checkout*/dbcook/trunk/autoload.py

now it is possible to do something like:

$ python autoload.py postgres://[EMAIL PROTECTED]/db1 | python - sqlite:///db2

copying the structure of input db1 database into the output db2.

ciao
svilen

> this is along the recent threads about metadata consistency between
> code and DB, and the DB-migration. Both these require a full
> metadata reflection from database.
>
> Here a version of autocode.py, hacked for couple of hours.
> It has more systematic approach, replaces back column types with SA
> ones, has sqlite and postgress, and is somewhat simpler but more
> dense. Output also looks nicer (nested identation). Not tested for
> mssql.
>
> my idea is to use this metadata-reflection as starting point
> towards model-migration technology or framework or whatever.
>
> one way is to put all metadata-reflection in the dialects
> themselves. Maybe there should be reflect_metadata() method, which
> will extract all tables/names, indexes, etc. This is what i like,
> although it means hacking 10 files instead of one. But it would be
> more easier/consistent on the long run.
>
> Another way is to pull all reflection stuff out of dialects, or at
> least separate it somehow.
>
> anyway.
>
> http://www.sqlalchemy.org/trac/wiki/UsageRecipes/AutoCode#autoload2
>codeorAutoCode3
>
> using some metadata howto from:
> http://sqlzoo.cn/howto/source/z.dir/tip137084/i12meta.xml
>




[sqlalchemy] Re: Choosing a few columns out a query object

2007-07-25 Thread svilen

> from sqlalchemy import *
>
> metadata = MetaData()
> docs = Table('docs', metadata)
> docs.append_column(Column('DocID', Integer, primary_key=True))
> docs.append_column(Column('Path', String(120)))
> docs.append_column(Column('Complete', Boolean))
>
> class Doc(object):
> def __init__(self, id, path, state):
> self.DocID = id
> self.Path = path
> self.Complete = state
>
> def __str__(self):
> return '(%s) %s %s' %(self.DocID, self.Path, self.Complete)
>
> if __name__ == "__main__":
> mapper(Doc, docs)
>
> db = create_engine( 'sqlite:///mydb' )
> db.echo = False
>
> s = create_session(bind_to = db)
> q = s.query(Doc)


i'm no expert but i don't see where u bind the metadata to the db.
u are binding the session - which usually isn't needed.




[sqlalchemy] Re: Choosing a few columns out a query object

2007-07-25 Thread svilen

On Wednesday 25 July 2007 15:01:59 alex.schenkman wrote:
> Hello:
>
> How do I get only a few columns from a query object?
>
> q = session.query(Document).select_by(Complete=False)
>
> would give me a list of rows (all columns) where Complete == False.
this would give u a list of objects, not rows. Eventually u can disable 
the references so they don't get loaded. query.select() takes a premade sql 
statement, but it should have enough columns to construct the object.

> I have seen the select([doc.c.name]) notation but it doesn't work
> for me.
these are plain sql constructs (non-ORM), a somewhat different beast.

> select([docs.c.DocID]).execute()
>
> Gives me:
> sqlalchemy.exceptions.InvalidRequestError: This Compiled object is
> not bound to any engine.
is your table's metadata bound to an engine?
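
one way to get there with the 0.3-era API the original script uses - bind 
the metadata to an engine, so plain select()s know where to execute (a 
sketch, reusing the columns from the first post):

from sqlalchemy import (BoundMetaData, Table, Column, Integer, String,
                        Boolean, select)

metadata = BoundMetaData('sqlite:///mydb')       # metadata knows its engine
docs = Table('docs', metadata,
             Column('DocID', Integer, primary_key=True),
             Column('Path', String(120)),
             Column('Complete', Boolean))
metadata.create_all()

result = select([docs.c.DocID]).execute()        # no "not bound" error now
print( result.fetchall() )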




[sqlalchemy] autoload'ing metadata

2007-07-25 Thread svilen

this is along the recent threads about metadata consistency between 
code and DB, and the DB-migration. Both these require a full metadata 
reflection from database.

Here's a version of autocode.py, hacked up in a couple of hours. 
It has a more systematic approach, maps column types back to the SA 
ones, handles sqlite and postgres, and is somewhat simpler but more 
dense. The output also looks nicer (nested indentation). Not tested for 
mssql. 

my idea is to use this metadata-reflection as a starting point towards a 
model-migration technology or framework or whatever.

one way is to put all the metadata-reflection in the dialects themselves. 
Maybe there should be a reflect_metadata() method, which will extract 
all tables/names, indexes, etc. This is what i like, although it 
means hacking 10 files instead of one. But it would be 
easier/more consistent in the long run.

Another way is to pull all reflection stuff out of dialects, or at 
least separate it somehow.

anyway.

http://www.sqlalchemy.org/trac/wiki/UsageRecipes/AutoCode#autoload2codeorAutoCode3

using some metadata howto from:
http://sqlzoo.cn/howto/source/z.dir/tip137084/i12meta.xml




[sqlalchemy] Re: Consistency with DB while modifying metadata

2007-07-24 Thread svilen

> i just saw there is some usagerecipe ModelUpdate in the wiki, may
> be a good start point:
> http://www.sqlalchemy.org/trac/wiki/UsageRecipes/ModelUpdate
>
and this one: 
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/AutoCode




[sqlalchemy] Re: Consistency with DB while modifying metadata

2007-07-24 Thread svilen

i just saw there is some usagerecipe ModelUpdate in the wiki, may be a 
good start point:
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/ModelUpdate

> > >> assert t.compare(t2)
> > >
> > > yes i was hoping for such method (:-)
> > > And the best will be if it can produce a list/ hierarchy of
> > > differences, which then programaticaly can be iterated - and
> > > checked and resolved or raised higher.
> > >
> > >> but why not just use autoload=True across the board in the
> > >> first place and eliminate the chance of any errors ?
> > >
> > > what do u mean? The db-model of the app will not be the
> > > db-model in the database - and the semantix will be gone.
> > > Example:
> > >  from simplistic renaming of columns/ tables, to splitting a
> > > class into clas+subclass (table into 2 joined-tables) etc
> >
> > ok, fine.  anyway, feel free to add a trac ticket for this one -
> > it'll need a volunteer.
>
> ticket #680, have a look if what i wrote is what was meant in this
> thread.
> i may look into it after 2-3 weeks - unless someone does it ahead
> of me ;P)
>
> 





[sqlalchemy] Re: Consistency with DB while modifying metadata

2007-07-24 Thread svilen

> >> assert t.compare(t2)
> >
> > yes i was hoping for such method (:-)
> > And the best will be if it can produce a list/ hierarchy of
> > differences, which then programaticaly can be iterated - and
> > checked and resolved or raised higher.
> >
> >> but why not just use autoload=True across the board in the first
> >> place and eliminate the chance of any errors ?
> >
> > what do u mean? The db-model of the app will not be the db-model
> > in the database - and the semantix will be gone.
> > Example:
> >  from simplistic renaming of columns/ tables, to splitting a
> > class into clas+subclass (table into 2 joined-tables) etc
>
> ok, fine.  anyway, feel free to add a trac ticket for this one -
> it'll need a volunteer.
ticket #680, have a look if what i wrote is what was meant in this 
thread.
i may look into it after 2-3 weeks - unless someone does it ahead of 
me ;P)




[sqlalchemy] Re: Consistency with DB while modifying metadata

2007-07-24 Thread svilen

On Tuesday 24 July 2007 17:30:27 Michael Bayer wrote:
>
> such a feature would make usage of table reflection, and then a
> comparison operation, along the lines of :
>
> ...
>
> assert t.compare(t2)
yes, i was hoping for such a method (:-)
And the best would be if it could produce a list/hierarchy of 
differences, which can then be iterated programmatically - and checked 
and resolved or raised higher.

> but why not just use autoload=True across the board in the first
> place and eliminate the chance of any errors ?
what do u mean? The db-model of the app will not be the same as the db-model 
in the database - and the semantics will be gone.
Example: 
 from simplistic renaming of columns/tables, to splitting a class 
into class+subclass (a table into 2 joined tables) etc




[sqlalchemy] Re: Consistency with DB while modifying metadata

2007-07-24 Thread svilen

On Tuesday 24 July 2007 16:22:43 Anton V. Belyaev wrote:
> Hey,
>
> I believe there is a common approach to the situation, but I just
> dont know it.
>
> Let say, I have some tables created in the DB using SQLAlchemy.
> Then I modify Python code, which describes the table (add a column,
> remove another column,...). What is the common way to handle this
> situation? I guess it would be good to have an exception raised
> when there is a mismatch between DB tables and Python-defined
> (using SQLAlchemy).

Very soon i'll be in your situation (with hundreds of tables), so i'm 
very interested if something comes up. 

it's in the todo list of dbcook. my idea so far is:
 - automatically reverse-engineer, i.e. autoload, the available 
db-structure into some metadata.
 - create another metadata from the current code
 - compare the 2 metadatas, and based on some rules - ??? - 
alter/migrate the DB into the new shape.
This has to be as automatic as possible, leaving only certain - if 
any - decisions to the user.
Assuming, of course, that the main decision - to upgrade or not to upgrade - 
comes out positive, and any locks etc. explicit access is obtained.
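
a rough sketch of those three steps (reflection API as of 0.4; 
build_model_metadata() is hypothetical and stands for "the metadata as 
defined by the current code"; the compare/migrate rules are the missing part):

from sqlalchemy import MetaData, create_engine

engine = create_engine('sqlite:///existing.db')

db_meta = MetaData()
db_meta.reflect(bind=engine)            # 1. what the database actually has

code_meta = build_model_metadata()      # 2. what the code says it should be

# 3. a very naive "diff" - the real alter/migrate rules would go here
for name, table in code_meta.tables.items():
    if name not in db_meta.tables:
        print( 'table missing in db: %s' % name )
        continue
    db_cols = set( c.name for c in db_meta.tables[name].columns )
    for col in table.columns:
        if col.name not in db_cols:
            print( 'column missing in db: %s.%s' % (name, col.name) )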

svil




[sqlalchemy] Re: Issue when loading module in scheduler utility

2007-07-23 Thread svilen

> First, I appended the sys.path var like this (relative, was
> absolute before):
> sys.path.append( 'vor')
IMO u should not touch sys.path unless u really really have no other 
choice. Although the above is another wholesale solution to your 
initial problem (and no need for my_import etc).




[sqlalchemy] Re: Issue when loading module in scheduler utility

2007-07-23 Thread svilen

> 1: At least I'm in a working context...since I will be the user of
> this jobs module 90% of the time for a while, I'll have a fighting
> chance of refining it further to handle the other things you allude
> to.
have fun then. As for the $100... give them to someone in _need_ (the 
mall is not in need).
And, one day, prepare some $x000 and come over (.bg for now) to teach 
u a thing or two. may save u lots of $100s. (-:)
ciao




[sqlalchemy] Re: Issue when loading module in scheduler utility

2007-07-23 Thread svilen

On Monday 23 July 2007 17:52:51 Jesse James wrote:
> > aaah, u are _that_ new...
> >  - use it instead of the __import__() func
> >  - original python library reference of the version u use;
> > e.g.http://docs.python.org/lib/built-in-funcs.html
> >
> > wow there's a level parameter now... somethin to try
>
> I'm not using 2.5 (2.4 still). I tried the 'my_import' function.
> Nice, but not what I needed (I knew that before I tried it, but
> what the heck...didn't take much to try it anyway).

> So, I'm at square 2.
> I know that I have a problem with trying to import a module that
> has already been imported under another name.
>
> Square 3 is where I DO know how to properly retrieve that module
> without the importer thinking it needs to reload it.
>
> There is NO square 4 (said firmly with authority).

-3: 2.5 or 2.4, all the same (except abs/rel imports, which still don't 
work in 2.5 anyway).
-2: u need my_import (or similar) because __import__( 'vor.model') 
will not give u what u want.
-1: u need to give the _same_ full absolute paths to __import__ (or 
substitute) or else u'll get duplicated modules. OR, check how the 
model.py is imported in the application, and __import__ it the same 
way. same for anything u want to import _again_, _later_.
this should bring u to the next square.

0: i haven't asked why the heck u need to __import__ - multiple times - 
but that's none of my business (MYOB).




[sqlalchemy] Re: Issue when loading module in scheduler utility

2007-07-23 Thread svilen

On Monday 23 July 2007 16:45:15 Jesse James wrote:
> which python reference (url?) are you speaking of?
> how does 'import_fullname' work? how would it be applied?

aaah, u are _that_ new...
 - use it instead of the __import__() func
 - original python library reference of the version u use; 
e.g. http://docs.python.org/lib/built-in-funcs.html

wow there's a level parameter now... somethin to try

> On Jul 21, 11:35 am, [EMAIL PROTECTED] wrote:
> > > In other words, should I first attempt to
> > > __import__('vor.'+modname) in runJob() ?
> >
> > see the python reference about how to use __import__ over
> > hierarchical paths, or use this instead
> >
> > def import_fullname( name):
> > m = __import__( name)
> > subnames = name.split('.')[1:]
> > for k in subnames: m = getattr(m,k)
> > return m




[sqlalchemy] Re: doing a one() on a unique key

2007-07-20 Thread svilen

i think there is .one() to return one and only one row and raise otherwise, 
i.e. match {1}, 
and first() or similar that allows matching {0,1}.
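
a small sketch of that distinction, reusing the names from the question 
below (0.4-style query API):

from sqlalchemy.exceptions import InvalidRequestError

q = session.query(Table).filter(and_(Table.c.field1 == val1,
                                     Table.c.field2 == val2))

row = q.first()          # matches {0,1}: None when there is no such row

# or, keeping one() and handling "no row" explicitly:
try:
    row = q.one()        # matches {1}: exactly one row, or an error
except InvalidRequestError:
    row = None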

On Friday 20 July 2007 18:42:56 Marco De Felice wrote:
> Hi all for my first post
> I'm using the new one() method like this:
>
> query(Table).filter_by(and_(Table.c.field1 == val1, Table.c.field2
> == val2)).one()
>
> filed1 + field2 has a unique constraint in Table, so there won't
> never be more than one row returned, but there may be none.
> When there's none I get an InvalidRequestError, would'nt it be
> better to return something else (None ?) or am i missing something
> ?
>




[sqlalchemy] Re: dbcook 0.1

2007-07-19 Thread svilen

> Isn't it what does already Elixir?
not really. Frankly, i don't know elixir much, just some impressions.
elixir is a sort of syntax sugar over SA, with very little 
decision-making inside. It leaves all the decisions - the routine 
ones too - to the programmer. At least that's how i got it.

This one hides / automates _everything_ possible - the very existence of 
relational SQL underneath is seen only through side-effects, 
e.g. the DB-inheritance types "concrete-", "joined-", "single-" 
table. It decides things like where to break cyclical references with 
alter_table/post_update; makes up the polymorphic inheritances, etc.

Of course this is only the declaration/creation part (building the DB 
model); after that it can cover only a small/simple part of the queries 
(model usage) - the combinatorial possibilities there are endless.
That's why u still have the plain SA stuff, once the "python function over 
objects converted into SA-expression over tables" path gets too 
narrow.

dbcook does not have assign_mapper-like things putting query methods 
on the objects. it leaves all that to you. Although one day there 
will be a use-case/example of some way to do it - once i get there.

elixir is lighter, this one might be heavier - depends on how u 
measure it.

more differences maybe - no idea, someone has to find the time to try both 
(:-)

ciao
svil




[sqlalchemy] Re: "lazy = None" behavior help...

2007-07-17 Thread svilen

On Tuesday 17 July 2007 18:01:12 Michael Bayer wrote:
> > if u can make the a query( primary_mapper, select_table) somehow
> > possible... i won't need separate NPs. Note this selecttable is
> > not additional .select() off the query, it IS the starting
> > query(), e.g. thepolymorphic union in a big case and
> > base-class-only in a small case.
>
> query(class).from_statement(your select) ?
i tried it; my_select would be something which the NP (non-primary) mapper 
does itself now, e.g. select( [class_join], base.c.type=='classname')... 
naah, as i see from the source code this misses the whole compile() stuff.
i want to be able to do additional filtering, ordering, whatever a 
query can do.
Note i said the starting query, not the final query...




[sqlalchemy] Re: "lazy = None" behavior help...

2007-07-17 Thread svilen

well, i'm using these NPs just to have another (simpler) select_table.
otherwise any query through the primary_mapper starts off its own 
select_table, which can be rather huge in my case.
if u can make a query( primary_mapper, select_table) somehow 
possible... i won't need separate NPs. Note this select_table is not an 
additional .select() off the query, it IS the starting query(), e.g. 
the polymorphic union in a big case and base-class-only in a small 
case.

maybe u need some notion that is smaller than a mapper - a mapper 
without ANY select_table, + something to attach allowing different 
initial select_tables.

On Tuesday 17 July 2007 16:17:17 Michael Bayer wrote:
> On Jul 17, 2007, at 2:18 AM, [EMAIL PROTECTED] wrote:
> >>> I don't remember that in docs. Did I miss a section, or is this
> >>> too obscure for the general usage?
> >>
> >> its only obscure because i think non_primary mappers are not
> >> *too* common, but their behavior should be more well defined in
> >> the docs (that they dont want to deal with attributes is a big
> >> one)
> >
> > is this 100%?
> > i mean, if i define a primary mapper with all attrs and
> > relations, then i define a non-primary without adding _any_
> > properties, it will work? Sort-of "inheriting" all these from the
> > primary?
> > so far i am adding (again) all properties which are not
> > relations...
>
> no, it doesnt inherit them.   it just doesnt have any access to
> class- level instrumented attributes since it would conflict with
> the primary mapper, and thats where the "default" loading strategy
> of an attribute is set.  the idea that it would change the loading
> strategies instead in a per-instance manner, which is possible,
> seems like a lot of work to go through when youre supposed to just
> be using options() with the primary mapper.  technically, if you
> made a non primary mapper with a different join condition to an
> attribute, then maybe thats a road more worth going downbut
> then, id rather find a way to not have NPs be necessary since its
> getting crazy at that point.
>
>
>
>
>
> 





[sqlalchemy] Re: Bug? Polymorphic inheritance 100 times slower

2007-07-16 Thread svilen

On Monday 16 July 2007 17:08:08 Michael Bayer wrote:
> On Jul 16, 2007, at 2:59 AM, Yves-Eric wrote:
> > Thanks for the explanation! The root of the issue is now very
> > clear. But are you saying that this is intended behavior? Was I
> > wrong in trying to use the session as an object cache?
>
> this is why im extremely hesitant to call the session a "cache". 
> its only slightly cachelike becuase in most cases it doesnt prevent
> SQL from being issued.
>
> > Now onto a possible solution or workaround... Please forgive me
> > if the following does not make sense, but would it be possible to
> > store our object in the identity map under a second key: (Person,
> > (2,))? Then the Person mapper would find it and we'd avoid having
> > to generate a DB query. Is there any technical issue that would
> > prevent the same object from being registered under different
> > keys in the identity map?
>
> There are some kinds of inheritance where both Person/2 and
> Employee/ 2 exist distinctly, namely the concrete inheritance svil
> mentions. so that makes me less comfortable with that approach.  
> it does seem like you can in many cases define the set of
> identities for an entire class hierarchy a/b/c/d/e... against the
> root mapper only, i.e. just (A, (id,)).its something to look
> into (changing the key to represent based on the root class in all
> non-concrete cases, not storing two keys).

i think it is not a problem to store as many keys as there are 
identical levels in the polymorphism/inheritance, as long as this is 
synchronized with the type of inheritance.
i.e. joined-table inheritance chains (and single-table maybe) are ok 
as is; anything concrete-table in between changes the game.
As for the concrete case, i think it can be done the same as with 
polymorphism - storing composite keys (type, id) where plain keys are 
ambiguous.

but i bet no one has ever wanted mixed inheritance, no? (:-)





[sqlalchemy] Re: SQLAlchemy 0.3.9 Released

2007-07-16 Thread svilen

On Monday 16 July 2007 17:18:04 Michael Bayer wrote:
> On Jul 16, 2007, at 10:12 AM, svilen wrote:
> > On Monday 16 July 2007 16:54:01 Michael Bayer wrote:
> >> On Jul 16, 2007, at 9:20 AM, svilen wrote:
> >>> on the "generative" line:
> >>> - how would i prepack a select (or some other filtering) and
> >>> give it to a query() _later_?
> >>> e.g. i have some table.c.type == 'person', and i want to apply
> >>> to several queries?
> >>> i can store the expression, doing query.select(expr) each time.
> >>> Any other way? e.g. store a select([expr])?
> >>
> >> you can store the select(), or the where clause, or the select
> >> ().compile() object, or the query object itself since its
> >> generative.   which do you prefer ?
> >
> > hmmm. query is not available at that time.
> > so far i store the expression only, although that is missing
> > things like fold_equivalents=True.
> >
> > in the case of "polymorphic" outerjoins, i get all the id'
> > columns of all joined tables (5 columns something.id for 5 level
> > inheritance). Any way to get rid of them?
>
> well if you select() from it and fold_equivalents, yes.but
> using just the join by itself, i think the code in 0.4 should be
> *really* smart about not tripping over those at this point, no ?
i/u will know when i switch to 0.4. 
as i have another "language" over the SA "language" itself (i.e. the 
API + how it is used), sometimes u change that "language" too fast to 
follow it smoothly.

now it does not fall down, it just retrieves all of them from the db, 
redundantly.




[sqlalchemy] Re: SQLAlchemy 0.3.9 Released

2007-07-16 Thread svilen

On Monday 16 July 2007 16:54:01 Michael Bayer wrote:
> On Jul 16, 2007, at 9:20 AM, svilen wrote:
> > on the "generative" line:
> > - how would i prepack a select (or some other filtering) and give
> > it to a query() _later_?
> > e.g. i have some table.c.type == 'person', and i want to apply to
> > several queries?
> > i can store the expression, doing query.select(expr) each time.
> > Any other way? e.g. store a select([expr])?
>
> you can store the select(), or the where clause, or the select
> ().compile() object, or the query object itself since its
> generative.   which do you prefer ?
hmmm. the query is not available at that time.
so far i store the expression only, although that misses things 
like fold_equivalents=True. 

in the case of "polymorphic" outerjoins, i get all the id columns of 
all the joined tables (5 something.id columns for a 5-level inheritance). 
Any way to get rid of them?




[sqlalchemy] Re: SQLAlchemy 0.3.9 Released

2007-07-16 Thread svilen

on the "generative" line:
- how would i prepack a select (or some other filtering) and give it 
to a query() _later_?
e.g. i have some table.c.type == 'person', and i want to apply to 
several queries?
i can store the expression, doing query.select(expr) each time.
Any other way? e.g. store a select([expr])?
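
(what the replies upthread point at - store the criterion once and hand it 
to whichever query needs it, since 0.4 queries are generative; Person, the 
session and the age column are made-up names for illustration)

is_person = table.c.type == 'person'

q1 = session.query(Person).filter(is_person)
q2 = session.query(Person).filter(is_person).filter(table.c.age > 30)

# or keep a partially built Query around and refine it later
base_q = session.query(Person).filter(is_person)
adults = base_q.filter(table.c.age > 30).all()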




