[sqlalchemy] Re: new joined-table polymorphic helpers available in trunk

2008-03-02 Thread sdobrev


 I want to direct your attention to some new features in trunk which
 I'll also be demonstrating at this years Advanced SQLAlchemy
 tutorial.

 these features apply primarily to joined-table inheritance
 scenarios, and are in response to the need to specify criterion
 against subclasses as well as to join to/from joined-table classes
 where a specific subclass is required.

 I've put some new docs at
 http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_inheritance_joined_querying

 the two methods are with_polymorphic() and of_type().

 with_polymorphic() may very well replace the need for select_table
 in almost all cases, and it creates the joins for you.  It allows
 filtering criterion to be against subclasses, *and* allows the
 mapper to load the full subclass instance without the need for a
 second query against the joined table.

 Assume the usual Person / Engineer(Person) / Manager(Person)
 example :
   # query for Person objects, but specify criterion against Engineer

   session.query(Person).with_polymorphic(Engineer).filter(
       Engineer.engineer_name=='dilbert')

   # query for Person objects but specify criterion against both
 Manager and Engineer
   session.query(Person).with_polymorphic([Engineer, Manager]).filter(
       or_(Engineer.engineer_name=='dilbert', Manager.manager_name=='dogbert'))

   # eagerly load all joined-table subclasses
   session.query(Person).with_polymorphic('*').filter(...filter on any table...).all()

let me see if i got it... does this mean that, in an a-b-c-... mapper
hierarchy, i no longer have to use select_table=a.outerjoin(b) (and
eventually .outerjoin(c), etc.) at each level, but can instead use only
one mapper (a) for everything, just with a different
query.with_polymorphic(x) for each subclass x?

--
You received this message because you are subscribed to the Google Groups
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at
http://groups.google.com/group/sqlalchemy?hl=en
--



[sqlalchemy] Re: Audit log client side

2008-02-22 Thread sdobrev

check MapperExtension - you have pre and post insert/update/delete hooks there.
you may or may not have a mapper for the log table.
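The hook shape being suggested can be sketched without SQLAlchemy at all; the class and method names below (AuditLog, AuditedSession, Account) are invented for illustration and only mimic the structure of pre-insert/update/delete extension hooks:

```python
import datetime

class AuditLog:
    """Collects one audit row per insert/update/delete event."""
    def __init__(self):
        self.rows = []

    def record(self, action, obj, user):
        # in a real system this row would go into a log table,
        # possibly in a separate log database
        self.rows.append({
            'action': action,
            'data': dict(vars(obj)),   # snapshot of the mapped columns
            'user': user,
            'timestamp': datetime.datetime.now(),
        })

class AuditedSession:
    """Toy session that fires a hook before each operation."""
    def __init__(self, log, user):
        self.log, self.user = log, user

    def save(self, obj):
        self.log.record('insert', obj, self.user)   # pre-insert hook

    def update(self, obj):
        self.log.record('update', obj, self.user)   # pre-update hook

    def delete(self, obj):
        self.log.record('delete', obj, self.user)   # pre-delete hook

class Account:
    def __init__(self, name, balance):
        self.name, self.balance = name, balance

log = AuditLog()
session = AuditedSession(log, user='marco')
acct = Account('savings', 100)
session.save(acct)
acct.balance = 150
session.update(acct)
print(len(log.rows))          # 2
print(log.rows[1]['action'])  # update
```

The point is that the audited class itself needs no log() method; the hooks see every flush-time event centrally.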

On Friday 22 February 2008 21:21:52 Marco De Felice wrote:
 Hi
 I'm thinking about a simple client side table audit with SA. Given
 the audit log pattern: http://martinfowler.com/ap2/auditLog.html
 wouldn't it be simple to adapt it to a mapped class?
 I was thinking of a log() function inside the mapped class that if
 called saves data into a second table (could it be in a distinct
 log database?) with the same columns + timestamp and current user
 columns. Is there a way to automate this function at the orm level
 without defining additional mappers?

 I'm sorry for the vagueness but I'm really just beginning to
 explore this. Thanks all




[sqlalchemy] Re: Audit log client side

2008-02-22 Thread sdobrev

and, i do have a bitemporal pattern implemented at
http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/misc/timed2/
it is not at all optimized, but it is correct.

 check MapperExtensions, u have pre+post ins/upd/del hooks there.
 u may or may not have a mapper for the log-table.

 On Friday 22 February 2008 21:21:52 Marco De Felice wrote:
  Hi
  I'm thinking about a simple client side table audit with SA.
  Given the audit log pattern:
  http://martinfowler.com/ap2/auditLog.html wouldn't it be simple
  to adapt it to a mapped class?
  I was thinking of a log() function inside the mapped class that
  if called saves data into a second table (could it be in a
  distinct log database?) with the same columns + timestamp and
  current user columns. Is there a way to automate this function at
  the orm level without defining additional mappers?
 
  I'm sorry for the vagueness but I'm really just beginning to
  explore this. Thanks all





[sqlalchemy] Re: schema changes

2008-02-13 Thread sdobrev

On Wednesday 13 February 2008 22:06:54 Don Dwiggins wrote:
 [EMAIL PROTECTED] wrote:
  we've put such a notion in our db, so the db knows what
  model-version it matches. Then, at start, depending on the
  versions one can decide which migration script to execute (if the
  db should be made to match the py-model), or which features to
  drop (if py-model should follow the db). btw there should be some
  tricky strategy as of what to call db-model-version and when
  that version really changes.

 This is getting into a big area: the problem of version
 control/configuration management for databases
it's not any bigger than any other configuration management of
something structured that is deployed in the field... as long as it
consists of pieces, and these pieces can go/combine wrong...
 .. 
 (Just having a realized
 object-level schema should be a big step forward.)
mmm, this is going slightly offtopic, but there can be many levels of
modelling (db-model - mapping - obj-model - concept-model -
behavioural-model - ... - philosophy-model if u want). The more the
merrier ;-) i.e. the easier it is to change something without affecting
something else on a far-away level -- but it is harder to grasp, takes
longer to develop, and needs more initial investment. In the long run
it pays off very well - of course, IF there is no long run, there is
no point doing it.
e.g. right now i have a bitemporal machine applied even over some part
of the code of the system, organised as a sort of read-only
db/repository of replaceable pieces of code. And changes of a piece in
this repository are treated the same as changes in someone's salary -
maybe less dynamic, but still changes...

adieu
svilen




[sqlalchemy] Re: Handling of currency values

2008-02-02 Thread sdobrev

according to sqlalchemy/types.py, the Decimal() is used straight away,
without any precision etc. handling; the Numeric(precision/length) arguments
are only for the db side. i assume you have to use some precision context
around your db-related code.

Werner F. Bruhin wrote:
 I am converting an existing Firebird DB over to use sqlalchemy (0.4.0) 
 and I can't figure out how to define the model for currency values.
 
 In the DB they are defined as numeric(18,2) default 0 and in the model I 
 do e.g.
 
 sa.Column(u'cbb_currentvalue', 
 sa.Numeric(precision=16,length=2,asdecimal=True)),
 
 However I don't get the trailing zero in my wxPython application, i.e. 
 when I debug it I get:
 
 Decimal(26.2) or Decimal(0)
 
 I would have expected:
 
 Decimal(26.20) or Decimal(0.00)
 
 What am I missing?
 
 Appreciate any hints on this
 Werner
 
 
 
  




[sqlalchemy] Re: the columns retrieved in a recordset

2008-01-19 Thread sdobrev

theoretically, looking at sql/expression.py's Select, try
   for a in yourselect.inner_columns: print a
it's a yielding property.
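A "yielding property" here just means a property whose getter is a generator. A standalone toy (not SA's actual source; the class is reduced to the bare idea):

```python
class Select:
    """Minimal stand-in for a select construct holding a column list."""
    def __init__(self, columns):
        self._raw_columns = columns

    @property
    def inner_columns(self):
        # generator property: yields the columns lazily, one by one,
        # the way sql.expression.Select.inner_columns does in SA 0.4
        for col in self._raw_columns:
            yield col

s = Select(['id', 'name', 'price'])
for c in s.inner_columns:
    print(c)
# id
# name
# price
```

Each access to the property returns a fresh generator, so it can be iterated repeatedly.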

alex bodnaru wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 
 hi friends,
 
 could i know the columns a select would retrieve, without examining the
 first record retrieved?
 or if no records matched the where.
 
 tia ,
 
 alex




[sqlalchemy] Re: the columns retrieved in a recordset

2008-01-19 Thread sdobrev

 i just wonder whether the * (all columns) is being expanded there.
try?

 [EMAIL PROTECTED] wrote:
 theoreticaly, looking at the sql.expression.py/Select, try
for a in yourselect.inner_columns: print a
 it's a yielding property.

 alex bodnaru wrote:
 could i know the columns a select would retrieve, without examining the
 first record retrieved?
 or if no records matched the where.

 tia ,

 alex





[sqlalchemy] Re: Relating objects of the same type Using Joined Table Inheritance

2008-01-18 Thread sdobrev

your relation should have an argument like
primaryjoin= engineers.c.hired_by_id==managers.c.employee_id
or similar. i do not know for sure, as i've done a layer on top of SA that
stores most of this knowledge, so i don't bother with it. Have a look at
dbcook.sf.net. you may use it as an ORM to build and use your model, or use
it just to describe your model, then dump the equivalent SA calls (see
usage/example/example*) and use those, dropping dbcook.
As for joined inheritance, SA supports all 3 of single/concrete/joined, but
real polymorphism does not work for the concrete case. Also, joined
inheritance is done via a left outer join, which is simpler/faster than a
union - although that is also an option.
have fun
svilen

Dave E wrote:
 http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_inheritance_joined
 
 My question is what if you wanted to add a 'hired' field to say the
 employees table  object (from the example in that link) that
 references a manager object.  In this way I imagine that we are making
 another reference to an employee object and might be an issue when
 trying to figure out the join.  So I'd imagine that the tables look
 like:
 
 employees = Table('employees', metadata,
Column('employee_id', Integer, primary_key=True),
Column('name', String(50)),
Column('type', String(30), nullable=False)
 )
 engineers = Table('engineers', metadata,
Column('employee_id', Integer, ForeignKey('employees.employee_id'),
 primary_key=True),
Column('hired_by_id',Integer,ForeignKey('managers.employee_id')),
 ###INTERESTING PART
Column('engineer_info', String(50)),
 )
 managers = Table('managers', metadata,
Column('employee_id', Integer, ForeignKey('employees.employee_id'),
 primary_key=True),
Column('manager_data', String(50)),
 )
 
 and the mappers look like:
 
 mapper(Employee, employees, polymorphic_on=employees.c.type,
 polymorphic_identity='employee')
 mapper(Engineer, engineers, inherits=Employee,
 polymorphic_identity='engineer',
 properties={
 'hirer':relation(Manager,uselist=False,backref='hired')
 })
 mapper(Manager, managers, inherits=Employee,
 polymorphic_identity='manager')
 
 
 But the error message you'd get if you do this is:
 sqlalchemy.exceptions.ArgumentError: Error determining primary and/or
 secondary join for relationship 'Engineer.hirer (Manager)'. If the
 underlying error cannot be corrected, you should specify the
 'primaryjoin' (and 'secondaryjoin', if there is an association table
 present) keyword arguments to the relation() function (or for
 backrefs, by specifying the backref using the backref() function with
 keyword arguments) to explicitly specify the join conditions. Nested
 error is Can't determine join between 'Join object on
 employees(14680464) and engineers(14680880)' and '_FromGrouping
 object'; tables have more than one foreign key constraint relationship
 between them. Please specify the 'onclause' of this join explicitly.
 
 Might I add that this is an extremely informative error message!
 
 Basically, my question is how do I satisfy this requirement as
 described by the error message?  I have to make my join more
 specific?  How would I do that?
 
 And another question, is Joined Table Inheritance a common thing to do
 in SQLAlchemy?
 
  





[sqlalchemy] Re: Filter by year in datetime column

2008-01-18 Thread sdobrev

Rick Morrison wrote:
 Such operations will likely trigger a full table scan
 
 SQLite dates are stored as strings anyway, AFAIK there is little one can do
 to avoid table-scans in SQLite based solely on date criteria. I use julian
 dates stored as integers when working with large datasets in SQLite, and
 convert as needed. Be interested to hear what others do about this.
well, one can decompose dates into a day-month-year triple of ints, but this
definitely will need some wrapper to be made easy-to-use (and may need
composite keys, for example).
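Rick's integer-date trick can be sketched with Python's date ordinals as a stand-in for true julian day numbers (function names here are made up for illustration):

```python
import datetime

def to_int(d):
    # store dates as integer ordinals so an indexed range scan works,
    # instead of string comparison / full table scan in SQLite
    return d.toordinal()

def from_int(n):
    return datetime.date.fromordinal(n)

d = datetime.date(2008, 1, 18)
assert from_int(to_int(d)) == d   # round-trips losslessly

# a "year == 2007" filter becomes a cheap integer range test:
lo = to_int(datetime.date(2007, 1, 1))
hi = to_int(datetime.date(2008, 1, 1))
print(lo <= to_int(datetime.date(2007, 6, 15)) < hi)  # True
```

The same range form (`col >= lo AND col < hi`) is what lets the database use an index on the integer column.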





[sqlalchemy] Re: Schema Display recipe

2008-01-17 Thread sdobrev

Allen Bierbaum wrote:
 Thanks, that worked great.
 
 Have their been any new capabilities added to this code?
no idea, never used it
 -Allen
 
 On Jan 17, 2008 12:21 PM,  [EMAIL PROTECTED] wrote:
 use sqlalchemy.orm.class_mapper(cls) instead of cls.mapper, and it should 
 work?


 Allen Bierbaum wrote:
 I was just taking a look at the recipes on the SA wiki and stumbled
 across this one:
 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/SchemaDisplay




[sqlalchemy] Re: Schema Display recipe

2008-01-17 Thread sdobrev

use sqlalchemy.orm.class_mapper(cls) instead of cls.mapper, and it should work?

Allen Bierbaum wrote:
 I was just taking a look at the recipes on the SA wiki and stumbled
 across this one:
 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/SchemaDisplay
 
 It is a pretty nice little piece of code to automatically create
 schema diagrams from a DB engine and UML diagrams for a set of
 mappers.  I am very interested in using this because I think it would
 help documentation on our current project quite a bit.  It is very
 easy for a diagram to get out of sync with the code, and this looks
 like a great tool to generate the diagram directly from the code.
 Very sweet. :)
 
 The problem is that I can't seem to get the code to work with SA
 0.3.11.  It looks like it expects some old naming conventions to get
 mappers.
 
 This seems to me like a very nice tool that could prove useful as an
 addon to SA. Am I alone in thinking this or is anyone else
 successfully using it?
 
 -Allen
 
  





[sqlalchemy] Re: dictionary returning a collection

2008-01-17 Thread sdobrev

jason kirtland wrote:
 Christophe Alexandre wrote:
 Dear All,

 Send me some study material on DBMS + $100 !

 Or if it fits you better, can you please help on the issue described
 below?

 The closest situation to what I am facing is described here:
 http://www.sqlalchemy.org/docs/04/plugins.html#plugins_associationproxy_
 building

 But unfortunately the situation is not close enough to help me.

 Now, based on the example above, here is the issue:

 stocks = Table('stocks', meta,
    Column('symbol', String(10), primary_key=True),
    Column('description', String(100), nullable=False),
    Column('last_price', Numeric)
 )

 brokers = Table('brokers', meta,
    Column('id', Integer, primary_key=True),
    Column('name', String(100), nullable=False)
 )

 holdings = Table('holdings', meta,
    Column('broker_id', Integer,
           ForeignKey('brokers.id'), primary_key=True),
    Column('symbol', String(10),
           ForeignKey('stocks.symbol'), primary_key=True),
    Column('shares', Integer),
    Column('date', DateTime, primary_key=True)  # -- HERE IS THE DIFFERENCE
 )

 The example in the doc provides a nice way to retrieve one holding:
 holding = broker.holdings[stock]

 But what if holdings are parameterized by a new attribute ('date' in our
 case)?

 How would you implement a solution allowing you to do something like
 this:

 date = datetime.datetime(2007,1,1)
 holdings = broker.holdingsByDate[date]

 where 'holdings' is a collection of what the broker is holding at a
 given date.
 
 A couple approaches come to mind.  How many Holdings rows in total do 
 you expect per Broker?
 
 If the number is large, then on-demand querying for chunks of the set by 
 date could be a good option.  That can be wrapped up in a dict-like 
 class with __getitem__ for an interface like holdingsByDate[date].  You 
 could cache too, if you wanted.  No mapper relation from Broker-Holding 
 would be needed, but you'd want Holding-Broker.
 
 If the number is small enough that loading the entire collection of 
 associated Holdings per Broker isn't a big deal, then a whole bunch of 
 options come to mind, but none of these are built-in.
 
 - A dict-like proxy similar to the large collection approach, except 
 scanning .holdings in-memory and filtering by .date on demand
 
 - Add some append/remove event handlers to the collection backing 
 .holding, and use them to maintain a obj.holdingsByDate dictionary 
 transparently
 
 - Use the collections framework to build a fully featured partitioned 
 collection type
 
 These are all pretty simple to implement- something like #2 is just a 
 few lines of code.
 
 An implementation of #3 (a dictionary of lists with full instrumented 
 goodness) is probably not a ton of work at all and could make a good 
 addition to either the collections or ext package.

hmm, this sounds interesting.
in dbcook i have a similar testcase, where the intermediate association
table has 3 columns in its primary key, but a) they are all references to
somewhere, and b) it's only a sort of syntax test and doesn't do anything
real with the 3rd column.
thanks for reminding me... one day i may need it
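The first small-collection option above (a dict-like proxy filtering .holdings in memory) could look roughly like this in plain Python; no SA instrumentation is involved, and all names are illustrative:

```python
import datetime
from collections import namedtuple

Holding = namedtuple('Holding', 'symbol shares date')

class ByDate:
    """Dict-like view: broker.holdingsByDate[date] -> holdings on that date."""
    def __init__(self, holdings):
        self._holdings = holdings

    def __getitem__(self, date):
        # scan the loaded collection in memory and filter by .date
        return [h for h in self._holdings if h.date == date]

class Broker:
    def __init__(self, name):
        self.name = name
        self.holdings = []   # would be the mapped relation in SA

    @property
    def holdingsByDate(self):
        return ByDate(self.holdings)

b = Broker('dogbert')
d1 = datetime.date(2007, 1, 1)
d2 = datetime.date(2007, 2, 1)
b.holdings += [Holding('IBM', 100, d1), Holding('SUNW', 50, d1),
               Holding('IBM', 120, d2)]
print(len(b.holdingsByDate[d1]))       # 2
print(b.holdingsByDate[d2][0].symbol)  # IBM
```

Since the proxy is built on demand from the live collection, appends to .holdings are visible immediately; a cached dictionary (option two above) would instead need append/remove event handlers to stay in sync.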




[sqlalchemy] Re: eagerloading polymorphic mapper

2008-01-15 Thread sdobrev
 hmmm, specify explicitly?
 e.g. query(A).eagerload( B.address)

 joined-inh via left-outer-join is enough, no need for polymunion.

 i dont know how the current machinery for eagerload works, but imo
 knowing your level of lookahead-design, it should not be hard to
 apply that machinery over a polymorphic mapper/query?
 
 theres plenty of much higher priority issues than this one in the  
 queue...considering that you can already get the results you want with  
 this one using direct SQL.

right..

i've hacked something that seems to work; it's about 20 lines split between
orm.query and orm.interfaces:
  - such special eagerloaders are requested as query.eagerload( B.address ) -
i.e. via the descriptor, not just the name/path
  - query-compile, calling context.exec_with_path(...), iterates over all
self.mapper properties (not only select_mapper's), plus all eagerloaders of
the above type (i.e. non-names). Thus the 4 cases are covered:
  A has address / query(B).eagerload('address') #worked before
  A has address / query(A).eagerload('address') #new - did not work before
  B has address / query(B).eagerload('address') #worked before
  B has address / query(A).eagerload(B.address) #new - not possible before
(in all of these, B inherits A via joined inheritance; A is polymorphic via
left outer joins)
i'm absolutely sure that this is not the completely right thing - it is
what i got from the machinery source in 2 hours - but it is something as a
start... it surely needs correctness tests etc.

g'night
svilen


Index: orm/query.py
===================================================================
--- orm/query.py	(revision 4032)
+++ orm/query.py	(working copy)
@@ -991,10 +991,20 @@
         # give all the attached properties a chance to modify the query
         # TODO: doing this off the select_mapper.  if its the polymorphic mapper, then
         # it has no relations() on it.  should we compile those too into the query ?  (i.e. eagerloads)
-        for value in self.select_mapper.iterate_properties:
+
+        for value in self.mapper.iterate_properties:
+            if self._only_load_props and value.key not in self._only_load_props:
+                continue
+            context.exec_with_path(self.mapper, value.key, value.setup, context, only_load_props=self._only_load_props)
+        for (mp, key) in self._eager_loaders:
+            if isinstance(key, str): continue  # plain name/path: handled the old way
+            value = key.property
             if self._only_load_props and value.key not in self._only_load_props:
                 continue
-            context.exec_with_path(self.select_mapper, value.key, value.setup, context, only_load_props=self._only_load_props)
+            context.exec_with_path(self.mapper, key, value.setup, context, only_load_props=self._only_load_props)
 
         # additional entities/columns, add those to selection criterion
         for tup in self._entities:
Index: orm/interfaces.py
===================================================================
--- orm/interfaces.py	(revision 4032)
+++ orm/interfaces.py	(working copy)
@@ -594,6 +599,7 @@
                 raise exceptions.ArgumentError("Can't find entity %s in Query.  Current list: %r" % (str(mapper), [str(m) for m in [query.mapper] + query._entities]))
             else:
                 mapper = query.mapper
+        if isinstance(self.key, str):
             for token in self.key.split('.'):
                 if current_path and token == current_path[1]:
                     current_path = current_path[2:]
@@ -604,6 +610,16 @@
                 path = build_path(mapper, prop.key, path)
                 l.append(path)
                 mapper = getattr(prop, 'mapper', None)
+        else:
+            ia = self.key
+            key = ia.impl.key
+            #from sqlalchemy.orm import class_mapper
+            #mp = mapper  #class_mapper( ia.impl.class_) # assert mp inherits mapper?
+            self.key = key
+            path = build_path(mapper, ia, path)
+            l.append(path)
         return l
 
 PropertyOption.logger = logging.class_logger(PropertyOption)


[sqlalchemy] Re: eagerloading polymorphic mapper

2008-01-15 Thread sdobrev

Michael Bayer wrote:
 
 On Jan 15, 2008, at 5:17 PM, [EMAIL PROTECTED] wrote:
 
 hmmm, specify explicitly?
 e.g. query(A).eagerload( B.address)

 joined-inh via left-outer-join is enough, no need for polymunion.
 i dont know how the current machinery for eagerload works, but imo
 knowing your level of lookahead-design, it should not be hard to
 apply that machinery over a polymorphic mapper/query?
 theres plenty of much higher priority issues than this one in the
 queue...considering that you can already get the results you want  
 with
 this one using direct SQL.
 right..

 i've hacked something that seems to work; It's about 20 lines split in
 orm.query and orm.interfaces:
  - such special eagerloaders are requested as  
 query.eagerload( B.address) -
 and not just the name/path
  - query-compile calling context.exec_withpath(...) iterates over all
 self.mapper properties (not only select_mapper's), plus all  
 eagerloaders of
 above type (i.e. non-names). Thus the 4 cases are covered:
  A has address / query(B).eagerload('address') #works before
  A has address / query(A).eagerload('address') #new - did not work  
 before
  B has address / query(B).eagerload('address') #works before
  B has address / query(A).eagerload(B.address) #new - not possible  
 before
 (in all these B inherits A via joined inheritance; A is polymorphic  
 via
 left-outer-joins)
 i'm absolutely sure that this is not the completely right thing -  
 that's
 what i got from the machinery-src in 2 hours -  but it is something  
 as a
 start... sure it needs correctness tests etc of the sorts.

 
 yeah thats the idea but it needs more work than that.  for one thing
 you might be hitting the same MappedProperty twice using that
 iteration (therefore joining twice), and also it doesn't account for
 eager loaders like eagerload('foo.bar.bat'); i.e. deeper-level
 properties which would need to have an adjusted path sent to them
 (or be ignored in that part of the iteration).
pure textual paths go the same (old) way; it needs some extra
syntax/API for multilevel descriptor-specified eagerloads, e.g.
eagerload( (B.foo, 'bar', C.bat) )

 also i can see a lot of cases where the eager loader from B is going
 to generate invalid SQL, such as joined table inheritance with no
 select_table, where the query is only generated against A.  B.address
 is going to try generating an eager join against the B table which
 isnt present, and youll get some kind of mess as a result.  checking
 for this condition beforehand is bound to add lots of complexity and i
 only want to add features like these if they can be smoothly
 integrated, not lots of extra if/thens bolted on.
yeahhh, i know i don't see the wider sql picture...
i may put all this in a ticket as a reminder... maybe one day you'll be
in a better mood (;-)





[sqlalchemy] Re: Get default value

2008-01-10 Thread sdobrev

 How many levels I can inherit classes/tables without get something
 wrong?

my tests go to 4 levels, and all works. And as all the corner cases are
already there, i guess any level above will work too.
mixed inheritance (joined+concrete) can also be made to work, as long as
polymorphic_union() is fixed slightly - AND there is no real polymorphism
over concrete tables; that does not work at SA level / conceptually.

may i suggest that you get dbcook.sf.net, model your whole hierarchy there
(it is SIMPLE), then run it in generator mode (see usage/example/*) and see
what equivalent SA source/calls it generates.
maybe you are missing something (some mapper/relation parameters are tricky
to guess). beware: only joined inheritance is of real use (single_table is
not implemented).

 let me show a simplest sample hierarchy:
 
   resource
 / \
   person   material
/  \
  employeecustomer
 
 
 now, I am creating a type column on resource to map persons and
 materials, them I am creating another type column on person, to get
 mapped the various types of persons.
no, you don't do it this way - you must have only one discriminator column
per hierarchy-island.
either use the root one and put all types there, or separate the material
from resource into its own subhierarchy. If there is an explosion of types,
to avoid the huge union/outerjoin, you can make the resource a virtual base,
that is, not a table at all - so you'll have two separate db-hierarchies,
each one with its own root/discriminator. (dbcook: just declare
DBCOOK_no_mapping=True there.)

 resource_table = Table('resource', metadata,
   Column('id', Integer, primary_key=True),
   Column('poly', String(31), nullable=False)
 )
 
 person_table = Table('person', metadata,
   Column('id', Integer, ForeignKey('resource.id'), primary_key=True),
   Column('poly', String(31), nullable=False)
 )
you should not have poly here - it's already in the root (resource).

 employee_table = Table('employee', metadata,
   Column('id', Integer, ForeignKey('person.id'), primary_key=True),
 )
 
 class Resource(object):
   pass
 class Person(Resource):
   pass
 class Employee(Person):
   pass
 
 mapper(Resource, resource_table,
 polymorphic_on=resource_table.c.poly,
 polymorphic_identity='resource'
 )
 
 mapper(Person, person_table, 
 polymorphic_on=person_table.c.poly,
 inherits=Resource, polymorphic_identity='person'
 )
either put the correct polymorphic_on=resource_table.c.poly, or remove it
altogether - it comes from the inherited base-mapper.

 mapper(Employee, employee_table,
 inherits=Person, polymorphic_identity='employee',
 )

svilen





[sqlalchemy] Re: quick question...

2008-01-08 Thread sdobrev

and what is this expected to do?
x = task and (max(task.sequence)+100) or 100 ?
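In plain Python terms, the fallback that reading computes would be (a sketch of the logic only, not SA code; the function name is invented):

```python
def next_sequence(existing):
    # mirrors: SELECT max(value) FROM
    #   (SELECT max(sequence)+100 AS value FROM task UNION SELECT 100.0 AS value)
    # i.e. 100 when the table is empty, else max(sequence) + 100,
    # never dropping below the floor of 100
    if not existing:
        return 100.0
    return max(max(existing) + 100, 100.0)

print(next_sequence([]))          # 100.0
print(next_sequence([100, 200]))  # 300
```

The UNION with the constant row is what supplies the floor when `task` is empty, since `max()` over no rows yields NULL in SQL.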

Jonathan LaCour wrote:
 I have been banging my head against the wall for a little bit
 attempting to translate this SQL:
 
  SELECT max(value) FROM (
  SELECT max(sequence)+100 as value FROM task
  UNION
  SELECT 100.0 as value
  )
 
 into an SQLAlchemy expression that I can embed into an INSERT.
 Should I just go ahead an use text() rather than bother with
 attempting to construct this using an SQLAlchemy expression?
 
 (Yes, I know that this is gross...)
 
 --
 Jonathan LaCour
 http://cleverdevil.org
 
 
  





[sqlalchemy] Re: Handling unique constraints

2008-01-05 Thread sdobrev

Matt Haggard wrote:
 I'm using SQLAlchemy with Pylons and am having trouble validating
 data.  I have an App object mapped to a table with a unique constraint
 on App.number.
 
 Here's some code:
 
 q = Session.query(App)
 if app_id:
 q = q.filter_by(id=app_id).first()
 if q:
 c.app = q
 number = request.params.get('number')
 notes = request.params.get('notes')
 if appmodel and number:
 try:
 q.number = number
 q.notes = notes
 Session.save(q)
 Session.commit()
 c.message = 'Record updated'
 except:
 # restore pre-form data ?? how??
 c.message = 'Error updating record'
 return render('index.mtl')
 else:
 return self.index()
 
 My questions are:
 
 1) When I do the try statement, the value of q.number changes to
 whatever the user passed in via the form -- even if it's invalid, so
 that when I render the page, the invalid value is used.  How do I
 reset the object to have the values it had before I did the try?  Do I
 have to get it afresh from the db?
try something like session.refresh(obj) or similar

 2) How do I let the user know which value caused the record not to
 update?  What information does SQLAlchemy provide back that I can use
 to say: Your number must be unique... and such-and-such must be
 greater than 0, etc..?
mmh, do not mistake DB constraints for validation rules.
the only sensible info you can get here is that the record is not unique 
(check what sort of exception that throws), but any further interpretation, 
why, what, where, is up to you: you have to know which field you have just 
set/changed, etc.




[sqlalchemy] Re: Mapper properties and a callback

2007-12-22 Thread sdobrev

i'm not sure how much this will help you, but 0.4 has better support for 
your-own-collection containers. see 
http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_relation_collections

e.g. subclass list and you can run the callback in append() or wherever.
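As a plain-Python sketch of that idea (names hypothetical; the 0.4 `collection_class=` argument to relation() can be pointed at such a subclass, which the ORM instantiates with no arguments, hence the class-level hook here):

```python
class CheckedList(list):
    """list subclass that runs a hook before every append.

    A relation could use it via relation(Dog, collection_class=CheckedList);
    here the hook just records what it saw, but it could also raise to
    veto the append.
    """
    appended = []  # class-level log, just for the demo

    def append(self, item):
        # run the check/callback first, then do the real append
        CheckedList.appended.append(item)
        super().append(item)

dogs = CheckedList()
dogs.append("fred")
print(list(dogs), CheckedList.appended)  # ['fred'] ['fred']
```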

Dave Harrison wrote:
 Hi all,
 
 I have a situation where I want to declare a relation between two
 tables, but I want to be able to run a callback whenever I append to
 that relation.  Here's an example
 
 class Kennel:
 def callback(self, o):
 print callback, o
 
 mapper = Mapper(
 Kennel,
 kennelTable,
 properties = {
 dogs : relation(
 Dog, cascade=all, delete-orphan
 ),
 }
 )
 
 So whenever I use mykennel.dogs.append(fred) I want to be able to tell
 the relation to call callback() so that I can do some checking on the
 object that is being appended.
 
 Is this possible ??
 
 Cheers
 Dave





[sqlalchemy] Re: get mapped class

2007-12-22 Thread sdobrev

Alexandre da Silva wrote:
 
 I am already trying go get the list of mapped tables. I currently got a
 list from sqlalchemy.org.mapper from the weakref mapper_registry, but
 I don't know if that values are useful for my context.
what do you need?
all tables? see metadata.
all mappers? see the mapper_registry.
the relations in between? you have to dig into the individual mappers, walk the 
polymorphisms and inheritances.

 I also have another question, I have some way to access the mapped class
 from the table object?
probably not; the relation is directional the other way (mapper -> table); 
one table may have many mappers linked to it in one way or another.
i'm not sure if you can have 2 separate classes with 2 primary mappers using 
the same table - probably you can.


and why do you need all this?
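If it is only the table -> class direction that is missing, one plain approach (a roll-your-own sketch, not an SA API; `register` and `table_to_class` are invented names) is to build the reverse lookup yourself wherever you call mapper():

```python
# hypothetical registry: record each class against its table at mapping time
table_to_class = {}

def register(cls, table):
    """Call this next to every mapper(cls, table) call.

    A table can back more than one mapper, so keep a list per table
    rather than a single class.
    """
    table_to_class.setdefault(table, []).append(cls)

class User:
    pass

user_table = "user_table"  # stands in for a real Table object
register(User, user_table)
print(table_to_class[user_table][0].__name__)  # User
```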


 something like:
 
 user_table = Table(...)
 
 class User(object)
   pass
 
 mapper(User, user_table)
 
 I have some method like user_table.getclass? or another way to get it
 from orm?
 
 thank's for help
 
 





[sqlalchemy] Re: Readonly objects/protecting objects from modifications

2007-12-21 Thread sdobrev

i have such a thing implemented externally, but it is definitely not nice (read: 
tricky and underground) - replacing the __dict__ with something handmade 
that does what i say, as i say, if i say. that's dbcook's reflector for my 
static_type structures; look in dbcook/usage/static_type if interested. and 
it is now broken by the latest instance_state handling mechanism.

another, similar or not, feature i needed while doing dbcook was a 
readonly/load-only mapper, i.e. a mapper for a sort-of intermediate 
base class which should not have instances of its own; only subclasses may have 
instances/DB footprint. That i made via a MapperExtension, throwing an 
exception in before_insert/update/delete.

Rick Morrison wrote:
 Something like this is available on a roll-your-own basis via Python
 properties along with some mapper tricks:
 
 
 http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_overriding
 
 I would be +1 for such a feature implemented on mapped instances, could be
 useful for detecting those hard-to-find bugs, but I can't think of a nice
 and simple API for it. For mapped instances via Query(), it  could be an
 .option(), but I can't see a good way for its use on relation()s. Also not
 sure if such a feature would throw an exception on attribute setting, or
 whether it ought to simply be ingored during a flush (OIOW, have it's
 dirty flag locked down to False)





[sqlalchemy] Re: obtaining previous value in mapper.extension.after_*

2007-12-17 Thread sdobrev

Michael Bayer wrote:
 
 On Dec 16, 2007, at 3:26 PM, [EMAIL PROTECTED] wrote:
 
 and another issue around attribute.get_history...
 i have a descriptor that is autosetting some defaultvalue at first  
 get.
 
 a descriptor on top of the InstrumentedAttribute itself ?  id wonder  
 how you are configuring that.
under the IA, in the __dict__ (which is not a dict at all).

 ScalarObjectAttributeImpl never knows that the attribute has been  
 missing at
 start - dict.get(key,NOVALUE) will never return NOVALUE, as
 the descriptor machinery is called instead of __dict__[key] / haskey  
 etc.
 
 if you really want a default on first get value, and youre riding  
 directly on top of the IA instead of using a differently-named  
 attribute for your own descriptor like all the docs say to do, 
yes and no; as i said, i'm replacing the __dict__ with something special, so 
it's the IA riding on top of me (;-) but otherwise it's that. no renaming: i don't 
want someone (which could be me, later) to be able to work around either me or SA.

 you  
 should use the callable mechanisms built into the attributes package,  
 although the new value created becomes the committed state so maybe  
 thats not what you want.  im confused, someone issues print  
 object.foo, then it inserts some new data instead of not doing  
 anything ?  what if nobody gets the attribute ?
then it remains unset (None, null, whatever).
and i have a.b.c.d = 3, where b and c are auto-created (if declared so, of course).
as for autoinsert - what people write is what people get.

i know it may be confusing; it's a sort of declarative attribute-type 
metainfo, with validation, conversion, readonly, autoset, default-value, 
optional'ism, hints about UI/str/whatever representation, comments etc. etc. 
And static-typing/static-attribute-set of course. db-business objects should 
not have any random stuff hanging off them.
trouble is, it partially duplicates some SA features, so there has been a 
conflict of interests since day one of their meeting.
i was thinking about splitting it in two, putting one part above the IA 
using the renamed-columns approach and the rest underneath, but that would 
be lots of work.
later
svilen




[sqlalchemy] Re: obtaining previous value in mapper.extension.after_*

2007-12-17 Thread sdobrev

 yes and no, as i said i'm replacing the __dict__ with something  
 special; so
 its IA riding on top of me (;-) but otherwise its that. no renaming,  
 i dont
 want someone (thats can be me, later) to be able to workaround  
 either me or SA.
 
 then have your magic __dict__ implement the same save committed on  
 change behavior as the attributes package.  and of course test on  
 every SA release 
(i know, it broke only 6 times this year ;-)
since youre welded to internal behavior:
 
 def my_magic_dict_set_something(dict, key, value):
  if key not in dict['_state'].committed_state:
  dict['_state'].committed_state[key]  = _the_old_value
  dict[key] = value
 
thanks, that's a different idea from what i had in mind... i'll try.
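The same save-the-committed-value-on-first-change idea can be shown as a standalone dict subclass (a sketch of the pattern only; the real attributes package keeps this state on the instance's state object, not on the dict itself):

```python
class TrackingDict(dict):
    """Remembers the first (committed) value of every key that is overwritten."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.committed_state = {}

    def __setitem__(self, key, value):
        # save the old value once, before the first overwrite
        if key in self and key not in self.committed_state:
            self.committed_state[key] = self[key]
        super().__setitem__(key, value)

d = TrackingDict(name="fred")
d["name"] = "barney"
d["name"] = "wilma"
print(d["name"], d.committed_state)  # wilma {'name': 'fred'}
```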




[sqlalchemy] Re: Using ORM Object as an intermadiate holder

2007-12-16 Thread sdobrev

expire() requests a reload.
try moving it to after sending the stuff back to the user.

Utku Altinkaya wrote:
 Hi,
 
 I am using SQLAlchemy on a web application, I have used a base class
 for ORM clases which provides soem web related things like validation
 and loading data from forms etc. When the form is submitted:
 Here is the current life cycle:
 
 object.loadFromForm()
 if object.Validate():
   session.save_or_update(object)
 else
   render_form(object)
   session.expire(object)
 session.commit()
 
 I do not want to lose invalid values, becouse I want to send them to
 the user again, so while using object as intermediate holder I have to
 set attributes invalid values... So Autoflush = False, and if
 invalid values are existed the object is reloaded from DB before
 commit step of web request cycle.
 
 But while using SQLAlchemy I had the impression that it is not
 designed to to that, It designers thought objects are direct
 representation of the data in the database.  I can do this ofcouse
 some kind of holder class copies attributes from data objects etc, but
 it is cumbersome, and will force me to write longer code.
 
 I am expecting someone to tell me the way I am doing is perfectly
 valid, or another advice
 
 regards
 
  





[sqlalchemy] Re: some error around trunk

2007-12-16 Thread sdobrev

ok;
see mapper.py line 1134: it calls after_update with state instead of state.obj().

Michael Bayer wrote:
 
 On Dec 16, 2007, at 2:40 AM, [EMAIL PROTECTED] wrote:
 
 from sqlalchemy import *
 m= MetaData()
 trans  =Table( 'trans',   m, Column( 'date', Date),  )
 balance=Table( 'balance', m, Column( 'finaldate', Date), )

 b = balance.alias('b')
 sprev = select( [ func.max( b.c.finaldate)],
 b.c.finaldate < balance.c.finaldate
 )
 #correlate is non-generative in 0.3 (ret None) but generative in 0.4
 sprev = sprev.correlate( balance) or sprev

 r = trans.c.date > func.coalesce( sprev,0 )
 #, as_scalar=True ) with or without all the same
 
 
 r = trans.c.date > func.coalesce( sprev.as_scalar(),0 )
 
 
  





[sqlalchemy] obtaining previous value in mapper.extension.after_*

2007-12-16 Thread sdobrev

i used to get the original (before-change) value of some attribute via 
state.committed_state[key]... but it seems that dict is now empty by the time 
the ext.after_* hooks are called.
any way to get that? storing copies at ext.before_* is not a good alternative...




[sqlalchemy] Re: obtaining previous value in mapper.extension.after_*

2007-12-16 Thread sdobrev

[EMAIL PROTECTED] wrote:
 i used to get the original (before-change) value of some attribute via 
 state.committed_state[key]... but it seems that dict is now empty by the time 
 the ext.after_* hooks are called.
 any way to get that? storing copies at ext.before_* is not a good alternative...

found a workaround, but not sure if it's the proper thing:
  r = getattr( instance.__class__, attribute).get_history( instance)
  r = r[-1] or r[-2]
  return r and r[0] or None  # should never be None ???

not sure what the three get_history sublists are for...





[sqlalchemy] Re: Using ORM Object as an intermadiate holder

2007-12-16 Thread sdobrev

expiring the obj has the effect that any further access to the object will 
auto-refresh it. so if you expire(x) and then access x.a, x will be reloaded 
first, and then you get x.a.

Utku Altinkaya wrote:
 
 
 On 16 Aralık, 17:46, [EMAIL PROTECTED] wrote:
 the expire() is requesting a reload.
 try moving that after the sending back stuff to user.
 
 The documents says it does not reload until it is accessed if the
 object is expired, I think what does the thing you have mantioned is
 the refresh method of session.
 http://www.sqlalchemy.org/docs/04/session.html#unitofwork_using_refreshing
 


 Utku Altinkaya wrote:
 Hi,
 I am using SQLAlchemy on a web application, I have used a base class
 for ORM clases which provides soem web related things like validation
 and loading data from forms etc. When the form is submitted:
 Here is the current life cycle:
 object.loadFromForm()
 if object.Validate():
   session.save_or_update(object)
 else
   render_form(object)
   session.expire(object)
 session.commit()
 I do not want to lose invalid values, becouse I want to send them to
 the user again, so while using object as intermediate holder I have to
 set attributes invalid values... So Autoflush = False, and if
 invalid values are existed the object is reloaded from DB before
 commit step of web request cycle.
 But while using SQLAlchemy I had the impression that it is not
 designed to to that, It designers thought objects are direct
 representation of the data in the database.  I can do this ofcouse
 some kind of holder class copies attributes from data objects etc, but
 it is cumbersome, and will force me to write longer code.
 I am expecting someone to tell me the way I am doing is perfectly
 valid, or another advice
 regards
 
  






[sqlalchemy] Re: obtaining previous value in mapper.extension.after_*

2007-12-16 Thread sdobrev

and another issue around attribute.get_history...
i have a descriptor that is autosetting some default value at first get.
before r3935 it was ok; now the attribute is not updated anymore (in the exact 
case, another object has to be inserted but it is not), as it seems that 
ScalarObjectAttributeImpl never knows that the attribute was missing at 
the start - dict.get(key, NOVALUE) will never return NOVALUE, as
the descriptor machinery is called instead of __dict__[key] / has_key etc.
i am looking at _create_history() and the way it is used, but see no light...
as i do not know either whether there was a value or not...
well, the object is brand new, so it has to have nothing...
Any way to hint it, so a just-created object would have an initially empty 
history?





[sqlalchemy] Re: some error around trunk

2007-12-15 Thread sdobrev

from sqlalchemy import *
m= MetaData()
trans  =Table( 'trans',   m, Column( 'date', Date),  )
balance=Table( 'balance', m, Column( 'finaldate', Date), )

b = balance.alias('b')
sprev = select( [ func.max( b.c.finaldate)],
 b.c.finaldate < balance.c.finaldate
 )
#correlate is non-generative in 0.3 (ret None) but generative in 0.4
sprev = sprev.correlate( balance) or sprev

r = trans.c.date > func.coalesce( sprev,0 )
 #, as_scalar=True ) with or without all the same



Michael Bayer wrote:
 seems like you might need an as_scalar() on the select object.   
 otherwise send an example.
 
 
 On Dec 15, 2007, at 10:06 AM, svilen wrote:
 
 seems something about .type vs .type_ or similar:

 Traceback (most recent call last):
  File tests/convertertest.py, line 152, in
 test4_balance_trans_via_prev_balance_date_subselect
 trans.c.date > func.coalesce( sprev,0 )
  File sqlalchemy/sql/expression.py, line 777, in __call__
return func(*c, **o)
  File sqlalchemy/sql/functions.py, line 12, in __call__
return type.__call__(self, *args, **kwargs)
  File sqlalchemy/sql/functions.py, line 35, in __init__
kwargs.setdefault('type_', _type_from_args(args))
  File sqlalchemy/sql/functions.py, line 75, in _type_from_args
if not isinstance(a.type, sqltypes.NullType):
 AttributeError: 'Select' object has no attribute 'type'







[sqlalchemy] Re: Many-To-One...What I'm doing wrong?

2007-12-14 Thread sdobrev

your 'iva' table column AND your 'iva' attribute/relation/property have the 
same name; that's what the error says. either rename one of them (e.g. the 
column to iva_id), or pass the allow_column_override=True flag to the 
producto mapper.

Marcos wrote:
 Hello, first at all, sorry about my english... I'm from Mendoza,
 Argentina, and I'm starting with this. Because I can't describe my
 problem, I will show the next:
 
 from sqlalchemy import *
 from sqlalchemy.orm import *
 
 class Producto(object):
 pass
 
 class Iva(object):
 pass
 
 metadata = MetaData()
 engine = create_engine('postgres://postgres:[EMAIL PROTECTED]:5432/
 marcos')
 connection = engine.connect()
 Session = sessionmaker(bind=connection, autoflush=True,
 transactional=True)
 session = Session()
 
 iva_tabla = Table( 'iva' , metadata ,
Column('id' , Integer , primary_key =
 True) ,
Column('valor', Float)
)
 
 productos_tabla = Table ( 'productos' , metadata ,
   Column('id', Integer ,
 primary_key = True ) ,
   Column('nro' , Integer ) ,
   Column('descripcion' ,
 String(100) ) ,
   Column('iva' , Integer ,
 ForeignKey('iva.id')) ,
   Column('precioUnitario' ,
 Float )
  )
 
 mapper(Producto, productos_tabla , properties={'iva':relation(Iva)})
 mapper(Iva,iva_tabla)
 
 
 - And when i run it:
 raceback (most recent call last):
   File /home/instancia_webware/MyContext/Pruebas/sql.py, line 29,
 in ?
 mapper(Producto, productos_tabla ,
 properties={'iva':relation(Iva)})
   File /usr/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/
 sqlalchemy/orm/__init__.py, line 518, in mapper
 return Mapper(class_, local_table, *args, **params)
   File /usr/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/
 sqlalchemy/orm/mapper.py, line 154, in __init__
 self._compile_properties()
   File /usr/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/
 sqlalchemy/orm/mapper.py, line 612, in _compile_properties
 self._compile_property(column_key, column, init=False,
 setparent=True)
   File /usr/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/
 sqlalchemy/orm/mapper.py, line 654, in _compile_property
 raise exceptions.ArgumentError(WARNING: column '%s' not being
 added due to property '%s'.  Specify 'allow_column_override=True' to
 mapper() to ignore this condition. % (column.key, repr(prop)))
 sqlalchemy.exceptions.ArgumentError: WARNING: column 'iva' not being
 added due to property 'sqlalchemy.orm.properties.PropertyLoader
 object at 0xb7aba1cc'.  Specify 'allow_column_override=True' to
 mapper() to ignore this condition.
 
 --
 I've read a lot, and I can't or I'm incapable to resolve this by
 myself. Can anybody help me?
 
 Sorry again, Marcos
 
  





[sqlalchemy] Re: Design: mapped objects everywhere?

2007-12-07 Thread sdobrev

Paul Johnston wrote:
 Hi,
 
 A Sample may be created by the web application or fetched from the 
 database. Later on, it may be disposed of, edited or checked back into 
 the db.
 
 On the other hand, the requirements and coding of both classes are 
 kinda different, and I find myself changing the properties of the 
 mapped class for better mapping and making the use of the class in 
 non-Db contexts more awkward.
 
 Sounds like you want your app to be mostly unaware of whether a class is 
 saved in the db or not (i.e. persistent)? If so, I'd use a single class, 
 design the properties so they work in non-persistent mode, and then 
 they'll work in persistent mode as well.

or like a single class that does the what and why, plus an interchangeable 
layer/context that does the loading/saving (and the relations!).
in such situations declarative programming helps a lot, so you don't bind 
yourself to (the) db (or whatever persistence). Check dbcook.sf.net. My own 
latest experience was turning a project that was designed for db/using 
dbcook into non-db, simple file-based persistence. The change was relatively 
small, like 5-10 lines per class - as long as there are Collections and 
similar notions, so the Obj side of the ORM looks the same.




[sqlalchemy] Re: Polymorphic from multiple columns

2007-12-03 Thread sdobrev

is that something like real concrete polymorphism?
As far as I remember there was something composite in that pattern... the id 
is actually (id, type).

Michael Bayer wrote:
 you cant do it right now.  but its something we could support.   its  
 unclear to me if we should just go for composite types as the way to  
 do it, or just use a callable.   using a composite is likely cleaner  
 and would integrate with the save process better (otherwise, the  
 callable needs to generate the discriminator value at save time as  
 well).
 
 On Dec 3, 2007, at 11:09 AM, Koen Bok wrote:
 
 I'd like to make a polymorphic mapper based on two columns. Is that
 possible?

 See example code here: http://paste.pocoo.org/show/13799/

 Thanks, Koen
 
 
  





[sqlalchemy] Re: Concrete Inheritance problem

2007-11-19 Thread sdobrev

if it's about concrete inheritance, then employee contains ALL the info it 
needs, that is, a full copy of person + whatever else is there,
and is completely independent of the person table.
so in that case:
  a) the foreign key is not needed
  b) inserting into employee_tbl will never insert anything into person_tbl - 
they are independent.

now, what you're expecting (chaining of ids and rows etc.) comes from 
joined-table inheritance. See the docs about that, and just remove 
concrete=True from your mapper.
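At the SQL level, the joined-table arrangement the answer points to looks like this (a sqlite3 sketch reusing the thread's table names; SQLAlchemy would issue equivalent INSERTs for you): the parent row is written first, and the child row reuses its id.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE persons (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE empys (id INTEGER PRIMARY KEY REFERENCES persons(id),"
    " salary INTEGER)"
)

# joined-table inheritance: one row per table, sharing the primary key
cur = conn.execute("INSERT INTO persons (name) VALUES ('TestEmpl')")
pid = cur.lastrowid
conn.execute("INSERT INTO empys (id, salary) VALUES (?, ?)", (pid, 100))

# loading an Employee means joining the two rows back together
row = conn.execute(
    "SELECT p.name, e.salary FROM persons p JOIN empys e ON e.id = p.id"
).fetchone()
print(row)  # ('TestEmpl', 100)
```

With concrete=True there is no such chaining: each table would carry full, independent rows.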


Partha wrote:
 I have 2 tables Person (id, name)  Employee (id, salary) and every
 Employee 'isa' Person, so employee.id == person.id. I am trying to use
 the Concrete Inheritance (i.e. ' pjoin) example provided in the
 documentation. My mapping looks as follows.
 
 person_table = Table(persons, __meta__,
  Column(id, Integer, primary_key=True),
  Column(name, String(80))
  )
 
 employee_table = Table(empys, __meta__,
  Column(id, Integer, ForeignKey(persons.id),
 primary_key=True),
  Column(salary, Integer),
  )
 
 class Person (object): pass
 class Employee(Person): pass
 
 pjoin = polymorphic_union({
 'person':person_table,
 'employee':employee_table
 }, 'type', 'pjoin')
 
 
 
 person_mapper = mapper(Person, person_table, select_table=pjoin, \
 polymorphic_on=pjoin.c.type,
 polymorphic_identity='person')
 emp_mapper = mapper(Employee, employee_table, inherits=person_mapper,
 \
 concrete=True, polymorphic_identity='employee')
 
 
 
 I want to now add a New employee to the system with the following
 snippet.
 
 e = Employee()
 e.name = 'TestEmpl'
 e.salary = 100
 session.save(e)
 session.commit()
 
 Problem is when it tries to save employee, sqlalchemy raises the
 following error
 
 sqlalchemy.exceptions.ProgrammingError: (ProgrammingError) relation
 empys_id_seq does not exist
 'select nextval(\'empys_id_seq\')' None
 
 Since all id's in employees table map directly to Person table (which
 has the correct sequence), I would think alchemy would first store the
 Person part and then store the employee part. Any clues on how to
 correct this ??.
 I  also tried adding a Sequence to employee forcing it to use the same
 one as person table...
 
 employee_table = Table(empys, __meta__,
  Column(id, Integer, ForeignKey(persons.id),
 Sequence(person_id_seq), primary_key=True),
  Column(salary, Integer),
  )
 
 
 But now it raises a constraint violation because its trying to insert
 a row in employees table without inserting anything in person first..
 
 Any ideas how I can fix this?.. I would greatly appreciate any help in
 this regard.. Thanks
 
 Partha
 
 
 Here is the full program.. Thanks..
 
 from sqlalchemy import create_engine, MetaData, Table, Column, types,
 Sequence
 from sqlalchemy import Table, Column, Integer, String, DECIMAL,
 Numeric, ForeignKey, DateTime, Boolean, CHAR
 from sqlalchemy.orm import mapper,sessionmaker, relation,
 polymorphic_union
 __meta__ = MetaData()
 
 person_table = Table(persons, __meta__,
  Column(id, Integer, primary_key=True),
  Column(name, String(80))
  )
 
 employee_table = Table(empys, __meta__,
  Column(id, Integer, ForeignKey(persons.id),
 primary_key=True),
  Column(salary, Integer),
  )
 class Person (object): pass
 
 class Employee(Person): pass
 
 pjoin = polymorphic_union({
 'person':person_table,
 'employee':employee_table
 }, 'type', 'pjoin')
 
 
 
 person_mapper = mapper(Person, person_table, select_table=pjoin, \
 polymorphic_on=pjoin.c.type,
 polymorphic_identity='person')
 emp_mapper = mapper(Employee, employee_table, inherits=person_mapper,
 \
 concrete=True, polymorphic_identity='employee')
 
 
 engine = create_engine('postgres:///test?user=postgrespassword=foo',
 echo=True)
 session = sessionmaker(autoflush=True, bind = engine,
 transactional=True)()
 
 def refresh(engine = engine):
 __meta__.drop_all(engine)
 __meta__.create_all(engine)
 
 def main():
 refresh()
 e = Employee()
 e.name = 'TestEmpl'
 e.salary = 100
 session.save(e)
 session.commit()
 
 if __name__ == '__main__':
   main()
 
  





[sqlalchemy] 2 questions

2007-11-12 Thread sdobrev

hi
1st one: i am saving some object; the object's mapperExtension 
fires additional atomic updates of other things elsewhere (an aggregator). 
These things have to be expired/refreshed... if only i knew which they are.
For certain cases, the object knows exactly which these target 
things are. How (when) is it best to expire those instances, i.e. ensure that 
next time they are used they will be re-fetched?
 a) in the mapperext - this would be before the flush?
 b) later, after the flush, marking them somehow?

and why do the atomic updates also get a commit after them? or is this 
sqlite-specific?

2nd one: how to compare with a Decimal? i.e. tableA.c.column == Decimal('7')
ah, forget it, i found it: passing asdecimal=True to the column's Numeric() 
definition.

btw, the precision/length arguments of Numeric seem misplaced??
isn't a 10.2 format length 10, precision 2?
 or do length and precision mean something different here?




[sqlalchemy] Re: 2 questions

2007-11-12 Thread sdobrev


 hi
 1st one:  i am saving some object; the mapperExtension of the object
 fires additional atomic updates of other things elsewhere  
 (aggregator).
 These things has to be expired/refreshed... if i only knew them.
 For certain cases, the object knows exactly which are these target
 things. How (when) is best to expire these instances, i.e. assure that
 nexttime they are used they will be re-fetched?
 a) in the mapperext - this would be before the flush?
 b) later, after flush, marking them somehow ?
 

 the public way to mark an instance as expired is  
 session.expire(instance).  if you wanted to do this inside the mapper  
 extension, i think its OK as long as you do the expire *after* the  
 object has been inserted/updated (i.e. in after_insert() or  
 after_update()).
   
in the after_*() hooks the arguments are (mapper, connection, instance) - 
but there's no session. Any way to get at it? mapext.get_session() 
does not look like the right one.





[sqlalchemy] Re: mixed joined+concrete inheritance broken/r3735

2007-11-09 Thread sdobrev


 yeah this is the same thing.  if you get A's ID column in there  
 instead of C's, the problem would not occur... i think this is why our  
 own test suite doesn't have these issues.  i've made the A-B FK  
 match of the previous checkin recursive, so it also matches A-C, A-D, A-E, ...,  
 in r3759.

   
 actually, what would prevent the issue in the first place would be
 if you use the A_tbl.db_id column in your poly union instead of
 B_tbl.db_id.  in 0.4, mappers always use the base table's column as
 the primary key column and it actually would not even search for
 B_tbl.db_id.
   
 hmm, i'll check if i can do it that way (as i also support 0.3.x).
 in the meantime, one more of the same...
 
and how exactly is one to guess, through the A.join(B).join(C)..., that C.dbid is 
equivalent to A.dbid? ask the mapper for equivalence?
or should that be additional info given to the poly-union maker, coming 
from the inheritance chain?






[sqlalchemy] Re: r3727 / AbstractClauseProcessor problem

2007-11-09 Thread sdobrev

one more error in ACP; it took me a day to find and isolate.
it's very simple and very basic... ClauseAdapter does not work.
--
from sqlalchemy import *
from sqlalchemy.sql.util import ClauseAdapter

m = MetaData()
a = Table('a', m,
    Column('id', Integer, primary_key=True),
    Column('xxx_id', Integer,
           ForeignKey('a.id', name='adf', use_alter=True)),
)

e = (a.c.id == a.c.xxx_id)
print e

b = a.alias()
#print b.c.id == 0

r = ClauseAdapter(b, include=set([a.c.id]),
                  equivalents={a.c.id: set([a.c.id])}
                  ).traverse(e)
print e

# results

# in r3726 (OK):
#   a.id = a.xxx_id
#   a_1.id = a.xxx_id

# in r3727, as well as in r3760:
#   a.id = a.xxx_id
#   a.id = a.xxx_id


it does generate the proper thing in ClauseAdapter.conv_element() (newcol) but 
it gets lost along the way




[sqlalchemy] Re: r3727 / AbstractClauseProcessor problem

2007-11-09 Thread sdobrev


 from sqlalchemy import *
 from sqlalchemy.sql.util import ClauseAdapter
 m = MetaData()
 a=Table( 'a',m,
Column( 'id',Integer, primary_key=True),
Column( 'xxx_id', Integer, ForeignKey( 'a.id', name='adf',
 use_alter=True ) )
 )

 e = (a.c.id == a.c.xxx_id)
 print e

 b = a.alias()
 #print b.c.id == 0

 r = ClauseAdapter( b, include= set([ a.c.id ]),
equivalents= { a.c.id: set([ a.c.id]) }
 ).traverse( e)
 print e

 # results

 #in r3726: (OK)
 a.id = a.xxx_id
 a_1.id = a.xxx_id

 #in r3727 - as well as in r3760:
 a.id = a.xxx_id
 a.id = a.xxx_id
 

 no, it works, it just clones in all cases:

 sql_util.ClauseAdapter( b, include= set([ a.c.id ]),
 equivalents= { a.c.id: set([ a.c.id]) }
  ).traverse( e)

 assert str(e) == "a_1.id = a.xxx_id"
   
huh? it dies here. r3727 or r3760, all the same; py2.5..., and i did remove 
all *.pyc files





[sqlalchemy] Re: r3727 / AbstractClauseProcessor problem

2007-11-08 Thread sdobrev

Michael Bayer wrote:
 On Nov 8, 2007, at 11:32 AM, svilen wrote:

   
 mmmh. you can think of splitting the Visitor into 3: Guide (who
 traverses _everything_ given), Visitor (who does things), and an
 intermediate Decisor, who decides where to go / what to do. But this
 can get complicated (slow), although it would be quite clear who does
 what.
 Also, do have both onEntry and onExit for each node; i am sure some
 operations will require both; first to gather info, second to make a
 decision about what to do with it while still at that level.

 i've done quite a lot of tree/expression traversers, and while
 readonly walking does not care much if on entry or on exit (except if
 you want depth- or breadth-first), replacements-in-place and
 copy+replace sometimes needed both entry and exit hooks, + they were
 conditional, like in leafs.
 

 i think you should come up with your own ACP and lets take a look at  
 it.while i can get various ACP ideas to work further and further,  
 im still able to come up with plenty of cases where none of them work  
 and its because the structure of a clauseelement really isnt a tree.   
 the same node can be represented many times largely because columns  
 reference their parent table.
   
you mean a graph? mmm, no.
IMO an expression is a tree of nodes; each node points to some element 
of another space (metadata things + bindparams + ...), many nodes can 
point to the same element, and eventually many elements can hold the same value 
(but be different elements - e.g. literal(1) and another literal(1) - 
not sure whether this is usable). So traversing the (original) 
tree is one thing; what to do with the elements pointed to by the nodes is 
another decision; e.g. whether to process multiply-referenced elements 
multiple times or just once, etc.

i can try... but don't rely on me too much ;-)





[sqlalchemy] Re: r3727 / AbstractClauseProcessor problem

2007-11-08 Thread sdobrev


 heres the structure of:   select(from_obj=[t1, t2, t1.join(t2)])

 select ---+--- t1 -----------------+
           +--- t2 --------------+  |
           +--- join of t1/t2 ---+--+

 t2 and t1 both have two parents, and there are two paths to each of t1  
 and t2 from the head select.  so its not a tree in the strict sense.

 or another one:

 s1 = t1.select().alias('s1')
 s2 = t1.select().alias('s2')

 s3 = s1.union(s2)  (two paths to t1:  s3-s1-t1, s3-s2-t1)

 any kind of subquery which references a table at a higher level falls  
 into this category.
   
hmm. it's still a tree, just with nodes that contain the same things (t1). There 
are no cyclic paths back to the _same_ node in the expression.
like x+y+23*(x+1) --- x is used/pointed to twice, but the expression is 
still a tree.

is there any case where some t1 (or even a subexpression) is translated once in 
one way, and then in another branch in another way? e.g. as if in the 
above x+y+..., the first x were to be replaced with its value, but the second with 
its name (say, because it's in round brackets - in some now-invented 
syntax of mine).

i think there is some mix-up between what the expression is, as grammar, 
and what it actually means, and you're trying to solve/combine both in one 
thing/visitor - maybe also what it should mean _after_ the processing. 
While they should all be separate notions, somehow.
eh, i'm just throwing out ideas / alternative viewpoints to play with...
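The "shared leaf, still a tree" point can be sketched with plain Python objects (a hypothetical Node class, nothing SQLAlchemy-specific): the same object x sits in two branches, a depth-first walk terminates without cycle detection and simply visits it twice.

```python
# Toy expression nodes for x + y + 23*(x + 1).  The *object* x appears in
# two branches, but there is no cycle: a plain depth-first walk terminates
# and reaches the shared leaf along two distinct paths.
class Node:
    def __init__(self, label, *children):
        self.label = label
        self.children = children

x = Node("x")
y = Node("y")
expr = Node("+",
            Node("+", x, y),
            Node("*", Node("23"), Node("+", x, Node("1"))))

def walk(node, out):
    # pre-order traversal, no visited-set needed since there are no cycles
    out.append(node.label)
    for child in node.children:
        walk(child, out)
    return out

labels = walk(expr, [])
print(labels.count("x"))  # -> 2
```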





[sqlalchemy] mixed joined+concrete inheritance broken/r3735

2007-11-08 Thread sdobrev

i have an A-B-C test case where B inherits A via joined-table, and C inherits B 
via concrete; and there are links between them, e.g. A points to B. it 
used to work before r3735.

now query(A) gives:
 NoSuchColumnError: Could not locate column in row for column 
'A_tbl.db_id'
if the A-B link is not assigned, it works.






[sqlalchemy] Re: r3727 / AbstractClauseProcessor problem

2007-11-07 Thread sdobrev


 ahha. so i am replacing one whole subexpr with somthing, and the
 original subexpr is not traversed inside.
 if i comment the stop_on.add(), it attempts to traverse the result
 subexpr, not the original one.
 i want the original to be traversed. Something like doing onExit
 instead of current onEntry.
 if its too hard, i can probably traverse it twice, once just marking ,
 2nd time replaceing things? i'll try
 
 if youre replacing a subtree, why would you want to then traverse that  
 part of the subtree which was replaced ?  
because as i said, the needed work happens on exit from the nodes, i.e. the whole 
room/building is traversed, and only then do you decide what to make of it; that 
way i can traverse only once.
 can you see why how i have  
 it working now is reasonably straightforward ?  perhaps a second call  
 to ACP.traverse() should be called for those elements which were  
 replaced.
i changed my code to traverse everything twice, once doing nothing, and once 
replacing; it works. speed is not an issue there.
Still, if you can make the onEntry/onExit/both behavior switchable, that would be 
useful. Maybe not immediately, but i'm sure one day you'll need it. 
Right now it is onEntry, and the actual code/choice is not made 
clearly; it is spread between traverse() and _convert_element() and similar.

Has anything changed in plain ClauseVisitor? because i have other (more 
important) code elsewhere which also breaks at the same r3727 - that's 
today's topic of digging.
   also r3754 fixes the issue that aliased selects, while  
 being traversed, were not being copied properly.  i still need to  
 tweak the rules for aliases (aliased tables, for example, remain  
 immutable).
   
i don't really understand why you need the ACP to be so different from the plain 
visitor; i mean, can't they share some skeleton part of the traversing, while 
putting all the choices (visit* vs convert; onEntry/onExit; stop/don't) 
in their own parts?
After all, the visitor pattern is twofold: a) Guide + b) Visitor; the Guide 
doing the traversing, the Visitor noting things; the choice of where to go might 
be in the visitor and/or in the guide. Sometimes (one extreme) the visitor is 
just one dumb functor; in other cases (the other extreme) the visitor is very 
sophisticated and even does the guiding/traversing itself.
Here it looks more like the second case: you have most of both sides put in 
the Visitor, and only a small part (the specific visit_* / copy_internals) 
left to the actual nodes.
And to me, the skeleton is still the same between ACP and ClauseVisitor.
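The Guide/Visitor split argued for above can be sketched in a few lines of plain Python (hypothetical names, not SQLAlchemy's ClauseVisitor API): the Guide owns the traversal order and calls both entry and exit hooks, while the Visitor only records or reacts.

```python
# Minimal Guide/Visitor separation with both onEntry and onExit hooks.
# The Guide decides where to go; the Visitor decides what to do there.
class Guide:
    def traverse(self, node, visitor):
        visitor.on_entry(node)
        for child in getattr(node, "children", ()):
            self.traverse(child, visitor)
        visitor.on_exit(node)

class CollectingVisitor:
    def __init__(self):
        self.events = []
    def on_entry(self, node):
        self.events.append(("enter", node.label))
    def on_exit(self, node):
        self.events.append(("exit", node.label))

class Node:
    def __init__(self, label, *children):
        self.label, self.children = label, children

tree = Node("root", Node("a"), Node("b"))
v = CollectingVisitor()
Guide().traverse(tree, v)
print(v.events[0], v.events[-1])  # -> ('enter', 'root') ('exit', 'root')
```

A replace-on-exit strategy would act in on_exit, after all children are known; a replace-on-entry one in on_entry, which is the behavior the message says ACP currently hard-codes.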





[sqlalchemy] r3695 causes strange error

2007-11-06 Thread sdobrev

hi.
i have a somewhat messy setup (~test case): an association with an 
intermediate table/class, double-pointing to one side and single-pointing 
to the other. i set up both A-links in one item, and set up 
only the first in another item, with the other link (a2_link) pre-set to None. 
And i have the error below since r3695.
The error seems to disappear if i do not explicitly initialize a2_link 
to None - i.e. don't touch it, or set it to some object.
Any idea what's wrong?

Traceback (most recent call last):
  File a.py, line 91, in module
s.flush()
  File /home/az/src/hor-trunk/sqlalchemy/orm/session.py, line 683, in 
flush
self.uow.flush(self, objects)
  File /home/az/src/hor-trunk/sqlalchemy/orm/unitofwork.py, line 209, 
in flush
flush_context.execute()
  File /home/az/src/hor-trunk/sqlalchemy/orm/unitofwork.py, line 436, 
in execute
UOWExecutor().execute(self, head)
  File /home/az/src/hor-trunk/sqlalchemy/orm/unitofwork.py, line 1055, 
in execute
self.execute_save_steps(trans, task)
  File /home/az/src/hor-trunk/sqlalchemy/orm/unitofwork.py, line 1074, 
in execute_save_steps
self.execute_childtasks(trans, task, False)
  File /home/az/src/hor-trunk/sqlalchemy/orm/unitofwork.py, line 1092, 
in execute_childtasks
self.execute(trans, child, isdelete)
  File /home/az/src/hor-trunk/sqlalchemy/orm/unitofwork.py, line 1055, 
in execute
self.execute_save_steps(trans, task)
  File /home/az/src/hor-trunk/sqlalchemy/orm/unitofwork.py, line 1072, 
in execute_save_steps
self.execute_dependencies(trans, task, False)
  File /home/az/src/hor-trunk/sqlalchemy/orm/unitofwork.py, line 1085, 
in execute_dependencies
self.execute_dependency(trans, dep, False)
  File /home/az/src/hor-trunk/sqlalchemy/orm/unitofwork.py, line 1066, 
in execute_dependency
dep.execute(trans, isdelete)
  File /home/az/src/hor-trunk/sqlalchemy/orm/unitofwork.py, line 1021, 
in execute
self.processor.process_dependencies(self.targettask, [elem.obj for 
elem in self.targettask.polymorphic_tosave_elements if elem.obj is not 
None], trans, delete=False)
  File /home/az/src/hor-trunk/sqlalchemy/orm/dependency.py, line 282, 
in process_dependencies
self._synchronize(obj, child, None, False, uowcommit)
  File /home/az/src/hor-trunk/sqlalchemy/orm/dependency.py, line 317, 
in _synchronize
self.syncrules.execute(source, dest, obj, child, clearkeys)
  File /home/az/src/dbcook/sqlalchemy/orm/sync.py, line 91, in execute
rule.execute(source, dest, obj, child, clearkeys)
  File /home/az/src/dbcook/sqlalchemy/orm/sync.py, line 139, in execute
raise exceptions.AssertionError(Dependency rule tried to blank-out 
primary key column '%s' on instance '%s' % (str(self.dest_column), 
mapperutil.instance_str(dest)))
sqlalchemy.exceptions.AssertionError: Dependency rule tried to blank-out 
primary key column 'IntermediateAB.a2_link_id' on instance 
'[EMAIL PROTECTED]'





[sqlalchemy] Re: r3695 causes strange error

2007-11-06 Thread sdobrev

[EMAIL PROTECTED] wrote:
 sorry, here the files
   
and on line 83 (marked XXX) it must be =None to get the error.




[sqlalchemy] Re: r3695 causes strange error

2007-11-06 Thread sdobrev

Michael Bayer wrote:
 nevermind, this one was pretty straightforward and r3695 didnt  
 actually break things, it just revealed the lack of checking for  
 things elsewhere, so works in r3747.
   
yes, that works.
but now multiple other things broke. pff

 - the mapper.properties, in its new get()/iterate() form, is not 
available yet when mapper extensions are set up (e.g. at 
ext.instrument_class). i need to find which attribute name is being 
mapped to a certain table.column... maybe i can get by without it..

 - something changed in the traversing (AbstractClauseProcessor - r3727) 
and it does not find the proper things...

 - r3735 - i started getting these again:
 File /home/az/src/dbcook/sqlalchemy/orm/util.py, line 261, in __getitem__
return self.row[key]
  File /home/az/src/dbcook/sqlalchemy/engine/base.py, line 1247, in 
__getitem__
return self.__parent._get_col(self.__row, key)
  File /home/az/src/dbcook/sqlalchemy/engine/base.py, line 1470, in 
_get_col
rec = self._key_cache[key]
  File /home/az/src/dbcook/sqlalchemy/util.py, line 72, in __missing__
self[key] = val = self.creator(key)
  File /home/az/src/dbcook/sqlalchemy/engine/base.py, line 1375, in 
lookup_key
raise exceptions.NoSuchColumnError(Could not locate column in row 
for column '%s' % (str(key)))
NoSuchColumnError: Could not locate column in row for column 'A_tbl.db_id'

details later.





[sqlalchemy] Re: Separate version table

2007-10-29 Thread sdobrev

 I dont need history tracking, just revert documents to older ones.
that is history, just not timed history. 
you'll have documents, and then for each document a bunch of versions.
 Once get it working on simple form, then perhaps trying optimicing
 and feeding only field, that has changed. Version field automatic
 update was on my mind lately. Could it be something like this:

 Column(version, Integer, default=1, nullable=False,
 onupdate=document_versions.select(document_versions.document_id=c.d
ocument_id, order_by=version, order=DESC, limit=1)
no idea - what is this onupdate expected to do here?
afaik these things are done with a mapper extension, but i'm not sure 
that's the only way.
what would the version field be? something that counts the inserts for 
that particular document_id?
 I just dont know, how to get current document id, is it just
 c.document_id?
yes, yourobj.document_id, or whatever you named the column.




[sqlalchemy] Re: Separate version table

2007-10-28 Thread sdobrev

 Next design problem for me is version table. I have Document model
 with DocumentVersion model, but i dont know how to:

 - get the latest version of document
 - set creator and updator, automatic behavior for this
 - update version number
 - fetch thru Document(s) and DocumentVersion(s)

just to warn you: if you're trying to have a versioned document, i.e. a 
document with a history of changes/versions, and to track them in time, 
that's a rather complicated thing. see the bitemporal mixin recipe in 
dbcook:
https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/misc/timed2/

if you don't really care about the history, but only need the last version, 
that might be easier, YMMV.

automatic setup of fields in an object - in your case the 
creator/modifier of a document - can be done at several places/times:
 - the object's constructor
 - just before saving the object to the DB - MapperExtension.before_insert 
and friends
 - maybe other places to hook in between these two
but you'll need a context-like state to keep track of the current user 
(or time or whatever).
or, you can do it by hand somewhere at the proper place within your 
workflow, around saving the object. Beware that either way it must be 
done in a way that does not change/affect objects which have not 
really been modified - else all objects will always be saved/updated, over 
and over.





[sqlalchemy] Re: polymorphic question

2007-10-25 Thread sdobrev

hi, i'm back to the cane field...

 do your ABC tests all use select_mapper ?   ticket 795 revealed
 that totally basic ABC loading was broken if you're using
 secondary loads of the remaining attributes (which is the default
 behavior when you dont specify select_mapper).
you mean the mapper's select_table=..? it's always used, because of 
polymorphism.

today i tried those ABC things and found 2 issues:
 - r3449 introduces some memory leak 
 - r3646 introduces some forever-recursion, which goes like:
  File other/expression.py, line 355, in module
p2 = session.query( Person).filter_by( name= 'pesho').first()
  File /home/az/src/hor-trunk/sqlalchemy/orm/query.py, line 595, in 
first
ret = list(self[0:1])
  File /home/az/src/hor-trunk/sqlalchemy/orm/query.py, line 620, in 
__iter__
context = self._compile_context()
  File /home/az/src/hor-trunk/sqlalchemy/orm/query.py, line 873, in 
_compile_context
value.setup(context)
  File /home/az/src/hor-trunk/sqlalchemy/orm/interfaces.py, line 
483, in setup
self._get_context_strategy(querycontext).setup_query(querycontext, 
**kwargs)
  File /home/az/src/hor-trunk/sqlalchemy/orm/strategies.py, line 
553, in setup_query
value.setup(context, parentclauses=clauses, 
parentmapper=self.select_mapper)
  File /home/az/src/hor-trunk/sqlalchemy/orm/interfaces.py, line 
483, in setup
self._get_context_strategy(querycontext).setup_query(querycontext, 
**kwargs)
  File /home/az/src/hor-trunk/sqlalchemy/orm/strategies.py, line 
553, in setup_query
value.setup(context, parentclauses=clauses, 
parentmapper=self.select_mapper)
... last two repeated ...

more details tomorrow
ciao
svilen




[sqlalchemy] Re: polymorphic question

2007-09-28 Thread sdobrev

On Friday 28 September 2007 01:14:32 Michael Bayer wrote:
 On Sep 27, 2007, at 3:53 PM, [EMAIL PROTECTED] wrote:
  i know in 0.4 one can request a polymorphic request to be
  automaticaly split into multiple per-subtype requests. i've no
  idea how this compares +/- to the huge union/outerjoin that gives
  all in one long shot.
 
  my question is.. can this mechanism/approach be used somehow for
  (semi) automatic vertical loading, i.e. instead of loading all
  items in all rows, load only some items normaly while
  (pre)loading some specific columns in whole in one separate shot?
  say there is a table X having a,b,c..p,q. So i do a plain row
  query over X, getting a,b,c,d per row, and request the p and q
  columns (pre)loaded whole (or partial as per some limit).
  i imagine this to be useful to avoid zillions of eagerloads/joins
  etc (when the columns in question are links to somewhere).
  but i might be wrong... - i've no idea how the eagerloading
  scales. Any arguments pro/con?

 if youre talking about just one table X, deferred columns will
 hold off the loading of those columns until they are accessed.  as
 far as the joined tables, polymorphic_fetch=deferred will leave
 those to be loaded when those attributes are first accessed. 
 that's as well as I understand your question so far
mmm no, i'm not asking that. 
is there any available mechanism to load a query by columns instead of 
by rows? that is, to split a query vertically - paginate vertically, not 
just horizontally. Think of a table with rows and many columns. 
usually a query yields full rows. i want to split the query into 
several, each having a number (>=1) of columns of the whole thing, 
all under the same filtering, one being the leading one, the others loaded 
per request, in big chunks.
nevermind, it won't be hard to make anyway, once i know exactly what i 
need
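What "splitting a query vertically" would mean can be shown with a toy model in plain Python (no SQLAlchemy involved, names invented): one leading fetch gets the key plus a few light columns, a second fetch under the same filter gets the heavy columns keyed by id, and the two are stitched back together per row.

```python
# Toy model of vertical query splitting.  Rows have columns a..q; the
# leading "query" fetches (id, a, b); the heavy columns (p, q) are fetched
# in one separate shot keyed by id and stitched back in per row.
rows = [
    {"id": 1, "a": 10, "b": 11, "p": "big-1", "q": "huge-1"},
    {"id": 2, "a": 20, "b": 21, "p": "big-2", "q": "huge-2"},
]

leading = [{k: r[k] for k in ("id", "a", "b")} for r in rows]    # query 1
heavy = {r["id"]: {k: r[k] for k in ("p", "q")} for r in rows}   # query 2, same filter

stitched = [dict(lead, **heavy[lead["id"]]) for lead in leading]
print(stitched[0]["p"])  # -> big-1
```

In a real implementation the second fetch could be issued lazily and in chunks, which is the "loaded per request" part of the question.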




[sqlalchemy] polymorphic question

2007-09-27 Thread sdobrev

i know in 0.4 one can request that a polymorphic query be automatically 
split into multiple per-subtype queries. i've no idea how this 
compares +/- to the huge union/outerjoin that gives everything in one long 
shot.

my question is.. can this mechanism/approach be used somehow for 
(semi-)automatic vertical loading, i.e. instead of loading all 
items in all rows, load only some items normally while (pre)loading 
some specific columns wholesale in one separate shot?
say there is a table X having columns a,b,c..p,q. So i do a plain row query 
over X, getting a,b,c,d per row, and request the p and q columns 
(pre)loaded whole (or partially, as per some limit).
i imagine this being useful to avoid zillions of eagerloads/joins etc 
(when the columns in question are links to somewhere).
but i might be wrong... - i've no idea how the eagerloading scales.
Any arguments pro/con?




[sqlalchemy] Re: OID usage

2007-09-27 Thread sdobrev

just some ideas.

 Here is an example of a properly interpreted row using the
 dbutils.OID class:
  08C82B7C6A844743::SDRAM::64Mb::Marketing::0C::70C::DC Electrical
 Characteristics

 Here is the binding statement being generated by SqlAlchemy:
 2007-09-27 13:32:12,444 INFO sqlalchemy.engine.base.Engine.0x..cL
 {'spec_section_symbols_section_OID': '\x08\xc8+|j\x84GC'}

 I know in Perl I have to format the query without quotes or I don't
 get any results. Something like this:
 select * from table where section_OID=0x08C82B7C6A844743

the dict shows that your value is sent/assumed to be a raw byte string, 
while what you're suggesting in the perl example is that the server 
expects a sort of hexadecimal long-int representation. Are these 
equivalent? maybe you need to convert from one to the other?

 I am not receiving any type of error, just an empty result set. I
 have verified that the OID is being interpreted correctly by
 running an interactive sql statement on the interpreted value
 shown.

 I'm wondering if SqlAlchemy is quoting the parameter when it is
 bound or if anyone has any other suggestions as to why I am not
 getting any results.
insert one row via sqlalchemy with OID=aabbccddeeff and see what you get.
that may give you an idea of what conversion happens on the way.
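The two spellings in the message are in fact the same eight bytes; the stdlib shows the conversion between the raw byte string SQLAlchemy bound and the hex literal used in the Perl query:

```python
import binascii

# The raw byte string from SQLAlchemy's bind-param log and the hex literal
# from the Perl query are the same value in two encodings.
raw = b"\x08\xc8+|j\x84GC"

assert binascii.hexlify(raw).decode().upper() == "08C82B7C6A844743"
assert bytes.fromhex("08C82B7C6A844743") == raw
print("same value, two encodings")
```

So if the server side compares against an unquoted hex literal, the client may need to hexlify (or the server to unhex) before the two will match.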




[sqlalchemy] Re: r3507 breaks here

2007-09-26 Thread sdobrev

On Wednesday 26 September 2007 20:09:10 Michael Bayer wrote:
 On Sep 25, 2007, at 12:15 PM, [EMAIL PROTECTED] wrote:
  anyway, all 10328 (joined) cases pass, have a nice day.
  svilen

 ive changed my approach on this one to what i should have done in
 the 1st place.  try out 3518.
ok too now, with no special naming.





[sqlalchemy] Re: r3507 breaks here

2007-09-24 Thread sdobrev

it's something to do with that expects thing...
just doing an x = str(b) before session.clear() breaks it all.

On Monday 24 September 2007 18:57:36 Michael Bayer wrote:
 somethings weird.  if i take out your expects/me.query stuff (which
 remains impossible to read), and do this :

  session.clear()
  for o in session.query(A):
  print str(o)


 I get this:

 C( db_id=1 linkA=None linkC=B( id=notset name=ben ) dataC=mc
 name=cc )
 A( db_id=2 linkA=B( id=notset name=ben ) name=anna )
 B( db_id=3 linkB=C( id=notset name=cc ) linkA=None name=ben
 dataB=gun )


 which is correct.  im trying all sorts of things and I cant get the
 linkB=B condition the test case claims.

 but this expects string:

 dict( klas= A, table= table_A, oid= a.db_id, exp_single= str(a),
  exp_multi = [ str(a), str(b), str(c) ]),

 returns the wrong result.  what am i doing wrong ?

 On Sep 24, 2007, at 10:53 AM, [EMAIL PROTECTED] wrote:
  scratch that, i found one, try r3512.
 
  this one is ok now.
 
  Another one.
  - the A-B-C all-cases works 100% on 0.3.xx but some cases fail on
  0.4.anything.
  Attached is one ~random case of about 168 similar ones - t1.py.
 
  ciao
 
 
  sa_gentestbase.py
  t1.py

 





[sqlalchemy] Re: r3507 breaks here

2007-09-24 Thread sdobrev

On Monday 24 September 2007 22:31:35 Michael Bayer wrote:
 On Sep 24, 2007, at 12:13 PM, [EMAIL PROTECTED] wrote:
  it something to do with that expects thing...
  just do a x = str(b) before session.clear(), breaks it all.

 OK...that was just a *great* way to spend all day tracking that one
 down.  its fixed in 3515.  I didn't bother to see why it doesn't
 exist in 0.3, but its confusing that it doesn't, since the basic
 mechanics of this one are present there as well, there must be some
 subtlety which conceals it.   I hope you can find more insanely
 obscure bugs like this for me tomorrow !
at your service, sir |-:




[sqlalchemy] Re: plain python objects from ORM

2007-09-24 Thread sdobrev

On Tuesday 25 September 2007 05:43:40 Huy Do wrote:
 Michael Bayer wrote:
  On Sep 24, 2007, at 11:48 AM, Huy Do wrote:
  Hi,
 
  Is it possible to get SA ORM to return plain python objects
  (with eagerloaded relations and all) but without any attribute
  instrumentation
  (or anything else magically added by SA).
 
  not really.   unless you remove the instrumentation from the
  classes themselves afterwards (a one way operation).

 Any pointers on how to do this? to the whole object hierarchy.
just grab the object's __dict__ and build any class out of it. you'll 
need two parallel class hierarchies, or maybe one parasite hierarchy 
hanging off the other..
e.g.
class Aplain:
    # ...methods...
    def undress(self):
        # build the plain base class explicitly; self.__class__ would
        # be the instrumented subclass
        r = Aplain.__new__(Aplain)
        r.__dict__.update(self.__dict__)
        return r
class Aplain4SA(Aplain): pass   # the class that actually gets mapped
m = mapper(Aplain4SA, ...)
...

def myquery(query):
    for q in query: yield q.undress()

and use myquery() as wrapper for all session.query(..)
the overhead would be one generator..
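A runnable sketch of the idea, in plain Python with no SQLAlchemy involved; the `_state` key and the underscore filter are assumptions about what ORM bookkeeping in `__dict__` might look like:

```python
class Aplain(object):
    def __init__(self, **kw):
        self.__dict__.update(kw)

    def undress(self):
        # copy instance state into a plain, uninstrumented object;
        # drop underscore-prefixed keys as stand-ins for ORM bookkeeping
        r = Aplain.__new__(Aplain)
        r.__dict__.update((k, v) for k, v in self.__dict__.items()
                          if not k.startswith('_'))
        return r

class Aplain4SA(Aplain):
    pass  # this subclass would be the one handed to mapper()

def myquery(query):
    for q in query:
        yield q.undress()

instrumented = Aplain4SA(name='x', _state='orm-internal')
plain = next(myquery([instrumented]))
print(type(plain).__name__, plain.__dict__)  # Aplain {'name': 'x'}
```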




[sqlalchemy] Re: r3507 breaks here

2007-09-24 Thread sdobrev
On Monday 24 September 2007 22:31:35 Michael Bayer wrote:
 On Sep 24, 2007, at 12:13 PM, [EMAIL PROTECTED] wrote:
  it something to do with that expects thing...
  just do a x = str(b) before session.clear(), breaks it all.

 OK...that was just a *great* way to spend all day tracking that one
 down.  its fixed in 3515.  I didn't bother to see why it doesn't
 exist in 0.3, but its confusing that it doesn't, since the basic
 mechanics of this one are present there as well, there must be some
 subtlety which conceals it.   I hope you can find more insanely
 obscure bugs like this for me tomorrow !

now that you mentioned it... r3515 introduces something:

  File "/home/az/src/dbcook/sqlalchemy/orm/query.py", line 619, in __iter__
    return self._execute_and_instances(statement)
  File "/home/az/src/dbcook/sqlalchemy/orm/query.py", line 624, in _execute_and_instances
    return iter(self.instances(result))
  File "/home/az/src/dbcook/sqlalchemy/orm/query.py", line 680, in instances
    self.select_mapper._instance(context, row, result)
  File "/home/az/src/dbcook/sqlalchemy/orm/mapper.py", line 1360, in _instance
    discriminator = row[self.polymorphic_on]
  File "/home/az/src/dbcook/sqlalchemy/engine/base.py", line 1590, in __getitem__
    return self.__parent._get_col(self.__row, key)
  File "/home/az/src/dbcook/sqlalchemy/engine/base.py", line 1394, in _get_col
    rec = self._key_cache[key]
  File "/home/az/src/dbcook/sqlalchemy/util.py", line 72, in __missing__
    self[key] = val = self.creator(key)
  File "/home/az/src/dbcook/sqlalchemy/engine/base.py", line 1304, in lookup_key
    raise exceptions.NoSuchColumnError("Could not locate column in row for column '%s'" % (str(key)))
NoSuchColumnError: Could not locate column in row for column 'pu_a.atype'
 ? failed A.query_SUB_instances: Could not locate column in row for column 'pu_a.atype'

pu_a is a polymorphic union; the query fails only when 
from_statement() is involved.





_t3515.py
Description: application/python


sa_gentestbase.py
Description: application/python


[sqlalchemy] r3507 breaks here

2007-09-23 Thread sdobrev

hi.
r3506 is still ok, while r3507 gives this:
result: [] expected: [35]
SAMPLE: 2006-09-11 00:00:00 2006-09-12 00:00:00 2006-09-14 00:00:00 
[35]
'trans exact, valids between _2' FROM test_range TimedRangeTestCase
--
Traceback (most recent call last):
  ...
  File "test_timed_dbcook.py", line 142, in _get_range_val
    q = me.val.get_obj_history_in_range( me.val.OBJ_ID, timeFrom, timeTo).all()
  File "/home/az/src/hor-trunk/sqlalchemy/orm/query.py", line 571, in all
    return list(self)
  File "/home/az/src/hor-trunk/sqlalchemy/orm/query.py", line 619, in __iter__
    return self._execute_and_instances(statement)
  File "/home/az/src/hor-trunk/sqlalchemy/orm/query.py", line 624, in _execute_and_instances
    return iter(self.instances(result))
  File "/home/az/src/hor-trunk/sqlalchemy/orm/query.py", line 680, in instances
    self.select_mapper._instance(context, row, result)
  File "/home/az/src/dbcook/sqlalchemy/orm/mapper.py", line 1436, in _instance
    self.populate_instance(context, instance, row, **flags)
  File "/home/az/src/dbcook/sqlalchemy/orm/mapper.py", line 1496, in populate_instance
    (newpop, existingpop, post_proc) = prop.create_row_processor(selectcontext, self, row)
ValueError: need more than 2 values to unpack

the resulting query is somewhat awful, but it worked before that:

-- TEST trans exact, valids between _2* SA: INFO 
SELECT PolymBase.disabled AS PolymBase_disabled, PolymBase.val 
AS PolymBase_val, PolymBase.time_valid 
AS PolymBase_time_valid, PolymBase.time_trans 
AS PolymBase_time_trans, PolymBase.obj_id 
AS PolymBase_obj_id, PolymBase.atype 
AS PolymBase_atype, PolymBase.db_id 
AS PolymBase_db_id, PolymLeaf.db_id AS PolymLeaf_db_id
FROM (
SELECT max(PolymBase.db_id) AS db_id, PolymBase.time_trans AS 
time_trans, PolymBase.time_valid AS time_valid
FROM PolymBase JOIN (
SELECT max(PolymBase.time_trans) AS 
time_trans, PolymBase.time_valid AS time_valid
FROM PolymBase
WHERE PolymBase.time_trans <= ? AND PolymBase.time_valid 
>= ? AND PolymBase.time_valid <= ? AND PolymBase.obj_id = ? 
AND PolymBase.atype = ?
GROUP BY PolymBase.time_valid
) AS t1 ON PolymBase.time_valid = t1.time_valid 
AND PolymBase.time_trans = t1.time_trans
WHERE PolymBase.obj_id = ? AND PolymBase.atype = ?
GROUP BY PolymBase.time_valid, PolymBase.time_trans
) AS timedr, PolymBase JOIN PolymLeaf ON PolymLeaf.db_id 
= PolymBase.db_id
WHERE PolymBase.db_id = timedr.db_id AND NOT PolymBase.disabled 
ORDER BY PolymBase.time_valid
* SA: INFO [11, 12, 14, 1, 'PolymLeaf', 1, 'PolymLeaf']


any idea? i may try to prepare some stripped testcase but it'll take 
time...
if you wanna try, svn co then run make in 
https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/misc/timed2/test/
or do
PYTHONPATH=..:../../../..:../../..:$(PYTHONPATH) python 
test_timed_dbcook.py 

svil




[sqlalchemy] Re: postgres POLYGON data type

2007-09-17 Thread sdobrev

i don't have recent py-GIS experience, but in the past it's been 
tuple-likes and numpy arrays. The best option would be to have some 
default data-representation constructor for each SA-GIS type, and 
allow overriding it.
e.g. Point holds data by default in a tuple (Point.DataHolder=tuple), 
but i can override Point.DataHolder=my-special-point-type and i'll 
get those instead. There may be numerous aspects to weigh here before 
choosing a representation.
Even some semantics can float (e.g. is a closed polygon represented 
as p1..pn + closed=true, or as p1..pn,p1 again), so have a clear, 
unambiguous creation protocol and leave the actual representation to 
the programmer.
Otherwise you're stuck with something half the people won't like.
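The pluggable-representation idea can be sketched in plain Python; `DataHolder` comes from the suggestion above, while the class names and the named-coordinate access are assumptions for illustration:

```python
class Point(object):
    DataHolder = tuple  # default representation; override per application

    def __init__(self, *coords):
        self.data = self.DataHolder(coords)

class MyPoint(Point):
    # custom holder: still a tuple underneath, but with named access
    class DataHolder(tuple):
        @property
        def x(self): return self[0]
        @property
        def y(self): return self[1]

p = Point(1, 2)
q = MyPoint(1, 2)
print(type(p.data) is tuple, q.data.x, q.data.y)  # True 1 2
```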

my 2baht
svilen

On Monday 17 September 2007 21:59:00 jason kirtland wrote:
 sc0ttbeardsley wrote:
  On Sep 17, 5:03 am, [EMAIL PROTECTED] wrote:
  Has anyone added support for the POLYGON data type in
  PostgreSQL? If so, is there any code that can be shared?
 
  I haven't seen POLYGON but the beginnings of such support is over
  at bycycle.org[1]. I'm interested in something similar for
  MySQL's spatial extensions. Has anyone done MySQL yet?

 Just before the first 0.4 beta I started looking at adding support
 for the OGC types for databases that support them.  The basic
 column plumbing in SQLAlchemy is simple- an hour's work, with unit
 tests- but what's less obvious is what to map those column types
 *to*.  They could just be tuples or the like, but that didn't feel
 as useful as one might want.

 I've been idly monitoring the progress summary for the GeoDjango
 project to see if their efforts scare up any clear winners for
 python-side representation of the geometry types.  They seem to be
 mapping to ctypes-fronted native classes, which feels like a good
 direction to me.

 What do all of you GIS folks think about the python mappings?  Is
 there a clear approach, or would the type implementation (e.g.
 Point, Curve, etc.) need to be pluggable?  Some specialized
 expression functions to make query specification smooth might also
 be in order.

 I think it would be pretty nifty to have a GeoAlchemy kind of
 extension or add-on that really rocks for GIS work.  I haven't had
 cause to use geometry in database work and so my GIS experience is
 pretty limited, but I would be more than delighted to lend a hand
 if folks with working domain knowledge are also game.

 -j


 





[sqlalchemy] Re: MSSQL connection url format?

2007-09-17 Thread sdobrev

see some notes at
http://www.sqlalchemy.org/trac/wiki/DatabaseNotes

also check dbcook.usage.sa_engine_defs.py at 
(svn co)
https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/usage/
for some create/drop stuff, both pyodbs/pymssql.

 3. I'll be piggy backing on an existing ERP system and I'm trying
 to decide what would be the best way to store new tables
 - in the DB used by the ERP system
 - in a new DB on the same MSSQL server
 - in a SQLite DB

sqlite db will be cheapest and fastest (in terms of rapid 
development), but might not support all the magic you want, e.g. 
sequences etc. i'd say go initially for sqlite, don't rely on 
specific mssql features, and regularly try things on mssql to prevent 
surprises. Then once you're 70+% there, specialize if need be.

 How easy does SA make it to use data from multiple DBs?
transparent enough for an anti-SQLer like me.
see copyall.py at
https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/misc/metadata/

have fun
svil




[sqlalchemy] Re: How to best select from a large data set

2007-09-16 Thread sdobrev

 In my database I have 5000 customers who made purchases and made
 some form of payment. I need to find the names of all customers who
 made payments by cash.

 My SQL query looks like this:

 SELECT customers.name, payments.payid
   FROM customers, purchases, payments
  WHERE customers.cid = purchases.cid
AND purchases.payid = payments.payid
AND payments.pay_type = 'cash'

have a look at 'querying with joins' in the docs. Also, the 0.4 docs 
are better at explaining some things than the 0.3 ones (but check for 
feature availability).

non-orm:
q = customers_table.join( purchases_table).join( 
payments_table).select( payments_table.c.pay_type=='cash').execute()
for a in q: print a
you have something like this anyway.

orm:
q = Customer.query().add_entity( Payment).join( 
['purchases', 'payments'] ).filter( Payment.pay_type=='cash')
for a in q: print a

here you'll get tuples (objCustomer, objPayment) 

in both cases, for further optimization, you could (somehow?) 
preselect only the columns you need instead of getting all of them. 
Especially for the orm/Customer case, you'll get the whole 
purchases[] relation loaded unless it is lazy.

you can also make a mapper directly over the table-join-select:

class CusPay: pass
m = mapper( CusPay, customers_table.join( purchases_table).join( 
payments_table).select( payments_table.c.pay_type=='cash') )
for a in session.query( CusPay): print a

or something of the sort, avoiding any extra 
columns/relations/whatever.

 Currently my code looks like this:
 for customer in customers:
 for purchase in customer.purchases:
 if purchase.payment.pay_type == 'cash':
 print %s, %s % (customer.name,
 purchase.payment.payid)
you are not using any db-side filtering; you are filtering everything 
by hand in python. so you're instantiating all the customers, then 
for each one loading all its purchases, then printing those == 'cash' 
- essentially loading the whole db.
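The difference between filtering in the database and filtering in Python can be demonstrated with plain sqlite3; the minimal schema and data below are made up, with table and column names following the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (cid INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE payments  (payid INTEGER PRIMARY KEY, pay_type TEXT);
    CREATE TABLE purchases (pid INTEGER PRIMARY KEY,
                            cid INTEGER REFERENCES customers,
                            payid INTEGER REFERENCES payments);
    INSERT INTO customers VALUES (1, 'ann'), (2, 'bob');
    INSERT INTO payments  VALUES (10, 'cash'), (11, 'card');
    INSERT INTO purchases VALUES (100, 1, 10), (101, 2, 11);
""")

# the filtering happens in the database: only matching rows come back
rows = con.execute("""
    SELECT customers.name, payments.payid
      FROM customers
      JOIN purchases ON purchases.cid = customers.cid
      JOIN payments ON payments.payid = purchases.payid
     WHERE payments.pay_type = 'cash'
""").fetchall()
print(rows)  # [('ann', 10)]
```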

ciao
svilen




[sqlalchemy] Re: Aggregate function of connected items as a property?

2007-09-10 Thread sdobrev

 My problem is: I want to be able to select from Thread, ordering it
 by descending order of the maximum tn_ctime for each thread, to
 find the most recently referenced threads. Which is to say, I want
 to do something like

 select
 t.*,
 coalesce(c.most_recent_child, t.tn_ctime) as last_upd
 from tnode t
 left join (select tn_parent as node_id, max(tn_ctime) as
 most_recent_child from tnode group by tn_parent) c on
 c.node_id==t.tn_id group by t.tn_id
   order by last_upd desc;

 Is it possible to add a property to Thread holding the maximum
 child node timestamp if any (or the thread node's timestamp, if
 none), so that I can do something like

 Thread.select(Thread.c.tn_parent==forum_id,
 order_by=desc(Thread.c.last_upd))

 ?


i don't think i got what you want from these examples, but adding a 
property that always fires a select is easy:

class Thread:
    ...
    @property
    def mymaxprop( self):
        return self.select(...)  # or whatever

eventually you can add some cache, but you're responsible for keeping 
it up to date

Another way would be to play with relation(); maybe it can do 
something like that too.

And another way is to look here:
http://www.mr-pc.kiev.ua/en/projects/SQLAlchemyAggregator/
or another ~copy (with v0.3 support) here
https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/misc/aggregator/

now, after rereading your stuff, i'm even less sure whether what i 
say is relevant.
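The SQL from the question can be tried directly against sqlite3 to see what such a property would have to compute; the data and numeric timestamps below are made up for the demonstration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tnode (tn_id INTEGER PRIMARY KEY,
                        tn_parent INTEGER, tn_ctime INTEGER);
    -- two threads (tn_parent IS NULL) and two children of thread 1
    INSERT INTO tnode VALUES (1, NULL, 100), (2, NULL, 200),
                             (3, 1, 300), (4, 1, 250);
""")
rows = con.execute("""
    SELECT t.tn_id, COALESCE(c.most_recent_child, t.tn_ctime) AS last_upd
      FROM tnode t
      LEFT JOIN (SELECT tn_parent, MAX(tn_ctime) AS most_recent_child
                   FROM tnode GROUP BY tn_parent) c
        ON c.tn_parent = t.tn_id
     WHERE t.tn_parent IS NULL
     ORDER BY last_upd DESC
""").fetchall()
print(rows)  # [(1, 300), (2, 200)]
```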

svilen




[sqlalchemy] avoid a subselect yielding null

2007-09-09 Thread sdobrev

g'day. 
i have a subselect that may yield null (nothing found), and i want to 
treat that as value of 0. i've read about coalesce() that would 
return first non-null of its args.

plain query looks like:

expr = and_( trans.c.account.startswith( balance.c.account),
  trans.c.date <= balance.c.finaldate,
  trans.c.date > select( [ func.max( b.c.finaldate)],
   b.c.finaldate < balance.c.finaldate
 ).correlate( balance))

this gives something like:
$print expr
 trans.account LIKE balance.account+'%' \
 AND trans.date <= balance.finaldate \
 AND trans.date > (SELECT max(b.finaldate)
 FROM balance AS b
 WHERE b.finaldate < balance.finaldate)


adding coalesce breaks the subselect:

expr = and_( trans.c.account.startswith( balance.c.account),
  trans.c.date <= balance.c.finaldate,
  trans.c.date > func.coalesce(
   select( [ func.max( b.c.finaldate)],
   b.c.finaldate < balance.c.finaldate
 ).correlate( balance),
   0 ))
$print expr
 trans.account LIKE balance.account + '%' \
 AND trans.date <= balance.finaldate \
 AND trans.date > coalesce(NULL,0)

removing the correlate() restores the proper subselect, but then it's 
not correlated, and has 'FROM balance AS b, balance' in it...

or should i use CASE instead?
i'm sure i've misunderstood all this sql thing...
(The subselect is just trying to find the date of the previous row in 
table balance, before the current balance.finaldate, OR 0.)
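What the coalesce-around-a-scalar-subselect should produce can be checked in plain SQL with sqlite3; the one-account data set below is made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE balance (account TEXT, finaldate INTEGER);
    INSERT INTO balance VALUES ('a', 10), ('a', 20), ('a', 30);
""")
# for each row: the previous row's finaldate, or 0 when there is none
rows = con.execute("""
    SELECT finaldate,
           COALESCE((SELECT MAX(b.finaldate)
                       FROM balance AS b
                      WHERE b.finaldate < balance.finaldate), 0) AS prev
      FROM balance
     ORDER BY finaldate
""").fetchall()
print(rows)  # [(10, 0), (20, 10), (30, 20)]
```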

svilen




[sqlalchemy] Re: avoid a subselect yielding null

2007-09-09 Thread sdobrev

On Sunday 09 September 2007 22:51:32 Michael Bayer wrote:
 try calling scalar() on that subquery, it needs to be treated as
 such.
oops, forgot to mention: this is 0.3.xx. 
in 0.4 all is okay without scalars.

so, 0.3.latest, adding .scalar() after .correlate() complains about 
None having no .scalar attribute; adding .scalar() before 
the .correlate() gives:
Traceback (most recent call last):
  File "tests/convertertest.py", line 144, in test4_balance_trans_via_prev_balance_date_subselect
    b.c.finaldate < balance.c.finaldate
  File "/home/az/src/dbcook/sqlalchemy/sql.py", line 1215, in scalar
    return self.execute(*multiparams, **params).scalar()
  File "/home/az/src/dbcook/sqlalchemy/sql.py", line 1208, in execute
    return self.compile(bind=self.bind, parameters=compile_params).execute(*multiparams, **params)
  File "/home/az/src/dbcook/sqlalchemy/sql.py", line 1097, in execute
    raise exceptions.InvalidRequestError("This Compiled object is not bound to any Engine or Connection.")
InvalidRequestError: This Compiled object is not bound to any Engine 
or Connection.

which is true, it's all unbound.

btw .scalar() behaves the same in 0.4... so i guess it's not that.

if it's too much of a hassle, forget it - one testcase less on 0.3 
(-;).
svilen

 On Sep 9, 2007, at 2:37 PM, [EMAIL PROTECTED] wrote:
  g'day.
  i have a subselect that may yield null (nothing found), and i
  want to treat that as value of 0. i've read about coalesce() that
  would return first non-null of its args.
 
  plain query looks like:
 
   expr = and_( trans.c.account.startswith( balance.c.account),
     trans.c.date <= balance.c.finaldate,
     trans.c.date > select( [ func.max( b.c.finaldate)],
      b.c.finaldate <
   balance.c.finaldate ).correlate( balance))
  
   this gives something like:
   $print expr
    trans.account LIKE balance.account+'%' \
    AND trans.date <= balance.finaldate \
    AND trans.date > (SELECT max(b.finaldate)
    FROM balance AS b
    WHERE b.finaldate < balance.finaldate)
  
  
   adding coalesce breaks the subselect:
  
   expr = and_( trans.c.account.startswith( balance.c.account),
     trans.c.date <= balance.c.finaldate,
     trans.c.date > func.coalesce(
      select( [ func.max( b.c.finaldate)],
      b.c.finaldate <
   balance.c.finaldate ).correlate( balance),
      0 ))
   $print expr
    trans.account LIKE balance.account + '%' \
    AND trans.date <= balance.finaldate \
    AND trans.date > coalesce(NULL,0)
 
  removing the correlate() restores proper subselect, but now its
  not correlated, and has 'FROM balance as b, balance' in it...
 
  or should i use CASE instead?
  i'm sure i've misunderstood all this sql thing...
  (The subselect is just trying to find the date of previous row in
  table balance before current balance.finaldate, OR 0.)
 
  svilen

 





[sqlalchemy] Re: avoid a subselect yielding null

2007-09-09 Thread sdobrev

On Sunday 09 September 2007 23:30:20 Michael Bayer wrote:
 sorry, as_scalar() in 0.4.  in 0.3, correlate() is not generative
 (i.e. modifies the parent select(), returns None), so thats your
 problem (call correlate() beforehand).
yeah that's it. thanks.

now back to that argument from months ago: 
 - how to make the same code work with a generative and a 
non-generative api?
now i need to do:

sel = select(...)
xx = sel.correlate(..)
if not _v03: sel = xx
...use-the-sel...

this now is just one place, no big deal. But in some wider usage...
Anyway, food for thought... and/or documentation.
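One way to paper over the difference is a small wrapper; `correlate_compat` and the stub classes below are assumptions for illustration, not SQLAlchemy API:

```python
def correlate_compat(sel, *tables):
    # works whether correlate() is generative (returns a new select)
    # or mutates in place and returns None (the 0.3 behaviour)
    result = sel.correlate(*tables)
    return sel if result is None else result

class Generative:
    def correlate(self, *t):
        return ("new select", t)

class InPlace:
    def correlate(self, *t):
        self.correlated = t  # mutates; implicitly returns None

g = Generative()
ip = InPlace()
print(correlate_compat(g, "balance"))         # ('new select', ('balance',))
print(correlate_compat(ip, "balance") is ip)  # True
```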

svilen




[sqlalchemy] Re: bisect.insort

2007-09-07 Thread sdobrev

On Friday 07 September 2007 13:54:03 Jean-Philippe Dutreve wrote:
 I was using SA 0.3.9 to insert an item in an ordered list with
 bisect method insort (py 2.5):

 mapper(Entry, table_entries)
 mapper(Account, table_accounts, properties = dict(
 entries = relation(Entry, lazy=True,
 backref=backref('account', lazy=False),
 collection_class=ordering_list('position'),
 order_by=[table_entries.c.position])
 ))
 bisect.insort(account.entries, an_entry)

 This is not working anymore with SA 0.4 beta5 : the list owns the
 item but not the other way.
  assert account.entries[0] is an_entry  # TRUE
  assert an_entry.account is account  # FALSE, currently is None

 Remark: it's working if I copy/paste the bisect method in my
 module.

collections were reworked in 0.4, so check which list methods 
insort() uses on your list, and whether they are simulated/wrapped in 
orm/collections.py. maybe there's an unhooked one.
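In CPython, bisect.insort ultimately calls the sequence's insert() method for anything that is not exactly a built-in list, which is why a collection wrapper has to hook insert; a quick check (the logging subclass is made up):

```python
import bisect

class LoggingList(list):
    """List subclass that records which mutator insort() goes through."""
    def __init__(self, *args):
        super().__init__(*args)
        self.calls = []

    def insert(self, index, value):
        self.calls.append(("insert", index, value))
        super().insert(index, value)

entries = LoggingList([10, 30])
bisect.insort(entries, 20)
print(list(entries))   # [10, 20, 30]
print(entries.calls)   # [('insert', 1, 20)]
```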




[sqlalchemy] Re: sqlalchemy.select() and tablename.select() why are they different?

2007-09-07 Thread sdobrev

 ok. So this time I am trying to get data from my widget from
 database that has two compound keys, using assign_mapper.
 #Initialize:
 User_table = sqlalchemy.Table('User', metadata, autoload=True)
 class User(object):
 pass
 usermapper=assign_mapper(session.context,User,user_table)

 #get my record:
  x=model.User.get(343,2)

 Why do I get an error? What am I doing wrong here?

 File "/home/lucas/web/tg/quote/quote/controllers.py", line 126, in upddriver
   x=model.Bdriver.get(343,2)
 File "/usr/lib/python2.4/site-packages/sqlalchemy/ext/assignmapper.py", line 7, in do
   return getattr(query, name)(*args, **kwargs)
 TypeError: get() takes exactly 2 arguments (3 given)
you are giving 2 args here; it wants one arg (apart from self). try 
giving it as a tuple, .get( (343,2) ), but i'm not sure that is the 
exact way for composite keys.




[sqlalchemy] rev 3449 breaks MapperExtension?

2007-09-07 Thread sdobrev

or has something in MapperExt protocol changed?

File "dbcook/usage/samanager.py", line 189, in query_BASE_instances
    return session.query( m.plain )
File "sqlalchemy/orm/session.py", line 638, in query
    q = self._query_cls(mapper_or_class, self, **kwargs)
File "sqlalchemy/orm/query.py", line 31, in __init__
    self._extension = self.mapper.extension.copy()
AttributeError: '_MapExt' object has no attribute 'copy'

i have very simple one, just for sanity checks:

class _MapExt( sqlalchemy.orm.MapperExtension):
    def before_insert( self, mapper, connection, instance):
        # note: the condition and message must not be wrapped in one
        # pair of parentheses - that would build an always-true tuple
        assert instance.__class__ is not mapper.class_, (
            'load_only_object - no save: '
            + str( instance.__class__) + ':' + str( mapper))
    before_update = before_delete = before_insert

_mapext = _MapExt()

and using that one on all mappers which should be readonly.
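A subtle Python pitfall to watch for in assertions like the one above: wrapping the condition and the message in a single pair of parentheses builds a 2-tuple, which is always truthy, so such an assert can never fire. A minimal illustration:

```python
def check_bad(cond):
    try:
        # the parentheses make (cond, "...") a tuple -> always truthy
        assert (cond, "never raised")
        return "passed"
    except AssertionError:
        return "failed"

def check_good(cond):
    try:
        assert cond, "raised when cond is false"
        return "passed"
    except AssertionError:
        return "failed"

print(check_bad(False))   # passed  (the bug: never fires)
print(check_good(False))  # failed
```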

svilen




[sqlalchemy] Re: rev 3449 breaks MapperExtension?

2007-09-07 Thread sdobrev

On Friday 07 September 2007 20:25:50 Michael Bayer wrote:
 we've got plenty of MapperExtensions running.   i dont see how you
 are getting mapper.extension to be your actual extension instance;
 it's supposed to point to a container called ExtensionCarrier
 (unless you are saying mapper.extension = _MapExt() - which is not
 supported, unless you want to provide a property-based patch for
 it).
yes, i am setting it this way... okay, if that's not the proper way 
anymore, i'll have a look.
Any other way to force a readonly mapper? 
it's for mapping of (virtual) base classes that must not have 
instances.

 On Sep 7, 2007, at 11:12 AM, [EMAIL PROTECTED] wrote:
  or has something in MapperExt protocol changed?
 
  File dbcook/usage/samanager.py, line 189, in
  query_BASE_instances return session.query( m.plain )
  File sqlalchemy/orm/session.py, line 638, in query
  q = self._query_cls(mapper_or_class, self, **kwargs)
  File sqlalchemy/orm/query.py, line 31, in __init__
  self._extension = self.mapper.extension.copy()
  AttributeError: '_MapExt' object has no attribute 'copy'
 
  i have very simple one, just for sanity checks:
 
  class _MapExt( sqlalchemy.orm.MapperExtension):
  def before_insert( self, mapper, connection, instance):
  assert (instance.__class__ is not mapper.class_,
  'load_only_object - no save: '
  + str( instance.__class__) + ':'+str(mapper)
)
  before_update = before_delete = before_insert
 
  _mapext = _MapExt()
 
  and using that one on all mappers which should be readonly.
 
  svilen




[sqlalchemy] Re: sqlalchemy.select() and tablename.select() why are they different?

2007-09-06 Thread sdobrev

On Thursday 06 September 2007 23:03:35 Lukasz Szybalski wrote:
 Hello,
 So it seems to me there are two select function that I can use but
 they are different
 First:
 s=Users.select(Users.c.LASTNAME=='Smith')
 but when you want to select only two columns via :
 s=Users.select([Users.c.LASTNAME, Users.c.FIRSTNAME],
 Users.c.LASTNAME =='Smith')

 you get an error :
  File
 "/usr/lib/python2.4/site-packages/sqlalchemy/ext/assignmapper.py",
 line 7, in do
 return getattr(query, name)(*args, **kwargs)
 TypeError: select() takes at most 2 arguments (3 given)




 Second:

 import sqlalchemy
 s2=sqlalchemy.select(Users.c.LASTNAME=='Smith')
 s3=s2.execute()
 This works just fine:
 s2=sqlalchemy.select([Users.c.LASTNAME, Users.c.FIRSTNAME],
 Users.c.LASTNAME =='Smith')
 s3=s2.execute()

 Is this difference suppose to be there? or is it a bug in
 assign_mapper?

 Lucas


the first is the ORM query().select(), which takes only a where 
clause; the second is the plain sql select(columns, where, from). To 
disambiguate, the first one is discontinued and replaced by filter().




[sqlalchemy] bitemporal mixin recipe in dbcook

2007-09-06 Thread sdobrev

hi.

For those interested, i've put a bitemporal mixin class under 
dbcook/misc/timed2/. It handles Objects with multiple versions 
(history) and a disabled/enabled state, and stays sane with 
same-timestamp versions.

The available queries are:
 - get_time_clause( times):
return the clause to get the object-version for those timestamps;
use it to (further) filter some SA query()
 - klas.allinstances( times ):
return the last valid versions of all objects for the 
(bitemporal) timestamp
 - klas.get_obj_history_in_range( objid, fromtimes, totimes):
return all versions of that Object within the given time range

It uses plain SA - as long as you maintain the required object 
attributes, it should work no matter how they appeared (via 
dbcook/elixir/whatever/manual).

Required attributes:
   objid  - keeps track of which versions belong to the same Object. 
It should be incremented by special means, only when a new Object is 
made (and not when a new version of an existing Object)
   time_trans, time_valid  - the two temporal dimensions; can be 
anything orderable - numbers, floats, strings, datetime, ...
   disabled - a boolean 
plus some class setup is required; see the beginning of class 
Timed2Mixin.

It is there together with a quite thorough test, and a complementary 
timed/ library containing things like timed1 and timed2 
implementations over (python) sequences, timed* versioning support 
for python modules, some timedcontext, etc.

todo: as ever, some documentation and simple examples
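A rough pure-Python sketch of the version-selection idea, using the attribute names described above; the record type, the tie-breaking rule, and the query function are assumptions, not the mixin's actual code:

```python
from collections import namedtuple

Version = namedtuple("Version", "objid time_trans time_valid disabled value")

def all_instances(versions, as_of_trans, as_of_valid):
    """Last enabled version of each object at a bitemporal timestamp."""
    latest = {}
    for v in versions:
        if v.time_trans > as_of_trans or v.time_valid > as_of_valid:
            continue  # not yet visible at this timestamp
        cur = latest.get(v.objid)
        if cur is None or (v.time_trans, v.time_valid) >= (cur.time_trans, cur.time_valid):
            latest[v.objid] = v
    return [v for v in latest.values() if not v.disabled]

history = [
    Version(1, 10, 10, False, "a1"),
    Version(1, 20, 20, False, "a2"),   # too new at (15, 15)
    Version(2, 10, 10, True,  "b1"),   # disabled
]
result = all_instances(history, 15, 15)
print([v.value for v in result])  # ['a1']
```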

svn co 
https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/misc/timed2/
https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk
http://dbcook.sourceforge.net/

ciao
svilen

p.s. something learned while making a minor fix in 
dbcook.usage.sa2static: delattr( Myclass, attrname) calls Myclass' 
metaclass.__delattr__(), and probably the same goes for setattr. Thus 
the setting and clearing of InstrumentedAttributes can be controlled, 
maintaining/restoring the overridden original descriptors, if any.




[sqlalchemy] Re: Self referencing keys

2007-09-03 Thread sdobrev

On Monday 03 September 2007 19:57:54 voltron wrote:
 would this work?

 users = Table("users", metadata,
 Column("id", Integer, primary_key=True),
 Column("username", String(50), unique=True,
 nullable=False),
 Column("password", String(255)),
 Column("email", String(255), unique=True,
 nullable=False),
 Column("firstname", String(255)),
 Column("lastname", String(255)),

 Column("modifiedby_id", Integer, ForeignKey("users.id")),
 Column("modifiedon", DateTime(timezone=True),
 default=func.now()),
 )

 On Sep 3, 6:50 pm, voltron [EMAIL PROTECTED] wrote:
  Excuse the newbie question, how does one declare a
  self-referencing key? I would like to have fields modified_by
  and created_by in my users table; the field should refer back
  to the user id or user name in the same table.
 
  Thanks

yes, a foreign key + eventually use_alter=True if you get a cycle




[sqlalchemy] Re: Self referencing keys

2007-09-03 Thread sdobrev

A pointing to A is a cyclic dependency.

It is the same as A -> B -> A,
but in the latter case you must choose one of the links to be added later, 
that is, use_alter=True on its ForeignKey.
In the former case the table declaration may or may not work without 
use_alter. 
In both cases you need post_update=True on the relation()/mapper of the 
loop-closing link of your choice.
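The A -> B -> A case can be sketched in Core like this, with use_alter on the loop-closing link so the CREATE statements can be ordered (the post_update=True part belongs on the ORM relation and is not shown; table and constraint names are illustrative):

```python
from sqlalchemy import MetaData, Table, Column, Integer, ForeignKey

metadata = MetaData()

# two tables, each referencing the other: a cyclic dependency
a = Table(
    "a", metadata,
    Column("id", Integer, primary_key=True),
    # this link closes the loop, so it is emitted later via ALTER TABLE
    Column("b_id", Integer,
           ForeignKey("b.id", use_alter=True, name="fk_a_b")),
)
b = Table(
    "b", metadata,
    Column("id", Integer, primary_key=True),
    Column("a_id", Integer, ForeignKey("a.id")),
)
```

With use_alter on one link, metadata.create_all() can create both tables first and add the deferred constraint afterwards.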


On Tuesday 04 September 2007 00:21:06 you wrote:
 Thanks for the reply, what do you mean by cyclic?

 On Sep 3, 9:00 pm, [EMAIL PROTECTED] wrote:
  On Monday 03 September 2007 19:57:54 voltron wrote:
   would this work?
  
   users = Table("users", metadata,
   Column("id",        Integer,     primary_key=True),
   Column("username",  String(50),  unique=True, nullable=False),
   Column("password",  String(255)),
   Column("email",     String(255), unique=True, nullable=False),
   Column("firstname", String(255)),
   Column("lastname",  String(255)),
  
   Column("modifiedby_id", Integer,  ForeignKey("users.id")),
   Column("modifiedon",    DateTime(timezone=True),
   default=func.now()),
   )
  
   On Sep 3, 6:50 pm, voltron [EMAIL PROTECTED] wrote:
    Excuse the newbie question: how does one declare a
    self-referencing key? I would like to have fields
    modified_by and created_by in my users table; the fields
    should refer back to the user id or user name in the same
    table.
   
Thanks
 
  yes, a foreign key, plus use_alter=True if you run into a cyclic dependency






[sqlalchemy] Re: Best way to handle in()

2007-08-28 Thread sdobrev

sorry for my bad sql, but where have you specified that link?
you should have something like

 foo.filter( (Main.childid==Child.childid) &
Child.othercolumn.in_('a', 'b', 'c') )
or 
 foo.join( 'child').filter( Child.othercolumn.in_('a', 'b', 'c') )

(warning: the exact syntax may or may not be this, do check)

On Tuesday 28 August 2007 22:58:11 Kirk Strauser wrote:
 I have mappers configured for main and child, and those tables
 are linked on main.childid=child.childid.  How can I use in() to
 get rows where child.othercolumn is in a list of values?  I'd like
 to do something like:

 foo = session.query(MainTable).filter_by(customer='CUSTNAME')
 foo = foo.filter(ChildTable.c.othercolumn.in_('bar', 'baz', 'qux'))

 Or, even better, some variant on:

 foo = session.query(MainTable).filter_by(customer='CUSTNAME')
 foo = foo.filter(othercolumn in ('bar', 'baz', 'qux'))

 When I try to do that, though, I get SQL like:

 SELECT main.value AS main_value, main.childid AS main_childid,
 child.childid AS child_childid, child.othercolumn AS
 child_othercolumn
 FROM testing.main, testing.child
 WHERE child.othercolumn IN (%(child_othercolumn)s,
 %(child_othercolumn_1)s, %(child_othercolumn_2)s) ORDER BY
 main.value

 which is really doing a cartesian join and never enforcing
 main.childid=child.childid.  Is there another way I should be
 approaching this?






[sqlalchemy] Re: echo-es

2007-08-22 Thread sdobrev

 will restore engine.echo today.
what about MetaData's? why not leave some engine-type-independent kwargs 
there (or at least echo, it's the most used in let's-try-this-now 
cases), which go together with the bind to create()? i know, i 
know, explicit is better than implicit... no one would be forced to use 
them ;-) 
something like 

def __init__( ..., echo =None):
    ...
    me.echo = echo

def create( ..., echo=None, **kargs4engine):
    if echo is None:
        echo = me.echo
    .. create_engine( bind, echo=echo, **kargs4engine)

?

 leaving it out on session and pool, however (bet you didn't know
 they had echo too...)
i know, but i always assumed all that is working (and it always has 
been so far) and it's me/my request going wrong, hence the sql echo...

 On Aug 22, 2007, at 5:11 AM, svilen wrote:
  in 0.3, one could do
   meta = MetaData( whatever, echo=True)
  later, early 0.4, the echo kwarg was gone, so it got less
  convenient, adding another line:
   meta.bind.echo = True
  As of latest trunk, neither works, one has to explicitly do
   meta = MetaData( create_engine(whatever, echo=True))
  which is probably fine for explicitness with lots of args, but is
  not very useful for a simple echo.
 
  IMO it is important to get the simplest use case (minimum
  explicit typing):
   meta = MetaData( dburl, echo=True)
  working.




[sqlalchemy] overriding collection methods

2007-08-20 Thread sdobrev
hi
i need to have a list collection with list.appender (in SA 0.4 terms) 
that accepts either one positional arg as the value, or keyword args 
which it uses to create the value. Each collection instance knows 
what type of values to create.

so i do:

class MyCollection( list):
    factory = None
    @collection.appender
    def append( me, obj =_NOTSET, **kargs):
        if obj is _NOTSET:   # marker for notset
            obj = me.factory( **kargs)
        list.append( me, obj)
        return obj

@classmethod
def myCollectionFactory( klas):
    m = Association.MyCollection()
    m.factory = klas
    return m

and in the mapper, 
  ... relation( ..., 
          uselist = True,
          collection_class = assoc_klas.myCollectionFactory
      )

well, it does not work. 
all is well until the ABC decoration in _instrument_class() kicks in 
and sets up a preset append-wrapping decorator that has another 
interface (as in _list_decorators(): 
    def append(self, item, _sa_initiator=None): ...
)

Any idea how to fix/enhance this, letting **kwargs through to my function?
The dynamic wrapper() can do this, while these preset ones cannot, 
though they should be equally powerful.

There are 2 (different) uses of an appender, one is the SA itself, but 
the other is the programmer. SA will always use single 
arg/positionals, while i could use this or that or combination.

===
couple of comments on orm/collections.py:
 - there are several lines like
     setattr(fn, '_sa_instrumented', True)
   why not just use fn._sa_instrumented = True ?

 - the repeated check/setup in _instrument_class() can be looped:

# ensure all roles are present, and apply implicit instrumentation 
# if needed
for rolename, eventname in dict(
        appender='fire_append_event',
        remover ='fire_remove_event',
        iterator=None,
        ).iteritems():
    roler = roles.get( rolename, None)
    if not roler or not hasattr(cls, roler):
        typename = cls.__name__
        raise exceptions.ArgumentError(
            "Type %(typename)s must elect an %(rolename)s method "
            "to be a collection class" % locals() )
    elif (eventname and
            roler not in methods and
            not hasattr(getattr(cls, roler), '_sa_instrumented')):
        methods[ roler] = ( eventname, 1, None)
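The consolidated loop above can be exercised standalone. Below is a Python 3 rendition with plain-dict stand-ins for SA's roles/methods bookkeeping and a local ArgumentError (all names illustrative, not SA's actual internals):

```python
# Python 3 rendition of the consolidated role check, with stand-ins
# for sqlalchemy.exceptions.ArgumentError and the roles/methods dicts.
class ArgumentError(Exception):
    pass

def check_roles(cls, roles, methods):
    for rolename, eventname in {
        'appender': 'fire_append_event',
        'remover': 'fire_remove_event',
        'iterator': None,
    }.items():
        roler = roles.get(rolename)
        if not roler or not hasattr(cls, roler):
            raise ArgumentError(
                "Type %s must elect an %s method to be "
                "a collection class" % (cls.__name__, rolename))
        if (eventname and roler not in methods
                and not hasattr(getattr(cls, roler), '_sa_instrumented')):
            # record: wrap this method to fire the event on its 1st argument
            methods[roler] = (eventname, 1, None)
    return methods

class DemoCollection(list):
    pass

roles = {'appender': 'append', 'remover': 'remove', 'iterator': '__iter__'}
methods = check_roles(DemoCollection, roles, {})
```

A missing role raises ArgumentError; an un-instrumented role method gets an event entry, and roles without an event name (iterator) are merely checked.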
 

patch attached.


Index: orm/collections.py
===
--- orm/collections.py	(revision )
+++ orm/collections.py	(working copy)
@@ -647,35 +655,32 @@
 
 # ensure all roles are present, and apply implicit instrumentation if
 # needed
-if 'appender' not in roles or not hasattr(cls, roles['appender']):
+for rolename,eventname in dict(
+appender='fire_append_event',
+remover ='fire_remove_event',
+iterator=None,
+).iteritems():
+roler = roles.get( rolename, None)
+if not roler or not hasattr(cls, roler):
+typename = cls.__name__
 raise exceptions.ArgumentError(
-Type %s must elect an appender method to be 
-a collection class % cls.__name__)
-elif (roles['appender'] not in methods and
-  not hasattr(getattr(cls, roles['appender']), '_sa_instrumented')):
-methods[roles['appender']] = ('fire_append_event', 1, None)
-
-if 'remover' not in roles or not hasattr(cls, roles['remover']):
-raise exceptions.ArgumentError(
-Type %s must elect a remover method to be 
-a collection class % cls.__name__)
-elif (roles['remover'] not in methods and
-  not hasattr(getattr(cls, roles['remover']), '_sa_instrumented')):
-methods[roles['remover']] = ('fire_remove_event', 1, None)
-
-if 'iterator' not in roles or not hasattr(cls, roles['iterator']):
-raise exceptions.ArgumentError(
-Type %s must elect an iterator method to be 
-a collection class % cls.__name__)
+"Type %(typename)s must elect an %(rolename)s method to be "
+"a collection class" % locals() )
+elif (eventname and
+roler not in methods and
+not hasattr(getattr(cls, roler), '_sa_instrumented')):
+methods[ roler] = ( eventname, 1, None)
 
 # apply ad-hoc instrumentation from decorators, class-level defaults
 # and implicit role declarations


[sqlalchemy] Re: overriding collection methods

2007-08-20 Thread sdobrev
sorry, fixed patch


Index: orm/collections.py
===
--- orm/collections.py	(revision )
+++ orm/collections.py	(working copy)
@@ -647,35 +648,32 @@
 # ensure all roles are present, and apply implicit instrumentation if
 # needed
-if 'appender' not in roles or not hasattr(cls, roles['appender']):
+for rolename,eventname in dict(
+appender='fire_append_event',
+remover ='fire_remove_event',
+iterator=None,
+).iteritems():
+roler = roles.get( rolename, None)
+if not roler or not hasattr(cls, roler):
+typename = cls.__name__
 raise exceptions.ArgumentError(
-Type %s must elect an appender method to be 
-a collection class % cls.__name__)
-elif (roles['appender'] not in methods and
-  not hasattr(getattr(cls, roles['appender']), '_sa_instrumented')):
-methods[roles['appender']] = ('fire_append_event', 1, None)
-
-if 'remover' not in roles or not hasattr(cls, roles['remover']):
-raise exceptions.ArgumentError(
-Type %s must elect a remover method to be 
-a collection class % cls.__name__)
-elif (roles['remover'] not in methods and
-  not hasattr(getattr(cls, roles['remover']), '_sa_instrumented')):
-methods[roles['remover']] = ('fire_remove_event', 1, None)
-
-if 'iterator' not in roles or not hasattr(cls, roles['iterator']):
-raise exceptions.ArgumentError(
-Type %s must elect an iterator method to be 
-a collection class % cls.__name__)
+"Type %(typename)s must elect an %(rolename)s method to be "
+"a collection class" % locals() )
+elif (eventname and
+roler not in methods and
+not hasattr(getattr(cls, roler), '_sa_instrumented')):
+methods[ roler] = ( eventname, 1, None)
 
 # apply ad-hoc instrumentation from decorators, class-level defaults
 # and implicit role declarations


[sqlalchemy] Re: overriding collection methods

2007-08-20 Thread sdobrev

On Monday 20 August 2007 18:09:41 jason kirtland wrote:
 svilen wrote:
  And anyway i need to first create the object and just then append
  it (the decorators will first fire event on the object and just
  then append(), that is call me), so may have to look
  further/deeper. Maybe i can make my append create objects first
  and then call the actual appender - so yes, this is the way.

 Either way.  The @internally_instrumented is there exactly for that
 flexibility on ORM interface methods like 'append' and as an
 override for ABC decoration on python interface methods.  (The
 regular recipes also override ABC decoration.)   Or you can do you
 work elsewhere and forward to an instrumented method for event
 service.
an example of this? i can't figure it out; whatever i do, that 
ABC-auto-decorators loop kicks in and bye-bye my nice append - even if 
the appender is not append() at all. Why should append() be 
instrumented regardless of whether or not it is the chosen appender?

@internally_instrumented - you mean my append() should do everything, even 
the firing of events? uhm, ugly for my simple case. 
and are there some fire-this-event() funcs? those __del()/__set() do 
not seem very exposed...


class _NotSet: pass 

class MyCollection( list):
    factory = None
    def append( me, obj =_NotSet, **kwargs):
        if obj is _NotSet: 
            print 'success!', kwargs
            obj = me.factory( **kwargs)
        me._append( obj)
        return obj
    @sqlalchemy.orm.collections.collection.appender
    def _append( me, *a, **k):
        return list.append( me, *a, **k)

m = mapper( A, .., rel = relation( collection_class = 
MyCollection)... )

a = A()
a.rel.append( key1=val1, key2=val2)
# this always fails with TypeError, as the append called is not mine but a 
# wrapped/instrumented one, and that one takes no kwargs.


btw, i noted the sequence fire-event() / original-func-call() is not 
same/consistent in those auto-decorators, e.g. for remove() and del() 
etc.





[sqlalchemy] Re: overriding collection methods

2007-08-20 Thread sdobrev

On Monday 20 August 2007 20:58:45 [EMAIL PROTECTED] wrote:
 On Monday 20 August 2007 18:09:41 jason kirtland wrote:
  svilen wrote:
   And anyway i need to first create the object and just then
   append it (the decorators will first fire event on the object
   and just then append(), that is call me), so may have to look
   further/deeper. Maybe i can make my append create objects first
   and then call the actual appender - so yes, this is the way.
 
  Either way.  The @internally_instrumented is there exactly for
  that flexibility on ORM interface methods like 'append' and as an
  override for ABC decoration on python interface methods.  (The
  regular recipes also override ABC decoration.)   Or you can do
  you work elsewhere and forward to an instrumented method for
  event service.

 an example on this? i can't figure it out, whatever i do, that
 ABC-auto-decorators loop kicks in and byebye my nice append - even
 if the appender is not append() at all. Why should append() be
 instrumented regardless of it being or not the used appender?

so @internally_instrumented prevents any automatic wrapping; ok, i got 
it. (The docstrings in collections.py should make their way into the 
normal docs...)
it is not at all obvious why, if there is a chosen appender, the 
default one (append() or whatever) has to be instrumented too...




[sqlalchemy] Re: overriding collection methods

2007-08-20 Thread sdobrev

 But tacking a factory method onto a regular Python list is much
 simpler with a separation of concerns:

 class FactoryCollection(list):
def create(self, options, **kw):
eh sorry, i want it the hard way... 
now as i think of it, it's just me being lazy and fancy - preferring 
implicitness and fewer method names to remember.
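Jason's "separation of concerns" shape can be sketched as a plain list subclass: create() builds the child object from kwargs via a per-collection factory, then routes it through the normal append(), which is the path SA would instrument. The signature and class names here are simplified/illustrative:

```python
# Minimal sketch of a factory-bearing collection; no SA involved, so
# append() stands in for the instrumented appender.
class FactoryCollection(list):
    factory = None  # set per collection instance (or per subclass)

    def create(self, **kwargs):
        obj = self.factory(**kwargs)
        self.append(obj)  # the instrumentable path; events would fire here
        return obj

class Child:
    def __init__(self, key=None):
        self.key = key

coll = FactoryCollection()
coll.factory = Child
child = coll.create(key='val')
```

Creation and appending stay separate methods, so instrumentation never has to pass **kwargs through.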

 For a while there was a no-op decorator that did the same job as
 @internally_instrumented in the first example, just a different
 name for clarity.  It could easily come back if this pattern
 becomes common- I yanked it after working with the collections for
 a while and finding the second form much more common in my work.
@do_not_instrument might be a better name -- but regardless of the name, 
if the opting-out concept is not advertised, one does not know 
what to look for..





[sqlalchemy] Re: large amount of insert

2007-08-17 Thread sdobrev

there was a recent thread on this 2-3 weeks ago; look it up..

On Friday 17 August 2007 11:28:34 Glauco wrote:
 What is the best solution for a web procedure, in TurboGears,
 that produces a large amount of inserts? (from 2000 to 2
 insertions on a submit)


 i've done some tries with something like 5000 insertions, and the
 transaction is huge


 do you think sqlalchemy can do this better with some trick, or is
 it better to transfer all the data in a COPY FROM file?




 Thank you
 Glauco






[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-15 Thread sdobrev

On Wednesday 15 August 2007 04:26:31 Michael Bayer wrote:
 On Aug 14, 2007, at 4:35 PM, Michael Bayer wrote:
  On Aug 14, 2007, at 12:38 PM, svilen wrote:
  ---
  orm.attribute
AttributeManager.init_attr():
  the saving this one eventualy does is too small, compared to
  a property call of ._state.
 
  i havent benched this in a while but my recollection is that the
  AttributeError raise is *much* slower than pre-calling this
  method. a single function call is always faster than an exception
  throw.
 
  however, i see that the exception throw is being suppressed also
  with a hasattr() being called every timeim not sure why thats
  that way now so i might change it back to throwing
  AttributeError.

 the results are in, running test/prof/masseagerload.py.  this test
 is very heavy on creating new instances from mapper rows, which is
 where the initialization of _state comes in.

 no init_attr(), detect missing with AttributeError
 Profiled target 'masseagerload', wall time: 0.59 seconds
 Profile report for target 'masseagerload' (masseagerload.prof)
 57039 function calls (55962 primitive calls) in 0.489 CPU seconds

 init_attr(), detect missing with AttributeError
 Profiled target 'masseagerload', wall time: 0.53 seconds
 57549 function calls (56472 primitive calls) in 0.426 CPU seconds

 init_attr(), detect missing with hasattr
 Profiled target 'masseagerload', wall time: 0.56 seconds
 57549 function calls (56472 primitive calls) in 0.431 CPU seconds

 no init_attr(), detect missing with hasattr
 Profiled target 'masseagerload', wall time: 0.49 seconds
 57039 function calls (55962 primitive calls) in 0.390 CPU seconds

 im not exactly sure why the hasattr() call, being present and
 then removed, doesnt change the number of function calls.  anyway, 
 the times vary a little bit but the hasattr call, even though its
 called many more times than the AttributeError gets raised, is
 slightly faster than raising AttributeError.  so no AttributeError,
 and I like getting rid of init_attr() very much so its out in
 r3313.

heh. 

First, the init_attr as it was has never worked - it sets up a 
private __sa_attr_state, while what is used is the plain non-private 
_sa_attr_state attribute (note the leading underscores).

Second, i went to r3312, let init_attr() set a _state as plain dict 
and removed _state as property. The difference plain-dict/property 
(in favor of plain dict) is like 2-3%.

property:
loaded  10  items each with  500 subitems
523034 function calls (512957 primitive calls) in 2.556 CPU s
loaded  30  items each with  500 subitems
1564374 function calls (1534297 primitive calls) in 7.796 CPU s
loaded  16  items each with  1500 subitems
2499436 function calls (2451359 primitive calls) in 12.518 CPU s

plain dict:
loaded  10  items each with  500 subitems
513014 function calls (502937 primitive calls) in 2.525 CPU s
loaded  30  items each with  500 subitems
1534314 function calls (1504237 primitive calls) in 7.623 CPU s
loaded  16  items each with  1500 subitems
2451404 function calls (2403327 primitive calls) in 12.196 CPU s

up to you
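As a side illustration (not SA's actual benchmark), the property-vs-plain-attribute cost being compared here can be seen directly with timeit; numbers are machine-dependent, but the plain attribute read skips a Python-level descriptor call per access:

```python
# Micro-illustration of the trade-off: a @property read goes through a
# Python-level function call on every access, a plain attribute does not.
import timeit

class WithProperty:
    def __init__(self):
        self._dict = {}
    @property
    def _state(self):
        return self._dict

class PlainDict:
    def __init__(self):
        self._state = {}  # plain instance attribute

wp, plain = WithProperty(), PlainDict()
t_prop = timeit.timeit(lambda: wp._state, number=200000)
t_plain = timeit.timeit(lambda: plain._state, number=200000)
```

On most runs t_plain comes out lower, mirroring the 2-3% whole-profile difference reported above.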




[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-15 Thread sdobrev

On Tuesday 14 August 2007 23:05:44 Michael Bayer wrote:
 On Aug 14, 2007, at 3:30 PM, [EMAIL PROTECTED] wrote:
  databases/sqlite: (reflecttable)
pragma_names is missing the BOOLEAN word/type - nulltype
 
  btw why isn't each dialect-typeclass adding it's own entry to
  that pragma_names, respectively to the colspecs ?
  Or, each class to have those pragmaword and basetype, and the
  dicts to be made by walking locals() if issubclass(..) ?
 
  Anyway, these dicts (the grammar) should be automatically built
  from the available type classes...

 patches welcome

 
here are 2 versions. 
One is simple: it walks the module namespace for 
issubclass(TypeEngine) classes, expecting to find .pragma and .colspec on 
each, and collects them. The .colspec can probably be figured 
out from __bases__ (as in the other version)

def _issubclass( obj, klas):
    'fail/fool-proof issubclass() - works with ANY argument'
    from types import ClassType
    return isinstance( obj, (type, ClassType)) and issubclass( obj, klas)

def collect_colspecs( namespace):   # this can be moved out of here
    colspecs = {}
    pragma_names = {}
    for kl in namespace.itervalues():
        if _issubclass( kl, sqlalchemy.TypeEngine):
            c = getattr( kl, 'colspec', None)   # or 'basetype'
            p = getattr( kl, 'pragma', None)    # or 'sqltype' or 'rawtype'
            if c and p:
                colspecs[c] = kl
                pragma_names[p] = kl
    return colspecs, pragma_names

class SLNumeric(sqltypes.Numeric):
    colspec, pragma = sqltypes.Numeric, 'NUMERIC'
    def get_col_spec(self):
        if self.precision is None:
            return "NUMERIC"
        else:
            return "NUMERIC(%(precision)s, %(length)s)" % self.__dict__

class SLInteger(sqltypes.Integer):
    colspec, pragma = sqltypes.Integer, 'INTEGER'
    def get_col_spec(self): return self.pragma
...
colspecs, pragma_names = collect_colspecs( locals() )



the other one uses a metaclass: .pragma is set explicitly, and the 
colspec's abstract_type is guessed from __bases__.

class MetaDialectType( type):   # this can be moved out of here
    def __new__( metacls, name, bases, dict_):
        # find top-most abstract_type base 
        abstract_type = None
        for b in bases:
            # XXX is a walk in depth needed? 
            # e.g. if allowed, class SLInt2( SLInteger): ...
            if issubclass( b, sqltypes.TypeEngine):
                abstract_type = b
                break
        assert abstract_type, 'class %s: cannot find any abstract ' \
            'base type; do inherit from some sqlalchemy type' % name

        try:
            pragma = dict_['pragma']
        except KeyError:
            assert 0, 'class %s: cannot find any pragma' % name

        klas = type.__new__( metacls, name, bases, dict_)
        metacls.colspecs[ abstract_type] = klas
        metacls.pragma_names[ pragma] = klas
        return klas

class SLMetaDialectType( MetaDialectType):
    colspecs = {}
    pragma_names = {}

class SLNumeric( sqltypes.Numeric):
    __metaclass__ = SLMetaDialectType
    pragma = 'NUMERIC'
    def get_col_spec(self):
        r = self.pragma
        if self.precision is not None:
            r += "(%(precision)s, %(length)s)" % self.__dict__
        return r

class SLInteger( sqltypes.Integer):
    __metaclass__ = SLMetaDialectType
    pragma = 'INTEGER'
    def get_col_spec(self): return self.pragma

...
colspecs = SLMetaDialectType.colspecs
pragma_names = SLMetaDialectType.pragma_names 
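The metaclass-registry idea runs standalone in Python 3 (where the metaclass is declared in the class header rather than via __metaclass__). Here is a sketch with dummy "abstract" types standing in for sqlalchemy.types; every name is illustrative:

```python
# Dummy abstract-type hierarchy standing in for sqlalchemy.types
class TypeEngine: pass
class Integer(TypeEngine): pass
class Numeric(TypeEngine): pass

class MetaDialectType(type):
    colspecs = {}       # abstract type -> dialect class
    pragma_names = {}   # raw db typename -> dialect class

    def __new__(metacls, name, bases, dict_):
        klass = super().__new__(metacls, name, bases, dict_)
        if 'pragma' in dict_:  # only register concrete dialect types
            # first TypeEngine subclass among the bases is the abstract type
            abstract = next(b for b in bases if issubclass(b, TypeEngine))
            metacls.colspecs[abstract] = klass
            metacls.pragma_names[dict_['pragma']] = klass
        return klass

class SLInteger(Integer, metaclass=MetaDialectType):
    pragma = 'INTEGER'

class SLNumeric(Numeric, metaclass=MetaDialectType):
    pragma = 'NUMERIC'
```

Each dialect class registers itself at definition time, so the two dicts are always consistent with the set of defined types — the point of the proposal.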

==
There are 2 choices to make: 
 - walk locals() vs using metaclass
 - whether to touch get_col_spec()s 

i wanted to have everything specified only once.
Therefore the get_col_spec() redefinition. 
It can be:
 1. left as is, just adding a separate .pragma (no gain, 
    consistency-wise, e.g. VARCHR in one place and VARCHAR in another)
 2. remade to use the self.pragma where equivalent (like 80% of 
    places) - a lot of the same code repeated
 3. put a default one in some base class for all dialect-types, e.g. 
    DialectBaseType, which can then be used for filtering locals() or to 
    bring in the metaclass
 4. created in the metaclass unless explicitly specified - this is the 
    most obscure.

btw i suggest some name changes: colspec -> abstract_type and 
pragma_name -> rawdb_type; or something alike.

ciao
svil




[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-15 Thread sdobrev

On Wednesday 15 August 2007 19:51:30 Michael Bayer wrote:
 On Aug 15, 2007, at 10:52 AM, [EMAIL PROTECTED] wrote:
  Second, i went to r3312, let init_attr() set a _state as plain
  dict and removed _state as property. The difference
  plain-dict/property (in favor of plain dict) is like 2-3%.
 
  property:
  loaded  10  items each with  500 subitems
  523034 function calls (512957 primitive calls) in 2.556 CPU s
  loaded  30  items each with  500 subitems
  1564374 function calls (1534297 primitive calls) in 7.796 CPU s
  loaded  16  items each with  1500 subitems
  2499436 function calls (2451359 primitive calls) in 12.518 CPU s
 
  plain dict:
  loaded  10  items each with  500 subitems
  513014 function calls (502937 primitive calls) in 2.525 CPU s
  loaded  30  items each with  500 subitems
  1534314 function calls (1504237 primitive calls) in 7.623 CPU s
  loaded  16  items each with  1500 subitems
  2451404 function calls (2403327 primitive calls) in 12.196 CPU s
 
  up to you

 great, post a patch for that on trac.
#741, using _sa_attr_state, set-up in mapper._create_instance()

btw (if i haven't asked yet) - do you have a place describing all the things 
that happen to a user's object/class once it is given into SA's arms? 
e.g. all the attributes that grow on the instances (and have to be 
avoided by the user), the replacement of __init__, all the descriptors for 
attributes, etc?
i can prepare something like half a page on this theme, but you'll have 
to fill it up, as i surely don't have much idea of which is what/why.
Title like "impact on user's object and class"




[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-15 Thread sdobrev

On Wednesday 15 August 2007 20:54:27 Michael Bayer wrote:
 I had in mind that the metaclass approach would be used, but not
 necesarily with the walking stuff going on.
the walking is a quick and dirty and very simple way to get away with 
it - for now.

 if you really want to think about this, the idea for a types
 overhaul is ticket #526.  that breaks up the DDL from the
 adaptation side of things.  a metaclass approach would be at the
 base of it controlling a registry of information about types.

types... my static_type lib is just about types, and all the metainfo 
you could hang on them...

let's see: you have multiple types (and conversions) here for the same item: 
 a) the SA-abstract type in the schema - e.g. types.String
 b) the specific dialect implementation of a), e.g. sqlite.SLText
 c) the python type that is expected to live between SA-dialect and 
the dbapi, both ways
 d) the actual SQL server name/representation for the type
 e) the python type that is expected to come out of SA when loading, 
e.g. unicode
 f) the python type that is expected to go into SA when assigning, e.g. 
str/utf8
 g) validation for f) (going into SA)
  - so x.myint = 'qq' fails (or x.mystr = 12 succeeds)
 h) validation for e) (coming out of SA)
  - so attempting to load x.myint from a column containing 'qq' fails

my experience says conversion and validation are the same ONE thing, 
implementation-wise: you make one such notion and use it to 
implement both. e.g. i have static_type.validators, which i use for 
conversion and/or validation.
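That "conversion and validation are one thing" idea can be sketched in a few lines: each validator is a callable that returns the converted value or raises, and chains compose them. All names here are illustrative, not from static_type or SA:

```python
# Compose validators left-to-right; each converts-or-raises.
def chain(*validators):
    def run(value):
        for v in validators:
            value = v(value)
        return value
    return run

def to_int(value):
    try:
        return int(value)
    except (TypeError, ValueError):
        raise ValueError('not an integer: %r' % (value,))

def non_negative(value):
    if value < 0:
        raise ValueError('negative: %r' % (value,))
    return value

# one chain does both conversion (str -> int) and validation (>= 0)
str2count = chain(to_int, non_negative)
```

The same chain serves both directions of the g)/h) checks above: run it on assignment or on load, and a failure surfaces as an exception instead of bad data.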

The g), h) conversions/validations are outside of the sql-related scope; they 
are only application-side. i guess they will be mostly 
user-specified, with very few automatic ones.

For example, in dbcook i set up another layer of type-translation on 
top of SA, exactly to address this issue. So application types stay 
applicational, and match to whatever (independently) at persistency  
level (SA). Thus i can use same model file, with same names, once 
having the (my) types as heavy-checking statictypes, and once as 
empty classes (plainwrap.py) used just to match the SA-type 
underneath.

Each dialect keeps its own registry of types, used for bidirectional 
abstract-dialect matching, plus the actual sql (pragma) stuff like 
typenames/args and retrieving them back for reflection. This 
corresponds to the current colspecs and pragma_names dicts. Reflection 
should be configurable whether to stop at dialect level (SLint) or go 
back to abstract types (types.Int) - see my autoload.py.

are there any cases of duality? e.g. str1 and str2 both being str?

current situation: IMO right now type-wise all is ok, but you have only one 
conversion, happening inside the dialect, implicitly; it cannot be 
extended/chained, and the registries are separate, inconsistent and 
hard to find.

you also wanted specific converters per dialect, e.g.
 python.str (-> SA.date -> SA.sqlite.SLdate) -> sqlite.datetime 
might be different from
 python.str (-> SA.date -> SA.postgres.PGdate) -> psycopg.datetime

so, do you want the convertors/validators network to follow the way of 
the types network? i.e. abstract convertors and (actual) 
dialect-implementations? in another registry?

Are there going to be priorities/ordering in the (auto)matching of 
convertors? e.g. you may have str2int and str2anynumber; which to 
choose...

btw, chaining converters changes/pushes the expected (i/o) python type. The 
question "what is the expected in/out python type?" should ask the 
first converter in the chain from the user side, and the result can be 
singular or multiple, e.g. anything to str, or (bool,int,long,float) 
to float.

am i missing something so far?

  btw why isn't each dialect-typeclass adding it's own entry to
  that pragma_names, respectively to the colspecs ?
  Anyway, these dicts (the grammar) should be automaticaly
  built from available typeclasses...
 
  patches welcome
 
  here are 2 versions.
  One is simple, walking the module.namespace for
  issubclass(TypeEngine), expecting to find .pragma and .colspec in
  that class, and collects them. The .colspec can probably be
  figured out from __bases__ (as in the other version)
 
 
  the other one uses a metaclass: .pragma is set up, and the
  colspec's abstract_type is guessed from __bases__.
 
  ==
  There are 2 choices to make:
   - walk locals() vs using metaclass
   - whether to touch get_col_spec()s
 
  i wanted to have everything specified only once.
  Therefore the get_col_spec() redefinition.
  It can be:
   1 left as is, just adding a separate .pragma (no gain,
  consistency-wise, e.g. VARCHR in one place and VARCHAR in
  another)
   2 remade to use the self.pragma where equivalent (like
  80% of places) - a lot of same code repeated
   3 put a default one in some base class for all dialect-types,
  e.g. DialectBaseType, which can be then used for filtering
  locals() or to bring in the metaclass
   4 created in the metaclass unless explicitly specified - this is
  the most obscure.
 
  btw i suggest some name changes: colspec -> abstract_type and
  pragma -> pragma_name 
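The "walk the namespace" version described above can be sketched in a few lines. This is an illustration, not the real SA code: the class names and the `.pragma` attribute are stand-ins, and the abstract type is guessed from `__bases__` exactly as proposed.

```python
# Sketch: every dialect type class declares its SQL type name once
# (.pragma); pragma_names and colspecs are then built automatically
# instead of being maintained by hand.

class TypeEngine(object): pass      # stand-in for sqlalchemy TypeEngine

class Integer(TypeEngine): pass     # "abstract" SA-level types
class Boolean(TypeEngine): pass

class SLInteger(Integer):           # dialect-level types, each
    pragma = 'INTEGER'              # declaring its SQL name once

class SLBoolean(Boolean):
    pragma = 'BOOLEAN'

def build_registries(namespace):
    """Collect dialect type classes (those defining .pragma) and
    derive colspecs (abstract -> dialect) from __bases__."""
    pragma_names, colspecs = {}, {}
    for obj in namespace.values():
        if (isinstance(obj, type) and issubclass(obj, TypeEngine)
                and 'pragma' in vars(obj)):
            pragma_names[obj.pragma] = obj
            colspecs[obj.__bases__[0]] = obj
    return pragma_names, colspecs

# stand-in for walking vars(dialect_module)
namespace = dict(TypeEngine=TypeEngine, Integer=Integer, Boolean=Boolean,
                 SLInteger=SLInteger, SLBoolean=SLBoolean)
pragma_names, colspecs = build_registries(namespace)
```

Reflection would then look up `pragma_names['BOOLEAN']` instead of missing the entry, and `colspecs` falls out of the same walk for free.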

[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-15 Thread sdobrev

 Reflection should be configurable whether to stop at dialect 
 level (SLint) or go back to abstract types (types.Int) - see my
 autoload.py.
why would one want to stop the reflection from going back to abstract 
types?
i.e. if the current reflection (dialect-level) is made to autoguess 
the abstract SA type, would this break something?


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-15 Thread sdobrev

On Thursday 16 August 2007 00:33:57 [EMAIL PROTECTED] wrote:
  Reflection should be configurable whether to stop at dialect
  level (SLint) or go back to abstract types (types.Int) - see my
  autoload.py.

 why would one want to stop the reflection from going back to
 abstract types?
 i.e. if the current reflection (dialect-level) is made to autoguess
 the abstract SA type, would this break something?

Answering myself: dialects may have richer types than the SA abstract 
ones, in which case the extra ones stay as is.

Also, it seems there are duplicates, because of which the 
abstract-dialect mapping is not 1:1, e.g. postgres has bigint and int 
that are both SA integer. how about these?




[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-14 Thread sdobrev

databases/sqlite: (reflecttable)
  pragma_names is missing the BOOLEAN word/type - nulltype

btw why isn't each dialect-typeclass adding its own entry to that 
pragma_names, respectively to the colspecs ? 
Or, each class could have those pragmaword and basetype, and the dicts 
could be made by walking locals() if issubclass(..) ?

Anyway, these dicts (the grammar) should be automatically built from 
available typeclasses...




[sqlalchemy] Re: SQLAlchemy 0.4 beta2 released

2007-08-14 Thread sdobrev

  orm.util.AliasedClauses._create_row_adapter()
  class AliasedRowAdapter( object):
   1. can't this be made as standalone class, returning an
  instance, initialized with the map, which is then __call__()ed ?

 is it faster to say self.map or to say map from locals() ?  it's
 probably not very crucial either way.
well.. u save on class creation :-), get a class that could be used 
elsewhere, and eventually more obvious code - runtime outer-namespace 
bindings in python are veeery strange beasts sometimes, with funny 
side effects.
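The shape of suggestion 1 above - build the adapter once with the map, then call it per row - can be sketched like this. The real `AliasedRowAdapter` interface in SA differs; `RowAdapter` and its keymap are made up for illustration:

```python
# Sketch: a standalone adapter class, initialized once with the
# alias map, then __call__()ed for each row - instead of creating
# a fresh class inside every _create_row_adapter() call.

class RowAdapter(object):
    def __init__(self, keymap):
        self.keymap = keymap            # aliased name -> plain name

    def __call__(self, row):
        # remap aliased keys, pass all others through unchanged
        return dict((self.keymap.get(k, k), v) for k, v in row.items())

adapt = RowAdapter({'person_id': 'id'})
```

So `adapt({'person_id': 7, 'name': 'x'})` yields `{'id': 7, 'name': 'x'}`, and the map lives on the instance rather than in a closed-over namespace.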

   2. this can be faster if:
  a) has_key = __contains__ #instead of yet another funccall
  b) __getitem__ uses try except instead of double lookup key
  in map

 i'm not sure try/except is faster here - i'm pretty sure a missing
 key is likely, and exception throws are very expensive.  would be
 worth a try to see if the missing-key exception actually occurs
 here.
i'll have to check; the last 3 years i've been doing everything 
assuming that (lookup:exception) is faster than 
(lookup-if-lookup:lookup-else) in both attribute and dict lookups. 
Plus that a function call is the most expensive python thing ever... 
apart from repeated a.x a.x a.x attribute access. hmm, time to measure 
the myths.
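Measuring the myth is a few lines of `timeit`. The usual outcome, which matches both positions above, is that try/except wins when the key is present and loses badly when it is missing - so the hit/miss ratio decides:

```python
# Compare try/except vs "if key in" dict lookup on both a hit and
# a miss; absolute numbers vary by machine and interpreter.
import timeit

d = {'a': 1}

def get_try(key):
    try:
        return d[key]
    except KeyError:
        return None

def get_if(key):
    return d[key] if key in d else None

hit_try  = timeit.timeit(lambda: get_try('a'), number=100000)
hit_if   = timeit.timeit(lambda: get_if('a'),  number=100000)
miss_try = timeit.timeit(lambda: get_try('x'), number=100000)
miss_if  = timeit.timeit(lambda: get_if('x'),  number=100000)
```

Print the four numbers and the answer for a given code path is whichever branch its real key distribution hits most.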

  ---
  orm.attribute
AttributeManager.init_attr():
  the saving this one eventually does is too small, compared to
  a property call of ._state.

 i haven't benched this in a while but my recollection is that the
 AttributeError raise is *much* slower than pre-calling this method.
 a single function call is always faster than an exception throw.

 however, i see that the exception throw is being suppressed also
 with a hasattr() being called every time... i'm not sure why that's
 that way now, so i might change it back to throwing AttributeError.

   btw: can't that ._state property be removed altogether (i.e.
  made a plain attribute)? then init_attr() MUST be there, setting it
  up as a plain dict.

 it should be named something non-collisionworthy such as the
 current _sa_attr_state.
the only gain from the property is that it is readonly, i.e. 
obj._state = None is not allowed (nor is del'ete). But i can do 
obj._sa_attr_state = None (and all goes to hell). If that property 
disappears altogether, we are left with a plain _sa_attr_state 
attribute, which can now be set to None and deleted... which is more 
or less the same as before, minus the complications with the property.
Fair trade, no?

  ---
  util:
ThreadLocal:
 - wouldn't it be faster if the key in the _tdict is a tuple
  (id, key) and not some formatted string off these? or is the key
  non-hashable?

 good catch, should be patched

 - the engine/threadlocal.TLEngine._session() issues a
  hasattr() on such an object. how does it actually work? IMO it
  always fails

 patch which includes a test case to add into test/engine/
 transaction.py would be welcome

i've never used these so not much idea what to do here - i was just 
passing code along my eyeline.

will prepare some patches for the other things.

ciao
svilen




[sqlalchemy] Re: Mapping and updating tables with no primary key

2007-08-12 Thread sdobrev

 Looking at account_stuff_table.foreign_keys I have:

 OrderedSet([ForeignKey(u'account_ids.account_id'),
 ForeignKey('account_ids.account_id')])
i see one is unicode'd (the autoloaded one), the other is not (yours). 
unicode != str so they probably appear differently named.
see if u can work around that.

autoloading does not convert unicoded names back into str. 
(Paul, u see?)

 I'm guessing the load duplicated the key. Although I only have two
 columns looking at list(account_stuff.c)

 [Column(u'credit',SLNumeric(precision=10,length=2)),

 Column('account_id',Integer(),ForeignKey('account_ids.account_id'),
primary_key=True,nullable=False)]





[sqlalchemy] Re: Using SA to move data between databases

2007-08-10 Thread sdobrev

On Thursday 09 August 2007 13:04:44 Paul Johnston wrote:
 Hi,

 A little update; 

Also, in the same direction, complete copy of some database seems to 
consist of (at least) 3 stages:
 1 recreate/remove the old one if it exists
 2 copy structure
 3 copy data


3 is your copy loop, which is independent of db type;
2 is the autoload, which does depend on db-dialect; 
i hope most of it can move into the SA-dialects themselves.

how about 1? it also does depend badly on db-dialect.
see 
http://dbcook.svn.sourceforge.net/viewvc/*checkout*/dbcook/trunk/dbcook/usage/sa_engine_defs.py

e.g. for sqlite, recreate means 'rm -f file'; 
for postgres it means 'dropdb url' + 'createdb url';
for mssql it is even more tricky...
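Dispatching stage 1 on the dialect could look roughly like this. It is a sketch in the spirit of the sa_engine_defs.py linked above, not its actual code: the commands per backend are illustrative, and mssql is left out as the "even more tricky" case.

```python
# Sketch: map a db URL to the shell steps that wipe and recreate it.

def recreate_commands(url):
    """Return the shell commands to drop+recreate the db at url."""
    scheme, rest = url.split('://', 1)
    if scheme == 'sqlite':
        # sqlite db is just a file: delete it
        return ['rm -f %s' % rest.lstrip('/')]
    if scheme == 'postgres':
        dbname = rest.rsplit('/', 1)[1]
        return ['dropdb %s' % dbname, 'createdb %s' % dbname]
    # mssql etc. need backend-specific handling
    raise NotImplementedError('recreate for %s' % scheme)
```

e.g. `recreate_commands('sqlite:///db2')` gives `['rm -f db2']`, while a postgres URL gives the dropdb/createdb pair.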



btw: why is the 'text_as_varchar=1' considered only if it is in url 
(see mssql.py create_connect_args()) and not if it is in the 
connect_args argument to create_engine()?


svil




[sqlalchemy] Re: Using SA to move data between databases

2007-08-10 Thread sdobrev

On Thursday 09 August 2007 13:04:44 Paul Johnston wrote:
 Hi,

 A little update; this code handles the case where columns have a
 key attribute:

 model = __import__(sys.argv[1])
 if sys.argv[2] == 'copy':
     seng = create_engine(sys.argv[3])
     deng = create_engine(sys.argv[4])
     for tbl in model.metadata.table_iterator():
         print tbl
         mismatch = {}
         for col in tbl.c:
             if col.key != col.name:
                 mismatch[col.name] = col.key
         def rewrite(x, mismatch):
             x = dict(x)
             for m in mismatch:
                 x[mismatch[m]] = x[m]
             return x
         deng.execute(tbl.insert(), [rewrite(x, mismatch) for x in
             seng.execute(tbl.select())])

are u sure about the rewrite() part? 
x will contain both .key and .name with the same values on them...

wouldn't this be a working equivalent? it also copes with empty 
tables..

---
def copy( metadata, src_engine, dst_engine, echo =False ):
    for tbl in metadata.table_iterator():
        if echo: print tbl
        data = [ dict( (col.key, x[ col.name]) for col in tbl.c)
                 for x in src_engine.execute( tbl.select()) ]
        if data:
            dst_engine.execute( tbl.insert(), data)


if __name__ == '__main__':
    arg_model = sys.argv[1]
    model = __import__( arg_model )
    copy( model.metadata, 
          src_engine= create_engine( sys.argv[2]), 
          dst_engine= create_engine( sys.argv[3]),
        )


http://dbcook.svn.sourceforge.net/viewvc/*checkout*/dbcook/trunk/dbcook/misc/copydata.py
there is also copyall.py (at same place) that does all in once 
(autoload + copydata):
$ python copyall.py postgres://[EMAIL PROTECTED]/db1 sqlite:///db2

===
i'm wondering if all the unicode strings (at least table/column names) 
should be converted back into plain strings, as they were before 
autoload reflected them from the database.

svil




[sqlalchemy] Re: Using SA to move data between databases

2007-08-10 Thread sdobrev


 btw: why is the 'text_as_varchar=1' considered only if it is in
  url (see mssql.py create_connect_args()) and not if it is in the
  connect_args argument to create_engine()?

 Fair question, and the short answer is because that's all I needed.
 We did have a discussion about unifying create_engine args and URL
 params, but it turns out there are a few gotchas. We could allow
 specification in both places - is this important to you?
not really important, we dug it out already, but it would be more sane 
if power(connect_args) >= power(url_args), that is, connect_args is 
the more general/powerful way, and url allows a subset (or all) of 
the connect_args items; and not vice versa -- connect_args is the 
programmatic way, so that should do _everything_..




[sqlalchemy] Re: Using SA to move data between databases

2007-08-10 Thread sdobrev

 i'm Wondering if all the unicode strings (at least table/column
  names) should be converted back into plain strings as they have
  been before autoload reflecting them from database.

 Well, some databases do support unicode identifier names, some
 don't. I'd say don't do any conversion for now; if someone is faced
 with migrating tables with unicode names to a database that doesn't
 support it, well, let them sweat that one :-)

hmmm. i'll probably put that in as some option, as my model's 
table/column names are never unicode, but once they go into the db, 
all gets unicoded there. so i'm not sure if after some migration the 
model will match the database... 
e.g. sqlite turns everything into unicode and hence does not care if 
it's unicode or not - so it's all ok there; but once the db-structure 
migrates into something that _does_ care about unicode or not, trouble 
trouble..
is this unicoding-everything an sqlite-specific behaviour? 
de-unicoding should then go into sqlite-dialect-specific reflection.





[sqlalchemy] Re: Aggregation

2007-08-08 Thread sdobrev

On Wednesday 08 August 2007 12:18:24 Paul Colomiets wrote:
 [EMAIL PROTECTED] wrote:
  hi, i have similar idea/need within dbcook, although on a
  somewhat higher level:
  cache_results/: (dbcook/SA) add-on for automatically-updated
  database denormalisation caches of intermediate results, each one
  depending on particular pattern of usage. Wishful syntax:
 
  class SomeCacheKlas( Base):
  fieldname = cache_aggregator( klas.field,
  AggrFilterCriteria) #e.g.
  #class Cache4averagePerson( Base):
  #age= cache_agregators.Average( Person.age, Filter1 )
  #salary = cache_agregators.Sum( Person.salary, Filter2 )
  i was thinking on using triggers and/or sql-functions but then
  this is just one way to do it - and the idea is to hide the
  implementation.

 I've seen that, but found no code.
 Is it something only planned for dbcook?
yes, to-be-done, as a way to avoid the DB becoming denormalized in 
some particular unplanned/uncontrolled way that fits some reports and 
screws all else. It takes me a great fight to convince users that 
denormalization is outside the app's bare model... it's something u 
lay over it.

  Can your implementation be extended to use:
   a) more complex aggregator expressions (e.g. average(), that is
  sum() / count(), and similar dependencies)

 Definetly can. Now you can just use property for that:
 avg = property(lambda self: self.sum / self.count)
naaah, sorry, that was too easy. i mean more complex aggregation 
functions... i guess it can, if u can do sum=sum+x, then u could do 
sq=sq+x*x/2
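Exactly in that spirit: average and variance both reduce to running sums that can be updated one row at a time, which is what makes them trigger-friendly. A plain-python sketch of the idea (not the aggregator API):

```python
# Sketch: maintain count, sum and sum-of-squares incrementally;
# average and variance then derive without rescanning the rows.

class RunningStats(object):
    def __init__(self):
        self.n = 0
        self.s = 0.0    # running sum
        self.sq = 0.0   # running sum of squares

    def add(self, x):
        # the per-row update - the only thing a trigger must do
        self.n += 1
        self.s += x
        self.sq += x * x

    def average(self):
        return self.s / self.n

    def variance(self):
        m = self.average()
        return self.sq / self.n - m * m

r = RunningStats()
for x in [2, 4, 4, 4, 5, 5, 7, 9]:
    r.add(x)
# average 5.0, variance 4.0 for this data
```

The same pattern extends to any aggregate expressible over a handful of incrementally-maintained sums.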

   b) more complex filters - e.g. not max() on all rows, but on
  some subset/select

 Yes. I'm looking for syntax for that. Functionality seems
 very similar for `relation()` so may be
 `a.Max(...,primaryjoin=...)` would do.

hmmm... relation... yess, it is like a plural relation but getting a 
singular result out of it, and never ever loading the items. 
Are u using something along the lines of PropertyLoader?

ciao
svil




[sqlalchemy] Re: Using SA to move data between databases

2007-08-08 Thread sdobrev

On Wednesday 08 August 2007 11:44:57 Paul Johnston wrote:
 Hi,

 heh, adding this raw-data-copy to the autoload.py

  makes quite a database-copier/migrator...

 Yes indeed, I used this yesterday to migrate a legacy database, it
 was impressively quick and easy.

 I can see we've got similar requirements in this area. Perhaps you
 and I could work together to package up some of these techniques in
 a more polished manner. Maybe dbcook is the place to do this; I've
 still not downloaded it.

yes, it can be done there. 
Although to me it's all plain sqlalchemy (not using anything out of 
the dbcook layers), just extensions of MetaData: _reflect(), 
_copy_data(), _diff(). Once these things go into some MetaData 
methods in one way or another, it can go back into UsageRecipes, as it 
would be all of 10-15 lines of code.




[sqlalchemy] Re: Aggregation

2007-08-07 Thread sdobrev


 I've finally done first POC implementation of this feature.

 Basic usage looks like:

 import aggregator as a
 mapper(Line, lines,
  extension=a.Quick(a.Count(blocks.c.lines),
   a.Max(blocks.c.lastline, lines.c.id)))

 (You also need foreign keys)


hi, i have similar idea/need within dbcook, although on a somewhat 
higher level:
cache_results/: (dbcook/SA) add-on for automatically-updated database 
denormalisation caches of intermediate results, each one depending on 
particular pattern of usage. Wishful syntax:

class SomeCacheKlas( Base):
    fieldname = cache_aggregator( klas.field, AggrFilterCriteria)
#e.g.
#class Cache4averagePerson( Base):
#age= cache_agregators.Average( Person.age, Filter1 )
#salary = cache_agregators.Sum( Person.salary, Filter2 )

i was thinking of using triggers and/or sql-functions but then this is 
just one way to do it - and the idea is to hide the implementation.

Can your implementation be extended to use: 
 a) more complex aggregator expressions (e.g. average(), that is 
sum() / count(), and similar dependencies)
 b) more complex filters - e.g. not max() on all rows, but on some 
subset/select

think of generic report calculations/aggregations, and putting those 
into some caching table. 
Eventually getting the report built on-the-run - distributing the big 
wait over the atomic updates.


 http://www.mr-pc.kiev.ua/projects/SQLAlchemyAggregation
this gives me 404





[sqlalchemy] Re: Using SA to move data between databases

2007-08-07 Thread sdobrev

On Monday 06 August 2007 02:09:45 Paul Johnston wrote:
 Hi,

 I'm in the same process, and very interested in the answer !

 I've found what I think is the best solution, and it sounds quite
 obvious thinking about it. Define the table, do a select on the old
 database and an insert on the new database. This leverages all
 SQLAlchemy's cleverness in converting types, etc. and keeps the ORM
 out of the picture. The code I'm using is:

 model = __import__(sys.argv[1])
 if sys.argv[2] == 'copy':
     seng = create_engine(sys.argv[3])
     deng = create_engine(sys.argv[4])
     for tbl in model.metadata.table_iterator():
         deng.execute(tbl.insert(), [dict(x) for x in
             seng.execute(tbl.select())])

 All it relies on is that you call your MetaData metadata. At the
 moment, it chokes on names that contain a dash (or maybe it's ones
 that use key=), but I'm sure I can workaround that. When I'm done,
 I'll put this up as a recipe.

 Paul

heh, adding this raw-data-copy to the autoload.py
$ python autoload.py postgres://[EMAIL PROTECTED]/db1 | python - sqlite:///db2
which copies the structure of input db1 database into the output db2.

makes quite a database-copier/migrator...

svil




[sqlalchemy] Re: PROPOSAL: Session

2007-07-30 Thread sdobrev

On Sunday 29 July 2007 23:36:32 Michael Bayer wrote:
 This would be a new name available in 0.4 which would produce the
 same Session that we are familiar with, except it would be by
 default transactional and autoflushing.  The create_session()
 function stays around and does what it always did, producing a
 regular session which you flush().

 while we have both Session() and create_session() in the trunk now,
 Session() would be what we document going forward.  flags to make
 it act other ways are still available, like
 autoflush/transactional.

 any thoughts ?
as long as u have both ways (autoflush/noauto, trans/notrans) on the 
same object - flags etc - and the difference is well documented, and 
the usage patterns of both (or are there 4 combinations?) are 
explained...

will the readonly= flag go in too?

btw does autoflushing mean that .save always calls .flush, 
instance-per-instance? 
Then how about making sess.save work over a list or *args and saving 
them all? (maybe another method savemany that just calls save in a 
loop)
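The savemany idea is literally a loop. Sketched here against a dummy session; `save_all` is a hypothetical name, and `Session.save` is the 0.4-era API being discussed:

```python
# Sketch of the proposed helper: save each object in turn.

def save_all(session, *objects):
    for obj in objects:
        session.save(obj)

class DummySession(object):
    """Stand-in for the real Session - just records the calls."""
    def __init__(self):
        self.saved = []
    def save(self, obj):
        self.saved.append(obj)

sess = DummySession()
save_all(sess, 'a', 'b', 'c')
```

With autoflush on, each `.save` in the loop would also trigger a flush, which is part of the cost question raised above.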

And, would this mean that in an autoflushing session calling .flush 
is useless (being called anyway after each .save automatically)?
hmm. then just make .flush issue .commit... (i know about the 
different semantics etc, but most people do not want the semantics, 
they just want the data going to the db). Maybe even add another 
method meaning either .flush for (trans=0, autoflush=whatever), 
or .commit for (trans=1, autoflush=1), or .flush then .commit for 
(trans=1, autoflush=0).

heh, this transactional Session() will make the antipattern of 'just 
one-long-session' appear somewhat different...




[sqlalchemy] Re: Default value for objects' attributes

2007-07-30 Thread sdobrev

your _own_ ctor, or something around MapperExtension?

On Monday 30 July 2007 21:26:44 Jonathan Ballet wrote:
 No, after a flush(), everything is fine.
 However, I would like to have the default value _before_
 flush()-ing.

 Hmm, after thinking about it a few more minutes, it would be a bit
 restrictive, since it will work only for 'scalar' values :/




[sqlalchemy] Re: Modifying metadata, autogenerating table definitions, etc.

2007-07-29 Thread sdobrev

i have moved the metadata autoloaddiff into own place under 
dbcook/misc/metadata/:

http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/misc/

svn co 
https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/misc/

IMO, wikis are ok for ready-made/works-for-me things and explanations, 
but any live/maintainable code should stay in a repository.

i'm planning to use that misc/ directory as a repository for all sorts 
of recipes about dbcook/SA usage for various application-field 
things. For now only metadata/ is there - and once embedded into 
SA, it may disappear; of the other things below, the bitemporal/ stuff 
is 100% ready but i have to separate it - Ants, we could exchange some 
ideas about it later; multilang/ and nested_user_trans/ are invented 
but not written/ready; cache_results/ is sort of invented but far 
from ready (and may need triggers / in-SQL-funcs which aren't 
supported by SA yet). The rest is just eventualities.

here the dbcook/misc/metadata/readme.txt:
---
metadata/: (SA-only)
  metadata autoload and diff

bitemporal/: (SA ~only)
  addon for objects that have 2-time-history + state(enabled/disabled)
  needs following properties per object: 
obj_id  = ObjId()  #incremented by special ObjectCounter
time_valid  = Date()
time_trans  = Date()
disabled= Bool( default_value= False)

multilang/: (dbcook/SA)
  addon for transparent multi-language textual properties, allowing 
for history

nested_user_transactions/: (dbcook/SA)
  addon to support nested user transactions aspect (NOT nested DB 
transactions).
  needs following properties per object: 
transact= TransactID( default_value= 0)

cache_results/: (dbcook/SA)
  addon for automatically-updated database-denormalisation caches of 
intermediate results, each one depending on particular pattern of 
usage. Wishful usage syntax (ALL else automatic):
  class Cache4averagePerson( Base):
fieldname = cache_aggregator( klas.field, AggrFilterCriteria)
  #e.g.
#age = cache_agregators.Average( Person.age, FilterCriteria1)
#salary = cache_agregators.Sum( Person.salary, FilterCriteria2)

may be invented or may not:

  Numerator - something that gives numbers/ids to objects in whatever 
special way. Think of invoices, incoming/outgoing/reference 
documents, inventories etc.

  multivalue - a value that is represented/representable in many 
currencies/ways/measurements. Think of money, items that can be 
measured either as volume or as weight (e.g. gas), etc.

=

ciao
svil



