On Wednesday 15 August 2007 20:54:27 Michael Bayer wrote:
> I had in mind that the metaclass approach would be used, but not
> necessarily with the "walking" stuff going on.
The walking is a quick-and-dirty, very simple way to get away with
it - for now.

> if you really want to think about this, the idea for a types
> overhaul is ticket #526.  that breaks up the "DDL" from the
> "adaptation" side of things.  a metaclass approach would be at the
> base of it controlling a registry of information about types.

Types... my static_type lib is just about types, and all the meta-info
you could hang on them...

Let's see. You have multiple types (and conversions) here for the same
item (a concrete instance is sketched after the list):
 a) the SA-abstract type in the schema, e.g. types.String
 b) the specific dialect implementation of a), e.g. sqlite.SLText
 c) the Python type that is expected to live between the SA dialect
and the DB-API, in both directions
 d) the actual SQL server name/representation for the type
 e) the Python type that is expected to come out of SA when loading,
e.g. unicode
 f) the Python type that is expected to go into SA when assigning,
e.g. str/utf8
 g) validation for f) (into SA)
  - so x.myint = 'qq' fails (or x.mystr = 12 succeeds)
 h) validation for e) (out of SA)
  - so attempting to load x.myint from a column containing 'qq' fails
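
To pin the list down for one concrete item, here is what the layers
might look like for a unicode string column on sqlite; the values are
purely illustrative, not taken from SA's actual code:

    # layers a)-f) instantiated for one column (illustrative only)
    layers = {
        'a_abstract': 'types.String(convert_unicode=True)',  # SA schema type
        'b_dialect':  'sqlite.SLText',   # dialect implementation of a)
        'c_dbapi':    str,        # travels between dialect and dbapi
        'd_sql_repr': 'TEXT',     # what CREATE TABLE / reflection sees
        'e_out_type': unicode,    # comes out of SA when loading
        'f_in_type':  str,        # utf8 str expected when assigning
    }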

My experience says conversion and validation are the same ONE thing,
implementation-wise. You make one such notion and use it to implement
both; e.g. I have static_type.validators, which I use for conversion
and/or validation.
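
A minimal sketch of that idea (the names here are mine, not SA's or
static_type's actual API): a converter that raises on bad input *is*
a validator.

    class ValidationError(TypeError):
        pass

    def to_int(value):
        """Convert-and-validate: return an int or raise ValidationError."""
        try:
            return int(value)
        except (TypeError, ValueError):
            raise ValidationError("expected something int-like, got %r"
                                  % (value,))

    assert to_int("12") == 12      # used as a converter
    try:
        to_int("qq")               # used as a validator: x.myint = 'qq'
    except ValidationError:
        pass                       # ...fails, as wanted in g)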

The g), h) conversions/validations are outside of the SQL-related
scope; they are application-side only. I guess they will be mostly
user-specified, with very few automatic ones.

For example, in dbcook I set up another layer of type translation on
top of SA, exactly to address this issue. Application types stay
application-level, and are matched (independently) to whatever is at
the persistence level (SA). Thus I can use the same model file, with
the same names, once with (my) heavy-checking static types, and once
with empty classes (plainwrap.py) used just to match the SA type
underneath.
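
A hypothetical sketch of that layer (not dbcook's actual code): the
model refers to MyString, a registry maps it to an SA type, and the
left-hand classes can be swapped without touching the model file.

    import sqlalchemy.types as sa_types

    class MyString(object):            # heavy-checking flavor
        def __init__(self, maxlen):
            self.maxlen = maxlen
        def validate(self, value):
            assert isinstance(value, basestring), value
            assert len(value) <= self.maxlen, value
            return value

    class PlainString(object):         # empty "plainwrap" flavor
        def __init__(self, maxlen):
            self.maxlen = maxlen

    # application type -> SA type factory, resolved at mapping time
    sa_translation = {
        MyString:    lambda t: sa_types.String(t.maxlen),
        PlainString: lambda t: sa_types.String(t.maxlen),
    }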

Each dialect keeps its own registry of types, used for the
bidirectional abstract <-> dialect match, plus the actual SQL (pragma)
stuff like typenames/args, and for retrieving them back on reflection.
This corresponds to the current colspecs and pragma_names dicts.
Reflection should be configurable as to whether it stops at the
dialect level (SLint) or goes back to abstract types (types.Int) - see
my autoload.py.
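
What I have in mind, roughly (colspecs/pragma_names as in the current
sqlite.py; the reflect_type() helper is my assumption):

    import sqlalchemy.types as sqltypes

    class SLInteger(sqltypes.Integer):
        def get_col_spec(self):
            return "INTEGER"

    class SLText(sqltypes.TEXT):
        def get_col_spec(self):
            return "TEXT"

    # abstract -> dialect (DDL side), rawdb name -> dialect (reflection)
    colspecs     = {sqltypes.Integer: SLInteger, sqltypes.TEXT: SLText}
    pragma_names = {"INTEGER": SLInteger, "TEXT": SLText}

    def reflect_type(rawdb_name, stop_at_dialect=True):
        """Resolve a reflected typename, optionally back to abstract."""
        dialect_type = pragma_names[rawdb_name]
        if stop_at_dialect:
            return dialect_type
        return dialect_type.__bases__[0]   # SLInteger -> sqltypes.Integer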

Are there any cases of duality, e.g. str1 and str2 both being str?

Current situation: IMO right now, type-wise, all is OK, but you have
only one conversion, happening inside the dialect, implicitly, and it
cannot be extended/chained; and the registries are separate,
inconsistent, and hard to find.

You also wanted specific converters per dialect, e.g.
 python.str (-> SA.Date -> SA.sqlite.SLDate) -> sqlite.datetime
might be different from
 python.str (-> SA.Date -> SA.postgres.PGDate) -> psycopg.datetime
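
A sketch of how that might be keyed (the registry and its names are
assumptions of mine, not existing SA code):

    import datetime

    def str_to_date(value):
        y, m, d = map(int, value.split("-"))
        return datetime.date(y, m, d)

    bind_converters = {
        # (abstract type, dialect) -> converter chain, applied in order
        ("Date", "sqlite"):   [str_to_date, str],  # sqlite wants text
        ("Date", "postgres"): [str_to_date],       # psycopg takes date
    }

    def convert_bind(value, abstract, dialect):
        for conv in bind_converters[(abstract, dialect)]:
            value = conv(value)
        return value

    assert convert_bind("2007-08-15", "Date", "sqlite") == "2007-08-15"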

So, do you want the converters/validators network to follow the way of
the types network, i.e. abstract converters and (actual) dialect
implementations, in another registry?

Are there going to be priorities/ordering in the (auto-)matching of
converters? E.g. you may have str2int and str2anynumber - which one to
choose...
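
One assumed way to answer that: register converters with an explicit
priority, so the specific str2int is preferred over the broader
str2anynumber. Nothing here is existing SA code:

    registry = []    # (priority, name, func); lower number = preferred

    def register(priority, name, func):
        registry.append((priority, name, func))
        registry.sort()

    def pick(value):
        for priority, name, func in registry:
            try:
                return func(value)
            except (TypeError, ValueError):
                continue
        raise TypeError("no converter matched %r" % (value,))

    register(0, 'str2int', int)          # specific, tried first
    register(1, 'str2anynumber', float)

    assert pick("3") == 3                # str2int wins
    assert pick("3.5") == 3.5            # falls through to str2anynumber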

BTW, chaining converters changes/pushes the expected (in/out) Python
type. The question "what is the expected in/out Python type?" should
be asked of the first converter in the chain on the user side, and the
answer can be singular or multiple, e.g. anything -> str, or
(bool, int, long, float) -> float.
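
Sketched (all names assumed): each converter declares what it accepts,
and a chain's expected input type is whatever its first, user-facing
converter accepts.

    class AnyToStr(object):
        accepts = (object,)                  # "anything to str"
        def __call__(self, value):
            return str(value)

    class NumberToFloat(object):
        accepts = (bool, int, long, float)   # multiple acceptable inputs
        def __call__(self, value):
            return float(value)

    def expected_input_types(chain):
        """Ask the first (user-side) converter in the chain."""
        return chain[0].accepts

    assert expected_input_types([NumberToFloat(), AnyToStr()]) == \
           (bool, int, long, float)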

Am I missing something so far?

> >>> btw why isn't each dialect-typeclass adding its own entry to
> >>> that pragma_names, respectively to the colspecs?
> >>> Anyway, these dicts (the "grammar") should be automatically
> >>> built from the available typeclasses...
> >>
> >> patches welcome....
> >
> > Here are 2 versions.
> > One is simple: walking the module namespace for
> > issubclass(TypeEngine), expecting to find .pragma and .colspec in
> > each such class, and collecting them. The .colspec can probably be
> > figured out from __bases__ (as in the other version).
> >
> > The other one uses a metaclass: .pragma is set up, and the
> > colspec's abstract_type is guessed from __bases__.
> >
> > ==========
> > There are 2 choices to make:
> >  - walk locals() vs. using a metaclass
> >  - whether to touch the get_col_spec()s
> >
> > I wanted to have everything specified only once, therefore the
> > get_col_spec() redefinition. It can be:
> >  1. left as is, just adding a separate .pragma (no gain,
> > consistency-wise, e.g. VARCHR in one place and VARCHAR in
> > another)
> >  2. remade to use self.pragma where equivalent (like 80% of
> > places) - a lot of the same code repeated
> >  3. put as a default in some base class for all dialect types,
> > e.g. DialectBaseType, which can then be used for filtering
> > locals() or for bringing in the metaclass
> >  4. created in the metaclass unless explicitly specified - this
> > is the most obscure.
> >
> > btw I suggest some name changes: colspec -> abstract_type and
> > pragma_name -> rawdb_type, or something alike.
> >
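
To make the metaclass variant from the quoted patch concrete, here is
a rough sketch with assumed names (DialectTypeMeta, DialectBaseType,
rawdb_type); the actual patch may differ:

    import sqlalchemy.types as sqltypes

    colspecs = {}       # abstract SA type -> dialect type
    pragma_names = {}   # raw DB typename -> dialect type

    class DialectTypeMeta(type):
        def __init__(cls, name, bases, dct):
            type.__init__(cls, name, bases, dct)
            rawdb = dct.get('rawdb_type')
            if rawdb is None:
                return                      # the base class itself, skip
            pragma_names[rawdb] = cls
            for base in cls.__mro__[1:]:    # guess abstract type from bases
                if getattr(base, '__module__', '') == sqltypes.__name__:
                    colspecs[base] = cls
                    break

    class DialectBaseType(object):
        __metaclass__ = DialectTypeMeta
        rawdb_type = None
        def get_col_spec(self):             # default: specified only once
            return self.rawdb_type

    class SLText(DialectBaseType, sqltypes.TEXT):
        rawdb_type = "TEXT"

    # SLText registered itself: pragma_names["TEXT"] is SLText, and
    # colspecs[sqltypes.TEXT] is SLText, guessed from the __mro__.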
