I want to direct your attention to some new features in trunk, which
I'll also be demonstrating at this year's Advanced SQLAlchemy
tutorial.
These features apply primarily to joined-table inheritance
scenarios, and are in response to the need to specify criteria
against subclasses as well.
Check MapperExtension; you have pre/post insert/update/delete hooks there.
You may or may not have a mapper for the log table.
On Friday 22 February 2008 21:21:52 Marco De Felice wrote:
Hi
I'm thinking about a simple client side table audit with SA. Given
the audit log pattern:
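The audit-log pattern above can be sketched without SA at all, using the stdlib sqlite3 module; the schema and trigger names here are invented for the demo, and a MapperExtension hook would play the role the trigger plays here.

```python
import sqlite3

# Hypothetical schema: each UPDATE on "accounts" mirrors the old row
# into "accounts_log" together with an operation tag and a timestamp.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT, balance INTEGER);
CREATE TABLE accounts_log (
    id INTEGER, name TEXT, balance INTEGER,
    op TEXT, logged_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TRIGGER accounts_au AFTER UPDATE ON accounts BEGIN
    INSERT INTO accounts_log (id, name, balance, op)
    VALUES (old.id, old.name, old.balance, 'U');
END;
""")
conn.execute("INSERT INTO accounts (name, balance) VALUES ('cash', 100)")
conn.execute("UPDATE accounts SET balance = 150 WHERE name = 'cash'")
# the log holds the pre-update row, tagged 'U'
rows = conn.execute("SELECT id, balance, op FROM accounts_log").fetchall()
print(rows)
```

A client-side variant does the same INSERT from a before_update hook instead of a database trigger.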
And I do have a bitemporal pattern implemented at
http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/misc/timed2/
It is not at all optimized, but it is correct.
On Wednesday 13 February 2008 22:06:54 Don Dwiggins wrote:
[EMAIL PROTECTED] wrote:
We've put such a notion in our db, so the db knows which
model version it matches. Then, at startup, depending on the
versions, one can decide which migration script to execute (if the
db should be made to
According to sqlalchemy/types.py, Decimal() is used straight away,
without any precision handling; the Numeric precision/length arguments are
only for the db. I assume you have to use some precision context around your
db-related stuff.
Werner F. Bruhin wrote:
I am converting an existing Firebird
Theoretically, looking at sql/expression.py's Select, try
for a in yourselect.inner_columns: print a
It's a yielding property.
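"Yielding property" just means accessing the attribute returns a generator you iterate over rather than a list you index. A toy stand-in (not SA's actual Select) to show the shape:

```python
# Minimal illustration of a "yielding property": the property body is a
# generator function, so each access produces a fresh iterator.
class Select:
    def __init__(self, columns):
        self._columns = columns

    @property
    def inner_columns(self):
        for col in self._columns:
            yield col

s = Select(["id", "name"])
cols = list(s.inner_columns)   # must materialize it to index or print
print(cols)
```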
alex bodnaru wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
hi friends,
Could I know the columns a select would retrieve, without examining the
I just wonder whether the * (all columns) is being expanded there.
try?
Your relation should have an argument like
primaryjoin= engineers.c.hired_by_id==managers.c.employee_id
or similar. I don't know for sure, as I've done a layer on top of SA that
stores most of this knowledge, so I don't bother with it. Have a look at
dbcook.sf.net; you may use it as an ORM to build and
Rick Morrison wrote:
Such operations will likely trigger a full table scan
SQLite dates are stored as strings anyway; AFAIK there is little one can do
to avoid table scans in SQLite based solely on date criteria. I use julian
dates stored as integers when working with large datasets in
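The integer-julian-date idea can be sketched with the stdlib sqlite3 module; the table and column names here are invented, and SQLite's own julianday() function does the conversion:

```python
import sqlite3

# Sketch: store dates as integer julian day numbers, so range filters
# compare integers instead of ISO date strings.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (name TEXT, jd INTEGER)")

def to_jd(iso_date):
    # julianday() returns a float; midnight falls on .5, so truncate
    return int(conn.execute("SELECT julianday(?)", (iso_date,)).fetchone()[0])

conn.execute("INSERT INTO events VALUES ('a', ?)", (to_jd('2008-01-15'),))
conn.execute("INSERT INTO events VALUES ('b', ?)", (to_jd('2008-03-15'),))
lo, hi = to_jd('2008-01-01'), to_jd('2008-02-01')
hits = conn.execute(
    "SELECT name FROM events WHERE jd >= ? AND jd < ?", (lo, hi)
).fetchall()
print(hits)
```

Whether this beats a plain string comparison depends on the workload; the win is mainly in index size and arithmetic on the values.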
Allen Bierbaum wrote:
Thanks, that worked great.
Have their been any new capabilities added to this code?
no idea, never used it
-Allen
On Jan 17, 2008 12:21 PM, [EMAIL PROTECTED] wrote:
use sqlalchemy.orm.class_mapper(cls) instead of cls.mapper, and it should
work?
Allen Bierbaum wrote:
I was just taking a look at the recipes on the SA wiki and stumbled
across this one:
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/SchemaDisplay
It is a pretty nice little piece of code
jason kirtland wrote:
Christophe Alexandre wrote:
Dear All,
Send me some study material on DBMS + $100 !
Or if it fits you better, can you please help on the issue described
below?
The closest situation to what I am facing is described here:
Hmmm, specify explicitly?
e.g. query(A).eagerload( B.address)
Joined inheritance via left outer join is enough, no need for polymorphic_union.
I don't know how the current machinery for eagerload works, but IMO,
knowing your level of lookahead design, it should not be hard to
apply that machinery over a
How many levels can I inherit classes/tables without getting something
wrong?
My tests go to 4; all works. And as all the corner cases are already there,
I guess any level above will work too.
Mixed inheritance (joined+concrete) can also be made to work, as long as
polymorphic_union() is fixed.
And what is this expected to do?
x = task and (max(task.sequence)+100) or 100 ?
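That line is the pre-Python-2.5 "and/or" conditional idiom; a small sketch of how it evaluates (the names here mirror the post, the data is invented):

```python
# "task and X or Y": if task is truthy, the expression is X, otherwise Y.
# "and" short-circuits, so max() is never evaluated when task is falsy.
# Caveat of the idiom: it misfires if X itself can be falsy (e.g. 0).
def next_sequence(task, sequences):
    return task and (max(sequences) + 100) or 100

print(next_sequence(None, []))              # -> 100 (no task yet)
print(next_sequence(object(), [100, 250]))  # -> 350
```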
Jonathan LaCour wrote:
I have been banging my head against the wall for a little bit
attempting to translate this SQL:
SELECT max(value) FROM (
SELECT max(sequence)+100 as value FROM task
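The truncated SQL above can be exercised end-to-end with the stdlib sqlite3 module; this sketch adds a coalesce() so an empty task table yields the starting value instead of NULL (the fallback of 100 is an assumption, following the idiom quoted earlier in the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE task (sequence INTEGER)")

def next_value(conn):
    # the inner select is an aggregate, so it yields NULL on an empty
    # table; coalesce turns that into the starting value 100
    return conn.execute(
        "SELECT coalesce((SELECT max(sequence) + 100 FROM task), 100)"
    ).fetchone()[0]

v0 = next_value(conn)              # empty table
conn.execute("INSERT INTO task (sequence) VALUES (7)")
v1 = next_value(conn)
print(v0, v1)
```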
Matt Haggard wrote:
I'm using SQLAlchemy with Pylons and am having trouble validating
data. I have an App object mapped to a table with a unique constraint
on App.number.
Here's some code:
q = Session.query(App)
if app_id:
    q = q.filter_by(id=app_id).first()
I'm not sure how much this would help you, but 0.4 has better support for
your own collection containers; see
http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_relation_collections
E.g. subclass list and you can do the callback at append() or whatever.
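The subclass-list-with-a-callback idea, in plain Python (the class and callback are illustrative, not an SA API):

```python
# Sketch: a list subclass that fires a callback on every append, the
# general shape of a custom relation collection container.
class NotifyingList(list):
    def __init__(self, callback):
        super(NotifyingList, self).__init__()
        self.callback = callback

    def append(self, item):
        self.callback(item)          # validation / bookkeeping hook
        list.append(self, item)

seen = []
nl = NotifyingList(seen.append)
nl.append('x')
nl.append('y')
print(nl, seen)
```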
Dave Harrison wrote:
Hi
Alexandre da Silva wrote:
I am already trying to get the list of mapped tables. I currently got a
list from sqlalchemy.orm.mapper, from the weakref mapper_registry, but
I don't know if those values are useful in my context.
What do you need?
All tables? See MetaData.
All mappers? See the
I have such a thing implemented externally, but it is definitely not nice
(read: tricky and underground): replacing the __dict__ with something handmade
that does what I say, as I say, if I say. That's dbcook's reflector for my
static_type structures; look in dbcook/usage/static_type if interested.
Michael Bayer wrote:
On Dec 16, 2007, at 3:26 PM, [EMAIL PROTECTED] wrote:
and another issue around attribute.get_history...
I have a descriptor that is autosetting some default value at first
get.
A descriptor on top of the InstrumentedAttribute itself? I'd wonder
how you are
Yes and no; as I said, I'm replacing the __dict__ with something
special, so
it's the InstrumentedAttribute riding on top of me (;-), but otherwise
it's that. No renaming; I don't
want someone (that can be me, later) to be able to work around
either me or SA.
then have your magic __dict__ implement the same
The expire() is requesting a reload.
Try moving that to after the sending-back-stuff-to-the-user part.
Utku Altinkaya wrote:
Hi,
I am using SQLAlchemy on a web application. I have used a base class
for ORM classes which provides some web-related things like validation
and loading data from forms, etc.
OK;
see mapper.py line 1134: it's calling after_update with state instead of state.obj().
Michael Bayer wrote:
On Dec 16, 2007, at 2:40 AM, [EMAIL PROTECTED] wrote:
from sqlalchemy import *
m = MetaData()
trans = Table('trans', m, Column('date', Date))
balance = Table('balance', m,
I used to get the original (before-change) value of some attribute via
state.committed_state[key]... but it seems now that dict is empty at the time
when ext.after_* are called.
Any way to get that? Storing copies at ext.before_* is not a good alternative...
found some
Expiring the obj has the effect that any further access to the object will
auto-refresh it. So if you expire(x) and then say x.a, x will be reloaded
first, then you get x.a.
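The expire-then-touch semantics can be illustrated with a toy class (this is only a sketch of the behavior; SA implements it via attribute instrumentation, not __getattr__):

```python
# Toy illustration: after expire(x), the next attribute access triggers
# a reload before the value is returned.
class Record:
    def __init__(self, loader):
        self._loader = loader
        self.__dict__.update(loader())

    def expire(self):
        loader = self._loader
        self.__dict__.clear()            # forget all loaded state...
        self.__dict__['_loader'] = loader  # ...but keep the way back

    def __getattr__(self, name):         # only called for missing attrs
        self.__dict__.update(self._loader())  # reload everything
        return self.__dict__[name]

db = {'a': 1}
x = Record(lambda: dict(db))
db['a'] = 2          # someone else updates the "database"
print(x.a)           # still the loaded value: 1
x.expire()
print(x.a)           # reloaded on access: 2
```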
Utku Altinkaya wrote:
On 16 December, 17:46, [EMAIL PROTECTED] wrote:
the expire() is requesting a reload.
try moving
and another issue around attribute.get_history...
I have a descriptor that is autosetting some default value at first get.
Before r3935 it was OK; now the attribute is not updated anymore (in the
exact case, another object has to be inserted but it is not), as it seems
that ScalarObjectAttributeImpl
from sqlalchemy import *
m = MetaData()
trans = Table('trans', m, Column('date', Date))
balance = Table('balance', m, Column('finaldate', Date))
b = balance.alias('b')
sprev = select([func.max(b.c.finaldate)],
    b.c.finaldate < balance.c.finaldate
)
#correlate
Your 'iva' table column AND your 'iva' attribute/relation/property have the
same name; that's what the error says. Either rename one of them (e.g. the
column to become iva_id), or pass that allow_column_override=True flag to
the producto mapper.
Marcos wrote:
Hello, first of all, sorry about my
Paul Johnston wrote:
Hi,
A Sample may be created by the web application or fetched from the
database. Later on, it may be disposed of, edited or checked back into
the db.
On the other hand, the requirements and coding of both classes are
kinda different, and I find myself changing
Is that something that looks like real concrete polymorphism?
As far as I remember, there was something composite there in the pattern...
the id is actually (id, type).
Michael Bayer wrote:
you can't do it right now. but it's something we could support. it's
unclear to me if we should just go for composite
If it's about concrete inheritance, then employee contains ALL the info it
needs, that is, a full copy of person plus whatever else is there,
and is completely independent from the person table.
So for that case:
a) a foreign key is not needed
b) inserting into employee_tbl will never insert stuff into
hi
1st one: I am saving some object; the MapperExtension of the object
fires additional atomic updates of other things elsewhere (aggregator).
These things have to be expired/refreshed... if only I knew them.
For certain cases, the object knows exactly which these target
things are. How (when)
Yeah, this is the same thing. If you get A's ID column in there
instead of C's, the problem would not occur. I think this is why our
own test suite doesn't have these issues. I've made the A-B FK
match from the previous checkin recursive, so it also matches A-C, D, E,
in r3759.
One more error in ACP; it took me a day to find and separate.
It's very simple and very basic... ClauseAdapter does not work.
--
from sqlalchemy import *
from sqlalchemy.sql.util import ClauseAdapter
m = MetaData()
a = Table('a', m,
    Column('id', Integer, primary_key=True),
    Column('xxx_id', Integer, ForeignKey('a.id', name='adf',
        use_alter=True))
)
e = (a.c.id == a.c.xxx_id)
print e
b = a.alias()
Michael Bayer wrote:
On Nov 8, 2007, at 11:32 AM, svilen wrote:
Mmmh. You can think of splitting the Visitor into 3: a Guide (who
traverses _everything_ given), a Visitor (who does things), and an
intermediate Decisor, who decides where to go / what to do. But this
can get complicated (slow).
Here's the structure of: select(from_obj=[t1, t2, t1.join(t2)])
select +--- t1 -+
       |--- t2  |
       +--- join of t1/t2 ---+
t2 and t1 both have two parents, and there are two paths to each of t1
and t2 from the head select. So it's not a tree in the
I have an A-B-C test case where B inherits A via joined, and C inherits B
via concrete; and there are links to each other, e.g. A points to B. It
used to work before r3735.
Now query(A) gives:
NoSuchColumnError: Could not locate column in row for column
'A_tbl.db_id'
If the A-B link is not
Ahha. So I am replacing one whole subexpression with something, and the
original subexpression is not traversed inside.
If I comment out the stop_on.add(), it attempts to traverse the resulting
subexpression, not the original one.
I want the original to be traversed; something like doing onExit
instead of the current
hi.
I have a somewhat messy setup (~test case), about an association with an
intermediate table/class, double-pointing to one side and single-pointing
to the other. I set up both A-links in one item, and set up
only the first in another item; the other link (a2_link) is pre-set to None.
And, I have the
[EMAIL PROTECTED] wrote:
Sorry, here are the files.
And at line 83 (marked XXX) there must be =None to get the error.
Michael Bayer wrote:
Nevermind, this one was pretty straightforward; r3695 didn't
actually break things, it just revealed the lack of checking for
things elsewhere, so it works in r3747.
Yes, that works.
But now multiple other things broke. pf
- the mapper.properties in its new
I don't need history tracking, just to revert documents to older ones.
That is history, just not timed history.
You'll have documents, and then for each document a bunch of versions.
Once you get it working in simple form, then perhaps try optimizing
and feeding only the field that has changed. Version
The next design problem for me is the version table. I have a Document
model with a DocumentVersion model, but I don't know how to:
- get the latest version of document
- set creator and updator, automatic behavior for this
- update version number
- fetch thru Document(s) and DocumentVersion(s)
just
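The latest-version part of that list can be sketched in plain SQL with the stdlib sqlite3 module; the two-table layout and the correlated max(version) subquery are assumptions, not the poster's actual schema:

```python
import sqlite3

# Hypothetical layout: "document" plus "document_version", where the
# latest version is the row whose version equals max(version) per doc.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE document (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE document_version (
    doc_id INTEGER REFERENCES document(id),
    version INTEGER,
    body TEXT,
    PRIMARY KEY (doc_id, version)
);
""")
conn.execute("INSERT INTO document (id, title) VALUES (1, 'spec')")
for v, body in [(1, 'draft'), (2, 'review'), (3, 'final')]:
    conn.execute("INSERT INTO document_version VALUES (1, ?, ?)", (v, body))

latest = conn.execute("""
    SELECT v.version, v.body FROM document_version v
    WHERE v.doc_id = 1
      AND v.version = (SELECT max(version) FROM document_version
                       WHERE doc_id = v.doc_id)
""").fetchone()
print(latest)
```

Creator/updator columns and version-number bumping would then hang off the same insert path (in SA terms, a MapperExtension or default callables).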
hi, I'm back to the cane field...
Do your ABC tests all use select_mapper? Ticket 795 revealed
that totally basic ABC loading was broken if you're using
secondary loads of the remaining attributes (which is the default
behavior when you don't specify select_mapper).
You mean mapper's
I know in 0.4 one can request a polymorphic query to be automatically
split into multiple per-subtype queries. I've no idea how this
compares +/- to the huge union/outerjoin that gives all in one long
shot.
My question is: can this mechanism/approach be used somehow for
(semi-)automatic
just some ideas.
Here is an example of a properly interpreted row using the
dbutils.OID class:
08C82B7C6A844743::SDRAM::64Mb::Marketing::0C::70C::DC Electrical
Characteristics
Here is the binding statement being generated by SQLAlchemy:
2007-09-27 13:32:12,444 INFO
On Wednesday 26 September 2007 20:09:10 Michael Bayer wrote:
On Sep 25, 2007, at 12:15 PM, [EMAIL PROTECTED] wrote:
Anyway, all 10328 (joined) cases pass; have a nice day.
svilen
I've changed my approach on this one to what I should have done in
the 1st place. Try out r3518.
OK too now.
It's something to do with that expects thing...
Just do an x = str(b) before session.clear(); it breaks it all.
On Monday 24 September 2007 18:57:36 Michael Bayer wrote:
Something's weird. If I take out your expects/me.query stuff (which
remains impossible to read), and do this:
On Monday 24 September 2007 22:31:35 Michael Bayer wrote:
On Sep 24, 2007, at 12:13 PM, [EMAIL PROTECTED] wrote:
It's something to do with that expects thing...
Just do an x = str(b) before session.clear(); it breaks it all.
OK... that was just a *great* way to spend all day tracking that one
On Tuesday 25 September 2007 05:43:40 Huy Do wrote:
Michael Bayer wrote:
On Sep 24, 2007, at 11:48 AM, Huy Do wrote:
Hi,
Is it possible to get the SA ORM to return plain Python objects
(with eagerloaded relations and all) but without any attribute
instrumentation
(or anything else
hi.
r3506 is still ok, while r3507 gives this:
result: [] expected: [35]
SAMPLE: 2006-09-11 00:00:00 2006-09-12 00:00:00 2006-09-14 00:00:00
[35]
'trans exact, valids between _2' FROM test_range TimedRangeTestCase
--
Traceback
I don't have recent py-GIS experience, but from the past, it's been
tuple-likes and numpy arrays. The best option would be to have some
default data-representation constructor for each SA-GIS type, and
allow overriding that.
e.g. Point holds data by default in a tuple (Point.DataHolder=tuple),
but I can
See some notes at
http://www.sqlalchemy.org/trac/wiki/DatabaseNotes
Also check dbcook/usage/sa_engine_defs.py at
(svn co)
https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/usage/
for some create/drop stuff, both pyodbc/pymssql.
3. I'll be piggy backing on an existing ERP system
In my database I have 5000 customers who made purchases and made
some form of payment. I need to find the names of all the customers
who paid by cash.
My SQL query looks like this:
SELECT customers.name, payments.payid
FROM customers, purchases, payments
WHERE customers.cid =
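The WHERE clause is cut off above; one plausible completion, written with explicit JOINs and run via the stdlib sqlite3 module. The linking columns (purchases.cid, payments.pid) and the 'cash' method column are guesses beyond what the post shows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (cid INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE purchases (pid INTEGER PRIMARY KEY, cid INTEGER);
CREATE TABLE payments (payid INTEGER PRIMARY KEY, pid INTEGER, method TEXT);
INSERT INTO customers VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO purchases VALUES (10, 1), (11, 2);
INSERT INTO payments VALUES (100, 10, 'cash'), (101, 11, 'card');
""")
rows = conn.execute("""
    SELECT customers.name, payments.payid
    FROM customers
    JOIN purchases ON purchases.cid = customers.cid
    JOIN payments  ON payments.pid  = purchases.pid
    WHERE payments.method = 'cash'
""").fetchall()
print(rows)
```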
My problem is: I want to be able to select from Thread, ordering
by descending order of the maximum tn_ctime for each thread, to
find the most recently referenced threads. Which is to say, I want
to do something like
select
t.*,
coalesce(c.most_recent_child, t.tn_ctime) as
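The coalesce-over-the-newest-child shape can be tried out with the stdlib sqlite3 module; the schema below is invented for the demo (only the tn_ctime naming follows the post):

```python
import sqlite3

# Order threads by the newest child's ctime, falling back to the
# thread's own ctime when it has no children.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE thread (tn_id INTEGER PRIMARY KEY, tn_ctime TEXT);
CREATE TABLE child (tn_parent INTEGER, tn_ctime TEXT);
INSERT INTO thread VALUES (1, '2008-01-01'), (2, '2008-02-01');
INSERT INTO child VALUES (1, '2008-03-01');
""")
rows = conn.execute("""
    SELECT t.tn_id,
           coalesce((SELECT max(c.tn_ctime) FROM child c
                     WHERE c.tn_parent = t.tn_id),
                    t.tn_ctime) AS last_ref
    FROM thread t
    ORDER BY last_ref DESC
""").fetchall()
print(rows)
```

Thread 1 sorts first here: its child's ctime outranks thread 2's own ctime.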
g'day.
I have a subselect that may yield null (nothing found), and I want to
treat that as a value of 0. I've read about coalesce(), which would
return the first non-null of its args.
The plain query looks like:
expr = and_( trans.c.account.startswith( balance.c.account),
trans.c.date) =
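The coalesce-around-a-scalar-subquery shape being described can be shown in plain SQL via the stdlib sqlite3 module. The amounts are invented, and a prefix LIKE stands in for startswith(); the date condition from the truncated expression is left out:

```python
import sqlite3

# Per balance account, sum the matching trans rows; treat "no rows
# found" (a NULL scalar subquery) as 0 via coalesce.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trans (account TEXT, amount INTEGER);
CREATE TABLE balance (account TEXT);
INSERT INTO trans VALUES ('11:cash', 5), ('11:cash', 7), ('22:bank', 1);
INSERT INTO balance VALUES ('11'), ('33');
""")
rows = conn.execute("""
    SELECT b.account,
           coalesce((SELECT sum(t.amount) FROM trans t
                     WHERE t.account LIKE b.account || '%'), 0) AS total
    FROM balance b ORDER BY b.account
""").fetchall()
print(rows)
```

Account '33' matches nothing, so the subquery yields NULL and coalesce turns it into 0.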
On Sunday 09 September 2007 22:51:32 Michael Bayer wrote:
try calling scalar() on that subquery, it needs to be treated as
such.
Oops, forgot to mention: this is 0.3.x.
In 0.4 all is okay without scalars.
So, on 0.3.latest, adding .scalar() after .correlate() complains about
None having no
On Sunday 09 September 2007 23:30:20 Michael Bayer wrote:
Sorry, as_scalar() in 0.4. In 0.3, correlate() is not generative
(i.e. it modifies the parent select() and returns None), so that's your
problem (call correlate() beforehand).
Yeah, that's it. Thanks.
Now back to that argument, months ago:
On Friday 07 September 2007 13:54:03 Jean-Philippe Dutreve wrote:
I was using SA 0.3.9 to insert an item into an ordered list with the
bisect method insort (py 2.5):
mapper(Entry, table_entries)
mapper(Account, table_accounts, properties=dict(
    entries=relation(Entry,
OK. So this time I am trying to get data for my widget from a
database that has two compound keys, using assign_mapper.
#Initialize:
user_table = sqlalchemy.Table('User', metadata, autoload=True)
class User(object):
    pass
usermapper = assign_mapper(session.context, User, user_table)
#get
Or has something in the MapperExtension protocol changed?
  File "dbcook/usage/samanager.py", line 189, in query_BASE_instances
    return session.query( m.plain )
  File "sqlalchemy/orm/session.py", line 638, in query
    q = self._query_cls(mapper_or_class, self, **kwargs)
  File "sqlalchemy/orm/query.py", line 31,
On Friday 07 September 2007 20:25:50 Michael Bayer wrote:
We've got plenty of MapperExtensions running. I don't see how you
are getting mapper.extension to be your actual mapper; it's supposed
to point to a container called ExtensionCarrier (
unless you are
saying mapper.extension =
On Thursday 06 September 2007 23:03:35 Lukasz Szybalski wrote:
Hello,
So it seems to me there are two select functions that I can use, but
they are different.
First:
s = Users.select(Users.c.LASTNAME == 'Smith')
but when you want to select only two columns via:
s = Users.select([Users.c.LASTNAME,
hi.
For those interested, I've put a bitemporal mixin class under
dbcook/misc/timed2/. It handles objects with multiple versions
(history), disabled/enabled state, and stays sane with
same-timestamp versions.
The available queries are:
- get_time_clause( times):
return the clause
On Monday 03 September 2007 19:57:54 voltron wrote:
would this work?
users = Table('users', metadata,
    Column('id', Integer, primary_key=True),
    Column('username', String(50), unique=True,
        nullable=False),
    Column('password', String(255)),
    Column('email',
A pointing to A is a cyclical dependency,
same as A - B - A.
But in the latter case you must choose one of the links to be added later,
that is, use_alter=True for the ForeignKey.
In the former case the table declaration may or may not work without
use_alter.
In both cases you need post_update=True for the
Sorry for my bad SQL, but where have you specified that link?
You should have something like
foo.filter( (Main.childid==Child.childid) &
    Child.othercolumn.in_('a', 'b', 'c') )
or
foo.join( Child).filter( Child.othercolumn.in_('a', 'b', 'c') )
(warning: the exact syntax may or may not be this,
Will restore engine.echo today.
What about MetaData's? Why not leave some engine-type-independent kwargs
there (or at least echo, it's the most used in let's-try-this-now
cases), which go together with the bind to the create()? I know, I
know, explicit is better than implicit... no one would be
hi
I need to have a list collection with a list appender (in SA 0.4 terms)
that accepts either one positional arg as the value, or keyword args
which it uses to create the value. Each collection instance knows
what type of values to create.
so I do:
class MyCollection( list):
    factory
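A runnable sketch of the appender being described, in plain Python (the factory wiring is an assumption; an SA 0.4 collection would additionally carry the appender/remover decorators):

```python
# One positional arg is taken as the value itself; keyword args are fed
# to the per-collection factory to create the value.
class MyCollection(list):
    def __init__(self, factory):
        super(MyCollection, self).__init__()
        self.factory = factory

    def append(self, *args, **kw):
        if kw:
            if args:
                raise TypeError('positional or keyword args, not both')
            value = self.factory(**kw)
        else:
            (value,) = args
        list.append(self, value)

coll = MyCollection(dict)            # each instance knows its value type
coll.append('ready-made')
coll.append(name='made-by-factory', qty=3)
print(coll)
```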
sorry, fixed patch
On Monday 20 August 2007 18:09:41 jason kirtland wrote:
svilen wrote:
And anyway, I need to first create the object and only then append
it (the decorators will first fire the event on the object and only
then append(), that is, call me), so I may have to look
further/deeper. Maybe I can make my
But tacking a factory method onto a regular Python list is much
simpler, with a separation of concerns:
class FactoryCollection(list):
    def create(self, options, **kw):
Eh, sorry, I want it the hard way..
Now as I think of it, it's just me being lazy and fancy - preferring
implicitness and
There was a recent thread on this 2-3 weeks ago; look it up.
What's the best solution for a web procedure, in TurboGears,
that produces a large amount of INSERT INTOs? (from 2000 to 2
insertions on a submit)
I've done some tries with
On Wednesday 15 August 2007 04:26:31 Michael Bayer wrote:
On Aug 14, 2007, at 4:35 PM, Michael Bayer wrote:
On Aug 14, 2007, at 12:38 PM, svilen wrote:
---
orm.attribute
AttributeManager.init_attr():
the saving this one eventually does is too small compared to
a property
On Tuesday 14 August 2007 23:05:44 Michael Bayer wrote:
On Aug 14, 2007, at 3:30 PM, [EMAIL PROTECTED] wrote:
databases/sqlite: (reflecttable)
pragma_names is missing the BOOLEAN word/type - nulltype
btw, why isn't each dialect type class adding its own entry to
that pragma_names,
On Wednesday 15 August 2007 19:51:30 Michael Bayer wrote:
On Aug 15, 2007, at 10:52 AM, [EMAIL PROTECTED] wrote:
Second, I went to r3312, let init_attr() set _state as a plain
dict, and removed _state as a property. The difference
plain-dict/property (in favor of the plain dict) is like 2-3%.
On Wednesday 15 August 2007 20:54:27 Michael Bayer wrote:
I had in mind that the metaclass approach would be used, but not
necessarily with the walking stuff going on.
The walking is a quick-and-dirty and very simple way to get away with
it - for now.
If you really want to think about this,
Reflection should be configurable as to whether to stop at the dialect
level (SLint) or go back to abstract types (types.Int) - see my
autoload.py.
Why would one want to stop the reflection from going back to abstract
types?
i.e. if the current reflection (dialect-level) is made to autoguess
the
databases/sqlite: (reflecttable)
pragma_names is missing the BOOLEAN word/type - nulltype
btw, why isn't each dialect type class adding its own entry to that
pragma_names, and respectively to the colspecs?
Or, each class could have those pragma-word and base-type attributes,
and the dicts could be made by walking
orm.util.AliasedClauses._create_row_adapter()
class AliasedRowAdapter( object):
1. Can't this be made a standalone class, returning an
instance initialized with the map, which is then __call__()ed?
Is it faster to say self.map or to say map from locals()? It's
probably not
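The refactoring suggested above, sketched in plain Python (the class and the dict-shaped "row" are illustrative, not SA's internals):

```python
# A standalone adapter: initialized once with the column map, then
# called per row to translate keys.
class RowAdapter(object):
    def __init__(self, colmap):
        self.colmap = colmap          # alias-name -> original-name

    def __call__(self, row):
        return dict((alias, row[orig])
                    for alias, orig in self.colmap.items())

adapt = RowAdapter({'a_id': 'id', 'a_name': 'name'})
out = adapt({'id': 7, 'name': 'x'})
print(out)
```

The point of the design is that the map lookup setup happens once per query, not once per row.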
Looking at account_stuff_table.foreign_keys I have:
OrderedSet([ForeignKey(u'account_ids.account_id'),
ForeignKey('account_ids.account_id')])
I see one is unicode'd (the autoloaded one), the other one is not (yours).
unicode != str, so they probably appear differently named.
See if you can work around
On Thursday 09 August 2007 13:04:44 Paul Johnston wrote:
Hi,
A little update;
Also, in the same direction, a complete copy of some database seems to
consist of (at least) 3 stages:
1. recreate/remove the old one if it exists
2. copy the structure
3. copy the data
3 is your copy loop, which
On Thursday 09 August 2007 13:04:44 Paul Johnston wrote:
Hi,
A little update; this code handles the case where columns have a
key attribute:
model = __import__(sys.argv[1])
if sys.argv[2] == 'copy':
    seng = create_engine(sys.argv[3])
    deng = create_engine(sys.argv[4])
    for tbl
btw: why is 'text_as_varchar=1' considered only if it is in the
url (see mssql.py create_connect_args()) and not if it is in the
connect_args argument to create_engine()?
Fair question, and the short answer is: because that's all I needed.
We did have a discussion about unifying
I'm wondering if all the unicode strings (at least table/column
names) should be converted back into plain strings, as they were
before autoload reflected them from the database.
Well, some databases do support unicode identifier names, some
don't. I'd say don't do any conversion for
On Wednesday 08 August 2007 12:18:24 Paul Colomiets wrote:
[EMAIL PROTECTED] wrote:
hi, I have a similar idea/need within dbcook, although on a
somewhat higher level:
pre
cache_results/: a (dbcook/SA) add-on for automatically updated
database denormalisation caches of intermediate results,
On Wednesday 08 August 2007 11:44:57 Paul Johnston wrote:
Hi,
heh, adding this raw-data-copy to autoload.py
makes quite a database copier/migrator...
Yes indeed, I used this yesterday to migrate a legacy database; it
was impressively quick and easy.
I can see we've got similar
I've finally done the first POC implementation of this feature.
Basic usage looks like:
import aggregator as a
mapper(Line, lines,
    extension=a.Quick(a.Count(blocks.c.lines),
        a.Max(blocks.c.lastline, lines.c.id)))
(You also need foreign keys)
On Monday 06 August 2007 02:09:45 Paul Johnston wrote:
Hi,
I'm in the same process, and very interested in the answer!
I've found what I think is the best solution, and it sounds quite
obvious thinking about it: define the table, do a select on the old
database and an insert on the new
On Sunday 29 July 2007 23:36:32 Michael Bayer wrote:
This would be a new name available in 0.4 which would produce the
same Session that we are familiar with, except it would be by
default transactional and autoflushing. The create_session()
function stays around and does what it always did,
Your _own_ ctor, or something around MapperExtension?
On Monday 30 July 2007 21:26:44 Jonathan Ballet wrote:
No, after a flush(), everything is fine.
However, I would like to have the default value _before_
flush()-ing.
Hmm, after thinking about it for a few more minutes, it would be a bit
I have moved the metadata autoload-diff into its own place under
dbcook/misc/metadata/:
http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/misc/
svn co
https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/misc/
IMO, wikis are OK for ready-made/works-for-me things and