[sqlalchemy] Re: Consistency with DB while modifying metadata

2007-07-24 Thread svilen

> i just saw there is some usagerecipe ModelUpdate in the wiki, may
> be a good start point:
> http://www.sqlalchemy.org/trac/wiki/UsageRecipes/ModelUpdate
>
and this one: 
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/AutoCode




[sqlalchemy] Re: Consistency with DB while modifying metadata

2007-07-24 Thread svilen

i just saw there is a usage recipe, ModelUpdate, in the wiki - it may be a
good starting point:
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/ModelUpdate

> > >> assert t.compare(t2)
> > >
> > > yes i was hoping for such method (:-)
> > > And the best will be if it can produce a list/ hierarchy of
> > > differences, which then programaticaly can be iterated - and
> > > checked and resolved or raised higher.
> > >
> > >> but why not just use autoload=True across the board in the
> > >> first place and eliminate the chance of any errors ?
> > >
> > > what do u mean? The db-model of the app will not be the
> > > db-model in the database - and the semantix will be gone.
> > > Example:
> > >  from simplistic renaming of columns/ tables, to splitting a
> > > class into clas+subclass (table into 2 joined-tables) etc
> >
> > ok, fine.  anyway, feel free to add a trac ticket for this one -
> > it'll need a volunteer.
>
> ticket #680, have a look if what i wrote is what was meant in this
> thread.
> i may look into it after 2-3 weeks - unless someone does it ahead
> of me ;P)
>
> 





[sqlalchemy] Re: MSSQL & Time

2007-07-24 Thread sdobrev

On Tuesday 24 July 2007 23:29:52 Rick Morrison wrote:
> The list is useful only for a hacker on the MSSQL module, not for
> general users, but FWIW, I've added it to
> http://www.sqlalchemy.org/trac/wiki/DatabaseNotes
thanks

> I generally try to respond directly to help encourage anyone
> willing to offer a hand on the MSSQL module, as I don't have the
> time these days to get all the tests fixed, 

> so sorry if the repeated postings are bugging you.
no, you got me wrong here - it's not about bugging.
i'll need mssql working in several months, so i'll have to hack/
patch it myself/ourselves - and why start from scratch if parts 
of the puzzle are already known...

keep adding to the list if you find other issues - or various bits 
around these - maybe one day the list will start shrinking instead of 
growing...

> On 7/24/07, [EMAIL PROTECTED] wrote:
> >
> > one suggestion - why not make one "organisational" ticket and
> > list all these there? compiled from other mails too? and any
> > other issues about mssql? and keep it up-to-date?
> > otherwise it could be 20 separate tickets... with little (if any)
> > relation to SA.
> > or it can be a wiki page. probably better, easier to update.
> >
> > i see such list for n-th time last 2-3 months, IMO its a waste of
> > time to recall all this, lets put it into some place...
> >
> > ciao
> > svilen




[sqlalchemy] Re: MSSQL & Time

2007-07-24 Thread Rick Morrison
The list is useful only for a hacker on the MSSQL module, not for general
users, but FWIW, I've added it to
http://www.sqlalchemy.org/trac/wiki/DatabaseNotes

I generally try to respond directly to help encourage anyone willing to
offer a hand on the MSSQL module, as I don't have the time these days
to get all the tests fixed, so sorry if the repeated postings are bugging you.


On 7/24/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
>
> > The reasons for the failures that I've had time to look into have
> > so far had as much to do with the tests as with the MSSQL module.
> > They are mostly fixable within the constraints that the various
> > supported MSSQL DBAPI modules impose:
> >
> > -- DBLib (required by pymssql) does not like Unicode, and has a
> > 30 character identifier limit
> > -- pymssql seems to
> > have troubles with multiple cursors on a single connection and
> > doesn't properly implement rowcounts on executemany()
> > -- Pyodbc doesn't work with multiple result sets (I believe a
> > patch to fix this is about)
> > -- Pyodbc has troubles with the SQL 2000 syntax of "select
> > scope_identity()"
> > -- There are issues on all DBAPI modules (really, an issue with
> > MSSQL) comparing Python strings to MSSQL TEXT columns
> > -- Probably a half-dozen more issues that I've missed here.
>
> one suggestion - why not make one "organisational" ticket and list all
> these there? compiled from other mails too? and any other issues
> about mssql? and keep it up-to-date?
> otherwise it could be 20 separate tickets... with little (if any)
> relation to SA.
> or it can be a wiki page. probably better, easier to update.
>
> i see such list for n-th time last 2-3 months, IMO its a waste of time
> to recall all this, lets put it into some place...
>
> ciao
> svilen
>
> >
>




[sqlalchemy] Re: MSSQL & Time

2007-07-24 Thread sdobrev

> The reasons for the failures that I've had time to look into have
> so far had as much to do with the tests as with the MSSQL module.
> They are mostly fixable within the constraints that the various
> supported MSSQL DBAPI modules impose:
>
> -- DBLib (required by pymssql) does not like Unicode, and has a
> 30 character identifier limit
> -- pymssql seems to
> have troubles with multiple cursors on a single connection and
> doesn't properly implement rowcounts on executemany()
> -- Pyodbc doesn't work with multiple result sets (I believe a
> patch to fix this is about)
> -- Pyodbc has troubles with the SQL 2000 syntax of "select
> scope_identity()"
> -- There are issues on all DBAPI modules (really, an issue with
> MSSQL) comparing Python strings to MSSQL TEXT columns
> -- Probably a half-dozen more issues that I've missed here.

one suggestion - why not make one "organisational" ticket and list all 
of these there, compiled from other mails too, along with any other 
issues about mssql, and keep it up-to-date?
Otherwise it could become 20 separate tickets... with little (if any) 
relation to SA.
Or it could be a wiki page - probably better, and easier to update.

i've seen such a list for the n-th time in the last 2-3 months; IMO it's 
a waste of time to recall all this every time, so let's put it somewhere...

ciao
svilen




[sqlalchemy] Re: MSSQL & Time

2007-07-24 Thread Rick Morrison
Hi Christophe,

> I see. Are the reasons for thoses failures well known ? fixable ? If
> it's not too tricky I could spend a bit of time on it in a little
> while.


The reasons for the failures that I've had time to look into have so far had
as much to do with the tests as with the MSSQL module. They are mostly
fixable within the constraints that the various supported MSSQL DBAPI
modules impose:

-- DBLib (required by pymssql) does not like Unicode, and has a
   30-character identifier limit
-- pymssql seems to have troubles with multiple cursors on a single
   connection and doesn't properly implement rowcounts on executemany()
-- Pyodbc doesn't work with multiple result sets (I believe a patch to
   fix this is about)
-- Pyodbc has troubles with the SQL 2000 syntax of "select scope_identity()"
-- There are issues on all DBAPI modules (really, an issue with MSSQL)
   comparing Python strings to MSSQL TEXT columns
-- Probably a half-dozen more issues that I've missed here.

The failing tests were largely either written before the MSSQL module
became part of SA, or were written by someone without access to a MSSQL
server, hence a lot of the issues. Writing tests that run across database
servers with different SQL syntax and capabilities is hard, so it may be
easier in some cases to provide a test that is specific to the database
server than to try to get a single test to work with all servers,
especially for the more esoteric features.


Paul has already fixed quite a few; a continuing effort to fix more
(especially on *nix) would be *greatly* welcomed, as "passes all unit
tests" is often a required checklist item for a lot of users before
they'll continue with the software.

Rick




On 7/24/07, Christophe de VIENNE <[EMAIL PROTECTED]> wrote:
>
>
> 2007/7/24, Paul Johnston <[EMAIL PROTECTED]>:
> >
> > Hi,
> >
> > >Is there any special condition for the unittests to run on mssql ?
> > >
> > >
> > A few... you really need to be running on Windows and using PyODBC. And
> > then append ?text_as_varchar=1 to your DBURI.
>
> I'll try that :-)
>
> > Looking at your command line, it looks like you're running from Unix.
> > You can use PyMSSQL from Unix, and the basic features work fine, but a
> > number of edge cases don't, so you get quite a lot of unit test
> failures.
>
> I see. Are the reasons for thoses failures well known ? fixable ? If
> it's not too tricky I could spend a bit of time on it in a little
> while.
>
> Thanks,
>
> Christophe
>
> >
>




[sqlalchemy] Re: Cumulative Select

2007-07-24 Thread Koen Bok

And again you made my day...

On Jul 24, 7:17 pm, Michael Bayer <[EMAIL PROTECTED]> wrote:
> I had to put one little "trick" in here to make the subquery -  
>  work, which is something i should look into; otherwise it  
> went straight in.
>
> from sqlalchemy import *
>
> transaction = table('transaction',
>  column('id'),
>  column('id_product'),
>  column('price'),
>  column('quantity')
>  )
>
> f1 = transaction.alias('f1')
> f2 = transaction.alias('f2')
>
> subquery = select([func.coalesce(func.sum(f2.c.quantity))], and_
> (f2.c.id>f1.c.id, f2.c.id_product==bindparam('f2item')), scalar=True)
>
> s = select([f1.c.id, subquery.label('foo') - bindparam('offset').label
> ('offset')], and_(
>  f1.c.id_product==bindparam('f1item'),
>  bindparam('something') <= subquery
>  ) , order_by=[desc(f1.c.id)], limit = 1
> )
>
> print s
>
> I think in 0.4 im going to deprecate "scalar=True" and instead have  
> you sayselect(...).scalar()
>
> On Jul 24, 2007, at 9:38 AM, Koen Bok wrote:
>
>
>
> > I need to do a cumulative select on postgres, and I got it to work
> > with some extreme SQL query, but I was wondering if it could be done
> > more easily without having to drop to SQL but with SQLAlchemy
> > statements.
>
> > Let's say I have a table with stock transactions like this:
>
> > Transaction
> >id
> >id_product
> >price
> >quantity
>
> > And it is filled like this:
>
> > 1  1   12  10
> > 2  1   13  5
> > 3  1   12  3
> > 4  1   11  6
> > 5  1   10  5
>
> > Now at moment X my stock is 13 and I want to know the costs for each
> > product in my stock. So I add a cumulative column to select on and
> > expect to get the last three rows back as their cumulative total is <=
> > as my stock:
>
> >CUM
> > 1  1   12  10  29
> > 2  1   13  5   19
> > 3  1   12  3   14  this
> > 4  1   11  6   11  this
> > 5  1   10  5   5   and this...
>
> > Extra info:
>
> > This is the query I currently use to get the transaction ID and offset
> > back:
>
> > SELECT
> >f1.id,
> >(
> >SELECT
> >coalesce(sum(quantity), 0)
> >FROM transaction f2
> >WHERE f2.id>=f1.id
> >AND f2.id_item = %s
> >) - %s as offset
> > FROM
> >transaction f1
> > AND f1.id_item = %s
> > AND %s <= (
> >SELECT
> >coalesce(sum(quantity), 0)
> >FROM transaction f2
> >WHERE f2.id>=f1.id
> >AND f2.id_item = %s
> >)
> > ORDER BY f1.id DESC LIMIT 1





[sqlalchemy] Re: Using SA to move data between databases

2007-07-24 Thread Christophe de VIENNE

Hi michael,

2007/7/24, michael <[EMAIL PROTECTED]>:
> Hello Christophe,
>
> If I am reading you intent (and forgive me if I am wrong), you would
> like to have one backup/restore routine for the same schema but on a
> variety of databases.

Yes

> And, you want the user to be responsible for
> doing both actions.

More precisely, I want to be able to provide the user with a tool which
allows him to do it with minimal knowledge.

>
> Your references to "universal dump format" and "common backup format"
> point sqarely at a flat file; which can sometimes be
> [tab|space|comma|pipe|etc] delimited. Since you said that your
> databases will be small, text is truly universal.  It does not require
> python or any scripting language and can easily be viewable in any text
> editor.  As a side note, I have found pipe delimited to be the least
> troublesome when moving between databases that have user-input data.

Agreed on the flat file. A zipped archive of flat files would do the
trick, I guess.

What I found interesting with pytables is the high accessibility of the
data, even in a compressed file. This would make the implementation of a
partial restore easier. But this is just an idea I'm throwing out; maybe
I'm totally wrong about what pytables would bring.

> The first thing that comes to mind is that there should probably
> already be an automated backup scheduled.  Users will forget and when a
> restore is needed, they will want fresh data.
>
> Each database has their own export-to-text command and each has their
> own import-from-text command; and related syntax. This can be triggered
> with cron for all unixes; On Windows, MSSQL has its own scheduler.   (Of
> course, one could also have a button do the same thing, in *addition* to
> routine backups.)

I agree, but that's almost orthogonal to what I have in mind. What
you suggest is what should be done by a decent admin or consultant
who is installing the software.

I have two real-life cases which are not covered by this approach:

1) My 'packaging team' prepares demonstration databases as well as
ready-to-start ones. I want those databases to be accessible on our
website, easy to choose from (ideally without having to know which
database server is used), and easily usable from within my app.

2) A user calls the support team, and the problem seems to come from
bizarre data in the database. I want the user (if he has the rights)
to be able to save his data, attach it to an email and send it to
the support team (or post it on a support form on the website).

> Are you asking for features in SA to handle scheduled backups and
> user initiated restores?

No. What I'd like to do is provide facilities for dumping and
restoring a complete db. And I'd like this to work when two dbs with
different engines have (almost) the same schema.
This is a bit like migrate, which provides an engine-agnostic
API for schema modifications.

One could easily, for example, build a sadump tool on top of this, or
a database manager tool for a particular application (this is what I
intend to do).

All this is 'just a thought' for now, but motivated by an actual future need.

Christophe
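
A rough, hedged sketch of the dump/restore idea above, staying entirely at
the SA level so it behaves the same on mysql and mssql.  The URLs are
placeholders, 'tables' is assumed to be the application's own list of Table
objects in dependency order, and batching, error handling and
identity/sequence columns are left out:

from sqlalchemy import create_engine

def copy_data(tables, source_url, target_url):
    src = create_engine(source_url)
    dst = create_engine(target_url)
    for table in tables:
        # pull every row from the source and re-insert it into the target
        rows = [dict(zip(table.columns.keys(), row))
                for row in src.execute(table.select())]
        if rows:
            dst.execute(table.insert(), rows)

# e.g. copy_data(my_tables, 'mysql://...', 'mssql://...')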




[sqlalchemy] Re: Cumulative Select

2007-07-24 Thread Michael Bayer

I had to put one little "trick" in here to make the subquery -  
work, which is something I should look into; otherwise it went straight in.

from sqlalchemy import *

transaction = table('transaction',
    column('id'),
    column('id_product'),
    column('price'),
    column('quantity')
)

f1 = transaction.alias('f1')
f2 = transaction.alias('f2')

subquery = select([func.coalesce(func.sum(f2.c.quantity))],
    and_(f2.c.id > f1.c.id, f2.c.id_product == bindparam('f2item')),
    scalar=True)

s = select([f1.c.id, subquery.label('foo') - bindparam('offset').label('offset')],
    and_(
        f1.c.id_product == bindparam('f1item'),
        bindparam('something') <= subquery
    ), order_by=[desc(f1.c.id)], limit=1
)

print s


I think in 0.4 I'm going to deprecate "scalar=True" and instead have
you say select(...).scalar()

On Jul 24, 2007, at 9:38 AM, Koen Bok wrote:

>
> I need to do a cumulative select on postgres, and I got it to work
> with some extreme SQL query, but I was wondering if it could be done
> more easily without having to drop to SQL but with SQLAlchemy
> statements.
>
> Let's say I have a table with stock transactions like this:
>
> Transaction
>   id
>   id_product
>   price
>   quantity
>
> And it is filled like this:
>
> 1 1   12  10
> 2 1   13  5
> 3 1   12  3
> 4 1   11  6
> 5 1   10  5
>
> Now at moment X my stock is 13 and I want to know the costs for each
> product in my stock. So I add a cumulative column to select on and
> expect to get the last three rows back as their cumulative total is <=
> as my stock:
>
>   CUM
> 1 1   12  10  29
> 2 1   13  5   19
> 3 1   12  3   14  this
> 4 1   11  6   11  this
> 5 1   10  5   5   and this...
>
> Extra info:
>
> This is the query I currently use to get the transaction ID and offset
> back:
>
> SELECT
>   f1.id,
>   (
>   SELECT
>   coalesce(sum(quantity), 0)
>   FROM transaction f2
>   WHERE f2.id>=f1.id
>   AND f2.id_item = %s
>   ) - %s as offset
> FROM
>   transaction f1
> AND f1.id_item = %s
> AND %s <= (
>   SELECT
>   coalesce(sum(quantity), 0)
>   FROM transaction f2
>   WHERE f2.id>=f1.id
>   AND f2.id_item = %s
>   )
> ORDER BY f1.id DESC LIMIT 1
>
>
> >





[sqlalchemy] Re: Consistency with DB while modifying metadata

2007-07-24 Thread svilen

> >> assert t.compare(t2)
> >
> > yes i was hoping for such method (:-)
> > And the best will be if it can produce a list/ hierarchy of
> > differences, which then programaticaly can be iterated - and
> > checked and resolved or raised higher.
> >
> >> but why not just use autoload=True across the board in the first
> >> place and eliminate the chance of any errors ?
> >
> > what do u mean? The db-model of the app will not be the db-model
> > in the database - and the semantix will be gone.
> > Example:
> >  from simplistic renaming of columns/ tables, to splitting a
> > class into clas+subclass (table into 2 joined-tables) etc
>
> ok, fine.  anyway, feel free to add a trac ticket for this one -
> it'll need a volunteer.
Ticket #680 - have a look to see whether what I wrote is what was meant in 
this thread.
I may look into it after 2-3 weeks - unless someone does it ahead of 
me ;P)




[sqlalchemy] Re: Using SA to move data between databases

2007-07-24 Thread michael

On Tue, 24 Jul 2007 17:52:55 +0200
"Christophe de VIENNE" <[EMAIL PROTECTED]> wrote:

> 
> Hi
> 
> 2007/7/24, michael <[EMAIL PROTECTED]>:
> >
> > On Tue, 24 Jul 2007 15:14:52 +0200
> > "Christophe de VIENNE" <[EMAIL PROTECTED]> wrote:
> >
> > >
> > > Hi,
> > >
> > > I'm in the same process, and very interested in the answer !
> > >
> > > One idea I had is to define an universal dump format, (based for
> > > example on pytables), which could be used to backup and restore
> > > datas from/to various databases.
> > > If this way is a good one and a good implementation proposed, it
> > > could become an interesting addon to SA.
> > >
> > > Regards,
> > >
> > > Christophe
> > >
> > > 2007/7/24, Paul Johnston <[EMAIL PROTECTED]>:
> > > >
> > > > Hi,
> > > >
> > > > I am migrating an Access database to MSSQL server. I have coded
> > > > up basic Access support for SQLAlchemy, which I'll be commiting
> > > > to the 0.4 branch shortly.
> > > >
> > > > Using autocode, I now have a set of SQLAlchemy table
> > > > definitions. My question is: what's a good way to move all the
> > > > data across?
> > > >
> > > > Thanks for any help,
> > > >
> > > > Paul
> > > >
> > > > >
> > > >
> >
> > With all due respect for the brilliance of SQLAlchemy.. it is not an
> > operating system and not a database.
> >
> > Maybe I am missing the point here, but, in the two hours it took to
> > get a reply to the OP, one could have output from one db (to csv)
> > and import to the other one.  Another alternative is to actually
> > use the db functionality.  MSAccess and MSSQL both start with
> > 'MS'.  If I am not mistaken, those are interoperable.  One can set
> > up a 'link' and transfer the data, no?  It has been years, but I
> > remember doing that.
> >
> > Moving data in/out of disparate data sources is a pretty common data
> > wharehouse process.  And if they are large datasets, native 'bulk'
> > transfers are fastest.  All of which can be automated... without
> > intervention from the application layer.  (was that blasphemy?)
> 
> I see no blasphemy, but that does not exactly address my personal
> issue (which is not exactly the same as Paul it seems).
> I will have, in a few months, clients running my software on mysql,
> other on mssql. I want to have a common backup format, so I can
> restore any backup on any supported db, and all that should be doable
> by a "Toto User" (toto=dummy).
> Having it in the application layer allow me do to that. And since I
> hate to re-do things, my approach will most probably to use SA to dump
> and restore the datas, even if it's a bit slow (the databases are not
> very big), and it will always be possible to optimize the process by
> doing db-specific operations.
> The pytables format looks attractive for this use because it's fast,
> scalable, compresses the datas, and have generic viewer.
> 
> My experience with SA is still a bit light, and I might say stupid
> things without seeing it, but that's the general idea.
> 
> My two cents :-)
> 
> Regards,
> 
> Christophe

Hello Christophe,

If I am reading your intent correctly (and forgive me if I am wrong), you
would like to have one backup/restore routine for the same schema but on a
variety of databases. And, you want the user to be responsible for
doing both actions.

Your references to "universal dump format" and "common backup format"
point squarely at a flat file, which can be
[tab|space|comma|pipe|etc] delimited. Since you said that your
databases will be small, text is truly universal.  It does not require
python or any scripting language and can easily be viewed in any text
editor.  As a side note, I have found pipe-delimited to be the least
troublesome when moving between databases that have user-input data.

The first thing that comes to mind is that there should probably
already be an automated backup scheduled.  Users will forget and when a
restore is needed, they will want fresh data.  

Each database has its own export-to-text command and its own
import-from-text command, with related syntax. This can be triggered
with cron on all unixes; on Windows, MSSQL has its own scheduler.  (Of
course, one could also have a button do the same thing, in *addition* to
routine backups.)

Are you asking for features in SA to handle scheduled backups and
user initiated restores?

-- 

michael






[sqlalchemy] Re: How to use SessionContextExt with cherrypy?

2007-07-24 Thread Michael Bayer


On Jul 24, 2007, at 12:09 PM, alex.schenkman wrote:

> class Session(object):
> def __init__(self, name):
> self.db = create_engine( 'sqlite:///%s' % name )
> self.db.echo = False
> self.metadata = BoundMetaData(self.db)
> self.session = create_session()
> self.db.docs = Table('docs', self.metadata, autoload=True)
> self.db.mapper = mapper(Document, self.myDB.docs)
>

Mappers are at the same level at which your mapped class is defined.
So if you define your Document class at the module level, so must
your Mapper be defined.  Also, if you defined classes and mappers
within a function for each session, that wouldn't scale anyway, since
the mappers get stored in a global registry and you'd run out of
memory after many distinct users visited the site.

So if your mapper is at the module level, so are your Tables and
MetaData.  Sessions are not; so bind your individual sessions to the
engines directly.  (Engines are usually module level too, but in this
case you are opening many individual sqlite files, so they're local to
your Session object.)


metadata = MetaData()

class Document(object):
    pass

# can't autoload=True here unless you have a specific SQLite file that
# is safe to use.  doesn't your system need to create the tables inside
# the sqlite databases anyway ?
docs = Table('docs', metadata,
    Column(...)
)

mapper(Document, docs)

class Session(object):
    def __init__(self, name):
        self.db = create_engine('sqlite:///%s' % name)
        self.session = create_session(bind=self.db)

That's it.  You don't need to reference "mapper" anywhere, just
"self.session" and maybe "docs".






[sqlalchemy] Re: Consistency with DB while modifying metadata

2007-07-24 Thread Michael Bayer


On Jul 24, 2007, at 11:07 AM, svilen wrote:

>
> On Tuesday 24 July 2007 17:30:27 Michael Bayer wrote:
>>
>> such a feature would make usage of table reflection, and then a
>> comparison operation, along the lines of :
>>
>> ...
>>
>> assert t.compare(t2)
> yes i was hoping for such method (:-)
> And the best will be if it can produce a list/ hierarchy of
> differences, which then programaticaly can be iterated - and checked
> and resolved or raised higher.
>
>> but why not just use autoload=True across the board in the first
>> place and eliminate the chance of any errors ?
> what do u mean? The db-model of the app will not be the db-model in
> the database - and the semantix will be gone.
> Example:
>  from simplistic renaming of columns/ tables, to splitting a class
> into clas+subclass (table into 2 joined-tables) etc

OK, fine.  Anyway, feel free to add a trac ticket for this one -
it'll need a volunteer.






[sqlalchemy] Re: How to use SessionContextExt with cherrypy?

2007-07-24 Thread alex.schenkman

Thanks for taking the time to answer this. I have had a frustrating
day and this gives me hope again, =)

I'm not trying to move objects between sessions.
I might be misunderstanding the APIs here, but what I think I need is
the following:

Two persons surf to my site at the same time and I keep some session-
specific data in an SQLite3 file.
One SQLite3 file per person/session.

(I know that CherryPy stores session info and I'm using it now, but
this part of the code was written before I understood that.)

So for each new person/session I instantiate the following class which
I then keep in memory through CherryPy session management.


class Session(object):
    def __init__(self, name):
        self.db = create_engine( 'sqlite:///%s' % name )
        self.db.echo = False
        self.metadata = BoundMetaData(self.db)
        self.session = create_session()
        self.db.docs = Table('docs', self.metadata, autoload=True)
        self.db.mapper = mapper(Document, self.myDB.docs)


The second time I instantiate this class I get an error saying that
the class already has a primary mapper defined, which is true.

How do I get around this?
Thanks again!

On Jul 24, 4:47 pm, Michael Bayer <[EMAIL PROTECTED]> wrote:
> theres multiple levels of issues with this.
>
> one is, its not very clean to move objects between databases using
> sessions.   to do so, you have to remove the "_instance_key" of the
> object and save it into the other session:
>
> f = Foo()
> sess1.save(f)
> sess1.flush()
>
> sess1.expunge(f)
> del f._instance_key
> sess2.save(f)
> sess2.flush()
>
> the second is, a mapper does not define *where* you are storing your
> object, it only defines *how*.  therefore you *never* make a second
> mapper for a class, unless you are using one of two very specific
> recipes which are mentioned in the docs (which this example is not).
>
> third, the SessionContextExt shouldnt really "interfere" with this
> operation, in that it wont prevent you from expunging the object from
> one session and saving it into another, but it does make it
> confusing.  SessionContextExt is just the tiniest little convenience
> feature, that of "your objects automatically get saved into a
> session" and also "lazy loaders know how to find a session".  but if
> you are moving objects between sessions i would think its just going
> to confuse matters since its making decisions for you behind the
> scenes.  i think its important to try to make your code work while
> using the minimal (minimal here meaning, "none") number of "add-ons"
> to start with, so that you have something which works and can be
> understood.  then the add-ons can be implemented afterwards, as the
> need for them arises.
>
> On Jul 24, 2007, at 10:18 AM, alex.schenkman wrote:
>
>
>
> > Hi:
>
> > I'm new writing web apps and I'm using cherrypy with sqlalchemy.
> > As I understand it, each user navigating the site and clicking on
> > pages gets a new thread and thus it is necesary to  use sqlalchemy in
> > a thread-safe manner.
>
> > After reading the docs I assume that I have to use SessionContextExt,
> > but I don't figure out how.
>
> > As a test I try to write records to two different databases, but I
> > always get  a mapper error.
>
> > sqlalchemy.exceptions.ArgumentError: Class ''
> > already has a primary mapper defined with entity name 'None'.
>
> > Any hint is much appreciated!!
>
> > This is the code I try:
>
> > from sqlalchemy import *
> > from sqlalchemy.ext.sessioncontext import SessionContext
>
> > class Doc(object):
> > def __init__(self, id, path, state):
> > self.DocID = id
> > self.Path = path
> > self.Complete = state
>
> > if __name__ == "__main__":
> > db1 = create_engine( 'sqlite:///test.db' )
> > db1.echo = False
> > metadata = BoundMetaData( db1 )
> > docs = Table('docs', metadata, autoload=True)
>
> > ctx1 = SessionContext(create_session)
> > mapper(Doc, docs, extension=ctx1.mapper_extension)
> > d = Doc(43,'/etc/password',True)
> > ctx1.current.flush()
>
> > db2 = create_engine( 'sqlite:///test2.db' )
> > db2.echo = False
> > metadata2 = BoundMetaData( db2 )
>
> > d = Doc(15,'/etc/init',False)
>
> > ctx2 = SessionContext(create_session)
> > mapper(Doc, docs, extension=ctx2.mapper_extension)
> > ctx2.current.flush()





[sqlalchemy] Re: Using SA to move data between databases

2007-07-24 Thread Christophe de VIENNE

Hi

2007/7/24, michael <[EMAIL PROTECTED]>:
>
> On Tue, 24 Jul 2007 15:14:52 +0200
> "Christophe de VIENNE" <[EMAIL PROTECTED]> wrote:
>
> >
> > Hi,
> >
> > I'm in the same process, and very interested in the answer !
> >
> > One idea I had is to define an universal dump format, (based for
> > example on pytables), which could be used to backup and restore datas
> > from/to various databases.
> > If this way is a good one and a good implementation proposed, it could
> > become an interesting addon to SA.
> >
> > Regards,
> >
> > Christophe
> >
> > 2007/7/24, Paul Johnston <[EMAIL PROTECTED]>:
> > >
> > > Hi,
> > >
> > > I am migrating an Access database to MSSQL server. I have coded up
> > > basic Access support for SQLAlchemy, which I'll be commiting to the
> > > 0.4 branch shortly.
> > >
> > > Using autocode, I now have a set of SQLAlchemy table definitions. My
> > > question is: what's a good way to move all the data across?
> > >
> > > Thanks for any help,
> > >
> > > Paul
> > >
> > > >
> > >
>
> With all due respect for the brilliance of SQLAlchemy.. it is not an
> operating system and not a database.
>
> Maybe I am missing the point here, but, in the two hours it took to get
> a reply to the OP, one could have output from one db (to csv) and
> import to the other one.  Another alternative is to actually use the db
> functionality.  MSAccess and MSSQL both start with 'MS'.  If I am not
> mistaken, those are interoperable.  One can set up a 'link' and
> transfer the data, no?  It has been years, but I remember doing that.
>
> Moving data in/out of disparate data sources is a pretty common data
> wharehouse process.  And if they are large datasets, native 'bulk'
> transfers are fastest.  All of which can be automated... without
> intervention from the application layer.  (was that blasphemy?)

I see no blasphemy, but that does not exactly address my personal
issue (which is not exactly the same as Paul's, it seems).
In a few months I will have clients running my software on mysql,
others on mssql. I want to have a common backup format, so I can
restore any backup on any supported db, and all that should be doable
by a "Toto User" (toto=dummy).
Having it in the application layer allows me to do that. And since I
hate to re-do things, my approach will most probably be to use SA to dump
and restore the data, even if it's a bit slow (the databases are not
very big), and it will always be possible to optimize the process by
doing db-specific operations.
The pytables format looks attractive for this use because it's fast,
scalable, compresses the data, and has a generic viewer.

My experience with SA is still a bit light, and I might say stupid
things without realizing it, but that's the general idea.

My two cents :-)

Regards,

Christophe




[sqlalchemy] Re: MSSQL & Time

2007-07-24 Thread Christophe de VIENNE

Hi,

I have a working patch which adds an MSTime type on mssql.

http://www.sqlalchemy.org/trac/attachment/ticket/679

The unittest DateTest runs successfully on my linux box with pymssql,
and I'd like somebody to try and run it under windows, as I don't have
time to set up a testing env' right now.

Regards,

Christophe




[sqlalchemy] Re: Consistency with DB while modifying metadata

2007-07-24 Thread svilen

On Tuesday 24 July 2007 17:30:27 Michael Bayer wrote:
>
> such a feature would make usage of table reflection, and then a
> comparison operation, along the lines of :
>
> ...
>
> assert t.compare(t2)
Yes, I was hoping for such a method (:-)
And the best will be if it can produce a list/hierarchy of 
differences, which can then programmatically be iterated - and checked 
and resolved or raised higher.

> but why not just use autoload=True across the board in the first
> place and eliminate the chance of any errors ?
What do you mean? The db-model of the app will not be the db-model in 
the database - and the semantics will be gone.
Example: 
 from simplistic renaming of columns/tables, to splitting a class 
into class+subclass (table into 2 joined-tables) etc.




[sqlalchemy] Re: How to use SessionContextExt with cherrypy?

2007-07-24 Thread Michael Bayer


There are multiple levels of issues with this.

One is, it's not very clean to move objects between databases using
sessions.  To do so, you have to remove the "_instance_key" of the
object and save it into the other session:

f = Foo()
sess1.save(f)
sess1.flush()

sess1.expunge(f)
del f._instance_key
sess2.save(f)
sess2.flush()

The second is, a mapper does not define *where* you are storing your
object, it only defines *how*.  Therefore you *never* make a second
mapper for a class, unless you are using one of two very specific
recipes which are mentioned in the docs (which this example is not).

Third, the SessionContextExt shouldn't really "interfere" with this
operation, in that it won't prevent you from expunging the object from
one session and saving it into another, but it does make it
confusing.  SessionContextExt is just the tiniest little convenience
feature, that of "your objects automatically get saved into a
session" and also "lazy loaders know how to find a session".  But if
you are moving objects between sessions I would think it's just going
to confuse matters, since it's making decisions for you behind the
scenes.  I think it's important to try to make your code work while
using the minimal (minimal here meaning, "none") number of "add-ons"
to start with, so that you have something which works and can be
understood.  Then the add-ons can be implemented afterwards, as the
need for them arises.

On Jul 24, 2007, at 10:18 AM, alex.schenkman wrote:

>
> Hi:
>
> I'm new writing web apps and I'm using cherrypy with sqlalchemy.
> As I understand it, each user navigating the site and clicking on
> pages gets a new thread and thus it is necesary to  use sqlalchemy in
> a thread-safe manner.
>
> After reading the docs I assume that I have to use SessionContextExt,
> but I don't figure out how.
>
> As a test I try to write records to two different databases, but I
> always get  a mapper error.
>
> sqlalchemy.exceptions.ArgumentError: Class ''
> already has a primary mapper defined with entity name 'None'.
>
> Any hint is much appreciated!!
>
>
> This is the code I try:
>
> from sqlalchemy import *
> from sqlalchemy.ext.sessioncontext import SessionContext
>
> class Doc(object):
> def __init__(self, id, path, state):
> self.DocID = id
> self.Path = path
> self.Complete = state
>
> if __name__ == "__main__":
> db1 = create_engine( 'sqlite:///test.db' )
> db1.echo = False
> metadata = BoundMetaData( db1 )
> docs = Table('docs', metadata, autoload=True)
>
>
> ctx1 = SessionContext(create_session)
> mapper(Doc, docs, extension=ctx1.mapper_extension)
> d = Doc(43,'/etc/password',True)
> ctx1.current.flush()
>
>
> db2 = create_engine( 'sqlite:///test2.db' )
> db2.echo = False
> metadata2 = BoundMetaData( db2 )
>
> d = Doc(15,'/etc/init',False)
>
> ctx2 = SessionContext(create_session)
> mapper(Doc, docs, extension=ctx2.mapper_extension)
> ctx2.current.flush()
>
>
> >





[sqlalchemy] Re: Consistency with DB while modifying metadata

2007-07-24 Thread Michael Bayer


On Jul 24, 2007, at 9:59 AM, Anton V. Belyaev wrote:

>
> Of course db modification is hard. It cant be done completely
> automatically. For now I would like SQLAlchemy just to signal somehow
> when its definitions are different from already existing db tables.
> When I do create_all() it checks anyway tables properties, but doesnt
> let me know when there is mismatch.


Such a feature would make use of table reflection, and then a
comparison operation, along the lines of:

t = Table('mytable', meta,
    Column(...)
)

someothermeta = MetaData()
t2 = Table('mytable', someothermeta, autoload=True,
    autoload_with=)

assert t.compare(t2)

So if we had a comprehensive "compare()" method, we could probably
distill the above into something like:

table.verify()


but why not just use autoload=True across the board in the first
place and eliminate the chance of any errors?
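
As a rough illustration of the kind of comparison being discussed - this is
only a hedged sketch, not an existing SA API, and it diffs nothing but
column names, using the same reflection shown above:

from sqlalchemy import MetaData, Table

def rough_table_diff(declared, engine):
    # reflect the live table and compare column-name sets
    reflected = Table(declared.name, MetaData(), autoload=True,
                      autoload_with=engine)
    declared_cols = set(c.name for c in declared.columns)
    reflected_cols = set(c.name for c in reflected.columns)
    return {'missing_in_db': declared_cols - reflected_cols,
            'extra_in_db': reflected_cols - declared_cols}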






[sqlalchemy] Re: Connection initialization

2007-07-24 Thread Michael Bayer

The current approach is to use the "creator" keyword to create_engine(),
which references a callable that returns a connection.  You connect,
and then issue whatever additional SQL on the connection before
returning it.  However, this means your URL is just "://"
and any additional portions of the URL are not used.
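
For example, a hedged sketch of that "creator" approach applied to the SET
statements from the original question (the MySQLdb connection arguments
are placeholders):

import MySQLdb
from sqlalchemy import create_engine

def connect():
    # open the raw DBAPI connection and run the per-connection setup once
    conn = MySQLdb.connect(host='localhost', user='me', passwd='secret',
                           db='mydb')
    cursor = conn.cursor()
    cursor.execute("SET collation_connection=utf8_general_ci")
    cursor.execute("SET names utf8")
    cursor.execute("SET character_set_client=utf8")
    cursor.execute("SET character_set_results=utf8")
    cursor.close()
    return conn

engine = create_engine('mysql://', creator=connect)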

Jason will be adding some new hooks in 0.4 such that you can add a
function that gets called after the connection is established, so you
can still connect via a normal URL.

On Jul 24, 2007, at 5:01 AM, Paul Colomiets wrote:

>
> Hi,
>
> How can I place some initialization code for each connection in the  
> pool?
>
> Currently I do something like this:
> engine.execute("SET collation_connection=utf8_general_ci")
> engine.execute("SET names utf8")
> engine.execute("SET character_set_client=utf8")
> engine.execute("SET character_set_results=utf8")
> before processing each http-request. But it needs to be done
> only once per connection.
>
> --
> Paul.
>
>
> >





[sqlalchemy] How to use SessionContextExt with cherrypy?

2007-07-24 Thread alex.schenkman

Hi:

I'm new to writing web apps and I'm using cherrypy with sqlalchemy.
As I understand it, each user navigating the site and clicking on
pages gets a new thread, and thus it is necessary to use sqlalchemy in
a thread-safe manner.

After reading the docs I assume that I have to use SessionContextExt,
but I can't figure out how.

As a test I try to write records to two different databases, but I
always get a mapper error.

sqlalchemy.exceptions.ArgumentError: Class ''
already has a primary mapper defined with entity name 'None'.

Any hint is much appreciated!!


This is the code I try:

from sqlalchemy import *
from sqlalchemy.ext.sessioncontext import SessionContext

class Doc(object):
    def __init__(self, id, path, state):
        self.DocID = id
        self.Path = path
        self.Complete = state

if __name__ == "__main__":
    db1 = create_engine( 'sqlite:///test.db' )
    db1.echo = False
    metadata = BoundMetaData( db1 )
    docs = Table('docs', metadata, autoload=True)

    ctx1 = SessionContext(create_session)
    mapper(Doc, docs, extension=ctx1.mapper_extension)
    d = Doc(43,'/etc/password',True)
    ctx1.current.flush()

    db2 = create_engine( 'sqlite:///test2.db' )
    db2.echo = False
    metadata2 = BoundMetaData( db2 )

    d = Doc(15,'/etc/init',False)

    ctx2 = SessionContext(create_session)
    mapper(Doc, docs, extension=ctx2.mapper_extension)
    ctx2.current.flush()





[sqlalchemy] Re: Consistency with DB while modifying metadata

2007-07-24 Thread Anton V. Belyaev



On 24 Jul, 17:34, svilen <[EMAIL PROTECTED]> wrote:
> On Tuesday 24 July 2007 16:22:43 Anton V. Belyaev wrote:
>
> > Hey,
>
> > I believe there is a common approach to the situation, but I just
> > dont know it.
>
> > Let say, I have some tables created in the DB using SQLAlchemy.
> > Then I modify Python code, which describes the table (add a column,
> > remove another column,...). What is the common way to handle this
> > situation? I guess it would be good to have an exception raised
> > when there is a mismatch between DB tables and Python-defined
> > (using SQLAlchemy).
>
> Very soon i'll be in your situation (with hundreds of tables), so i'm
> very interested if something comes up.
>
> it's in the todo list of dbcook. my idea so far is:
>  - automaticaly reverse engineer i.e. autoload the available
> db-structure into some metadata.
>  - create another metadata as of current code
>  - compare the 2 metadatas, and based on some rules - ??? -
> alter/migrate the DB into the new shape.
> This has to be as automatic as possible, leaving only certain - if
> any - decisions to the user.
> Assuming that the main decision - to upgrade or not to upgrade - is
> taken positive, and any locks etc explicit access is obtained.
>
> svil

Of course db modification is hard. It can't be done completely
automatically. For now I would like SQLAlchemy just to signal somehow
when its definitions are different from the already-existing db tables.
When I do create_all() it checks the tables' properties anyway, but doesn't
let me know when there is a mismatch.





[sqlalchemy] Re: Using SA to move data between databases

2007-07-24 Thread michael

On Tue, 24 Jul 2007 15:14:52 +0200
"Christophe de VIENNE" <[EMAIL PROTECTED]> wrote:

> 
> Hi,
> 
> I'm in the same process, and very interested in the answer !
> 
> One idea I had is to define an universal dump format, (based for
> example on pytables), which could be used to backup and restore datas
> from/to various databases.
> If this way is a good one and a good implementation proposed, it could
> become an interesting addon to SA.
> 
> Regards,
> 
> Christophe
> 
> 2007/7/24, Paul Johnston <[EMAIL PROTECTED]>:
> >
> > Hi,
> >
> > I am migrating an Access database to MSSQL server. I have coded up
> > basic Access support for SQLAlchemy, which I'll be commiting to the
> > 0.4 branch shortly.
> >
> > Using autocode, I now have a set of SQLAlchemy table definitions. My
> > question is: what's a good way to move all the data across?
> >
> > Thanks for any help,
> >
> > Paul
> >
> > >
> >

With all due respect for the brilliance of SQLAlchemy... it is not an
operating system and not a database.

Maybe I am missing the point here, but, in the two hours it took to get
a reply to the OP, one could have exported from one db (to csv) and
imported into the other one.  Another alternative is to actually use the db
functionality.  MSAccess and MSSQL both start with 'MS'.  If I am not
mistaken, those are interoperable.  One can set up a 'link' and
transfer the data, no?  It has been years, but I remember doing that.

Moving data in/out of disparate data sources is a pretty common data
warehouse process.  And if they are large datasets, native 'bulk'
transfers are fastest.  All of which can be automated... without
intervention from the application layer.  (Was that blasphemy?)


-- 

michael






[sqlalchemy] Cumulative Select

2007-07-24 Thread Koen Bok

I need to do a cumulative select on postgres, and I got it to work
with some extreme SQL query, but I was wondering if it could be done
more easily with SQLAlchemy statements, without having to drop down to
raw SQL.

Let's say I have a table with stock transactions like this:

Transaction
id
id_product
price
quantity

And it is filled like this:

id  id_product  price  quantity
1   1           12     10
2   1           13     5
3   1           12     3
4   1           11     6
5   1           10     5

Now at moment X my stock is 13 and I want to know the costs for each
product in my stock. So I add a cumulative column to select on and
expect to get the last three rows back, as their cumulative total is <=
my stock:

id  id_product  price  quantity  CUM
1   1           12     10        29
2   1           13     5         19
3   1           12     3         14  this
4   1           11     6         11  this
5   1           10     5         5   and this...

Extra info:

This is the query I currently use to get the transaction ID and offset
back:

SELECT
    f1.id,
    (
        SELECT coalesce(sum(quantity), 0)
        FROM transaction f2
        WHERE f2.id >= f1.id
          AND f2.id_item = %s
    ) - %s AS offset
FROM
    transaction f1
WHERE f1.id_item = %s
  AND %s <= (
        SELECT coalesce(sum(quantity), 0)
        FROM transaction f2
        WHERE f2.id >= f1.id
          AND f2.id_item = %s
    )
ORDER BY f1.id DESC LIMIT 1





[sqlalchemy] Re: Consistency with DB while modifying metadata

2007-07-24 Thread svilen

On Tuesday 24 July 2007 16:22:43 Anton V. Belyaev wrote:
> Hey,
>
> I believe there is a common approach to the situation, but I just
> dont know it.
>
> Let say, I have some tables created in the DB using SQLAlchemy.
> Then I modify Python code, which describes the table (add a column,
> remove another column,...). What is the common way to handle this
> situation? I guess it would be good to have an exception raised
> when there is a mismatch between DB tables and Python-defined
> (using SQLAlchemy).

Very soon I'll be in your situation (with hundreds of tables), so I'm 
very interested if something comes up.

It's in the todo list of dbcook. My idea so far is:
 - automatically reverse engineer, i.e. autoload, the available 
   db-structure into some metadata.
 - create another metadata from the current code
 - compare the 2 metadatas and, based on some rules - ??? - 
   alter/migrate the DB into the new shape.
This has to be as automatic as possible, leaving only certain - if 
any - decisions to the user,
assuming that the main decision - to upgrade or not to upgrade - is 
taken positively, and any locks and other explicit access are obtained.

svil




[sqlalchemy] Consistency with DB while modifying metadata

2007-07-24 Thread Anton V. Belyaev

Hey,

I believe there is a common approach to this situation, but I just don't
know it.

Let's say I have some tables created in the DB using SQLAlchemy. Then I
modify the Python code which describes a table (add a column, remove
another column, ...). What is the common way to handle this situation?
I guess it would be good to have an exception raised when there is a
mismatch between the DB tables and the Python-defined ones (using
SQLAlchemy).





[sqlalchemy] Re: Using SA to move data between databases

2007-07-24 Thread Christophe de VIENNE

Hi,

I'm in the same process, and very interested in the answer!

One idea I had is to define a universal dump format (based for
example on pytables), which could be used to back up and restore data
from/to various databases.
If this approach is a good one and a good implementation is proposed, it
could become an interesting addon to SA.

Regards,

Christophe

2007/7/24, Paul Johnston <[EMAIL PROTECTED]>:
>
> Hi,
>
> I am migrating an Access database to MSSQL server. I have coded up basic
> Access support for SQLAlchemy, which I'll be commiting to the 0.4 branch
> shortly.
>
> Using autocode, I now have a set of SQLAlchemy table definitions. My
> question is: what's a good way to move all the data across?
>
> Thanks for any help,
>
> Paul
>
> >
>




[sqlalchemy] Re: MSSQL & Time

2007-07-24 Thread Christophe de VIENNE

2007/7/24, Paul Johnston <[EMAIL PROTECTED]>:
>
> Hi,
>
> >Is there any special condition for the unittests to run on mssql ?
> >
> >
> A few... you really need to be running on Windows and using PyODBC. And
> then append ?text_as_varchar=1 to your DBURI.

I'll try that :-)

> Looking at your command line, it looks like you're running from Unix.
> You can use PyMSSQL from Unix, and the basic features work fine, but a
> number of edge cases don't, so you get quite a lot of unit test failures.

I see. Are the reasons for those failures well known? Fixable? If
it's not too tricky I could spend a bit of time on it in a little
while.

Thanks,

Christophe




[sqlalchemy] Re: MSSQL & Time

2007-07-24 Thread Paul Johnston

Hi,

>Is there any special condition for the unittests to run on mssql ?
>  
>
A few... you really need to be running on Windows and using PyODBC. And 
then append ?text_as_varchar=1 to your DBURI.
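For example, the DBURI would then look something like this (server name,
user and password invented):

mssql://user:password@sqlserverhost/testdb?text_as_varchar=1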

Looking at your command line, it looks like you're running from Unix. 
You can use PyMSSQL from Unix, and the basic features work fine, but a 
number of edge cases don't, so you get quite a lot of unit test failures.

Paul




[sqlalchemy] Re: MSSQL & Time

2007-07-24 Thread Christophe de VIENNE

Hi,

I wrote a little patch for mssql.py that adds an MSTime type, but before
submitting anything, I'd like the unittests to run successfully.
The problem is that the tests (especially the DateTest) fail with
mssql before I touch anything in the code (cf details at the end of
the message).

Is there any special condition for the unittests to run on mssql?

Regards,

Christophe

Here is the command I run :

#> PYTHONPATH=./test python test/sql/testtypes.py DateTest
--dburi=mssql://tester:[EMAIL PROTECTED]/sf_tmp

And I got :

testdate (__main__.DateTest) ... FAIL
testdate2 (__main__.DateTest) ... ERROR
testtextdate (__main__.DateTest) ... ERROR

==
ERROR: testdate (__main__.DateTest)
--
Traceback (most recent call last):
  File "/home/cdevienne/prog/sqlalchemy_trunk/test/testbase.py", line
398, in __call__
self._initTest.setUpAll()
  File "test/sql/testtypes.py", line 324, in setUpAll
users_with_date.insert().execute(**idict) # insert the data
  File "/home/cdevienne/prog/sqlalchemy_trunk/lib/sqlalchemy/sql.py",
line 1207, in execute
return self.compile(bind=self.bind,
parameters=compile_params).execute(*multiparams, **params)
  File "/home/cdevienne/prog/sqlalchemy_trunk/lib/sqlalchemy/sql.py",
line 1097, in execute
return e.execute_compiled(self, *multiparams, **params)
  File "/home/cdevienne/prog/sqlalchemy_trunk/lib/sqlalchemy/engine/base.py",
line 780, in execute_compiled
return connection.execute_compiled(compiled, *multiparams, **params)
  File "/home/cdevienne/prog/sqlalchemy_trunk/lib/sqlalchemy/engine/base.py",
line 568, in execute_compiled
self._execute_raw(context)
  File "/home/cdevienne/prog/sqlalchemy_trunk/lib/sqlalchemy/engine/base.py",
line 581, in _execute_raw
self._execute(context)
  File "/home/cdevienne/prog/sqlalchemy_trunk/lib/sqlalchemy/engine/base.py",
line 599, in _execute
raise exceptions.SQLError(context.statement, context.parameters, e)
SQLError: (DatabaseError) internal error: None (None) 'INSERT INTO
query_users_with_date (user_id, user_name, user_datetime, user_date)
VALUES (%(user_id)s, %(user_name)s, %(user_datetime)s, %(user_date)s)'
{'user_id': 7, 'user_name': 'jack', 'user_datetime':
datetime.datetime(2005, 11, 10, 0, 0), 'user_date':
datetime.datetime(2005, 11, 10, 0, 0)}

==
ERROR: testdate2 (__main__.DateTest)
--
Traceback (most recent call last):
  File "test/sql/testtypes.py", line 351, in testdate2
t.insert().execute(adate=d1, adatetime=d1)
  File "/home/cdevienne/prog/sqlalchemy_trunk/lib/sqlalchemy/sql.py",
line 1207, in execute
return self.compile(bind=self.bind,
parameters=compile_params).execute(*multiparams, **params)
  File "/home/cdevienne/prog/sqlalchemy_trunk/lib/sqlalchemy/sql.py",
line 1097, in execute
return e.execute_compiled(self, *multiparams, **params)
  File "/home/cdevienne/prog/sqlalchemy_trunk/lib/sqlalchemy/engine/base.py",
line 780, in execute_compiled
return connection.execute_compiled(compiled, *multiparams, **params)
  File "/home/cdevienne/prog/sqlalchemy_trunk/lib/sqlalchemy/engine/base.py",
line 568, in execute_compiled
self._execute_raw(context)
  File "/home/cdevienne/prog/sqlalchemy_trunk/lib/sqlalchemy/engine/base.py",
line 581, in _execute_raw
self._execute(context)
  File "/home/cdevienne/prog/sqlalchemy_trunk/lib/sqlalchemy/engine/base.py",
line 599, in _execute
raise exceptions.SQLError(context.statement, context.parameters, e)
SQLError: (DatabaseError) internal error: None (None) 'INSERT INTO
testdate (adate, adatetime) VALUES (%(adate)s, %(adatetime)s)'
{'adate': datetime.date(2007, 10, 30), 'adatetime':
datetime.date(2007, 10, 30)}

==
ERROR: testtextdate (__main__.DateTest)
--
Traceback (most recent call last):
  File "test/sql/testtypes.py", line 340, in testtextdate
self.assert_(isinstance(x[0][0], datetime.datetime))
IndexError: list index out of range

==
FAIL: testdate (__main__.DateTest)
--
Traceback (most recent call last):
  File "test/sql/testtypes.py", line 333, in testdate
self.assert_(l == insert_data, 'DateTest mismatch: got:%s
expected:%s' % (l, insert_data))
AssertionError: DateTest mismatch: got:[] expected:[[7, 'jack',
datetime.datetime(2005, 11, 10, 0, 0), datetime.datetime(2005, 11, 10,
0, 0)], [8, 'roy', datetime.datetime(2005, 11, 10, 11, 52, 35),
datetime.datetime(2006, 5, 10, 15, 32, 47)], [9, 'foo',
datetime.datetime(2005, 11, 10, 11, 52, 35), datetime.datetime(2004,
9, 18, 4, 0, 52)], [10, 'colb

[sqlalchemy] Re: MSSQL & Time

2007-07-24 Thread Christophe de VIENNE

Hi Paul,

Thanks, I will give it a try. Stay tuned :-)

Christophe

2007/7/24, Paul Johnston <[EMAIL PROTECTED]>:
>
> Hi Christophe,
>
> No particular reason, just no-one has needed it yet, and the unit tests
> don't cover it.
>
> If you want to have a go at a patch, that'd be great. Otherwise I'll
> take a look at this in the next few days.
>
> Paul
>
>
> Christophe de VIENNE wrote:
>
> >Hi,
> >
> >The mssql database does not provide a "Time" or a "Date" type, but
> >only datetime and smalldatetime.
> >In SQLAlchemy, the smalldatetime is used to implement the generic type
> >Date, but the Time type is not implemented.
> >Is there a particular reason for that ? could it inherit the DateTime
> >type, with a date set to 0 ?
> >
> >Regards,
> >
> >Christophe
> >




[sqlalchemy] Using SA to move data between databases

2007-07-24 Thread Paul Johnston

Hi,

I am migrating an Access database to MSSQL server. I have coded up basic 
Access support for SQLAlchemy, which I'll be committing to the 0.4 branch 
shortly.

Using autocode, I now have a set of SQLAlchemy table definitions. My 
question is: what's a good way to move all the data across?

Thanks for any help,

Paul




[sqlalchemy] Re: MSSQL & Time

2007-07-24 Thread Paul Johnston

Hi Christophe,

No particular reason, just no-one has needed it yet, and the unit tests 
don't cover it.

If you want to have a go at a patch, that'd be great. Otherwise I'll 
take a look at this in the next few days.

Paul


Christophe de VIENNE wrote:

>Hi,
>
>The mssql database does not provide a "Time" or a "Date" type, but
>only datetime and smalldatetime.
>In SQLAlchemy, the smalldatetime is used to implement the generic type
>Date, but the Time type is not implemented.
>Is there a particular reason for that ? could it inherit the DateTime
>type, with a date set to 0 ?
>
>Regards,
>
>Christophe
>





[sqlalchemy] MSSQL & Time

2007-07-24 Thread Christophe de VIENNE

Hi,

The mssql database does not provide a "Time" or a "Date" type, but
only datetime and smalldatetime.
In SQLAlchemy, smalldatetime is used to implement the generic Date
type, but the Time type is not implemented.
Is there a particular reason for that? Could it inherit the DateTime
type, with the date part set to 0?
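For what it's worth, here is a tiny plain-Python illustration of the
conversion such a type would have to do (the dummy date value and the
helper names are only for illustration, not the actual SQLAlchemy type
API):

import datetime

DUMMY_DATE = datetime.date(1900, 1, 1)   # the "date set to 0" part; value assumed

def time_to_datetime(value):
    # bind direction: wrap the time in a full datetime for the DATETIME column
    if value is None:
        return None
    return datetime.datetime.combine(DUMMY_DATE, value)

def datetime_to_time(value):
    # result direction: throw the dummy date away again
    if value is None:
        return None
    return value.time()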

Regards,

Christophe




[sqlalchemy] Connection initialization

2007-07-24 Thread Paul Colomiets

Hi,

How can I place some initialization code for each connection in the pool?

Currently I do something like this:
engine.execute("SET collation_connection=utf8_general_ci")
engine.execute("SET names utf8")
engine.execute("SET character_set_client=utf8")
engine.execute("SET character_set_results=utf8")
before processing each HTTP request, but it really only needs to be done
once per connection.
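One possible way I can think of to get it to run exactly once per pooled
connection (a sketch only - the MySQLdb driver and the connection
parameters are my assumptions) would be to hand create_engine a custom
creator function:

import MySQLdb
from sqlalchemy import create_engine

def connect():
    # called by the pool each time it opens a new DBAPI connection
    conn = MySQLdb.connect(host='localhost', user='user',
                           passwd='password', db='mydb')
    cur = conn.cursor()
    for stmt in ("SET collation_connection=utf8_general_ci",
                 "SET names utf8",
                 "SET character_set_client=utf8",
                 "SET character_set_results=utf8"):
        cur.execute(stmt)
    cur.close()
    return conn

engine = create_engine('mysql://', creator=connect)

Would that be the recommended approach?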

--
Paul.





[sqlalchemy] Re: Model to Dictionary

2007-07-24 Thread King Simon-NFHD78

What do you mean by a model? If you are talking about an instance of a
mapped class, you could try something like this (untested):

def model_to_dict(instance):
    # build a plain {property name: current value} dict for a mapped instance
    model_dict = {}
    for propname in instance.mapper.props:
        model_dict[propname] = getattr(instance, propname)
    return model_dict

Note that if you have relations to other mapped classes, this will put
instances of those other classes in the dict. If you don't want that,
you could check whether the result of the getattr is a mapped class and
call the function recursively - roughly as in the sketch below. But you'd
probably also need to check for circular references, and treat x-to-many
relationships specially.
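Here is an equally untested sketch of that recursive variant; it assumes
you can list your mapped classes up front (User and Address are made-up
names) and uses id() tracking to break cycles:

MAPPED_CLASSES = (User, Address)   # your own mapped classes - assumed names

def model_to_dict_recursive(instance, _seen=None):
    if _seen is None:
        _seen = set()
    _seen.add(id(instance))
    result = {}
    for propname in instance.mapper.props:
        value = getattr(instance, propname)
        if isinstance(value, list):
            # x-to-many relation: recurse into children we have not seen yet
            result[propname] = [model_to_dict_recursive(v, _seen)
                                for v in value if id(v) not in _seen]
        elif isinstance(value, MAPPED_CLASSES) and id(value) not in _seen:
            # x-to-one relation to another mapped instance
            result[propname] = model_to_dict_recursive(value, _seen)
        else:
            # plain column value
            result[propname] = value
    return result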

Simon

> -----Original Message-----
> From: sqlalchemy@googlegroups.com 
> [mailto:[EMAIL PROTECTED] On Behalf Of HiTekElvis
> Sent: 23 July 2007 23:31
> To: sqlalchemy
> Subject: [sqlalchemy] Model to Dictionary
> 
> 
> Anybody know a way to change a model into a dictionary?
> 
> For those to whom it means anything, I'm hoping to pass that
> dictionary into a formencode.Schema.from_python method.
> 
> Any ideas?
> 
> -Josh
> 
