[sqlalchemy] Re: Model to Dictionary
What do you mean by a model? If you are talking about an instance of a mapped class, you could try something like this (untested):

    def model_to_dict(instance):
        model_dict = {}
        for propname in instance.mapper.props:
            model_dict[propname] = getattr(instance, propname)
        return model_dict

Note that if you have relations to other mapped classes, this will put instances of those other classes in the dict. If you don't want that, you could check whether the result of the getattr is a mapped class and call the function recursively. But you'd probably also need to check for circular references, and treat x-to-many relationships specially.

Simon

-----Original Message-----
From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]] On Behalf Of HiTekElvis
Sent: 23 July 2007 23:31
To: sqlalchemy
Subject: [sqlalchemy] Model to Dictionary

Anybody know a way to change a model into a dictionary? For those to whom it means anything, I'm hoping to pass that dictionary into a formencode.Schema.from_python method. Any ideas?

-Josh

--
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
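The recursion-with-cycle-checking idea from the reply above can be sketched without any SQLAlchemy at all. This is a hedged illustration only: plain `vars()` stands in for iterating mapper properties, and all names here are my own, not an SA API.

```python
def to_dict(obj, _seen=None):
    """Recursively convert an object's attributes to a dict,
    guarding against the circular references mentioned above.
    vars(obj) is a stand-in for iterating mapper properties."""
    if _seen is None:
        _seen = set()
    if id(obj) in _seen:
        return None                      # break the cycle
    _seen.add(id(obj))
    result = {}
    for name, value in vars(obj).items():
        if isinstance(value, list):      # x-to-many: recurse per item
            result[name] = [to_dict(v, _seen) if hasattr(v, '__dict__') else v
                            for v in value]
        elif hasattr(value, '__dict__'): # nested mapped object
            result[name] = to_dict(value, _seen)
        else:
            result[name] = value
    return result
```

In real SA code you would walk the mapper's properties instead of `vars()`, and probably represent a back-reference as a foreign-key value rather than `None`.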
[sqlalchemy] Connection initialization
Hi,

How can I place some initialization code for each connection in the pool? Currently I do something like this:

    engine.execute("SET collation_connection=utf8_general_ci")
    engine.execute("SET names utf8")
    engine.execute("SET character_set_client=utf8")
    engine.execute("SET character_set_results=utf8")

before processing each http-request. But it needs to be done only once per connection.

--
Paul.
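One place to hang once-per-connection setup is the `creator` argument to `create_engine()`, which takes a function that builds each raw DB-API connection before it enters the pool. A minimal sketch, with sqlite3 and a PRAGMA standing in for the MySQL driver and SET statements (the MySQLdb wiring is an assumption, not shown):

```python
import sqlite3

def make_connection():
    """Build a DB-API connection and run one-time session setup on it.
    Passed as create_engine(..., creator=make_connection), this runs
    once per pooled connection rather than once per http-request.
    The PRAGMA is a sqlite stand-in for the SET statements above."""
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("PRAGMA foreign_keys = ON")
    cur.close()
    return conn

# engine = create_engine("sqlite://", creator=make_connection)
```

With a MySQL DBURI the same shape applies: call `MySQLdb.connect(...)` in the creator and issue the four SET statements there.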
[sqlalchemy] MSSQL Time
Hi,

The MSSQL database does not provide a Time or a Date type, only datetime and smalldatetime. In SQLAlchemy, smalldatetime is used to implement the generic Date type, but the Time type is not implemented. Is there a particular reason for that? Could it inherit the DateTime type, with the date part set to 0?

Regards,

Christophe
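The "DateTime with the date set to 0" idea amounts to pinning the date part to a constant on the way in and discarding it on the way out. A standalone sketch of just that conversion (1900-01-01 is MSSQL's base date for datetime values; the wiring into an SA type class is left out):

```python
from datetime import datetime, date, time

# MSSQL stores datetime relative to this base date ("date set to 0")
ZERO_DATE = date(1900, 1, 1)

def time_to_datetime(t):
    """bind-parameter direction: store a time as a datetime whose
    date part is pinned to the constant base date."""
    return datetime.combine(ZERO_DATE, t)

def datetime_to_time(dt):
    """result-row direction: drop the dummy date, keep the time."""
    return dt.time()
```

An SA Time type for MSSQL would do exactly this in its bind/result conversion methods.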
[sqlalchemy] Re: MSSQL Time
Hi Christophe,

No particular reason, just no-one has needed it yet, and the unit tests don't cover it. If you want to have a go at a patch, that'd be great. Otherwise I'll take a look at this in the next few days.

Paul

Christophe de VIENNE wrote:
> The MSSQL database does not provide a Time or a Date type, but only
> datetime and smalldatetime. In SQLAlchemy, the smalldatetime is used to
> implement the generic type Date, but the Time type is not implemented.
> Is there a particular reason for that? Could it inherit the DateTime
> type, with a date set to 0?
[sqlalchemy] Using SA to move data between databases
Hi,

I am migrating an Access database to MSSQL Server. I have coded up basic Access support for SQLAlchemy, which I'll be committing to the 0.4 branch shortly. Using autocode, I now have a set of SQLAlchemy table definitions. My question is: what's a good way to move all the data across?

Thanks for any help,

Paul
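One straightforward answer is a table-by-table select-and-insert loop driven by the autocoded definitions. A hedged sketch, with two sqlite3 connections standing in for the Access source and MSSQL target (with SA you would issue `table.select()` and `table.insert()` against two engines in the same shape):

```python
import sqlite3

def copy_table(src, dst, table, columns):
    """Generic per-table copy: select everything from the source
    connection and bulk-insert into the destination. Table and
    column names are assumed identical on both sides, as they are
    with autocoded definitions."""
    rows = src.execute("SELECT %s FROM %s"
                       % (", ".join(columns), table)).fetchall()
    placeholders = ", ".join(["?"] * len(columns))
    dst.executemany("INSERT INTO %s (%s) VALUES (%s)"
                    % (table, ", ".join(columns), placeholders), rows)
    dst.commit()
```

For tables with foreign keys you would order the copies parent-first, and for large tables batch the fetch instead of `fetchall()`.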
[sqlalchemy] Re: MSSQL Time
Hi Paul,

Thanks, I will give it a try. Stay tuned :-)

Christophe

2007/7/24, Paul Johnston [EMAIL PROTECTED]:
> No particular reason, just no-one has needed it yet, and the unit tests
> don't cover it. If you want to have a go at a patch, that'd be great.
> Otherwise I'll take a look at this in the next few days.
[sqlalchemy] Re: MSSQL Time
Hi,

> Is there any special condition for the unittests to run on mssql?

A few... you really need to be running on Windows and using PyODBC. And then append ?text_as_varchar=1 to your DBURI.

Looking at your command line, it looks like you're running from Unix. You can use PyMSSQL from Unix, and the basic features work fine, but a number of edge cases don't, so you get quite a lot of unit test failures.

Paul
[sqlalchemy] Re: MSSQL Time
2007/7/24, Paul Johnston [EMAIL PROTECTED]:
> A few... you really need to be running on Windows and using PyODBC.
> And then append ?text_as_varchar=1 to your DBURI.

I'll try that :-)

> Looking at your command line, it looks like you're running from Unix.
> You can use PyMSSQL from Unix, and the basic features work fine, but a
> number of edge cases don't, so you get quite a lot of unit test failures.

I see. Are the reasons for those failures well known? Fixable? If it's not too tricky I could spend a bit of time on it in a little while.

Thanks,

Christophe
[sqlalchemy] Re: Using SA to move data between databases
Hi,

I'm in the same process, and very interested in the answer!

One idea I had is to define a universal dump format (based, for example, on pytables), which could be used to back up and restore data from/to various databases. If this approach is a good one and a good implementation is proposed, it could become an interesting addon to SA.

Regards,

Christophe

2007/7/24, Paul Johnston [EMAIL PROTECTED]:
> I am migrating an Access database to MSSQL server. I have coded up basic
> Access support for SQLAlchemy, which I'll be committing to the 0.4 branch
> shortly. Using autocode, I now have a set of SQLAlchemy table definitions.
> My question is: what's a good way to move all the data across?
[sqlalchemy] Consistency with DB while modifying metadata
Hey,

I believe there is a common approach to this situation, but I just don't know it. Let's say I have some tables created in the DB using SQLAlchemy. Then I modify the Python code which describes the tables (add a column, remove another column, ...). What is the common way to handle this situation? I guess it would be good to have an exception raised when there is a mismatch between the DB tables and the Python-defined ones (using SQLAlchemy).
[sqlalchemy] Re: Consistency with DB while modifying metadata
On Tuesday 24 July 2007 16:22:43 Anton V. Belyaev wrote:
> I believe there is a common approach to the situation, but I just don't
> know it. Let's say I have some tables created in the DB using SQLAlchemy.
> Then I modify the Python code which describes the table (add a column,
> remove another column, ...). What is the common way to handle this
> situation?

Very soon I'll be in your situation (with hundreds of tables), so I'm very interested if something comes up. It's on the todo list of dbcook. My idea so far is:

 - automatically reverse engineer, i.e. autoload, the available db-structure into some metadata
 - create another metadata as of the current code
 - compare the 2 metadatas, and based on some rules - ??? -
 - alter/migrate the DB into the new shape

This has to be as automatic as possible, leaving only certain - if any - decisions to the user. Assuming that the main decision - to upgrade or not to upgrade - is taken positively, and any locks etc. explicit access is obtained.

svil
[sqlalchemy] Cumulative Select
I need to do a cumulative select on postgres, and I got it to work with some extreme SQL query, but I was wondering if it could be done more easily, without having to drop to SQL, with SQLAlchemy statements.

Let's say I have a table with stock transactions like this:

    Transaction
    id  id_product  price  quantity

And it is filled like this:

    1   1   12   10
    2   1   13    5
    3   1   12    3
    4   1   11    6
    5   1   10    5

Now at moment X my stock is 13 and I want to know the costs for each product in my stock. So I add a cumulative column to select on, and expect to get the last three rows back, as their cumulative total is >= my stock:

                     CUM
    1   1   12  10   29
    2   1   13   5   19
    3   1   12   3   14   <- this
    4   1   11   6   11   <- this
    5   1   10   5    5   <- and this...

Extra info: this is the query I currently use to get the transaction ID and offset back:

    SELECT f1.id,
           (SELECT coalesce(sum(quantity), 0) FROM transaction f2
            WHERE f2.id >= f1.id AND f2.id_item = %s) - %s AS offset
    FROM transaction f1
    WHERE f1.id_item = %s
    AND %s <= (SELECT coalesce(sum(quantity), 0) FROM transaction f2
               WHERE f2.id >= f1.id AND f2.id_item = %s)
    ORDER BY f1.id DESC
    LIMIT 1
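The running-total logic can be checked end-to-end on the sample data with the same correlated-subquery idea. This sketch uses sqlite3 and a hypothetical `txn` table name; the `f2.id >= f1.id` comparison is what produces the bottom-up cumulative sum:

```python
import sqlite3

# sample data from the post: (id, id_product, price, quantity)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txn (id INTEGER, id_product INTEGER,"
             " price INTEGER, quantity INTEGER)")
conn.executemany("INSERT INTO txn VALUES (?,?,?,?)",
                 [(1, 1, 12, 10), (2, 1, 13, 5), (3, 1, 12, 3),
                  (4, 1, 11, 6), (5, 1, 10, 5)])

def rows_covering_stock(conn, product, stock):
    """Return (id, cumulative) pairs, newest transaction first,
    stopping once the running total reaches the stock on hand."""
    rows = conn.execute("""
        SELECT f1.id,
               (SELECT coalesce(sum(quantity), 0) FROM txn f2
                WHERE f2.id >= f1.id AND f2.id_product = ?) AS cum
        FROM txn f1
        WHERE f1.id_product = ?
        ORDER BY f1.id DESC""", (product, product)).fetchall()
    out = []
    for id_, cum in rows:
        out.append((id_, cum))
        if cum >= stock:
            break
    return out
```

With a stock of 13 this picks transactions 5, 4, and 3, matching the marked rows above.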
[sqlalchemy] Re: Using SA to move data between databases
On Tue, 24 Jul 2007 15:14:52 +0200 Christophe de VIENNE [EMAIL PROTECTED] wrote:
> One idea I had is to define a universal dump format (based for example
> on pytables), which could be used to back up and restore data from/to
> various databases.

With all due respect for the brilliance of SQLAlchemy... it is not an operating system and not a database. Maybe I am missing the point here, but in the two hours it took to get a reply to the OP, one could have exported from one db (to CSV) and imported into the other one.

Another alternative is to actually use the db functionality. MSAccess and MSSQL both start with 'MS'. If I am not mistaken, those are interoperable. One can set up a 'link' and transfer the data, no? It has been years, but I remember doing that.

Moving data in/out of disparate data sources is a pretty common data warehouse process. And if they are large datasets, native 'bulk' transfers are fastest. All of which can be automated... without intervention from the application layer. (was that blasphemy?)

--
michael
[sqlalchemy] Re: Consistency with DB while modifying metadata
On 24 Jul, 17:34, svilen [EMAIL PROTECTED] wrote:
> my idea so far is:
> - automatically reverse engineer, i.e. autoload, the available
>   db-structure into some metadata
> - create another metadata as of current code
> - compare the 2 metadatas, and based on some rules - ??? -
> - alter/migrate the DB into the new shape

Of course db modification is hard. It can't be done completely automatically. For now I would like SQLAlchemy just to signal somehow when its definitions are different from the already existing db tables. When I do create_all() it checks the tables' properties anyway, but doesn't let me know when there is a mismatch.
[sqlalchemy] Re: Consistency with DB while modifying metadata
On Jul 24, 2007, at 9:59 AM, Anton V. Belyaev wrote:
> For now I would like SQLAlchemy just to signal somehow when its
> definitions are different from the already existing db tables. When I do
> create_all() it checks the tables' properties anyway, but doesn't let me
> know when there is a mismatch.

such a feature would make usage of table reflection, and then a comparison operation, along the lines of:

    t = Table('mytable', meta, Column(...))

    someothermeta = MetaData()
    t2 = Table('mytable', someothermeta, autoload=True,
               autoload_with=connection)

    assert t.compare(t2)

so if we had a comprehensive compare() method, we could probably distill the above into something like:

    table.verify(connection)

but why not just use autoload=True across the board in the first place and eliminate the chance of any errors?
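Short of a full compare() method, the core of such a check is small. Here is a hedged, SA-free sketch that compares column name/type mappings and returns a list of differences rather than a bare boolean; how those mappings are extracted from real Table objects (e.g. from `t.columns`) is assumed, not shown:

```python
def diff_columns(declared, reflected):
    """Compare {column_name: type_string} mappings for one table and
    return human-readable mismatches - the 'list of differences' idea
    from this thread, reduced to names and types only."""
    problems = []
    for name, type_ in declared.items():
        if name not in reflected:
            problems.append("missing in db: %s" % name)
        elif reflected[name] != type_:
            problems.append("type differs for %s: %s != %s"
                            % (name, type_, reflected[name]))
    for name in reflected:
        if name not in declared:
            problems.append("extra in db: %s" % name)
    return problems
```

A real implementation would also look at primary keys, nullability, defaults, and foreign keys, and would need per-dialect type normalization before comparing.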
[sqlalchemy] Re: How to use SessionContextExt with cherrypy?
theres multiple levels of issues with this. one is, it's not very clean to move objects between databases using sessions. to do so, you have to remove the _instance_key of the object and save it into the other session:

    f = Foo()
    sess1.save(f)
    sess1.flush()

    sess1.expunge(f)
    del f._instance_key

    sess2.save(f)
    sess2.flush()

the second is, a mapper does not define *where* you are storing your object, it only defines *how*. therefore you *never* make a second mapper for a class, unless you are using one of two very specific recipes which are mentioned in the docs (which this example is not).

third, the SessionContextExt shouldn't really interfere with this operation, in that it won't prevent you from expunging the object from one session and saving it into another, but it does make it confusing. SessionContextExt is just the tiniest little convenience feature: your objects automatically get saved into a session, and lazy loaders know how to find a session. but if you are moving objects between sessions, I would think it's just going to confuse matters, since it's making decisions for you behind the scenes.

i think it's important to try to make your code work while using the minimal (minimal here meaning, none) number of add-ons to start with, so that you have something which works and can be understood. then the add-ons can be implemented afterwards, as the need for them arises.

On Jul 24, 2007, at 10:18 AM, alex.schenkman wrote:
> Hi:
> I'm new to writing web apps and I'm using cherrypy with sqlalchemy. As I
> understand it, each user navigating the site and clicking on pages gets
> a new thread, and thus it is necessary to use sqlalchemy in a
> thread-safe manner. After reading the docs I assume that I have to use
> SessionContextExt, but I can't figure out how.
> As a test I try to write records to two different databases, but I
> always get a mapper error:
>
>     sqlalchemy.exceptions.ArgumentError: Class '<class '__main__.Doc'>'
>     already has a primary mapper defined with entity name 'None'.
>
> Any hint is much appreciated!! This is the code I try:
>
>     from sqlalchemy import *
>     from sqlalchemy.ext.sessioncontext import SessionContext
>
>     class Doc(object):
>         def __init__(self, id, path, state):
>             self.DocID = id
>             self.Path = path
>             self.Complete = state
>
>     if __name__ == '__main__':
>         db1 = create_engine('sqlite:///test.db')
>         db1.echo = False
>         metadata = BoundMetaData(db1)
>         docs = Table('docs', metadata, autoload=True)
>         ctx1 = SessionContext(create_session)
>         mapper(Doc, docs, extension=ctx1.mapper_extension)
>         d = Doc(43, '/etc/password', True)
>         ctx1.current.flush()
>
>         db2 = create_engine('sqlite:///test2.db')
>         db2.echo = False
>         metadata2 = BoundMetaData(db2)
>         d = Doc(15, '/etc/init', False)
>         ctx2 = SessionContext(create_session)
>         mapper(Doc, docs, extension=ctx2.mapper_extension)
>         ctx2.current.flush()
[sqlalchemy] Re: Consistency with DB while modifying metadata
On Tuesday 24 July 2007 17:30:27 Michael Bayer wrote:
> such a feature would make usage of table reflection, and then a
> comparison operation, along the lines of:
> ...
>     assert t.compare(t2)

yes, I was hoping for such a method (:-) And the best would be if it could produce a list/hierarchy of differences, which could then be iterated programmatically - and checked and resolved, or raised higher.

> but why not just use autoload=True across the board in the first place
> and eliminate the chance of any errors?

what do you mean? The db-model of the app will not be the db-model in the database - and the semantics will be gone. Example: from simplistic renaming of columns/tables, to splitting a class into class+subclass (table into 2 joined-tables), etc.
[sqlalchemy] Re: Using SA to move data between databases
Hi

2007/7/24, michael [EMAIL PROTECTED]:
> Another alternative is to actually use the db functionality. MSAccess
> and MSSQL both start with 'MS'. If I am not mistaken, those are
> interoperable. One can set up a 'link' and transfer the data, no?

I see no blasphemy, but that does not exactly address my personal issue (which is not exactly the same as Paul's, it seems). I will have, in a few months, clients running my software on mysql, others on mssql. I want to have a common backup format, so I can restore any backup on any supported db, and all of that should be doable by a Toto User (toto=dummy).

Having it in the application layer allows me to do that. And since I hate to re-do things, my approach will most probably be to use SA to dump and restore the data, even if it's a bit slow (the databases are not very big), and it will always be possible to optimize the process by doing db-specific operations. The pytables format looks attractive for this use because it's fast, scalable, compresses the data, and has a generic viewer.

My experience with SA is still a bit light, and I might say stupid things without seeing it, but that's the general idea. My two cents :-)

Regards,

Christophe
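The application-layer dump/restore Christophe describes can be sketched with a db-neutral intermediate structure. In this hedged sketch, JSON stands in for pytables and sqlite3 for the real backends; the function names are illustrative, not an SA API:

```python
import json
import sqlite3

def dump_table(conn, table):
    """Dump one table to a db-neutral, JSON-able structure - the
    'universal dump format' idea. pytables or CSV would slot in here."""
    cur = conn.execute("SELECT * FROM %s" % table)
    cols = [d[0] for d in cur.description]
    return {"table": table, "columns": cols,
            "rows": [list(r) for r in cur]}

def restore_table(conn, dump):
    """Re-insert a dump into any database with the same schema."""
    qs = ", ".join("?" * len(dump["columns"]))
    conn.executemany("INSERT INTO %s (%s) VALUES (%s)"
                     % (dump["table"], ", ".join(dump["columns"]), qs),
                     dump["rows"])
    conn.commit()
```

Because the intermediate structure carries column names rather than positions, the restore side tolerates column reordering between backends; types, however, would still need per-dialect handling.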
[sqlalchemy] Re: Consistency with DB while modifying metadata
On Jul 24, 2007, at 11:07 AM, svilen wrote:
> what do you mean? The db-model of the app will not be the db-model in
> the database - and the semantics will be gone. Example: from simplistic
> renaming of columns/tables, to splitting a class into class+subclass
> (table into 2 joined-tables) etc

ok, fine. anyway, feel free to add a trac ticket for this one - it'll need a volunteer.
[sqlalchemy] Re: How to use SessionContextExt with cherrypy?
On Jul 24, 2007, at 12:09 PM, alex.schenkman wrote:
>     class Session(object):
>         def __init__(self, name):
>             self.db = create_engine('sqlite:///%s' % name)
>             self.db.echo = False
>             self.metadata = BoundMetaData(self.db)
>             self.session = create_session()
>             self.db.docs = Table('docs', self.metadata, autoload=True)
>             self.db.mapper = mapper(Document, self.myDB.docs)

mappers are at the same level at which your mapped class is defined. So if you define your Document class at the module level, so must your mapper be defined. also, if you defined classes and mappers within a function for each session, that wouldn't scale anyway, since the mappers get stored in a global registry and you'd run out of memory after many distinct users visited the site.

so if your mapper is at the module level, so are your Tables and MetaData. Sessions are not; so bind your individual sessions to the engines directly. (Engines are usually module level too, but in this case you are opening many individual sqlite files, so they're local to your Session object)

    metadata = MetaData()

    class Document(object):
        pass

    # can't autoload=True here unless you have a specific SQLite file
    # that is safe to use. doesn't your system need to create the tables
    # inside the sqlite databases anyway?
    docs = Table('docs', metadata, Column(...))

    mapper(Document, docs)

    class Session(object):
        def __init__(self, name):
            self.db = create_engine('sqlite:///%s' % name)
            self.session = create_session(bind=self.db)

that's it. you don't need to reference the mapper anywhere, just self.session and maybe docs.
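The shape of this advice - definitions at module level, one bound session per user/file - can be demonstrated without SA at all. A sketch with sqlite3 standing in for the engine and a plain CREATE TABLE standing in for the module-level metadata (all names here are illustrative):

```python
import sqlite3

# module level: the schema, like Tables and mappers, is defined once
SCHEMA = "CREATE TABLE IF NOT EXISTS docs (id INTEGER PRIMARY KEY, path TEXT)"

class Session(object):
    """One connection per session/user; only the binding to a
    particular sqlite file is per-instance state."""
    def __init__(self, name):
        self.db = sqlite3.connect(name)   # per-user sqlite file
        self.db.execute(SCHEMA)

    def save(self, id_, path):
        self.db.execute("INSERT INTO docs VALUES (?, ?)", (id_, path))
        self.db.commit()
```

Two Session instances pointed at different files then write to different databases without redefining any module-level structure, which is the point of the advice above.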
[sqlalchemy] Re: Using SA to move data between databases
On Tue, 24 Jul 2007 17:52:55 +0200 Christophe de VIENNE [EMAIL PROTECTED] wrote:
> I want to have a common backup format, so I can restore any backup on
> any supported db, and all of that should be doable by a Toto User
> (toto=dummy). Having it in the application layer allows me to do that.

Hello Christophe,

If I am reading your intent (and forgive me if I am wrong), you would like to have one backup/restore routine for the same schema but on a variety of databases. And you want the user to be responsible for doing both actions.

Your references to a universal dump format and a common backup format point squarely at a flat file, which can sometimes be [tab|space|comma|pipe|etc] delimited. Since you said that your databases will be small, text is truly universal. It does not require python or any scripting language and can easily be viewed in any text editor. As a side note, I have found pipe delimited to be the least troublesome when moving between databases that have user-input data.

The first thing that comes to mind is that there should probably already be an automated backup scheduled. Users will forget, and when a restore is needed, they will want fresh data. Each database has its own export-to-text command and its own import-from-text command, with related syntax. This can be triggered with cron on all unixes; on Windows, MSSQL has its own scheduler. (Of course, one could also have a button do the same thing, in *addition* to routine backups.)

Are you asking for features in SA to handle scheduled backups and user-initiated restores?

--
michael
[sqlalchemy] Re: Consistency with DB while modifying metadata
> assert t.compare(t2)

yes, i was hoping for such a method (:-) And best of all would be if it could produce a list/hierarchy of differences, which can then be iterated programmatically - and checked and resolved, or raised higher.

> but why not just use autoload=True across the board in the first place and eliminate the chance of any errors?

What do you mean? The db-model of the app will not be the db-model in the database - and the semantics will be gone. Examples: anything from the simplistic renaming of columns/tables, to splitting a class into class + subclass (one table into 2 joined tables), etc.

> ok, fine. anyway, feel free to add a trac ticket for this one - it'll need a volunteer.

ticket #680 - have a look whether what i wrote is what was meant in this thread. i may look into it after 2-3 weeks - unless someone does it ahead of me ;P)
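To make the "list/hierarchy of differences" idea concrete, here is a rough illustration of the kind of structured result such a compare() could return. This is not SA API - it compares two plain {column_name: type_string} mappings; a real implementation would walk Table and Column objects and their types:

```python
def diff_schemas(declared, reflected):
    """Compare two {column_name: type_string} mappings and return a
    structured set of differences that the caller can iterate over
    and check, resolve, or raise higher."""
    missing = sorted(set(declared) - set(reflected))   # in model, not in db
    extra = sorted(set(reflected) - set(declared))     # in db, not in model
    changed = {name: (declared[name], reflected[name])
               for name in set(declared) & set(reflected)
               if declared[name] != reflected[name]}
    return {'missing': missing, 'extra': extra, 'changed': changed}

# Hypothetical model vs. what reflection (autoload) found in the db:
model = {'id': 'INTEGER', 'name': 'VARCHAR(50)'}
db = {'id': 'INTEGER', 'name': 'TEXT', 'created': 'TIMESTAMP'}
diff = diff_schemas(model, db)
```

An empty result in all three slots would be the equivalent of `t.compare(t2)` returning True; anything else gives the caller enough detail to decide what to resolve and what to raise.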
[sqlalchemy] Re: Cumulative Select
I had to put one little trick in here to make the "something <= subquery" comparison work, which is something I should look into; otherwise it went straight in.

from sqlalchemy import *

transaction = table('transaction',
    column('id'),
    column('id_product'),
    column('price'),
    column('quantity')
)

f1 = transaction.alias('f1')
f2 = transaction.alias('f2')

subquery = select([func.coalesce(func.sum(f2.c.quantity), 0)],
    and_(f2.c.id >= f1.c.id,
         f2.c.id_product == bindparam('f2item')),
    scalar=True)

s = select([f1.c.id,
            (subquery - bindparam('offset')).label('offset')],
    and_(f1.c.id_product == bindparam('f1item'),
         bindparam('something') <= subquery),
    order_by=[desc(f1.c.id)],
    limit=1)

print s

I think in 0.4 I'm going to deprecate scalar=True and instead have you say select(...).scalar()

On Jul 24, 2007, at 9:38 AM, Koen Bok wrote:

I need to do a cumulative select on postgres, and I got it to work with some extreme SQL query, but I was wondering if it could be done more easily, without having to drop to SQL, using SQLAlchemy statements.

Let's say I have a table with stock transactions like this:

Transaction
id  id_product  price  quantity

And it is filled like this:

1  1  12  10
2  1  13   5
3  1  12   3
4  1  11   6
5  1  10   5

Now at moment X my stock is 13 and I want to know the costs for each product in my stock. So I add a cumulative column to select on, and expect to get the last three rows back, as their cumulative total is >= my stock:

                        CUM
1  1  12  10             29
2  1  13   5             19
3  1  12   3             14   this
4  1  11   6             11   this
5  1  10   5              5   and this...
Extra info: This is the query I currently use to get the transaction ID and offset back:

SELECT f1.id,
  ( SELECT coalesce(sum(quantity), 0)
    FROM transaction f2
    WHERE f2.id >= f1.id AND f2.id_item = %s ) - %s AS offset
FROM transaction f1
WHERE f1.id_item = %s
  AND %s <= ( SELECT coalesce(sum(quantity), 0)
              FROM transaction f2
              WHERE f2.id >= f1.id AND f2.id_item = %s )
ORDER BY f1.id DESC
LIMIT 1
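The selection rule in Koen's example can also be stated in plain Python, which may make the correlated subquery easier to follow: walk the transactions newest-first and keep taking rows until the running quantity covers the current stock. A small sketch, using the sample data from the post:

```python
def rows_covering_stock(transactions, stock):
    """Walk transactions newest-first, accumulating quantity, and return
    the rows whose running total is needed to cover the current stock -
    the same rows the cumulative select is after."""
    picked, total = [], 0
    for row in sorted(transactions, key=lambda r: r['id'], reverse=True):
        if total >= stock:
            break
        total += row['quantity']
        picked.append(row)
    return picked

# Sample data from the post (id_product omitted: all rows are product 1).
txns = [
    {'id': 1, 'price': 12, 'quantity': 10},
    {'id': 2, 'price': 13, 'quantity': 5},
    {'id': 3, 'price': 12, 'quantity': 3},
    {'id': 4, 'price': 11, 'quantity': 6},
    {'id': 5, 'price': 10, 'quantity': 5},
]
```

With a stock of 13 this picks rows 5, 4 and 3 (running totals 5, 11, 14) - exactly the three rows marked "this" in the example, the last of which pushes the cumulative total to >= the stock.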
[sqlalchemy] Re: Cumulative Select
And again you made my day...

On Jul 24, 7:17 pm, Michael Bayer [EMAIL PROTECTED] wrote: [snip]
[sqlalchemy] Re: MSSQL Time
The list is useful only for a hacker on the MSSQL module, not for general users, but FWIW, I've added it to http://www.sqlalchemy.org/trac/wiki/DatabaseNotes

I generally try to respond directly to help encourage anyone willing to offer a hand on the MSSQL module, as I don't have the time these days to get all the tests fixed, so sorry if the repeated postings are bugging you.

On 7/24/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

> > The reasons for the failures that I've had time to look into have so far had as much to do with the tests as with the MSSQL module. They are mostly fixable within the constraints that the various supported MSSQL DBAPI modules impose:
> >
> > -- DBLib (required by pymssql) does not like Unicode, and has a 30-character identifier limit
> > -- pymssql seems to have trouble with multiple cursors on a single connection, and doesn't properly implement rowcounts on executemany()
> > -- pyodbc doesn't work with multiple result sets (I believe a patch to fix this is about)
> > -- pyodbc has trouble with the SQL 2000 syntax of "select scope_identity()"
> > -- There are issues on all DBAPI modules (really, an issue with MSSQL) comparing Python strings to MSSQL TEXT columns
> > -- Probably a half-dozen more issues that I've missed here.
>
> one suggestion - why not make one organisational ticket and list all these there? compiled from other mails too? and any other issues about mssql? and keep it up-to-date? otherwise it could be 20 separate tickets... with little (if any) relation to SA. or it can be a wiki page - probably better, easier to update. i see such a list for the n-th time in the last 2-3 months; IMO it's a waste of time to recall all this, let's put it into some place...
>
> ciao
> svilen
[sqlalchemy] Re: MSSQL Time
On Tuesday 24 July 2007 23:29:52 Rick Morrison wrote:

> The list is useful only for a hacker on the MSSQL module, not for general users, but FWIW, I've added it to http://www.sqlalchemy.org/trac/wiki/DatabaseNotes

thanks

> I generally try to respond directly to help encourage anyone willing to offer a hand on the MSSQL module, as I don't have the time these days to get all the tests fixed, so sorry if the repeated postings are bugging you.

no, you got me wrong here - it's not about bugging. i'll need mssql working in several months, so i'll have to hack/patch it myself/ourselves - and why start from scratch if parts of the puzzle are already known... keep adding to the list if you find other issues - or various bits around these - maybe one day the list will start shrinking instead of growing...

ciao
svilen