[sqlalchemy] Re: memory and cpu usage growing like crazy
On Thursday 19 June 2008 01:43:16 Arun Kumar PG wrote:

> one more point that I forgot to mention in the workflow, so I re-wrote it again:
>
> - search request comes in
> - if the ORM mapping is not created, it gets created now (this only happens once)
> - a new session is created using orm.create_session(weak_identity_map=True). This new session is added to a Python dict like this:
>
>       resources = {SESSION: session, OTHER_RESOURCE: obj}
>
>   and then this resources dict is attached to the current request thread (this is done so that different DAOs can access the same session and other resources from the current thread)
> - all ORM queries are fired, results processed
> - finally, the current thread is accessed again and teardown happens as below:
>
>       resources = currentThread().resources
>       resources[SESSION].clear()
>       del resources
>
> My question is that I am deleting the resources dict but not resources[SESSION] (the session object), which might still be pointed to by an SA data structure created as part of the initial orm.create_session call? I have not done a deep dive into the SA source code; I'm just guessing.

So never delete that session? And that thread.resources thing? Do you have non-pure-Python objects involved? It's easy to forget a reference in there. Also (repeating myself) check for caches, OR things that were not intended to be caches but actually became one (anything that lives forever is a candidate). It's up to you; no one else can do the dirty work...

--~--~-~--~~~---~--~~ You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com To unsubscribe from this group, send email to [EMAIL PROTECTED] For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en -~--~~~~--~~--~--~---
[sqlalchemy] embedding aggregates into query or mapper
Hi! Let's take a typical schema: Company - Employee. How can I add a scalar column (in a query or a mapper) to Company containing the employee count? E.g. query(Company).all() will return a list of Company instances, and each will have an extra property with the employee count. But I don't want that retrieved by an extra query for each company; I want it all in one query, like:

    SELECT companies.*, count(employees.*)

David
[sqlalchemy] Re: embedding aggregates into query or mapper
On Thursday 19 June 2008 14:05:08 ml wrote:

> Hi! Let's take a typical schema: Company - Employee. How can I add a scalar column (in a query or a mapper) to Company containing the employee count? [...] I don't want that retrieved by an extra query for each company; I want it all in one query, like SELECT companies.*, count(employees.*)

Do you want it once per query, or do you want it regularly updated/cached?

For the first, you can add column expressions to the query:
http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_expressions
which would be okay if you don't do it frequently.

For the second, there is SQLAlchemy Aggregator: http://dev.gafol.net/t/aggregator (there's docs + package + svn), or my version in svn:

    svn co https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/misc/aggregator

We haven't merged in a long while, so there may be some small differences (e.g. I've added 0.5 support to dbcook's copy). For an example, see its tests or
https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/misc/aggregator4dbcook_example.py

have fun
svilen
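For the once-per-query case, the SQL shape the original poster asked for is just a LEFT JOIN plus GROUP BY. Here it is sketched with the stdlib sqlite3 module; the table and column names are invented for illustration, and this shows the statement a mapper column expression would need to generate, not SA API itself.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE companies (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employees (id INTEGER PRIMARY KEY,
                            company_id INTEGER REFERENCES companies(id));
    INSERT INTO companies VALUES (1, 'Acme'), (2, 'Empty Inc');
    INSERT INTO employees VALUES (1, 1), (2, 1);
""")

# LEFT JOIN keeps companies with zero employees; COUNT(employees.id)
# counts only matched rows, so those companies get 0.
rows = conn.execute("""
    SELECT companies.id, companies.name, COUNT(employees.id) AS emp_count
    FROM companies
    LEFT JOIN employees ON employees.company_id = companies.id
    GROUP BY companies.id, companies.name
    ORDER BY companies.id
""").fetchall()
print(rows)  # [(1, 'Acme', 2), (2, 'Empty Inc', 0)]
```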
[sqlalchemy] Installing pyodbc
Hello, I'm just getting started with Python and SQLAlchemy and have started through the SA (0.4.6) tutorial on a Windows XP machine. Having successfully done some exercises with the SQLite database, I want to try connecting to a legacy SQLServer database in my lab. I am attempting to install and use pyodbc, so I did the following:

-- session follows --

    C:\> easy_install pyodbc
    Searching for pyodbc
    Reading http://pypi.python.org/simple/pyodbc/
    Reading http://pyodbc.sourceforge.net
    Reading https://sourceforge.net/project/showfil … _id=162557
    Best match: pyodbc 2.0.58
    Downloading http://downloads.sourceforge.net/pyodbc … 2-py2.5.exe?modtime=1208279386&big_mirror=0
    Processing pyodbc-2.0.58.win32-py2.5.exe
    creating 'c:\docume~1\ras~1.kel\locals~1\temp\easy_install-8vqzth\pyodbc-2.0.58-py2.5-win32.egg'
    and adding 'c:\docume~1\ras~1.kel\locals~1\temp\easy_install-8vqzth\pyodbc-2.0.58-py2.5-win32.egg.tmp' to it
    Moving pyodbc-2.0.58-py2.5-win32.egg to c:\python25\lib\site-packages
    Adding pyodbc 2.0.58 to easy-install.pth file
    Installed c:\python25\lib\site-packages\pyodbc-2.0.58-py2.5-win32.egg
    Processing dependencies for pyodbc
    Finished processing dependencies for pyodbc

-- session precedes --

So, that didn't look bad, but when I tried `import pyodbc`:

    ImportError: No module named pyodbc

When I look in my C:\Python25\Lib\site-packages directory I find the following:

    Directory of C:\Python25\Lib\site-packages
    06/17/2008  00:24    <DIR>          .
    06/17/2008  00:24    <DIR>          ..
    06/17/2008  00:23            31,421 adodbapi-2.1-py2.5.egg
    06/13/2008  21:34    <DIR>          beaker-0.9.4-py2.5.egg
    06/13/2008  21:33             5,720 decorator-2.2.0-py2.5.egg
    06/17/2008  00:23               629 easy-install.pth
    06/13/2008  21:34    <DIR>          formencode-0.7.1-py2.5.egg
    06/13/2008  21:33    <DIR>          mako-0.1.8-py2.5.egg
    06/13/2008  21:33    <DIR>          nose-0.10.3-py2.5.egg
    06/13/2008  21:34    <DIR>          paste-1.4.2-py2.5.egg
    06/13/2008  21:34    <DIR>          pastedeploy-1.3.1-py2.5.egg
    06/13/2008  21:34    <DIR>          pastescript-1.3.6-py2.5.egg
    06/13/2008  21:33    <DIR>          Pylons-0.9.6.2-py2.5.egg
    06/17/2008  00:22            96,341 pyodbc-2.0.58-py2.5-win32.egg
    10/28/2005  19:07               121 README.txt
    06/13/2008  21:34    <DIR>          Routes-1.7.3-py2.5.egg
    06/14/2008  01:06           327,726 setuptools-0.6c8-py2.5-win32.egg
    06/13/2008  21:14           324,858 setuptools-0.6c8-py2.5.egg
    06/14/2008  01:06                36 setuptools.pth
    06/13/2008  21:33            41,161 simplejson-1.7.1-py2.5.egg
    06/14/2008  01:16    <DIR>          sqlalchemy-0.4.6-py2.5.egg
    06/14/2008  00:49    <DIR>          sqlalchemy-0.5.0beta1-py2.5.egg
    06/13/2008  21:34    <DIR>          webhelpers-0.3.2-py2.5.egg

Instead of a directory named pyodbc-2.0.58-py2.5-win32.egg there's just a simple file by that name. Anybody know what I've done wrong?

Thanks in advance
Rob
[sqlalchemy] mapping and to the back
Good day community. First of all I want to say that I'm on my 2nd day with Python and SQLAlchemy, so maybe I don't understand the whole idea. I already have a database (PostgreSQL). For mapping my database to classes I use something like this (I'm using SQLAlchemy with TurboGears):

    table_object = Table('table_name', metadata, schema='schema_name', autogenerate=True)

    class TableClass(object):
        def somemethod(self, param):
            pass

    mapper(TableClass, table_object)

That works fine for me and I can write queries with TableClass ... cool!!!

Now the different task: somewhere in the controller (where I can see mapper, metadata and so on) I get the table_name and the schema_name. It's easy to create the table inside the method... but I want to find which table class was mapped to this pair (table_name, schema_name), and to use that exact method. I want something like this:

    object = some_sql_alchemy_container.some_method(table_name, schema_name)
    object.somemethod(with_param)

Is this possible? If yes - how to achieve this result?

Best regards,
Ilya Dyoshin
[sqlalchemy] Py 2.3 supported in SA 0.5?
Will Python 2.3 still be supported by SA 0.5? I noticed that sqlalchemy.ext.orderinglist uses the new decorator syntax.

-- Christoph
[sqlalchemy] Re: mapping and to the back
Once mapped, the link is the other way around: class - mapper - sqltable. The backlink... you have to cache it yourself when you create things, e.g.

    mytablemap[tablename, schemaname] = TableClass

Why do you need that? I mean, why start from the tablename?

On Thursday 19 June 2008 18:48:42 zipito wrote:

> [original question quoted in full; see above]
[sqlalchemy] Re: mapping and to the back
On Jun 19, 20:12, [EMAIL PROTECTED] wrote:

> once mapped, the link is the other way around: class - mapper - sqltable. the backlink... you have to cache it yourself when you create things, e.g. mytablemap[tablename, schemaname] = TableClass
> why do you need that? i mean, why start from the tablename?

Because my application is designed that way - every record in the database is a document with its unique id (additionally with rights). There is a document_types reference which shows the document types with their location in the database - so the user clicks "I want to get the content of the object" and receives the exact content - but in the methods I define additional filters. So it would be great to make this work.

So, creating my own tables cache... that's a solution... but I was thinking there might be something already there that would help me.

An additional question (maybe off the topic of my base question): when I autogenerate a table from the database and then map it to a TableClass, how can I get the objects from a one-to-many relation? I.e. 2 simple tables with a 1-to-many relation, autogenerated:

    table1 = Table(..., autogenerate=True)
    class Table1Class(object): pass

    table2 = Table(..., autogenerate=True)
    class Table2Class(object): pass

    mapper(Table1Class, table1)
    mapper(Table2Class, table2)

Would the one-to-many relation between Table1Class and Table2Class be generated? If yes - how do I get all dependent records of the slave table (Table2Class) from an instance of the master table (Table1Class)?

    table1ref = Table1Class()
    table2refs = table1ref.getall_table2_subsequent() ???
[sqlalchemy] Re: mapping and to the back
On Thursday 19 June 2008 20:38:08 zipito wrote:

> Because my application is designed that way - every record in the database is a document with its unique id (additionally with rights). There is a document_types reference which shows the document types with their location in the database - so the user clicks "I want to get the content of the object" and receives the exact content - but in the methods I define additional filters. So it would be great to make this work.

Where do you get the tablename from? I don't get it, but it's your app, so whatever.

> So, creating my own tables cache... that's a solution... but I was thinking there might be something already there that would help me.
>
> An additional question (maybe off the topic of my base question): when I autogenerate a table from the database and then map it to a TableClass, how can I get the objects from a one-to-many relation?

You'll probably get foreign keys and other sql-schema stuff, but actual mapper relations you have to set up yourself - maybe guessing from foreign keys, I don't know. There is SqlSoup - http://www.sqlalchemy.org/trac/wiki/SqlSoup - which may do what you want.

> [code example snipped; see the previous message]
[sqlalchemy] Re: Py 2.3 supported in SA 0.5?
Christoph Zwerschke wrote:

> Will Python 2.3 still be supported by SA 0.5? I noticed that sqlalchemy.ext.orderinglist uses the new decorator syntax.

Oops, that snuck in accidentally and is fixed in the trunk for the time being. However, 2.3 will probably not be supported in the final 0.5.0 release. We've had a couple of threads on 2.3 support and no one came to 2.3's defense, so it's on the chopping block. If the migration from list comprehensions to generators negatively impacts performance, then perhaps 2.3 support will stay, but barring that...
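For anyone stuck on 2.3, the decorator syntax is only sugar; the pre-2.4 spelling is a plain reassignment. A small illustrative example (logged is a made-up decorator, not the one used in orderinglist):

```python
def logged(fn):
    # Wrap fn and count how many times it is called.
    def wrapper(*args, **kwargs):
        wrapper.calls += 1
        return fn(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

# Python 2.4+ syntax:
#
#     @logged
#     def f(x):
#         return x + 1
#
# Python 2.3-compatible equivalent, which is the form a 2.3-safe
# orderinglist would have to use:
def f(x):
    return x + 1
f = logged(f)

print(f(1))     # 2
print(f.calls)  # 1
```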
[sqlalchemy] sharding database elixir metadata.drop_all and metadata.create_all problem
I have tried this:

    a.a_metadata.drop_all(bind=db)
    a.a_metadata.create_all(bind=db)

and it doesn't create any tables, and I don't get any error at all. With setup_all(True, bind=db), it creates tables A and B on each engine. I just want table A on m1,m2 and table B on n1,n2. I have spent quite some time searching and going over the docs but I can't figure out what the problem is.

    ### file c.py
    #!/usr/bin/env python
    from sqlalchemy import create_engine
    import b
    import a

    m1 = create_engine("mysql://m1:[EMAIL PROTECTED]:3306/m1", echo=True)
    m2 = create_engine("mysql://m2:[EMAIL PROTECTED]:3306/m2", echo=True)
    n1 = create_engine("mysql://n1:[EMAIL PROTECTED]:3306/n1", echo=True)
    n2 = create_engine("mysql://n2:[EMAIL PROTECTED]:3306/n2", echo=True)

    # create tables
    for db in (m1, m2):
        a.a_metadata.drop_all(bind=db)
        a.a_metadata.create_all(bind=db)
        #setup_all(True, bind=db)

    for db in (n1, n2):
        # setup_all(True, bind=db)
        b.b_metadata.drop_all(bind=db)
        b.b_metadata.create_all(bind=db)

    ### file a.py
    from elixir import *
    from datetime import datetime

    a_metadata = metadata
    __metada__ = a_metadata

    class A(Entity):
        using_options(tablename='a', auto_primarykey=False)
        aname = Field(String(30), primary_key=True, nullable=False, unique=True)

    ### file b.py
    from elixir import *
    from datetime import datetime

    b_metadata = metadata
    __metada__ = b_metadata

    class B(Entity):
        using_options(tablename='b', auto_primarykey=False)
        bname = Field(String(30), primary_key=True, nullable=False, unique=True)
[sqlalchemy] Re: mapping and to the back
On Jun 19, 2008, at 11:48 AM, zipito wrote:

> I want something like this:
>
>     object = some_sql_alchemy_container.some_method(table_name, schema_name)
>     object.somemethod(with_param)
>
> Is this possible? If yes - how to achieve this result?

Make a mapper() decorator that does the accounting you want:

    from sqlalchemy.orm import mapper as _mapper

    my_table_registry = {}

    def mapper(cls, table, *args, **kwargs):
        my_table_registry[(table.name, table.schema)] = cls
        return _mapper(cls, table, *args, **kwargs)

    def my_method(table_name, schema_name):
        return my_table_registry[(table_name, schema_name)]
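The bookkeeping in such a wrapper can be exercised without a database. In the sketch below, _mapper and Table are trivial stand-ins for the SQLAlchemy objects (class_for is also a made-up name), so only the registry logic is real:

```python
# Stand-ins so the registry pattern runs without SQLAlchemy:
def _mapper(cls, table, *args, **kwargs):
    return cls  # the real sqlalchemy.orm.mapper returns a Mapper

class Table(object):
    def __init__(self, name, schema):
        self.name, self.schema = name, schema

my_table_registry = {}

def mapper(cls, table, *args, **kwargs):
    # Record the (table name, schema) -> class link before mapping.
    my_table_registry[(table.name, table.schema)] = cls
    return _mapper(cls, table, *args, **kwargs)

def class_for(table_name, schema_name):
    return my_table_registry[(table_name, schema_name)]

class TableClass(object):
    pass

mapper(TableClass, Table('table_name', 'schema_name'))
print(class_for('table_name', 'schema_name') is TableClass)  # True
```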
[sqlalchemy] Re: sharding database elixir metadata.drop_all and metadata.create_all problem
I'm not familiar with what __metada__ is, and this seems to be an elixir-specific issue. metadata.drop_all()/create_all() always do what they say.

On Jun 19, 2008, at 2:01 PM, lilo wrote:

> [full message and code snipped; see the original post above]
[sqlalchemy] Re: memory and cpu usage growing like crazy
On Jun 18, 2008, at 6:43 PM, Arun Kumar PG wrote:

> [workflow recap snipped; see the start of the thread]
>
> my question is that I am deleting the resources dict but not resources[SESSION] (the session object), which might still be pointed to by an SA data structure created as part of the initial orm.create_session call? I have not done a deep dive into the SA source code; I'm just guessing.

Nothing strongly references the SA session within SQLA's implementation (unless you are using SessionContext/scoped_session). As I suggested earlier, if you suspect this Session is still hanging around, write some debugging code that searches gc.get_objects() for it, after a gc.collect() has been called.

I think it also might be worth it to look into standard SQLA usage patterns instead of the customized things you're doing, i.e. declare mappers within modules and not functions, use SessionContext, etc.
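The gc-based search suggested here works on any class. A stdlib-only sketch (Session here is a stand-in class, not the SQLAlchemy one, and find_instances is a made-up helper name):

```python
import gc

class Session(object):
    """Stand-in for the SA Session; the technique is generic."""
    pass

def find_instances(cls):
    # Collect first so objects that are already garbage don't show up.
    gc.collect()
    return [o for o in gc.get_objects() if isinstance(o, cls)]

s = Session()
print(any(o is s for o in find_instances(Session)))  # True: s is reachable
del s
gc.collect()
# With no remaining references, the count should drop back toward zero;
# any survivors point to whatever is still holding the session alive.
print(len(find_instances(Session)))
```

Following each surviving object through gc.get_referrers() is the usual next step to find what is keeping it alive.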
[sqlalchemy] Re: mapping and to the back
> Make a mapper() decorator that does the accounting you want:
>
>     from sqlalchemy.orm import mapper as _mapper
>
>     my_table_registry = {}
>
>     def mapper(cls, table, *args, **kwargs):
>         my_table_registry[(table.name, table.schema)] = cls
>         return _mapper(cls, table, *args, **kwargs)
>
>     def my_method(table_name, schema_name):
>         return my_table_registry[(table_name, schema_name)]

Thanks, that is the solution! Now I understand why Python is called Python :)
[sqlalchemy] Re: memory and cpu usage growing like crazy
On Thursday 19 June 2008 21:49:33 Michael Bayer wrote:

> [quoted workflow snipped; see the earlier messages in this thread]
>
> nothing strongly references the SA session within SQLA's implementation (unless you are using SessionContext/scoped_session). As I suggested earlier, if you suspect this Session is still hanging around, write some debugging code that searches gc.get_objects() for it, after a gc.collect() has been called.
>
> I think it also might be worth it to look into standard SQLA usage patterns instead of the customized things you're doing, i.e. declare mappers within modules and not functions, use SessionContext, etc.

While on the theme, here's a function from dbcook.tests.util.sa_gentestbase.py:

    def memusage():
        import os
        pid = os.getpid()
        m = ''
        for l in file('/proc/%(pid)s/status' % locals()):
            l = l.strip()
            for k in 'VmPeak VmRSS VmData'.split():
                if l.startswith(k):
                    m += '; ' + l
        if m:
            print m

Reduce the query to smaller results, repeat it 100 times, and run the above each time...
[sqlalchemy] Re: sharding database elixir metadata.drop_all and metadata.create_all problem
Michael Bayer wrote:

> im not familiar with what __metada__ is and this seems to be an elixir specific issue. metadata.drop_all()/create_all() always do what they say.

Yeah, the metadatas he's creating are empty, so they aren't creating anything. He needs to associate the metadata with some tables. I've linked to (and quoted from) the relevant documentation over on the elixir list, where the original poster initially raised the question. Sorry for the noise.

-- Jonathan LaCour http://cleverdevil.org
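In plain SQLAlchemy terms, the split the original poster wants is one MetaData per table group, so each drop_all()/create_all() only touches its own tables. A sketch, using an in-memory SQLite engine in place of the four MySQL engines (how to bind Elixir entities to a non-default metadata is an Elixir question, covered on that list):

```python
from sqlalchemy import Column, MetaData, String, Table, create_engine, inspect

# One MetaData per table group: 'a' tables vs 'b' tables.
a_metadata = MetaData()
b_metadata = MetaData()

a = Table("a", a_metadata, Column("aname", String(30), primary_key=True))
b = Table("b", b_metadata, Column("bname", String(30), primary_key=True))

db = create_engine("sqlite://")  # stand-in for the m1/m2/n1/n2 engines
a_metadata.drop_all(bind=db)
a_metadata.create_all(bind=db)   # emits CREATE TABLE for 'a' only

names = inspect(db).get_table_names()
print("a" in names)  # True
print("b" in names)  # False
```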
[sqlalchemy] is metadata.dropall dialect aware?
Hi. I just ran into a bad loop of:

- creating some schema in postgres
- error in the middle of creation
- the schema is left in a sorry state, half there, half not
- on the next run, it complains about things (constraints etc.) not being present, as the tables are there but the constraints are not
- dropall does not work, complaining about 'cannot drop A as other stuff depends on it'

Only killing the whole db solved this mess. Any better way to deal with such things? In this case I just wanted to start anew. I see postgres has some DROP ... CASCADE construct; can that be used?

ciao
svil
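One hedged workaround, assuming metadata.drop_all() of this era does not emit CASCADE itself: build PostgreSQL DROP TABLE ... CASCADE statements and run them as raw SQL against the engine. cascade_drop_statements is a made-up helper, and the engine.execute() part is commented out since it needs a live database:

```python
def cascade_drop_statements(table_names, schema=None):
    # PostgreSQL's CASCADE also drops dependent constraints and views,
    # which is exactly what a half-created schema needs.
    prefix = schema + "." if schema else ""
    return ["DROP TABLE IF EXISTS %s%s CASCADE" % (prefix, name)
            for name in table_names]

stmts = cascade_drop_statements(["a", "b"], schema="public")
print(stmts[0])  # DROP TABLE IF EXISTS public.a CASCADE

# Against a real engine (hypothetical):
# for stmt in stmts:
#     engine.execute(stmt)
```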
[sqlalchemy] inconsistency in default arguments in trunk
Hi there, I just noticed on the trunk that on sessionmaker(), autocommit defaults to False, but autocommit defaults to True when you use create_session. autoflush is also True for sessionmaker, but for create_session it's False. Finally, autoexpire is True for sessionmaker, but False for create_session. Is this intentional?

Regards, Martijn