[sqlalchemy] Re: Using sqlalchemy in twisted.

2009-03-04 Thread 一首诗

I'm not quite sure, but I think I'm pretty careful about sharing objects
between threads.

First, I cache as few ORM objects as possible.  I tried to detach them,
but I found that once they are detached I can't access any of their
fields any more.

Second, I create new ORM objects based on client requests, pass them to
the Database class, merge them into the scoped session, change and
commit them, and then discard these objects.

Third, I frequently switch to SQLite to check whether any database
operation happens outside Database, because SQLite doesn't allow
multi-threaded access.

It actually seemed to work, until two or three days ago some cases
suddenly started hanging the server.

Ah, since I've already written lots of ORM code, I think maybe I
should change Database to use a dedicated thread to handle all
database operations.

That might become a bottleneck in my application, but I really can't
give up the ORM, as these mapped classes are used everywhere in my
application.
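
Something like this minimal sketch is what I have in mind (untested; it
assumes a Twisted release that provides threads.deferToThreadPool): a
one-thread pool serializes every SQLAlchemy call while the reactor stays
unblocked.

==
from twisted.internet import reactor, threads
from twisted.python.threadpool import ThreadPool

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

class Database(object):
    def __init__(self, conn_str):
        self.engine = create_engine(conn_str, echo=False)
        self.Session = scoped_session(
            sessionmaker(bind=self.engine, expire_on_commit=False))
        # One worker thread: all database calls run serially on it.
        self.pool = ThreadPool(minthreads=1, maxthreads=1, name='db')
        reactor.callWhenRunning(self.pool.start)
        reactor.addSystemEventTrigger('during', 'shutdown', self.pool.stop)

    def getObjectById(self, klass, id):
        return threads.deferToThreadPool(
            reactor, self.pool, self._getObjectById, klass, id)

    def _getObjectById(self, klass, id):
        sess = self.Session()
        try:
            return sess.query(klass).get(id)
        finally:
            self.Session.remove()  # release the session after each call
==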

On Mar 4, 7:26 pm, 一首诗  wrote:
> Hi, all
>
> I am using sqlalchemy in twisted in my project in the way below:
> defer any database operation so Twisted's main thread won't be
> blocked.
>
> And I use scoped_session, so that sessions won't have to be created
> again and again.
>
> ==
> class Database(object):
>     def __init__(self, conn_str):
>         self.conn_str = conn_str
>         self.engine = create_engine(self.conn_str, echo=False)
>         self.Session = scoped_session(sessionmaker(bind=self.engine,
>                                                    expire_on_commit=False))
>
>     def getObjectById(self, klass, id):
>         return threads.deferToThread(self._getObjectById, klass, id)
>
>     def _getObjectById(self, klass, id):
>         sess = self.Session()
>         return sess.query(klass).get(id)
> ==
>
> The code doesn't work.   When I limit the thread pool size to 1 with
>
> reactor.suggestThreadPoolSize(1)
>
> everything goes fine.  Otherwise the server gets blocked and must
> be killed with "kill -9 ...".
>
> The result conflicts with my understanding of sqlalchemy.  Since I
> don't share any object between threads, there should be no problem!
>
> Ah...  It's always risky to use something you haven't tried
> before...



[sqlalchemy] Re: Is it possible to find a column in a Table SchemaItem by the original database name?

2009-03-04 Thread phrrn...@googlemail.com

Some of the unit tests have parameters which don't refer to column
names, so this is the latest formulation:

if identity_column in [t.c[key].name
                       for key in self.compiled_parameters[0]
                       if hasattr(t.c, key)]:
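
For illustration, here is a tiny example with a made-up table showing why
the projection through t.c[key].name is needed: a Column constructed with
a 'key' is addressed by that key in the compiled parameters, while .name
still carries the database-level identifier.

from sqlalchemy import Table, Column, Integer, MetaData

t = Table('orders', MetaData(),
          Column('order_id', Integer, key='id', primary_key=True))

assert 'id' in t.c                    # the key used for bind parameters
assert t.c['id'].name == 'order_id'   # the database-level column name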


On Mar 4, 5:34 pm, "phrrn...@googlemail.com" 
wrote:
> It is common (for us) to have tables with an identity column and that
> column *not* be the primary key. I am dealing with this by querying
> the system catalogs and caching the result within the table meta-data
> object itself (probably a bad idea but doing it for expediency). The
> system catalog query returns -- of course -- the database-level
> identifier for the column with the identity property set. How do I
> find out if the insert parameters have a value for the identity
> column? I am using something like this which has a very evil look to
> it:
>
>  if identity_column in [t.c[key].name for key in
>                         self.compiled_parameters[0]]:
>
> pjjH
>
> if self.compiled.isinsert:
>     self.logger.debug('pre_exec for an INSERT')
>     t = self.compiled.statement.table
>     if self._table_identity_column(t) is not None:
>         self.HAS_IDENTITY = True
>         # This returns the database-level name of the column.
>         # Since the parameters may have different names (this
>         # will happen if a named parameter, 'key', was passed
>         # to the Column constructor), we need to project out the names.
>         identity_column = self._table_identity_column(t)
>         self.logger.debug([t.c[key].name for key in
>                            self.compiled_parameters[0]])
>         if identity_column in [t.c[key].name for key in
>                                self.compiled_parameters[0]]:
>             self.IDENTITY_INSERT = True
>             self.logger.debug('setting IDENTITY_INSERT ON FOR %s' % (t))
>
> # Detect if the table has an identity column by direct query against
> # the system catalogs.
> def _table_identity_column(self, t):
>     """Return the name of this table's identity column."""
>     # negative caching
>     if not hasattr(t, '_identity_column'):
>         t._identity_column = None
>         s = sql.select([syscolumns.c.name],
>                        from_obj=syscolumns.join(sysobjects))
>         s = s.where(sql.and_(sysobjects.c.name == t.name,
>                              sql.text("(status & 128 = 128)")))
>         r = self.connection.execute(s).fetchone()
>         if r is not None:
>             t._identity_column = r[0]
>     return t._identity_column



[sqlalchemy] Re: query - automatic lazy attribute expiration

2009-03-04 Thread Michael Bayer


there's a populate_existing() option which will lead to all the found
objects being repopulated; it's not often used, but you might try that.
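
a rough sketch of applying it, using the mapper names from the example
below; whether the lazy 'myattr' gets re-selected afterward is exactly
what's worth trying:

obj = sess.query(MyObject).populate_existing().first()
print obj.myattr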

ml wrote:
>
> Hi!
>
> Is it possible to force a query to automatically expire an instance's lazy
> attributes? I mean instances that were queried and accessed before
> the "second" query.
>
> Example:
>
> mapper(MyObject, my_table, properties={
>     "myattr": relation(ChildObject)
> })
>
> The following code will lead to only 3 SELECTs (because the attribute will
> be in the identity map due to the first access):
>
> ... clean identity map ...
> obj = sess.query(MyObject).first()
> print obj.myattr
> obj = sess.query(MyObject).first()
> print obj.myattr # this will not do a new SELECT
>
> I don't want to do something like:
> objs = sess.query(MyObject).all()
> for obj in objs:
> sess.expire(obj, "myattr")
>
> Is there any query option to solve that, or do I have to use sess.expire?
>
> Thanks.
>
> David
>
> >
>





[sqlalchemy] Is it possible to find a column in a Table SchemaItem by the original database name?

2009-03-04 Thread phrrn...@googlemail.com

It is common (for us) to have tables with an identity column and that
column *not* be the primary key. I am dealing with this by querying
the system catalogs and caching the result within the table meta-data
object itself (probably a bad idea but doing it for expediency). The
system catalog query returns -- of course -- the database-level
identifier for the column with the identity property set. How do I
find out if the insert parameters have a value for the identity
column? I am using something like this which has a very evil look to
it:

 if identity_column in [t.c[key].name for key in
                        self.compiled_parameters[0]]:


pjjH


if self.compiled.isinsert:
    self.logger.debug('pre_exec for an INSERT')
    t = self.compiled.statement.table
    if self._table_identity_column(t) is not None:
        self.HAS_IDENTITY = True
        # This returns the database-level name of the column.
        # Since the parameters may have different names (this
        # will happen if a named parameter, 'key', was passed
        # to the Column constructor), we need to project out the names.
        identity_column = self._table_identity_column(t)
        self.logger.debug([t.c[key].name for key in
                           self.compiled_parameters[0]])
        if identity_column in [t.c[key].name for key in
                               self.compiled_parameters[0]]:
            self.IDENTITY_INSERT = True
            self.logger.debug('setting IDENTITY_INSERT ON FOR %s' % (t))

# Detect if the table has an identity column by direct query against
# the system catalogs.
def _table_identity_column(self, t):
    """Return the name of this table's identity column."""
    # negative caching
    if not hasattr(t, '_identity_column'):
        t._identity_column = None
        s = sql.select([syscolumns.c.name],
                       from_obj=syscolumns.join(sysobjects))
        s = s.where(sql.and_(sysobjects.c.name == t.name,
                             sql.text("(status & 128 = 128)")))
        r = self.connection.execute(s).fetchone()
        if r is not None:
            t._identity_column = r[0]
    return t._identity_column




[sqlalchemy] Re: Seeing Queries in Postgres

2009-03-04 Thread Wichert Akkerman

Previously Alex Ezell wrote:
> 
> Sorry for the double post and the top post, but please disregard. I've
> been informed by a coworker that this is a PostgreSQL limitation on the
> length of the current_query column, and that thus far the PostgreSQL
> devs will not change it.

Postgres has a log_statement configuration setting which you can use to
make it log all queries to its logfile. Perhaps that would be useful for
you?

Wichert.

-- 
Wichert Akkerman It is simple to make things.
http://www.wiggy.net/   It is hard to make things simple.




[sqlalchemy] Re: Why do I have to begin a transaction?

2009-03-04 Thread Gunnlaugur Thor Briem
See:

http://www.sqlalchemy.org/docs/05/reference/orm/sessions.html#sqlalchemy.orm.create_session

“The defaults of create_session() are the opposite of that of
sessionmaker(); autoflush and expire_on_commit are False, autocommit is
True. [...] It is recommended to use sessionmaker() instead of
create_session().”

Isn't that it?

- G.


On Wed, Mar 4, 2009 at 7:21 PM, Mike Orr  wrote:

>
> I have a standalone utility using a model based on Declarative on
> MySQL using SQLAlchemy 0.5.2.  Most of the code is at the SQL level
> but at one point I use the ORM to update or insert a summary record.
> So I figured I'd use create_session because it's a single-threaded
> utility.
>
> ===
> sess = orm.create_session(bind=conn)
> q = sess.query(model.Monthly).filter_by(...)
> monthly = q.first()
> if not monthly:
>     monthly = model.Monthly()
>     ...
>     sess.add(monthly)
> ...
> sess.commit()
> ===
>
> That raises "sqlalchemy.exc.InvalidRequestError: No transaction is
> begun."  To work around that I have to put "sess.begin()" after
> creating the session.  But why?  I don't have to do this when using
> scoped_session in Pylons.  The SQLAlchemy docs seem to say that it
> automatically manages transactions if you don't change the default
> session arguments, and that this works identically with Session,
> create_session, and sessionmaker.  So why is it behaving differently
> here?
>
> --
> Mike Orr 
>
> >
>




[sqlalchemy] Re: Why do I have to begin a transaction?

2009-03-04 Thread Michael Bayer


create_session() has different defaults than sessionmaker() for historical
reasons.   for the usual defaults, use sessionmaker()().  if there's a
document which says create_session() has the *same* default behavior,
that's a bug.  I thought we had added some explicit note to the session
docs about this but I haven't checked lately.
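
a minimal sketch of the difference, reusing the conn/model setup from the
snippet below (the defaults noted in the comments are the documented
ones):

from sqlalchemy import orm

Session = orm.sessionmaker(bind=conn)   # autocommit=False, autoflush=True
sess = Session()
# ... query / add ...
sess.commit()                           # a transaction is already in progress

sess = orm.create_session(bind=conn)    # autocommit=True, autoflush=False
sess.begin()                            # must start the transaction explicitly
# ... query / add ...
sess.commit()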


Mike Orr wrote:
>
> I have a standalone utility using a model based on Declarative on
> MySQL using SQLAlchemy 0.5.2.  Most of the code is at the SQL level
> but at one point I use the ORM to update or insert a summary record.
> So I figured I'd use create_session because it's a single-threaded
> utility.
>
> ===
> sess = orm.create_session(bind=conn)
> q = sess.query(model.Monthly).filter_by(...)
> monthly = q.first()
> if not monthly:
>     monthly = model.Monthly()
>     ...
>     sess.add(monthly)
> ...
> sess.commit()
> ===
>
> That raises "sqlalchemy.exc.InvalidRequestError: No transaction is
> begun."  To work around that I have to put "sess.begin()" after
> creating the session.  But why?  I don't have to do this when using
> scoped_session in Pylons.  The SQLAlchemy docs seem to say that it
> automatically manages transactions if you don't change the default
> session arguments, and that this works identically with Session,
> create_session, and sessionmaker.  So why is it behaving differently
> here?
>
> --
> Mike Orr 
>
> >
>





[sqlalchemy] query - automatic lazy attribute expiration

2009-03-04 Thread ml

Hi!

Is it possible to force a query to automatically expire an instance's lazy
attributes? I mean instances that were queried and accessed before
the "second" query.

Example:

mapper(MyObject, my_table, properties={
    "myattr": relation(ChildObject)
})

The following code will lead to only 3 SELECTs (because the attribute will
be in the identity map due to the first access):

... clean identity map ...
obj = sess.query(MyObject).first()
print obj.myattr
obj = sess.query(MyObject).first()
print obj.myattr # this will not do a new SELECT

I don't want to do something like:
objs = sess.query(MyObject).all()
for obj in objs:
sess.expire(obj, "myattr")

Is there any query option to solve that, or do I have to use sess.expire?

Thanks.

David




[sqlalchemy] Why do I have to begin a transaction?

2009-03-04 Thread Mike Orr

I have a standalone utility using a model based on Declarative on
MySQL using SQLAlchemy 0.5.2.  Most of the code is at the SQL level
but at one point I use the ORM to update or insert a summary record.
So I figured I'd use create_session because it's a single-threaded
utility.

===
sess = orm.create_session(bind=conn)
q = sess.query(model.Monthly).filter_by(...)
monthly = q.first()
if not monthly:
    monthly = model.Monthly()
    ...
    sess.add(monthly)
...
sess.commit()
===

That raises "sqlalchemy.exc.InvalidRequestError: No transaction is
begun."  To work around that I have to put "sess.begin()" after
creating the session.  But why?  I don't have to do this when using
scoped_session in Pylons.  The SQLAlchemy docs seem to say that it
automatically manages transactions if you don't change the default
session arguments, and that this works identically with Session,
create_session, and sessionmaker.  So why is it behaving differently
here?

-- 
Mike Orr 




[sqlalchemy] Re: Seeing Queries in Postgres

2009-03-04 Thread Michael Bayer

Alex Ezell wrote:
>
>
>
> On Mar 4, 10:01 am, "Michael Bayer"  wrote:
>> Alex Ezell wrote:
>>
>> > We often do diagnostics on our PostgreSQL systems by looking at
>> > currently running queries with some sql like this:
>>
>> > select procpid, to_char((now() - query_start), 'HH24:MI:SS.MS') as
>> > query_time, client_addr as client_host, current_query
>> > from pg_stat_activity
>> > where current_query not ilike ''
>> > order by query_time desc;
>>
>> > However, since we've moved to sqlalchemy, we've found that we can no
>> > longer see the full text of the current_query because of all the
>> > aliasing that sqlalchemy does in its select statements. Has anyone had
>> > this issue or know of any workarounds whether they be sqlalchemy-based
>> > or in postgres?
>>
>> > Sorry if this is completely off-topic. I'm just at a loss for where to
>> > turn.
>>
>> can you be more specific how using aliases in SELECT statements prevents
>> them from being seen ?  do you mean that they're more difficult to read
>> ?
>> or just they're too long ?
>
> They are too long to see the full query and PostgreSQL has a
> limitation on the length of that current_query column that cannot be
> overcome. This really isn't a sqlalchemy issue at all.
>
>

there is a label_length parameter you can pass to create_engine() which
will shrink the size of column labels.  this can dramatically reduce the
size of the query.  If you set it to any value below 6, you'll get labels
like:

select table.foo as _1, table.bar as _2

etc.
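
a quick sketch of setting it (the connection URL here is just a
placeholder):

engine = create_engine('postgres://user:pw@host/dbname', label_length=5)

with a value that small, the generated labels collapse to _1, _2, ... as
above.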




[sqlalchemy] miss for postgres_from in update() :(

2009-03-04 Thread sector119

Hi all!

u = transactions_table.update().\
    where(and_(transactions_table.c.commit_date == current_date,
               transactions_table.c.serial == serial,
               transactions_table.c.user_id == users_table.c.id,
               users_table.c.office_id == id)).\
    values(rollback_date=current_date, rollback_time=current_time)

When I perform this query I get (ProgrammingError) missing FROM-clause
entry for table "users".

I use PostgreSQL and I can't find any postgres_from keyword argument
in update(). Is it possible to add it, where should I look to try to
implement it, or how else can I perform this query with sqlalchemy?
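
One workaround sketch I'm considering (untested; same tables and
variables as above, and assuming "from sqlalchemy import select, and_") is
to fold the users condition into a correlated IN subquery, so the UPDATE
needs no extra FROM entry:

u = transactions_table.update().\
    where(and_(transactions_table.c.commit_date == current_date,
               transactions_table.c.serial == serial,
               transactions_table.c.user_id.in_(
                   select([users_table.c.id]).
                   where(users_table.c.office_id == id)))).\
    values(rollback_date=current_date, rollback_time=current_time)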

Thanks a lot!



[sqlalchemy] Re: Seeing Queries in Postgres

2009-03-04 Thread Alex Ezell



On Mar 4, 10:01 am, "Michael Bayer"  wrote:
> Alex Ezell wrote:
>
> > We often do diagnostics on our PostgreSQL systems by looking at
> > currently running queries with some sql like this:
>
> > select procpid, to_char((now() - query_start), 'HH24:MI:SS.MS') as
> > query_time, client_addr as client_host, current_query
> > from pg_stat_activity
> > where current_query not ilike ''
> > order by query_time desc;
>
> > However, since we've moved to sqlalchemy, we've found that we can no
> > longer see the full text of the current_query because of all the
> > aliasing that sqlalchemy does in its select statements. Has anyone had
> > this issue or know of any workarounds whether they be sqlalchemy-based
> > or in postgres?
>
> > Sorry if this is completely off-topic. I'm just at a loss for where to
> > turn.
>
> can you be more specific how using aliases in SELECT statements prevents
> them from being seen ?  do you mean that they're more difficult to read ?
> or just they're too long ?

They are too long to see the full query and PostgreSQL has a
limitation on the length of that current_query column that cannot be
overcome. This really isn't a sqlalchemy issue at all.

Thanks for the quick reply!

/alex



[sqlalchemy] Re: Seeing Queries in Postgres

2009-03-04 Thread Alex Ezell

Sorry for the double post and the top post, but please disregard. I've
been informed by a coworker that this is a PostgreSQL limitation on the
length of the current_query column, and that thus far the PostgreSQL
devs will not change it.

Sorry for the trouble.

/alex

On Mar 4, 9:56 am, Alex Ezell  wrote:
> We often do diagnostics on our PostgreSQL systems by looking at
> currently running queries with some sql like this:
>
> select procpid, to_char((now() - query_start), 'HH24:MI:SS.MS') as
> query_time, client_addr as client_host, current_query
> from pg_stat_activity
> where current_query not ilike ''
> order by query_time desc;
>
> However, since we've moved to sqlalchemy, we've found that we can no
> longer see the full text of the current_query because of all the
> aliasing that sqlalchemy does in its select statements. Has anyone had
> this issue or know of any workarounds whether they be sqlalchemy-based
> or in postgres?
>
> Sorry if this is completely off-topic. I'm just at a loss for where to
> turn.
>
> Thanks!



[sqlalchemy] Re: Seeing Queries in Postgres

2009-03-04 Thread Michael Bayer

Alex Ezell wrote:
>
> We often do diagnostics on our PostgreSQL systems by looking at
> currently running queries with some sql like this:
>
> select procpid, to_char((now() - query_start), 'HH24:MI:SS.MS') as
> query_time, client_addr as client_host, current_query
> from pg_stat_activity
> where current_query not ilike ''
> order by query_time desc;
>
> However, since we've moved to sqlalchemy, we've found that we can no
> longer see the full text of the current_query because of all the
> aliasing that sqlalchemy does in its select statements. Has anyone had
> this issue or know of any workarounds whether they be sqlalchemy-based
> or in postgres?
>
> Sorry if this is completely off-topic. I'm just at a loss for where to
> turn.

can you be more specific how using aliases in SELECT statements prevents
them from being seen ?  do you mean that they're more difficult to read ? 
or just they're too long ?





[sqlalchemy] Seeing Queries in Postgres

2009-03-04 Thread Alex Ezell

We often do diagnostics on our PostgreSQL systems by looking at
currently running queries with some sql like this:

select procpid, to_char((now() - query_start), 'HH24:MI:SS.MS') as
query_time, client_addr as client_host, current_query
from pg_stat_activity
where current_query not ilike ''
order by query_time desc;

However, since we've moved to sqlalchemy, we've found that we can no
longer see the full text of the current_query because of all the
aliasing that sqlalchemy does in its select statements. Has anyone had
this issue or know of any workarounds whether they be sqlalchemy-based
or in postgres?

Sorry if this is completely off-topic. I'm just at a loss for where to
turn.

Thanks!



[sqlalchemy] Re: Using sqlalchemy in twisted.

2009-03-04 Thread Gunnlaugur Briem

Hi 一首诗,

What database engine are you using? On PostgreSQL at least, table
creation and dropping (and some other operations) take an ACCESS
EXCLUSIVE lock on the table, and will wait for this lock
indefinitely if there are open transactions hanging around. My app
creates and drops tables willy-nilly, not just at start-up, and I find
that I have to be very careful about session lifetimes. So I gave up
scoped_session entirely.

Note that you don't need to avoid creating sessions again and again
(not to save on performance anyway), see
http://www.sqlalchemy.org/docs/05/session.html#frequently-asked-questions
"Sessions are very inexpensive to make, and don't use any resources
whatsoever until they are first used...so create some!" In particular,
note that a session is distinct from a database connection -- database
connections are (typically) expensive to create, and SQLAlchemy
handles connection pooling to save on those costs. That's probably
what you were intending, and it happens even if you keep making new
sessions.
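
Concretely, a rough sketch (untested, adapted from the Database class
quoted below): keep one sessionmaker() factory, open a short-lived
session per call, and close it when done -- the engine's connection pool,
not the session object, is what saves the expensive setup work.

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

class Database(object):
    def __init__(self, conn_str):
        self.engine = create_engine(conn_str, echo=False)
        self.Session = sessionmaker(bind=self.engine, expire_on_commit=False)

    def _getObjectById(self, klass, id):
        sess = self.Session()      # cheap: no connection until first use
        try:
            return sess.query(klass).get(id)
        finally:
            sess.close()           # hands the pooled connection back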

Regards,

- Gulli



On Mar 4, 11:26 am, 一首诗  wrote:
> Hi, all
>
> I am using sqlalchemy in twisted in my project in the way below:
> defer any database operation so Twisted's main thread won't be
> blocked.
>
> And I use scoped_session, so that sessions won't have to be created
> again and again.
>
> ==
> class Database(object):
>     def __init__(self, conn_str):
>         self.conn_str = conn_str
>         self.engine = create_engine(self.conn_str, echo=False)
>         self.Session = scoped_session(sessionmaker(bind=self.engine,
>                                                    expire_on_commit=False))
>
>     def getObjectById(self, klass, id):
>         return threads.deferToThread(self._getObjectById, klass, id)
>
>     def _getObjectById(self, klass, id):
>         sess = self.Session()
>         return sess.query(klass).get(id)
> ==
>
> The code doesn't work.   When I limit the thread pool size to 1 with
>
> reactor.suggestThreadPoolSize(1)
>
> everything goes fine.  Otherwise the server gets blocked and must
> be killed with "kill -9 ...".
>
> The result conflicts with my understanding of sqlalchemy.  Since I
> don't share any object between threads, there should be no problem!
>
> Ah...  It's always risky to use something you haven't tried
> before...



[sqlalchemy] Using sqlalchemy in twisted.

2009-03-04 Thread 一首诗

Hi, all

I am using sqlalchemy in twisted in my project in the way below:
defer any database operation so Twisted's main thread won't be
blocked.

And I use scoped_session, so that sessions won't have to be created
again and again.

==
class Database(object):
    def __init__(self, conn_str):
        self.conn_str = conn_str
        self.engine = create_engine(self.conn_str, echo=False)
        self.Session = scoped_session(sessionmaker(bind=self.engine,
                                                   expire_on_commit=False))

    def getObjectById(self, klass, id):
        return threads.deferToThread(self._getObjectById, klass, id)

    def _getObjectById(self, klass, id):
        sess = self.Session()
        return sess.query(klass).get(id)
==

The code doesn't work.   When I limit the thread pool size to 1 with

reactor.suggestThreadPoolSize(1)

everything goes fine.  Otherwise the server gets blocked and must
be killed with "kill -9 ...".

The result conflicts with my understanding of sqlalchemy.  Since I
don't share any object between threads, there should be no problem!

Ah...  It's always risky to use something you haven't tried
before...