Re: [sqlalchemy] Re: SQLite: Rolling back DDL requests

2010-11-02 Thread Torsten Landschoff
Hi Michael,

I only noticed the activity on this thread today.

On Sat, 2010-08-14 at 11:07 -0400, Michael Bayer wrote:

  reproduce the problem and it suggests the problem stems from some
  behaviour of transactions or of the engine.base.Connection class.  I
  don't quite know what to make of it yet, but I think it shows that the
  effect of passing it through is being counteracted by something
  else.  I ran this on Python 2.6.5 with SA 0.6.3 on Windows.
 
 Postgresql and MS-SQL support transactional DDL, and I use this feature all 
 the time with SQLAlchemy which does nothing special to support them.

That is not in question. This issue seems specific to SQLite.

 In addition, isolation_level=None with pysqlite disables the DBAPIs entire 
 transactional system.  SQLAlchemy relies upon this system to handle proper 
 transactional behavior.  Per their documentation, some statements will fail 
 if executed in a transaction - SQLAlchemy does not want to be involved in 
 reorganizing how the DBAPI wants to approach things, emitting manual 'begin' 
 and 'commit' strings, etc.

Reading the pysqlite source, all that this magic isolation_level setting
does is emit a BEGIN statement in front of UPDATE, DELETE, INSERT
and REPLACE commands, and implicitly commit whenever any other
non-SELECT statement is executed (which includes SAVEPOINT commands!).

I don't think it makes a huge difference for SQLAlchemy to rely on this
insertion of BEGIN statements.

References:

Emitting begin (via _pysqlite_connection_begin):
http://code.google.com/p/pysqlite/source/browse/src/cursor.c?r=2.6.0#598

That's the only invocation of _pysqlite_connection_begin. After reading
a bit more, I can see your point. Setting isolation_level to None
actually disables the commit and rollback methods on connection objects.
Whee!
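For reference, a minimal standalone sketch (plain sqlite3, not from the original thread) of what isolation_level=None makes possible: pysqlite stops issuing its implicit BEGIN/COMMIT, so transactional DDL can be driven by hand.

```python
import sqlite3

# With isolation_level=None, pysqlite emits no implicit BEGIN or COMMIT;
# every statement autocommits unless we manage transactions ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
cur = conn.cursor()

cur.execute("BEGIN")                       # manual transaction start
cur.execute("CREATE TABLE t (x INTEGER)")  # DDL inside our transaction
cur.execute("ROLLBACK")                    # roll the CREATE TABLE back

# The table is gone: SQLite's DDL is transactional, and pysqlite did not
# inject an implicit COMMIT before the CREATE TABLE.
tables = cur.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables)  # → []
```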


 So I consider this a pysqlite bug, and they should offer a mode by which 
 there is no implicit commit for CREATE TABLE.
 
 See http://docs.python.org/library/sqlite3.html#controlling-transactions 

Agreed.

Greetings, Torsten


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Re: SQLite: Rolling back DDL requests

2010-11-02 Thread Michael Bayer

On Nov 2, 2010, at 5:15 PM, Torsten Landschoff wrote:

 In addition, isolation_level=None with pysqlite disables the DBAPIs entire 
 transactional system.  SQLAlchemy relies upon this system to handle proper 
 transactional behavior.  Per their documentation, some statements will fail 
 if executed in a transaction - SQLAlchemy does not want to be involved in 
 reorganizing how the DBAPI wants to approach things, emitting manual 'begin' 
 and 'commit' strings, etc.
 
 Reading the pysqlite source, all that this magic isolation_level setting
 does is emit a BEGIN statement in front of UPDATE, DELETE, INSERT
 and REPLACE commands, and implicitly commit whenever any other
 non-SELECT statement is executed (which includes SAVEPOINT commands!).
 
 I don't think it makes a huge difference for SQLAlchemy to rely on this
 insertion of BEGIN statements.
 
 References:
 
 Emitting begin (via _pysqlite_connection_begin):
 http://code.google.com/p/pysqlite/source/browse/src/cursor.c?r=2.6.0#598
 
 That's the only invocation of _pysqlite_connection_begin. After reading
 a bit more, I can see your point. Setting isolation_level to None
 actually disables the commit and rollback methods on connection objects.
 Whee!

We get occasional requests to work around pysqlite's bugs regarding 
transactional behavior, since they seem to be extremely slow in fixing them.   
I'm very uncomfortable bypassing normal DBAPI behavior by default, but I would 
support a flag in 0.7 to the pysqlite dialect, "manual_transactions", which sets 
isolation_level=None and emits BEGIN, ROLLBACK, and COMMIT itself.   That way users who 
insist on doing it this way are the only ones exposed to unforeseen issues, and 
they can contribute tests and patches to make it work better.   Some initial 
testing showed that it definitely had issues, so it remains to be seen how 
much of an impact this has.   As we come across new caveats with this manual 
mode and add further adjustments, we would in effect be reimplementing 
pysqlite's own feature.
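For later readers, a standalone illustration (plain sqlite3, no SQLAlchemy; not from the original thread) of why rolling back DDL fails under pysqlite's default transaction handling:

```python
import sqlite3

# Under pysqlite's default transaction handling, DDL is not wrapped in the
# DBAPI-managed transaction (it effectively autocommits), so a later
# rollback() cannot undo a CREATE TABLE.
conn = sqlite3.connect(":memory:")  # default isolation_level
cur = conn.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")
conn.rollback()  # too late: the CREATE TABLE already took effect

tables = cur.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables)  # → [('t',)]
```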


 
 
 So I consider this a pysqlite bug, and they should offer a mode by which 
 there is no implicit commit for CREATE TABLE.
 
 See http://docs.python.org/library/sqlite3.html#controlling-transactions 
 
 Agreed.
 
 Greetings, Torsten
 
 




[sqlalchemy] Re: Best way to get data from database

2010-11-02 Thread Alvaro Reinoso
Thank you Eric! It's a good approach and I'm going to keep it in mind. I
can put the data in hidden fields and just use the REST API to do
POST, PUT and DELETE.
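A minimal sketch of the embed-once idea, assuming a generic Python templating setup (PAGE, render_page, and the sample rows are illustrative, not from the original thread):

```python
import json
from string import Template

# Hypothetical page template: the same rows that drive the server-side
# HTML are also serialized once as JSON for client-side JavaScript,
# so the database is only queried once per page load.
PAGE = Template("""<ul>$items</ul>
<script>var channels = $data;</script>""")

def render_page(rows):
    items = "".join("<li>%s</li>" % r["title"] for r in rows)
    # One query feeds both the markup and the JS data structure.
    return PAGE.substitute(items=items, data=json.dumps(rows))

html = render_page([{"id": 1, "title": "News"}])
print(html)
```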

On Oct 29, 10:33 pm, Eric Ongerth ericonge...@gmail.com wrote:
 I understand your question if you are getting different data from the
 server in the two database accesses.  But if you are loading the exact
 same data twice for a page load, you should try to eliminate that
 redundancy instead of finding a plan to perform the redundancy in the
 best way.

 If it's the identical data twice, then why not render it into the page
 when you are rendering the HTML... you can render hidden fields, CDATA
 sections, regions of javascript containing any data structure you
 need, etc.

 It's a confusing question because if it's two different DB requests
 then you wouldn't be inquiring about caching for this purpose, but if
 it's two identical DB requests I suspect you already would have
 realized that the data could easily be encoded in the original page
 render.

 On Oct 28, 4:22 pm, Alvaro Reinoso alvrein...@gmail.com wrote:

  Hey guys,

  I have a question. I need to get the data from the server twice every
  time I load a page: once to render the HTML and once to get the
  data for the client side (JavaScript).

  I don't know exactly which way is best and fastest. I was
  trying to implement a session object and store the data once using the
  joinedload loading technique, then kill the session object once the
  data is on the client side.

  The other option is to call the database twice.

  I don't know which one is faster and better, because I don't know if
  the database or server stores the first call in memory. If so, it's not
  like calling the database twice, right?

  And if the second choice is better, which loading technique
  (joinedload, eagerload or subqueryload) is better to use?

  Every call could be a bunch of data.

  Any help could be really useful.

  Thanks in advance!




[sqlalchemy] Weird error when I add a new attribute to one class

2010-11-02 Thread Alvaro Reinoso
Hello,

I got a weird error when I try to create a new object. It worked
without any problem before, but I got this error after adding a new
attribute to the User class. The attribute is related to Screen in a
many-to-many relation through user_screens. This is the error:

InvalidRequestError: One or more mappers failed to compile. Exception
was probably suppressed within a hasattr() call. Message was: One or
more mappers failed to compile. Exception was probably suppressed
within a hasattr() call. Message was: Class
'zeppelinlib.screen.ScreenTest.Screen' is not mapped

These are the classes:

class Screen(rdb.Model):
    """Set up screens table in the database"""
    rdb.metadata(metadata)
    rdb.tablename("screens")

    id = Column("id", Integer, primary_key=True)
    title = Column("title", String(100))
    ip = Column("ip", String(20))
    ...

user_screens = Table(
    "user_screens",
    metadata,
    Column("user_id", Integer, ForeignKey("users.id")),
    Column("screen_id", Integer, ForeignKey("screens.id"))
)

class User(rdb.Model):
    """Set up users table in the database"""
    rdb.metadata(metadata)
    rdb.tablename("users")

    id = Column("id", Integer, primary_key=True)
    name = Column("name", String(50))
    ...

    group = relationship("UserGroup", uselist=False)
    channels = relationship("Channel", secondary=user_channels,
                            order_by="Channel.titleView", backref="users")
    mediaGroups = relationship("MediaGroup",
                               secondary=user_media_groups,
                               order_by="MediaGroup.title", backref="users")
    screens = relationship("Screen", secondary=user_screens,
                           backref="users")

I might not have added the new relation to User correctly, because I
really don't know what the problem is...

Thanks in advance!
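For comparison, here is a minimal self-contained many-to-many mapping in standard SQLAlchemy declarative style (the rdb.Model wrapper above is framework-specific; class and column names here are illustrative). The string "Screen" passed to relationship() is resolved when the mappers compile, so every referenced class must be defined/imported in the same registry, which is typically what the "is not mapped" error complains about:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, Table, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

# Association table for the many-to-many link.
user_screens = Table(
    "user_screens", Base.metadata,
    Column("user_id", Integer, ForeignKey("users.id")),
    Column("screen_id", Integer, ForeignKey("screens.id")),
)

class Screen(Base):
    __tablename__ = "screens"
    id = Column(Integer, primary_key=True)
    title = Column(String(100))

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    # "Screen" is a string reference resolved at mapper compile time;
    # the Screen class must be mapped in the same declarative registry.
    screens = relationship("Screen", secondary=user_screens, backref="users")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(User(name="alice", screens=[Screen(title="lobby")]))
session.commit()
print(session.query(User).one().screens[0].title)  # → lobby
```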




[sqlalchemy] Connection / disconnect / Pool

2010-11-02 Thread Warwick Prince
Hi Michael

I have an issue I can't fathom regarding pools. I'm doing testing and hit 
the following error at exactly the same point each time:

  File "C:\Documents and Settings\wprince\Desktop\PY CODE 
DEVELOPMENT\pyDAP\DAPForm.py", line 3805, in _processQuery
    self._queryCount = self._query.count()
  File "C:\Python26\lib\site-packages\sqlalchemy\orm\query.py", line 1888, in 
count
    should_nest = should_nest[0]
  File "C:\Python26\lib\site-packages\sqlalchemy\orm\query.py", line 1924, in 
_col_aggregate
    mapper=self._mapper_zero())
  File "C:\Python26\lib\site-packages\sqlalchemy\orm\session.py", line 740, in 
scalar
    return self.execute(clause, params=params, mapper=mapper, **kw).scalar()
  File "C:\Python26\lib\site-packages\sqlalchemy\orm\session.py", line 734, in 
execute
    return self._connection_for_bind(engine, close_with_result=True).execute(
TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection timed 
out, timeout 30

I'm creating a basic engine via create_engine, using the defaults for the pool 
settings (as you can tell).   Within each thread that does the connection, I 
create a new engine, metadata and scoped session.  That thread then does a basic 
query (in this case NOT using a session at all, just a direct table query) and 
then I do everything I can think of to close the connections.   I do this:

engine.dispose()
session.commit()
session.close()

and then the class instance that holds all these is removed and the thread 
terminates.   All works fine (i.e. no errors or exceptions) until I reach the 
10-overflow limit, and then it dies.   Each destroy and recreate of an engine 
is at least a second apart, so it's not being thrashed at all.

What am I missing?
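For readers hitting the same TimeoutError: it typically means checked-out connections (often via results executed with close_with_result) are never returned to the pool. A minimal sketch of a checkout pattern that reliably returns connections (illustrative settings, not from the original thread):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.pool import QueuePool

# Pool sized like the defaults in the traceback above: 5 connections,
# 10 overflow.  Exceeding 15 simultaneous checkouts raises TimeoutError.
engine = create_engine("sqlite://", poolclass=QueuePool,
                       pool_size=5, max_overflow=10)

# Far more iterations than pool_size + max_overflow: no timeout occurs
# because each connection is returned to the pool when the block exits.
for _ in range(50):
    with engine.connect() as conn:   # checked out from the pool
        conn.execute(text("SELECT 1")).fetchall()
    # leaving the block checks the connection back in

print("no TimeoutError raised")
```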

Cheers
Warwick

Warwick Prince 
Managing Director 
mobile: +61 411 026 992 
skype: warwickprince  
 
phone: +61 7 3102 3730 
fax:  +61 7 3319 6734 
web: www.mushroomsys.com 

