[sqlalchemy] Table Reflection Error

2021-04-23 Thread Jeff Griffin
Using SQLAlchemy 1.4.5 and pymssql 2.1.5, I am reflecting a table from one database 
(Oracle) and attempting to create it in a second database (MS SQL Server).  On 
table.create() I get the following error:

Compiler
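For orientation, the general shape of such a cross-database copy (not the poster's exact code; sketched here with two SQLite engines so it runs anywhere, and using Table.to_metadata(), the 1.4 spelling of the older tometadata()) is:

```python
from sqlalchemy import create_engine, MetaData, Table, inspect

# Source database (a stand-in for Oracle here; any reachable engine works).
src = create_engine("sqlite://")
with src.begin() as conn:
    conn.exec_driver_sql(
        "CREATE TABLE widgets (id INTEGER PRIMARY KEY, name VARCHAR(50))"
    )

# Reflect the table definition from the source.
src_meta = MetaData()
widgets = Table("widgets", src_meta, autoload_with=src)

# Copy the Table into a fresh MetaData and create it on the destination
# (a stand-in for SQL Server here).  Type objects that are specific to the
# source dialect may need to be swapped out before create() will compile.
dest = create_engine("sqlite://")
dest_meta = MetaData()
widgets_copy = widgets.to_metadata(dest_meta)
dest_meta.create_all(dest)

print(inspect(dest).has_table("widgets"))
```

Compile errors like the one reported usually come from the reflected source-dialect types not having an equivalent on the destination dialect.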

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/sqlalchemy/bda43a6c-7d98-4aa7-ae8d-f8bbcae192a6n%40googlegroups.com.


[sqlalchemy] Assigning SQL expression to a version column.

2019-12-04 Thread Jeff Horemans

What is the expected behavior of assigning an SQL expression to a version 
column?
Before migrating from version 1.2.4 to 1.3.3, priority was given to the assigned 
expression; afterwards it seems to be the other way around.
I think there is something to be said for both, but I wanted to check whether this 
change was made on purpose and what the reasoning is going forward.
Thanks in advance.
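For readers landing here, a minimal version-counter mapping looks like this (illustrative names, not from the thread); by default the ORM generates and checks the counter itself on each UPDATE, and the question above is about how a user-assigned SQL expression interacts with that default:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Document(Base):
    __tablename__ = "document"
    id = Column(Integer, primary_key=True)
    body = Column(String(200))
    version_id = Column(Integer, nullable=False)
    # Tell the ORM to maintain and check this column on every UPDATE.
    __mapper_args__ = {"version_id_col": version_id}

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

doc = Document(body="v1")
session.add(doc)
session.commit()
first = doc.version_id   # counter starts at 1 on INSERT

doc.body = "v2"
session.commit()         # the ORM bumps the counter itself
second = doc.version_id  # now 2
```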




-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/



Re: [sqlalchemy] Problems with versioning when using a class mapped on a select of a table in SQLAlchemy 1.2

2018-02-22 Thread Jeff Horemans
Ok, thanks for the quick response! Keep up the exceptionally good work!

Op woensdag 21 februari 2018 17:19:44 UTC+1 schreef Mike Bayer:
>
> On Wed, Feb 21, 2018 at 10:04 AM, Mike Bayer <mik...@zzzcomputing.com 
> > wrote: 
> > On Wed, Feb 21, 2018 at 6:33 AM, Jeff Horemans <horema...@gmail.com 
> > wrote: 
> >> When migrating from 1.1 to 1.2, we noticed that the version_id is not 
> >> getting set on a class that maps to a select of a table. 
> >> I've added a test case class below to the test_versioning.py included 
> in 
> >> SQLAlchemy to confirm this behaviour. 
> >> This case runs fine in versions 1.0 and 1.1, but gives a KeyError on 
> the 
> >> version_id in 1.2 as shown in the stack traces below. 
> >> 
> >> I'll be happy to make an issue on the repository if needed. 
> > 
> > no need, this is completely perfect, I'll create the issue.  I hope to 
> > get this into 1.2.4 which I need to release hopefully this week / 
> > today preferred (but I've wanted to release for two days already, ran 
> > out of time).   thanks! 
>
> this is 
> https://bitbucket.org/zzzeek/sqlalchemy/issues/4193/versioning-logic-fails-for-mapping-against
>  
> , where you'll note that I found two more issues with versioning and 
> mappers to select statements, which you've already been working 
> around.  this is not a use case we tested for. 
>
>

[sqlalchemy] Problems with versioning when using a class mapped on a select of a table in SQLAlchemy 1.2

2018-02-21 Thread Jeff Horemans
When migrating from 1.1 to 1.2, we noticed that the version_id is not 
getting set on a class that maps to a select of a table.
I've added a test case class below to the test_versioning.py included in 
SQLAlchemy to confirm this behaviour.
This case runs fine in versions 1.0 and 1.1, but gives a KeyError on the 
version_id in 1.2 as shown in the stack traces below.

I'll be happy to make an issue on the repository if needed.

class VersioningSelectTest(fixtures.MappedTest):

    __backend__ = True

    @classmethod
    def define_tables(cls, metadata):
        Table('version_table', metadata,
              Column('id', Integer, primary_key=True,
                     test_needs_autoincrement=True),
              Column('version_id', Integer, nullable=False),
              Column('value', String(40), nullable=False))

    @classmethod
    def setup_classes(cls):
        class Foo(cls.Basic):
            pass

    def _fixture(self):
        Foo, version_table = self.classes.Foo, self.tables.version_table

        current = version_table.select().where(
            version_table.c.id > 0).alias('current_table')

        mapper(Foo, current, version_id_col=version_table.c.version_id)
        s1 = Session()
        return s1

    @testing.emits_warning(r".*versioning cannot be verified")
    def test_multiple_updates(self):
        Foo = self.classes.Foo

        s1 = self._fixture()
        f1 = Foo(value='f1')
        f2 = Foo(value='f2')
        s1.add_all((f1, f2))
        s1.commit()

        f1.value = 'f1rev2'
        f2.value = 'f2rev2'
        s1.commit()

        eq_(
            s1.query(Foo.id, Foo.value,
                     Foo.version_id).order_by(Foo.id).all(),
            [(f1.id, 'f1rev2', 2), (f2.id, 'f2rev2', 2)]
        )



FAIL test/orm/test_versioning.py::VersioningSelectTest_postgresql+psycopg2_9_5_11::()::test_multiple_updates

== FAILURES ==
___ VersioningSelectTest_postgresql+psycopg2_9_5_11.test_multiple_updates ___

Traceback (most recent call last):
  File "/home/jeffh/vortex-workspace/v-finance/subrepos/SQLAlchemy-1.2.3/test/orm/test_versioning.py", line 131, in test_multiple_updates
    s1.commit()
  File "/home/jeffh/vortex-workspace/v-finance/subrepos/SQLAlchemy-1.2.3/test/../lib/sqlalchemy/orm/session.py", line 943, in commit
    self.transaction.commit()
  File "/home/jeffh/vortex-workspace/v-finance/subrepos/SQLAlchemy-1.2.3/test/../lib/sqlalchemy/orm/session.py", line 467, in commit
    self._prepare_impl()
  File "/home/jeffh/vortex-workspace/v-finance/subrepos/SQLAlchemy-1.2.3/test/../lib/sqlalchemy/orm/session.py", line 447, in _prepare_impl
    self.session.flush()
  File "/home/jeffh/vortex-workspace/v-finance/subrepos/SQLAlchemy-1.2.3/test/../lib/sqlalchemy/orm/session.py", line 2243, in flush
    self._flush(objects)
  File "/home/jeffh/vortex-workspace/v-finance/subrepos/SQLAlchemy-1.2.3/test/../lib/sqlalchemy/orm/session.py", line 2369, in _flush
    transaction.rollback(_capture_exception=True)
  File "/home/jeffh/vortex-workspace/v-finance/subrepos/SQLAlchemy-1.2.3/test/../lib/sqlalchemy/util/langhelpers.py", line 66, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/home/jeffh/vortex-workspace/v-finance/subrepos/SQLAlchemy-1.2.3/test/../lib/sqlalchemy/orm/session.py", line 2333, in _flush
    flush_context.execute()
  File "/home/jeffh/vortex-workspace/v-finance/subrepos/SQLAlchemy-1.2.3/test/../lib/sqlalchemy/orm/unitofwork.py", line 391, in execute
    rec.execute(self)
  File "/home/jeffh/vortex-workspace/v-finance/subrepos/SQLAlchemy-1.2.3/test/../lib/sqlalchemy/orm/unitofwork.py", line 556, in execute
    uow
  File "/home/jeffh/vortex-workspace/v-finance/subrepos/SQLAlchemy-1.2.3/test/../lib/sqlalchemy/orm/persistence.py", line 193, in save_obj
    update_version_id in states_to_update
  File "/home/jeffh/vortex-workspace/v-finance/subrepos/SQLAlchemy-1.2.3/test/../lib/sqlalchemy/orm/persistence.py", line 1131, in _finalize_insert_update_commands
    if state_dict[mapper._version_id_prop.key] is None:
KeyError: u'version_id'



[sqlalchemy] Re: pickled automap.metadata and AmbiguousForeignKeysError

2016-01-20 Thread Jeff Laughlin
I have the exact same issue. I create a new MetaData, use tometadata to 
copy the first to the second, and then if I use the second metadata it 
works fine. But if I pickle and unpickle it first then it doesn't work. So 
something about pickling/unpickling is breaking it.
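A Core-only round trip of the caching idea (reflect once, pickle the MetaData, unpickle later) is documented to work in isolation, which would narrow the problem to the automap layer; a sketch:

```python
import pickle

from sqlalchemy import MetaData, create_engine

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.exec_driver_sql("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
    conn.exec_driver_sql(
        "CREATE TABLE child (id INTEGER PRIMARY KEY,"
        " parent_id INTEGER REFERENCES parent (id))"
    )

# Reflect once (the slow step the poster wants to avoid repeating) ...
meta = MetaData()
meta.reflect(bind=engine)

# ... then cache the reflected schema with pickle.
payload = pickle.dumps(meta, pickle.HIGHEST_PROTOCOL)
restored = pickle.loads(payload)

print(sorted(restored.tables))                 # table names survive
print(restored.tables["child"].foreign_keys)   # so do the FK constraints
```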

On Monday, February 24, 2014 at 5:32:59 PM UTC-5, bkcsfi sfi wrote:
>
> I have a legacy database that I would like to use with automap
>
> unfortunately a number of tables each have multiple fks to the same table. 
>
> Using docs at 
> http://docs.sqlalchemy.org/en/rel_0_9/orm/relationships.html#handling-multiple-join-paths
>
> and
>
>
> http://docs.sqlalchemy.org/en/rel_0_9/orm/extensions/automap.html#specifying-classes-explcitly
>
> I thought I could define just some of the relationships before calling 
> base.prepare(), but that's not working for me.
>
> First, because reflection takes a long time on my database, I am pickling 
> base.metadata and restoring it later for use
>
> e.g. to store the metadata:
>
> def generate_metadata_from_engine(engine):
> base = automap_base()
> base.prepare(engine, reflect=True)
> return base.metadata
>
>
> def store_metadata_to_file(metadata):
> cPickle.dump(metadata,
>  file(get_metadata_path(), 'wb'),
>  cPickle.HIGHEST_PROTOCOL)
>
>
> Later I restore it like this
>
> def get_unprepared_sqla_base():
> """load metadata from file and return auto-map base"""
> return automap_base(metadata=load_metadata_from_file())
>
> def load_metadata_from_file():
> return cPickle.load(file(get_metadata_path(), 'rb'))
>
>
> Given a table in part like this:
>
> CREATE TABLE ORG
> (
>   DEFAULT_MANIFEST INTEGER,
>   RETURN_MANIFEST INTEGER
>   
> ) 
>
> ALTER TABLE ORG ADD CONSTRAINT C644ORG_DEFAULT_
>   FOREIGN KEY (DEFAULT_MANIFEST) REFERENCES MANIFEST (ID) ON UPDATE 
> CASCADE ON DELETE SET NULL;
> ALTER TABLE ORG ADD CONSTRAINT C644ORG_RETURN
>   FOREIGN KEY (RETURN_MANIFEST) REFERENCES MANIFEST (ID) ON UPDATE CASCADE 
> ON DELETE SET NULL;
>
> I have 2 fks to the manifest table
>
>
> When running a query against the org table, I get this error
>
> sqlalchemy.exc.ArgumentError: Error creating backref 'org_collection' on 
> relationship 'org.org_collection': property of that name exists on mapper 
> 'Mapper|org|org'
>
>
> It looks like the above error is actually on the manifest table, however 
> it's cleaner for me to define the relationship on the Org table and hope 
> that automap figures out the backref (not sure that works)
>
> So I'm trying this 
>
> def test():
> engine = get_firebird_engine()
> base = get_unprepared_sqla_base()
>
> class Org(base):
> __tablename__ = 'org'
>
> default_manifest_collection = relationship('manifest', 
> foreign_keys="org.default_manifest")
> current_manifest_collection = relationship('manifest', 
> foreign_keys="org.current_manifest")
>
> base.prepare()
> session = get_session()
> session.query(base.classes.org).first()
>
>
> when I call test(), I now get this error::
>
> Traceback (most recent call last):
>   File "database_metadata/test.py", line 46, in 
> main(args=sys.argv[1:])
>   File "database_metadata/test.py", line 39, in main
> test()
>   File "database_metadata/test.py", line 25, in test
> session.query(base.classes.org).first()
>   File 
> "/home/bkc/Python_Environments/mwd/local/lib/python2.7/site-packages/sqlalchemy/util/_collections.py",
>  
> line 174, in __getattr__
> raise AttributeError(key)
> AttributeError: org
>
>
> If I comment out the definition of class Org in the test(), I go back to 
> getting the sqlalchemy.exc.ArgumentError: (though it's random which table 
> it fails on first)
>
> 1. am I correctly using pickled automap metadata?
>
> 2. does the existence of the Org class in base metadata break 
> base.prepare() because I'm not also reflecting from the database at that 
> time?
>
> 3. should I instead declare the Org class fragment before reflecting, and 
> then pickl'ing the meta-data with my modified org class will work?
>
> something else instead?
>
> Thanks
>
>
>



[sqlalchemy] FYI: Put up a blog post on how to manage PostgreSQL Materialized Views using SQLAlchemy

2016-01-07 Thread Jeff Widman
PostgreSQL materialized views have been extremely useful as a caching layer
for some of my projects. They're fast, easy to set up, easy to refresh, and
don't require extra infrastructure.

However, it took me forever to figure out how to manage them using
SQLAlchemy, so I wrote it up in the hopes of saving others some time.

If interested, you can see the code here:
https://github.com/jeffwidman/sqlalchemy-postgresql-materialized-views

And my blog post of explanatory notes here:
http://www.jeffwidman.com/blog/847/using-sqlalchemy-to-create-and-manage-postgresql-materialized-views/

Much thanks to Mike Bayer for his help.

Cheers,
Jeff
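The heart of the recipe (assumed here from the standard compiler-extension pattern, not copied verbatim from the repo above) is a custom DDL element:

```python
from sqlalchemy import Column, Integer, MetaData, Table, select
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.schema import DDLElement

class CreateMaterializedView(DDLElement):
    """Emit CREATE MATERIALIZED VIEW <name> AS <selectable>."""
    def __init__(self, name, selectable):
        self.name = name
        self.selectable = selectable

@compiles(CreateMaterializedView)
def _compile_create_mv(element, compiler, **kw):
    # compiler here is a DDLCompiler; use its sql_compiler for the SELECT.
    return "CREATE MATERIALIZED VIEW %s AS %s" % (
        element.name,
        compiler.sql_compiler.process(element.selectable, literal_binds=True),
    )

metadata = MetaData()
orders = Table("orders", metadata, Column("id", Integer, primary_key=True))

ddl = CreateMaterializedView(
    "recent_orders", select(orders).where(orders.c.id > 100)
)
print(str(ddl.compile()))
```

The element can then be executed against a PostgreSQL engine, or hooked to an after_create event on the metadata.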


-- 

*Jeff Widman*
jeffwidman.com <http://www.jeffwidman.com/> | 740-WIDMAN-J (943-6265)
<><



Re: [sqlalchemy] using savepoint transactions in unittests for a webapp which uses a scoped session?

2015-12-20 Thread Jeff Widman
Have you seen this?

koo.fi/blog/2015/10/22/flask-sqlalchemy-and-postgresql-unit-testing-with-transaction-savepoints/

He's using Flask-SQLAlchemy, but the underlying principles should be the
same. It's not quite identical, since IIRC he uses nested savepoints for
everything rather than a scoped session, but it still might be useful.

I'm doing something similar with pytest, though I haven't yet set it up so
the nested savepoints enclose creating/dropping the tables. I've been
meaning to write a blog post explaining it over the holidays; if I get to
it I'll send over a link.
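For reference, a bare-bones sketch of the savepoint trick (illustrative names; it includes the pysqlite event workaround that SAVEPOINT needs on SQLite):

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    username = Column(String(50))

engine = create_engine("sqlite://")

# pysqlite needs these two listeners before SAVEPOINT works reliably
# (the standard workaround from the SQLAlchemy SQLite dialect docs).
@event.listens_for(engine, "connect")
def _no_autobegin(dbapi_conn, connection_record):
    dbapi_conn.isolation_level = None

@event.listens_for(engine, "begin")
def _explicit_begin(conn):
    conn.exec_driver_sql("BEGIN")

Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session = Session()
session.add(User(username="baseline"))
session.commit()

# One test: open a SAVEPOINT, let the "app" do its work, then roll it back.
savepoint = session.begin_nested()
session.add(User(username="only-lives-inside-the-test"))
session.flush()
savepoint.rollback()
session.commit()

names = [u.username for u in session.query(User).all()]
print(names)  # the baseline row survives, the test row does not
```

A real fixture would wrap the SAVEPOINT open/rollback around each test function.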

On Sun, Dec 20, 2015 at 4:16 PM, Gerald Thibault <dieselmach...@gmail.com>
wrote:

> I've been struggling with this for a few days now, and I've tried a whole
> slew of different approaches, but I'm just missing something.
>
> I started with the example at
> http://docs.sqlalchemy.org/en/latest/orm/session_transaction.html#joining-a-session-into-an-external-transaction-such-as-for-test-suites
> and have been trying variations on it to suit my goals.
>
> We have a webapp which, in simplest form, could be represented as this:
>
> # in the db setup file
>
> dsn = 'mysql://...t'
> engine = create_engine(dsn)
> session_factory = sessionmaker(bind=engine)
> scoped = scoped_session(session_factory)
>
> # the app
>
> class Webapp(object):
>
> def dispatch(self, name):
> view = getattr(self, name)
> session = scoped()
> try:
> rsp = view()
> session.commit()
> return rsp
> except:
> session.rollback()
> return 'error'
> finally:
> session.close()
> pass
>
> # the views files
>
> def add_user(self, session):
> user = User(username='test')
> session = scoped()
> session.add(user)
> session.flush()
> return 'user %s' % user.id
>
> The app uses commit, rollback, and close on a scoped session.
>
> I'd like to know how to adjust the example at the above URL so the test is
> able to "enclose" the webapp, so the calls to sessionmaker, and all
> operation that occur within the webapp, are within the scope of the test
> transaction, so everything that happens in the app can be rolled back at
> the end of the test. The example at the url acquires a connection, and
> binds the session to the connection, and I'm not sure how to force the
> sessionmaker to work with that. Could 'scopefunc' be used to somehow force
> the returned session to be within the context of the transaction?
>
> There is also fixture loading code, which starts by acquiring a session
> from the scoped_session, adding fixtures, then committing. A simple example
> could be this function:
>
> def load_fixture(username):
> session = scoped()
> session.add(User(username=username))
> session.commit()
>
> I'd like to use this in unittests, and have it rolled back along with the
> transaction.
>
> I attached a full (nonworking) example, can you tell me what I am doing
> wrong? Is this even possible with a scope session, or is this sort of
> testing limited to sessions bound to connections? Would I need to
> rewrite the session acquisition method to return a globally stored
> connection-bound session before defaulting to the scoped (engine-bound)
> session?
>
> An example I saw at
> https://web.archive.org/web/20140419235219/http://sontek.net/blog/detail/writing-tests-for-pyramid-and-sqlalchemy
> seems to indicate that person overwrote the app session from within the
> unittest, to ensure the app used that session. Is that the approach I would
> need to take?
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>





Re: [sqlalchemy] adjacency list to nested dictionary

2015-12-16 Thread Jeff Widman
What database are you using?

Are you trying to solve data insert or retrieval?

Do you want to do your traversal in your app or use SQLAlchemy to generate
a SQL query that does all the work within the DB and then returns the
result?



On Wed, Dec 16, 2015 at 1:28 PM, Horcle <g...@umn.edu> wrote:

> I have the following SQLAlchemy class representing an adjacency list:
>
> class Node(db.Model):
> __tablename__ = 'meds'
> id = Column(Integer, primary_key=True)
> type = Column(String(64))
> name = Column(String(64))
> parent_id = Column(Integer, ForeignKey('node.id'))
> children = relationship("Node")
>
> I need to create a dictionary to represent a tree of arbitrary depth that
> would look like:
>
>
> {
> "children": [
> {
>   "children": [
> {
>   "id": 4,
>   "name": "Child1",
>   "parent_id": 3,
>   "type": "Parent 2"
>   "children": [
> {
>   "id": 6,
>   "name": "Child3",
>   "parent_id": 3,
>   "type": "Parent 3",
>   "children": [...]
> },
> {
>   "id": 7,
>   "name": "Child4",
>   "parent_id": 3,
>   "type": "Leaf"
>}
>   ]
> },
> {
>   "id": 5,
>   "name": "Child2",
>   "parent_id": 3,
>   "type": "Leaf"
> }
>   ],
>   "id": 3,
>   "name": "CardioTest",
>   "parent_id": null,
>   "type": "Parent"
> }
>   ]
> }
>
>
> Can this dictionary be built non-recursively? I am not sure how to
> manually do this otherwise.
>
> Thanks in advance!
>
> Greg--
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>





Re: [sqlalchemy] adjacency list to nested dictionary

2015-12-16 Thread Jeff Widman
You've got two separate issues here--one of retrieving the data, and a
second of serializing it to JSON. They're related, but perhaps easier to
solve if you mentally treat them as distinct problems.

Since you're storing the data as an adjacency list, then you'll need to
either use a recursive function or a while loop to traverse the database
up/down levels of the tree. There's no avoiding that unfortunately.
(Technically if you know the maximum depth of the tree you could use a for
loop, but I would never do that because it isn't future-proof.)

AFAIK MySQL doesn't support recursive CTEs--there are some hacks posted on
Stack Overflow, but it'll almost certainly be easier to do this within your
app (although slow, because for each level of the tree you're issuing a new
query, processing it within your app, and then re-issuing a new query for
the next level up or down the tree).

For serializing the data as nested JSON, I hear good things about
marshmallow's support for nested json. You could certainly write your own
function, it's just Marshmallow provides some niceties like allowing you to
specify the maximum nesting depth:
https://github.com/marshmallow-code/marshmallow/issues/9

If I were doing it, my first prototype would probably write two
functions--one that maps the adjacency list in SQL to an equivalent list of
lists in Python, and then a second that unwraps the python lists and
serializes them into nested marshmallow json. From there it'll be easier to
decide if it's worthwhile to eliminate the python lists of lists by doing
the serializing inline with traversing the adjacency list.
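For what it's worth, once the rows are in memory the nesting itself needs no recursion at all; a plain-Python sketch using the column names from the thread:

```python
def rows_to_tree(rows):
    """Build the nested dict from adjacency-list rows without recursion:
    two linear passes, one dict lookup per row."""
    nodes = {row["id"]: {**row, "children": []} for row in rows}
    root = {"children": []}
    for node in nodes.values():
        parent = nodes.get(node["parent_id"])
        (parent or root)["children"].append(node)
    return root

rows = [
    {"id": 3, "name": "CardioTest", "parent_id": None, "type": "Parent"},
    {"id": 4, "name": "Child1", "parent_id": 3, "type": "Parent 2"},
    {"id": 5, "name": "Child2", "parent_id": 3, "type": "Leaf"},
]
tree = rows_to_tree(rows)
print(tree["children"][0]["name"])                           # CardioTest
print([c["name"] for c in tree["children"][0]["children"]])  # ['Child1', 'Child2']
```

The rows can come from a single SELECT of the whole table; the loop links each node to its parent regardless of row order because the node dict is built first.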






On Wed, Dec 16, 2015 at 4:08 PM, Horcle <g...@umn.edu> wrote:

> We're using MySQL and need retrieval of all data from the table in the
> format given (nested JSON). Simplest solution would be good (whether in app
> or SQLAlchemy). I tried using the JsonSerializer as noted here
> http://stackoverflow.com/questions/30367450/how-to-create-a-json-object-from-tree-data-structure-in-database,
> but could not get it to work.
>
> Thanks!
>
> Greg--
>

Re: [sqlalchemy] How do a I create Postgres Materialized View using SQLAlchemy?

2015-12-14 Thread Jeff Widman
>
> > the descriptions on that page for several other methods say 'see
> > execute_at() for more information' so it doesn't exactly "feel"
> deprecated.
>
> I see it just at DDLElement.execute() where that doc should be updated.
>  Where else?
>

I think here as well?

http://docs.sqlalchemy.org/en/latest/core/ddl.html#sqlalchemy.schema.DDL.params.statement

http://docs.sqlalchemy.org/en/latest/core/compiler.html?highlight=execute_at#subclassing-guidelines
(search for execute_at)




> > I tried calling the normal Index function, but it threw an exception
> > when I passed it a materialized view object
>
> what exception?


> OK this might be because that recipe uses a lower-case "table()" object
> which is not as full featured as Table, i'd alter the recipe to use
> Table perhaps
>


It may be related to using __table__() rather than Table(). I tried
altering the recipe to use Table(), but couldn't figure it out, since my
materialized views need to be created after the other tables are created.
Perhaps Flask-SQLAlchemy was doing some magic when I called db.create_all()
that grabbed everything with Table().

Not a big deal. I'm planning to blog how I worked everything out later this
week as reference for anyone else trying to create/manage a postgresql
materialized view with sqlalchemy, and when I do I'll send out a link to
the list.

For reference, here is the exception:

# materialized view already exists as a __table__() object called 'GearCategoryMV'
# Afterwards I try creating an index on it:

db.Index('myindex', GearCategoryMV.id, unique=False)

# fails with the following traceback

Traceback (most recent call last):
  File "manage.py", line 11, in <module>
    from app.models.gear_models import (GearCategory, GearCategoryMV, GearItem,
  File "/Users/jeffwidman/Code/rc/api_rc_flask/app/models/gear_models.py", line 428, in <module>
    db.Index('myindex', GearCategoryMV.id, unique=True)
  File "/Users/jeffwidman/.virtualenvs/api_rc_flask/lib/python3.5/site-packages/sqlalchemy/sql/schema.py", line 3172, in __init__
    ColumnCollectionMixin.__init__(self, *columns)
  File "/Users/jeffwidman/.virtualenvs/api_rc_flask/lib/python3.5/site-packages/sqlalchemy/sql/schema.py", line 2446, in __init__
    self._check_attach()
  File "/Users/jeffwidman/.virtualenvs/api_rc_flask/lib/python3.5/site-packages/sqlalchemy/sql/schema.py", line 2491, in _check_attach
    col._on_table_attach(_col_attached)
  File "/Users/jeffwidman/.virtualenvs/api_rc_flask/lib/python3.5/site-packages/sqlalchemy/sql/schema.py", line 1306, in _on_table_attach
    fn(self, self.table)
  File "/Users/jeffwidman/.virtualenvs/api_rc_flask/lib/python3.5/site-packages/sqlalchemy/sql/schema.py", line 2488, in _col_attached
    self._check_attach(evt=True)
  File "/Users/jeffwidman/.virtualenvs/api_rc_flask/lib/python3.5/site-packages/sqlalchemy/sql/schema.py", line 2479, in _check_attach
    assert not evt, "Should not reach here on event call"
AssertionError: Should not reach here on event call



Re: [sqlalchemy] Determination of string lengths

2015-12-05 Thread Jeff Widman
This is probably the fastest answer:
http://stackoverflow.com/questions/15743121/how-to-filter-in-sqlalchemy-by-string-length
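i.e. the database's string-length function is available through func; a minimal sketch:

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, func, select)

engine = create_engine("sqlite://")
metadata = MetaData()
users = Table("users", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String(50)))
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(users.insert(), [{"name": "al"}, {"name": "beatrice"}])

with engine.connect() as conn:
    # func.length renders as the backend's LENGTH() (LEN() on SQL Server).
    long_names = conn.execute(
        select(users.c.name).where(func.length(users.c.name) > 5)
    ).scalars().all()

print(long_names)  # ['beatrice']
```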

On Sat, Dec 5, 2015 at 1:34 AM, SF Markus Elfring <
elfr...@users.sourceforge.net> wrote:

> Hello,
>
> I would like to reuse the standard function "len" for the determination
> of string lengths from specific database fields in a query.
> http://www.dailyfreecode.com/code/len-function-296.aspx
>
> Which interface does provide this functionality for the software
> "SQLAlchemy"?
>
> Regards,
> Markus
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at http://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>





Re: [sqlalchemy] Splitting a table ?

2015-12-02 Thread Jeff Widman
I've also been dealing with a similar problem of managing a parent/child
table that is the result of splitting an original table, and also curious
to hear suggested solutions.

Solutions I've thought about so far:
a) using a hybrid property or column_property to map the columns back onto
the original table--a little tedious because each one has to be re-added

b) Creating a new object that's the result of joining the two tables--main
problem is I have to update all the app code to use the new object

Curious to hear what others do.
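For what it's worth, option (a) can be sketched roughly like this (all class and column names are hypothetical, and as noted, each split-off column does have to be re-added by hand). Written against the modern 1.4+ API:

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, column_property, declarative_base

Base = declarative_base()

class ArticleStats(Base):
    # the split-off high-write table, fkey'd onto the original
    __tablename__ = "article_stats"
    article_id = sa.Column(sa.Integer, sa.ForeignKey("article.id"),
                           primary_key=True)
    view_count = sa.Column(sa.Integer, default=0)

class Article(Base):
    __tablename__ = "article"
    id = sa.Column(sa.Integer, primary_key=True)
    title = sa.Column(sa.String(100))
    # map the moved column back onto the original class via a
    # correlated scalar subquery -- one of these per moved column
    view_count = column_property(
        sa.select(ArticleStats.view_count)
        .where(ArticleStats.article_id == id)
        .correlate_except(ArticleStats)
        .scalar_subquery()
    )

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as s:
    s.add(Article(id=1, title="hello"))
    s.add(ArticleStats(article_id=1, view_count=7))
    s.commit()
    count = s.get(Article, 1).view_count
print(count)  # 7
```

Existing app code keeps reading article.view_count; the tradeoff is that the attribute is read-only on the parent, so writes still go through the child object.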

On Wed, Dec 2, 2015 at 7:18 AM, Jonathan Vanasco <jonat...@findmeon.com>
wrote:

>
> I recently had to "split" or partition another table into 2 -- one of high
> write and low write access.  The new table is just the high-write columns
> fkey'd onto the original table, and handled with a relationship.
>
> I was wondering if there was any "shortcut" in sqlalchemy to automatically
> handle stuff like this, or a common pattern.
>
> The best I could think of is using an association proxy to map the columns
> back -- but that must be done for every column, and doesn't handle the
> creation of the new table as a dependency.
>
> My current manual solution works, just wondering if there are better ways.
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at http://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>



-- 

*Jeff Widman*
jeffwidman.com <http://www.jeffwidman.com/> | 740-WIDMAN-J (943-6265)
<><



Re: [sqlalchemy] How do I create a Postgres Materialized View using SQLAlchemy?

2015-11-15 Thread Jeff Widman
Thanks Mike. That helped tremendously.

A couple of followup questions:
(questions based on this code extending the view recipe:
https://gist.github.com/jeffwidman/1656498de21dc0afcdab)

1) In the recipe for creating views, I see:

CreateView(name, selectable).execute_at('after-create', metadata)


But in the docs it looks like this 'execute_at()' method is deprecated
<http://docs.sqlalchemy.org/en/latest/core/ddl.html#sqlalchemy.schema.DDLElement.execute_at>,
should I be doing something different?
Also, it's not clear to me why this method is deprecated, especially since the
descriptions on that page for several other methods say 'see execute_at()
for more information' so it doesn't exactly "feel" deprecated.


2) What is the preferred way to create a unique index on the materialized
view? PostgreSQL requires the index in order to refresh the view
concurrently.

I tried calling the normal Index function, but it threw an exception when I
passed it a materialized view object (but worked perfectly when I passed a
table object, so I know it's not incorrect params). So instead, I created a
new subclass of DDLElement specifically for indexing materialized views...
you can see it in the gist, and it works fine, just not sure if there's a
better way.


3) How do I set this index-creating-ddlelement to trigger (using sqlalchemy
events) after the materialized view is created?

Currently I first call CreateView_DDLElement.execute_at('after-create',
db.metadata), and then immediately after call
CreateViewIndex_DDLElement.execute_at('after-create', db.metadata). So far
it's worked fine, but there's a potential for the 'create index' to be
called before 'create view' since they both are listening for the same
event. So how do I set this index to be triggered on completion of creating
the view?
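One way to get a deterministic ordering, sidestepping the deprecated execute_at(), is to register both DDL statements on the same "after_create" event with event.listen(); in practice listeners fire in the order they were registered. A sketch with throwaway view names (two dependent plain views stand in for "materialized view, then its index", so it runs on SQLite):

```python
import sqlalchemy as sa
from sqlalchemy import event
from sqlalchemy.schema import DDL

metadata = sa.MetaData()
sa.Table("orders", metadata, sa.Column("id", sa.Integer, primary_key=True))

# v2 depends on v1, standing in for "index depends on materialized view";
# if the second listener ran first, creating v2 would fail
event.listen(metadata, "after_create", DDL("CREATE VIEW v1 AS SELECT 1 AS x"))
event.listen(metadata, "after_create", DDL("CREATE VIEW v2 AS SELECT x FROM v1"))

engine = sa.create_engine("sqlite://")
metadata.create_all(engine)

with engine.connect() as conn:
    val = conn.execute(sa.text("SELECT x FROM v2")).scalar()
print(val)  # 1
```

I haven't checked this against the gist's custom DDLElement subclasses, but plain DDL() objects attached this way run in registration order, which would make the view-then-index sequence deterministic.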

More than happy to submit a PR helping tidy up the docs on this, just not
sure what I should be saying.

Cheers,
Jeff


On Mon, Nov 9, 2015 at 1:58 PM, Mike Bayer <mike...@zzzcomputing.com> wrote:

>
>
> On 11/09/2015 03:52 PM, Jeff Widman wrote:
> > A couple of quick questions about Postgres Materialized Views--searched,
> > but found very little:
> >
> > I've got a few values that are fairly expensive to calculate, so I'd
> > like to pre-calculate the results. It's fine if the data is stale
> > for a few hours, so I'd like to use Postgres Materialized Views:
> >
> > 1) How do I create a Materialized View using SQLAlchemy?
>
> connection.execute("CREATE MATERIALIZED VIEW ")
>
> alternatively you can adapt the recipe at
> https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/Views to
> support the "MATERIALIZED VIEW" syntax supported by your database.
>
>
> > I checked the docs, and all I could find is how to reflect an
> > already-created materialized view. All my other DDL is managed by
> > SQLAlchemy in my app, so for simplicity I'd rather handle the
> > materialized view definition using SQLAlchemy as well.
> >
> > 2) How do I query this materialized view?
>
> a view is like any other table-oriented structure in the database.  a
> Table metadata object or Table-mapped ORM class that uses the name and
> columns of this view will suffice.  I've added a short example of a
> declarative mapping against the custom view object to the above example.
>
>
>
>
>
> >
> > 3) Is there a special method for refreshing?
> > Or should I just do /db.engine.execute("REFRESH MATERIALIZED VIEW
> > view_name CONCURRENTLY") /?
> >
> > Cheers,
> > Jeff
> >
> > --
> > You received this message because you are subscribed to the Google
> > Groups "sqlalchemy" group.
> > To unsubscribe from this group and stop receiving emails from it, send
> > an email to sqlalchemy+unsubscr...@googlegroups.com
> > <mailto:sqlalchemy+unsubscr...@googlegroups.com>.
> > To post to this group, send email to sqlalchemy@googlegroups.com
> > <mailto:sqlalchemy@googlegroups.com>.
> > Visit this group at http://groups.google.com/group/sqlalchemy.
> > For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at http://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>



-- 

*Jeff Widman*
jeffwidman.com <http://www.jeffwidman.com/> | 740-WIDMAN-J (943-6265)
<><



[sqlalchemy] How do I create a Postgres Materialized View using SQLAlchemy?

2015-11-09 Thread Jeff Widman
A couple of quick questions about Postgres Materialized Views--searched, 
but found very little:

I've got a few values that are fairly expensive to calculate, so I'd like 
to pre-calculate the results. It's fine if the data is stale for a few 
hours, so I'd like to use Postgres Materialized Views:

1) How do I create a Materialized View using SQLAlchemy? 
I checked the docs, and all I could find is how to reflect an 
already-created materialized view. All my other DDL is managed by 
SQLAlchemy in my app, so for simplicity I'd rather handle the materialized 
view definition using SQLAlchemy as well.

2) How do I query this materialized view?

3) Is there a special method for refreshing? 
Or should I just do *db.engine.execute("REFRESH MATERIALIZED VIEW view_name 
CONCURRENTLY") *?

Cheers,
Jeff



Re: [sqlalchemy] Routing Session and query count problem

2014-10-16 Thread Jeff Oliver


On Wednesday, October 15, 2014 2:55:22 PM UTC-7, Michael Bayer wrote:


 The system currently locates the bind via the tables present in the 
 selectable, as when you bind to a mapper, the tables that the mapper 
 selects from are extracted and also set up. 

 Issue is added at 
 https://bitbucket.org/zzzeek/sqlalchemy/issue/3227/count-doesnt-send-the-mapper-to-get_bind
  
  but this is only for 1.0. 

 stick with the workaround in the meantime, or use the approach within 
 get_bind() that also takes the “clause” argument into account (e.g. look 
 for parententity in _annotations, or keep track of the mapper.tables 
 collection and match it up). 



Thanks for the reply, and filing a bug/enhancement report already.
 
Since I don't expect to have a situation where one query crosses databases 
(or servers), I wrote a function to find the first Table class in the 
clause, recursively looking through the clause children.  It's not the most 
glamorous solution, but the application is not a glamorous application 
either.  Once I find the table, I look up the class in the mapper registry 
and select the engine from there.  It works for me, for now.  I don't know 
if it'll work for all of my situations or not, but for now it'll fly; I'm 
sure it will be tweaked in the future.

I'll be on the lookout for the issue to be resolved.



[sqlalchemy] Routing Session and query count problem

2014-10-15 Thread Jeff Oliver
I've got a problem with SQLAlchemy 0.9.8 with a Routing Session setup and 
using the query count method.

I started from this post on Mike Bayer's 
blog: 
http://techspot.zzzeek.org/2012/01/11/django-style-database-routers-in-sqlalchemy/

I did some tweaks such that I could specify the database engine to use 
within the class definition of a model.  The RoutingSession class ended up 
like this:

class RoutingSession(Session):
def get_bind(self, mapper=None, clause=None):
if mapper:
try:
return engines[mapper.class_.__db_source__]
except (AttributeError, KeyError):
return engines['primary']
else:
return engines['primary']

So in one of my model classes, i can switch the database engine by simply 
adding a class attribute as such:

class HWInventory(Base):
__tablename__ = 'Inventory'
__db_source__ = 'inventory_db'

So, if I do a DBSession.query(HWInventory), the inventory_db engine would 
be used.  If I queried some other table, the primary engine would be used.

However, this only works if I execute the query using .one(), .first(), or 
.all().  If I attempt to count the results using .count(), the primary 
engine gets used.

Adding a bit of debugging into this class, it turns out that for the 
.count() call, there is no mapper when get_bind is called!  So the result 
is that I send back the primary engine by default.  Of course, when I count 
the HWInventory table, I get an error from the database server saying that 
it has no idea what I'm talking about.

If i change the query to be something like 
DBSession.query(func.count(HWInventory.id)), it works, but I had been using 
WebHelpers pagination classes, which uses the generic 
DBSession.query(HWInventory).count() form of the query.  

The only thing I can think of here is that .count() causes an anonymous 
query to be executed.  The main query is not connected to a table, but the 
subquery within it is.  Why doesn't the subquery cause the query to bind to 
the correct engine?  Why doesn't the subquery provide a mapper for the 
whole thing?  

Of course, what can I do to fix this?  Is there a way in the get_bind() to 
maybe detect if there are subqueries and select the engine based upon 
something in there?

Jeff



Re: [sqlalchemy] queue pooling, mysql sleeping, for very occasional use apps?

2014-09-19 Thread Jeff Dairiki
On Fri, Sep 19, 2014 at 10:40:28AM -0700, Iain Duncan wrote:
 Hi folks, I'm putting together something for a client that will get used
 very occasionally, and has a some worker processes that use SQLA. I'm
 wondering what the right approach is for the dreaded MySQL has gone away.
 Should I just use some absurdly high number for pool_recycle ( ie months)
 or is there a smarter way to do it.

SQLAlchemy recycles pooled connections that are older than pool_recycle
seconds the next time they are checked out.  If you're getting "gone away"
errors, you don't need a longer pool_recycle, you need a shorter one: it
needs to be shorter than MySQL's wait_timeout.

 It's a process to send out a bunch of
 sms messages via twilio, so it totally doesn't matter if takes a bit to
 wake up.
 
 Is disabling pooling with NullPool the right way to go?

That would work, I guess (at the expense of pooling.)

Other options:

- Optimistic: Use (a shorter) pool_recycle

  http://docs.sqlalchemy.org/en/rel_0_9/dialects/mysql.html#connection-timeouts

- Pessimistic: Ping the connection on every checkout

  
http://docs.sqlalchemy.org/en/rel_0_9/core/pooling.html#disconnect-handling-pessimistic
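As a sketch of the two options at engine-creation time (the URL is a stand-in so this runs anywhere; note that on SQLAlchemy 1.2+ the pessimistic recipe is built in as the pool_pre_ping flag, which post-dates the 0.9 docs linked above):

```python
import sqlalchemy as sa

# sqlite:// stands in for the real mysql:// URL
engine = sa.create_engine(
    "sqlite://",
    pool_recycle=3600,    # optimistic: recycle before the server times out
    pool_pre_ping=True,   # pessimistic: test the connection on checkout
)

with engine.connect() as conn:
    ok = conn.execute(sa.text("SELECT 1")).scalar()
print(ok)  # 1
```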



Re: [sqlalchemy] Re: queue pooling, mysql sleeping, for very occasional use apps?

2014-09-19 Thread Jeff Dairiki
On Fri, Sep 19, 2014 at 05:43:46PM -0700, Iain Duncan wrote:
 So can anyone tell me what exactly happens with NullPool, I'm not clear
 from the docs. Does that mean a fresh connection will be made on a new hit,
 and we'll never get the gone away, but at the expense of slower
 connections?

Yes, that's right, I think.  You'll get a brand new connection every time.
(The connection will be closed when the Session is committed or rolled back,
and a new one opened whenever the Session needs one again.)
 
 Is it terribly slower?

It depends on how many connections you're making.  My guess is that
the difference is probably not noticeable until your connection rate
gets up to a few connections per second.

Using a (non-null) pool with a short (few seconds) setting for
``recycle`` would accomplish much the same thing, while allowing for
connection pooling during times of high connection rate.
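The NullPool choice is a one-liner at engine creation (sqlite:// is a stand-in URL here):

```python
import sqlalchemy as sa
from sqlalchemy.pool import NullPool

# NullPool: every checkout opens a brand-new DBAPI connection, and it is
# closed (not returned to a pool) on release -- no stale connections, at
# the cost of connection setup on every use
engine = sa.create_engine("sqlite://", poolclass=NullPool)

with engine.connect() as conn:
    v = conn.execute(sa.text("SELECT 1")).scalar()
print(v)  # 1
```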



[sqlalchemy] Re: tools to print sql statement in pyramid with sqlalchemy

2014-08-08 Thread Jeff Dairiki
On Thursday, August 7, 2014 6:28:24 PM UTC-7, 王凯凯 wrote:

 Hi, I am using pyramid with sqlalchemy. I wonder wether there are some 
 tools  like 'django-admin.py sqlall' in django that print the sql 
 statement of 'CREATE TABLE'.


I’m unfamiliar with Django, so maybe you’re looking for something other 
than this, but there’s a FAQ about this:

  
http://docs.sqlalchemy.org/en/rel_0_9/faq.html#how-can-i-get-the-create-table-drop-table-output-as-a-string

(If you want a powerful tool to help with migrating your database between 
schema revisions, see Alembic: http://alembic.readthedocs.org/ .)



Re: [sqlalchemy] SQLAlchemy 0.9.2 UnicodeEncodeErrors with PG 9.3.2 on Amazon RDS

2014-02-18 Thread Jeff Dairiki
On Tue, Feb 18, 2014 at 02:42:36PM -0800, Valentino Volonghi wrote:
 Hey guys,
 
 we've moved recently to SQLAlchemy 0.9.2 from 0.7.9.
 
 This move coincided with the introduction of UnicodeEncoreErrors in our 
 system. They appear to be happening at random and have no real way for us 
 to debug as we can't really reproduce them, except that they happen in our 
 system and the tracebacks lead directly to the insides of sqlalchemy.
 
 https://gist.github.com/dialtone/9081835
 
 This is the traceback we get, with the nice and clear:
 UnicodeEncodeError: 'ascii' codec can't encode character u'\xdc' in 
 position 1: ordinal not in range(128)
 
 Our PG 9.3 is setup with encoding at utf8, we also have the client_encoding 
 set at utf8 but it still seems that the library randomly picks what to do 
 in that spot.
 
 The stacktrace points to this error:
 https://github.com/zzzeek/sqlalchemy/blob/rel_0_9_2/lib/sqlalchemy/orm/strategies.py#L154
 
 Almost no matter what our calls are, when we fetch an object with a Unicode 
 field that is actually using multi-bytes it ends up failing point to that 
 line with the UnicodeEncodeError.
 
 If I were to trust my guts I'd say it might be related to py3k support but 
 I'd probably be wrong.
 
 Can anyone help us figure out what this issue might be?

I've been having this same (I think) problem for some time.  Running
sqlalchemy 0.9.2, using the mysqldb driver, python 2.6.  The problem
has been happening sporadically on our production server.  Until
today, I could not reproduce it in a test environment, so I've slowly
been adding debug logging to the production code to try to see what's
going on.

I've finally figured it out, I think, just a couple of hours ago, as
it turns out.  There is a race condition having to do with calling
dialect.initialize().  If multiple threads are clamoring to access the
database when the app starts, some of them may get to the database
before the dialect is completely initialized.  Since
dialect.initialize() is responsible for (among other things) correctly
setting dialect.returns_unicode_strings, this can result in sqlalchemy
erroneously attempting to decode strings that are already unicode
(which results in the UnicodeEncodeError.)

Anyhow, bug report is at:
  https://bitbucket.org/zzzeek/sqlalchemy/issue/2964/

As a workaround, at app config time, right after create_engine is
called, I execute a query (before there is a possibility of a
multi-thread race.)  E.g.

engine = sa.create_engine(...)

# early query to force dialect.initialize()
engine.execute(sa.sql.select([1]))


Jeff 



[sqlalchemy] Re: using Utf8 problem

2014-01-30 Thread Jeff Dairiki


On Thursday, January 30, 2014 6:11:15 AM UTC-8, Alexander Peprepelica wrote:

 I have such code

 engine = create_engine(u'mysql+mysqldb://
 login:pass@127.0.0.1:3307/mydb?charset=utf8&use_unicode=1')

 in windows all is fine

 But when I execute code in linux I have problem with unicode
 What I do wrong?


Can you be more specific as to what problem you are having?

Not having much to go on my first guess is: What version of MySQLdb 
(mysql-python) are you using?
Current is 1.2.5.  If you're using a distribution-installed version what 
you have may be quite a bit older.
You probably want at least 1.2.3 (which fixed a memory leak having to do 
with the handling of unicode
values.)



Re: [sqlalchemy] Anybody have twophase/zope.sqlalchemy/MySQL working?

2014-01-13 Thread Jeff Dairiki
On Sun, Jan 12, 2014 at 07:46:13PM -0500, Michael Bayer wrote:
 On Jan 11, 2014, at 9:42 PM, Laurence Rowe laurencer...@gmail.com wrote:
  2. Make the engine Transaction aware of the configured reset_on_return 
  behaviour so that Transaction.close() can either rollback or commit. 
  Connection.close() could then call Transaction.close().
 
 Sorta went with #2, you can review how I went about it in 
 https://github.com/zzzeek/sqlalchemy/commit/9c64607572a04eb2ed7981db8999732100f39d4d
  .  It should be going through the jenkins tests in the next few hours.

Thank you Mike and Laurence!

FWIW, preliminary testing here (with the web app that was causing
the original trouble) indicates that this fixes the issue.

Jeff



Re: [sqlalchemy] Anybody have twophase/zope.sqlalchemy/MySQL working?

2014-01-09 Thread Jeff Dairiki
Thank you for the reply!  Sorry for the delayed response.  (Holidays.)

On Mon, Dec 23, 2013 at 11:52:25PM -0800, Laurence Rowe wrote:
 On Thursday, 12 December 2013 16:30:59 UTC-8, Jeff Dairiki wrote:
 
  Do you understand why the datamanager is finding the SessionTransaction 
  and using that directly?  (At least I think that's what it's doing --- 
  I haven't sussed this out completely.)  I'm referring to the line from 
  SessionDataManager.__init__: 
 
 self.tx = session.transaction._iterate_parents()[-1] 
 
 
 This is to handle the case of a session being a nested transaction at the 
 time the ``zope.sqlalchemy`` datamanager joins the ``transaction`` 
 transaction
  
 
  and then later the session manager calls .prepare(), .commit() 
  and/or .rollback() on self.tx, *if* self.tx is not None. 
 
  The thing is, for me, if the session has only been used for read 
  operation, 
  self.tx seems to be None.  So the datamanager never commits anything. 
 
  I don't understand (yet) why the data manager doesn't just call 
  .prepare() and .commit() directly on the sqlalchemy session instance. 
 
 
 The zope.sqlalchemy datamanager will rollback the transaction when it 
 detects that no work is done. In that case self.tx is set to None during 
 SessionTransaction.commit and during the two phase commit there is nothing 
 to do.
  
 The sequence in which the datamanager methods are called is found in 
 Transaction._commitResources: 
 https://github.com/zopefoundation/transaction/blob/1.4.1/transaction/_transaction.py#L382
 
 Dunno.  It doesn't seem like it should be MySQL specific thing, but maybe 
  postgres and others are more forgiving of a two-phase XA BEGIN being 
  terminated by a regular one-phase ROLLBACK? 
 
  Anyhow, I'll keep poking when I find a moment.  
 
 
 (And from a later message in this thread)
 
 Okay, so this was not a complete solution.  It does cause the 
 datamanager to commit the sessions when the transaction is terminated 
  by transaction.commit(), but neither setting the 
  initial state to STATUS_CHANGED, nor calling mark_changed() is enough 
  to get the datamanager to rollback the session if the transaction 
  is ended with transaction.abort(). 
 
 
 Looking at the datamanager logic again, I don't think self.tx can ever be 
 None in abort() (at least not normally) but closing the session will close 
 the underlying transaction on the connection, which issues the rollback: 
 https://github.com/zzzeek/sqlalchemy/blob/rel_0_9_0b1/lib/sqlalchemy/engine/base.py#L1151

Okay, I've traced things out a bit more.

If the session state is not STATUS_INVALIDATED (aka STATUS_CHANGED),
SessionDataManager.commit() does a self._finish('no work').  That is
where self.tx gets set to None (this --- correctly --- then
causes .tpc_vote() and .tpc_finish() to be no-ops.)

So here's the crux of the biscuit: in two-phase-commit mode (at least
with MySQL) the sqlalchemy session (or session transaction) must be either
committed or explicitly rolled back before it is closed.
SessionDataManager.commit() does not do a rollback.

Example code:

import sqlalchemy as sa 

engine = sa.create_engine('mysql://guest@furry/test',
  echo='debug', 
  echo_pool='debug')
Sess = sa.orm.sessionmaker(bind=engine, twophase=True)
sess = Sess()
sess.query(sa.null()).scalar()
#sess.rollback()
sess.close()

Edited log output:

DEBUG:sqlalchemy.pool.QueuePool:Connection _mysql.connection open to 
'furry' at 29a3370 checked out from pool
INFO:sqlalchemy.engine.base.Engine:BEGIN TWOPHASE (implicit)
INFO:sqlalchemy.engine.base.Engine:XA BEGIN %s
INFO:sqlalchemy.engine.base.Engine:('_sa_a38de3b7234d5fe8ad55d2bb13ec195c',)
INFO:sqlalchemy.engine.base.Engine:SELECT NULL AS anon_1
DEBUG:sqlalchemy.pool.QueuePool:Connection _mysql.connection open to 
'furry' at 29a3370 being returned to pool
DEBUG:sqlalchemy.pool.QueuePool:Connection _mysql.connection open to 
'furry' at 29a3370 rollback-on-return
INFO:sqlalchemy.pool.QueuePool:Invalidate connection _mysql.connection 
open to 'furry' at 29a3370 (reason: OperationalError:(1399, 'XAER_RMFAIL: The 
command cannot be executed when global transaction is in the  ACTIVE state'))
DEBUG:sqlalchemy.pool.QueuePool:Closing connection _mysql.connection open 
to 'furry' at 29a3370

Here, no attempt is made to terminate the two-phase (XA) transaction
until the connection is returned to the pool, at which point a
plain one-phase 'ROLLBACK' is issued.  MySQL does not like this,
thus the XAER_RMFAIL error.

Uncommenting the 'sess.rollback()' in the above example results in an
'XA END and 'XA ROLLBACK' being emitted before the connection is
returned to the pool, properly ending the two-phase (XA) transaction.
This eliminates the XAER_RMFAIL error, and results in proper recycling
of the pooled connection.


In zope.sqlalchemy.datamanger, if I change SessionDataManager.commit()
from

Re: [sqlalchemy] Anybody have twophase/zope.sqlalchemy/MySQL working?

2013-12-14 Thread Jeff Dairiki
Thanks for the replies!

On Fri, Dec 13, 2013 at 08:40:33AM -0800, Jonathan Vanasco wrote:
 Also, I'm remembering the stuff about the need to use 'mark_changed'.  on 
 the project page ( https://pypi.python.org/pypi/zope.sqlalchemy ) search 
 for the section about `mark_changed` -- that info might be relevant

Aha!  That does work.  Equivalently, it turns out, one can start the
datamanager in the changed state:

from zope.sqlalchemy.datamanager import STATUS_CHANGED

Session = sa.orm.sessionmaker(
bind=engine, twophase=True,
extension=ZopeTransactionExtension(STATUS_CHANGED))

which I *swear* I had already tried without success, but I just tried
it again and it worked.

(Still, it doesn't feel right to have to mark my only-used-for-reading
session changed, but, whatever works, I guess...)

And I would still like to understand why zope.sqlalchemy is
poking around in sqlalchemy internals, rather than relying on the public
API.  Calling session.prepare()/session.commit() (or session.rollback())
seems to work just fine — the session knows what to do, after all.

 just a thought--
 
 two things:
 
 1-  The thing is, for me, if the session has only been used for read 
 operation, 
 self.tx seems to be None.  So the datamanager never commits anything.
 
 Check out Tres Seaver's reply 
 here: https://groups.google.com/d/msg/pylons-discuss/R4S-UwHV6ww/ekD7M9UEvp8J

Not sure, but I think Tres is talking about the case where the database
is not accessed at all (no read access either.)
(In that case, no datamanager will be created or attached to the
session, and correctly no commit/rollback will be issued.)

 
 2- 
 
 is it possible that this behavior is driven by a MySQL server sql mode ?  
 
 if you're not familiar with them, they let you completely change the 
 behavior of mysql -- on the server, connection, or statement.   ( 
 incidentally, they were what drove me mad and into the warm , stable, and 
 loving arms of PostgreSQL )
 
 there could be a SQL mode setting that is causing some weirdness for you -- 
  http://dev.mysql.com/doc/refman/5.7/en/server-sql-mode.html 

Oh my god.  I'm just going to close that page, and try to forget I
ever saw that...  On quick look though, none of those modes seem to
have anything to do with transactions (thankfully).


Thank you for the pointers!



Re: [sqlalchemy] Anybody have twophase/zope.sqlalchemy/MySQL working?

2013-12-14 Thread Jeff Dairiki
On Sat, Dec 14, 2013 at 06:58:14PM -0800, Jeff Dairiki wrote:
 Thanks for the replies!
 
 On Fri, Dec 13, 2013 at 08:40:33AM -0800, Jonathan Vanasco wrote:
  Also, I'm remembering the stuff about the need to use 'mark_changed'.  on 
  the project page ( https://pypi.python.org/pypi/zope.sqlalchemy ) search 
  for the section about `mark_changed` -- that info might be relevant
 
 Aha!  That does work.  Equivalently, it turns out, one can start the
 datamanager in the changed state:
 
 from zope.sqlalchemy.datamanager import STATUS_CHANGED
 
 Session = sa.orm.sessionmaker(
 bind=engine, twophase=True,
 extension=ZopeTransactionExtension(STATUS_CHANGED))
 
 which I *swear* I had already tried without success, but I just tried
 it again and it worked.

Okay, so this was not a complete solution.  It does cause the
datamanager to commit the sessions when the transaction is terminated
by transaction.commit(), but neither setting the
initial state to STATUS_CHANGED, nor calling mark_changed() is enough
to get the datamanager to rollback the session if the transaction
is ended with transaction.abort().
 
 (Still, it doesn't feel right to have to mark my only-used-for-reading
 session changed, but, whatever works, I guess...)
 
 And I would still like to understand why zope.sqlalchemy is
 poking around in sqlalchemy internals, rather than relying on the public
 API.  Calling session.prepare()/session.commit() (or session.rollback())
 seems to work just fine — the session knows what to do, after all.




Re: [sqlalchemy] Anybody have twophase/zope.sqlalchemy/MySQL working?

2013-12-12 Thread Jeff Dairiki
On Fri, Dec 13, 2013 at 12:26:08AM +0100, Thierry Florac wrote:
 Hi,
 I'm using two-phase transactions with ZODB, PostgreSQL and Oracle databases
 connected with SQLAlchemy without problem.
 I'm not using native zope.sqlalchemy package, but another package called
 ztfy.alchemy that I've built (based on zc.alchemy and zope.sqlalchemy), and
 I didn't try it with MySQL.
 Maybe you can have a look if that can help... =
 https://pypi.python.org/pypi/ztfy.alchemy

Hi Thierry,  Thank you for the reply.

I tried to install ztfy.alchemy to give it a try, but installation failed
with "No distributions at all found for zc.set".

In any case, your data manager looks quite a bit like the one in
zope.sqlalchemy.  I have a suspicion that it has the same problem.

Do you understand why the datamanager is finding the SessionTransaction
and using that directly?  (At least I think that's what it's doing ---
I haven't sussed this out completely.)  I'm referring to the line from
SessionDataManager.__init__:

   self.tx = session.transaction._iterate_parents()[-1]

and then later the session manager calls .prepare(), .commit()
and/or .rollback() on self.tx, *if* self.tx is not None.

The thing is, for me, if the session has only been used for read operation,
self.tx seems to be None.  So the datamanager never commits anything.

I don't understand (yet) why the data manager doesn't just call
.prepare() and .commit() directly on the sqlalchemy session instance.

Dunno.  It doesn't seem like it should be a MySQL-specific thing, but maybe
postgres and others are more forgiving of a two-phase XA BEGIN being
terminated by a regular one-phase ROLLBACK?

Anyhow, I'll keep poking when I find a moment.

Cheers,
Jeff


  I just noticed my connection pool isn't pooling.  Whenever a connection
  which has been used only for reading is returned to the pool an
  XAER_RMFAIL operational error is returned in response to the pool's
  rollback-on-return.  (This causes the connection to
  be closed rather than returned to the pool.  So far I haven't noticed
  any other deleterious effects.)
 
  Here's a simple test script.  I've tried this with 0.9b1 and 0.8.4.
 
import sqlalchemy as sa
from zope.sqlalchemy import ZopeTransactionExtension
import transaction
 
engine = sa.create_engine('mysql://guest@furry/test',
  echo='debug',
  echo_pool='debug')
 
Sess = sa.orm.sessionmaker(bind=engine, twophase=True,
   extension=ZopeTransactionExtension())
sess = Sess()
sess.query(sa.null()).scalar()
transaction.commit()
 
  Log output looks like (edited for brevity):
 
[...]
INFO sqlalchemy.engine.base.Engine BEGIN TWOPHASE (implicit)
INFO sqlalchemy.engine.base.Engine XA BEGIN %s
INFO sqlalchemy.engine.base.Engine
  ('_sa_ab2538f3cc26258e0a30bfd407d0d687',)
INFO sqlalchemy.engine.base.Engine SELECT NULL AS anon_1
INFO sqlalchemy.engine.base.Engine ()
DEBUG sqlalchemy.engine.base.Engine Col ('anon_1',)
DEBUG sqlalchemy.engine.base.Engine Row (None,)
DEBUG sqlalchemy.pool.QueuePool Connection _mysql.connection open to
  'furry' at 2093320 being returned to pool
DEBUG sqlalchemy.pool.QueuePool Connection _mysql.connection open to
  'furry' at 2093320 rollback-on-return
INFO sqlalchemy.pool.QueuePool Invalidate connection _mysql.connection
  open to 'furry' at 2093320 (reason: OperationalError:(1399, 'XAER_RMFAIL:
  The command cannot be executed when global transaction is in the  ACTIVE
  state'))
DEBUG sqlalchemy.pool.QueuePool Closing connection _mysql.connection
  open to 'furry' at 2093320
 
  So it looks like the zope.sqla data manager is not managing to commit
  the transaction.
 
  Things work okay with twophase=False.
 
  Anybody have a hint?



[sqlalchemy] Anybody have twophase/zope.sqlalchemy/MySQL working?

2013-12-10 Thread Jeff Dairiki
I just noticed my connection pool isn't pooling.  Whenever a connection
which has been used only for reading is returned to the pool an
XAER_RMFAIL operational error is returned in response to the pool's
rollback-on-return.  (This causes the connection to
be closed rather than returned to the pool.  So far I haven't noticed
any other deleterious effects.)

Here's a simple test script.  I've tried this with 0.9b1 and 0.8.4.

  import sqlalchemy as sa 
  from zope.sqlalchemy import ZopeTransactionExtension
  import transaction

  engine = sa.create_engine('mysql://guest@furry/test',
echo='debug', 
echo_pool='debug')

  Sess = sa.orm.sessionmaker(bind=engine, twophase=True,
 extension=ZopeTransactionExtension())
  sess = Sess()
  sess.query(sa.null()).scalar()
  transaction.commit()

Log output looks like (edited for brevity):
 
  [...]
  INFO sqlalchemy.engine.base.Engine BEGIN TWOPHASE (implicit)
  INFO sqlalchemy.engine.base.Engine XA BEGIN %s
  INFO sqlalchemy.engine.base.Engine ('_sa_ab2538f3cc26258e0a30bfd407d0d687',)
  INFO sqlalchemy.engine.base.Engine SELECT NULL AS anon_1
  INFO sqlalchemy.engine.base.Engine ()
  DEBUG sqlalchemy.engine.base.Engine Col ('anon_1',)
  DEBUG sqlalchemy.engine.base.Engine Row (None,)
  DEBUG sqlalchemy.pool.QueuePool Connection _mysql.connection open to 'furry' 
at 2093320 being returned to pool
  DEBUG sqlalchemy.pool.QueuePool Connection _mysql.connection open to 'furry' 
at 2093320 rollback-on-return
  INFO sqlalchemy.pool.QueuePool Invalidate connection _mysql.connection open 
to 'furry' at 2093320 (reason: OperationalError:(1399, 'XAER_RMFAIL: The 
command cannot be executed when global transaction is in the  ACTIVE state'))
  DEBUG sqlalchemy.pool.QueuePool Closing connection _mysql.connection open to 
'furry' at 2093320

So it looks like the zope.sqla data manager is not managing to commit
the transaction.

Things work okay with twophase=False.

Anybody have a hint?



Re: [sqlalchemy] SQLAlchemy 0.8.3 / 0.9.0b1 released

2013-10-30 Thread Jeff Dairiki
Thanks for the quick fix, Mike!

On Wed, Oct 30, 2013 at 10:37:48AM -0400, Michael Bayer wrote:
 this is a regression from http://www.sqlalchemy.org/trac/ticket/2818 and that 
 is fixed in eee219bc7e0656fb8afa9879 / 2576b5cdfb09fd1fb28 .
 
 
 On Oct 29, 2013, at 11:40 PM, Jeff Dairiki dair...@dairiki.org wrote:
 
  On Sat, Oct 26, 2013 at 05:41:04PM -0400, Michael Bayer wrote:
  I’ve released (hopefully without mistakes…) SQLAlchemy 0.8.3 and 0.9.0b1.
  
  Having just updated to 0.8.3 I'm seeing a new SAWarning.



Re: [sqlalchemy] SQLAlchemy 0.8.3 / 0.9.0b1 released

2013-10-29 Thread Jeff Dairiki
On Sat, Oct 26, 2013 at 05:41:04PM -0400, Michael Bayer wrote:
 I’ve released (hopefully without mistakes…) SQLAlchemy 0.8.3 and 0.9.0b1.

Having just updated to 0.8.3 I'm seeing a new SAWarning.  (I'm not quite
sure whether this qualifies as a bug, or whether maybe I was just doing
it wrong.)

Here's the shortest example I've come up with which elicits the warning:

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class ModelBase(Base):
    __tablename__ = 'ModelBases'
    id = sa.Column(sa.Integer, primary_key=True)
    model_type = sa.Column(sa.Integer)
    __mapper_args__ = {'polymorphic_on': model_type}

class Model(ModelBase):
    __tablename__ = 'Models'
    __mapper_args__ = {'polymorphic_identity': 1}
    id = sa.Column(sa.ForeignKey(ModelBase.id), primary_key=True)

sess = sa.orm.Session()

model_exists = sess.query(Model).exists()

The last line produces:

[...]/site-packages/sqlalchemy/sql/expression.py:2491: SAWarning: Column 'id' 
on table <sqlalchemy.sql.expression.Select at 0x4203f50; Select object> being 
replaced by Column(u'id', Integer(), table=<Select object>, primary_key=True, 
nullable=False), which has the same key.  Consider use_labels for select() 
statements.
  self[column.key] = column

Adding .with_labels() to the last line quiets the warning:

model_exists = sess.query(Model).with_labels().exists()

as does changing the name of the ``id`` column to something (anything)
else.



RE: [sqlalchemy] timing / profiling object loading ?

2013-09-27 Thread Jeff Peck
On Thursday, September 26, 2013 11:58:26 AM UTC-4, Michael Bayer wrote:

That's why I don't have a caching function included with SQLAlchemy.
Because then I'd be debugging it, not you :)

 

Ha!

 

My caching is pretty lightweight.  I do need to figure out a better system
though -- that's for post-launch / investing though !  I'll toss you a
preview when it's close to launch.  It's built on quite a bit of Bayer-Tech.

[Jeff Peck] 

 

Jonathan,

I'm late to this, but I was debugging a Flask app last night and found an
excellent extension called flask_debugtoolbar. I just did a quick check, and
there appears to be a port of this for Pyramid too. If this works like the
Flask version, it will intercept your page responses and give you a chance
to look at every query you're making via sqlalchemy. You can even click on
the queries to get the EXPLAIN. It also does a complete profile, so you can
see which functions are taking up the most time.

 

https://github.com/Pylons/pyramid_debugtoolbar
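(If the Pyramid port works like its docs describe, enabling it is a one-line
include; a sketch of the relevant development.ini fragment, assuming the
standard Pyramid scaffold layout:)

```ini
# development.ini -- development use only: the toolbar exposes internals
[app:main]
pyramid.includes =
    pyramid_debugtoolbar
```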

 



RE: [sqlalchemy] Automatically create secondary tables for many to many relationship?

2013-09-22 Thread Jeff Peck
 
 And replace it with something like this:
 class Keyword(Base):
  snip  (same as before) 
 
 class Foo(Base):
  snip  
 keywords = generate_many_to_many_for_me('Foo', 'Keyword')

 there's a recipe for this at this blog post; you might consider building
on its general example:
http://techspot.zzzeek.org/2011/05/17/magic-a-new-orm/
 the missing link is to just use mapping events to set things up at the
appropriate time.


Michael,
Your cats are awesome! I remember reading this the day you put it out there,
but I just didn't understand it at the time. I've been going over this and
now I understand it perfectly! Any chance of pulling some of this back into
sqlalchemy? many_to_many alone saves quite a bit of boilerplate code. 


The only change I made was to let many_to_many set up default local / remote
columns on the secondary table so you can just do:

class Video(Base):
    keywords = many_to_many('Keyword', 'video_keyword')

class many_to_many(DeferredProp):
    """Generates a many to many relationship."""

    def __init__(self, target, tablename, **kw):
        self.target = target
        self.tablename = tablename
        self.local = kw.get('local')
        self.remote = kw.get('remote')
        self.kw = kw

    def _config(self, cls, key):
        """Create an association table between parent/target
        as well as a relationship()."""

        target_cls = cls._decl_class_registry[self.target]
        local_pk = list(cls.__table__.primary_key)[0]
        target_pk = list(target_cls.__table__.primary_key)[0]

        if not self.local:
            self.local = cls.__tablename__.lower() + '_id'
        if not self.remote:
            self.remote = target_cls.__tablename__.lower() + '_id'

        t = Table(
            self.tablename,
            cls.metadata,
            Column(self.local, ForeignKey(local_pk),
                   primary_key=True),
            Column(self.remote, ForeignKey(target_pk),
                   primary_key=True),
            keep_existing=True
        )
        rel = relationship(target_cls,
                           secondary=t,
                           collection_class=self.kw.get('collection_class', set))
        setattr(cls, key, rel)
        self._setup_reverse(key, rel, target_cls)



Thanks,
Jeff Peck




RE: [sqlalchemy] How-to filter by the time part of a datetime field?

2013-08-30 Thread Jeff Peck
One way to do this is to use a function within your database to convert a
timestamp down to a basic time type, and then do comparison on the converted
value. Here is an example using sqlite as the back end. Sqlite has a time
function that can convert a datetime down to a time for you, so we get at
that using sqlalchemy's func:

from sqlalchemy import create_engine, Column, Integer, DateTime
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base

engine = create_engine('sqlite:///test.sqlite')
Session = sessionmaker(bind=engine)
Base = declarative_base()
class Test(Base):
    __tablename__ = 'test'
    id = Column(Integer, primary_key=True)
    timestamp = Column(DateTime, nullable=False)
Base.metadata.create_all(bind=engine)


# This is just sticking random timestamps into the database...
import datetime
import random
session = Session()
session.query(Test).delete()
for i in range(100):
    d = random.randint(1, 30)
    h = random.randint(0, 23)
    m = random.randint(0, 59)
    test = Test()
    test.timestamp = datetime.datetime(2013, 8, d, h, m)
    session.add(test)
session.commit()

# Here's the important part. Pull in func
from sqlalchemy import func

# Say we want any timestamp in the db regardless of date where the time
# is between 12:00 and 12:30
t1 = datetime.time(12, 00)
t2 = datetime.time(12, 30)

query = session.query(Test).filter(
    func.time(Test.timestamp).between(t1, t2))
for row in query.all():
    print(row.timestamp)
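Note that func.time() is SQLite-specific.  If portability matters, a hedged
alternative (a sketch, not verified against every backend) is to CAST the
DateTime column to Time and compare that:

```python
import datetime
import sqlalchemy as sa

# Portable-ish variant of the func.time() trick above: CAST the
# DateTime column to TIME.  Several backends understand this form,
# but check your dialect before relying on it.
meta = sa.MetaData()
test = sa.Table(
    'test', meta,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('timestamp', sa.DateTime, nullable=False),
)

t1, t2 = datetime.time(12, 0), datetime.time(12, 30)
criterion = sa.cast(test.c.timestamp, sa.Time).between(t1, t2)
print(criterion)  # renders a CAST(... AS TIME) BETWEEN ... comparison
```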

Regards,
Jeff Peck

-Original Message-
From: sqlalchemy@googlegroups.com [mailto:sqlalchemy@googlegroups.com] On
Behalf Of Warwick Prince
Sent: Friday, August 30, 2013 8:01 AM
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] How-to filter by the time part of a datetime
field?

 Hi Warwick,
 
 On 30/08/2013 14:38, Warwick Prince wrote:
 I'm sure there is a better way, but you could always filter using a
date/time and supply the date part as well (i.e. today) so that you are
comparing datetime to datetime.  (Something like: select all rows where the
datetime is between 2013-08-30 06:00:00 and 2013-08-30 11:00:00)  :-)
 
 Thanks for your suggestion, this could do the trick.
 
 However my rows are split over a lot of days and if I follow your advice
I'll end with a lot of between 2013-08-30 06:00:00 and 2013-08-30 11:00:00
(one for each day).  This will certainly work as expected, but I'm looking
for a more efficient way of doing this.
 
 
 Thanks.
 -- 
 Laurent Meunier laur...@deltalima.net
 
Ahh - I see.  Oh well, I'm sure someone with infinitely better SQL skills
with chime in shortly.  :-)

Warwick



RE: [sqlalchemy] SQLAlchemy hot-copy capability

2013-08-30 Thread Jeff Peck
Hi,
 
I'm currently running several python applications (each app using
sqlalchemy) accessing (read/write) a single SQLite database stored on disk.
 
For performance reasons, I would like to store this db file in RAM memory
(ie, in my /dev/shm)
 
The applications would then access a shared in-memory db through SQLAlchemy
and a backup application would periodically make a hot copy of the in-memory
db to disk. Then, on a [power off, power on] sequence, this backup app would
copy the backed up db file from disk to RAM before launching the other apps.
 
Before starting, I would like to know if you think it is feasible. My
questions are:
   1- Does SQLAlchemy have an API to do a hot copy? (based on
http://sqlite.org/backup.html for this particular db type)
   2- If so, is this an actual hot copy, ie: the other apps will still run
without waiting for the backup app to finish the backup?
   3- Is there a particular configuration in SQLAlchemy that enables sharing
an in-memory db from different apps (python processes)?
   
Thanks a lot for your feedback,
 
Pierre
-- 


 
Pierre,
While I do think this is feasible,  I would discourage going down this path
unless you have a really good reason. It sounds like you need a real
database engine like postgres here. It basically does everything you
describe out of the box, is easy to set up, and will likely be more reliable
than anything you could come up with on your own. It will also perform
better when dealing with many simultaneous transactions. 
 
If you still run into performance problems you could look into introducing a
caching layer such as memcached, but I wouldn't cross that bridge until I
had thoroughly tweaked my db settings and identified real bottlenecks in my
application(s).
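On question 1 above: SQLAlchemy itself has no hot-copy API, but for SQLite
specifically, Python's stdlib sqlite3 module (3.7+) wraps the same online
backup interface the question links to.  A minimal sketch, independent of
SQLAlchemy (paths here are placeholders):

```python
import sqlite3

# src stands in for the in-memory /dev/shm database; dst for the
# on-disk backup copy.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
src.execute("INSERT INTO t (v) VALUES ('hello')")
src.commit()

dst = sqlite3.connect(":memory:")
src.backup(dst)  # online copy: readers of src keep working while it runs

print(dst.execute("SELECT v FROM t").fetchone()[0])  # -> hello
```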
 
Jeff Peck



Re: [sqlalchemy] TypeDecorator to store bool as ENUM('N', 'Y')?

2013-08-16 Thread Jeff Dairiki
On Thu, Aug 15, 2013 at 12:21:43PM -0400, Michael Bayer wrote:
 
 On Aug 14, 2013, at 9:24 PM, Jeff Dairiki dair...@dairiki.org wrote:
 
  I'm working with an existing MySQL schema that has lots of columns of
  type ENUM('N', 'Y').  I'd like to deal with them as real booleans on
  the python side.
  
  I have a simple TypeDecorator which almost works (I think):
  
 class YNBoolean(sqlalchemy.types.TypeDecorator):
  
 impl = mysql.ENUM('N', 'Y', charset='ascii')
  
 def process_bind_param(self, value, dialect):
 if value is None:
 return value
 return 'Y' if value else 'N'
  
 def process_result_value(self, value, dialect):
 if value is None:
 return None
 return value == 'Y'
  
  The one problem I've discovered with this is that
  
 session.query(MyTable).filter(MyTable.ynbool)
  
  produces a query like
  
 SELECT ... FROM MyTable WHERE MyTable.ynbool;
  
  What I really want is
  
 SELECT ... FROM MyTable WHERE MyTable.ynbool = 'Y';
  
  
  (If I do .filter(MyTable.ynbool == True) that does work as 
  desired/expected.)
  
  
  Is there a way to customize how my column gets compiled when used
  in an expression in a boolean context?  (If not, I can live with it
  as is.  I'll just get surprised once in awhile when I forget that
  treating the column as a boolean in expressions won't work.)
  
 
 there is now, check out 
 http://docs.sqlalchemy.org/en/rel_0_8/core/types.html#types-operators .

Thank you for the quick response (as always), Mike.

But I'm still fuddled.

Okay, so I can customize how __invert__(), __and__() and __or__() (all
the operations that evaluate their arguments in a boolean context) get
compiled.  (I think that's what you're pointing me at, right?)
(Really cool, by the way!)

But that still won't fix my original example where .filter(MyTable.ynbool)
produces SELECT ... WHERE MyTable.ynbool;, since none of those operations
are involved.

I've tried adding a custom comparator_factory.__nonzero__() operator
to my type, but that didn't seem to work.  (Though it's possible I did
it wrong.)  Should it have?

Anyhow, this is not even close to the highest priority item in my
queue, ATM.  (So it shouldn't be for you either, unless there's an
easy solution that I'm not seeing.)  I will look into it further at
some point in the future.

Jeff





[sqlalchemy] TypeDecorator to store bool as ENUM('N', 'Y')?

2013-08-14 Thread Jeff Dairiki
I'm working with an existing MySQL schema that has lots of columns of
type ENUM('N', 'Y').  I'd like to deal with them as real booleans on
the python side.

I have a simple TypeDecorator which almost works (I think):

class YNBoolean(sqlalchemy.types.TypeDecorator):

    impl = mysql.ENUM('N', 'Y', charset='ascii')

    def process_bind_param(self, value, dialect):
        if value is None:
            return value
        return 'Y' if value else 'N'

    def process_result_value(self, value, dialect):
        if value is None:
            return None
        return value == 'Y'

The one problem I've discovered with this is that

session.query(MyTable).filter(MyTable.ynbool)

produces a query like

SELECT ... FROM MyTable WHERE MyTable.ynbool;

What I really want is

SELECT ... FROM MyTable WHERE MyTable.ynbool = 'Y';


(If I do .filter(MyTable.ynbool == True) that does work as desired/expected.)


Is there a way to customize how my column gets compiled when used
in an expression in a boolean context?  (If not, I can live with it
as is.  I'll just get surprised once in awhile when I forget that
treating the column as a boolean in expressions won't work.)

Thank you for any help.
Jeff





[sqlalchemy] Re: Can't make an association table use InnoDB

2012-05-31 Thread Jeff
The tables don't exist yet. The Base.metadata.create_all(engine) is to
create them.

Thanks!

On May 30, 11:52 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 This might be because the tables you're trying to reference are themselves 
 not InnoDB.  Try running DESCRIBE on the referenced tables at the MySQL 
 console to help confirm this, as well as the same CREATE TABLE statement 
 below.

 On May 30, 2012, at 11:31 PM, Jeff wrote:







  Having difficulty creating a database that includes the following
  plumbing:

  class Base(object):
     id = Column(Integer, primary_key=True)
     __table_args__ = {'mysql_engine': 'InnoDB'}

  Base = declarative_base(cls=Base)

  class Event(Base):
    

  Avalanche_Event_Association = Table('Avalanche_Event_Association',
     Base.metadata,
     Column('avalanche_id', Integer, ForeignKey('Avalanche.id')),
     Column('event_id', Integer, ForeignKey('Event.id')),
     mysql_engine='InnoDB')

  class Avalanche(Base):
    

  Doing Base.metadata.create_all(engine) yields:

  OperationalError: (OperationalError) (1005, Can't create table
  'alstottj.Avalanche_Event_Association' (errno: 150)) '\nCREATE TABLE
  `Avalanche_Event_Association` (\n\tavalanche_id INTEGER, \n\tevent_id
  INTEGER, \n\tFOREIGN KEY(avalanche_id) REFERENCES `Avalanche` (id), \n
  \tFOREIGN KEY(event_id) REFERENCES `Event` (id)\n)ENGINE=InnoDB\n
  \n' ()

  Commenting out the line mysql_engine='InnoDB' removes the error and
  the tables are all created, but the association table is now MyISAM.

  I have some feelings on what could be causing the error, but they all
  seem improbable. Thoughts?




[sqlalchemy] Re: Can't make an association table use InnoDB

2012-05-31 Thread Jeff
Perhaps it's relevant (though I suspect not) that the class Avalanche
actually contains:

class Avalanche(Base):

    events = relationship(Event,
                          secondary=Avalanche_Event_Association)

This is what prevents us from writing the classes in the following
order in the database definition .py file:

class Event(Base):
.

class Avalanche(Base):


Avalanche_Event_Association = Table('Avalanche_Event_Association',


Because Avalanche needs to reference Avalanche_Event_Association. I
hope, however, that the create_all function is able to
appropriately create the tables anyway, regardless of their order in
the database definition .py file.

Thanks!

On May 31, 2:21 pm, Jeff jeffalst...@gmail.com wrote:
 The tables don't exist yet. The Base.metadata.create_all(engine) is to
 create them.

 Thanks!

 On May 30, 11:52 pm, Michael Bayer mike...@zzzcomputing.com wrote:







  This might be because the tables you're trying to reference are themselves 
  not InnoDB.  Try running DESCRIBE on the referenced tables at the MySQL 
  console to help confirm this, as well as the same CREATE TABLE statement 
  below.

  On May 30, 2012, at 11:31 PM, Jeff wrote:

   Having difficulty creating a database that includes the following
   plumbing:

   class Base(object):
      id = Column(Integer, primary_key=True)
      __table_args__ = {'mysql_engine': 'InnoDB'}

   Base = declarative_base(cls=Base)

   class Event(Base):
     

   Avalanche_Event_Association = Table('Avalanche_Event_Association',
      Base.metadata,
      Column('avalanche_id', Integer, ForeignKey('Avalanche.id')),
      Column('event_id', Integer, ForeignKey('Event.id')),
      mysql_engine='InnoDB')

   class Avalanche(Base):
     

   Doing Base.metadata.create_all(engine) yields:

   OperationalError: (OperationalError) (1005, Can't create table
   'alstottj.Avalanche_Event_Association' (errno: 150)) '\nCREATE TABLE
   `Avalanche_Event_Association` (\n\tavalanche_id INTEGER, \n\tevent_id
   INTEGER, \n\tFOREIGN KEY(avalanche_id) REFERENCES `Avalanche` (id), \n
   \tFOREIGN KEY(event_id) REFERENCES `Event` (id)\n)ENGINE=InnoDB\n
   \n' ()

   Commenting out the line mysql_engine='InnoDB' removes the error and
   the tables are all created, but the association table is now MyISAM.

   I have some feelings on what could be causing the error, but they all
   seem improbable. Thoughts?




[sqlalchemy] Re: Can't make an association table use InnoDB

2012-05-31 Thread Jeff
Thanks! I don't quite follow the statement about fully mapped
association table being unusual. The first Many-to-Many example you
linked was the structure I copied when making my own tables here. Have
I deviated from it in some way? Or should the example on the site have
viewonly=True, if being used with InnoDB? Perhaps I just wasn't being
clear in my reproducing them here. Just once again now, with the
additional relevant bits in:

class Base(object):
    id = Column(Integer, primary_key=True)
    __table_args__ = {'mysql_engine': 'InnoDB'}

Base = declarative_base(cls=Base)

class Event(Base):
   

Avalanche_Event_Association = Table(
    'Avalanche_Event_Association',
    Base.metadata,
    Column('avalanche_id', Integer, ForeignKey('Avalanche.id')),
    Column('event_id', Integer, ForeignKey('Event.id')),
    mysql_engine='InnoDB')

class Avalanche(Base):
    events = relationship(Event,
                          secondary=Avalanche_Event_Association)


Doing Base.metadata.create_all(engine) yields an error creating the
Avalanche_Event_Association table.
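A server-free sanity check (a sketch, not this thread's resolution): compile
the association table's DDL against the MySQL dialect and confirm that
ENGINE=InnoDB is actually emitted, which narrows errno 150 down to the
referenced tables rather than the table arguments:

```python
import sqlalchemy as sa
from sqlalchemy.schema import CreateTable
from sqlalchemy.dialects import mysql

# Rebuild just the association table and render its CREATE TABLE
# for MySQL without connecting to a server.
meta = sa.MetaData()
assoc = sa.Table(
    'Avalanche_Event_Association', meta,
    sa.Column('avalanche_id', sa.Integer),
    sa.Column('event_id', sa.Integer),
    mysql_engine='InnoDB',
)

ddl = str(CreateTable(assoc).compile(dialect=mysql.dialect()))
print(ddl)  # the statement should end with )ENGINE=InnoDB
```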

On May 31, 3:28 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 create_all() only can determine the order of tables if you use ForeignKey and 
 ForeignKeyConstraint objects correctly on the source Table objects and/or 
 declarative classes.

 See http://docs.sqlalchemy.org/en/rel_0_7/orm/relationships.html#many-to-... and
 http://docs.sqlalchemy.org/en/rel_0_7/orm/relationships.html#associat... for
 examples of these configurations.  Note that mixing a fully mapped
 association object and secondary is a bit unusual and you'll want
 viewonly=True if you're doing that.

 On May 31, 2012, at 2:32 PM, Jeff wrote:

  Perhaps it's relevant (though I suspect not) that the class Avalanche
  actually contains:

  class Avalanche(Base):
     
     events = relationship(Event,
  secondary=Avalanche_Event_Association)

  This is what prevents us from writing the classes in the following
  order in the database definition .py file:

  class Event(Base):
     .

  class Avalanche(Base):
     

  Avalanche_Event_Association = Table('Avalanche_Event_Association',
     

  Because Avalanche needs to reference Avalanche_Event_Association. I
  hope, however, that the create_all function is able to
  appropriately create the tables anyway, regardless of their order in
  the database definition .py file.

  Thanks!

  On May 31, 2:21 pm, Jeff jeffalst...@gmail.com wrote:
  The tables don't exist yet. The Base.metadata.create_all(engine) is to
  create them.

  Thanks!

  On May 30, 11:52 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  This might be because the tables you're trying to reference are 
  themselves not InnoDB.  Try running DESCRIBE on the referenced tables at 
  the MySQL console to help confirm this, as well as the same CREATE TABLE 
  statement below.

  On May 30, 2012, at 11:31 PM, Jeff wrote:

  Having difficulty creating a database that includes the following
  plumbing:

  class Base(object):
     id = Column(Integer, primary_key=True)
     __table_args__ = {'mysql_engine': 'InnoDB'}

  Base = declarative_base(cls=Base)

  class Event(Base):
    

  Avalanche_Event_Association = Table('Avalanche_Event_Association',
     Base.metadata,
     Column('avalanche_id', Integer, ForeignKey('Avalanche.id')),
     Column('event_id', Integer, ForeignKey('Event.id')),
     mysql_engine='InnoDB')

  class Avalanche(Base):
    

  Doing Base.metadata.create_all(engine) yields:

  OperationalError: (OperationalError) (1005, Can't create table
  'alstottj.Avalanche_Event_Association' (errno: 150)) '\nCREATE TABLE
  `Avalanche_Event_Association` (\n\tavalanche_id INTEGER, \n\tevent_id
  INTEGER, \n\tFOREIGN KEY(avalanche_id) REFERENCES `Avalanche` (id), \n
  \tFOREIGN KEY(event_id) REFERENCES `Event` (id)\n)ENGINE=InnoDB\n
  \n' ()

  Commenting out the line mysql_engine='InnoDB' removes the error and
  the tables are all created, but the association table is now MyISAM.

  I have some feelings on what could be causing the error, but they all
  seem improbable. Thoughts?




[sqlalchemy] Re: Can't make an association table use InnoDB

2012-05-31 Thread Jeff
Well, one of the worst things that can happen in programming has
happened: It now works, and I don't know why. I didn't change
anything that I know of, and I definitely didn't change the
capitalization. Guess I'll just slowly back away from the machine and
hope everything stays that way.

Thanks for the tip on capitalization, though. Good to know!

On May 31, 3:55 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On May 31, 2012, at 3:49 PM, Jeff wrote:

  Thanks! I don't quite follow the statement about fully mapped
  association table being unusual.

 your name Avalanche_Event_Association with CamelCase made me think it was a 
 mapped class, but this is not the case as you have it as a Table.
 
 the problem might be those uppercase names you're using in your ForeignKey 
 declarations, as your MySQL may or may not actually be case sensitive.  The 
 attached script works for me on OS X; however, MySQL's case sensitivity is 
 platform-dependent.  Keep all the table names totally lower case with MySQL, 
 as it's a nightmare with case sensitivity.  Note SQLAlchemy treats names that 
 aren't all lower case as case sensitive.

  [attachment: test.py, 1K]

  The first Many-to-Many example you
  linked was the structure I copied when making my own tables here. Have
  I deviated from it in some way? Or should the example on the site have
  viewonly=True, if being used with InnoDB? Perhaps I just wasn't being
  clear in my reproducing them here. Just once again now, with the
  additional relevant bits in:

  class Base(object):
     id = Column(Integer, primary_key=True)
     __table_args__ = {'mysql_engine': 'InnoDB'}

  Base = declarative_base(cls=Base)

  class Event(Base):
    

  Avalanche_Event_Association = Table('Avalanche_Event_Association',
     Base.metadata,
     Column('avalanche_id', Integer, ForeignKey('Avalanche.id')),
     Column('event_id', Integer, ForeignKey('Event.id')),
     mysql_engine='InnoDB')

  class Avalanche(Base):
     events = relationship(Event,
  secondary=Avalanche_Event_Association)
     

  Doing Base.metadata.create_all(engine) yields an error creating the
  Avalanche_Event_Association table.

  On May 31, 3:28 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  create_all() only can determine the order of tables if you use ForeignKey 
  and ForeignKeyConstraint objects correctly on the source Table objects 
  and/or declarative classes.

  See http://docs.sqlalchemy.org/en/rel_0_7/orm/relationships.html#many-to-... for examples
   of these configurations.  Note that mixing a fully mapped association 
  object and secondary is a bit unusual and you'll want viewonly=True if 
  you're doing that.

  On May 31, 2012, at 2:32 PM, Jeff wrote:

  Perhaps it's relevant (though I suspect not) that the class Avalanche
  actually contains:

  class Avalanche(Base):
     
     events = relationship(Event,
  secondary=Avalanche_Event_Association)

  This is what prevents us from writing the classes in the following
  order in the database definition .py file:

  class Event(Base):
     .

  class Avalanche(Base):
     

  Avalanche_Event_Association = Table('Avalanche_Event_Association',
     

  Because Avalanche needs to reference Avalanche_Event_Association. I
  hope, however, that the create_all function is able to
  appropriately create the tables anyway, regardless of their order in
  the database definition .py file.

  Thanks!

  On May 31, 2:21 pm, Jeff jeffalst...@gmail.com wrote:
  The tables don't exist yet. The Base.metadata.create_all(engine) is to
  create them.

  Thanks!

  On May 30, 11:52 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  This might be because the tables you're trying to reference are 
  themselves not InnoDB.  Try running DESCRIBE on the referenced tables 
  at the MySQL console to help confirm this, as well as the same CREATE 
  TABLE statement below.

  On May 30, 2012, at 11:31 PM, Jeff wrote:

  Having difficulty creating a database that includes the following
  plumbing:

  class Base(object):
     id = Column(Integer, primary_key=True)
     __table_args__ = {'mysql_engine': 'InnoDB'}

  Base = declarative_base(cls=Base)

  class Event(Base):
    

  Avalanche_Event_Association = Table('Avalanche_Event_Association',
     Base.metadata,
     Column('avalanche_id', Integer, ForeignKey('Avalanche.id')),
     Column('event_id', Integer, ForeignKey('Event.id')),
     mysql_engine='InnoDB')

  class Avalanche(Base):
    

  Doing Base.metadata.create_all(engine) yields:

  OperationalError: (OperationalError) (1005, Can't create table
  'alstottj.Avalanche_Event_Association' (errno: 150)) '\nCREATE TABLE
  `Avalanche_Event_Association` (\n\tavalanche_id INTEGER, \n\tevent_id
  INTEGER, \n\tFOREIGN KEY(avalanche_id) REFERENCES `Avalanche` (id), \n
  \tFOREIGN KEY(event_id) REFERENCES `Event` (id)\n)ENGINE=InnoDB\n
  \n' ()

  Commenting out the line mysql_engine='InnoDB' removes the error and
  the tables are all created

[sqlalchemy] Re: Lock table, do things to table, unlock table: Best way?

2012-05-30 Thread Jeff
Unique constraints have worked well. Thanks!
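For reference, the unique-constraint approach can be sketched as follows (a minimal sketch with hypothetical table and column names, SQLite standing in for MySQL, and modern SQLAlchemy 1.4+ import paths):

```python
from sqlalchemy import Column, Integer, UniqueConstraint, create_engine
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Result(Base):
    # Hypothetical results table: at most one row per (param_a, param_b) pair.
    __tablename__ = 'result'
    id = Column(Integer, primary_key=True)
    param_a = Column(Integer, nullable=False)
    param_b = Column(Integer, nullable=False)
    __table_args__ = (UniqueConstraint('param_a', 'param_b'),)

engine = create_engine('sqlite://')  # stand-in backend for the sketch
Base.metadata.create_all(engine)

session = Session(bind=engine)
session.add(Result(param_a=1, param_b=2))
session.commit()

# A second process inserting the same pair gets an error
# instead of silently duplicating the rows.
session.add(Result(param_a=1, param_b=2))
try:
    session.commit()
except IntegrityError:
    session.rollback()

print(session.query(Result).count())  # 1 -- no duplicate row
```

As Simon notes, this does not prevent the duplicated calculation, only the duplicated rows.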

On May 29, 1:44 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 yup

 On May 29, 2012, at 1:01 PM, Jeff wrote:

  Thanks  Michael,

  Just to make clear what exactly begin_nested() is contributing:

  Normal case:
  session.rollback() goes back to the last session.commit()

  session.begin_nested() case:
  session.rollback() goes back to the last session.begin_nested() or
  session.commit(), whichever occurred last.

  Correct?

  On May 28, 11:54 am, Michael Bayer mike...@zzzcomputing.com wrote:
  An option to add along to the unique constraint, if you expect to get 
  collisions often, is to use a SAVEPOINT so that a process can roll back 
  partially if this particular INSERT fails, then use the row.  The Session 
  offers SAVEPOINT via begin_nested():

  session.begin_nested()
  try:
       session.add(thing_that_may_exist_already)
       session.commit()  # flushes, and commits only the savepoint
  except exc.IntegrityError:
      session.rollback()
      thing_that_may_exist_already = 
  session.query(Thing).filter_by(criterion).one()

  the difference between using locks to prevent concurrent dupes versus 
  using constraints and expecting dupes to fail is known as pessimistic 
  versus optimistic locking.

  On May 28, 2012, at 10:38 AM, Jeff wrote:

  The unique constraint sounds like a workable solution! I'll implement
  that with a try/except and report back if that was effective. Thanks!

  On May 28, 5:43 am, Simon King si...@simonking.org.uk wrote:
  On Sun, May 27, 2012 at 6:18 PM, Jeff jeffalst...@gmail.com wrote:
  Thanks,

  I have indeed spent a lot of time looking at SELECT FOR UPDATE, but as
  far as I can tell that locks rows that have been selected. That is not
  helpful in this use case, in which the issue is rows not existing, and
  then later existing. Am I misunderstanding?

  On May 27, 11:48 am, A.M. age...@themactionfaction.com wrote:
  On May 27, 2012, at 1:07 AM, Jeff wrote:

  I have multiple processes accessing  a table. All of these processes
  want to read a set of rows from the table, and if the rows are not
  present they will make a calculation and insert the rows themselves.
  The issue comes where process  A does a query to see if the target set
  of rows is present in the table, and they're not, and then another
  starts calculating. While it's calculating, process B inserts the
  rows. Then process A inserts the rows, and now we have two copies of
  these sets of rows. Bad.

  You should look at SELECT FOR UPDATE.

 http://docs.sqlalchemy.org/en/rel_0_7/orm/query.html?highlight=lockmo...

  Cheers,
  M

  Could you put unique constraints on the table so that the second
  process will get an error when it tries to insert the duplicate rows?
  It won't prevent you from performing the calculations twice, but at
  least you won't get the duplicates.

  Another option would be to write some sort of pending marker into
  the table, so that subsequent processes know that the result is
  already being calculated.

  Simon






[sqlalchemy] Can't make an association table use InnoDB

2012-05-30 Thread Jeff
Having difficulty creating a database that includes the following
plumbing:

class Base(object):
id = Column(Integer, primary_key=True)
__table_args__ = {'mysql_engine': 'InnoDB'}

Base = declarative_base(cls=Base)

class Event(Base):
   

Avalanche_Event_Association = Table('Avalanche_Event_Association',
Base.metadata,
Column('avalanche_id', Integer, ForeignKey('Avalanche.id')),
Column('event_id', Integer, ForeignKey('Event.id')),
mysql_engine='InnoDB')

class Avalanche(Base):
   

Doing Base.metadata.create_all(engine) yields:

OperationalError: (OperationalError) (1005, Can't create table
'alstottj.Avalanche_Event_Association' (errno: 150)) '\nCREATE TABLE
`Avalanche_Event_Association` (\n\tavalanche_id INTEGER, \n\tevent_id
INTEGER, \n\tFOREIGN KEY(avalanche_id) REFERENCES `Avalanche` (id), \n
\tFOREIGN KEY(event_id) REFERENCES `Event` (id)\n)ENGINE=InnoDB\n
\n' ()

Commenting out the line mysql_engine='InnoDB' removes the error and
the tables are all created, but the association table is now MyISAM.

I have some feelings on what could be causing the error, but they all
seem improbable. Thoughts?
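Putting the advice from this thread together, a working variant keeps every table name lower case (table names here are hypothetical; SQLite stands in for MySQL and simply ignores the mysql_engine keyword; modern SQLAlchemy 1.4+ import paths):

```python
from sqlalchemy import (Column, ForeignKey, Integer, Table, create_engine,
                        inspect)
from sqlalchemy.orm import declarative_base, relationship

# All table names lower case: MySQL's case sensitivity for table names is
# platform-dependent, and mixed-case ForeignKey targets are a common cause
# of errno 150 when creating InnoDB foreign keys.
class Base(object):
    id = Column(Integer, primary_key=True)
    __table_args__ = {'mysql_engine': 'InnoDB'}

Base = declarative_base(cls=Base)

class Event(Base):
    __tablename__ = 'event'

avalanche_event = Table(
    'avalanche_event', Base.metadata,
    Column('avalanche_id', Integer, ForeignKey('avalanche.id')),
    Column('event_id', Integer, ForeignKey('event.id')),
    mysql_engine='InnoDB')

class Avalanche(Base):
    __tablename__ = 'avalanche'
    events = relationship(Event, secondary=avalanche_event)

engine = create_engine('sqlite://')  # stand-in backend for the sketch
Base.metadata.create_all(engine)     # FK order resolved automatically
print(sorted(inspect(engine).get_table_names()))
# ['avalanche', 'avalanche_event', 'event']
```

create_all() orders the CREATE TABLE statements from the ForeignKey declarations, regardless of definition order in the .py file.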




[sqlalchemy] Re: Lock table, do things to table, unlock table: Best way?

2012-05-29 Thread Jeff
Thanks, Michael,

Just to make clear what exactly begin_nested() is contributing:

Normal case:
session.rollback() goes back to the last session.commit()

session.begin_nested() case:
session.rollback() goes back to the last session.begin_nested() or
session.commit(), whichever occurred last.

Correct?
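Those savepoint semantics can be made concrete with a runnable sketch (the Thing model and get_or_create helper are hypothetical; modern SQLAlchemy 1.4+ APIs; the two event listeners are the documented workaround so that SAVEPOINT works with the pysqlite driver used here as a stand-in, and are not needed on MySQL or PostgreSQL):

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Thing(Base):
    # Hypothetical model: 'name' must be unique across processes.
    __tablename__ = 'thing'
    id = Column(Integer, primary_key=True)
    name = Column(String(50), unique=True, nullable=False)

engine = create_engine('sqlite://')  # stand-in backend for the sketch

@event.listens_for(engine, 'connect')
def _connect(dbapi_conn, record):
    # disable pysqlite's own BEGIN handling so SAVEPOINT works
    dbapi_conn.isolation_level = None

@event.listens_for(engine, 'begin')
def _begin(conn):
    conn.exec_driver_sql('BEGIN')

Base.metadata.create_all(engine)
session = Session(bind=engine)

def get_or_create(session, name):
    try:
        with session.begin_nested():  # emits SAVEPOINT
            obj = Thing(name=name)
            session.add(obj)          # flushed when the block exits
        return obj
    except IntegrityError:
        # Only the savepoint was rolled back; the enclosing
        # transaction and session remain usable.
        return session.query(Thing).filter_by(name=name).one()

a = get_or_create(session, 'x')
session.commit()
b = get_or_create(session, 'x')  # duplicate: falls back to SELECT
session.commit()
print(a.id == b.id)  # True
```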


On May 28, 11:54 am, Michael Bayer mike...@zzzcomputing.com wrote:
 An option to add along to the unique constraint, if you expect to get 
 collisions often, is to use a SAVEPOINT so that a process can roll back 
 partially if this particular INSERT fails, then use the row.  The Session 
 offers SAVEPOINT via begin_nested():

 session.begin_nested()
 try:
      session.add(thing_that_may_exist_already)
      session.commit()  # flushes, and commits only the savepoint
 except exc.IntegrityError:
     session.rollback()
     thing_that_may_exist_already = 
 session.query(Thing).filter_by(criterion).one()

 the difference between using locks to prevent concurrent dupes versus using 
 constraints and expecting dupes to fail is known as pessimistic versus 
 optimistic locking.

 On May 28, 2012, at 10:38 AM, Jeff wrote:

  The unique constraint sounds like a workable solution! I'll implement
  that with a try/except and report back if that was effective. Thanks!

  On May 28, 5:43 am, Simon King si...@simonking.org.uk wrote:
  On Sun, May 27, 2012 at 6:18 PM, Jeff jeffalst...@gmail.com wrote:
  Thanks,

  I have indeed spent a lot of time looking at SELECT FOR UPDATE, but as
  far as I can tell that locks rows that have been selected. That is not
  helpful in this use case, in which the issue is rows not existing, and
  then later existing. Am I misunderstanding?

  On May 27, 11:48 am, A.M. age...@themactionfaction.com wrote:
  On May 27, 2012, at 1:07 AM, Jeff wrote:

  I have multiple processes accessing  a table. All of these processes
  want to read a set of rows from the table, and if the rows are not
  present they will make a calculation and insert the rows themselves.
  The issue comes where process  A does a query to see if the target set
  of rows is present in the table, and they're not, and then another
  starts calculating. While it's calculating, process B inserts the
  rows. Then process A inserts the rows, and now we have two copies of
  these sets of rows. Bad.

  You should look at SELECT FOR UPDATE.

 http://docs.sqlalchemy.org/en/rel_0_7/orm/query.html?highlight=lockmo...

  Cheers,
  M

  Could you put unique constraints on the table so that the second
  process will get an error when it tries to insert the duplicate rows?
  It won't prevent you from performing the calculations twice, but at
  least you won't get the duplicates.

  Another option would be to write some sort of pending marker into
  the table, so that subsequent processes know that the result is
  already being calculated.

  Simon





[sqlalchemy] Re: Multiple inserts with .append()'d associations

2012-05-29 Thread Jeff
engine.dispose() is an inefficient operation, as the Engine is a
factory for connections, not a connection itself.   If you'd like
there to be zero actual database connections open when the application
is idle, you can disable pooling using NullPool.

Very good point. Will do!

To be more clear, I was actually doing this:

all_events =
list(session.query(Event).filter().order_by(...).all())
session.close()
session.bind.dispose()

for a in analyses_parameters:
analysis = Analysis()
events_in_this_analysis = all_events[...]
analysis.events = events_in_this_analysis
session.add(analysis)
session.commit()
session.close()
session.bind.dispose()

Thus, there is no connection to the database during the for loop. And
that loop takes a long time, so the lack of a connection is much
appreciated. The issue comes during the session.commit(), which has
thousands of Analyses with hundreds of thousands of associated Events.
I can watch the database and the open connections, and I see that:

1. All these Analyses are inserted (which takes awhile)
2. THEN the Analysis-Event associations are inserted, which also takes
awhile. It's during this time that I see the connection for this
session goes idle for a few cycles, then is inserting, then goes idle,
and then is inserting, etc. This is the behavior that seemed could be
improved. It's unclear to me from your comments whether or not it
actually can be.

SQLAlchemy 0.7 will also perform the INSERT of all the above records
as a single prepared statement if you assign a primary key value to
the objects ahead of time
Good idea, but unfortunately not doable in this case; lots of
independent processes are doing this at the same time, so I don't see
a coherent way to effectively determine primary keys ahead of time.

Or, you can get the same effect by inserting the records using a non-
ORM executemany, where in this case you wouldn't need to pre-assign
primary key values:

Session.execute(Event.__table__.insert(), params=[{'a': evt.a,
'b': evt.b, ...} for evt in events])
... 

I'd actually be inserting into Analysis_Event_Association. That
association table needs the primary keys of the Analyses and the
Events. Doing that manually might look like:

(using NullPool)
all_events =
session.query(Event).filter().order_by(...).values(id)
session.close()

for a in analyses_parameters:
analysis = Analysis()
session.add(analysis)
session.commit()
session.close()

all_analyses =
session.query(Analysis).filter().order_by(...).values(id)
session.close()

for a in all_analyses:
events_in_this_analysis_ids = all_events[...]
for e in events_in_this_analysis_ids:
session.add(Analysis_Event_Association(analysis_id=a,
event_id=e))
session.commit()
session.close()

That seems like it would be the most efficient in terms of connections
only being open during the session.commit(), and each of those having
the most efficient INSERTing. The Python end of things, however, looks
a lot less elegant/simple, making me think I'm missing something.

Thanks!
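The non-ORM executemany route can be sketched with Core directly (the schema here is a hypothetical minimal reconstruction; SQLite stands in for MySQL; modern SQLAlchemy 1.4+ API):

```python
from sqlalchemy import (Column, ForeignKey, Integer, MetaData, Table,
                        create_engine)

metadata = MetaData()

# Hypothetical minimal schema; the real tables carry more columns.
analysis = Table('analysis', metadata, Column('id', Integer, primary_key=True))
event = Table('event', metadata, Column('id', Integer, primary_key=True))
analysis_event = Table(
    'analysis_event', metadata,
    Column('analysis_id', Integer, ForeignKey('analysis.id')),
    Column('event_id', Integer, ForeignKey('event.id')))

engine = create_engine('sqlite://')  # stand-in backend for the sketch
metadata.create_all(engine)

pairs = [(a_id, e_id) for a_id in (1, 2) for e_id in (1, 2, 3)]

with engine.begin() as conn:
    conn.execute(analysis.insert(), [{'id': i} for i in (1, 2)])
    conn.execute(event.insert(), [{'id': i} for i in (1, 2, 3)])
    # Passing a list of dicts triggers DBAPI executemany: one batched
    # statement rather than one round-trip per association row.
    conn.execute(analysis_event.insert(),
                 [{'analysis_id': a, 'event_id': e} for a, e in pairs])

with engine.connect() as conn:
    rows = conn.execute(analysis_event.select()).fetchall()
print(len(rows))  # 6
```

The connection is held only for the duration of the `engine.begin()` block, which matches the goal of minimizing open-connection time.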

On May 28, 12:11 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On May 28, 2012, at 10:52 AM, Jeff wrote:

  There are Analyses, which have Events. There's an
  Analysis_Event_Association table. Events are written before the
  Analyses. Later, I have a for loop calculating multiple Analyses. For
  each Analysis, we identify the Events it includes and append them,
  using sqlalchemy's relationship plumbing. This just writes the
  appropriate row to the association table. It works great:

  all_events =
  list(session.query(Event).filter().order_by(...).all())

  for a in analyses_parameters:
     analysis = Analysis()
     events_in_this_analysis = all_events[...]
     analysis.events = events_in_this_analysis
     session.add(analysis)

  session.commit()
  session.close()
  session.bind.dispose()

  The issue is that I'm in a situation where there are many, many
  processes writing to the same database, and the database only allows
  us a limited number of connections. So I'm trying to have connections
  open only when they're actually being used, hence the
  session.bind.dispose().

 engine.dispose() is an inefficient operation, as the Engine is a factory for 
 connections, not a connection itself.   If you'd like there to be zero actual 
 database connections open when the application is idle, you can disable 
 pooling using NullPool.



  Writing all the analyses is a single insert, and thus is efficient
  with regards to having a connection open. HOWEVER, it appears that
  each of the Events associations we appended are being inserted
  individually, which is bad.  The connection sits there, then is used,
  then sits there, then is used, etc. This contributes to the maximum
  connections open, which is a limited resource.

 Just to make sure we're on the same page, the Session procures a single DBAPI 
 connection, and uses it for the lifespan of that transaction

[sqlalchemy] Re: Lock table, do things to table, unlock table: Best way?

2012-05-28 Thread Jeff
The unique constraint sounds like a workable solution! I'll implement
that with a try/except and report back if that was effective. Thanks!

On May 28, 5:43 am, Simon King si...@simonking.org.uk wrote:
 On Sun, May 27, 2012 at 6:18 PM, Jeff jeffalst...@gmail.com wrote:
  Thanks,

  I have indeed spent a lot of time looking at SELECT FOR UPDATE, but as
  far as I can tell that locks rows that have been selected. That is not
  helpful in this use case, in which the issue is rows not existing, and
  then later existing. Am I misunderstanding?

  On May 27, 11:48 am, A.M. age...@themactionfaction.com wrote:
  On May 27, 2012, at 1:07 AM, Jeff wrote:

   I have multiple processes accessing  a table. All of these processes
   want to read a set of rows from the table, and if the rows are not
   present they will make a calculation and insert the rows themselves.
   The issue comes where process  A does a query to see if the target set
   of rows is present in the table, and they're not, and then another
   starts calculating. While it's calculating, process B inserts the
   rows. Then process A inserts the rows, and now we have two copies of
   these sets of rows. Bad.

  You should look at SELECT FOR UPDATE.

 http://docs.sqlalchemy.org/en/rel_0_7/orm/query.html?highlight=lockmo...

  Cheers,
  M

 Could you put unique constraints on the table so that the second
 process will get an error when it tries to insert the duplicate rows?
 It won't prevent you from performing the calculations twice, but at
 least you won't get the duplicates.

 Another option would be to write some sort of pending marker into
 the table, so that subsequent processes know that the result is
 already being calculated.

 Simon




[sqlalchemy] Multiple inserts with .append()'d associations

2012-05-28 Thread Jeff
There are Analyses, which have Events. There's an
Analysis_Event_Association table. Events are written before the
Analyses. Later, I have a for loop calculating multiple Analyses. For
each Analysis, we identify the Events it includes and append them,
using sqlalchemy's relationship plumbing. This just writes the
appropriate row to the association table. It works great:

all_events =
list(session.query(Event).filter().order_by(...).all())

for a in analyses_parameters:
analysis = Analysis()
events_in_this_analysis = all_events[...]
analysis.events = events_in_this_analysis
session.add(analysis)

session.commit()
session.close()
session.bind.dispose()

The issue is that I'm in a situation where there are many, many
processes writing to the same database, and the database only allows
us a limited number of connections. So I'm trying to have connections
open only when they're actually being used, hence the
session.bind.dispose().

Writing all the analyses is a single insert, and thus is efficient
with regards to having a connection open. HOWEVER, it appears that
each of the Events associations we appended are being inserted
individually, which is bad.  The connection sits there, then is used,
then sits there, then is used, etc. This contributes to the maximum
connections open, which is a limited resource. I'd like all the
appended Events association to be inserted in one go. Is there a way
to do this? Am I correct about what's going on?
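The NullPool arrangement suggested for this situation looks like the following (a sketch; the URL is a stand-in for the real MySQL connection string):

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# NullPool closes each DBAPI connection outright when it is returned,
# so an idle process holds zero open database connections.
engine = create_engine('sqlite:///example.db', poolclass=NullPool)
print(type(engine.pool).__name__)  # NullPool
```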




[sqlalchemy] Re: Lock table, do things to table, unlock table: Best way?

2012-05-27 Thread Jeff
Thanks,

I have indeed spent a lot of time looking at SELECT FOR UPDATE, but as
far as I can tell that locks rows that have been selected. That is not
helpful in this use case, in which the issue is rows not existing, and
then later existing. Am I misunderstanding?

On May 27, 11:48 am, A.M. age...@themactionfaction.com wrote:
 On May 27, 2012, at 1:07 AM, Jeff wrote:

  I have multiple processes accessing  a table. All of these processes
  want to read a set of rows from the table, and if the rows are not
  present they will make a calculation and insert the rows themselves.
  The issue comes where process  A does a query to see if the target set
  of rows is present in the table, and they're not, and then another
  starts calculating. While it's calculating, process B inserts the
  rows. Then process A inserts the rows, and now we have two copies of
  these sets of rows. Bad.

 You should look at SELECT FOR UPDATE.

 http://docs.sqlalchemy.org/en/rel_0_7/orm/query.html?highlight=lockmo...

 Cheers,
 M




[sqlalchemy] Lock table, do things to table, unlock table: Best way?

2012-05-26 Thread Jeff
I have multiple processes accessing  a table. All of these processes
want to read a set of rows from the table, and if the rows are not
present they will make a calculation and insert the rows themselves.
The issue comes where process  A does a query to see if the target set
of rows is present in the table, and they're not, and then another
starts calculating. While it's calculating, process B inserts the
rows. Then process A inserts the rows, and now we have two copies of
these sets of rows. Bad.

The current solution is to have a process:
1. Lock the table with session.execute(LOCK TABLES table WRITE)
2. Query
3. If need be, calculate and commit
4. Unlock with session.execute(UNLOCK TABLES)

Then if another process wants to query while the calculation is
happening, it will have to wait until the table unlocks, and it will
see the result of the commit. The wait time of the locked table is
acceptable.

That should work, but is ugly and, worse, requires explicit MySQL
code. I understand this could lead to dependency on the backend
implementation (ie. If this code was ever used on an sqlite
database?). Regardless, I'd like to do this completely within
sqlalchemy.  Is there a way to do this? It seems like there might be a
way to do this with transactions, but I can't parse it. Thoughts?

Thanks!
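For reference, row-level locking in modern SQLAlchemy is spelled with_for_update() (the 0.7-era Query API spelled it with_lockmode('update')). As discussed in this thread, FOR UPDATE only locks rows that already exist, so it does not by itself close the "rows not there yet" race; the sketch below simply shows the SQL emitted against the MySQL dialect (table name hypothetical):

```python
from sqlalchemy import Column, Integer, MetaData, Table, select
from sqlalchemy.dialects import mysql

metadata = MetaData()
thing = Table('thing', metadata, Column('id', Integer, primary_key=True))

# Compile only; no database connection is needed to see the locking clause.
stmt = select(thing).where(thing.c.id == 1).with_for_update()
sql = str(stmt.compile(dialect=mysql.dialect()))
print(sql)  # ... WHERE thing.id = %s FOR UPDATE
```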




[sqlalchemy] Re: Lock table, do things to table, unlock table: Best way?

2012-05-26 Thread Jeff
Hmmm. It also appears that the 4 step solution I gave above doesn't
work consistently. Any ideas as to why not?

On May 27, 1:07 am, Jeff jeffalst...@gmail.com wrote:
 I have multiple processes accessing  a table. All of these processes
 want to read a set of rows from the table, and if the rows are not
 present they will make a calculation and insert the rows themselves.
 The issue comes where process  A does a query to see if the target set
 of rows is present in the table, and they're not, and then another
 starts calculating. While it's calculating, process B inserts the
 rows. Then process A inserts the rows, and now we have two copies of
 these sets of rows. Bad.

 The current solution is to have a process:
 1. Lock the table with session.execute(LOCK TABLES table WRITE)
 2. Query
 3. If need be, calculate and commit
 4. Unlock with session.execute(UNLOCK TABLES)

 Then if another process wants to query while the calculation is
 happening, it will have to wait until the table unlocks, and it will
 see the result of the commit. The wait time of the locked table is
 acceptable.

 That should work, but is ugly and, worse, requires explicit MySQL
 code. I understand this could lead to dependency on the backend
 implementation (ie. If this code was ever used on an sqlite
 database?). Regardless, I'd like to do this completely within
 sqlalchemy.  Is there a way to do this? It seems like there might be a
 way to do this with transactions, but I can't parse it. Thoughts?

 Thanks!




[sqlalchemy] Efficient Inserting to Same Table Across 100s of Processes

2012-05-23 Thread Jeff
Hello,

I have hundreds of independent jobs on a cluster all writing entries
to the same MySQL database table. Every time one job INSERTs, it locks
the table, and the other jobs have to queue up for their turn. So at
that point, the massively parallel cluster has turned into a massively
serial cluster :-( What are routes available in SQLAlchemy to improve
this situation?

Thanks!
Jeff
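One route worth noting: passing a list of dictionaries to a single execute() call makes SQLAlchemy use the DBAPI's executemany(), sending many rows per round trip instead of one statement per row. A sketch with illustrative names, using SQLite as a stand-in for the MySQL URL:

```python
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

engine = create_engine("sqlite://")  # stand-in URL; the post uses MySQL
metadata = MetaData()
results = Table(
    "results", metadata,
    Column("id", Integer, primary_key=True),
    Column("value", String(50)),
)
metadata.create_all(engine)

rows = [{"value": "v%d" % i} for i in range(1000)]
with engine.begin() as conn:
    # One execute() with a list of dicts -> DBAPI executemany()
    conn.execute(results.insert(), rows)
```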




[sqlalchemy] Re: Efficient Inserting to Same Table Across 100s of Processes

2012-05-23 Thread Jeff
More data:
A typical not-quite-worst-but-in-the-class-of-worst case scenario is
half a million rows per insert. Absolute worst case scenarios could be
10 times that, so that insert will take a while.

Would there be any logic to breaking up all the inserts into one row
per insert? Would that allow independent jobs to meaningfully
interleave their inserts? Does the overhead of sqlalchemy's session
adding and committing not really work that way?

One uglier route that has been suggested to me is to first write all
the rows to independent local files, then later merge and insert them.
Does SQLAlchemy have functionality that might help in this direction?

On May 23, 4:19 pm, Jeff jeffalst...@gmail.com wrote:
 Hello,

 I have hundreds of independent jobs on a cluster all writing entries
 to the same MySQL database table. Every time one job INSERTs, it locks
 the table, and the other jobs have to queue up for their turn. So at
 that point, the massively parallel cluster has turned into a massively
 serial cluster :-( What are routes available in SQLAlchemy to improve
 this situation?

 Thanks!
 Jeff




[sqlalchemy] Re: Efficient Inserting to Same Table Across 100s of Processes

2012-05-23 Thread Jeff
Thanks for the help and links! One additional data point: The table
has an id field that autoincrements. A friend thought that might be a
barrier to non-locking inserts, but wasn't sure. I'm having difficulty
finding any resource explicitly saying that, though, and simply trying
it would be costly.

The described behavior for MyISAM's concurrent_inserts setting appears
to be the behavior I'm describing (queued inserts). Am I missing some
detail? I actually found that documentation earlier today, and had
assumed that the inserts that are queued and performed in sequence
would result in the inserting process (the cluster's jobs) waiting
until they're done in the queue. Which, of course, is what's happening
in my case.

Now I'll attempt to parse the rest of your comments :-) Please let me
know if the above paragraphs are accurate, though.

Thanks!

On May 23, 5:20 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 My initial thought is that INSERTs shouldn't be locking the whole table, at 
 least not throughout a whole transaction.      There's some MySQL hints that 
 can help with this, if you're on MyISAM take a look 
 at http://dev.mysql.com/doc/refman/5.0/en/concurrent-inserts.html, possibly 
 using the concurrent_inserts setting, as well as some of the INSERT hints 
 specific to MySQL:  http://dev.mysql.com/doc/refman/5.5/en/insert.html.    On 
 InnoDB it doesn't lock the whole table unless you do so explicitly.

 SQLAlchemy is just a tool to help emit messages to the database.  You'd need 
 to figure out what scheme you want to use to improve your INSERT throughput 
 first, then apply that strategy through SQLAlchemy's API.   It might be as 
 easy as switching to InnoDB, not sure.

 If you're looking to serialize inserts after the fact, you can use a message 
 queue for that, such as a Celery process.    This might be the way you want 
 to go if you get large bursts of data with quiet periods, and it's OK that 
 the MySQL database receives all the data eventually.

 On May 23, 2012, at 4:56 PM, Jeff wrote:

  More data:
  A typical not-quite-worst-but-in-the-class-of-worst case scenario is a
  half a million rows per insert. Absolute worst case scenarios could be
  10 times that. So that insert will take awhile.

  Would there be any logic to breaking up all the inserts into one row
  per insert? Would that allow independent jobs to meaningfully
  interleave their inserts? Does the overhead of sqlalchemy's session
  adding and committing not really work that way?

  One uglier route that has been suggested to me is to first write all
  the rows to independent local files, then later merge and insert them.
  Does SQLAlchemy have functionality that might help in this direction?

  On May 23, 4:19 pm, Jeff jeffalst...@gmail.com wrote:
  Hello,

  I have hundreds of independent jobs on a cluster all writing entries
  to the same MySQL database table. Every time one job INSERTs, it locks
  the table, and the other jobs have to queue up for their turn. So at
  that point, the massively parallel cluster has turned into a massively
  serial cluster :-( What are routes available in SQLAlchemy to improve
  this situation?

  Thanks!
  Jeff





Re: [sqlalchemy] stucco_evolution 0.4 released

2012-04-20 Thread Jeff Dairiki
On Fri, Apr 20, 2012 at 03:44:55PM -0400, Michael Bayer wrote:
 
 The one thing that's needed as far as Alembic is concerned is the
 ability to control the name of the actual migration table per
 environment, this is a short feature add that's been sitting as an
 enhancement request for some time now.

Since you mention it, I posted patches to

  
https://bitbucket.org/zzzeek/alembic/issue/34/make-version-table-name-configurable

a while ago and was awaiting feedback on them (until I forgot about them).
The patches in addition to supporting a configurable version table name
also support two-column version tables which can be shared between Alembic
environments.

(If you want to veto the two-column version table idea, I can whittle
it down to just the configurable-version-table-name part pretty easily.)

Cheers,
Jeff




Re: [sqlalchemy] Eager loading a relationship which spans databases

2012-04-15 Thread Jeff Dairiki
As always, thank you for the quick reply, Mike!

On Sat, Apr 14, 2012 at 06:34:10PM -0400, Michael Bayer wrote:
 
 On Apr 14, 2012, at 5:38 PM, Jeff Dairiki wrote:
 
  I have a one-to-one relationship to a table in a different database
  which I'd like to, effectively, eager-load.  Lazy='joined' and
  lazy='subquery' won't work, since they rely on an impossible JOIN between
  tables in two different databases.
  
  Lazy='immediate' works, but results in one query per instance.  If I
  have a one-to-one relationship between A and B, and execute a query
  which loads 100 A's, there will be 100 more queries which each load
  one B.  What I'd like is to have two queries total: one to load the As
  and a second to load the Bs.
  
  I've been able to do this manually by doing something like this:
  
# Do some query
parents = session.query(Parent).filter(some_condition).all()
  
# Eager load 
ids = [ parent.id for parent in parents
if 'child' in instance_state(parent).unloaded ]
get_parent = session.query(Parent).get
for child in session.query(Child).filter(Child.parent_id.in_(ids)):
parent = get_parent(child.parent_id)
set_committed_value(child, 'parent', parent)
set_committed_value(parent, 'child', child)
  
  Is there a good way to do this more automatically?
 
 I don't have a spectacular suggestion at the moment, if you've
 looked at how the subquery/immediate loading schemes work, the first
 row we get for the parent triggers the load of all the child
 objects, which is a product of the loader strategy system for that
 particular attribute.  The mechanics here would need to do
 everything entirely after the full parent collection is loaded.

I did dig far enough to figure that out.  I was sort of
expecting/hoping to find an event I could listen for, but alas.

 I'd probably keep it simple and just do what you're doing, just
 either inside of a helper function like
 iterate_with_related(myquery, children)

Will do.  Actually, I'm already doing roughly that.  It works fine,
it's just that one must remember to invoke the helper function.

 or perhaps do a subclass of Query.

I thought of that, but it felt somehow wrong to customize behavior for
a specific mapped attribute by specializing Query.

 As far as the actual load operation, there's a recipe for this kind
 of operation at
 http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DisjointEagerLoading
 which will do things a little more directly, without the need for
 query.get().  There's a generalized recipe there too which you can
 probably adapt to what you're doing here.

Aha.  Yes, that helps a bit.

Thanks again!
Jeff
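For later readers, a hedged sketch of the "two queries total" helper discussed above, loosely following the DisjointEagerLoading recipe. Names (Parent, Child, load_children) are illustrative; in the original setup the two tables live in different databases, so the second query would simply run against the other bind:

```python
from sqlalchemy import create_engine, Column, Integer, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, Session
from sqlalchemy.orm.attributes import set_committed_value

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    child = relationship("Child", uselist=False, back_populates="parent")

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))
    parent = relationship("Parent", back_populates="child")

def load_children(session, parents):
    """One query for all children, then stitch both sides of the
    relationship in memory so no per-parent lazy load is needed."""
    by_id = {p.id: p for p in parents}
    for child in session.query(Child).filter(Child.parent_id.in_(list(by_id))):
        parent = by_id[child.parent_id]
        set_committed_value(child, "parent", parent)
        set_committed_value(parent, "child", child)
    return parents
```

set_committed_value marks the attribute as loaded, so later access does not emit the per-instance SELECT that lazy='immediate' would.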




[sqlalchemy] Eager loading a relationship which spans databases

2012-04-14 Thread Jeff Dairiki
I have a one-to-one relationship to a table in a different database
which I'd like to, effectively, eager-load.  Lazy='joined' and
lazy='subquery' won't work, since they rely on an impossible JOIN between
tables in two different databases.

Lazy='immediate' works, but results in one query per instance.  If I
have a one-to-one relationship between A and B, and execute a query
which loads 100 A's, there will be 100 more queries which each load
one B.  What I'd like is to have two queries total: one to load the As
and a second to load the Bs.

I've been able to do this manually by doing something like this:

   # Do some query
   parents = session.query(Parent).filter(some_condition).all()

   # Eager load 
   ids = [ parent.id for parent in parents
   if 'child' in instance_state(parent).unloaded ]
   get_parent = session.query(Parent).get
   for child in session.query(Child).filter(Child.parent_id.in_(ids)):
   parent = get_parent(child.parent_id)
   set_committed_value(child, 'parent', parent)
   set_committed_value(parent, 'child', child)

Is there a good way to do this more automatically?




[sqlalchemy] 'Too many connections': Where am I handling sessions incorrectly?

2011-09-15 Thread Jeff
SQLAlchemy version 0.7.1
MySQL Ver 14.12 Distrib 5.0.77

We have a series of tables with one to many connections:
A -> B -> C -> D -> E, etc.

Script1 has a big for loop over several hundred/thousand values. In
each loop iteration it goes through A,B,C, makes some new entries,
then calls Function1 (passing some ids from A,B,C).
Function1 makes a new entry in D, then calls Function2 (passing ids
from A,B,C,D).
Function2 makes modification to the entry in D and makes several new
entries in E.

Not far into the loop we get an error saying the MySQL database has
run out of connections:
(Operational Error) (1040, 'Too many connections')

I suspect this is due to me mishandling sessions, and I've included
the likely relevant snippets of code below. Am I doing something
obviously incorrect? The goal in the near future is to have Function1
submit Function2 as a job to a cluster with PBS, so Function2 needs to
independently access the database. Thanks!

In Script1:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
import database_classes as dc
database_url = 

engine = create_engine(database_url, echo=False)
dc.Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

for fname in dirList:
session = Session()

Function1(database_url,)

In Function1:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
import database_classes as dc
engine = create_engine(database_url, echo=False)
Session = sessionmaker(bind=engine)
session = Session()
.
Function2(database_url, )

session.close()

In Function2:
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
import database_classes as dc
engine = create_engine(database_url, echo=False)
Session = sessionmaker(bind=engine)
session = Session()

session.close()



Database classes: 
https://github.com/jeffalstott/research_code/blob/master/database_classes.py

Script1:
https://github.com/jeffalstott/research_code/blob/master/MRC_avalanche_analyses.py

Functions:
https://github.com/jeffalstott/research_code/blob/master/criticality.py
Function1 is avalanche_analyses
Function2 is avalanche_statistics




[sqlalchemy] Re: 'Too many connections': Where am I handling sessions incorrectly?

2011-09-15 Thread Jeff
Thanks for the guidance!

In a situation in which a script submits a swarm of independent jobs for
a cluster, and then finishes before some/all of those jobs have
started running, each job will need to create the engine, yes? Or is
there a better way to do it?

On Sep 15, 2:30 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Sep 15, 2011, at 6:39 AM, Jeff wrote:

  SQLAlchemy version 0.7.1
  MySQL Ver 14.12 Distrib 5.0.77

  We have a series of tables with one to many connections:
  A - B - C - D-E etc.

  Script1 has a big for loop over several hundred/thousand values. In
  each loop iteration it goes through A,B,C, makes some new entries,
  then calls Function1 (passing some ids from A,B,C).
  Function1 makes a new entry in D, then calls Function2 (passing ids
  from A,B,C,D).
  Function2 makes modification to the entry in D and makes several new
  entries in E.

  Not far into the loop we get an error saying the MySQL database has
  run out of connections:
  (Operational Error) (1040, 'Too many connections')

 Your scripts call create_engine() essentially in a loop.     This isn't 
 really the appropriate usage of create_engine().   The Engine does not 
 represent a single database connection; it is an expensive-to-create registry of 
 information about your database and DBAPI as well as a connection pool 
 (see http://www.sqlalchemy.org/docs/core/engines.html for an overview).  
 Dereferencing it will eventually close out connections which were open from 
 it, but not immediately as the garbage collector thread typically needs to 
 find those unreachable objects.

 The appropriate scope for Engine is once per url per application, at the 
 module level.   That means if your application has only one URL, there should 
 be exactly one call to create_engine() in just one place, and the resulting 
 Engine should be placed in a single module made available for other modules 
 to import.  Otherwise you're working against the intended design of 
 create_engine().

 With that, all functions that call upon the Engine will be calling upon the 
 underlying connection pool so that the total number of connections used by 
 the application can be managed.

 The guidelines for  Session are less stringent, though again generally a 
 single Session is shared among all functions and methods for a particular 
 operation.   I didn't read your script carefully but typically a single 
 Session is passed along all functions that need to operate on data, so that 
 all those functions can share the same pool of objects which all interact 
 cleanly, not to mention all within one transaction.    The script as it is 
 now creates many new transactions.

 If you really do want to use a Session inside a function you can forego the 
 usage of sessionmaker as again that function is just a helper for declaring 
 module-level patterns.    The Session constructor can be called directly, 
 i.e. session = Session(engine).
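A minimal sketch of the "one Engine per URL per application" guideline: a hypothetical db.py module that the rest of the code imports from (the module name and placeholder URL are assumptions, not part of the original post):

```python
# db.py -- the single home of the application's Engine
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

DATABASE_URL = "sqlite://"  # placeholder for the real (MySQL) URL

engine = create_engine(DATABASE_URL, echo=False)  # created exactly once
Session = sessionmaker(bind=engine)

# Elsewhere in the application:
#   from db import Session
#   session = Session()
```

Every Session then draws from the same connection pool, keeping the total connection count bounded.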




[sqlalchemy] Re: 'Too many connections': Where am I handling sessions incorrectly?

2011-09-15 Thread Jeff
Great. I implemented your suggestions and it was (predictably) a
significant speedup. Cheers!

On Sep 15, 4:31 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Sep 15, 2011, at 10:20 AM, Jeff wrote:

  Thanks for the guidance!

  In a situation which a script submits a swarm of independent jobs for
  a cluster, and then finishes before some/all of those jobs have
  started running, each job will need to create the engine, yes? Or is
  there a better way to do it?

 Anytime you start a new process, that's where create_engine() would need to 
 be called once.    

 When I use the Python multiprocessing library for example, I have a function 
 init_for_subprocess() which I can pass as the on init function to the 
 various multiprocessing functions, or if I'm writing a function that I know 
 is the starting point of the process boundary I'd put init_for_subprocess() 
 at the top.   init_for_subprocess() then ultimately does the create_engine() 
 and establishes it as a module level global in the appropriate place).


  On Sep 15, 2:30 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  On Sep 15, 2011, at 6:39 AM, Jeff wrote:

  SQLAlchemy version 0.7.1
  MySQL Ver 14.12 Distrib 5.0.77

  We have a series of tables with one to many connections:
  A - B - C - D-E etc.

  Script1 has a big for loop over several hundred/thousand values. In
  each loop iteration it goes through A,B,C, makes some new entries,
  then calls Function1 (passing some ids from A,B,C).
  Function1 makes a new entry in D, then calls Function2 (passing ids
  from A,B,C,D).
  Function2 makes modification to the entry in D and makes several new
  entries in E.

  Not far into the loop we get an error saying the MySQL database has
  run out of connections:
  (Operational Error) (1040, 'Too many connections')

  Your scripts call create_engine() essentially in a loop.     This isn't 
  really the appropriate usage of create_engine().   The Engine does not 
  represent a single database connection; it is an expensive-to-create registry 
  of information about your database and DBAPI as well as a connection pool 
  (see http://www.sqlalchemy.org/docs/core/engines.html for an overview).  
  Dereferencing it will eventually close out connections which were open 
  from it, but not immediately as the garbage collector thread typically 
  needs to find those unreachable objects.

  The appropriate scope for Engine is once per url per application, at the 
  module level.   That means if your application has only one URL, there 
  should be exactly one call to create_engine() in just one place, and the 
  resulting Engine should be placed in a single module made available for 
  other modules to import.  Otherwise you're working against the intended 
  design of create_engine().

  With that, all functions that call upon the Engine will be calling upon 
  the underlying connection pool so that the total number of connections 
  used by the application can be managed.

  The guidelines for  Session are less stringent, though again generally a 
  single Session is shared among all functions and methods for a particular 
  operation.   I didn't read your script carefully but typically a single 
  Session is passed along all functions that need to operate on data, so 
  that all those functions can share the same pool of objects which all 
  interact cleanly, not to mention all within one transaction.    The script 
  as it is now creates many new transactions.

  If you really do want to use a Session inside a function you can forego 
  the usage of sessionmaker as again that function is just a helper for 
  declaring module-level patterns.    The Session constructor can be called 
  directly, i.e. session = Session(engine).





[sqlalchemy] commit after every insert

2011-05-19 Thread jeff
I have an app that does an update or insert (can't guarantee the entry
exists to start with).  I'm trying to increase efficiency and notice a
commit after every insert or update.  Is there a way to hold off on
the commit until one final commit?  Not using the ORM for this
particular piece.

create a dict with the updated values:

...
 slotargs['part_num'] = row[DB_PART_NUM]
 slotargs['family']   = row[DB_FAMILY]
 ...
create the update object (where slot is the Table object):

 u = slot.update().where(
     and_(slot.c.host_name == self.HostName,
          slot.c.row_num == row[DB_ROW_NUM],
          slot.c.col_num == row[DB_COLUMN_NUM],
          slot.c.tray_num == row[DB_TRAY_INDEX],
          slot.c.port_num == row[DB_PORT_INDEX])
 ).values(**slotargs)

 self.GCDB.updateOrInsert(u, slot, slotargs)

  function body for updateOrInsert

   # Arguments are:
   #   SQLAlchemy update object bound to a table and database connection.
   #   SQLAlchemy table object associated with the above update.
   #   Dictionary of columns that are being updated; should include the
   #   primary key column(s) in case we do the insert.
   #
   # Returns the result object.
   #--------------------------------------------------------------
   def updateOrInsert(self, updObj, tableObj, columns):
       result = updObj.execute()          # Attempt an update
       if result.rowcount == 0:           # See if it succeeded
           # If not, create an insert object; columns must include the
           # primary keys for this table
           i = tableObj.insert().values(**columns)
           result = i.execute()           # Now do the insert
       return result


Thanks for the help!
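One way to hold off on the commit is to run every update-or-insert attempt on one explicit Connection inside a single transaction, committing once at the end. A cut-down sketch with illustrative table and column names (two columns instead of the full slot table):

```python
from sqlalchemy import and_, create_engine, MetaData, Table, Column, String

engine = create_engine("sqlite://")  # stand-in URL for illustration
metadata = MetaData()
slot = Table(
    "slot", metadata,
    Column("host_name", String(50), primary_key=True),
    Column("part_num", String(50)),
)
metadata.create_all(engine)

def update_or_insert(conn, table, keys, values):
    """Try an UPDATE keyed on `keys`; fall back to INSERT if no row matched."""
    conds = [table.c[k] == v for k, v in keys.items()]
    result = conn.execute(table.update().where(and_(*conds)).values(**values))
    if result.rowcount == 0:
        row = dict(keys)
        row.update(values)
        conn.execute(table.insert().values(**row))
    return result

rows = [({"host_name": "h%d" % i}, {"part_num": "p%d" % i}) for i in range(3)]
with engine.begin() as conn:      # one transaction; COMMIT happens once here
    for keys, values in rows:
        update_or_insert(conn, slot, keys, values)
```

Binding the statements to a Connection rather than executing them implicitly is what keeps all the work in one transaction.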




[sqlalchemy] Re: [Grok-dev] [Solved? ] Creating a simple 1:2 relationship with MeGrok and SqlAlchemy

2010-11-01 Thread Jeff Peterson
Would make a nice addition to the ORM docs on grok.zope.org.

--
Jeff Peterson

Sent from my iPod

On Nov 1, 2010, at 5:31 PM, Hector Blanco white.li...@gmail.com wrote:

 Hi List...
 
 I have been asking a lot lately about a 1:2 relationship with MeGrok
 and SqlAlchemy, and I think I've solved it.
 
 I have created a mini how to just in case it could help anyone.
 
 ODT  http://www.hectorblanco.org/files/odt/Megrok%20Relation%201:2.odt
 PDF  http://www.hectorblanco.org/files/pdf/Megrok%20Relation%201:2.pdf
 
 If you wanna take a look, and criticize, correct... whatever, I'll be 
 thankful.
 
 I'd like to thank to everyone who helped me. And thank you to everyone
 who tried, even by reading my lng emails.
 ___
 Grok-dev mailing list
 grok-...@zope.org
 https://mail.zope.org/mailman/listinfo/grok-dev




RE: [sqlalchemy] Containers/collections with SQLAlchemy

2010-10-26 Thread Jeff Peterson
http://grok.zope.org/documentation/how-to/orm-using-megrok.rdb-and-sqlalchemy

http://www.sqlalchemy.org/docs/

--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.
237 12th St NW
West Fargo, ND 58078
P: 701-499-5928
E: jeff.peter...@crary.com

 -Original Message-
 From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com]
 On Behalf Of Hector Blanco
 Sent: Tuesday, October 26, 2010 5:40 PM
 To: sqlalchemy@googlegroups.com
 Subject: [sqlalchemy] Containers/collections with SQLAlchemy
 
 Hi group!
 
 I am trying to migrate my application from ZopeDB to MySql. I am using
 sqlalchemy under megrok.rdb.
 
 I have a class which inherits from list() and has a few extra methods.
 Reading the chapter about custom collections in sqlalchemy
 (http://www.sqlalchemy.org/docs/orm/collections.html#custom-collection-
 implementations),
 I thought it'd be great to have that class as a custom collection
 (extending from list() but simulating a set(), where the items can't
 be repeated)
 
 That class that extends from list, acts as a container of other
 classes (mmm... yeah, as all the collections do... pretty obvious). It
 is used (as an attribute) in other classes as well:
 
  class MyEntry(object):
      def __init__(self, field1, field2):
          self.field1 = field1
          self.field2 = field2
 
 
  class MyContainer(list):
      __emulates__ = set

      def __init__(self):
          super(MyContainer, self).__init__()

      def add(self, myEntry):
          if isinstance(myEntry, MyEntry):
              if myEntry not in self:
                  super(MyContainer, self).append(myEntry)

      def getByField1(self, field1):
          for element in self:
              if element.field1 == field1:
                  return element
          return None
 # [ ...
 # more useful methods,
 # overloading to make the list behave like a set,
 # yada yada yada
 # ...]
 
 
  class MyClass(object):
      def __init__(self):
          self.container1 = MyContainer()
          self.container2 = MyContainer()
          self.anotherField = "hello world"

      def getContainer1(self):
          return self.container1

      def getContainer2(self):
          return self.container2
 
 I see clearly the MyEntry and (more or less clearly) MyClass
 classes modeled in tables.
 I also see clearly the intermediate table for MyContainer:
 
  my_containers_table = Table(
      "my_containers_table",
      metadata,
      Column("id", Integer, primary_key=True),
      Column("my_entry_id", Integer, ForeignKey("my_entries_table.id"))
  )
 
 So, in the MyClass class, each of MyContainer() instances can have an
 id and when someone wants to retrieve the MyEntry() elements that are
 in container1 (to say so), the my_containers_table can be used as a
 middle table to get said MyEntries.
 
 but I don't know how to link the MyContainer(list) object with
 my_containers_table (and from there with MyClass) :-(
 
 I'm not even sure whether MyClass.container1 and MyClass.container2
 should be ForeignKeys or Relations.
 
 How can I establish the relationship between MyClass.container1 or
 MyClass.container2 with my_containers_table?
 
 On one hand, I want MyClass.container1 and MyClass.container2 to be
 foreign keys, but on the other, I want them to be instances of
 MyContainer(list)... And that's where I start banging my head on the
 wall :-)
 
 In my mind (preferably before banging it against the wall) I see this
 schema:
 
   +--Entry--+
   | id = 1  |<---+--container1----+
   +---------+    | id = 10        |      +----myClass------+
                  | foreign[0] = 1 |<-----| id = 101        |
   +--Entry--+    | foreign[1] = 2 |      | anotherField    |
   | id = 2  |<---+----------------+      | container1 = 10 |
   +---------+                            | container2 = 20 |
                  +--container2----+<-----+-----------------+
   +--Entry--+    | id = 20        |
   | id = 3  |<---| foreign[0] = 3 |
   +---------+    +----------------+
 
 [I hope the Ascii thing is properly displayed]
 
 When I want to get everything that is in myClass.container1, the system
 should go to my_containers_table with the myClass.container1's id (10)
 and retrieve all the MyEntries (id=1 and id=2 in the example above)
 pointed to by the ForeignKey of my_containers_table. That's what I want
 the system to do. But that's not what it's doing.
 
 Any tip will be deeply appreciated. Links to manuals,
 documentations... whatever (I'm a total newbie in sqlmyalchemy)
 
 Thank you again!
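For later readers, the no-duplicates behavior asked about above is what relationship()'s collection_class hook is for. A hedged sketch with illustrative names, using a plain set and a direct foreign key for brevity; a list subclass like MyContainer (or the intermediate my_containers_table via relationship(..., secondary=...)) plugs into the same hook:

```python
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, Session

Base = declarative_base()

class MyEntry(Base):
    __tablename__ = "my_entries_table"
    id = Column(Integer, primary_key=True)
    field1 = Column(String(50))
    parent_id = Column(Integer, ForeignKey("my_classes_table.id"))

class MyClass(Base):
    __tablename__ = "my_classes_table"
    id = Column(Integer, primary_key=True)
    # collection_class makes the instrumented collection a set, so
    # adding the same entry twice keeps only one copy
    container1 = relationship(MyEntry, collection_class=set)
```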
 

[sqlalchemy] Microsoft Access

2010-09-20 Thread Jeff Peterson
Microsoft Access is supported, albeit experimentally, in 0.5.8.  Will this 
functionality make it past that release at any point, or has it basically been 
scrapped?

--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.
237 12th St NW
West Fargo, ND 58078
P: 701-499-5928
E: jeff.peter...@crary.com




Re: [sqlalchemy] How to map to read only descriptor

2010-06-11 Thread Jeff Peck
On Fri, Jun 11, 2010 at 4:16 AM, jpeck peck.j...@gmail.com wrote:
 I am trying to map to a simple read only property. According to the
 docs, I *think* I am supposed to use synonym. The problem is that I am
 getting a None value for the mapped descriptor's column.


Ok, so I got this working with synonym, but I am not sure I am doing
this the best way. As long as I access the descriptor before an insert
or update, the synonym worked ok.

I ended up using a MapperExtension like so:

class UpdatePropertiesExtension(MapperExtension):
def __init__(self, properties):
self.properties = properties

def _update_properties(self, instance):
for p in self.properties:
getattr(instance, p)

def before_insert(self, mapper, connection, instance):
self._update_properties(instance)

def before_update(self, mapper, connection, instance):
self._update_properties(instance)


This works fine for me, but I'm wondering if there is a better way to do this?
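For later readers: the mapper-level "before_insert"/"before_update" events, which superseded MapperExtension in SQLAlchemy 0.7, can touch the listed properties the same way. A hedged sketch; the model and its derived column are illustrative:

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    checksum = Column(String(50))

    @property
    def derived_checksum(self):
        # Read-only property that refreshes the mapped column as a side effect
        self.checksum = "ck-" + (self.name or "")
        return self.checksum

@event.listens_for(Item, "before_insert")
@event.listens_for(Item, "before_update")
def _refresh_properties(mapper, connection, target):
    target.derived_checksum  # touch the property so the column is current
```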




[sqlalchemy] 0.6 Beta 3: specifying bindparams in update()

2010-04-12 Thread Jeff Peck
I recently tried out 0.6 beta 3, and I noticed that the following
construct is no longer allowed using the sql expression language:

def update_foos(connection, foo_items):
    update = foo_table.update(
        foo_table.c.id == bindparam('id'),
        values={'column_1': bindparam('column_1'), ...}
    )
    connection.execute(update, foo_items)


Apparently you can't specify a bindparam in the where clause now. I
was just kind of curious as to what led to this change, and what is
the recommended way of doing this now? Should I have been doing this
in the first place?
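For reference, on current SQLAlchemy the generative form accepts bindparam() in the WHERE clause for executemany-style execution. A sketch under assumed names — the foo table definition and the b_-prefixed parameter names are illustrative; distinct names keep the WHERE-clause keys from colliding with the column names in the parameter dictionaries:

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        bindparam, create_engine)

metadata = MetaData()
foo_table = Table(
    "foo", metadata,
    Column("id", Integer, primary_key=True),
    Column("column_1", String),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(foo_table.insert(), [
        {"id": 1, "column_1": "old-1"},
        {"id": 2, "column_1": "old-2"},
    ])
    # Generative UPDATE with explicit bindparams in both WHERE and VALUES;
    # executing with a list of dicts runs it as an executemany.
    update = (
        foo_table.update()
        .where(foo_table.c.id == bindparam("b_id"))
        .values(column_1=bindparam("b_column_1"))
    )
    conn.execute(update, [
        {"b_id": 1, "b_column_1": "new-1"},
        {"b_id": 2, "b_column_1": "new-2"},
    ])
    rows = conn.execute(
        foo_table.select().order_by(foo_table.c.id)
    ).fetchall()
```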


Thanks,
Jeff Peck




[sqlalchemy] Calling functions from a package from the session

2010-03-01 Thread Jeff Peterson
I have a function in an Oracle package called CRARY_WEB_USERS_SECURITY_API it 
looks like this:

function GetPassword(
    parent_ in varchar2,
    user_   in varchar2
) return varchar2;

How would I call that function from a session?  Or can I even do that?
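One approach that should work: attribute access on func nests into dotted names, so a package-qualified Oracle function call can be built directly. A sketch that only compiles the statement — executing it would need an Oracle connection, and the argument values here are placeholders:

```python
from sqlalchemy import func, select

# Chained attribute access on `func` produces a dotted function name,
# so the package prefix is rendered as part of the call.
stmt = select(
    func.CRARY_WEB_USERS_SECURITY_API.GetPassword("some_parent", "some_user")
)
compiled = str(stmt)
```

Against a live engine, something like session.execute(stmt).scalar() would then return the function's result.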

TIA,

--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.




RE: [sqlalchemy] Warnings take a really long time / NotImplementedError

2010-02-11 Thread Jeff Peterson


--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.

From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On 
Behalf Of Michael Bayer
Sent: Wednesday, February 10, 2010 6:30 PM
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Warnings take a really long time / NotImplementedError


On Feb 10, 2010, at 6:05 PM, Jeff Peterson wrote:



It's touching a ton of tables, hundreds...if I had to guess I'd say every table 
in the schema.  The reasons for this are unknown to me, certainly all those 
tables are not related specifically to the single view I am attempting to 
reflect.

that shouldn't be possible on a single table reflect, if its a view.  Views 
have no foreign key metadata so it would have no reason to go anywhere else.   
you'd have to provide more specifics in order for us to see how that might be 
reproduced.

Seeing this for basically (I think) everything in the schema:

 INFO [sqlalchemy.engine.base.Engine.0x...5bcc]
SELECT a.index_name, a.column_name, b.uniqueness
FROM ALL_IND_COLUMNS a,
ALL_INDEXES b
WHERE
a.index_name = b.index_name
AND a.table_owner = b.table_owner
AND a.table_name = b.table_name

AND a.table_name = :table_name
AND a.table_owner = :schema
ORDER BY a.index_name, a.column_position
2010-02-11 11:01:43,410 INFO sqlalchemy.engine.base.Engine.0x...5bcc 
{'table_name': 'CFA_CASH_FLOW_STATUS_TAB', 'schema': 'CRAR1APP'}
2010-02-11 11:01:43,410 INFO [sqlalchemy.engine.base.Engine.0x...5bcc] 
{'table_name': 'CFA_CASH_FLOW_STATUS_TAB', 'schema': 'CRAR1APP'}
2010-02-11 11:01:43,413 INFO sqlalchemy.engine.base.Engine.0x...5bcc SELECT 
column_name, data_type, data_length, data_precision, data_scale, nullable, 
data_default FROM ALL_TAB_COLUMNS WHERE table_name = :table_name AND owner = 
:owner ORDER BY column_id
2010-02-11 11:01:43,413 INFO [sqlalchemy.engine.base.Engine.0x...5bcc] SELECT 
column_name, data_type, data_length, data_precision, data_scale, nullable, 
data_default FROM ALL_TAB_COLUMNS WHERE table_name = :table_name AND owner = 
:owner ORDER BY column_id
2010-02-11 11:01:43,414 INFO sqlalchemy.engine.base.Engine.0x...5bcc {'owner': 
'CRAR1APP', 'table_name': 'CFA_CASH_FLOW_TAB'}
2010-02-11 11:01:43,414 INFO [sqlalchemy.engine.base.Engine.0x...5bcc] 
{'owner': 'CRAR1APP', 'table_name': 'CFA_CASH_FLOW_TAB'}
2010-02-11 11:01:43,421 INFO sqlalchemy.engine.base.Engine.0x...5bcc SELECT
 ac.constraint_name,
 ac.constraint_type,
 loc.column_name AS local_column,
 rem.table_name AS remote_table,
 rem.column_name AS remote_column,
 rem.owner AS remote_owner,
 loc.position as loc_pos,
 rem.position as rem_pos
   FROM all_constraints ac,
 all_cons_columns loc,
 all_cons_columns rem
   WHERE ac.table_name = :table_name
   AND ac.constraint_type IN ('R','P')
   AND ac.owner = :owner
   AND ac.owner = loc.owner
   AND ac.constraint_name = loc.constraint_name
   AND ac.r_owner = rem.owner(+)
   AND ac.r_constraint_name = rem.constraint_name(+)
   AND (rem.position IS NULL or loc.position=rem.position)
   ORDER BY ac.constraint_name, loc.position
2010-02-11 11:01:43,421 INFO [sqlalchemy.engine.base.Engine.0x...5bcc] SELECT
 ac.constraint_name,
 ac.constraint_type,
 loc.column_name AS local_column,
 rem.table_name AS remote_table,
 rem.column_name AS remote_column,
 rem.owner AS remote_owner,
 loc.position as loc_pos,
 rem.position as rem_pos
   FROM all_constraints ac,
 all_cons_columns loc,
 all_cons_columns rem
   WHERE ac.table_name = :table_name
   AND ac.constraint_type IN ('R','P')
   AND ac.owner = :owner
   AND ac.owner = loc.owner
   AND ac.constraint_name = loc.constraint_name
   AND ac.r_owner = rem.owner(+)
   AND ac.r_constraint_name = rem.constraint_name(+)
   AND (rem.position IS NULL or loc.position=rem.position)
   ORDER BY ac.constraint_name, loc.position
2010-02-11 11:01:43,421 INFO sqlalchemy.engine.base.Engine.0x...5bcc {'owner': 
'CRAR1APP', 'table_name': 'CFA_CASH_FLOW_TAB'}

2010-02-11 11:01:43,421 INFO [sqlalchemy.engine.base.Engine.0x...5bcc] {'owner': 
'CRAR1APP', 'table_name': 'CFA_CASH_FLOW_TAB'}



* Snip *




RE: [sqlalchemy] Warnings take a really long time / NotImplementedError

2010-02-11 Thread Jeff Peterson
Right, and that same output is repeated for every table in the schema 
when reflecting that one view.  What I posted was just one snippet; it is 
repeated over and over for each different table.

--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.

 -Original Message-
 From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com]
 On Behalf Of Michael Bayer
 Sent: Thursday, February 11, 2010 11:46 AM
 To: sqlalchemy@googlegroups.com
 Subject: RE: [sqlalchemy] Warnings take a really long time /
 NotImplementedError
 
 that SQL output is specific to 'table_name': 'CFA_CASH_FLOW_STATUS_TAB'
 and 'table_name': 'CFA_CASH_FLOW_TAB'.  that's two tables.
 
 
 Jeff Peterson wrote:
 
 
  --
  Jeffrey D Peterson
  Webmaster
  Crary Industries, Inc.
 
  From: sqlalchemy@googlegroups.com
 [mailto:sqlalch...@googlegroups.com] On
  Behalf Of Michael Bayer
  Sent: Wednesday, February 10, 2010 6:30 PM
  To: sqlalchemy@googlegroups.com
  Subject: Re: [sqlalchemy] Warnings take a really long time /
  NotImplementedError
 
 
  On Feb 10, 2010, at 6:05 PM, Jeff Peterson wrote:
 
 
 
  It's touching a ton of tables, hundreds...if I had to guess I'd say
 every
  table in the schema.  The reasons for this are unknown to me,
 certainly
  all those tables are not related specifically to the single view I am
  attempting to reflect.
 
  that shouldn't be possible on a single table reflect, if its a view.
  Views have no foreign key metadata so it would have no reason to go
  anywhere else.   you'd have to provide more specifics in order for us
 to
  see how that might be reproduced.
 
  Seeing this for basicall(I think) everything in the Schema:
 
   INFO [sqlalchemy.engine.base.Engine.0x...5bcc]
  SELECT a.index_name, a.column_name, b.uniqueness
  FROM ALL_IND_COLUMNS a,
  ALL_INDEXES b
  WHERE
  a.index_name = b.index_name
  AND a.table_owner = b.table_owner
  AND a.table_name = b.table_name
 
  AND a.table_name = :table_name
  AND a.table_owner = :schema
  ORDER BY a.index_name, a.column_position
  2010-02-11 11:01:43,410 INFO sqlalchemy.engine.base.Engine.0x...5bcc
  {'table_name': 'CFA_CASH_FLOW_STATUS_TAB', 'schema': 'CRAR1APP'}
  2010-02-11 11:01:43,410 INFO
 [sqlalchemy.engine.base.Engine.0x...5bcc]
  {'table_name': 'CFA_CASH_FLOW_STATUS_TAB', 'schema': 'CRAR1APP'}
  2010-02-11 11:01:43,413 INFO sqlalchemy.engine.base.Engine.0x...5bcc
  SELECT column_name, data_type, data_length, data_precision,
 data_scale,
  nullable, data_default FROM ALL_TAB_COLUMNS WHERE table_name =
 :table_name
  AND owner = :owner ORDER BY column_id
  2010-02-11 11:01:43,413 INFO
 [sqlalchemy.engine.base.Engine.0x...5bcc]
  SELECT column_name, data_type, data_length, data_precision,
 data_scale,
  nullable, data_default FROM ALL_TAB_COLUMNS WHERE table_name =
 :table_name
  AND owner = :owner ORDER BY column_id
  2010-02-11 11:01:43,414 INFO sqlalchemy.engine.base.Engine.0x...5bcc
  {'owner': 'CRAR1APP', 'table_name': 'CFA_CASH_FLOW_TAB'}
  2010-02-11 11:01:43,414 INFO
 [sqlalchemy.engine.base.Engine.0x...5bcc]
  {'owner': 'CRAR1APP', 'table_name': 'CFA_CASH_FLOW_TAB'}
  2010-02-11 11:01:43,421 INFO sqlalchemy.engine.base.Engine.0x...5bcc
  SELECT
   ac.constraint_name,
   ac.constraint_type,
   loc.column_name AS local_column,
   rem.table_name AS remote_table,
   rem.column_name AS remote_column,
   rem.owner AS remote_owner,
   loc.position as loc_pos,
   rem.position as rem_pos
 FROM all_constraints ac,
   all_cons_columns loc,
   all_cons_columns rem
 WHERE ac.table_name = :table_name
 AND ac.constraint_type IN ('R','P')
 AND ac.owner = :owner
 AND ac.owner = loc.owner
 AND ac.constraint_name = loc.constraint_name
 AND ac.r_owner = rem.owner(+)
 AND ac.r_constraint_name = rem.constraint_name(+)
 AND (rem.position IS NULL or loc.position=rem.position)
 ORDER BY ac.constraint_name, loc.position
  2010-02-11 11:01:43,421 INFO
 [sqlalchemy.engine.base.Engine.0x...5bcc]
  SELECT
   ac.constraint_name,
   ac.constraint_type,
   loc.column_name AS local_column,
   rem.table_name AS remote_table,
   rem.column_name AS remote_column,
   rem.owner AS remote_owner,
   loc.position as loc_pos,
   rem.position as rem_pos
 FROM all_constraints ac,
   all_cons_columns loc,
   all_cons_columns rem
 WHERE ac.table_name = :table_name
 AND ac.constraint_type IN ('R','P')
 AND ac.owner = :owner
 AND ac.owner = loc.owner
 AND ac.constraint_name = loc.constraint_name
 AND ac.r_owner = rem.owner

RE: [sqlalchemy] Warnings take a really long time / NotImplementedError

2010-02-11 Thread Jeff Peterson
That is the troubling part, I am reflecting a view, and yet it is still 
touching all those tables in the DB for schema='CRAR1APP'

--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.


 -Original Message-
 From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com]
 On Behalf Of Michael Bayer
 Sent: Thursday, February 11, 2010 12:05 PM
 To: sqlalchemy@googlegroups.com
 Subject: RE: [sqlalchemy] Warnings take a really long time /
 NotImplementedError
 
 I thought you were reflecting a view ?  a table will fan out to all of
 its
 constraints, yes.
 
 
 Jeff Peterson wrote:
  Right, and there is that same code outputted  for every table in the
  schema, when reflecting that one view.  What I posted was just the
 one
  snippet, it is repeated over and over for each different table.
 
  --
  Jeffrey D Peterson
  Webmaster
  Crary Industries, Inc.
 
  -Original Message-
  From: sqlalchemy@googlegroups.com
 [mailto:sqlalch...@googlegroups.com]
  On Behalf Of Michael Bayer
  Sent: Thursday, February 11, 2010 11:46 AM
  To: sqlalchemy@googlegroups.com
  Subject: RE: [sqlalchemy] Warnings take a really long time /
  NotImplementedError
 
  that SQL output is specific to 'table_name':
 'CFA_CASH_FLOW_STATUS_TAB'
  and 'table_name': 'CFA_CASH_FLOW_TAB'.  that's two tables.
 
 
  Jeff Peterson wrote:
  
  
   --
   Jeffrey D Peterson
   Webmaster
   Crary Industries, Inc.
  
   From: sqlalchemy@googlegroups.com
  [mailto:sqlalch...@googlegroups.com] On
   Behalf Of Michael Bayer
   Sent: Wednesday, February 10, 2010 6:30 PM
   To: sqlalchemy@googlegroups.com
   Subject: Re: [sqlalchemy] Warnings take a really long time /
   NotImplementedError
  
  
   On Feb 10, 2010, at 6:05 PM, Jeff Peterson wrote:
  
  
  
   It's touching a ton of tables, hundreds...if I had to guess I'd
 say
  every
   table in the schema.  The reasons for this are unknown to me,
  certainly
   all those tables are not related specifically to the single view I
 am
   attempting to reflect.
  
   that shouldn't be possible on a single table reflect, if its a
 view.
   Views have no foreign key metadata so it would have no reason to
 go
   anywhere else.   you'd have to provide more specifics in order for
 us
  to
   see how that might be reproduced.
  
   Seeing this for basicall(I think) everything in the Schema:
  
INFO [sqlalchemy.engine.base.Engine.0x...5bcc]
   SELECT a.index_name, a.column_name, b.uniqueness
   FROM ALL_IND_COLUMNS a,
   ALL_INDEXES b
   WHERE
   a.index_name = b.index_name
   AND a.table_owner = b.table_owner
   AND a.table_name = b.table_name
  
   AND a.table_name = :table_name
   AND a.table_owner = :schema
   ORDER BY a.index_name, a.column_position
   2010-02-11 11:01:43,410 INFO
 sqlalchemy.engine.base.Engine.0x...5bcc
   {'table_name': 'CFA_CASH_FLOW_STATUS_TAB', 'schema': 'CRAR1APP'}
   2010-02-11 11:01:43,410 INFO
  [sqlalchemy.engine.base.Engine.0x...5bcc]
   {'table_name': 'CFA_CASH_FLOW_STATUS_TAB', 'schema': 'CRAR1APP'}
   2010-02-11 11:01:43,413 INFO
 sqlalchemy.engine.base.Engine.0x...5bcc
   SELECT column_name, data_type, data_length, data_precision,
  data_scale,
   nullable, data_default FROM ALL_TAB_COLUMNS WHERE table_name =
  :table_name
   AND owner = :owner ORDER BY column_id
   2010-02-11 11:01:43,413 INFO
  [sqlalchemy.engine.base.Engine.0x...5bcc]
   SELECT column_name, data_type, data_length, data_precision,
  data_scale,
   nullable, data_default FROM ALL_TAB_COLUMNS WHERE table_name =
  :table_name
   AND owner = :owner ORDER BY column_id
   2010-02-11 11:01:43,414 INFO
 sqlalchemy.engine.base.Engine.0x...5bcc
   {'owner': 'CRAR1APP', 'table_name': 'CFA_CASH_FLOW_TAB'}
   2010-02-11 11:01:43,414 INFO
  [sqlalchemy.engine.base.Engine.0x...5bcc]
   {'owner': 'CRAR1APP', 'table_name': 'CFA_CASH_FLOW_TAB'}
   2010-02-11 11:01:43,421 INFO
 sqlalchemy.engine.base.Engine.0x...5bcc
   SELECT
ac.constraint_name,
ac.constraint_type,
loc.column_name AS local_column,
rem.table_name AS remote_table,
rem.column_name AS remote_column,
rem.owner AS remote_owner,
loc.position as loc_pos,
rem.position as rem_pos
  FROM all_constraints ac,
all_cons_columns loc,
all_cons_columns rem
  WHERE ac.table_name = :table_name
  AND ac.constraint_type IN ('R','P')
  AND ac.owner = :owner
  AND ac.owner = loc.owner
  AND ac.constraint_name = loc.constraint_name
  AND ac.r_owner = rem.owner(+)
  AND ac.r_constraint_name = rem.constraint_name(+)
  AND (rem.position IS NULL or loc.position=rem.position)
  ORDER BY ac.constraint_name, loc.position
   2010-02-11 11:01:43,421 INFO
  [sqlalchemy.engine.base.Engine.0x...5bcc

RE: [sqlalchemy] Warnings take a really long time / NotImplementedError

2010-02-11 Thread Jeff Peterson
The view name itself isn't, but the names of all the tables that make up that 
view are.  So I guess that must be why.

--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.


 -Original Message-
 From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com]
 On Behalf Of Michael Bayer
 Sent: Thursday, February 11, 2010 12:59 PM
 To: sqlalchemy@googlegroups.com
 Subject: RE: [sqlalchemy] Warnings take a really long time /
 NotImplementedError
 
 Jeff Peterson wrote:
  That is the troubling part, I am reflecting a view, and yet it is
 still
  touching all those tables in the DB for schema='CRAR1APP'
 
 does the name of your view appear at all in ALL_CONS_COLUMNS.TABLE_NAME
 ?
 that's the only way reflection of a view could get the name of a table
 to
 reflect.if you turn on echo='debug' or set sqlalchemy.engine to
 DEBUG level logging, you'd see all the rows returned from every query.
 
 
 
  --
  Jeffrey D Peterson
  Webmaster
  Crary Industries, Inc.
 
 
  -Original Message-
  From: sqlalchemy@googlegroups.com
 [mailto:sqlalch...@googlegroups.com]
  On Behalf Of Michael Bayer
  Sent: Thursday, February 11, 2010 12:05 PM
  To: sqlalchemy@googlegroups.com
  Subject: RE: [sqlalchemy] Warnings take a really long time /
  NotImplementedError
 
  I thought you were reflecting a view ?  a table will fan out to all
 of
  its
  constraints, yes.
 
 
  Jeff Peterson wrote:
   Right, and there is that same code outputted  for every table in
 the
   schema, when reflecting that one view.  What I posted was just the
  one
   snippet, it is repeated over and over for each different table.
  
   --
   Jeffrey D Peterson
   Webmaster
   Crary Industries, Inc.
  
   -Original Message-
   From: sqlalchemy@googlegroups.com
  [mailto:sqlalch...@googlegroups.com]
   On Behalf Of Michael Bayer
   Sent: Thursday, February 11, 2010 11:46 AM
   To: sqlalchemy@googlegroups.com
   Subject: RE: [sqlalchemy] Warnings take a really long time /
   NotImplementedError
  
   that SQL output is specific to 'table_name':
  'CFA_CASH_FLOW_STATUS_TAB'
   and 'table_name': 'CFA_CASH_FLOW_TAB'.  that's two tables.
  
  
   Jeff Peterson wrote:
   
   
--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.
   
From: sqlalchemy@googlegroups.com
   [mailto:sqlalch...@googlegroups.com] On
Behalf Of Michael Bayer
Sent: Wednesday, February 10, 2010 6:30 PM
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Warnings take a really long time /
NotImplementedError
   
   
On Feb 10, 2010, at 6:05 PM, Jeff Peterson wrote:
   
   
   
It's touching a ton of tables, hundreds...if I had to guess I'd
  say
   every
table in the schema.  The reasons for this are unknown to me,
   certainly
all those tables are not related specifically to the single
 view I
  am
attempting to reflect.
   
that shouldn't be possible on a single table reflect, if its a
  view.
Views have no foreign key metadata so it would have no reason
 to
  go
anywhere else.   you'd have to provide more specifics in order
 for
  us
   to
see how that might be reproduced.
   
Seeing this for basicall(I think) everything in the Schema:
   
 INFO [sqlalchemy.engine.base.Engine.0x...5bcc]
SELECT a.index_name, a.column_name, b.uniqueness
FROM ALL_IND_COLUMNS a,
ALL_INDEXES b
WHERE
a.index_name = b.index_name
AND a.table_owner = b.table_owner
AND a.table_name = b.table_name
   
AND a.table_name = :table_name
AND a.table_owner = :schema
ORDER BY a.index_name, a.column_position
2010-02-11 11:01:43,410 INFO
  sqlalchemy.engine.base.Engine.0x...5bcc
{'table_name': 'CFA_CASH_FLOW_STATUS_TAB', 'schema':
 'CRAR1APP'}
2010-02-11 11:01:43,410 INFO
   [sqlalchemy.engine.base.Engine.0x...5bcc]
{'table_name': 'CFA_CASH_FLOW_STATUS_TAB', 'schema':
 'CRAR1APP'}
2010-02-11 11:01:43,413 INFO
  sqlalchemy.engine.base.Engine.0x...5bcc
SELECT column_name, data_type, data_length, data_precision,
   data_scale,
nullable, data_default FROM ALL_TAB_COLUMNS WHERE table_name =
   :table_name
AND owner = :owner ORDER BY column_id
2010-02-11 11:01:43,413 INFO
   [sqlalchemy.engine.base.Engine.0x...5bcc]
SELECT column_name, data_type, data_length, data_precision,
   data_scale,
nullable, data_default FROM ALL_TAB_COLUMNS WHERE table_name =
   :table_name
AND owner = :owner ORDER BY column_id
2010-02-11 11:01:43,414 INFO
  sqlalchemy.engine.base.Engine.0x...5bcc
{'owner': 'CRAR1APP', 'table_name': 'CFA_CASH_FLOW_TAB'}
2010-02-11 11:01:43,414 INFO
   [sqlalchemy.engine.base.Engine.0x...5bcc]
{'owner': 'CRAR1APP', 'table_name': 'CFA_CASH_FLOW_TAB'}
2010-02-11 11:01:43,421 INFO
  sqlalchemy.engine.base.Engine.0x...5bcc
SELECT
 ac.constraint_name,
 ac.constraint_type

RE: [sqlalchemy] Warnings take a really long time / NotImplementedError

2010-02-11 Thread Jeff Peterson
 -Original Message-
 From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com]
 On Behalf Of Michael Bayer
 Sent: Thursday, February 11, 2010 1:26 PM
 To: sqlalchemy@googlegroups.com
 Subject: RE: [sqlalchemy] Warnings take a really long time /
 NotImplementedError

 Jeff Peterson wrote:
  The view name itself isn't but the names of all the tables that make
 up
  that view are.  So I guess that must be why.

 It is only looking at the columns declared in your view - the Table
 reflection logic doesn't actually look at the original definition of
 the
 view (there is a function for that available but that's not what you're
 using here).

 I'm not familiar with what Oracle does here but if it places view
 columns
 into ALL_CONS_COLUMNS corresponding to the table column they represent,
 that would be the effect.  But it seems strange that would be the case,
 since there's no constraint on your view.

 The other possibility is that you are actually reflecting tables
 somewhere
 else.

If I am, it's not on purpose. ;)  I was able to make one observation, 
though: during my testing, while trying to get all the kinks worked out, I set 
up two connection strings, 1) the schema owner (who has rights to everything) and 2) 
my limited user, who only has select rights on certain views.  The slowdown 
happens when I am connected as the schema user.  When connected as the limited user 
it's lightning fast (I commented out the create code in the lib, so I can't create 
new tables as it sits, but it reflects just fine).  So, bottom line: despite 
the strangeness, I can just not worry about it, at least for now.  But 
it's clear that when it can't touch those tables it doesn't run those 
queries.
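For anyone following along, the echo='debug' setting Michael suggested earlier can be enabled like this (the sqlite URL is just a stand-in for the real Oracle connection string):

```python
import logging

from sqlalchemy import create_engine

# echo="debug" logs both the emitted SQL and the rows each query returns,
# which makes it easy to see exactly which tables reflection is visiting.
engine = create_engine("sqlite://", echo="debug")

# Equivalently, configure the sqlalchemy.engine logger directly:
logging.basicConfig()
logging.getLogger("sqlalchemy.engine").setLevel(logging.DEBUG)
```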



 
  --
  Jeffrey D Peterson
  Webmaster
  Crary Industries, Inc.
 
 
  -Original Message-
  From: sqlalchemy@googlegroups.com
 [mailto:sqlalch...@googlegroups.com]
  On Behalf Of Michael Bayer
  Sent: Thursday, February 11, 2010 12:59 PM
  To: sqlalchemy@googlegroups.com
  Subject: RE: [sqlalchemy] Warnings take a really long time /
  NotImplementedError
 
  Jeff Peterson wrote:
   That is the troubling part, I am reflecting a view, and yet it is
  still
   touching all those tables in the DB for schema='CRAR1APP'
 
  does the name of your view appear at all in
 ALL_CONS_COLUMNS.TABLE_NAME
  ?
  that's the only way reflection of a view could get the name of a
 table
  to
  reflect.if you turn on echo='debug' or set sqlalchemy.engine
 to
  DEBUG level logging, you'd see all the rows returned from every
 query.
 
 
  
   --
   Jeffrey D Peterson
   Webmaster
   Crary Industries, Inc.
  
  
   -Original Message-
   From: sqlalchemy@googlegroups.com
  [mailto:sqlalch...@googlegroups.com]
   On Behalf Of Michael Bayer
   Sent: Thursday, February 11, 2010 12:05 PM
   To: sqlalchemy@googlegroups.com
   Subject: RE: [sqlalchemy] Warnings take a really long time /
   NotImplementedError
  
   I thought you were reflecting a view ?  a table will fan out to
 all
  of
   its
   constraints, yes.
  
  
   Jeff Peterson wrote:
Right, and there is that same code outputted  for every table
 in
  the
schema, when reflecting that one view.  What I posted was just
 the
   one
snippet, it is repeated over and over for each different table.
   
--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.
   
-Original Message-
From: sqlalchemy@googlegroups.com
   [mailto:sqlalch...@googlegroups.com]
On Behalf Of Michael Bayer
Sent: Thursday, February 11, 2010 11:46 AM
To: sqlalchemy@googlegroups.com
Subject: RE: [sqlalchemy] Warnings take a really long time /
NotImplementedError
   
that SQL output is specific to 'table_name':
   'CFA_CASH_FLOW_STATUS_TAB'
and 'table_name': 'CFA_CASH_FLOW_TAB'.  that's two tables.
   
   
Jeff Peterson wrote:


 --
 Jeffrey D Peterson
 Webmaster
 Crary Industries, Inc.

 From: sqlalchemy@googlegroups.com
[mailto:sqlalch...@googlegroups.com] On
 Behalf Of Michael Bayer
 Sent: Wednesday, February 10, 2010 6:30 PM
 To: sqlalchemy@googlegroups.com
 Subject: Re: [sqlalchemy] Warnings take a really long time /
 NotImplementedError


 On Feb 10, 2010, at 6:05 PM, Jeff Peterson wrote:



 It's touching a ton of tables, hundreds...if I had to guess
 I'd
   say
every
 table in the schema.  The reasons for this are unknown to
 me,
certainly
 all those tables are not related specifically to the single
  view I
   am
 attempting to reflect.

 that shouldn't be possible on a single table reflect, if its
 a
   view.
 Views have no foreign key metadata so it would have no
 reason
  to
   go
 anywhere else.   you'd have to provide more specifics in
 order
  for
   us
to
 see how that might be reproduced.

 Seeing this for basicall(I think) everything in the Schema:

  INFO

[sqlalchemy] Warnings take a really long time / NotImplementedError

2010-02-10 Thread Jeff Peterson
First spin through, I get these errors/warnings:

/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'objid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'LONG RAW' of column 'data'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'source_rowid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'arowid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'browid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'crowid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'at_rowid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'drowid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'row_a'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'row_b'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'row_c'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'textkey'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'object_id'
  ret = fn(self, con, *args, **kw)
2010-02-10 14:00:33,891 ERROR [SiteError] 
http://portal-dev.craryindustries.com/testsa.html
Traceback (most recent call last):
  File 
/home/zope/.buildout/eggs/zope.publisher-3.4.9-py2.5.egg/zope/publisher/publish.py,
 line 133, in publish
result = publication.callObject(request, obj)
  File /home/zope/.buildout/eggs/grok-1.0-py2.5.egg/grok/publication.py, line 
89, in callObject
return super(ZopePublicationSansProxy, self).callObject(request, ob)
  File 
/home/zope/.buildout/eggs/zope.app.publication-3.4.3-py2.5.egg/zope/app/publication/zopepublication.py,
 line 167, in callObject
return mapply(ob, request.getPositionalArguments(), request)
  File 
/home/zope/.buildout/eggs/zope.publisher-3.4.9-py2.5.egg/zope/publisher/publish.py,
 line 108, in mapply
return debug_call(obj, args)
  File 
/home/zope/.buildout/eggs/zope.publisher-3.4.9-py2.5.egg/zope/publisher/publish.py,
 line 114, in debug_call
return obj(*args)
  File 
/home/zope/.buildout/eggs/grokcore.view-1.12.2-py2.5.egg/grokcore/view/components.py,
 line 64, in __call__
return mapply(self.render, (), self.request)
  File 
/home/zope/.buildout/eggs/zope.publisher-3.4.9-py2.5.egg/zope/publisher/publish.py,
 line 108, in mapply
return debug_call(obj, args)
  File 
/home/zope/.buildout/eggs/zope.publisher-3.4.9-py2.5.egg/zope/publisher/publish.py,
 line 114, in debug_call
return obj(*args)
  File 
/home/zope/development/grokenv/projects/portal/src/portal/lib/db/sql_alchemy_test.py,
 line 30, in render
session = rdb.Session()
  File 
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/orm/scoping.py, 
line 52, in __call__
return self.registry()
  File build/bdist.linux-i686/egg/sqlalchemy/util.py, line 1225, in __call__
  File 
/home/zope/.buildout/eggs/z3c.saconfig-0.9.1-py2.5.egg/z3c/saconfig/scopedsession.py,
 line 12, in session_factory
return utility.sessionFactory()
  File 
/home/zope/.buildout/eggs/z3c.saconfig-0.9.1-py2.5.egg/z3c/saconfig/utility.py,
 line 70, in sessionFactory
kw['bind'] = engine_factory()
  File 
/home/zope/.buildout/eggs/z3c.saconfig-0.9.1-py2.5.egg/z3c/saconfig/utility.py,
 line 165, in __call__
notify(EngineCreatedEvent(engine))
  File 
/home/zope/.buildout/eggs/zope.event-3.4.0-py2.5.egg/zope/event/__init__.py, 
line 23, in notify
subscriber(event)
  File 
/home/zope/.buildout/eggs/zope.component-3.4.0-py2.5.egg/zope/component/event.py,
 line 26, in dispatch
for ignored in zope.component.subscribers(event, None):
  File 

[sqlalchemy] RE: Warnings take a really long time / NotImplementedError

2010-02-10 Thread Jeff Peterson
BTW, this is using 0.6 beta1 build 6743 on Grok, reflecting a view from an 
Oracle (10.2) 10g DB.

--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.

From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On 
Behalf Of Jeff Peterson
Sent: Wednesday, February 10, 2010 2:29 PM
To: sqlalchemy@googlegroups.com
Subject: [sqlalchemy] Warnings take a really long time / NotImplementedError

* Snip *
  File 
/home/zope/.buildout/eggs/zope.publisher-3.4.9-py2.5.egg/zope/publisher/publish.py,
 line 108, in mapply
return debug_call(obj, args)
  File 
/home/zope/.buildout/eggs/zope.publisher-3.4.9-py2.5.egg/zope/publisher/publish.py,
 line 114, in debug_call
return obj(*args)
  File 
/home/zope/.buildout/eggs/grokcore.view-1.12.2-py2.5.egg/grokcore/view/components.py,
 line 64, in __call__
return mapply(self.render, (), self.request)
  File 
/home/zope/.buildout/eggs/zope.publisher-3.4.9-py2.5.egg/zope/publisher/publish.py,
 line 108, in mapply
return debug_call(obj, args)
  File 
/home/zope/.buildout/eggs/zope.publisher-3.4.9-py2.5.egg/zope/publisher/publish.py,
 line 114, in debug_call
return obj(*args)
  File 
/home/zope/development/grokenv/projects/portal/src/portal/lib/db/sql_alchemy_test.py,
 line 30, in render
session = rdb.Session()
  File 
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/orm/scoping.py, 
line 52, in __call__
return self.registry()
  File build/bdist.linux-i686/egg/sqlalchemy/util.py, line 1225, in __call__
  File 
/home/zope/.buildout/eggs/z3c.saconfig-0.9.1-py2.5.egg/z3c/saconfig/scopedsession.py,
 line 12, in session_factory
return utility.sessionFactory()
  File 
/home/zope/.buildout/eggs/z3c.saconfig-0.9.1-py2.5.egg/z3c/saconfig/utility.py,
 line 70, in sessionFactory
kw['bind'] = engine_factory()
  File 
/home/zope/.buildout/eggs/z3c.saconfig-0.9.1-py2.5.egg/z3c/saconfig

RE: [sqlalchemy] Warnings take a really long time / NotImplementedError

2010-02-10 Thread Jeff Peterson


--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.

From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On 
Behalf Of Michael Bayer
Sent: Wednesday, February 10, 2010 2:38 PM
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Warnings take a really long time / NotImplementedError


On Feb 10, 2010, at 3:28 PM, Jeff Peterson wrote:


First spin through, I get these errors/warnings:

/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'objid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'LONG RAW' of column 'data'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'source_rowid'
  ret = fn(self, con, *args, **kw)


these are oracle column types that aren't present in the reflected types list.  
 this error is harmless (assuming you don't issue CREATE TABLE like you're 
doing later).

  File 
/home/zope/.buildout/eggs/megrok.rdb-0.10-py2.5.egg/megrok/rdb/setup.py, line 
93, in createTables
metadata.create_all(engine)
NotImplementedError: Can't generate DDL for the null type

this is more of a problem.  you're reflecting views (and I assume table 
objects) from your database, and then emitting metadata.create_all() - the 
views you've reflected are assumed to be tables, which don't exist, and it 
attempts to issue CREATE TABLE for them, and fails due to the types missing 
above (luckily, or else it would generate a new table for every view).

You shouldn't be calling create_all().  Especially not in a web application 
when it starts up, thats kind of crazy, and also not when your application 
receives its table metadata through reflection.

The module calling create_all() is a third party lib for Grok, which clearly 
doesn't handle reflection very well as it makes that create call regardless.  
It may be I need to step back and try and handle the SQLA stuff on my own.

Is there any way to skip/speed up the unrecognized column warnings?

what evidence do you have that the warnings themselves are slow ?   just 
because thats what you see actually dumped in your output has no relevance to 
the work that is actually going on, in this case, the vast majority of columns 
being reflected that do *not* generate any warning, since you would appear to 
be reflecting at least 12 views.  Reflecting a whole database is not a quick 
operation.

I am, in fact, only reflecting one view, but you got me thinking: that view 
has 50+ columns and joins in two other views. Does it attempt to reflect all 
the tables/views that make up that view?
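
Regarding skipping the "Did not recognize type" warnings: later SQLAlchemy
releases let you register unrecognized types in the Oracle dialect's
ischema_names map, after which reflection maps those columns instead of
warning. A hedged sketch — the import paths below are for modern SQLAlchemy
(1.4+), not the 0.6 beta discussed in this thread:

```python
from sqlalchemy.dialects import oracle
from sqlalchemy.dialects.oracle.base import ischema_names
from sqlalchemy.types import LargeBinary

# Teach the Oracle dialect about the column types it warned on; recent
# versions ship these already, so setdefault leaves any built-in mapping.
ischema_names.setdefault("ROWID", oracle.ROWID)
ischema_names.setdefault("LONG RAW", LargeBinary)
```

This only changes what reflection does with those columns; it has no effect
on how long reflecting the rest of the schema takes.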


--
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.




RE: [sqlalchemy] Warnings take a really long time / NotImplementedError

2010-02-10 Thread Jeff Peterson
From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On 
Behalf Of Michael Bayer
Sent: Wednesday, February 10, 2010 2:38 PM
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Warnings take a really long time / NotImplementedError


On Feb 10, 2010, at 3:28 PM, Jeff Peterson wrote:

First spin through, I get these errors/warnings:

/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'objid'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'LONG RAW' of column 'data'
  ret = fn(self, con, *args, **kw)
/home/zope/.buildout/eggs/SQLAlchemy-0.6-py2.5.egg/sqlalchemy/engine/reflection.py:40:
 SAWarning: Did not recognize type 'ROWID' of column 'source_rowid'
  ret = fn(self, con, *args, **kw)


these are oracle column types that aren't present in the reflected types list.  
 this error is harmless (assuming you don't issue CREATE TABLE like you're 
doing later).

  File 
/home/zope/.buildout/eggs/megrok.rdb-0.10-py2.5.egg/megrok/rdb/setup.py, line 
93, in createTables
metadata.create_all(engine)
NotImplementedError: Can't generate DDL for the null type

this is more of a problem.  you're reflecting views (and I assume table 
objects) from your database, and then emitting metadata.create_all() - the 
views you've reflected are assumed to be tables, which don't exist, and it 
attempts to issue CREATE TABLE for them, and fails due to the types missing 
above (luckily, or else it would generate a new table for every view).

You shouldn't be calling create_all().  Especially not in a web application 
when it starts up, thats kind of crazy, and also not when your application 
receives its table metadata through reflection.

The module calling create_all() is a third party lib for Grok, which clearly 
doesn't handle reflection very well as it makes that create call regardless.  
It may be I need to step back and try and handle the SQLA stuff on my own.

Is there any way to skip/speed up the unrecognized column warnings?

what evidence do you have that the warnings themselves are slow ?   just 
because thats what you see actually dumped in your output has no relevance to 
the work that is actually going on, in this case, the vast majority of columns 
being reflected that do *not* generate any warning, since you would appear to 
be reflecting at least 12 views.  Reflecting a whole database is not a quick 
operation.

I am, in fact, only reflecting one view, but you got me thinking: that view 
has 50+ columns and joins in two other views. Does it attempt to reflect all 
the tables/views that make up that view?

However, I had made some changes to the lib to allow the reflected views to be 
skipped basically removing them from the metadata, which worked, the views 
were reflected but it still took 30-40 seconds for it to reflect this one view.

The code for this:

class CrarySalesPart(rdb.Model):
    rdb.reflected()
    rdb.tablename('crary_sales_part')
    rdb.tableargs(schema='crar1app', useexisting=True)

    contract = Column('contract', String, nullable=False, primary_key=True)
    catalog_no = Column('catalog_no', String, nullable=False, primary_key=True)

class Index(grok.View):
    grok.context(Portal)
    grok.name('testsa.html')

    def render(self):
        session = rdb.Session()
        sp = session.query(CrarySalesPart).filter(
            CrarySalesPart.contract == '20').limit(10)
        msg = ''.join(['<p style="padding:0;margin:0">%s: %s, %s</p>' %
                       (o.catalog_no, o.catalog_desc, o.part_product_code)
                       for o in sp])

        return '<html><head></head><body>%s</body></html>' % msg

The first time I render that view, the reflection takes place and it takes 
30-40 seconds to load the page (during which time the warnings are being 
generated); once it's mapped it is very fast.
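
One way to avoid reflecting the whole schema is to reflect only the objects
you actually need, rather than letting the library call metadata.reflect()
with no arguments. A minimal sketch against an in-memory SQLite database,
using the modern autoload_with spelling (SQLAlchemy 1.4+; the 0.6-era
equivalent was autoload=True, autoload_with=engine) — the table name is
borrowed from the thread purely for illustration:

```python
from sqlalchemy import MetaData, Table, create_engine

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.exec_driver_sql(
        "CREATE TABLE crary_sales_part (contract TEXT, catalog_no TEXT)")
    conn.exec_driver_sql(
        "CREATE TABLE unrelated (id INTEGER)")  # never touched below

metadata = MetaData()
# Reflect exactly one table; nothing else in the schema is inspected.
part = Table("crary_sales_part", metadata, autoload_with=engine)
```

Only the requested table ends up in metadata.tables, so a single slow view
doesn't drag the rest of the schema along with it.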





RE: [sqlalchemy] Warnings take a really long time / NotImplementedError

2010-02-10 Thread Jeff Peterson


--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.

From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On 
Behalf Of Michael Bayer
Sent: Wednesday, February 10, 2010 3:18 PM
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Warnings take a really long time / NotImplementedError


On Feb 10, 2010, at 4:13 PM, Jeff Peterson wrote:

The first time I render that view, the reflection takes place and it takes the 
30-40 seconds to load the page (during which time the warnings are being 
generated), once it's mapped it is very fast.

You should probably have reflection occur when your app starts, not when a page 
is hit, but anyway, I dont know why that view takes 40 seconds to reflect and I 
would advise enabling SQL echoing to see what queries are taking place and 
roughly how long they take to complete.

It's touching a ton of tables, hundreds...if I had to guess I'd say every table 
in the schema.  The reasons for this are unknown to me, certainly all those 
tables are not related specifically to the single view I am attempting to 
reflect.

The other thing I am noticing is the code in the lib does this:

reflectTables(metadata)
createTables(metadata)
notify(DatabaseSetupEvent(metadata))

reflectTables does, among other things, a metadata.reflect(bind=engine)

createTables looks like:

def createTables(metadata):
    """Create class-specified tables."""
    engine = Engine()
    metadata.create_all(engine)


So, the metadata has (or could have) both reflected tables/views and tables 
needing to be created, which would be fine assuming each reflected class is 
actually a table in the DB. We know that in this case it isn't; it's a view, 
and so create_all tried to create it. So, the issue is: what's the best way 
to stop this? Should the reflected views simply be removed from the metadata 
after reflection, or is there some simpler answer?
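
One option (a sketch of the general technique, not megrok.rdb's actual fix):
metadata.create_all() accepts an explicit tables list, so reflected objects
can simply be excluded from the DDL pass instead of being removed from the
metadata. Demonstrated here against in-memory SQLite with made-up table names:

```python
from sqlalchemy import Column, Integer, MetaData, Table, create_engine, inspect

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.exec_driver_sql("CREATE TABLE already_there (id INTEGER PRIMARY KEY)")

metadata = MetaData()
metadata.reflect(bind=engine)          # picks up already_there
reflected = set(metadata.tables)       # names known before our own models

needs_create = Table("needs_create", metadata,
                     Column("id", Integer, primary_key=True))

# Only emit CREATE TABLE for objects that were not reflected.
metadata.create_all(engine, tables=[t for t in metadata.sorted_tables
                                    if t.name not in reflected])

names = inspect(engine).get_table_names()
```

The reflected objects stay usable for querying; they are just never handed
to the DDL generator.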

TIA,







RE: [sqlalchemy] Oracle, ownership and unrecognized type warnings

2010-01-11 Thread Jeff Peterson
Sorry for not getting back sooner.

--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.

-Original Message-
From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On 
Behalf Of Michael Bayer
Sent: Wednesday, January 06, 2010 1:23 PM
To: sqlalchemy@googlegroups.com
Subject: Re: [sqlalchemy] Oracle, ownership and unrecognized type warnings

 crary_web wrote:
 For production I will need to connect as user webserv who has no 
 ownership at all only select grants, and will only have access to 
 views.  Currently, with what I have deciphered for myself, I can't do 
 this.  I cannot reflect a view at all, it complains about primary keys 
 which I can understand, but is there a way around this as it's not 
 practical to ask our DBA to put pks on the hundreds of views I will 
 possibly need to access, if he can at all.

 its true that there's no built in functionality to reflect views.  In the 
 case of your views, just create Table instances manually, specifying the
 view names, column names, and column types explicitly.   Set the
 primary_key=True flag on those columns which you'd like to consider as
 part of the primary key within your application.   No database change is
 necessary.


So, does doing this actually create a new table in the DB?  I tried this, and 
in my instance it failed, but I could see it trying to run a create table 
command.  Is this the way it works? Is there no other way?  The user I connect 
with will not have privileges to do this.


If you need to reflect an actual table in the DB, and you'd like to override 
what columns are considered as part of the primary key within your 
application, you specify those columns explicitly as in 
http://www.sqlalchemy.org/docs/05/metadata.html#overriding-reflected-columns.


 I can reflect a table as long as I connect as the table owner which as 
 I stated before I will not be able to do.  What configuration flag am 
 I missing or parameter I am not passing to make this ok?

 pass the schema='someowner' flag to each Table object.

This worked, for the case of tables.  FYI, using megrok.rdb you must include 
the class property __table_args__ as a dictionary i.e. 

class ReflectedTable(megrok.rdb.Model):
    megrok.rdb.reflected()
    __table_args__ = {'schema': 'someowner'}








RE: [sqlalchemy] Oracle, ownership and unrecognized type warnings

2010-01-11 Thread Jeff Peterson
Hmmm, must be something megrok.rdb is doing I am not seeing, I will look into 
it.

--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.


-Original Message-
From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On 
Behalf Of Michael Bayer
Sent: Monday, January 11, 2010 2:31 PM
To: sqlalchemy@googlegroups.com
Subject: RE: [sqlalchemy] Oracle, ownership and unrecognized type warnings

Jeff Peterson wrote:
 its true that there's no built in functionality to reflect views.  In 
 the case of your views, just create Table instances manually, 
 specifying the
 view names, column names, and column types explicitly.   Set the
 primary_key=True flag on those columns which you'd like to consider as
 part of the primary key within your application.   No database change is
 necessary.


 So, does doing this actually create a new table in the DB?

no.   SQLAlchemy never creates tables unless you tell it to.  By "create",
I meant sometable = Table(...), i.e., you are creating a Python object in 
your application which represents the structure of a table or view that
is present in the database.   *Not* calling table.create().
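
To make that concrete: describing a view with a hand-built Table emits no
DDL at all, and primary_key=True is purely an application-level designation.
A sketch against in-memory SQLite with hypothetical names:

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, select)

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.exec_driver_sql(
        "CREATE TABLE parts (id INTEGER PRIMARY KEY, name TEXT)")
    conn.exec_driver_sql("INSERT INTO parts VALUES (1, 'blade')")
    conn.exec_driver_sql("CREATE VIEW parts_v AS SELECT id, name FROM parts")

metadata = MetaData()
# A Python-side description of the view: nothing is created in the DB,
# and the primary_key flag only tells SQLAlchemy how to identify rows.
parts_v = Table("parts_v", metadata,
                Column("id", Integer, primary_key=True),
                Column("name", String))

with engine.connect() as conn:
    row = conn.execute(select(parts_v)).one()
```

As long as create_all() is never invoked on this metadata (the megrok.rdb
problem above), the view is queried like any table.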






RE: [sqlalchemy] Oracle, ownership and unrecognized type warnings

2010-01-11 Thread Jeff Peterson
OK, it is definitely megrok.rdb; the last thing it does is call 
metadata.create_all(), so I will email the megrok folks.  Thanks a bunch.

--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.


-Original Message-
From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On 
Behalf Of Jeff Peterson
Sent: Monday, January 11, 2010 3:14 PM
To: sqlalchemy@googlegroups.com
Subject: RE: [sqlalchemy] Oracle, ownership and unrecognized type warnings

Hmmm, must be something megrok.rdb is doing I am not seeing, I will look into 
it.

--
Jeffrey D Peterson
Webmaster
Crary Industries, Inc.


-Original Message-
From: sqlalchemy@googlegroups.com [mailto:sqlalch...@googlegroups.com] On 
Behalf Of Michael Bayer
Sent: Monday, January 11, 2010 2:31 PM
To: sqlalchemy@googlegroups.com
Subject: RE: [sqlalchemy] Oracle, ownership and unrecognized type warnings

Jeff Peterson wrote:
 its true that there's no built in functionality to reflect views.  In 
 the case of your views, just create Table instances manually, 
 specifying the
 view names, column names, and column types explicitly.   Set the
 primary_key=True flag on those columns which you'd like to consider as
 part of the primary key within your application.   No database change is
 necessary.


 So, does doing this actually create a new table in the DB?

no.   SQLAlchemy never creates tables unless you tell it to.  By "create",
I meant sometable = Table(...), i.e., you are creating a Python object in 
your application which represents the structure of a table or view that
is present in the database.   *Not* calling table.create().






[sqlalchemy] Getting a join in one object

2009-11-21 Thread Jeff Cook
Hi all.

I want to get a join in one object. I have ret = db.query(a,
b).select_from(orm.join(a, b, a.a == b.a)).all() and that returns a
tuple with an a object in [0] and a b object in [1]. I want to have SA
return the object directly, and I want that object to include the
joined fields, so that I can access everything with just
ret.anything_i_want.

Thanks in advance for the help.
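
The usual ORM answer here is to declare a relationship between the two
classes and eager-load it, so the query returns a single entity whose joined
columns hang off an attribute (obj.b.whatever) rather than a tuple. A sketch
with made-up models, using the modern declarative API (SQLAlchemy 1.4+),
not the 0.5-era syntax of this thread:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import (Session, declarative_base, joinedload,
                            relationship)

Base = declarative_base()

class B(Base):
    __tablename__ = "b"
    a = Column(Integer, primary_key=True)
    extra = Column(String)

class A(Base):
    __tablename__ = "a"
    a = Column(Integer, ForeignKey("b.a"), primary_key=True)
    b = relationship(B)  # joins a.a == b.a via the foreign key

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([B(a=1, extra="joined value"), A(a=1)])
    session.commit()

with Session(engine) as session:
    # One entity comes back; the joined row is loaded in the same query.
    obj = session.query(A).options(joinedload(A.b)).one()
    val = obj.b.extra
```

The related columns are then reachable from the single returned object,
as one level of attribute access rather than a flat namespace.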





[sqlalchemy] Re: Saved Queries (or text representation of the SQL)

2009-10-23 Thread jeff

I would like to save a number of these in a database, so size is
important (the serialized select() was somewhat large...), so I would
like to get the string representation of the raw SQL directly useable
by sqlalchemy if possible.  As I have in my examples, str(select)
doesn't seem directly useable as it is missing the parameters:
upper(host_status.site) = %(upper_1)s instead of
upper(host_status.site) = 'LCO', for example.  Is there a way to get the
raw SQL text just as it is sent to the database and ready for reuse by
sqlalchemy (execute(text(SQLtext)))?  Or do I have to construct my
own by doing string replaces on the parameters with the parameters as
found in .context?  Seems like the raw SQL has to be available at some
point, but I don't know if the user has access to it.

Thanks!
Jeff

On Oct 22, 7:06 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Oct 22, 2009, at 6:33 PM, jeff wrote:



  I would like to allow user's to save favorite queries in my Python
  app.  Is there a way to find out the SQL statement as a string that
  can be then reused (e.g. Engine.execute(text(savedQueryText) )  )?
  Or is there another solution to this need?

 your best bet would be to serialize them using the serializer extension.

 http://www.sqlalchemy.org/docs/05/reference/ext/serializer.html?highl...

 the examples are ORM centric but you can use select() constructs too -  
 the session argument is optional.

 otherwise if you just want the raw SQL string, just call
 str(statement).  the string SQL is less wieldy being sent back into
 SQLAlchemy though.



  I generate a select object to execute.  The string representations
  don't appear in a format that can be readily reused (or are missing
  the parameter values):
  str(query)
  'SELECT host_status.host_name, host_status.version,
  host_status.host_id, host_status.sys_uptime, host_status.host_uptime,
  host_status.load_avg, count(%(param_1)s) AS CMs,
  host_status.db_size, host_status.db_status, host_status.update_time
  \nFROM host_status, cm_status \nWHERE upper(host_status.site) = %
  (upper_1)s AND host_status.host_name = cm_status.host_name GROUP BY
  host_status.host_name, host_status.version, host_status.host_id,
  host_status.sys_uptime, host_status.host_uptime, host_status.load_avg,
  host_status.db_size, host_status.db_status, host_status.update_time
  ORDER BY host_status.host_name ASC'

  str(self.execute(query))
  2009-10-22 16:19:42,642 INFO sqlalchemy.engine.base.Engine.0x...67b0
  SELECT host_status.host_name, host_status.version,
  host_status.host_id, host_status.sys_uptime, host_status.host_uptime,
  host_status.load_avg, count(%(param_1)s) AS CMs,
  host_status.db_size, host_status.db_status, host_status.update_time
  FROM host_status, cm_status
  WHERE upper(host_status.site) = %(upper_1)s AND host_status.host_name
  = cm_status.host_name GROUP BY host_status.host_name,
  host_status.version, host_status.host_id, host_status.sys_uptime,
  host_status.host_uptime, host_status.load_avg, host_status.db_size,
  host_status.db_status, host_status.update_time ORDER BY
  host_status.host_name ASC
  2009-10-22 16:19:42,642 INFO sqlalchemy.engine.base.Engine.0x...67b0
  {'param_1': 'cm_status.host_name', 'upper_1': 'LCO'}
  'sqlalchemy.engine.base.ResultProxy object at 0x01D33F90'

  Thanks!
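
For the specific want here — the SQL string with parameter values inlined —
later SQLAlchemy versions (0.9+) can compile a statement with literal_binds,
which renders bound values into the text. A hedged sketch with a cut-down
host_status table; this only works for simple literal types and is not
something to feed untrusted input through:

```python
from sqlalchemy import Column, MetaData, String, Table, func, select

metadata = MetaData()
host_status = Table("host_status", metadata,
                    Column("host_name", String),
                    Column("site", String))

stmt = select(host_status.c.host_name).where(
    func.upper(host_status.c.site) == "LCO")

# literal_binds inlines the parameter values instead of emitting
# placeholders such as %(upper_1)s
sql = str(stmt.compile(compile_kwargs={"literal_binds": True}))
```

The resulting string contains upper(host_status.site) = 'LCO' and can be
stored and later re-executed via text().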



[sqlalchemy] Re: invalid transaction rollback etc and making a join join

2009-10-19 Thread Jeff Cook

I removed SQLSoup and the caching/stale results issues are gone now.
Have to wait a little longer to see if there is any bearing on the
MySQL has gone away/invalid transaction thing.

Changing the default beaker timeout to something small, like 15 secs,
doesn't seem to fix things.

Thanks for all the help everyone.

On Sat, Oct 17, 2009 at 8:08 PM, cd34 mcd...@gmail.com wrote:

 On Oct 17, 6:49 pm, Jeff Cook cookieca...@gmail.com wrote:
 Unfortunately, from a support-seeker point of view, such rhetoric is
 often necessary. How many times have you written a mailing list or hit

 sqlalchemy.pool_recycle = 10

 I had odd issues -- even though mysql was set to

 | interactive_timeout     | 60    |
 | net_read_timeout        | 30    |
 | net_write_timeout       | 60    |
 | slave_net_timeout       | 3600  |
 | table_lock_wait_timeout | 50    |
 | wait_timeout            | 60    |

 A request that was valid and executed once before had problems when
 pool_recycle was = 30 with pylons.

 In project/model/meta.py

 I set

 Session = scoped_session(sessionmaker(autocommit=True))

 which eliminated the situation where the transaction error required
 restarting paster.  Setting this is a bad thing, but then you can see
 if the issue is within the sql and your translation to sqlalchemy, or,
 whether it is something external.  This prevents the inevitable pylons
 'hangup' when you get the sqlalchemy error reported in pylons and have
 to restart the server to continue debugging.

 As for caching, you might doublecheck your beaker settings to turn it
 off while testing.  I haven't seen pylons excessively cache with their
 default site template.
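
For the "MySQL server has gone away" class of errors, the engine-level knobs
are pool_recycle (discard pooled connections older than N seconds) and, in
SQLAlchemy 1.2+, pool_pre_ping (test each connection before handing it out).
A sketch — the SQLite URL is only so the example is runnable; the settings
matter for MySQL, where pool_recycle should sit below wait_timeout:

```python
from sqlalchemy import create_engine

# pool_recycle well below MySQL's wait_timeout; pre_ping catches
# connections the server has already closed.
engine = create_engine("sqlite://", pool_recycle=1800, pool_pre_ping=True)

with engine.connect() as conn:
    ok = conn.exec_driver_sql("SELECT 1").scalar()
```

With both set, a stale connection is replaced transparently instead of
surfacing as an error mid-request.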
 





[sqlalchemy] Re: invalid transaction rollback etc and making a join join

2009-10-17 Thread Jeff Cook

Unfortunately, from a support-seeker point of view, such rhetoric is
often necessary. How many times have you written a mailing list or hit
up an IRC channel and had everyone write you off as incompetent just
by default when they can't figure something out? It's important to
establish that I _do_ know what's going on generally to avoid that
kind of crap. It happens almost every time the instant solutions don't
work.

It seemed to me that guy was trying to imply that it's nothing
involving SQLAlchemy when it obviously is. I obviously believe that
SQLAlchemy is worth learning and that coming here could help correct
any latent problems in my code, so please don't be offended; I didn't
mean it in a bad or offensive way.
Thanks for your response, though.

On Fri, Oct 16, 2009 at 5:57 PM, empty mtr...@gmail.com wrote:

 On Oct 15, 11:50 pm, Jeff Cook cookieca...@gmail.com wrote:

 So, SQLAlchemy is doing something here. There probably is some
 incorrect code in my program, which is why I am writing this list, to
 figure out what that is. I'm an experienced developer and I don't
 appreciate your disrespect. I can follow tutorials and documentation
 and I did so here, and it's always ended up with lots of errors. Your
 docs need work. I'm sorry if this idea offends you. : ( I still like
 you and I still like SQLAlchemy. Let's cultivate an environment of
 mutual professional respect here. : )


 I'm sorry but I have to jump in here. This sort of talk is completely
 inappropriate. Mike Bayer spends day and night work on SQLAlchemy, and
 as far as SQL toolkits in the Python world, and most other languages
 for that matter, it is the best of the best. In addition to coding on
 it he spends significant time and energy answering almost every
 question on this list with thoughtful and helpful responses. This sort
 of level of support is just completely unheard of. He does it for you,
 me and everyone else that benefits from SQLAlchemy every day. Even in
 the face of being ridiculed he continued to help you think through
 things, because that's who Mike is.

 Regarding the docs, they are incredible. The problem often with the
 docs is not that they aren't any good; it is that a lot of people don't
 have the terminology framework to understand this stuff. It takes time
 because there's so much there, but often folks are not willing to
 spend that time. That's not a deficiency in the documentation, but in
 people hoping to get a quick fix.

 I understand being frustrated; it happens to us all. It's especially
 frustrating when you are expected to embrace new technologies and be
 up to speed and productive in no time.  It's just a fact in our
 industry. But do remember that the people on this list, those writing
 the docs, those slugging through test cases, and those on IRC day and
 night are doing it because they love technology, they love helping
 folks, and they're committed to creating something great. I wish that
 all open source projects were led by a Mike Bayer.

 Michael
 


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: invalid transaction rollback etc and making a join join

2009-10-16 Thread Jeff Cook

Thanks for the help. I wasn't assigning the session to SQLSoup's
session, but I am now, and I'm still getting stale data and I don't
know yet if I'll still be getting the invalid transaction/MySQL has
gone away thing. I think I'll try just taking SQLSoup out of the thing
all together and try SQLAlchemy's reflection. I'll let you know if it
fixes things. Thanks.




[sqlalchemy] invalid transaction rollback etc and making a join join

2009-10-15 Thread Jeff Cook

Dear All People:

I'm using SQLSoup and getting errors like  InvalidRequestError: Can't
reconnect until invalid transaction is rolled back and MySQL Server
has gone away. I have set autoflush=True and this has helped mitigate
some errors, but not these, they happen with varying frequency when I
haven't restarted the servers for a while. I am almost never using
db.flush(). Will that fix this? How can I make this things work? I
really want them to work and I hate MySQL but am stuck using it for
now.

SQLAlchemy thus far has added several hours to development time, which
I find sad. I have this block of code:
#this has consumed too much time
#will come back and fix but for now we have to give up on it
#ret = Session.query(sets).select_from(orm.outerjoin((slices,
slices.asset_slice_asset_id == id), (stores, stores.asset_store_id ==
slices.asset_slice_store_scheme_id))).filter(id == sets.asset_id).all
()
#how I hate sqlalchemy right now
#the raw query took under five minutes
#the other thing took more than hour and still doesn't work
ret = db.bind.execute("""SELECT * FROM assets a
LEFT OUTER JOIN asset_slices `as` ON `as`.asset_slice_asset_id =
a.asset_id
LEFT OUTER JOIN asset_storage_schemes `astor` ON
astor.asset_store_id = `as`.asset_slice_store_scheme_id
WHERE a.asset_id = {bid}
ORDER BY asset_slice_row ASC""".format(bid = id))

So I need that actual query to be executable via SQLAlchemy,
preferably with SQLSoup. It doesn't work right now.

All help is loved and appreciated.

Signed
Jeff
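
[Editor's note: for reference, a join like the raw SQL above can be expressed with SQLAlchemy Core roughly as follows. This is a sketch only — the table and column names are taken from the post, the rest of the schema is invented, and the calling style is the 1.4+/2.0 one rather than the 0.5-era API the thread uses.]

```python
from sqlalchemy import (Column, Integer, MetaData, Table,
                        create_engine, select)

metadata = MetaData()

# Minimal stand-ins for the tables named in the post; the real schema
# is not shown there, so these columns are invented for illustration.
assets = Table(
    "assets", metadata,
    Column("asset_id", Integer, primary_key=True),
)
asset_slices = Table(
    "asset_slices", metadata,
    Column("asset_slice_id", Integer, primary_key=True),
    Column("asset_slice_asset_id", Integer),
    Column("asset_slice_store_scheme_id", Integer),
    Column("asset_slice_row", Integer),
)
asset_storage_schemes = Table(
    "asset_storage_schemes", metadata,
    Column("asset_store_id", Integer, primary_key=True),
)

# Two chained LEFT OUTER JOINs, mirroring the raw SQL in the post.
joined = assets.outerjoin(
    asset_slices,
    asset_slices.c.asset_slice_asset_id == assets.c.asset_id,
).outerjoin(
    asset_storage_schemes,
    asset_storage_schemes.c.asset_store_id
    == asset_slices.c.asset_slice_store_scheme_id,
)

stmt = (
    select(assets, asset_slices, asset_storage_schemes)
    .select_from(joined)
    .where(assets.c.asset_id == 1)
    .order_by(asset_slices.c.asset_slice_row)
)

engine = create_engine("sqlite://")
metadata.create_all(engine)
with engine.connect() as conn:
    rows = conn.execute(stmt).fetchall()
```

The bind parameter is handled by the expression itself, so no string formatting of the id into the SQL is needed.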



[sqlalchemy] Re: invalid transaction rollback etc and making a join join

2009-10-15 Thread Jeff Cook

It's not faster. I use pyscopg2 directly on some other projects and
it's definitely a lot faster to just be able to write the query I want
than to try to think of how to convert it to SQLAlchemy's contexts and
functions. Maybe it's just learning curve thing, but as shown, I can't
get that join to actually work, and I was trying different things for
hours; it gives me silly errors of every which-a-kind. If I can just
write something as simple as that in under five minutes, why does it
take so long for SQLAlchemy to let me do it? There's definitely a
problem somewhere in there.

File 
'/home/jeff/vspy/lib/python2.6/site-packages/WebError-0.10.1-py2.6.egg/weberror/evalexception.py',
line 431 in respond
  app_iter = self.application(environ, detect_start_response)
File 
'/home/jeff/vspy/lib/python2.6/site-packages/Beaker-1.4.1-py2.6.egg/beaker/middleware.py',
line 73 in __call__
  return self.app(environ, start_response)
File 
'/home/jeff/vspy/lib/python2.6/site-packages/Beaker-1.4.1-py2.6.egg/beaker/middleware.py',
line 152 in __call__
  return self.wrap_app(environ, session_start_response)
File 
'/home/jeff/vspy/lib/python2.6/site-packages/Routes-1.10.3-py2.6.egg/routes/middleware.py',
line 130 in __call__
  response = self.app(environ, start_response)
File 
'/home/jeff/vspy/lib/python2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/wsgiapp.py',
line 125 in __call__
  response = self.dispatch(controller, environ, start_response)
File 
'/home/jeff/vspy/lib/python2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/wsgiapp.py',
line 324 in dispatch
  return controller(environ, start_response)
File '/home/jeff/projecs/projecs/lib/base.py', line 18 in __call__
  return WSGIController.__call__(self, environ, start_response)
File 
'/home/jeff/vspy/lib/python2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/controllers/core.py',
line 221 in __call__
  response = self._dispatch_call()
File 
'/home/jeff/vspy/lib/python2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/controllers/core.py',
line 172 in _dispatch_call
  response = self._inspect_call(func)
File 
'/home/jeff/vspy/lib/python2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/controllers/core.py',
line 107 in _inspect_call
  result = self._perform_call(func, args)
File 
'/home/jeff/vspy/lib/python2.6/site-packages/Pylons-0.9.7-py2.6.egg/pylons/controllers/core.py',
line 60 in _perform_call
  return func(**args)
File '/home/jeff/projecs/projecs/controllers/viewer.py', line 18 in index
  c.assets = assets.list_assets()
File '/home/jeff/projecs/projecs/model/assets.py', line 7 in list_assets
  return db.join(db.assets, db.asset_storage_schemes,
db.assets.asset_storage_scheme_id==db.asset_storage_schemes.asset_store_id).order_by(db.assets.asset_id.desc()).all()
File 
'/home/jeff/vspy/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/orm/query.py',
line 1267 in all
  return list(self)
File 
'/home/jeff/vspy/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/orm/query.py',
line 1361 in __iter__
  return self._execute_and_instances(context)
File 
'/home/jeff/vspy/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/orm/query.py',
line 1364 in _execute_and_instances
  result = self.session.execute(querycontext.statement,
params=self._params, mapper=self._mapper_zero_or_none())
File 
'/home/jeff/vspy/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/orm/session.py',
line 755 in execute
  clause, params or {})
File 
'/home/jeff/vspy/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/engine/base.py',
line 824 in execute
  return Connection.executors[c](self, object, multiparams, params)
File 
'/home/jeff/vspy/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/engine/base.py',
line 872 in _execute_clauseelement
  parameters=params
File 
'/home/jeff/vspy/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/engine/base.py',
line 938 in __create_execution_context
  return dialect.execution_ctx_cls(dialect, connection=self, **kwargs)
File 
'/home/jeff/vspy/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/engine/default.py',
line 170 in __init__
  self.cursor = self.create_cursor()
File 
'/home/jeff/vspy/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/engine/default.py',
line 258 in create_cursor
  return self._connection.connection.cursor()
File 
'/home/jeff/vspy/lib/python2.6/site-packages/SQLAlchemy-0.5.6-py2.6.egg/sqlalchemy/engine/base.py',
line 576 in connection
  raise exc.InvalidRequestError("Can't reconnect until invalid
transaction is rolled back")
InvalidRequestError: Can't reconnect until invalid transaction is rolled back

I don't fully understand what you're talking about. I have this error
and I need to make it stop. I just want SQLAlchemy to connect, run the
query I instructed, and give me the results back and do this reliably
without necessitating consistent server restarts. Thus far, it's been
a serious pain managing connection errors

[sqlalchemy] Re: invalid transaction rollback etc and making a join join

2009-10-15 Thread Jeff Cook

I see. So Pylons should handle this by default, but it's not doing so?
That's highly disappointing. Clearly, something is quite incorrect
here. Is my usage of SQLSoup causing rollback not to run?

On Thu, Oct 15, 2009 at 1:16 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 Jeff Cook wrote:
 I don't fully understand what you're talking about. I have this error
 and I need to make it stop. I just want SQLAlchemy to connect, run the
 query I instructed, and give me the results back and do this reliably
 without necessitating consistent server restarts. Thus far, it's been
 a serious pain managing connection errors and so on. SQLSoup may
 complicate this because they have no mention anywhere in their docs
 explaining the necessity to close your connections, and all methods I
 tried (explicit session.close()s at the end of each query, explicit
 db.close()s, and now autoflush=True) to make sure that the resources
 are being returned correctly to the pooler have failed and caused
 other blow-up problem attacks.

 none of the statements regarding SQLA in that paragraph are accurate.
 close() is not needed, autoflush=True is the default setting (did you mean
 autocommit?  that's a feature better left off), SQLAlchemy always returns
 resources to their original pooled state when a transaction is not in
 progress.

 What is necessary, however, is that you must call rollback() when an
 exception is raised.   I see you're using Pylons, the default Pylons
 template establishes this pattern within the BaseController.

 unfortunately there is no feature within SQLAlchemy that can fix your
 issue.  Your application needs to get a handle on transaction failures.  A
 transaction is only invalid if an error were already raised in a
 previous operation within the same transaction, and you haven't attached
 any stack trace for that.
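
[Editor's note: the rollback-on-exception pattern described above looks roughly like this — a minimal sketch, not the actual Pylons BaseController code:]

```python
from sqlalchemy import create_engine, text
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")
session = sessionmaker(bind=engine)()

try:
    # This statement fails: the table does not exist.
    session.execute(text("SELECT * FROM no_such_table"))
except SQLAlchemyError:
    # Without this rollback the session stays in a failed transaction,
    # and later use raises "Can't reconnect until invalid transaction
    # is rolled back".
    session.rollback()

# After the rollback the session is usable again.
value = session.execute(text("SELECT 1")).scalar()
```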



 





[sqlalchemy] Re: invalid transaction rollback etc and making a join join

2009-10-15 Thread Jeff Cook

And when things _do_ work, there are serious caching problems.
Sometimes it gives me the transaction rollback error, sometimes it
gives me an old version of the page, and sometimes it gives me a
current version of the page. I assume this has something to do with
what connection is getting used. How can I remove these problems?
Would pool_recycle be of any use?

On Thu, Oct 15, 2009 at 1:27 PM, Jeff Cook cookieca...@gmail.com wrote:
 I see. So Pylons should handle this by default, but it's not doing so?
 That's highly disappointing. Clearly, something is quite incorrect
 here. Is my usage of SQLSoup causing rollback not to run?

 On Thu, Oct 15, 2009 at 1:16 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:

 Jeff Cook wrote:
 I don't fully understand what you're talking about. I have this error
 and I need to make it stop. I just want SQLAlchemy to connect, run the
 query I instructed, and give me the results back and do this reliably
 without necessitating consistent server restarts. Thus far, it's been
 a serious pain managing connection errors and so on. SQLSoup may
 complicate this because they have no mention anywhere in their docs
 explaining the necessity to close your connections, and all methods I
 tried (explicit session.close()s at the end of each query, explicit
 db.close()s, and now autoflush=True) to make sure that the resources
 are being returned correctly to the pooler have failed and caused
 other blow-up problem attacks.

 none of the statements regarding SQLA in that paragraph are accurate.
 close() is not needed, autoflush=True is the default setting (did you mean
 autocommit?  that's a feature better left off), SQLAlchemy always returns
 resources to their original pooled state when a transaction is not in
 progress.

 What is necessary, however, is that you must call rollback() when an
 exception is raised.   I see you're using Pylons, the default Pylons
 template establishes this pattern within the BaseController.

 unfortunately there is no feature within SQLAlchemy that can fix your
 issue.  Your application needs to get a handle on transaction failures.  A
 transaction is only invalid if an error were already raised in a
 previous operation within the same transaction, and you haven't attached
 any stack trace for that.



 






[sqlalchemy] Re: invalid transaction rollback etc and making a join join

2009-10-15 Thread Jeff Cook

OK, man, well, I have one function that has the calls to perform the
listing. I refresh and sometimes get old data and sometimes don't.
There is no clustering and no other databases, there is no possibility
that the server is retrieving old data. I haven't changed the base
Pylons classes at all, so, according to you, I shouldn't be getting
that error at all. But I am. How would you advise me to continue?
Scorch earth and start over? That's pretty much the same; there's not
much SQLAlchemy code in use right now, it's only a few queries ... yet
we're still having these issues.

So, SQLAlchemy is doing something here. There probably is some
incorrect code in my program, which is why I am writing this list, to
figure out what that is. I'm an experienced developer and I don't
appreciate your disrespect. I can follow tutorials and documentation
and I did so here, and it's always ended up with lots of errors. Your
docs need work. I'm sorry if this idea offends you. : ( I still like
you and I still like SQLAlchemy. Let's cultivate an environment of
mutual professional respect here. : )

Love
Jeff

On Thu, Oct 15, 2009 at 6:13 PM, Michael Bayer mike...@zzzcomputing.com wrote:


 On Oct 15, 2009, at 3:32 PM, Jeff Cook wrote:


 And when things _do_ work, there are serious caching problems.
 Sometimes it gives me the transaction rollback error, sometimes it
 gives me an old version of the page, and sometimes it gives me a
 current version of the page. I assume this has something to do with
 what connection is getting used. How can I remove these problems?
 Would pool_recycle be of any use?

 there's no flag that is going to make your program work.  your app is
 buggy, plain and simple.   You need to do a code review, ensure no
 exceptions are being squashed, do some ab testing, and watch your
 logs.     its likely theres just one activity in your app, maybe two,
 screwing everything up.  the longer you just try to guess the cause
 and try random things the more confused things will seem.    SQLSoup
 doesn't have anything to do with connection pools or transactions, its
 basically a thin layer over creating mappers and Query objects.   It
 uses the same Session as everyone else, using the threaded
 ScopedSession to keep things local.   And a default setup of Pylons
 does catch errors and roll the session back.  There's also little to
 no caching used by the session and certainly not across requests
 unless you've worked very hard to store everything in global variables
 across requests.






 On Thu, Oct 15, 2009 at 1:27 PM, Jeff Cook cookieca...@gmail.com
 wrote:
 I see. So Pylons should handle this by default, but it's not doing
 so?
 That's highly disappointing. Clearly, something is quite incorrect
 here. Is my usage of SQLSoup causing rollback not to run?

 On Thu, Oct 15, 2009 at 1:16 PM, Michael Bayer mike...@zzzcomputing.com
  wrote:

 Jeff Cook wrote:
 I don't fully understand what you're talking about. I have this
 error
 and I need to make it stop. I just want SQLAlchemy to connect,
 run the
 query I instructed, and give me the results back and do this
 reliably
 without necessitating consistent server restarts. Thus far, it's
 been
 a serious pain managing connection errors and so on. SQLSoup may
 complicate this because they have no mention anywhere in their docs
 explaining the necessity to close your connections, and all
 methods I
 tried (explicit session.close()s at the end of each query, explicit
 db.close()s, and now autoflush=True) to make sure that the
 resources
 are being returned correctly to the pooler have failed and caused
 other blow-up problem attacks.

 none of the statements regarding SQLA in that paragraph are
 accurate.
 close() is not needed, autoflush=True is the default setting (did
 you mean
 autocommit?  that's a feature better left off), SQLAlchemy always
 returns
 resources to their original pooled state when a transaction is not
 in
 progress.

 What is necessary, however, is that you must call rollback() when an
 exception is raised.   I see you're using Pylons, the default Pylons
 template establishes this pattern within the BaseController.

 unfortunately there is no feature within SQLAlchemy that can fix
 your
 issue.  Your application needs to get a handle on transaction
 failures.  A
 transaction is only invalid if an error were already raised in a
 previous operation within the same transaction, and you haven't
 attached
 any stack trace for that.







 


 





[sqlalchemy] Re: query().all() OK, query().delete() can't locate bind

2009-09-16 Thread Jeff Enderwick
So is this likely a sqlalchemy bug? Should I just dig in and look into why
the delete is not finding the bind where the select is?
FWIW, here's a ref to delete+where:
http://dev.mysql.com/doc/refman/5.0/en/delete.html

2009/9/16 Orca trueo...@gmail.com


 You have to use Session.execute with an explicit mapper defined; for some
 reason session.query(...).delete()/update() does not detect any bindables
 in the statement as it does for select queries.
 if your TreeNode Table defined as Tree_Node_Table:

 Session.execute(Tree_Node_Table.delete(Tree_Node_Table.c.guid.in_
 (deadNodeGuids)), mapper = TreeNode)

 Not sure about 'where'-clause for 'delete' method.


 On 15 сен, 21:41, jeff.enderw...@gmail.com
 jeff.enderw...@gmail.com wrote:
  I'm trying to delete in bulk using query().  query() seems to work
  fine:
 
  (Pdb) Session.query(TreeNode).filter(TreeNode.guid.in_
  (deadNodeGuids)).all()
  [lajolla.main.tree.TreeNode object at 0x81c82c8c,
  lajolla.main.tree.TreeNode object at 0x81c8220c]
 
  But delete() is not happy:
 
  (Pdb) Session.query(TreeNode).filter(TreeNode.guid.in_
  (deadNodeGuids)).delete()
  *** UnboundExecutionError: Could not locate a bind configured on SQL
  expression or this Session
 
  I'm using 0.55, 'binds' (more than one engine), and scoped_session.
  Any idea why the binds would 'stick' for the read query but not the
  delete? Here's the binds:
 
  Session.configure(binds={User:engineAccounts, TreeRoot:engineTrees,
  TreeNode:engineTrees, Content:engineTrees})
 
  TIA,
  Jeff
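
[Editor's note: for readers hitting this with a modern SQLAlchemy, bulk delete through Query does work, at least in a single-engine setup. A sketch with an invented schema follows; whether the multi-bind configuration from the post behaves differently is not verified here.]

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class TreeNode(Base):
    __tablename__ = "tree_nodes"
    id = Column(Integer, primary_key=True)
    guid = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add_all([TreeNode(guid=g) for g in ("a", "b", "c")])
session.commit()

dead_node_guids = ["a", "c"]
# Emits a single DELETE ... WHERE guid IN (...); synchronize_session=False
# skips reconciling the in-memory identity map with the deleted rows.
deleted = (
    session.query(TreeNode)
    .filter(TreeNode.guid.in_(dead_node_guids))
    .delete(synchronize_session=False)
)
session.commit()
remaining = session.query(TreeNode).count()
```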

 





[sqlalchemy] Re: TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection timed out, timeout 30

2009-05-24 Thread Jeff FW

You need to make sure that you are closing your sessions--otherwise,
they keep your connections open, and are never returned to the pool.
Make sure to read up on sessions here:
http://www.sqlalchemy.org/docs/05/session.html

Also, read up on logging:
http://www.sqlalchemy.org/docs/05/dbengine.html#configuring-logging

It's nice to be able to log your pool status at some points, so that
you can see how many connections are open.

-Jeff

On May 22, 4:41 am, Marcin Krol mrk...@gmail.com wrote:
 Hello everyone,

 After some time of operation (a day or so), from time to time I'm
 getting this error:

 TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection
 timed out, timeout 30

 I have to stress that not much activity is taking place on that OS as it
 is development installation.

 Does anybody know what could be causing this?

 Restarting apache or postgres eliminates that problem, but then it
 reappears.

 I'm getting sick of this. Does anybody know what could be the root
 cause? How to fix this?

 My app uses mod_python / SQLA 5.3.

 The backend is Postgres 8.1 on RH 5.3.

 Regards,
 mk
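
[Editor's note: the leak described above is visible directly on the pool. A sketch — QueuePool is forced explicitly so it also works with in-memory SQLite, and the sizes mirror the limits in the error message:]

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import QueuePool

engine = create_engine("sqlite://", poolclass=QueuePool,
                       pool_size=5, max_overflow=10)
session = sessionmaker(bind=engine)()

session.execute(text("SELECT 1"))   # checks a connection out of the pool
held = engine.pool.checkedout()     # the session is still holding it

session.close()                     # returns the connection to the pool
released = engine.pool.checkedout()
```

A session that is never closed (or rolled back/committed) keeps its connection checked out; enough of those and the pool times out exactly as in the error above.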



[sqlalchemy] Re: django middleware or signals for sqlalchemy Session

2009-04-24 Thread Jeff FW

Definitely go with middleware--it's very clean and simple.  Also, make
sure to use sqlalchemy.orm.scoped_session()--it makes using sessions
in Django pretty much transparent; any time you need to work with a
session, you call Session(), and it either uses your current one, or
creates a new one if necessary.

Coincidentally, I actually wrote in that first thread you linked
to. :-)

-Jeff

On Apr 23, 5:30 pm, Michael Trier mtr...@gmail.com wrote:
 Hi,

 On Thu, Apr 23, 2009 at 4:05 PM, davidlmontgomery 



 davidlmontgom...@gmail.com wrote:

  I would like to know if there is a consensus on the best
  way to set up and remove sqlalchemy Sessions in django.

  I figure I'm either going to use middleware, something like this
  thread:

 http://groups.google.com/group/django-users/browse_thread/thread/e674...

  or I'm going to use signals, something like this post:

 http://translate.google.com/translate?hl=ensl=jau=http://d.hatena.n...

  Any clear advantages or disadvantages for the two approaches?

 Personally I find the Middleware approach cleaner and allows you to handle
 exceptions with rollbacks as indicated in the django-users thread.  There
 was a project Tranquil (http://code.google.com/p/tranquil/) that expanded on
 this idea to inject more stuff into the request automatically. It's pretty
 much dead at this point from what I understand.
 Additionally I'll point out that I have a  project called Django-SQLAlchemy
 (http://gitorious.org/projects/django-sqlalchemy) that has the aim of
 automatically making SQLAlchemy accessible through Django applications.

 Good luck.

 --
 Michael Trier
 http://michaeltrier.com/
 http://thisweekindjango.com/
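
[Editor's note: the middleware approach recommended above can be sketched framework-agnostically like this — the middleware shape is invented for illustration, only the scoped_session usage is the point:]

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")
Session = scoped_session(sessionmaker(bind=engine))

def session_middleware(handle_request):
    """Django-style middleware sketch: tear the session down per request."""
    def wrapped(request):
        try:
            return handle_request(request)
        finally:
            # Closes the thread-local session and discards it from the
            # registry, so the next request starts with a fresh one.
            Session.remove()
    return wrapped

# Inside one request, repeated Session() calls return the same session.
first, again = Session(), Session()
Session.remove()
fresh = Session()
```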



[sqlalchemy] Re: Using sqlalchemy in twisted.

2009-03-16 Thread Jeff FW

Pedro,

I don't really have much of anything special as far as the connection
goes--SQLAlchemy already does a pretty awesome job of handling all
that for you.  I just keep a module-level variable that I can import
as needed, and call a function in that module to set up the connection
with settings from a config file.

As far as the models/mapping goes, I have all of the tables, classes,
and mapping between then defined in one module, and I can just import
from there whenever needed.  Nothing about sessions/connections in
there at all--that way there's no need to worry about them.  It's
really a pretty ordinary set-up.

Unfortunately, all this code is for work, so I can't share it.  I'll
happily answer your questions, though.

-Jeff

On Mar 15, 2:42 pm, Pedro Algarvio, aka, s0undt3ch u...@ufsoft.org
wrote:
 On Mar 11, 2:13 pm, 一首诗 newpt...@gmail.com wrote:



  Hi Jeff,

  In my project I use the *model* to indicate an instance of Database.
  I don't really need multiple instances of Database.  But when I wrote
  unit test, I always create an new one database in setup, which create
  a new sqlite in memory database to avoid conflicts between test cases.

  About the trick to make *sess* a keywork parameter,  that's really
  clever!
  Thanks a lot!

  On Mar 11, 9:05 pm, Jeff FW jeff...@gmail.com wrote:

   Logging SA objects *after* the session is gone will always be a
   problem, unless you make sure to detach all of them from the session.
   I'd just log the original request, instead.  In my case, I have to
   convert all of my SA objects to something Perspective Broker
   understands, so I actually log those *after* that, as they're no
   longer part of the session--but I'm not sure if you can do that in
   your case.

   As for the decorator--I got a little confused with your names--you
   call it model in your decorator, but it's really an instance of
   Database when it gets passed in as self.  One way to get rid of that
   parameter would be to make sess a keyword argument, like so:

   def wrapper(*args, **kw):
   sess = model.Session()
   try:
   return f(sess=sess, *args, **kw)

   and then change your method:

   def _getObjectById(self, klass, id, sess=None):
   return sess.query(klass).get(id)

   That way, self will get passed in *args with no problem.

   Are you planning to have multiple instances of your Database class?
   If not, I'd suggest changing everything in it into class methods, so
   that way you can call it *without* an instance at all, and don't have
   to worry about connecting to the database multiple times by accident.
   Just a thought.

   -Jeff

   On Mar 10, 10:38 am, 一首诗 newpt...@gmail.com wrote:

Hi Jeff,

Thanks for your kind suggestion.

I first add some log decorators, but i found when it might cause to
print sqalchemy objects which has not been bound to any session.

And I am not quite sure about how to make the decorator mor genreal.

Actually, I think I must use model as the first parameter because as a
instance method, _getObjectById require the first parameter to be
self.
Can you write a few lines of code to show your suggestion?

On Mar 8, 5:06 am, Jeff FW jeff...@gmail.com wrote:

 That's pretty similar to what I do, actually, if a bit simpler (but
 that's good!)  One suggestion would be to throw an except (maybe for
 the base SQLAlchemy exception class)  in your try block, otherwise you
 run the risk of things dying in an ugly way.  I'm not familiar with
 pyamf, so I don't know how it would handle errors, but twisted usually
 carries on as if nothing happens.

 Also, I'd make the decorator a bit more general--don't put the model
 argument in wrapper().  Put sess first, then take *args and **kwargs,
 and pass those right to the inner function f(). That way you can reuse
 it for anything that requires a DB session.

 Other things you could add (if so inclined) are decorators for logging
 and other types of error handling (like catching IntegrityErros thrown
 by duplicates.)  I do those things, but I might be a bit OCD :-)

 -Jeff

 On Mar 7, 1:41 am, 一首诗 newpt...@gmail.com wrote:

   Hi, thanks for your reply.  I'm using it the same way you are.  The only
   difference is that I am using pyamf instead of PB.

  On every request, I delegate required db operations to a class 
  called
  Database, similar to these code below.

   I used to use scoped_session instead of creating and closing a session
   every time.   But as I said in my earlier mails, that didn't work.

  These code below seems to work right now.  But if you have more
  suggestion,  I will be very thankful.

  #=

  def require_session(f):
  '''create and close session for each synchronous method'''
  def wrapper(model, *args, **kw):
  sess
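
[Editor's note: the quoted code above is cut off in the archive. A completed sketch of the pattern the thread describes — session created per call, passed as a keyword argument, always closed — with an invented Database class for illustration:]

```python
from functools import wraps

from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

class Database:
    def __init__(self):
        self.engine = create_engine("sqlite://")
        self.Session = sessionmaker(bind=self.engine)

def require_session(f):
    """Create a session for each call and always close it afterwards."""
    @wraps(f)
    def wrapper(self, *args, **kw):
        sess = self.Session()
        try:
            return f(self, *args, sess=sess, **kw)
        finally:
            sess.close()
    return wrapper

class Model(Database):
    @require_session
    def get_scalar(self, sess=None):
        return sess.execute(text("SELECT 41 + 1")).scalar()

value = Model().get_scalar()
```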

[sqlalchemy] getting the number of records in a result set from select

2009-03-13 Thread jeff

hi. this question should be easy. i've searched around though and
haven't found the answer. all i want to do is know the number of
records in a result set i get using an execute statement with a simple
select. so if i do:

s = select([raw_table], and_(raw_table.c.name == m['name'],
                             raw_table.c.as_of >= i['first_time']))
rec_list = conn.execute(s)

is there a simple way to get back the number of records that exist in
rec_list?

thanks, i'm sure it's simple and i missed something.







[sqlalchemy] Re: getting the number of records in a result set from select

2009-03-13 Thread jeff

thanks i will use select count (*)

i was making a leap that there would be something in pgdb which allows
a function like:

sql_txt = "select * from addresses"
cursor.execute(sql_txt)
rows = cursor.fetchall()
rows_returned = cursor.rowcount

where the rowcount property contains the number of rows returned by
the select statement defined and executed.

just wanted to see whether such a property was available in sqlalchemy
using the relational expression interface.


On Mar 13, 9:38 am, Michael Bayer mike...@zzzcomputing.com wrote:
 database cursors are essentially iterators so a total rowcount, without
 fetching all the rows, is not available in a platform-agnostic way.

 the usual strategy to find out how many rows of something exist in the DB
 is to do SELECT COUNT(*).



 Stephen Emslie wrote:

  Well, I would have expected ResultProxy.rowcount to do just that
  (return the number of rows in the last executed statement) but I just
  get 0 from it. Perhaps someone could explain how to use it correctly.

  Stephen Emslie

  On Thu, Mar 12, 2009 at 5:20 PM, jeff jeffre...@gmail.com wrote:

  hi. this question should be easy. i've searched around though and
  haven't found the answer. all i want to do is know the number of
  records in a result set i get using an execute statement with a simple
  select. so if i do:

  s=select([raw_table],and_(raw_table.c.name==m['name'],
  raw_table.c.as_of>=i['first_time']))
  rec_list=conn.execute(s)

  is there a simple way to get back the number of records that exist in
  rec_list?

  thanks, i'm sure it's simple and i missed something.- Hide quoted text -

 - Show quoted text -
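[Editor's note: both approaches discussed in this thread can be sketched with modern (1.4+ style) SQLAlchemy Core; the table and column names here are illustrative stand-ins, not the poster's actual schema.]

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, func, select)

engine = create_engine("sqlite://")
metadata = MetaData()
# illustrative stand-in for the thread's raw_table
raw_table = Table("raw_table", metadata,
                  Column("id", Integer, primary_key=True),
                  Column("name", String))
metadata.create_all(engine)

with engine.connect() as conn:
    conn.execute(raw_table.insert(), [{"name": "a"}, {"name": "b"}])
    # Option 1: fetch everything, then count client-side
    rows = conn.execute(select(raw_table)).fetchall()
    print(len(rows))   # 2
    # Option 2: let the database count, without fetching the rows
    count = conn.execute(
        select(func.count()).select_from(raw_table)).scalar()
    print(count)       # 2
```

Option 2 is the SELECT COUNT(*) strategy recommended above; it is the only portable way to learn the size of a result set without materializing it.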



[sqlalchemy] Re: getting the number of records in a result set from select

2009-03-13 Thread jeff

thank you that got me where i was trying to get.

originally in the first example i was not adding the fetchall(). len()
and rowcount were not yielding anything in that case. then once
fetchall() was added i used len() as suggested and it worked.

thanks.

On Mar 13, 9:30 pm, Mike Conley mconl...@gmail.com wrote:
 If you use

     rows = cursor.fetchall()

 you have already executed the query and the result is a list of RowProxy's
 returned by the query. Count then is simply

     count = len(rows)

 Otherwise, the count(*) approach is correct.

 --
 Mike Conley



 On Fri, Mar 13, 2009 at 4:42 PM, jeff jeffre...@gmail.com wrote:

  thanks i will use select count (*)

  i was making a leap that there would be something in pgdb which allows
  a function like:

  sql_txt = "select * from addresses"
  cursor.execute(sql_txt)
  rows=cursor.fetchall()
  rows_returned = cursor.rowcount

  where the rowcount property contains the number of rows returned by
  the select statement defined and executed.

  just wanted to see whether such a property was available in sqlalchemy
  using the relational expression interface.

  On Mar 13, 9:38 am, Michael Bayer mike...@zzzcomputing.com wrote:
   database cursors are essentially iterators so a total rowcount, without
   fetching all the rows, is not available in a platform-agnostic way.

   the usual strategy to find out how many rows of something exist in the DB
   is to do SELECT COUNT(*).

   Stephen Emslie wrote:

Well, I would have expected ResultProxy.rowcount to do just that
(return the number of rows in the last executed statement) but I just
get 0 from it. Perhaps someone could explain how to use it correctly.

Stephen Emslie

On Thu, Mar 12, 2009 at 5:20 PM, jeff jeffre...@gmail.com wrote:

hi. this question should be easy. i've searched around though and
haven't found the answer. all i want to do is know the number of
records in a result set i get using an execute statement with a simple
select. so if i do:

s=select([raw_table],and_(raw_table.c.name==m['name'],
raw_table.c.as_of>=i['first_time']))
rec_list=conn.execute(s)

is there a simple way to get back the number of records that exist in
rec_list?

thanks, i'm sure it's simple and i missed something.- Hide quoted text
  -

   - Show quoted text -- Hide quoted text -

 - Show quoted text -
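[Editor's note: the DBAPI behavior described in this thread, a cursor's rowcount not reflecting SELECT results, is easy to observe with the stdlib sqlite3 driver; the table here is illustrative.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE addresses (street TEXT)")
conn.executemany("INSERT INTO addresses VALUES (?)", [("a",), ("b",)])

cur = conn.execute("SELECT * FROM addresses")
# rowcount is not determinable for a SELECT before fetching,
# so sqlite3 reports -1, exactly as the DB API allows
print(cur.rowcount)   # -1
rows = cur.fetchall()
print(len(rows))      # 2: counting works only after fetching
```

This is why the thread converges on either len(fetchall()) or SELECT COUNT(*): the cursor itself cannot tell you.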



[sqlalchemy] Re: Using sqlalchemy in twisted.

2009-03-11 Thread Jeff FW

Logging SA objects *after* the session is gone will always be a
problem, unless you make sure to detach all of them from the session.
I'd just log the original request, instead.  In my case, I have to
convert all of my SA objects to something Perspective Broker
understands, so I actually log those *after* that, as they're no
longer part of the session--but I'm not sure if you can do that in
your case.

As for the decorator--I got a little confused with your names--you
call it model in your decorator, but it's really an instance of
Database when it gets passed in as self.  One way to get rid of that
parameter would be to make sess a keyword argument, like so:

def wrapper(*args, **kw):
    sess = model.Session()
    try:
        return f(sess=sess, *args, **kw)

and then change your method:

def _getObjectById(self, klass, id, sess=None):
    return sess.query(klass).get(id)

That way, self will get passed in *args with no problem.

Are you planning to have multiple instances of your Database class?
If not, I'd suggest changing everything in it into class methods, so
that way you can call it *without* an instance at all, and don't have
to worry about connecting to the database multiple times by accident.
Just a thought.
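[Editor's note: a minimal, self-contained sketch of the generalization suggested above, with sess passed as a keyword argument so self travels in *args. The Database class follows the thread's example; Session.get() is used in place of the legacy query(klass).get(id) spelling available on modern SQLAlchemy (1.4+).]

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

def require_session(f):
    """Open a session, pass it as a keyword argument, always close it."""
    def wrapper(*args, **kw):
        # args[0] is the Database instance (self), which owns Session
        sess = args[0].Session()
        try:
            return f(*args, sess=sess, **kw)
        finally:
            sess.close()
    return wrapper

class Database:
    def __init__(self, conn_str):
        self.engine = create_engine(conn_str)
        self.Session = sessionmaker(bind=self.engine)

    @require_session
    def _getObjectById(self, klass, id, sess=None):
        # sess.get() is the modern spelling of sess.query(klass).get(id)
        return sess.get(klass, id)
```

Because wrapper takes only *args and **kw, the same decorator can wrap any Database method that accepts a sess keyword, which is the reuse Jeff is describing.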

-Jeff

On Mar 10, 10:38 am, 一首诗 newpt...@gmail.com wrote:
 Hi Jeff,

 Thanks for your kind suggestion.

 I first added some log decorators, but I found they might cause
 printing of SQLAlchemy objects that are not bound to any session.

 And I am not quite sure how to make the decorator more general.

 Actually, I think I must use model as the first parameter because, as
 an instance method, _getObjectById requires its first parameter to be
 self.
 Can you write a few lines of code to show your suggestion?

 On Mar 8, 5:06 am, Jeff FW jeff...@gmail.com wrote:

   That's pretty similar to what I do, actually, if a bit simpler (but
   that's good!)  One suggestion would be to add an except clause (maybe
   for the base SQLAlchemy exception class) in your try block; otherwise
   you run the risk of things dying in an ugly way.  I'm not familiar
   with pyamf, so I don't know how it would handle errors, but twisted
   usually carries on as if nothing happened.

  Also, I'd make the decorator a bit more general--don't put the model
  argument in wrapper().  Put sess first, then take *args and **kwargs,
  and pass those right to the inner function f(). That way you can reuse
  it for anything that requires a DB session.

   Other things you could add (if so inclined) are decorators for logging
   and other types of error handling (like catching IntegrityErrors thrown
   by duplicates.)  I do those things, but I might be a bit OCD :-)

  -Jeff

  On Mar 7, 1:41 am, 一首诗 newpt...@gmail.com wrote:

    Hi, thanks for your reply.  I'm using it in a similar way to you.  The only
   difference is that I am using pyamf instead of PB.

   On every request, I delegate required db operations to a class called
   Database, similar to these code below.

   I used to use scope_session instead of create and close session every
   time.   But as I said in my earlier mails, they don't work.

   These code below seems to work right now.  But if you have more
   suggestion,  I will be very thankful.

   #=

    def require_session(f):
        '''create and close session for each synchronous method'''
        def wrapper(model, *args, **kw):
            sess = model.Session()
            try:
                return f(model, sess, *args, **kw)
            finally:
                sess.close()
        return wrapper

    class Database():
        def __init__(self, conn_str):
            self.conn_str = conn_str
            self.engine = create_engine(self.conn_str, echo=False)
            self.Session = sessionmaker(bind=self.engine,
                                        expire_on_commit=False)

        def getObjectById(self, klass, id):
            return threads.deferToThread(self._getObjectById, klass, id)

        @require_session
        def _getObjectById(self, sess, klass, id):
            return sess.query(klass).get(id)
   #=

   On Mar 6, 5:44 am, Jeff FW jeff...@gmail.com wrote:

Don't use scoped_session--you'll run into problems no matter what you
do.  I'm using Perspective Broker from Twisted with SQLAlchemy.  I
make sure to create and commit/rollback a session for *every* PB
request.  It works perfectly, and that's the only way I was really
able to get it to work in all cases.

Assuming you're using Twisted in a similar way, you could write a
simple decorator to wrap any functions that need a database session in
the begin/commit stuff as necessary.

If you can give more details of how you're using Twisted, I might be
able to offer some more insight.

-Jeff

On Mar 5, 12:33 am, 一首诗 newpt...@gmail.com wrote:

 I'm not quite sure, but I think I'm pretty careful of sharing objects
 between threads.

 1st, I

[sqlalchemy] Re: Using sqlalchemy in twisted.

2009-03-05 Thread Jeff FW

Don't use scoped_session--you'll run into problems no matter what you
do.  I'm using Perspective Broker from Twisted with SQLAlchemy.  I
make sure to create and commit/rollback a session for *every* PB
request.  It works perfectly, and that's the only way I was really
able to get it to work in all cases.

Assuming you're using Twisted in a similar way, you could write a
simple decorator to wrap any functions that need a database session in
the begin/commit stuff as necessary.
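[Editor's note: the session-per-request discipline described above can be sketched as a decorator. Names here are illustrative; this is not an actual Twisted/PB API, just the commit-on-success / rollback-on-error shape being recommended.]

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite://")
Session = sessionmaker(bind=engine)

def per_request_session(f):
    """Give each wrapped call its own session: commit on success,
    roll back on any error, and always close the session."""
    def wrapper(*args, **kw):
        sess = Session()
        try:
            result = f(sess, *args, **kw)
            sess.commit()
            return result
        except Exception:
            sess.rollback()
            raise
        finally:
            sess.close()
    return wrapper
```

Each request gets a fresh session and never leaks one across threads, which is the property that made this pattern work reliably under Twisted.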

If you can give more details of how you're using Twisted, I might be
able to offer some more insight.

-Jeff

On Mar 5, 12:33 am, 一首诗 newpt...@gmail.com wrote:
 I'm not quite sure, but I think I'm pretty careful not to share
 objects between threads.

 1st, I cached as few ORM objects as possible.  I tried to detach
 them, but I found that once detached, I couldn't access any of their
 fields any more.

 2nd, I create new orm objects based on client request, pass them to
 class Database and then merge them to scoped sessions, change, commit
 and then discard these objects.

 3rd, I switch to sqlite frequently to check if there is any database
 operation outside Database, because sqlite doesn't allow multi-thread
 access.

 Actually it seemed to work until 2 or 3 days ago, when some cases
 suddenly started hanging the server.

 Ah, as I've already written lots of code against the ORM, I think
 maybe I should change Database to use a dedicated thread to handle
 all database operations.

 That might be a bottleneck for my application, but I really can't
 give up the ORM, as these mapped classes are used everywhere in my
 application.

 On Mar 4, 7:26 pm, 一首诗 newpt...@gmail.com wrote:

  Hi, all

  I am using sqlalchemy in twisted in my project in the way below.
  Defer any database operation so the twisted's main thread won't be
  blocked.

  And I use scoped_session, so that sessions won't have to be created
  again and again.

  ==
  class Database():
      def __init__(self, conn_str):
          self.conn_str = conn_str
          self.engine = create_engine(self.conn_str, echo=False)
          self.Session = scoped_session(sessionmaker(bind=self.engine,
                                                     expire_on_commit=False))

      def getObjectById(self, klass, id):
          return threads.deferToThread(self._getObjectById, klass, id)

      def _getObjectById(self, klass, id):
          sess = self.Session()
          return sess.query(klass).get(id)
  ==

  The code doesn't work.  When I limit the thread pool size to 1

  reactor.suggestThreadPoolSize(1)

  everything goes fine.  Otherwise the server blocks and must be
  killed with kill -9.

  The result conflicts with my understanding of sqlalchemy.  Since I
  don't share any object between threads, there should be no problem!

  Ah...  It's always risky to use something you haven't tried
  before...





[sqlalchemy] Execute statement sends ROLLBACK commands to database

2009-02-19 Thread Jeff Cook

Using the following code with Pylons, SQLAlchemy and SQLSoup:

def add(self):
    ians = text("SELECT * FROM insert_clinic(:name, :addr, "
                ":city, :state, :zip, :taxid, :note, :website, :addedby)")
    conn = meta.soup.engine.connect()
    ins = conn.execute(ians, name=request.params['name'],
        addr=request.params['address'], city=request.params['city'],
        state=request.params['state'], zip=request.params['zip'],
        taxid=request.params['taxid'], note=request.params['note'],
        website=request.params['website'], addedby=session['user_id'])
    Session.commit()

Having set my PostgreSQL logs to debug and always show statements, I
can see that the statement produced by this code is hitting the
database and that it is a correct statement in and of itself, as it
works when applied in a standard query outside of my application;
however, when this code is executed by the application, the working
statement is passed but immediately followed by several ROLLBACK
commands which do not happen when the generated statement is copied
into pgadmin3's query designer verbatim. This prevents any rows from
being committed; when I experienced this problem earlier, ordering a
Session.commit() fixed it, but as seen here, that's not working right
now.

All help is deeply appreciated and needed. Thank you. : )

Signed
Jeff
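[Editor's note: trailing ROLLBACKs in the server log usually mean the connection's implicit transaction was never committed on the connection that did the work. A sketch of committing explicitly on the Connection via engine.begin(), using an illustrative sqlite schema rather than the insert_clinic function above:]

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

# engine.begin() opens a transaction and issues COMMIT on success
# (ROLLBACK only if the block raises)
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE clinics (name TEXT)"))
    conn.execute(text("INSERT INTO clinics (name) VALUES (:name)"),
                 {"name": "Acme"})

with engine.connect() as conn:
    n = conn.execute(text("SELECT count(*) FROM clinics")).scalar()
    print(n)  # 1: the insert was committed, not rolled back
```

Mixing a raw engine connection with a separate Session.commit(), as in the code above, commits the session's transaction but not the connection's, which matches the symptom described.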




[sqlalchemy] Re: DataError inserting CURRVAL in an inline query with SQLAlchemy and SQLSoup

2009-02-19 Thread Jeff Cook

This worked after using func() to pass currval and making literal
escaped single-quotes, like this: func.currval('users_user_id'). Thank
you everyone for the help.
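[Editor's note: the fix described here, passing currval() as a SQL function expression rather than a string parameter, can be sketched at the SQL-compilation level; the table and columns are illustrative stand-ins.]

```python
from sqlalchemy import column, func, insert, table

employees = table("employees",
                  column("employee_user_id"),
                  column("employee_id"))

# func.currval(...) renders as a function call in the INSERT,
# so the database evaluates it, instead of receiving the literal
# string "CURRVAL('users_user_id_seq')" as a parameter value
stmt = insert(employees).values(
    employee_user_id=func.currval("users_user_id_seq"),
    employee_id=3,
)
print(stmt)
```

Compiling the statement shows currval(...) inline in the VALUES clause, which is why the DataError from the string-parameter version goes away.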

On Mon, Feb 16, 2009 at 11:58 PM, jo jose.soa...@sferacarta.com wrote:

 You cannot pass currval('users_user_id_seq') as a parameter value, you
 have to pass an integer value instead.
 I solved this problem in this way:

 INSERT INTO employees

 (employee_user_id, employee_id, employee_first_name,
 employee_last_name, employee_address, employee_city, employee_state,
 employee_zip, employee_extension)

 VALUES

 ( currval('users_user_id_seq'), %(employee_id)s, %(employee_first_name)s, 
 %(employee_last_name)s, %
 (employee_address)s, %(employee_city)s, %(employee_state)s, %
 (employee_zip)s, %(employee_extension)s)

 %

 {'employee_first_name': u'Jeff', 'employee_city': u'Olathe', 
 'employee_state': u'KS',
 'employee_address': u'150', 'employee_id': 3L, 'employee_extension':
 u'1112', 'employee_user_id': nextval('users_user_id_seq'),
 'employee_zip': u'66062', 'employee_last_name': u'Poller'
 }


 j


 Jeff Cook wrote:
 Hi all,

 I want to use the return value of a CURRVAL call as the value of a
 column in a row I'm inserting, to link together related records. I'm
 using Pylons with SQLAlchemy and SQLSoup. SQLAlchemy spits back at me
 a DataError because I'm trying to place CURRVAL in an integer field.
 How do I get the thing to reference the integer instead of taking my
 words as the literal field value?

 This is the error I receive from Pylons:
 class 'sqlalchemy.exc.DataError': (DataError) invalid input syntax
 for integer: CURRVAL('users_user_id_seq') 'INSERT INTO employees
 (employee_id, employee_user_id, employee_first_name,
 employee_last_name, employee_address, employee_city, employee_state,
 employee_zip, employee_extension) VALUES (%(employee_id)s, %
 (employee_user_id)s, %(employee_first_name)s, %(employee_last_name)s, %
 (employee_address)s, %(employee_city)s, %(employee_state)s, %
 (employee_zip)s, %(employee_extension)s)' {'employee_first_name':
 u'Jeff', 'employee_city': u'Olathe', 'employee_state': u'KS',
 'employee_address': u'150', 'employee_id': 3L, 'employee_extension':
 u'1112', 'employee_user_id': 'CURRVAL(users_user_id_seq)',
 'employee_zip': u'66062', 'employee_last_name': u'Poller'}

 Thanks in advance. : )

 Signed
 Jeff

 



 





[sqlalchemy] Re: Execute statement sends ROLLBACK commands to database

2009-02-19 Thread Jeff Cook

More specifically, it seems to be erroring out with this: DEBUG:  SET
TRANSACTION ISOLATION LEVEL must be called before any query
before the ROLLBACK happens.

On Thu, Feb 19, 2009 at 4:42 PM, Jeff Cook cookieca...@gmail.com wrote:
 Using the following code with Pylons, SQLAlchemy and SQLSoup:

def add(self):
    ians = text("SELECT * FROM insert_clinic(:name, :addr, "
                ":city, :state, :zip, :taxid, :note, :website, :addedby)")
    conn = meta.soup.engine.connect()
    ins = conn.execute(ians, name=request.params['name'],
        addr=request.params['address'], city=request.params['city'],
        state=request.params['state'], zip=request.params['zip'],
        taxid=request.params['taxid'], note=request.params['note'],
        website=request.params['website'], addedby=session['user_id'])
    Session.commit()

 Having set my PostgreSQL logs to debug and always show statements, I
 can see that the statement produced by this code is hitting the
 database and that it is a correct statement in and of itself, as it
 works when applied in a standard query outside of my application;
 however, when this code is executed by the application, the working
 statement is passed but immediately followed by several ROLLBACK
 commands which do not happen when the generated statement is copied
 into pgadmin3's query designer verbatim. This prevents any rows from
 being committed; when I experienced this problem earlier, ordering a
 Session.commit() fixed it, but as seen here, that's not working right
 now.

 All help is deeply appreciated and needed. Thank you. : )

 Signed
 Jeff




