[sqlalchemy] alternate ways to removing event listeners?

2016-09-26 Thread Iain Duncan
Hi folks, I'm having an issue with event listener registration. I figured
out that I need to deregister my listeners after each functional test so
they don't get registered twice in the same thread and execute too many
times. But I was using an anonymous callable to register them, like this:

event.listen(resource_class, 'after_insert',
             ModelVersioner(resource_class, 'create'))

So I can't unlisten, because I don't store the callable created by the call
to ModelVersioner, and so I can't replicate the exact signature of the call
to listen in my call to remove. I'd prefer not to make and track a whole
bunch of callables, and I wouldn't need to if I could remove listeners
without needing to pass in the exact callable. So I'm hoping there is some
other way to nuke listeners besides passing target, event, and callback to
event.remove?

Something like event.remove(target, event, *everything*), say, or altering
the event registry itself? (This is just for tests, so it's ok if it's not
stable API.)

thanks!
iain
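For anyone else hitting this: a minimal sketch of the keep-a-reference
approach, so that event.remove() can be handed the exact callable that was
registered. ModelVersioner is the callable class from the post; the
module-level registry list is my own assumption, not anything SQLAlchemy
provides.

from sqlalchemy import event

_registered = []  # (target, identifier, fn) tuples we can tear down later

def register_versioner(resource_class):
    # keep a handle on the callable so the very same object can be removed later
    fn = ModelVersioner(resource_class, 'create')
    event.listen(resource_class, 'after_insert', fn)
    _registered.append((resource_class, 'after_insert', fn))

def remove_versioners():
    # call from test teardown so listeners don't get registered twice per thread
    for target, identifier, fn in _registered:
        event.remove(target, identifier, fn)
    del _registered[:]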



Re: [sqlalchemy] Joining, if join not already done?

2016-09-09 Thread Iain Duncan
ok thanks. +1 for adding that in the long term!

iain

On Fri, Sep 9, 2016 at 2:17 PM, Mike Bayer <mike...@zzzcomputing.com> wrote:

>
>
> On 09/09/2016 05:13 PM, Iain Duncan wrote:
>
>> Hi all, I have a query that gets conditionally extended depending on
>> what's going on with a given request. I've got a section where it should
>> join something and search on that join, but if the same join has already
>> happened, I get an error. I've figured out I can do the following, which
>> looks like I'm stepping outside the public API:
>>
>> if UserOrgRoleAssoc not in [ent.class_ for ent in query._join_entities]:
>>     query = query.join(UserOrgRoleAssoc)
>>
>> Is there a better (proper?) way to do this?
>>
>
> A real API for this kind of thing is proposed long term:
>
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3225/query-heuristic-inspection-system
>
> for now, not really, either look at the private attributes or track these
> things on the outside (safer).
>
>
>
>
>> thanks!
>> Iain
>>
>> --
>> You received this message because you are subscribed to the Google
>> Groups "sqlalchemy" group.
>> To unsubscribe from this group and stop receiving emails from it, send
>> an email to sqlalchemy+unsubscr...@googlegroups.com
>> <mailto:sqlalchemy+unsubscr...@googlegroups.com>.
>> To post to this group, send email to sqlalchemy@googlegroups.com
>> <mailto:sqlalchemy@googlegroups.com>.
>> Visit this group at https://groups.google.com/group/sqlalchemy.
>> For more options, visit https://groups.google.com/d/optout.
>>
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>



[sqlalchemy] Joining, if join not already done?

2016-09-09 Thread Iain Duncan
Hi all, I have a query that gets conditionally extended depending on
what's going on with a given request. I've got a section where it should join
something and search on that join, but if the same join has already
happened, I get an error. I've figured out I can do the following, which
looks like I'm stepping outside the public API:

if UserOrgRoleAssoc not in [ent.class_ for ent in query._join_entities]:
    query = query.join(UserOrgRoleAssoc)

Is there a better (proper?) way to do this?

thanks!
Iain
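For reference, a minimal sketch of the "track these things on the outside"
approach Mike suggests in his reply, which avoids touching
Query._join_entities. UserOrgRoleAssoc is from the post; join_once, the
joined set, session and User are made-up names for illustration.

def join_once(query, joined, cls, *args, **kwargs):
    # join cls only if we haven't already; 'joined' is a set we carry
    # alongside the query while conditionally extending it
    if cls not in joined:
        query = query.join(cls, *args, **kwargs)
        joined.add(cls)
    return query, joined

joined = set()
query = session.query(User)
query, joined = join_once(query, joined, UserOrgRoleAssoc)
query, joined = join_once(query, joined, UserOrgRoleAssoc)  # no-op the second time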



Re: [sqlalchemy] postgres/psycopg failing to connect after a certain number of tests, stuck!

2016-09-08 Thread Iain Duncan
Thanks Mike! engine.dispose() fixed it. Makes sense that it was a garbage
collection problem.

For anyone else finding this, I just added a call to engine.dispose() to the
class-level tearDown and it's all good.
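A minimal sketch of that class-level teardown, assuming the unittest
setUpClass shown in the harness quoted below (where cls.engine is created):

    @classmethod
    def tearDownClass(cls):
        # return all pooled connections so repeated suites don't pile up
        # postgres connections and eventually fail to connect
        cls.engine.dispose()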

On Thu, Sep 8, 2016 at 10:39 AM, Iain Duncan <iainduncanli...@gmail.com>
wrote:

> I've discovered too that the issue only happens when all my test suites
> get called from one call to discovery. If I run them in 8 calls (one per
> package, with each holding 1 to 5 test suites of about 1 to 10 tests
> each), there's no issue. :-/
>
> iain
>
> On Thu, Sep 8, 2016 at 10:36 AM, Iain Duncan <iainduncanli...@gmail.com>
> wrote:
>
>> Thanks Mike, no we aren't. I think we are just making a new engine per
>> suite, not per test though, we have the engine init code in the class level
>> setup function, and per test setUp is making new sessions. We do drop the
>> tables per test, which I expect we are not doing correctly as I'm a
>> postgres newb.
>>
>> Should I be adding dispose in class level teardown?
>>
>> thanks!
>> Iain
>>
>>
>>
>>
>> On Thu, Sep 8, 2016 at 10:26 AM, Mike Bayer <mike...@zzzcomputing.com>
>> wrote:
>>
>>> Engine per test is v inefficient, are you at least calling dispose() on
>>> each one in teardown?
>>>
>>>
>>> On Thursday, September 8, 2016, Iain Duncan <iainduncanli...@gmail.com>
>>> wrote:
>>>
>>>> Hi Jonathan, sure can, here it is below. I think maybe this has
>>>> something to do with the fact that we are creating a new engine and
>>>> dropping the schema on each pass maybe? Because it seems to *only* happen
>>>> when we run about 75 tests in a row.
>>>>
>>>> Thanks for looking!
>>>>
>>>> class FunctionalTestSuite(unittest.TestCase):
>>>> """
>>>> base class for functional test suite with db seeding"
>>>> """
>>>>
>>>> # derived class needs to set this to the package of the app under test
>>>> # IE app_module = examp.app
>>>> app_package = None
>>>>
>>>> # derived class needs to put settings in here
>>>> settings = {
>>>> 'sqlalchemy.url': None
>>>> }
>>>>
>>>> def __init__(self, *args, **kwargs):
>>>> super(FunctionalTestSuite, self).__init__(*args, **kwargs)
>>>>
>>>> @classmethod
>>>> def setUpClass(cls):
>>>> "class level setUp creates tables"
>>>> configure_mappers()
>>>> # separate session factory for our test runner
>>>> cls.engine = engine_from_config(cls.settings, prefix='sqlalchemy.')
>>>> cls.session_factory = sessionmaker(bind=cls.engine)
>>>>
>>>> @classmethod
>>>> def init_db(cls):
>>>> "initialize the database and create sessions, normally called from 
>>>> setUp"
>>>> # drop and recreate tables once per test
>>>> # NB: this has to be done differently for postgres because it's a 
>>>> constraint nazi
>>>> if 'postgres' in cls.settings['sqlalchemy.url']:
>>>> cls.engine.execute("drop owned by semaphore")
>>>> cls.engine.execute("create schema if not exists public")
>>>> else:
>>>> Base.metadata.drop_all(bind=cls.engine)
>>>> Base.metadata.create_all(bind=cls.engine)
>>>>
>>>> def init_sessions(self):
>>>> "init seed and confirm sessions, normally called from setUp"
>>>> # create sessions for seeding and separate for confirming
>>>> self.seed_dbs = self.session_factory()
>>>> self.confirm_dbs = self.session_factory()
>>>>
>>>> def init_app(self):
>>>> "init test wsgi app, normally called from setUp"
>>>> self.app = self.app_package.main({}, **self.settings)
>>>> self.testapp = webtest.TestApp(self.app)
>>>>
>>>> def setUp(self):
>>>> # for tests where we want fresh db on *every* test, we use this
>>>> self.init_db()
>>>> self.init_sessions()
>>>> self.init_app()
>>>>
>>>> def tearDown(self):
>>>&g

Re: [sqlalchemy] postgres/psycopg failing to connect after a certain number of tests, stuck!

2016-09-08 Thread Iain Duncan
I've discovered too that the issue only happens when all my test suites get
called from one call to discovery. If I run them in 8 calls (one per
package, with each holding 1 to 5 test suites of about 1 to 10 tests each),
there's no issue. :-/

iain

On Thu, Sep 8, 2016 at 10:36 AM, Iain Duncan <iainduncanli...@gmail.com>
wrote:

> Thanks Mike, no we aren't. I think we are just making a new engine per
> suite, not per test though, we have the engine init code in the class level
> setup function, and per test setUp is making new sessions. We do drop the
> tables per test, which I expect we are not doing correctly as I'm a
> postgres newb.
>
> Should I be adding dispose in class level teardown?
>
> thanks!
> Iain
>
>
>
>
> On Thu, Sep 8, 2016 at 10:26 AM, Mike Bayer <mike...@zzzcomputing.com>
> wrote:
>
>> Engine per test is v inefficient, are you at least calling dispose() on
>> each one in teardown?
>>
>>
>> On Thursday, September 8, 2016, Iain Duncan <iainduncanli...@gmail.com>
>> wrote:
>>
>>> Hi Jonathan, sure can, here it is below. I think maybe this has
>>> something to do with the fact that we are creating a new engine and
>>> dropping the schema on each pass maybe? Because it seems to *only* happen
>>> when we run about 75 tests in a row.
>>>
>>> Thanks for looking!
>>>
>>> class FunctionalTestSuite(unittest.TestCase):
>>> """
>>> base class for functional test suite with db seeding"
>>> """
>>>
>>> # derived class needs to set this to the package of the app under test
>>> # IE app_module = examp.app
>>> app_package = None
>>>
>>> # derived class needs to put settings in here
>>> settings = {
>>> 'sqlalchemy.url': None
>>> }
>>>
>>> def __init__(self, *args, **kwargs):
>>> super(FunctionalTestSuite, self).__init__(*args, **kwargs)
>>>
>>> @classmethod
>>> def setUpClass(cls):
>>> "class level setUp creates tables"
>>> configure_mappers()
>>> # separate session factory for our test runner
>>> cls.engine = engine_from_config(cls.settings, prefix='sqlalchemy.')
>>> cls.session_factory = sessionmaker(bind=cls.engine)
>>>
>>> @classmethod
>>> def init_db(cls):
>>> "initialize the database and create sessions, normally called from 
>>> setUp"
>>> # drop and recreate tables once per test
>>> # NB: this has to be done differently for postgres because it's a 
>>> constraint nazi
>>> if 'postgres' in cls.settings['sqlalchemy.url']:
>>> cls.engine.execute("drop owned by semaphore")
>>> cls.engine.execute("create schema if not exists public")
>>> else:
>>> Base.metadata.drop_all(bind=cls.engine)
>>> Base.metadata.create_all(bind=cls.engine)
>>>
>>> def init_sessions(self):
>>> "init seed and confirm sessions, normally called from setUp"
>>> # create sessions for seeding and separate for confirming
>>> self.seed_dbs = self.session_factory()
>>> self.confirm_dbs = self.session_factory()
>>>
>>> def init_app(self):
>>> "init test wsgi app, normally called from setUp"
>>> self.app = self.app_package.main({}, **self.settings)
>>> self.testapp = webtest.TestApp(self.app)
>>>
>>> def setUp(self):
>>> # for tests where we want fresh db on *every* test, we use this
>>> self.init_db()
>>> self.init_sessions()
>>> self.init_app()
>>>
>>> def tearDown(self):
>>> self.seed_dbs.close()
>>> self.confirm_dbs.close()
>>>
>>>
>>> On Wed, Sep 7, 2016 at 8:37 PM, Jonathan Vanasco <jvana...@gmail.com>
>>> wrote:
>>>
>>>> can you share the test harness?
>>>>
>>>> --
>>>> You received this message because you are subscribed to the Google
>>>> Groups "sqlalchemy" group.
>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>> an email to sqlalchemy+unsubscr...@googlegroups.com.
>>>> To post to this group, send email to sqlalchemy@googlegroups.com.
>

Re: [sqlalchemy] postgres/psycopg failing to connect after a certain number of tests, stuck!

2016-09-08 Thread Iain Duncan
Thanks Mike, no we aren't. I think we are just making a new engine per
suite, not per test though, we have the engine init code in the class level
setup function, and per test setUp is making new sessions. We do drop the
tables per test, which I expect we are not doing correctly as I'm a
postgres newb.

Should I be adding dispose in class level teardown?

thanks!
Iain




On Thu, Sep 8, 2016 at 10:26 AM, Mike Bayer <mike...@zzzcomputing.com>
wrote:

> Engine per test is v inefficient, are you at least calling dispose() on
> each one in teardown?
>
>
> On Thursday, September 8, 2016, Iain Duncan <iainduncanli...@gmail.com>
> wrote:
>
>> Hi Jonathan, sure can, here it is below. I think maybe this has something
>> to do with the fact that we are creating a new engine and dropping the
>> schema on each pass maybe? Because it seems to *only* happen when we run
>> about 75 tests in a row.
>>
>> Thanks for looking!
>>
>> class FunctionalTestSuite(unittest.TestCase):
>> """
>> base class for functional test suite with db seeding"
>> """
>>
>> # derived class needs to set this to the package of the app under test
>> # IE app_module = examp.app
>> app_package = None
>>
>> # derived class needs to put settings in here
>> settings = {
>> 'sqlalchemy.url': None
>> }
>>
>> def __init__(self, *args, **kwargs):
>> super(FunctionalTestSuite, self).__init__(*args, **kwargs)
>>
>> @classmethod
>> def setUpClass(cls):
>> "class level setUp creates tables"
>> configure_mappers()
>> # separate session factory for our test runner
>> cls.engine = engine_from_config(cls.settings, prefix='sqlalchemy.')
>> cls.session_factory = sessionmaker(bind=cls.engine)
>>
>> @classmethod
>> def init_db(cls):
>> "initialize the database and create sessions, normally called from 
>> setUp"
>> # drop and recreate tables once per test
>> # NB: this has to be done differently for postgres because it's a 
>> constraint nazi
>> if 'postgres' in cls.settings['sqlalchemy.url']:
>> cls.engine.execute("drop owned by semaphore")
>> cls.engine.execute("create schema if not exists public")
>> else:
>> Base.metadata.drop_all(bind=cls.engine)
>> Base.metadata.create_all(bind=cls.engine)
>>
>> def init_sessions(self):
>> "init seed and confirm sessions, normally called from setUp"
>> # create sessions for seeding and separate for confirming
>> self.seed_dbs = self.session_factory()
>> self.confirm_dbs = self.session_factory()
>>
>> def init_app(self):
>> "init test wsgi app, normally called from setUp"
>> self.app = self.app_package.main({}, **self.settings)
>> self.testapp = webtest.TestApp(self.app)
>>
>> def setUp(self):
>> # for tests where we want fresh db on *every* test, we use this
>> self.init_db()
>> self.init_sessions()
>> self.init_app()
>>
>> def tearDown(self):
>> self.seed_dbs.close()
>> self.confirm_dbs.close()
>>
>>
>> On Wed, Sep 7, 2016 at 8:37 PM, Jonathan Vanasco <jvana...@gmail.com>
>> wrote:
>>
>>> can you share the test harness?
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "sqlalchemy" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to sqlalchemy+unsubscr...@googlegroups.com.
>>> To post to this group, send email to sqlalchemy@googlegroups.com.
>>> Visit this group at https://groups.google.com/group/sqlalchemy.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "sqlalchemy" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to sqlalchemy+unsubscr...@googlegroups.com.
>> To post to this group, send email to sqlalchemy@googlegroups.com.
>> Visit this group at https://groups.google.com/group/sqlalchemy.
>> For more options, visit https://groups.google.com/d/optout.
>>
> --
> You received this message because you are subscribed to the Google Groups
> &quo

Re: [sqlalchemy] Re: postgres/psycopg failing to connect after a certain number of tests, stuck!

2016-09-08 Thread Iain Duncan
Hi Jonathan, sure can, here it is below. I think this may have something
to do with the fact that we are creating a new engine and dropping the
schema on each pass? It seems to *only* happen when we run
about 75 tests in a row.

Thanks for looking!

class FunctionalTestSuite(unittest.TestCase):
    """
    base class for functional test suite with db seeding
    """

    # derived class needs to set this to the package of the app under test
    # IE app_package = examp.app
    app_package = None

    # derived class needs to put settings in here
    settings = {
        'sqlalchemy.url': None
    }

    def __init__(self, *args, **kwargs):
        super(FunctionalTestSuite, self).__init__(*args, **kwargs)

    @classmethod
    def setUpClass(cls):
        "class level setUp creates tables"
        configure_mappers()
        # separate session factory for our test runner
        cls.engine = engine_from_config(cls.settings, prefix='sqlalchemy.')
        cls.session_factory = sessionmaker(bind=cls.engine)

    @classmethod
    def init_db(cls):
        "initialize the database and create sessions, normally called from setUp"
        # drop and recreate tables once per test
        # NB: this has to be done differently for postgres because it's a constraint nazi
        if 'postgres' in cls.settings['sqlalchemy.url']:
            cls.engine.execute("drop owned by semaphore")
            cls.engine.execute("create schema if not exists public")
        else:
            Base.metadata.drop_all(bind=cls.engine)
        Base.metadata.create_all(bind=cls.engine)

    def init_sessions(self):
        "init seed and confirm sessions, normally called from setUp"
        # create sessions for seeding and separate for confirming
        self.seed_dbs = self.session_factory()
        self.confirm_dbs = self.session_factory()

    def init_app(self):
        "init test wsgi app, normally called from setUp"
        self.app = self.app_package.main({}, **self.settings)
        self.testapp = webtest.TestApp(self.app)

    def setUp(self):
        # for tests where we want fresh db on *every* test, we use this
        self.init_db()
        self.init_sessions()
        self.init_app()

    def tearDown(self):
        self.seed_dbs.close()
        self.confirm_dbs.close()


On Wed, Sep 7, 2016 at 8:37 PM, Jonathan Vanasco  wrote:

> can you share the test harness?
>
> --
> You received this message because you are subscribed to the Google Groups
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To post to this group, send email to sqlalchemy@googlegroups.com.
> Visit this group at https://groups.google.com/group/sqlalchemy.
> For more options, visit https://groups.google.com/d/optout.
>



[sqlalchemy] postgres/psycopg failing to connect after a certain number of tests, stuck!

2016-09-07 Thread Iain Duncan
Hi folks, I'm not sure if this is the right place to ask, really. I have
some functional tests, using webtest, in which I make an engine and a
couple of sessions that get used in addition to the webtest app. I close the
sessions in the tearDown method after each test.

Now that the number of functional tests is over about 100, they have
started failing, but *only* when I run the whole mess together. Run file by
file, they are all fine.

The error I'm getting is pasted below. I'm running stock postgres on os x
(installed from the universal binary, no special tweaking).

I've tried changing pool_size to 20 and max_overflow to -1 but that doesn't
seem to make a difference.

Any clues would be lovely!

thanks
iain

traceback:

src/warp.test/warp/test/functional.py:70: in setUp
self.init_db()
src/warp.test/warp/test/functional.py:51: in init_db
cls.engine.execute("drop owned by alimetrix")
env/lib/python2.7/site-packages/sqlalchemy/engine/base.py:1990: in execute
connection = self.contextual_connect(close_with_result=True)
env/lib/python2.7/site-packages/sqlalchemy/engine/base.py:2039: in
contextual_connect
self._wrap_pool_connect(self.pool.connect, None),
env/lib/python2.7/site-packages/sqlalchemy/engine/base.py:2078: in
_wrap_pool_connect
e, dialect, self)
env/lib/python2.7/site-packages/sqlalchemy/engine/base.py:1405: in
_handle_dbapi_exception_noconnection
exc_info
env/lib/python2.7/site-packages/sqlalchemy/util/compat.py:202: in
raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
env/lib/python2.7/site-packages/sqlalchemy/engine/base.py:2074: in
_wrap_pool_connect
return fn()
env/lib/python2.7/site-packages/sqlalchemy/pool.py:376: in connect
return _ConnectionFairy._checkout(self)
env/lib/python2.7/site-packages/sqlalchemy/pool.py:713: in _checkout
fairy = _ConnectionRecord.checkout(pool)
env/lib/python2.7/site-packages/sqlalchemy/pool.py:480: in checkout
rec = pool._do_get()
env/lib/python2.7/site-packages/sqlalchemy/pool.py:1060: in _do_get
self._dec_overflow()
env/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py:60: in
__exit__
compat.reraise(exc_type, exc_value, exc_tb)
env/lib/python2.7/site-packages/sqlalchemy/pool.py:1057: in _do_get
return self._create_connection()
env/lib/python2.7/site-packages/sqlalchemy/pool.py:323: in
_create_connection
return _ConnectionRecord(self)
env/lib/python2.7/site-packages/sqlalchemy/pool.py:449: in __init__
self.connection = self.__connect()
env/lib/python2.7/site-packages/sqlalchemy/pool.py:607: in __connect
connection = self.__pool._invoke_creator(self)
env/lib/python2.7/site-packages/sqlalchemy/engine/strategies.py:97: in
connect
return dialect.connect(*cargs, **cparams)
env/lib/python2.7/site-packages/sqlalchemy/engine/default.py:385: in connect
return self.dbapi.connect(*cargs, **cparams)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

dsn = 'dbname=alimetrix_test user=alimetrix password=alimetrix
host=localhost', database = 'alimetrix_test', user = 'alimetrix', password
= 'alimetrix', host = 'localhost', port = None
connection_factory = None, cursor_factory = None, async = False, kwargs =
{}, items = [('dbname', 'alimetrix_test'), ('user', 'alimetrix'),
('password', 'alimetrix'), ('host', 'localhost')]
k = 'host', v = 'localhost'

def connect(dsn=None,
database=None, user=None, password=None, host=None, port=None,
connection_factory=None, cursor_factory=None, async=False,
**kwargs):
"""
Create a new database connection.

The connection parameters can be specified either as a string:

conn = psycopg2.connect("dbname=test user=postgres
password=secret")

or using a set of keyword arguments:

conn = psycopg2.connect(database="test", user="postgres",
password="secret")

The basic connection parameters are:

- *dbname*: the database name (only in dsn string)
- *database*: the database name (only as keyword argument)
- *user*: user name used to authenticate
- *password*: password used to authenticate
- *host*: database host address (defaults to UNIX socket if not
provided)
- *port*: connection port number (defaults to 5432 if not provided)

Using the *connection_factory* parameter a different class or
connections
factory can be specified. It should be a callable object taking a
dsn
argument.

Using the *cursor_factory* parameter, a new default cursor factory
will be
used by cursor().

Using *async*=True an asynchronous connection will be created.

Any other keyword parameter will be passed to the underlying client
library: the list of supported parameters depends on the library
version.

"""
items 

[sqlalchemy] experience with marshmallow-sqlalchemy?

2016-06-07 Thread Iain Duncan
Hey y'all, just want to know if anyone has used marshmallow and
marshmallow-sqlalchemy and would like to share experiences/opinions.

thanks!



[sqlalchemy] Re: strategies/tools for conversion from json dicts to SQLAlchemy objects in large apps?

2016-06-07 Thread Iain Duncan
I should add that I'm not married to Colander either; if people have had
good experiences with Marshmallow-SQLAlchemy, I could very well switch to
that.

thanks!

On Tue, Jun 7, 2016 at 9:17 AM, Iain Duncan <iainduncanli...@gmail.com>
wrote:

> Hi folks, I'm working on an internal framework using Pyramid, Colander,
> and SQLAlchemy, and it's for much more enterprisey apps than I have
> previously done. We're looking at having a full fledged service layer, and
> using the ZCA as a DI framework a fair bit. I'm hoping folks can share
> their opinions on what they've found the best place and way to convert from
> validated dicts (originating from JSON from angular.js) to SA mapped
> objects is. I'm using Colander to validate the incoming json, which might
> be nested two or three levels deep, so after Colander conversion I know the
> fields in the dicts are ok, but they are still just python dicts.
>
> In the past, I used formencode to do both validation and conversion, but
> in this case I'm hoping to keep SQLA logic and coupling to a business/model
> layer that is more insulated from the web request layer so it can be used
> without issue from non-web-request contexts (rabbitmq jobs, scripts, etc.).
> So I'm imagining right now that outside this business/model layer the web
> controllers are dealing with json and validating with colander, inside the
> business/model we are dealing with either proxy objects to the model
> objects or direct model objects, and somewhere (?) we convert.
>
> Any suggestions on what people have found to be good strategies or tools
> (or reading!) for localizing and controlling the conversion from dicts to
> mapped objects with relationships would be much appreciated. Or suggestions
> that I'm just wrong and why.
>
> thanks,
> Iain
>



[sqlalchemy] strategies/tools for conversion from json dicts to SQLAlchemy objects in large apps?

2016-06-07 Thread Iain Duncan
Hi folks, I'm working on an internal framework using Pyramid, Colander, and
SQLAlchemy, and it's for much more enterprisey apps than I have previously
done. We're looking at having a full fledged service layer, and using the
ZCA as a DI framework a fair bit. I'm hoping folks can share their opinions
on where and how they've found it best to convert from validated
dicts (originating as JSON from angular.js) to SA mapped objects. I'm
using Colander to validate the incoming json, which might be nested two or
three levels deep, so after Colander conversion I know the fields in the
dicts are ok, but they are still just python dicts.

In the past, I used formencode to do both validation and conversion, but in
this case I'm hoping to keep SQLA logic and coupling to a business/model
layer that is more insulated from the web request layer so it can be used
without issue from non-web-request contexts (rabbitmq jobs, scripts, etc.).
So I'm imagining right now that outside this business/model layer the web
controllers are dealing with json and validating with colander, inside the
business/model we are dealing with either proxy objects to the model
objects or direct model objects, and somewhere (?) we convert.

Any suggestions on what people have found to be good strategies or tools
(or reading!) for localizing and controlling the conversion from dicts to
mapped objects with relationships would be much appreciated. Or suggestions
that I'm just wrong and why.

thanks,
Iain
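Purely as an illustration of the "convert at the edge of the business/model
layer" shape described above: a flat, top-level-fields-only sketch, where all
names are hypothetical and nested relationship dicts would need their own
handling.

def apply_dict(session, model_cls, data, pk_field='id'):
    # turn a Colander-validated dict into a mapped object: update the
    # existing row if a primary key is present, otherwise create one
    pk = data.get(pk_field)
    obj = session.query(model_cls).get(pk) if pk is not None else model_cls()
    for key, value in data.items():
        if key != pk_field:
            setattr(obj, key, value)
    session.add(obj)
    return obj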



[sqlalchemy] natural vs surrogate keys for user groups?

2015-07-23 Thread Iain Duncan
Hi folks, I'm starting a greenfield project and one component is a
stand-alone auth microservice. It will have one job only: log people in and
set their group list. After that, the group list and user id will be going
in jwt tokens and will be used in subsequent apps as fields for the remote
user in the wsgi env.

This has me wondering whether I should be using natural primary keys for
the group id to further simplify things. It seems like it would make life
easier, but as I have only ever used surrogate integer keys, I don't really
know their downsides. Would love to hear feedback from anyone on this or
similar issues.

thanks!
(And it's nice to be getting back to SQLAlchemy from Angular land, woot!)

Iain
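If it helps picture the trade-off, a minimal sketch of the natural-key
version being considered here; table and column names are made up.

from sqlalchemy import Column, ForeignKey, MetaData, String, Table

metadata = MetaData()

# natural primary key: the group name itself is the identifier that travels
# in the jwt token, so no join is needed to resolve an integer id
group_table = Table('group', metadata,
    Column('name', String(64), primary_key=True),
)

user_group_table = Table('user_group', metadata,
    Column('user_id', String(64), primary_key=True),
    Column('group_name', String(64), ForeignKey('group.name'), primary_key=True),
)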



[sqlalchemy] best books on DB design for sqlalchemy users?

2015-07-23 Thread Iain Duncan
I feel like I should really take up my db game for an upcoming set of
projects, and am wondering if there are any real standout books on db
design that fit well with the design philosophy of SQLA. Recos much
appreciated!

thanks
Iain



[sqlalchemy] Mysql issue with big composite primary keys

2014-11-09 Thread Iain Duncan
Hi folks, I've been googling and cargo culting to no avail here, hope
someone can help.
I'm trying to get an app running on MySQL on OS X, that was running fine on
MySQL on Linux.

The issue is I have some large composite primary keys:

class VarTrigger(Base):
    "assoc table that associates actions when a var is set"
    __tablename__ = 'var_trigger'
    # actual primary key is var_id, p_value
    var_id = Column(Integer, ForeignKey('var.id'), nullable=True,
                    primary_key=True)
    p_value = Column(String(255), primary_key=True, nullable=False)
    ...

When I try to create the tables I'm getting the following:

sqlalchemy.exc.OperationalError: (OperationalError) (1709, 'Index column
size too large. The maximum column size is 767 bytes.') '\nCREATE TABLE
var_description (\n\tid INTEGER NOT NULL AUTO_INCREMENT, \n\tvar_id
INTEGER, \n\tp_value VARCHAR(256) NOT NULL, \n\tlook TEXT, \n\tlist TEXT,
\n\tsearch TEXT, \n\tPRIMARY KEY (id), \n\tUNIQUE (var_id, p_value),
\n\tFOREIGN KEY(var_id) REFERENCES var (id)\n)\n\n' ()

I've been trying out things like changing the character collation and so on,
with no luck. Anyone know what should be done about this? I tried making a
surrogate primary key and just declaring a unique constraint on the two
columns and got the same message.

thanks
Iain
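Not from the thread, but for anyone landing on the same 1709 error: InnoDB
(without large index prefixes) caps index keys at 767 bytes, so a
VARCHAR(255) under a multi-byte charset can blow past it. One hedged
workaround is simply shrinking the indexed string so it fits; Base and the
model are as in the post above.

from sqlalchemy import Column, ForeignKey, Integer, String

class VarTrigger(Base):
    "assoc table that associates actions when a var is set"
    __tablename__ = 'var_trigger'
    var_id = Column(Integer, ForeignKey('var.id'), primary_key=True)
    # 191 chars * 4 bytes (utf8mb4) = 764 bytes, under the 767-byte limit;
    # with a 3-byte utf8 charset, 255 chars already fits
    p_value = Column(String(191), primary_key=True, nullable=False)

For a plain (non-primary-key) index, the MySQL dialect also lets you index
just a prefix of the column via the mysql_length option on Index.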



Re: [sqlalchemy] queue pooling, mysql sleeping, for very occasional use apps?

2014-09-21 Thread Iain Duncan
Ok thanks all, I'm pretty sure a low timeout is all I'll need, but I was
curious about NullPool. Thanks for the info.

iain

On Sat, Sep 20, 2014 at 7:52 AM, Michael Bayer mike...@zzzcomputing.com
wrote:


 On Sep 20, 2014, at 10:50 AM, Michael Bayer mike...@zzzcomputing.com
 wrote:


 On Sep 19, 2014, at 8:43 PM, Iain Duncan iainduncanli...@gmail.com
 wrote:

 So can anyone tell me what exactly happens with NullPool, I'm not clear
 from the docs. Does that mean a fresh connection will be made on a new hit,
 and we'll never get the gone away, but at the expense of slower
 connections?


 with the important caveat that your app returns connections to the pool
 when its idle.   If you check out a connection and just hold onto it
 somewhere for hours (which you shouldn’t), you can still get that error.


 I bring this up because, pool_timeout is all you need otherwise.   if you
 are still getting that error with a low pool_timeout, you may want to look
 into connections being released.





  Is it terribly slower?

 thanks!
 Iain


 On Fri, Sep 19, 2014 at 2:27 PM, Iain Duncan iainduncanli...@gmail.com
 wrote:

 Thanks guys!

 Iain

 On Fri, Sep 19, 2014 at 12:41 PM, Jonathan Vanasco jvana...@gmail.com
 wrote:

 Just to add to jeff's response, I had a similar situation with Postgres
 under a twisted app a while back.

 Someone else on the twisted list was using Mysql + Twisted + Sqlalchemy
 and pointed to some pessimistic reconnect code they used in buildbot :


 http://twistedmatrix.com/pipermail/twisted-python/2014-June/028410.html



 --
 You received this message because you are subscribed to the Google
 Groups sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send
 an email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.




 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.



 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.


  --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.




[sqlalchemy] queue pooling, mysql sleeping, for very occasional use apps?

2014-09-19 Thread Iain Duncan
Hi folks, I'm putting together something for a client that will get used
very occasionally, and has some worker processes that use SQLA. I'm
wondering what the right approach is for the dreaded "MySQL has gone away".
Should I just use some absurdly high number for pool_recycle (i.e. months),
or is there a smarter way to do it? It's a process to send out a bunch of
sms messages via twilio, so it totally doesn't matter if it takes a bit to
wake up.

Is disabling pooling with NullPool the right way to go?

thanks!
Iain
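For anyone weighing the same options, a minimal sketch of the two engine
configurations discussed in this thread; the URL is a placeholder.

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# option 1: no pooling at all; every checkout opens a fresh connection, so an
# idle app never holds one long enough for MySQL to kill it server-side
engine = create_engine("mysql://user:pw@localhost/dbname", poolclass=NullPool)

# option 2: keep the pool, but recycle connections before MySQL's
# wait_timeout (8 hours by default) closes them out from under us
engine = create_engine("mysql://user:pw@localhost/dbname", pool_recycle=3600)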



Re: [sqlalchemy] Re: queue pooling, mysql sleeping, for very occasional use apps?

2014-09-19 Thread Iain Duncan
Thanks guys!

Iain

On Fri, Sep 19, 2014 at 12:41 PM, Jonathan Vanasco jvana...@gmail.com
wrote:

 Just to add to jeff's response, I had a similar situation with Postgres
 under a twisted app a while back.

 Someone else on the twisted list was using Mysql + Twisted + Sqlalchemy
 and pointed to some pessimistic reconnect code they used in buildbot :


 http://twistedmatrix.com/pipermail/twisted-python/2014-June/028410.html


  --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.




Re: [sqlalchemy] Re: queue pooling, mysql sleeping, for very occasional use apps?

2014-09-19 Thread Iain Duncan
So can anyone tell me what exactly happens with NullPool? I'm not clear
from the docs. Does that mean a fresh connection will be made on each new
hit, and we'll never get the "gone away" error, but at the expense of slower
connections?

Is it terribly slower?

thanks!
Iain


On Fri, Sep 19, 2014 at 2:27 PM, Iain Duncan iainduncanli...@gmail.com
wrote:

 Thanks guys!

 Iain

 On Fri, Sep 19, 2014 at 12:41 PM, Jonathan Vanasco jvana...@gmail.com
 wrote:

 Just to add to jeff's response, I had a similar situation with Postgres
 under a twisted app a while back.

 Someone else on the twisted list was using Mysql + Twisted + Sqlalchemy
 and pointed to some pessimistic reconnect code they used in buildbot :


 http://twistedmatrix.com/pipermail/twisted-python/2014-June/028410.html


  --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/d/optout.






[sqlalchemy] Re: using SQLAlchemy on mobile devices?

2014-04-28 Thread Iain Duncan
For anyone browsing this later: apparently it can be done with Kivy, but I
haven't tried it. Kivy looks very promising as a way to use Python on iOS
and Android though!

iain


On Sun, Apr 27, 2014 at 10:30 AM, Iain Duncan iainduncanli...@gmail.com wrote:

 Sorry if this is a stupid question, but does anyone here know if any of
 the various Python-to-native-mobile platforms allow one to use sqlalchemy
 with sqlite for persistence? (Kivy? Pythonista? Whatever else is out
 there?) I'm wondering whether it's possible (yet?) to write an app that:

 -  runs locally on iphone and android without data connection to a server
 - looks half decent
 - can use sqlalchemy for persistence

 thanks!
 Iain




[sqlalchemy] using SQLAlchemy on mobile devices?

2014-04-27 Thread Iain Duncan
Sorry if this is a stupid question, but does anyone here know if any of the
various Python-to-native-mobile platforms allow one to use sqlalchemy with
sqlite for persistence? (Kivy? Pythonista? Whatever else is out there?) I'm
wondering whether it's possible (yet?) to write an app that:

-  runs locally on iphone and android without data connection to a server
- looks half decent
- can use sqlalchemy for persistence

thanks!
Iain



[sqlalchemy] Sorting on joined columns with a mapper?

2014-02-08 Thread Iain Duncan
Hi folks, I have a mapper that has an order by clause, and I was hoping to
get it to work with secondary sorts on the joined tables, wondering if this
is possible, or if not, how I could add some other kind of more complex
sorting?

Here's what I hoped would work but didn't:


mappers['Membership'] = mapper(Membership, membership_table, properties={
    'client': relation(Client, backref=backref('memberships',
                       cascade='all, delete, delete-orphan')),
    'billing_type': relation(BillingType, backref='memberships'),
    'membership_type': relation(MembershipType, backref='memberships'),
    'orgunit': relation(Orgunit, backref=backref('memberships',
                        cascade='all, delete, delete-orphan')),
    'roles_assocs': relation(MembershipsRolesAssoc,
                             cascade='all, delete, delete-orphan',
                             backref='membership')  # was 'memberships'
}, order_by=[membership_table.c.date_end, client_table.c.name_last])


And the error:
OperationalError: (OperationalError) (1054, Unknown column
'client.name_last' in 'order clause') 'SELECT membership.id AS
membership_id, membership.client_id AS membership_client_id,
membership.orgunit_id AS membership_orgunit_id, membership.date_start AS
membership_date_start, membership.date_end AS membership_date_end,
membership.billing_type_id AS membership_billing_type_id,
membership.membership_type_id AS membership_membership_type_id,
membership.is_active AS membership_is_active, membership.notes AS
membership_notes \nFROM membership \nWHERE %s = membership.orgunit_id ORDER
BY membership.date_end, client.name_last' (21L,)


Any tips much appreciated!
thanks
Iain
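No reply is captured here, but the error itself shows the problem: the
mapper-level order_by ends up in a SELECT that only has membership in its
FROM clause, so client.name_last is unknown. A hedged sketch of doing the
secondary sort at query time instead (session is assumed):

memberships = (
    session.query(Membership)
    .join(Client)  # bring client into the FROM clause
    .order_by(membership_table.c.date_end, client_table.c.name_last)
    .all()
)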



Re: [sqlalchemy] table inheritance: how to change a record from base type to derived type?

2013-12-11 Thread Iain Duncan
On Tue, Dec 10, 2013 at 3:54 PM, Michael Bayer mike...@zzzcomputing.com wrote:


 On Dec 10, 2013, at 5:04 PM, Iain Duncan iainduncanli...@gmail.com
 wrote:

 Hi, I'm stuck on how to do something that I'm sure must be possible.

 I have two kinds of records, using table inheritance: Client and
 SpecialClient. SpecialClient has extra fields and its own polymorphic
 identity. I need to *promote* an existing client record to a special
 client. I tried just making a SpecialClient and copying over attributes,
 but this is giving me "can't set attribute" error messages. Can anyone tell
 me what the *right* way to change a polymorphic type, while keeping its
 ID the same, would be? In my case, the ID in the derived type is an fkey to
 the base type table's id col.


 if it's joined inheritance, you have to INSERT into the new table manually,
 that feature isn’t supported right now.   if it’s single, try emitting an
 UPDATE for the “discriminator” column, then reloading the object.


I think it's joined inheritance? The derived table's primary key is an
id column that is also an fkey to the base table's id column, and the
base table has a _type column holding the poly id. Does this mean I
should drop into sql directly to create the record in the derived table and
it will just work?

thanks Michael!

iain
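A hedged sketch of the manual-INSERT route Michael describes for joined
inheritance, using table names that follow the description above; whether an
expunge-and-reload is enough afterwards is an assumption, not something
confirmed in the thread.

# inside the same session/transaction that will keep using the object
session.execute(
    special_client_table.insert().values(id=client.id, extra_field='...')
)
# point the discriminator at the derived type
session.execute(
    client_table.update()
    .where(client_table.c.id == client.id)
    .values(_type='special_client')
)
session.expunge(client)  # drop the stale Client instance from the identity map
promoted = session.query(SpecialClient).get(client.id)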






 Thanks!
 Iain

 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/groups/opt_out.






[sqlalchemy] table inheritance: how to change a record from base type to derived type?

2013-12-10 Thread Iain Duncan
Hi, I'm stuck on how to do something that I'm sure must be possible.

I have two kinds of records, using table inheritance: Client and
SpecialClient. SpecialClient has extra fields and its own polymorphic
identity. I need to *promote* an existing client record to a special
client. I tried just making a SpecialClient and copying over attributes,
but this is giving me "can't set attribute" error messages. Can anyone tell
me what the *right* way to change a polymorphic type, while keeping its
ID the same, would be? In my case, the ID in the derived type is an fkey to
the base type table's id col.

Thanks!
Iain



Re: [sqlalchemy] Re: most established SQLA fixture tools?

2013-10-17 Thread Iain Duncan
Thanks Jonathan, Alembic has been on my to-check-out list for a while.
Does anyone here use it for test db population?

I'm honestly not seeing the big advantage of populating from json files,
but I suppose there must be some. At any rate I have them from a previous
project the same client has done. I suppose it might be a trivial job to
populate the db from json with just SA code in a helper class; maybe a
dependency for doing this is overkill?

thanks
Iain
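Since "just SA code in a helper class" is left open above, a minimal sketch
of what that could look like, assuming one json file shaped like
{"Client": [{...}, ...]}; the MODELS registry and the model names are
hypothetical.

import json

MODELS = {'Client': Client, 'Membership': Membership}  # hypothetical registry

def load_fixtures(session, path):
    # seed the test db from a json file keyed by model name
    with open(path) as f:
        data = json.load(f)
    for model_name, rows in data.items():
        model_cls = MODELS[model_name]
        for row in rows:
            session.add(model_cls(**row))
    session.commit()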



On Wed, Oct 16, 2013 at 3:30 PM, Jonathan Vanasco jonat...@findmeon.com wrote:

 I saw this a while back:


https://pypi.python.org/pypi/mixer/1.1.6

 which claims to handle both Django & SQLAlchemy, so it might be more
 familiar to them.

 there are a few fixtures projects for Sqlalchemy on PyPi

 I *think* you might be able to get away with some stuff using Alembic, but
 don't quote me on that.


  --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/groups/opt_out.




[sqlalchemy] locking tables with orm? need to generate PK manually...

2013-10-16 Thread Iain Duncan
Hi folks, I'm working on a legacy database that has some issues with PK
generation. The long and short of it is that for now, I need to generate a
PK by locking a table, because that's what they've been doing. I'm using
the orm, and basically want to be able to use a session briefly with the
table locked (to find the highest PK in there and increment it; I
know/hope we can fix that later). I'm wondering what the best way to do
this is.

I could use a connection to lock the table with raw sql, but then can I use
a session for the query?

I think what I want in magical pseudo code is this, comments appreciated
(aside from the fact that this is not a good way to get a pk, no choice
there)

# lock the table somehow?
session.LOCKER( table to lock)
last_pk = session.query(Client).order_by( desc(Client.pk)
).limit(1).get().pk
# insert new client
session.add( Client(pk= last_pk+1, **other_values) )
session.commit()
# unlock table
session.UNLOCKER

thanks
Iain
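Following Michael's answer elsewhere in this thread (lock via
session.execute(), keep using the same session, commit to release), a hedged
sketch; the LOCK TABLE syntax shown is the postgres flavour and is
backend-specific, and Client/other_values are the placeholders from the
pseudocode above.

from sqlalchemy import desc

# lock and insert share one session, so they share one transaction
session.execute("LOCK TABLE client IN EXCLUSIVE MODE")
last_pk = session.query(Client).order_by(desc(Client.pk)).first().pk
session.add(Client(pk=last_pk + 1, **other_values))
session.commit()  # commit ends the transaction and clears the lock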



Re: [sqlalchemy] locking tables with orm? need to generate PK manually...

2013-10-16 Thread Iain Duncan
Thanks Michael, that's what I hoped!

iain


On Wed, Oct 16, 2013 at 12:13 PM, Michael Bayer mike...@zzzcomputing.com wrote:


 On Oct 16, 2013, at 1:38 PM, Iain Duncan iainduncanli...@gmail.com
 wrote:

  Hi folks, I'm working on a legacy database that has some issues with PK
 generation. The long and short of it is that for now, I need to generate a
 PK by locking a table, because that's what they've been doing. I'm using
 the orm, and basically want to be able to use a session briefly with the
 table locked (to find the highest PK in there and increment it; I
 know/hope we can fix that later). I'm wondering what the best way to do
 this is.
 
  I could use a connection to lock the table with raw sql, but then can I
 use a session for the query?

 sure.  session.execute("lock the table...") then keep using that Session
 in the same transaction.   once that Session has commit() called the
 transaction is over and the lock will be cleared.








[sqlalchemy] most established SQLA fixture tools?

2013-10-16 Thread Iain Duncan
Hi folks, I'm trying to stick to the most established packages for doing a
lot of stuff I'm used to doing with my own tools years and years ago, in
order to keep the new job happy. What are most people using with SQLAlchemy
for populating test databases?  They are used to Django, so if there's
something that works similarly to the fixtures in the django test framework
that would be ideal.

Thanks!
Iain



[sqlalchemy] sqlalchemy 0.8 stuck on a join with reflected tables

2013-10-12 Thread Iain Duncan
Hey folks, I hit a new error message and the googling is not turning up
much, and as I'm new to postgres, I'm frankly over my head. I have a whack
of reflected tables and I'm trying to get a join working. The one I'm
ultimately after has a composite primary key of 2 different foreign keys
and an int (can be 1, 3, 6, or 12). Here's its postgres describe:

   Table public.appurls
Column|  Type  | Modifiers | Storage  | Description
--++---+--+-
 urlagentcode | character varying(255) |   | extended |
 appfee   | double precision   |   | plain|
 groupagentid | integer| not null  | plain|
 payterm  | character varying(10)  | not null  | extended |
 tierid   | integer| not null  | plain|
 billingcycle | character varying(255) |   | extended |
 alias| character varying(255) |   | extended |
 fulfillment  | character varying(2)   |   | extended |
Indexes:
pk_appurls PRIMARY KEY, btree (groupagentid, tierid, payterm)
appurls_appfee_idx btree (appfee)
appurls_groupagentid_idx btree (groupagentid)
appurls_payterm_idx btree (payterm)
appurls_tierid_idx btree (tierid)
Has OIDs: no


I set up my table and my mappers like so:

appurl_table = Table('appurls', metadata,
    Column('tierid', Integer, ForeignKey('tier.tierid'), primary_key=True),
    Column('groupagentid', Integer, ForeignKey('groupagent.groupagentid'),
           primary_key=True),
    autoload=True
)

mappers['AppUrl'] = mapper(AppUrl, appurl_table, properties={
    # specify relationship to Tier
    'tier': relation(Tier, backref='appurls',
                     primaryjoin=(appurl_table.c.tierid == tier_table.c.tierid)),
    # specify relationship to GroupAgent
    'groupagent': relation(GroupAgent, backref='appurls',
                           primaryjoin=(appurl_table.c.groupagentid ==
                                        groupagent_table.c.groupagentid))
})

And then I attempt a query with a daisy chain of joins and I get the
traceback I've posted at the bottom with the message:

ArgumentError: Could not locate any simple equality expressions involving
locally mapped foreign key columns for primary join condition
'appurls.groupagentid = groupagent.groupagentid' on relationship
AppUrl.groupagent.  Ensure that referencing columns are associated with a
ForeignKey or ForeignKeyConstraint, or are annotated in the join condition
with the foreign() annotation. To allow comparison operators other than
'==', the relationship can be marked as viewonly=True.

I'm just joining with

app_url = request.db.query(AppUrl
    ).join(Tier
    ).join(GroupAgent)  # ... and it goes on and on

I *thought* I'd specified the join conditions well enough in the above, but
clearly I'm missing something, any help much appreciated!
thanks!
Iain

traceback:
  File /home/qualbe/src/PaymentProxy/paymentproxy/helpers.py, line 30, in
get_appurl
app_url = request.db.query(AppUrl
  File
/home/qualbe/src/eggs/SQLAlchemy-0.8.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py,
line 1107, in query
return self._query_cls(entities, self, **kwargs)
  File
/home/qualbe/src/eggs/SQLAlchemy-0.8.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py,
line 115, in __init__
self._set_entities(entities)
  File
/home/qualbe/src/eggs/SQLAlchemy-0.8.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py,
line 124, in _set_entities
self._set_entity_selectables(self._entities)
  File
/home/qualbe/src/eggs/SQLAlchemy-0.8.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py,
line 157, in _set_entity_selectables
ent.setup_entity(*d[entity])
  File
/home/qualbe/src/eggs/SQLAlchemy-0.8.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py,
line 2861, in setup_entity
self._with_polymorphic = ext_info.with_polymorphic_mappers
  File
/home/qualbe/src/eggs/SQLAlchemy-0.8.2-py2.7-linux-x86_64.egg/sqlalchemy/util/langhelpers.py,
line 612, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
  File
/home/qualbe/src/eggs/SQLAlchemy-0.8.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/mapper.py,
line 1458, in _with_polymorphic_mappers
configure_mappers()
  File
/home/qualbe/src/eggs/SQLAlchemy-0.8.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/mapper.py,
line 2153, in configure_mappers
mapper._post_configure_properties()
  File
/home/qualbe/src/eggs/SQLAlchemy-0.8.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/mapper.py,
line 1275, in _post_configure_properties
prop.init()
  File
/home/qualbe/src/eggs/SQLAlchemy-0.8.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/interfaces.py,
line 231, in init
self.do_init()
  File
/home/qualbe/src/eggs/SQLAlchemy-0.8.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/properties.py,
line 1028, in do_init
self._setup_join_conditions()
  File
/home/qualbe/src/eggs/SQLAlchemy-0.8.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/properties.py,
line 1102, in _setup_join_conditions
can_be_synced_fn=self._columns_are_mapped
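
For reference, a minimal sketch of the foreign() annotation route the error
message suggests, reusing the table and class names defined earlier in this
message; treat it as illustrative only, not the resolution the thread reached:

from sqlalchemy.orm import mapper, relation, foreign

mappers['AppUrl'] = mapper(AppUrl, appurl_table, properties={
    # mark the local columns as the foreign side of each join condition
    'tier': relation(Tier, backref='appurls',
        primaryjoin=(foreign(appurl_table.c.tierid) == tier_table.c.tierid)),
    'groupagent': relation(GroupAgent, backref='appurls',
        primaryjoin=(foreign(appurl_table.c.groupagentid) ==
                     groupagent_table.c.groupagentid)),
})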
  

Re: [sqlalchemy] Session creation without scoped session?

2013-10-11 Thread Iain Duncan
Thanks Michael, that's very helpful.

Iain


On Fri, Oct 11, 2013 at 7:18 AM, Michael Bayer mike...@zzzcomputing.comwrote:


 On Oct 10, 2013, at 8:57 PM, Iain Duncan iainduncanli...@gmail.com
 wrote:

  Seems like there are some variety of opinion on some stuff on the
 pyramid list so I thought I'd come straight to the source. In Pyramid, I
 can attach a server wide shared object to the registry which is created on
 server startup, and I can specify a factory for making requests that
 creates the request object, available as a local argument (*not* a thread
 local request import) to all code. In the request object I can also add an
 end-of-life callback.
 
  I've been using SQLAlchemy by:
 
  - creating the engine and sessionmaker at startup time, stashing in the
 ZCA registry
  - calling the session maker at the beginning of the request lifecycle
 and then creating the session for that request and attaching to the request.
  - doing a cleanup at end of request
 
  According to my (possibly flawed) understanding of the SA docs, this
 should mean I do not need to use a scoped session maker.

 that is correct.


  The session object never gets imported as a magic global thread local
 thing, it's always explicitly passed in as an arg.

 you can also link the two things together like this, not that you'd need
 to:

 from pyramid.threadlocal import get_current_request
 Session = scoped_session(sessionmaker(), scopefunc=get_current_request)

 now you're only using Pyramid's thread local voodoo, none in SQLAlchemy.


  Am I correct in this? Is there any advantage in using a scoped session
 anyway?

 I find plenty.   I'm usually concerned about code that needs to run
 outside the context of a web request, like background jobs and utilities -
 this code still makes full use of the web app's model and all, just not the
 request.So without Pyramid's hook for session creation, someone else
 has to establish the new Session.   ScopedSession is nice because nobody
 has to tag themselves as the creator of the Session, it just creates the
 new Session as soon as someone starts using it.

 Also having to pass a session around to all methods and functions that
 aren't web-request-aware can become burdensome.  My app over here actually
 has three different sessions for different use cases, so I guess if I
 weren't using scoped sessions I'd need to create some more substantial
 context object that I pass around to all these methods which has all three
 sessions, but then I'd need to reconcile that context object with Pyramid's
 request also.Or perhaps, you can just make Pyramid's request the
 context itself and when you're running in a non-web context just use
 DummyRequest or something, but doesn't that seem strange/weird also?
 Model objects that have nothing to do with web requests all have to become
 hardcoded to the concept of a request, that seems no less distasteful
 than thread locals to me.

 With a scoped session, none of these decisions even need to be made, you
 don't have to pass anything around to your model, the model has a place to
 get its context for database connectivity just like it has a place to get
 context for anything else it needs.  There's no downside to scoped sessions
 that I've observed and the Pyramid community's aversion to thread local
 voodoo IMHO is slightly misplaced.  Thread locals are, like so many other
 things, easy to misuse, particularly when random bits of code use them to
 communicate state and such.  But having a few well-located and globally
 accessible registries that are specific to your application, not a problem
 at all.  I see no evidence for Pyramid's assertion that they decrease
 testability, it's standard practice to patch well-known globals for
 testing, and in fact mock.patch is part of the Python standard library
 now.
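
For readers following along, a minimal sketch of the explicit per-request
wiring described at the top of this thread, assuming Pyramid 1.4's
config.add_request_method and request.add_finished_callback; the 'dbsession'
name and the settings key are illustrative, not from the thread:

from pyramid.config import Configurator
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

def main(global_config, **settings):
    config = Configurator(settings=settings)
    engine = create_engine(settings['sqlalchemy.url'])
    session_factory = sessionmaker(bind=engine)

    def dbsession(request):
        # one session per request, closed when the request ends
        session = session_factory()
        request.add_finished_callback(lambda req: session.close())
        return session

    # request.dbsession is created lazily, once per request
    config.add_request_method(dbsession, 'dbsession', reify=True)
    return config.make_wsgi_app()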





-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.


Re: [sqlalchemy] Re: using SQLA in distributed, concurrent, or asynchronous contexts?

2013-10-05 Thread Iain Duncan
Thanks guys. Jonathan, that is, I think, what will be going on.

So I'm not quite clear from your comments: is using SQLAlchemy with Twisted
practical now or not? Is it better to do it with Twisted than with Gevent?

If you have the time, it would be great if you could explain the comment
about persisting the session further.

At the moment I'm just trying to figure out what I want to figure out. ;-)

thanks
Iain


On Thu, Oct 3, 2013 at 5:05 PM, Jonathan Vanasco jonat...@findmeon.comwrote:

 Also, a decent strategy to use is this:

 a) Your client does a lot of business logic in Pyramid.  When you need to
 bill you hit an internal API or create a celery task

 b) You render a waiting page, and have ajax, or refresh, check to see if
 the internal API, or celery, is done processing

 c) when the internal api is done processing, you show the thank you
 screen.

 getting my SQLAlchemy model to work on Celery was a bit of a nightmare.  I
 can try and dig up exactly what I did.

 IIRC, the problem was that Celery didn't seem to have an 'on
 initialization' routine, so I had to do some janky shit to call my SQLAlchemy
 initialization stuff.
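
As a hedged illustration of that initialization point: with current Celery,
one way is to build the engine once per worker process via the
worker_process_init signal. The broker URL, DB URL and the task body below
are made up, so this is a sketch of the technique rather than what the thread
actually did:

from celery import Celery
from celery.signals import worker_process_init
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

app = Celery('tasks', broker='redis://localhost:6379/0')
DB_URL = 'postgresql://user:pass@localhost/mydb'
Session = sessionmaker()

@worker_process_init.connect
def init_db(**kwargs):
    # each forked worker gets its own engine and connection pool
    engine = create_engine(DB_URL)
    Session.configure(bind=engine)

@app.task
def process_payment(payment_id):
    session = Session()
    try:
        # ... do the billing work with session ...
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()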

 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/groups/opt_out.


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.


[sqlalchemy] using SQLA in distributed, concurrent, or asynchronous contexts?

2013-10-03 Thread Iain Duncan
Hi folks, I just got a new position and am hoping I can advocate for using
SQLAlchemy there. I will be doing some backend web service stuff that
interacts with a db, but likely in some form of distributed architecture.
It's all going to be on AWS. I believe there will be some service
implementation that either needs to be highly performant or highly
concurrent or some combination of both, i.e. there may be lots of clients
hitting the service, and the service has to interact with third-party
payment services which might be slow, and we don't want the server choking
just waiting on the third party responses.

Up till now I haven't had to do stuff in this domain. Can anyone tell me
what I might want to look into as far as ways of handling concurrency or
non-blocking services that will play fair with SQLAlchemy? I read that
trying to get SA working on Twisted is not a good plan.

Would love any pointers on what to read up on, use, etc. Right now all my
experience is on Pylons/Pyramid and has not had to use any fancy messaging,
deferred processing, or concurrency handling, so I think I have a lot of
reading ahead of me.

Thanks!
Iain

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.


Re: [sqlalchemy] finding sqlalchemy jobs?

2013-09-27 Thread Iain Duncan
Thanks Michael, much appreciated!

Iain


On Fri, Sep 27, 2013 at 7:43 AM, Michael Bayer mike...@zzzcomputing.comwrote:

 I'll keep my ears open.Where I work we've taken on remote contractors
 in the past but at the moment it's pretty tight.


 On Sep 25, 2013, at 6:49 PM, Iain Duncan iainduncanli...@gmail.com
 wrote:

 Hi folks, apologies for sounding spammy. I'm wondering if anyone has
 suggestions on how to find remote work doing SQLAlchemy stuff, been wanting
 to leave the uncertain world of freelancing and do some actual coding for a
 while. I found some posts tagging SQLAlchemy on Stack Overflow Careers, but
 if anyone else has suggestions on how to find job posts for specific Python
 technologies, I'd love to hear them.

 Thanks!
 Iain
 idun...@xornot.com
 www.xornot.com

 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to sqlalchemy+unsubscr...@googlegroups.com.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 Visit this group at http://groups.google.com/group/sqlalchemy.
 For more options, visit https://groups.google.com/groups/opt_out.




-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.


[sqlalchemy] finding sqlalchemy jobs?

2013-09-25 Thread Iain Duncan
Hi folks, apologies for sounding spammy. I'm wondering if anyone has
suggestions on how to find remote work doing SQLAlchemy stuff, been wanting
to leave the uncertain world of freelancing and do some actual coding for a
while. I found some posts tagging SQLAlchemy on Stack Overflow Careers, but
if anyone else has suggestions on how to find job posts for specific Python
technologies, I'd love to hear them.

Thanks!
Iain
idun...@xornot.com
www.xornot.com

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/groups/opt_out.


[sqlalchemy] wake of RoR sql injection vulnerability

2013-01-09 Thread Iain Duncan
Just curious as to whether anyone has seen changes in interest in
SQLAlchemy in the wake of the Ruby on Rails SQL injection vulnerability, or
if anyone has any thoughts on it. Or worse, if it's going to tar SQLA with
the same brush.

This is pure conjecture, and should be taken with a giant grain of salt,
but I wonder whether the monolithic, almost closed-garden nature of the RoR
ecosystem contributed to the situation, compared with how things work in
Python. Of course that could just be a big confirmation bias on my part. I
would welcome thoughts from those more experienced than me.

Iain

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] wake of RoR sql injection vulnerability

2013-01-09 Thread Iain Duncan
Thanks Michael, that's really helpful.

FWIW, the Pyramid version of Zope's traversal does require explicit
handling, thus allowing one the pros of traversal without surrendering fine
grained control. I *believe* part of the impetus for that was to reduce the
Zope Magic in the Zope 2 traversal mechanism.

iain

On Wed, Jan 9, 2013 at 9:27 AM, Michael Bayer mike...@zzzcomputing.comwrote:

 I've just read http://www.insinuator.net/2013/01/rails-yaml/ and really,
 the huge fail here is that Rails allows the construction of arbitrary
 application objects based on parameters.   So while SQL injection is of
 course part of the hack here, the ORM part of rails isn't what's failing in
 this case - it's that the web framework is basically allowing the
 environment of the whole application to be controlled from a web request,
 and this post coins a great new term to describe it called an object
 injection attack.There are plenty of other ways to attack an
 application with this that don't even need the database.

 SQLAlchemy and any other ORM of course provides for special SQL expression
 objects to be placed into any statement, where you can provide the string
 contents.  But granting that capability to your web users is pretty much
 like running untrusted input through eval().

 This is kind of the nightmare scenario that makes me avoid Zope's
 traversal idea like the plague, even though I know that is something
 different, the whole concept of GET/POST data having any kind of implicit
 link to application structure without explicit code mediating those paths
 is very scary.

 So if any approach is to take a hit here, it's monolithic web stacks that
 provide wide and complex data pipes with a heavy emphasis on implicit
 translation of data between tiers.


 On Jan 9, 2013, at 12:07 PM, Michael Bayer wrote:

  I'd say if RoR was actually quoting strings in their ORM then things are
 terribly mis-designed.   SQLAlchemy always defers to the DBAPI for bind
 parameter handling, the only area where we have had a vulnerability was
 that we weren't using these binds for the LIMIT/OFFSET parameters of a
 SELECT statement.   That issue was fixed years ago, there were many
 messages generated from automated Linux distro trackers and such but that
 was pretty much it.
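
For readers unfamiliar with the bind-parameter point above, a minimal
illustration (the table and column names are made up): the untrusted value
travels to the DBAPI as a bound parameter rather than being spliced into the
SQL string.

from sqlalchemy import create_engine, text

engine = create_engine('sqlite://')
with engine.connect() as conn:
    conn.execute(text("CREATE TABLE users (name VARCHAR)"))
    user_input = "x'; DROP TABLE users; --"
    # safe: :name is a bind parameter handled by the DBAPI
    rows = conn.execute(
        text("SELECT * FROM users WHERE name = :name"),
        {"name": user_input},
    ).fetchall()
    # unsafe (never do this): interpolation puts the value in the SQL itself
    # conn.execute(text("SELECT * FROM users WHERE name = '%s'" % user_input))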
 
  There's very little to compare between ActiveRecord and SQLAlchemy as
 the philosophies are completely different (for example, SQLAlchemy tries to
 stay as agnostic of convention as possible) so I wouldn't think the
 security issues of RoR would have any tendency to spread doubt about other
 projects.   SQLAlchemy's approach insists that the user has a clear idea of
 what he or she is doing in the first place, as opposed to relying upon
 tools to figure everything out (tools are for automation, not design), so
 that puts us in a much better spot right off.
 
  That said, my constant defenses of SQLAlchemy are generally oriented
 around dispelling ORM myths spread by people's experiences with other ORMs,
 so in that sense whatever failures RoR is bringing aren't really anything
 new.
 
 
  On Jan 9, 2013, at 11:57 AM, Iain Duncan wrote:
 
  Just curious as to whether anyone has seen changes in interest in
 SQLAlchemy in the wake of the Ruby on Rails SQL injection vulnerability, or
 if anyone has any thoughts on it. Or worse, if it's going to tar  SQLA with
 the same brush.
 
  This is pure conjecture, and should be taken with a giant grain of
 salt, but I wonder whether the monolithic, almost closed-garden nature of
 the RoR ecosystem contributed to the situation compared to the situation in
 Python. Of course that could just be a big confirmation bias on my part.
 Would welcome thoughts from those more experienced than me.
 
  Iain
 
  --
  You received this message because you are subscribed to the Google
 Groups sqlalchemy group.
  To post to this group, send email to sqlalchemy@googlegroups.com.
  To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
  For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.
 
  --
  You received this message because you are subscribed to the Google
 Groups sqlalchemy group.
  To post to this group, send email to sqlalchemy@googlegroups.com.
  To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
  For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.
 

 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group

Re: [sqlalchemy] query joining and filtering with table inheritance

2012-09-28 Thread Iain Duncan
On Fri, Sep 28, 2012 at 7:06 AM, Michael Bayer mike...@zzzcomputing.comwrote:

 This is the right answer.   if you
 query(Contact).with_polymorphic([Person, Organization]), you can refer to
 Person.attribute and Organization.attribute in your query.   It will
 produce a LEFT OUTER JOIN from contact to each of the related tables.
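
Spelled out as a sketch against the classes from this thread; the or_()/ilike()
filtering is illustrative, drawn from the search-box description in the
original post rather than anything posted verbatim:

from sqlalchemy import or_

term = search_form_values['name']
contacts = (
    session.query(Contact)
    .with_polymorphic([Person, Organization])
    .filter(or_(
        Person.name_last.ilike('%' + term + '%'),
        Person.name_first.ilike('%' + term + '%'),
        Organization.name.ilike('%' + term + '%'),
    ))
    .all()
)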


You guys rock, thanks!
Iain







 On Sep 28, 2012, at 7:05 AM, lenart...@volny.cz wrote:

  Hello.
 
  I am not at my development environment now so I cannot test it, but I
 think you want this:
 
 
 http://docs.sqlalchemy.org/en/rel_0_7/orm/inheritance.html?highlight=of_type#basic-control-of-which-tables-are-queried
 
  Ladislav Lenart
 
  PS: I'm by no means a DB or SQLA expert too :-)
 
 
  From: Iain Duncan iainduncanli...@gmail.com
  First off, sorry if it turns out that what I'm trying to do is smoke
 crack
  here, I'm by no means a DB or SQLA expert. =)
 
  I have two classes, Person and Organization, that use table inheritance
 to
  inherit from Contact. ( Person.id is an fkey to Contact.id ), which is
  working fine.
 
  I want to search from a search box and get a list of all contacts.
  Normally, when I'm after a set of things, I do this:
 
  def build_query(self):
      return self.session.query(Person)
 
  def filter_query(self, query):
      if self.search_form_values.get('name_last', None):
          query = query.filter(
              Person.name_last == self.search_form_values.get('name_last'))
      ...
      return query
 
  And then elsewhere the query gets executed. The problem is that I want
 to
  get back a list of Contact objects,
  but I want to filter on Organization.name, Person.name_last, and
  Person.name_first, all from the values typed
  into the 'name' box on the search form. I can't figure out how to do
 this
  with one query, not sure if it's even possible.
  I'd sure like to avoid having two separate queries with interleaved
 results
  though.
 
  Is there some filtering kung fu that would allow me to do the above? ie
 
  - get back all Contacts that:
   - have name_last or name_first match, if the Contact is a Person
   - have name match, if the contact is an Organization
 
  Thanks
  Iain
 
  --
  You received this message because you are subscribed to the Google
 Groups sqlalchemy group.
  To post to this group, send email to sqlalchemy@googlegroups.com.
  To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
  For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.
 
 
 
  --
  You received this message because you are subscribed to the Google
 Groups sqlalchemy group.
  To post to this group, send email to sqlalchemy@googlegroups.com.
  To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
  For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.
 



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] query joining and filtering with table inheritance

2012-09-27 Thread Iain Duncan
First off, sorry if it turns out that what I'm trying to do is smoke crack
here, I'm by no means a DB or SQLA expert. =)

I have two classes, Person and Organization, that use table inheritance to
inherit from Contact. ( Person.id is an fkey to Contact.id ), which is
working fine.

I want to search from a search box and get a list of all contacts.
Normally, when I'm after a set of things, I do this:

def build_query(self):
    return self.session.query(Person)

def filter_query(self, query):
    if self.search_form_values.get('name_last', None):
        query = query.filter(
            Person.name_last == self.search_form_values.get('name_last'))
    ...
    return query

And then elsewhere the query gets executed. The problem is that I want to
get back a list of Contact objects,
but I want to filter on Organization.name, Person.name_last, and
Person.name_first, all from the values typed
into the 'name' box on the search form. I can't figure out how to do this
with one query, not sure if it's even possible.
I'd sure like to avoid having two separate queries with interleaved results
though.

Is there some filtering kung fu that would allow me to do the above? ie

- get back all Contacts that:
  - have name_last or name_first match, if the Contact is a Person
  - have name match, if the contact is an Organization

Thanks
Iain

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Saas deployment suggestions for SQLAlchemy?

2012-05-01 Thread Iain Duncan
Hey all, we've been using SQLAlchemy for about 5 years now, currently with
Pyramid, but have not ever deployed to anything other than a standard vps
for a one client install. We're now gearing up to make some of our products
available as monthly subscriptions, and am looking for opinions and
pointers on that whole world. Our needs are:

- must be able to keep using SQLAlchemy, Pyramid, Chameleon, FormEncode
- must be able to connect to per-client db and central account management
db, likely using wsgi middleware for the central account part, but quite
possibly having the application talk to both dbs itself
- we're really small, so we'd prefer to pay more per user for high
reliability low headache situations
- ideally we'd like as many users as possible to be able to sign up without
our interference, but I'm not married to that idea if everything else leans
another way
- we are unlikely to have that many users, and bandwidth use will likely be
low ( it's not likely to go boom and suddenly need to scale like crazy,
rather specialized client base)

The only things I've thought of are:
- do it manually on a big ass vps and monitor
- use Heroku
- ???

Another question I have: do most people deploying something as a SaaS set
it up so there is one Python worker serving all the users, or is it better
to have a Python process/worker per user?

War stories welcome!
thanks
Iain

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] help decrypting exception re session management error?

2011-12-19 Thread Iain Duncan
Ok, I know this is my fault, but maybe someone can help me figure out what
kind of error I'm producing here. When saving a user, I'm getting the
following exception, but only occasionally (like 10% of the time?):

15:13:18,160 INFO [xornot.dram.views] EditAction.post() Exception from
self.update: Can't attach instance Group at 0xb660d6c; another instance
with key (class 'xornot.auth.model.group.Group', (3L,)) is already
present in this session.

I suspect it has to do with this code, which tries to add the Everyone group
to every user before update:

def pre_update_hook(self):
    log.info("UserActionMixing.pre_update_hook()")
    # users can never be removed from the Everyone group
    everyone = self.model.get_by(Group, name='Everyone')
    admin = self.model.get_by(Group, name='Admin')
    if everyone not in self.valid_values['groups']:
        self.valid_values['groups'].append(everyone)

It's a long shot, but does the above info give anyone any idea what kind of
thing I might be doing wrong and how to track down something like this?

thanks
Iain
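
One hedged way to sidestep that collision, based on session.merge(): merge
the loaded Group into the request's session before appending, so the copy
already in the identity map is the one that ends up in the collection. How
the current session is reached from self.model is an assumption here, not
something shown in the thread:

def pre_update_hook(self):
    session = self.model.session  # assumption: however the current Session is reached
    everyone = session.merge(self.model.get_by(Group, name='Everyone'))
    if everyone not in self.valid_values['groups']:
        self.valid_values['groups'].append(everyone)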

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Meetup group in Vancouver BC

2011-12-08 Thread Iain Duncan
Hey all, I just created a meetup group in Vancouver BC for Pyramid and
Pylons users. I'm wondering if I should really make it
Pyramid/Pylons/SQLAlchemy. At any rate, SA certainly seems to be the most
popular persistence mechanism for Pyramid. If you're in Van, I'd love to
hear from you.

http://www.meetup.com/Vancouver-Pyramid-Pylons

thanks
Iain
xornot studios
www.xornot.com

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] sqla on app engine?

2011-10-08 Thread Iain Duncan
Hey folks, just saw the announcement that google appengine now has an sql
backend:
http://code.google.com/apis/sql/

I don't know near enough about app engine to understand whether this means
we could use SQLAlchemy on it now, would love to hear from those who do!

thanks
Iain

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] sqla on app engine?

2011-10-08 Thread Iain Duncan
Awesome, thanks for the update Mike.

Iain

On Sat, Oct 8, 2011 at 1:52 PM, Michael Bayer mike...@zzzcomputing.comwrote:

 we have a ticket from a ways back referring to the task of getting this to
 work on GAE:

 http://www.sqlalchemy.org/trac/ticket/1958

 I was having some communication with Wesley Chun at the time but somehow
 the task fell by the wayside.

 However, the good news is that we're in very good shape to support their
 backend, which is a MySQL derivation.Depending on the specifics of their
 implementation we may be able to re-use sqlalchemy.connectors.mysqldb or
 similar - in any case it would likely be based on mysql.dialects.mysql and
 not require very much code at all.It's just a matter of someone getting
 a basic hello world test running on GAE to get started.I'm just not
 sure how one runs unit tests on GAE.




 On Oct 8, 2011, at 4:06 PM, Iain Duncan wrote:

 Hey folks, just saw the announcement that google appengine now has an sql
 backend:
 http://code.google.com/apis/sql/

 I don't know near enough about app engine to understand whether this means
 we could use SQLAlchemy on it now, would love to hear from those who do!

 thanks
 Iain

 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.


 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalchemy@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.


-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Re: replacing or altering a mapper?

2011-04-23 Thread Iain Duncan


 Right, I didn't do a thorough job of explaining it, so I agree with your
 assessment of the monkey patch, but my pattern isn't quite what you
 described (because of my poor explanation!). I'm using Pyramid and taking
 advantage of the Zope Component Architecture quite a bit. So there is an
 application 'package' in ZCA speak, with its own set of ZCA registrations;
 domain model classes and their user interface classes are all registered
 using ZCA interfaces. My client app includes packages from the base
 framework, with their domain model registrations and mapper registrations.
 One of those packages is my base auth package, good enough as is for most
 client apps that include that package. The whole framework is broken up into
 packages by functionality (auth package, cms package, etc.). Part of what a
 package does is register its model components in the registry and register
 a utility for mapping those. That way a domain app can use any of the
 components, or easily override any of the components by registering its own
 version of one of the components under the same interface (IUser or
 whatever). Mapper-making callables are one of the components. In this case,
 my client app needs a different user model, so it registers a different
 class using the IUser interface. I got stuck on how to override one mapper
 only.

 Reading your description, I can see why I wouldn't want to clobber an
 already created mapper, though that would have been easiest from my client
 app perspective. The solution I've come to, which seems to be an okay
 compromise, is to replace the entire auth package's make_mappers callable with
 an override from the client app, in which I replace the User mapper. As this
 callable is registered as a ZCA utility named 'auth_mappers', if I register
 it through a ZCA package that is included with includeOverrides, it just
 hides the other auth mappers callable and we don't have the User mapper
 getting called twice.

 Long winded, but might be useful to someone googling some day! Thanks for
 your explanation, it no doubt saved me from trying some kind of surgical
 mapper deletion that would bite me in the ass later. =)


 I'll try to study this more carefully tomorrow, though I'm tempted to tweet
 something thoughtless... all of that registration and action at a distance
 sounds like what it was like back when I used EJB or Spring.


The best thing to look at for examples similar to what I'm doing would be the
Pyramid docs in the section on Extending an Application. I wouldn't
recommend using the ZCA that way for simple things, but in the case of our
large framework, it works beautifully for separating client/domain apps from
the guts, and allows for very easy overriding of only the components that
need to be different. The book Web Component Development with Zope 3 has
good stuff too, as does this:

http://www.muthukadan.net/docs/zca.html

Personally, I'm now a fanatical fan of using the zope interfaces and the ZCA
as the single central train station for plugging as many of the
architectural bits into each other as possible. There is a bit of a
learning curve, but much like SQLAlchemy, the thornier the issues and the
bigger the apps, the more I wind up liking it. =)

Iain

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] replacing or altering a mapper?

2011-04-22 Thread Iain Duncan
Hi folks, hoping someone can help here, couldn't figure it out from
the mapper docs. I have some mappers being created in modules that are
loaded separately. That has to stay that way, essentially our
framework has base modules that get reused and they have ZCA utilities
registered that make mappers in a make_mappers callable. This is
working well for us. What I want to do is replace or extend one of
those mapper objects in the make_mappers callable loaded from the
domain application, in a case where the base mapper utilities have
already been called. I am passing the existing dict of mappers into
each callable so they can (hopefully) be altered. In this case, for
this domain app, I need to map the User class differently to add in
new properties stored in a different table.

def make_mappers(mappers):
    mappers['User'] = ...  # replace the User mapper here

I can't just clobber mappers['User'], I get an exception that the User
class has already been mapped. Makes sense.

Nor can I do:

  del mappers['User']

So I'd like to know how I can either wipe out that one mapper to
replace it with the custom version, or change the primary mapped table
to a join. I tried doing

mappers['User'].mapped_table = join(user_table, user_extra_table)

but that seemed to just result in loss of the original user table.
Anyone have suggestions on how I can change the mapping of the User
class *without* having to touch the module that makes the base user
mapper?

thanks!
Iain

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: replacing or altering a mapper?

2011-04-22 Thread Iain Duncan

 mappers only support the addition of new attributes after construction.   
 There is of course clear_mappers() but that removes the entire graph of 
 mappers (emphasis on graph, which is why we don't get into the surgical 
 removal of individual mappers).   There is also the prospect of creating a 
 new class, which I'm assuming is not desirable here.   the mapped table 
 itself is completely core to the mapper itself after construction.

 The real question would be, why can't your join() be created before the 
 mapping.    I can understand this kind of issue when using declarative, but 
 with plain mappers you'd want all Table metadata assembled ahead of time.

 A standard strategy when this kind of issue exists is to defer the generation 
 of mappers, which if its more convenient can be done using your own mapper() 
 callable that gathers up all the various mapper() arguments into a list, then 
 can emit the actual mapper() calls at some later point (such as, 
 build_my_mappers()).  

Thanks Mike. My mapper generation is actually deferred, but the point
here is that the default modules build their mappers, so I can't just
*extend* the base framework app if I can't replace one of its
mappers. I don't think this is such a weird pattern really, but if
there is no other option, I suppose I will have to have the extension
app make an explicit list of all mappers.

I guess consider this a use case for why someone might want to be able
to delete a mapper. I'm a bit surprised that there is no way to do so,
or am I missing something?

thanks
Iain
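
A hedged sketch of the deferral strategy Mike describes above: gather the
mapper() arguments into a registry instead of calling mapper() immediately,
so an extending package can replace an entry before anything is actually
mapped. The registry and function names here are illustrative:

from sqlalchemy.orm import mapper

_pending_mappers = {}

def register_mapper(name, cls, selectable, **kwargs):
    # a later registration under the same name simply wins
    _pending_mappers[name] = (cls, selectable, kwargs)

def build_my_mappers():
    built = {}
    for name, (cls, selectable, kwargs) in _pending_mappers.items():
        built[name] = mapper(cls, selectable, **kwargs)
    return built

# base package registers:   register_mapper('User', User, user_table)
# extending app overrides:  register_mapper('User', User,
#                               user_table.join(user_extra_table))
# and only then does anyone call build_my_mappers()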

-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Re: replacing or altering a mapper?

2011-04-22 Thread Iain Duncan

 
  mappers only support the addition of new attributes after construction.
   There is of course clear_mappers() but that removes the entire graph of
 mappers (emphasis on graph, which is why we don't get into the surgical
 removal of individual mappers).   There is also the prospect of creating a
 new class, which I'm assuming is not desirable here.   the mapped table
 itself is completely core to the mapper itself after construction.
 
  The real question would be , why can't your join() be created before the
 mapping.I can understand this kind of issue when using declarative, but
 with plain mappers you'd want all Table metadata assembled ahead of time.
 
  A standard strategy when this kind of issue exists is to defer the
 generation of mappers, which if its more convenient can be done using your
 own mapper() callable that gathers up all the various mapper() arguments
 into a list, then can emit the actual mapper() calls at some later point
 (such as, build_my_mappers()).
 
  Thanks Mike. My mapper generation is actually deferred, but the point
  here is that the default modules build their mappers, so I can't just
  *extend* the base framework app if I can't replace one of it's
  mappers. I don't think this is such a weird pattern really, but if
  there is no other option, I suppose I will have to have the extension
  app make an explicit list of all mappers.

 let me see if i understand.  You have some core library of classes, that
 are designed to talk to particular database tables.  The core library
 generates the classes, defines table metadata, then maps the classes to the
 tables.

 then, an application which is a user of that library, would like to import
 it, have the library build up its full persistence model, some of it being
 incorrect vs. what the application wants, then the application would
 redefine how arbitrary members of that class hierarchy are mapped, as an
 activity of extending the library.

 This seems like a relational database monkeypatch.if your library is
 designed to be extended in such a way that an importer can redefine how the
 library's class hierarchy relates to a set of database tables, it seems like
 the library would need to provide explicit injection points for that kind of
 functionality where the persistence is set up just once.  As opposed to the
 library defining a full persistence model, then tearing down some parts and
 replacing them.

 Its also not clear why the extension here wouldn't be a more traditional
 form of extension, that of subclassing those library classes you'd wish to
 extend.  User can be subclassed with MySpecialUser, MySpecialUser can then
 be mapped in any way the application sees fit.   It doesn't make much sense
 that the core library knows about a User and knows what table it's mapped
 to, and then that library has functions that would continue to work as
 expected when User is suddenly re-mapped to a join instead of a single
 table, I can't really imagine how that works actually.





  I guess consider this a use case for why someone might want to be able
  to delete a mapper. I'm a bit surprised that there is no way to do so,

 Its easy enough to delete a mapper, remove its instrumentation from the
 class and de-associate the mapper totally.   I can show you the API calls.
 But then all the other mappers that might reference that class are now in an
 invalid state, referencing a dead mapper.   Join conditions inside of
 relationships reference possibly dead columns linked to possibly dead
 tables.   All of that state is invalid, and replacing with a new mapper
 would mean we'd have to track all of those endpoints and surgically figure
 out all the new linkages - these are linkages whose calculation is deferred
 until the endpoints exist, and then are locked down to create sometimes
 very complicated graphs of dependencies between classes.So the ratio of
 complexity (huge) to usefulness (marginal, as there is always a way to defer
 mapping until the model is complete) is why this is not supported.


Right, I didn't do a thorough job of explaining it, so I agree with your
assessment of the monkey patch, but my pattern isn't quite what you
described (because of my poor explanation!). I'm using Pyramid and taking
advantage of the Zope Component Architecture quite a bit. So there is an
application 'package' in ZCA speak, with its own set of ZCA registrations;
domain model classes and their user interface classes are all registered
using ZCA interfaces. My client app includes packages from the base
framework, with their domain model registrations and mapper registrations.
One of those packages is my base auth package, good enough as is for most
client apps that include that package. The whole framework is broken up into
packages by functionality (auth package, cms package, etc.). Part of what a
package does is register its model components in the registry and register
a utility for mapping those. That way a 

[sqlalchemy] Re: how to make a transaction fail for testing?

2009-12-14 Thread iain duncan
thanks Alex. I guess for integration tests one could also monkey patch
it to override the commit method?

thanks
iain
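
For the record, a minimal sketch of that monkeypatch using the mock library
(unittest.mock in modern Python); the code under test and its assertion are
hypothetical, so this only shows the patching technique:

from unittest import mock
from sqlalchemy.exc import OperationalError
from sqlalchemy.orm.session import Session

def test_commit_failure_is_handled():
    boom = OperationalError("statement", {}, Exception("simulated failure"))
    # every session.commit() during the block raises the simulated error
    with mock.patch.object(Session, "commit", side_effect=boom):
        result = do_the_thing_that_commits()   # hypothetical code under test
    assert result.rolled_back                  # hypothetical assertion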

On Dec 13, 2:47 pm, Alex Brasetvik a...@brasetvik.com wrote:
 On Dec 13, 2009, at 22:25 , iain duncan wrote:

  Hey folks, I'm wondering how I can make a transaction fail
  deliberately for testing how my transaction handling code is working.
  Is there some simple mechanism for making sure
  a section in a try block with session.commit() raises an exception
  without altering the code?

 Mock the session/connection.

 There are several mocking libraries for Python -
 http://www.google.com/search?q=python+mock+objects

 Personally, I prefer Mocker, which supports patching, which is handy in this
 case - http://labix.org/mocker#head-83d80b643ee8544628ee7a8301a72443ac51a173

 If you are unfamiliar with the technique, this looks like a decent
 introduction: http://agiletesting.blogspot.com/2009/07/python-mock-testing-techniqu...

 Hope this helps. :-)

 Regards,
 Alex Brasetvik

--

You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.




[sqlalchemy] how to make a transaction fail for testing?

2009-12-13 Thread iain duncan
Hey folks, I'm wondering how I can make a transaction fail
deliberately for testing how my transaction handling code is working.
Is there some simple mechanism for making sure
a section in a try block with session.commit() raises an exception
without altering the code?

thanks
Iain

--

You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.




[sqlalchemy] Re: auto-creating one-to-one relations with the orm?

2009-05-03 Thread Iain Duncan

On Sun, 2009-05-03 at 09:43 -0400, Michael Bayer wrote:
 
 
 On May 2, 2009, at 8:16 PM, Iain Duncan wrote:
 
 
  On Sat, 2009-05-02 at 09:41 -0400, Michael Bayer wrote:
 
  you'd set cascade=all, delete-orphan on the Purchase.registration
  side (in this case its your backref).
 
  Thanks Mike. So what is the recommended way of creating the attached
  item though, or is it supposed to happen automatically from the  
  mapper?
 
 
 you can say p.registration = r, or Purchase(registration=r), the  
 cascade option would enforce the constraint at flush time.

Ah, so that is the reverse of what I was thinking. You see, the form's
job is to create a registration. So is it equally fine to go the other
way?

r = Registration( purchase=Purchase() )

or even better ( i hope? )

class Registration(object):
    def __init__(self, **kwargs):
        self.purchase = Purchase()
        for k, v in kwargs.items():
            setattr(self, k, v)

? 
thanks Mike
Iain
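
Putting Mike's earlier suggestion together with this constructor, the mapping
would look something like the sketch below, with the delete-orphan cascade on
the backref side as he described; everything else is unchanged from the
original post:

mapper(Registration, registration_table, properties={
    'purchase': relation(Purchase,
        backref=backref('registration', uselist=False,
                        cascade='all, delete-orphan')),
})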

 
 
 
 
  Iain
 
 
  On May 2, 2009, at 1:42 AM, Iain Duncan wrote:
 
 
  Hi all, I have a mapper that looks like this:
 
  mapper( Registration, registration_table, properties={
'purchase': relation(Purchase,
  backref=backref('registration', uselist=False),
cascade='all' ),
  })
 
  My registration table has a foreign key to purchases, and I want to
  make
  sure no registration exists without an associated purchase. I have  
  the
  referrence backwards like a many-to-one because purchases may also  
  be
  referred to in other tables ( like memberships ). Maybe this is  
  not a
  good idea, not sure. Feel free to tell me. Should this perhaps be
  handled in a separate associating table with primary key with one
  column
  primary key constraints on it?
 
  But, my immediate problem is that I don't know what the correct  
  way to
  make sure I create a purchase on creation of a registration. Should
  I be
  creating a purchase in the registration constructor? Should I be  
  doing
  something like:
 
  r = Registration( purchase=Purchase() )
 
  Or should it work magically if I set up my cascade options
  differently?
 
  Thanks
  Iain
 
 
 
 
 
 
 
 
 
  
 
 
  


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: auto-creating one-to-one relations with the orm?

2009-05-02 Thread Iain Duncan

On Sat, 2009-05-02 at 09:41 -0400, Michael Bayer wrote:
 
 you'd set cascade=all, delete-orphan on the Purchase.registration  
 side (in this case its your backref).

Thanks Mike. So what is the recommended way of creating the attached
item though, or is it supposed to happen automatically from the mapper? 

Iain
 
 
 On May 2, 2009, at 1:42 AM, Iain Duncan wrote:
 
 
  Hi all, I have a mapper that looks like this:
 
  mapper( Registration, registration_table, properties={
 'purchase': relation(Purchase,
  backref=backref('registration', uselist=False),
 cascade='all' ),
  })
 
  My registration table has a foreign key to purchases, and I want to  
  make
  sure no registration exists without an associated purchase. I have the
  referrence backwards like a many-to-one because purchases may also be
  referred to in other tables ( like memberships ). Maybe this is not a
  good idea, not sure. Feel free to tell me. Should this perhaps be
  handled in a separate associating table with primary key with one  
  column
  primary key constraints on it?
 
  But, my immediate problem is that I don't know what the correct way to
  make sure I create a purchase on creation of a registration. Should  
  I be
  creating a purchase in the registration constructor? Should I be doing
  something like:
 
  r = Registration( purchase=Purchase() )
 
  Or should it work magically if I set up my cascade options  
  differently?
 
  Thanks
  Iain
 
 
 
  
 
 
  


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] auto-creating one-to-one relations with the orm?

2009-05-01 Thread Iain Duncan

Hi all, I have a mapper that looks like this:

mapper(Registration, registration_table, properties={
    'purchase': relation(Purchase,
        backref=backref('registration', uselist=False),
        cascade='all'),
})

My registration table has a foreign key to purchases, and I want to make
sure no registration exists without an associated purchase. I have the
reference backwards, like a many-to-one, because purchases may also be
referred to in other tables (like memberships). Maybe this is not a
good idea, not sure. Feel free to tell me. Should this perhaps be
handled in a separate association table with a single-column primary key
constraint on it?

But my immediate problem is that I don't know the correct way to
make sure I create a purchase on creation of a registration. Should I be
creating a purchase in the registration constructor? Should I be doing
something like:

r = Registration( purchase=Purchase() )

Or should it work magically if I set up my cascade options differently?

Thanks
Iain



--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Clearing out a scoped session

2009-03-29 Thread Iain Duncan


 
  Yes, the seed class makes new objects that I want in the db as
  fixture. I want to use the class as a namespace for a standard set of
  fixture objects, mostly because of the ease of setting it up that
  gives me, and because I can do things like make variants of the
  standard fixture with a simple inherit and override. Of course the
  class code gets run at import time of the class, so the objects in the
  seed class are created then, with the active scoped session. However,
  I want to reuse that fixture declaration over and over again, with a
  fresh session and fresh tables. So when the tests start, the seed
  class has created objects in the session imported from turbogears.
  This session doesn't get flushed. The first test recreates the tables,
  copies the seed objects into the new session for the first test,
  flushes it to the db, does it's test stuff, and then wipes out that
  session. Then I hit the snag because I can't recopy these objects into
  another session after they have been persisted and I'd like to repeat
  that process:
  - drop tables
  - make new session
  - copy fixture objects into the new session
  - flush to db
  - do tests
 
  I guess I need to either:
  - wipe out the SQLA knowledget that they have been persisted
  or
  - clone the seed objects some how so I have new ones for each session
 
 I think the best way is to create a function that gives you new objects,
 and is loaded in during the setup() for your tests.  its the same amount
 of typing as declaring them module-wide.the second way is to clone the
 objects or create them from some kind of state, like dicts.   the third,
 merge() them into the session - the original objects will be unchanged. 
 fourth, and this is the most hacky way of all, from sqlalchemy.orm import
 attributes;  del attributes.instance_state(instance).key.   you'd have to
 do that on all objects in the whole graph which you want to reuse.   i
 don't see that as being any easier than just cloning them.

Hi Mike, I tried out the merging, but am hitting the snag that objects
referred to in relations are now going into the db twice. Ie

class MySeedData(SeedData):
    van = Region(name='Vancouver')
    event_1 = Event(name='Event 1', region=van)

is putting the Region in twice.


I also tried out the instance clearing tip. I'm on sa 0.4.8, but there
is no orm.attributes.instance_key. There is attributes.InstanceKey, but
if I use that to create an object, the object created doesn't have an
attribute named 'key' either. I'd still like to try out the instance
clearing method if you have any suggestions for 0.4.8.

Thanks
Iain
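
A sketch of Mike's first suggestion (a function handing back brand new
objects for each test), which sidesteps both the "already persistent" state
and the duplicated Region from the merge attempt. The names mirror the
examples in this thread, and the table/engine handling is an assumption about
the surrounding test harness:

def make_seed_data():
    van = Region(name='Vancouver')
    return {
        'van': van,
        'event_1': Event(name='Event 1', region=van),
    }

class TestEvents(object):
    def setup(self):
        # fresh tables, fresh session, fresh objects for every test
        metadata.drop_all(bind=engine)
        metadata.create_all(bind=engine)
        self.session = Session()
        self.seed = make_seed_data()
        for obj in self.seed.values():
            self.session.save(obj)   # session.add(obj) on modern SQLAlchemy
        self.session.flush()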
  
 
  


--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Clearing out a scoped session

2009-03-28 Thread iain duncan

Thanks Mike. I made some progress ( I think! ). I now have the test
class setup method successfully creating a new scoped session on each
pass, and the objects that were created originally in a turbogears
session context are being saved into that session and flushed to the
db. I think this is the right approach, fresh session for each fresh
fixture.

Now however, I'm getting the following on the second test:
InvalidRequestError: Instance 'reg...@0x2db6950' is already persistent
and looking at the session.identity_map, it's empty.

If I replace session.save(obj) with session.save_or_update(obj) the
exception does not get thrown. The new object *is* in the identity map
of the new session at the end of the test, but no SQL to insert the
object into the actual DB gets generated. ( I'm on sa 0.4.2, yes it's
a bit of a legacy case... )

I have the fixture drop and recreate the tables, so I'm hoping there
is some way of copying these objects form their prototype declaration
class into a new session and flush to the newly created tables on each
pass.

Any tips much appreciated again,
Iain
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Clearing out a scoped session

2009-03-28 Thread iain duncan

On Mar 28, 8:05 am, Michael Bayer mike...@zzzcomputing.com wrote:
 the error means the object was loaded from the database in some session.  
 it can't be sent to save().   if you send it to save_or_update() or add(),
 it's already INSERTed as far as SQLA is concerned.

 each test you create should not use any instances of your mapped entities
 from the previous test.  when you remove the session, all state which was
 associated with that session should be discarded.  you could alternatively
 try to re-add those objects back in but that's cumbersome.

Thanks Mike. I figured this is probably not the intended use. The
thing is, I'm trying to make a declarative template for test seed data
and declaring it all in a class like so makes for the easiest quickest
way of extending my fixture:

class SeedData1(SeedData):
  obj1 = ModelObject( ...kwargs... )
  obj2 = ModelObject2( ...kwargs... )

This allows me to make new seed data fixtures by simply inheriting
from SeedData1, and makes typing up seed fixtures dead quick.
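
A sketch of the first option Mike suggested earlier in the thread (a
function that hands back new objects), kept in the same class-per-fixture
shape so variants can still inherit and override; the constructor kwargs
here are illustrative:

class SeedData(object):
    @classmethod
    def build(cls):
        # fresh instances every call, so nothing is tied to an old session
        obj1 = ModelObject(name='first thing')
        obj2 = ModelObject2(name='second thing')
        return [obj1, obj2]

class SeedData1(SeedData):
    @classmethod
    def build(cls):
        objs = super(SeedData1, cls).build()
        objs.append(ModelObject2(name='variant-only row'))
        return objs

# in each test's setup():
#   for obj in SeedData1.build():
#       session.save(obj)      # session.add(obj) on later versions
#   session.flush()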

Sooo, even if it is cumbersome, if you could tell me how to bypass
SQLA's knowledge that these have already been persisted and re-add the
objects in, that would be great. The cumbersome jiggery-pokery will all
be hidden in a test utility method so I'm fine with ugliness in there
in order to make writing the test fixtures as quick as possible. Or

Thanks for all the help
Iain




[sqlalchemy] Re: Clearing out a scoped session

2009-03-28 Thread iain duncan

I guess the real issue is how do I negate:
"it's already INSERTed as far as SQLA is concerned."

The tables are being dropped and recreated as part of the setup
method, so I want SA not to know *anything* about any objects again. I
was under the impression that a new session would do it, which was
obviously a misunderstanding on my part!

Thanks
Iain




[sqlalchemy] Re: Clearing out a scoped session

2009-03-28 Thread iain duncan



On Mar 28, 2:21 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 iain duncan wrote:
  Thanks Mike. I figured this is probably not the intended use. The
  thing is, I'm trying to make a declarative template for test seed data
  and declaring it all in a class like so makes for the easiest quickest
  way of extending my fixture:

  class SeedData1(SeedData):
    obj1 = ModelObject( ...kwargs... )
     obj2 = ModelObject2( ...kwargs... )

  This allows me to make new seed data fixtures by simply inheriting
  from SeedData1, and makes typing up seed fixtures dead quick.

 I don't see what this has to do with clearing out sessions and the objects
 that are associated within.   Is SeedData1 some kind of magic class that
 persists obj1, obj2, etc. in the Session?  why can't whatever SeedData1
 does be done for each test ?

Yes, the seed class makes new objects that I want in the db as
fixture. I want to use the class as a namespace for a standard set of
fixture objects, mostly because of the ease of setting it up that
gives me, and because I can do things like make variants of the
standard fixture with a simple inherit and override. Of course the
class code gets run at import time of the class, so the objects in the
seed class are created then, with the active scoped session. However,
I want to reuse that fixture declaration over and over again, with a
fresh session and fresh tables. So when the tests start, the seed
class has created objects in the session imported from turbogears.
This session doesn't get flushed. The first test recreates the tables,
copies the seed objects into the new session for the first test,
flushes it to the db, does its test stuff, and then wipes out that
session. Then I hit the snag because I can't recopy these objects into
another session after they have been persisted and I'd like to repeat
that process:
- drop tables
- make new session
- copy fixture objects into the new session
- flush to db
- do tests

I guess I need to either:
- wipe out the SQLA knowledge that they have been persisted
or
- clone the seed objects somehow so I have new ones for each session

I thought the first would be the easiest, but maybe I'm wrong there.
If I can make sure that I have a completely fresh slate in SQLA, I
think I should be able to reimport the seed class to use it again the
same way I do in the first test. I guess the first one works because
the original import is from a session that never gets flushed.

thanks again for all the help!
Iain




[sqlalchemy] Clearing out a scoped session

2009-03-27 Thread iain duncan

Hi folks, I'm working on a set of test utilities for functional tests
using SA. Because they use a turbogears model, they import the tg
model and so get the SA class definitions with the mappers from the TG
scoped_session. My problem is that I want to reseed this data on each
test. So I have a fixture setup that drops and recreates the tables
used, and then it's supposed to put my seed data into the db for each
test. The problem is that once the seed data objects (instantiated
model objects) have gone through this once, they won't do it again
because the objects' identities conflict with persistent objects in
the scoped_session. I don't seem to be able to completely clean out
the session in order to resave the objects. Can anyone give me any
tips on how I can:

- create a model object
- save it to the scoped session
- flush the session
- wipe out the whole mess *except* the model object
- resave and reflush those same objects again and again

Much appreciated,
Iain



[sqlalchemy] Re: problems with py2app

2008-04-26 Thread iain duncan

On Sat, 2008-26-04 at 05:04 -0700, Koen Bok wrote:
 Hey Iain,
 
 If you build apps with py2app it tries to figure out which modules to
 include automatically. If these modules are nested in some weird way
 it sometimes chokes. A solution is to import that module by hand in
 your main script (your-app-name.py) or telling py2app it needs to add
 the module in the setup dict (see manual).
 
 Good luck!
 
 Koen - madebysofa.com

Thanks! Can you tell from that traceback whether it is a module in SA
called logging or whether SA is trying to import the python core logging
module?

Iain
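
A minimal setup.py sketch of Koen's second suggestion above (forcing the
module into the bundle via the py2app options dict); the stdlib logging
module is the one the traceback below complains about, and the app filename
follows the traceback:

from setuptools import setup

setup(
    app=['booking_main.py'],
    setup_requires=['py2app'],
    options={'py2app': {'includes': ['logging']}},  # force-include stdlib logging
)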

 
 On Apr 26, 7:53 am, iain duncan [EMAIL PROTECTED] wrote:
  Hi folks, I seem to be having a problem with sqlalchemy and py2app, but
  I am very new to OS X and py2app, so I could be doing something stupid.
  When we try to build the app we get this:
 
  ImportError: No module named logging
  Traceback (most recent call last):
 File
  /Users/bear/iain/booking-wx/dist/booking_main.app/Contents/Resources/__boo 
  t__.py,
  line 137, in module
   _run('booking_main.py')
 File
  /Users/bear/iain/booking-wx/dist/booking_main.app/Contents/Resources/__boo 
  t__.py,
  line 134, in _run
   execfile(path, globals(), globals())
 File
  /Users/bear/iain/booking-wx/dist/booking_main.app/Contents/Resources/booki 
  ng_main.py,
  line 10, in module
   from model_extern import *
 File model_extern.pyc, line 15, in module
 File sqlalchemy/__init__.pyc, line 29, in module
 File sqlalchemy/engine/__init__.pyc, line 54, in module
 File sqlalchemy/engine/base.pyc, line 16, in module
 File sqlalchemy/logging.pyc, line 35, in module
  ImportError: No module named logging
  2008-04-25 22:43:27.066 booking_main[457] booking_main Error
  2008-04-25 22:43:27.067 booking_main[457] booking_main Error
  An unexpected error has occurred during execution of the main script
 
  I'm not importing logging, and the only other libraries being used are
  wxPython and formencode. wx is working ok.
 
  Any tips or even stories of success/failure would be much appreciated!
  Thanks
  Iain
  





[sqlalchemy] problems with py2app

2008-04-25 Thread iain duncan

Hi folks, I seem to be having a problem with sqlalchemy and py2app, but
I am very new to OS X and py2app, so I could be doing something stupid.
When we try to build the app we get this:

ImportError: No module named logging
Traceback (most recent call last):
   File 
/Users/bear/iain/booking-wx/dist/booking_main.app/Contents/Resources/__boot__.py,
 
line 137, in module
 _run('booking_main.py')
   File 
/Users/bear/iain/booking-wx/dist/booking_main.app/Contents/Resources/__boot__.py,
 
line 134, in _run
 execfile(path, globals(), globals())
   File 
/Users/bear/iain/booking-wx/dist/booking_main.app/Contents/Resources/booking_main.py,
 
line 10, in module
 from model_extern import *
   File model_extern.pyc, line 15, in module
   File sqlalchemy/__init__.pyc, line 29, in module
   File sqlalchemy/engine/__init__.pyc, line 54, in module
   File sqlalchemy/engine/base.pyc, line 16, in module
   File sqlalchemy/logging.pyc, line 35, in module
ImportError: No module named logging
2008-04-25 22:43:27.066 booking_main[457] booking_main Error
2008-04-25 22:43:27.067 booking_main[457] booking_main Error
An unexpected error has occurred during execution of the main script

I'm not importing logging, and the only other libraries being used are
wxPython and formencode. wx is working ok.
 
Any tips or even stories of success/failure would be much appreciated!
Thanks
Iain





[sqlalchemy] Re: Ensuring a clean database for each test case

2008-04-18 Thread iain duncan


 How can I realize this concept of a completely new and isolated DB
 environment for each single test case that's being run?

Not sure if this is a useful answer or not, but I just made starting sql
files for each of my test suites. It's an extra step but then you have a
convenient file to put starting data in.

def setup(self):
    # load this suite's starting data from its SQL file
    os.system("mysql -ufoo -pbar -Ddb_name < test_db_1.sql")
    # etc etc

Works ok and is actually pretty quick.
Iain
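
An alternative sketch that stays inside SQLAlchemy rather than shelling out,
assuming you have the metadata object and an engine for the test database at
hand (bind= is the 0.4-style spelling; the insert line is purely illustrative):

def setup(self):
    # recreate a clean schema for this suite, then load its starting rows
    metadata.drop_all(bind=engine)
    metadata.create_all(bind=engine)
    engine.execute(some_table.insert(), [{'name': 'starting row'}])  # illustrative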






[sqlalchemy] Re: mappers for having problems with associations and flushing

2008-04-14 Thread iain duncan

here are my mappers in case that helps, maybe I don't understand the
cascading properly?


# relations for the bookings
mapper(Booking, booking_table, properties={
    'client': relation(Client),
    'institution': relation(Institution),
    'category': relation(Category),
    'staff_associations': relation(BookingStaffAssoc, lazy=False,
                                   cascade='all, delete-orphan',
                                   order_by=booking_staff_table.c.ordering),
})

# Association mapper, this is the important part!
mapper(BookingStaffAssoc, booking_staff_table,
       primary_key=[booking_staff_table.c.booking_id,
                    booking_staff_table.c.staff_id],
       properties={
           'booking': relation(Booking, lazy=False, cascade='all, delete-orphan'),
           'staff': relation(Staff, lazy=False, cascade='all, delete-orphan'),
       })


Thanks!






[sqlalchemy] Re: Extension proposal

2008-04-14 Thread iain duncan

On Mon, 2008-14-04 at 10:26 -0400, Michael Bayer wrote:
 
 On Apr 14, 2008, at 10:11 AM, Paul Johnston wrote:
 
  Hi,
 
  In my case, the data is coming from FormEncode. I guess it could  
  come from JSON, but that would be JSON client to server, which is  
  not the usual way round.
 
 JSON is just an object encoding, in my mind it has no implications  
 regarding clients/servers.
 
  The proposal is for a standalone function, I'm thinking you'd do  
  something like:
 
  from sqlalchemy.ext.proc_hash import proc_hash
  ...
  proc_hash(myobj, mydata)
 
  As this is just plumbing python data structures into database  
  objects, I think it's fitting to the SA-core goals.
 
 
 the function so far seems pretty use-case specific.  (only works in  
 one direction, is hardcoded to the obj.mapper convention, sort of  
 looks like JSON but isn't, etc.)  Even if it were JSON I dont think  
 thats an SA core feature.

But it would still be useful to a lot of folks like me. Would it possibly
be worth having extension-extensions? Like recommended extras or
something that aren't in the SA core? I totally respect keeping the core
lean and maintainable, but maybe a cookbooky section of the site with
extras or something?

just a thought!
Iain






[sqlalchemy] Re: Stuck on flush vs commit problems

2008-04-13 Thread iain duncan

On Sun, 2008-13-04 at 11:28 -0400, Michael Bayer wrote:
 
 On Apr 12, 2008, at 5:39 PM, iain duncan wrote:
 
   File /home/xornot/www/booking/booking-wx/model.py, line 227, in
  update_resource_obj
 session.flush(resource_obj)
   File
  /usr/lib/python2.4/site-packages/SQLAlchemy-0.4.3-py2.4.egg/ 
  sqlalchemy/orm/session.py, line 764, in flush
 self.uow.flush(self, objects)
   File
  /usr/lib/python2.4/site-packages/SQLAlchemy-0.4.3-py2.4.egg/ 
  sqlalchemy/orm/unitofwork.py, line 189, in flush
 objset = util.Set([o._state for o in objects])
  TypeError: iteration over non-sequence
 
 
 the objects parameter to session.flush() is currently a list, i.e.  
 session.flush([resource_obj]) .

Thanks! And thanks for the great docs on Session too, I *think* I'm
using it somewhat correctly now. I was using contextual-session in a
somewhat cargoish manner but the docs cleared that up!

Iain






[sqlalchemy] Stuck on flush vs commit problems

2008-04-12 Thread iain duncan

Hi everyone, I'm stuck going around in circles here so any help would be
great. I don't know whether I should be using transactions or not to
solve this, but haven't got it working yet with either flushes or
commits.

Problem:
- a gui app manages two resources, bookings and clients
 - bookings have clients, as a relation, working
- when a form is opened to create a new booking or client, it creates a
new one in the session
- if the user saves, this resource object ref is passed to a model that
updates the db and fires listeners
- in that case, the ref needs to be flushed/committed
- if the user cancels after opening a form, the form destructor calls
the model to expunge the resource object with 
  session.expunge(resource_obj) ( working ok )

My problem is when a new booking is created, and a new client is going
to be created too:
- booking form opens, makes new 'on-deck' booking ref
  - client sub form is opened, makes new 'on-deck' client ref
  - client form is saved, problem is how to flush *only* the new client
while still keeping the on-deck booking 'on-deck'

I've tried various combinations but am certainly in over my head so if
anyone has recommended procedures I would love to hear them. I know that
autoflush is out because the bookings cannot be flushed until they have
a valid client ( on purpose, good! ). Flushing just one object at a time
seemed to be a possibility, but I get this traceback when trying that
out on saving an already existing booking attached to an existing
client:

(log output)
- setting attribute size to 22
- attempting session.flush(( 2008-04-12 - 14:00:00 - BS Sec - 22 people
- Client 18282 : Armstrong, Neil ))
(traceback)
Traceback (most recent call last):
  File /home/xornot/www/booking/booking-wx/form_builder.py, line 385,
in onSave
self.model.update_resource_obj('edit', self.resource_obj,
result['valid_values'])
  File /home/xornot/www/booking/booking-wx/model.py, line 227, in
update_resource_obj
session.flush(resource_obj)
  File
/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.3-py2.4.egg/sqlalchemy/orm/session.py,
 line 764, in flush
self.uow.flush(self, objects)
  File
/usr/lib/python2.4/site-packages/SQLAlchemy-0.4.3-py2.4.egg/sqlalchemy/orm/unitofwork.py,
 line 189, in flush
objset = util.Set([o._state for o in objects])
TypeError: iteration over non-sequence

Any input welcome! Thanks,
Iain






[sqlalchemy] Re: Stuck on flush vs commit problems

2008-04-12 Thread iain duncan

Narrowing it down, I guess what I'm really confused about is why, given
two booking objects, one complete and OK to save and one not, I can't do

session.save_or_update( complete_booking )
session.flush( complete_booking )

( keeping the incomplete booking around until later )

On the other hand I seem to be able to do:

session.save_or_update( complete_booking )
session.flush() # generates exception about incomplete booking

- the complete booking goes into the table ok, the incomplete one not,
but I don't like the idea of just ignoring exceptions when I don't fully
understand what's going on.

Thanks!
Iain
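
Per the flush() signature in this 0.4 series (it takes a list of instances,
as noted in the reply further up), flushing only the finished booking looks
like this sketch:

session.save_or_update(complete_booking)
session.flush([complete_booking])   # only this object is written;
                                    # the incomplete booking stays pending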







[sqlalchemy] Re: A General Thank You

2008-02-11 Thread iain duncan

On Sat, 2008-19-01 at 17:48 -0500, Michael Bayer wrote:
 you're very welcome!  glad you've had a positive experience.

Yeah, the SA docs are awesome. The whole package is awesome. I was
recently really impressed with how easy it was to migrate to 0.4.

Iain






[sqlalchemy] Re: 0.4 deprecation warnings, what is the new get_by/select_by?

2007-12-20 Thread iain duncan

On Thu, 2007-20-12 at 09:58 +0100, Gaetan de Menten wrote:
 On Dec 20, 2007 8:50 AM, iain duncan [EMAIL PROTECTED] wrote:
 
  Sorry if this seems a stupid question, but I thought that Mike had said
  that in sa0.4, if you used session_context that this
 
  User.query.get_by(name='john')
 
  was the replacement for the old assign mapper convenience call.
  But I'm getting deprecation warnings. What should I be doing instead of
  the (almost as) convenient:
 
  User.query.get_by( **kwargs )
 
 User.query.filter_by(**kwargs).first()
 
  User.query.select_by( **kwargs )
 
 User.query.filter_by(**kwargs).all()
 
  User.query.select()
 
 User.query.filter(xxx).all()

Thanks, that makes sense now.
Iain
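
Spelled out with a concrete (made-up) column, assuming users_table is the
Table that User is mapped to:

# deprecated: User.query.get_by(name='john')
User.query.filter_by(name='john').first()

# deprecated: User.query.select_by(name='john')
User.query.filter_by(name='john').all()

# deprecated: User.query.select(users_table.c.name == 'john')
User.query.filter(users_table.c.name == 'john').all()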






[sqlalchemy] 0.4 deprecation warnings, what is the new get_by/select_by?

2007-12-19 Thread iain duncan

Sorry if this seems a stupid question, but I thought that Mike had said
that in sa0.4, if you used session_context that this 

User.query.get_by(name='john')

was the replacement for the old assign mapper convenience call. 
But I'm getting deprecation warnings. What should I be doing instead of
the (almost as) convenient:

User.query.get_by( **kwargs )
User.query.select_by( **kwargs )
User.query.select()

Thanks
iain





[sqlalchemy] Doc bug? 0.3 tutorial not working

2007-11-17 Thread iain duncan

Hi Michael et al. I don't know if this is a versioning conflict or what
not. I know I had done this fine before in the past, so maybe something
changed. Anyway, with sa 0.3.11 and python 2.4.3 and mysql 5.0.22,
following the sa 0.3 tutorial:

from sqlalchemy import *
db = create_engine('mysql://web:[EMAIL PROTECTED]/db_name')
metadata = MetaData()

metadata.bind(db)
TypeError: 'NoneType' object is not callable

However this still works:
metadata = MetaData(db) 

The tutorial still says the two are equivalent, so I guess something's
 fishy?

Thanks
Iain





[sqlalchemy] Re: Doc bug? 0.3 tutorial not working

2007-11-17 Thread iain duncan


 
  from sqlalchemy import *
  db = create_engine('mysql://web:[EMAIL PROTECTED]/db_name')
  metadata = MetaData()
 
  metadata.bind(db)
  TypeError: 'NoneType' object is not callable
 
  However this still works:
  metadata = MetaData(db)
 
  The tutorial still says the two are equivalent, so I guess something's
  fishy?
 
 
 the correct syntax is metadata.bind = db.Im not seeing  
 metadata.bind(db) anywhere ?

Well that's what I get for trying to stay up as late as my studying
girlfriend. My sincere apologies! :/

Sheepish Iain
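
The two equivalent forms side by side, for anyone skimming the archive later
(the URL is a placeholder):

from sqlalchemy import create_engine, MetaData

db = create_engine('mysql://user:passwd@localhost/db_name')

metadata = MetaData(db)   # bound at construction
# ...or...
metadata = MetaData()
metadata.bind = db        # bind is assigned, not called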





[sqlalchemy] Re: stuck on self referrencing mapper thingy WORKING, thanks

2007-11-10 Thread iain duncan


 OK this is what happens here, the relationship you want to establish is:
 
 product_table ---(product_table.id=collections.parent_id)--- collections
 ---(collections.child_id=product_table.id)--- product_table
 
 so above, you've got product_table.id, you've got the collections table
 (secondary), and you've got the primary join, although i think you mean
 for it to be
 primaryjoin=collections_products_table.c.parent_id==product_table.c.id. the
 mapper also needs you to give it the secondaryjoin,
 secondaryjoin=collections_products_table.c.collection_id==product_table.c.id,
 because it sees that there is more than one way to make a join between
 products and collections so it can't go any further (you could say
 that it should use the process of elimination since you gave it the
 primaryjoin, but that's a topic for another discussion :) ).  so set up
 primaryjoin and secondaryjoin and you're good to go.


Thanks Mike! I think I speak for many here in saying that we really
appreciate how helpful you are on this list in addition to all the work
that has obviously gone into SA itself! ( Looking forward to that book,
btw! )

I had to rejiggle some column names to match your example but for any
interested, below is what works ( so far! ). I'm sure it would have
taken me hours of frustration to figure it out myself. If something
looks funny in the below and anyone wants to correct me, please do!


product_table = Table('products', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(100)),
    Column('description', String, nullable=True),
    Column('ordering', Integer, default=1),
    Column('price', Numeric, default=0.00),
    Column('pst', Numeric, default=0.06),
    Column('gst', Numeric, default=0.07),
    Column('plt', Numeric, default=0),
    # is this product a collection ( basket, upgrade )
    Column('collection', Boolean, default='0', nullable=False),
)

# many to many of collection-products to products
collections_products_table = Table('collections_products', metadata,
Column('parent_id', Integer, ForeignKey('products.id') ),
Column('child_id', Integer, ForeignKey('products.id') ),
)

class Product(object):
pass

assign_mapper(session.context, Product, product_table, properties={
    'children': relation(Product, secondary=collections_products_table,
        primaryjoin=collections_products_table.c.parent_id==product_table.c.id,
        secondaryjoin=collections_products_table.c.child_id==product_table.c.id,
        backref='products', lazy=True),
})
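
A small usage sketch against the mapping above; the attribute values are made
up, and the flush call assumes the SessionContext passed to assign_mapper
exposes the current session as session.context.current:

basket = Product()
basket.name = 'Gift basket'
basket.collection = True

soap = Product()
soap.name = 'Soap'

basket.children.append(soap)       # writes a row into collections_products
session.context.current.flush()

# basket.children  -> [soap]
# soap.products    -> [basket]     # via the 'products' backref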







[sqlalchemy] stuck on self referrencing mapper thingy

2007-11-09 Thread iain duncan

I've tried to find something similar in the SA 0.3 docs but haven't
managed yet. They are dense though so I could easily have missed
something. Apologies if this seems a simple problem! :/

I have a product table, some are allowed to be collections ( ie gift
basket. ) They will have children, which are in turn products. I put in
a boolean attribute so I can control whether they are collections, and I
only need one layer deep, so products can be collections or children,
and not both. Each product can be the child of many collections.

I was hoping I could handle this by having a many to many table of
parent products to children so I could do:
collection.children

Below is what I have tried and failed with:

# many to many of collection-products to products
collections_products_table = Table('collections_products', metadata,
Column('collection_id', Integer, ForeignKey('products.id') ),
Column('product_id', Integer, ForeignKey('products.id') ),
)

#mapper
assign_mapper(session.context, Product, product_table, properties={
'children': relation( Product, secondary=collections_products_table, 
lazy=True, )
})


or 
assign_mapper(session.context, Product, product_table, properties={
'children': relation( Product, secondary=collections_products_table, 
primaryjoin =
collections_products_table.c.collection_id==product_table.c.id,
lazy=True ),
})


Neither of the above work. Tg loads ok but when I try to make a product
I get the following:

ArgumentError: Error determining primary and/or secondary join for
relationship 'children' between mappers 'Mapper|Product|products' and
'Mapper|Product|products'.  If the underlying error cannot be corrected,
you should specify the 'primaryjoin' (and 'secondaryjoin', if there is
an association table present) keyword arguments to the relation()
function (or for backrefs, by specifying the backref using the backref()
function with keyword arguments) to explicitly specify the join
conditions.  Nested error is Can't determine join between 'products' and
'collections_products'; tables have more than one foreign key constraint
relationship between them.  Please specify the 'onclause' of this join
explicitly.

I'm in over my head here, so if anyone has the time to comment on the
above that would be luverly.

Thanks
Iain






[sqlalchemy] rlike with assign_mapper?

2007-11-01 Thread iain duncan

Others gave some pointers a while back on using rlike with query
filters. I'm wondering whether there is some way to get rlike with
assign_mapper syntax?

I'm using right now

Resource.select_by( **query_dict )

where query dict is name/val pairs, I'd like to be able to make those
name/val pairs be re matches. Any tips most appreciated!

Thanks
Iain






[sqlalchemy] Re: vote for Python - PLEASE!

2007-10-21 Thread iain duncan

On Fri, 2007-19-10 at 17:06 -0700, Monty Taylor wrote:
 Daniel Haus wrote:
  Looks like you're frighteningly successful. You're right, python could
  use much more love, but look at this! Obviously the poll is not
  representative anymore, is it...
 
 Yeah - a little skewed there.
 
 On the other hand, the poll wasn't exactly very scientific in the first 
 place. Maybe it'll still trick someone into at least making the python 
 links on the MySQL website actually work. :)

I voted, and I'm not lying. Seriously though, maybe SA is affecting
that sort of thing in a bigger way than we think! First thing I do now
for any misc db utility is figure out whether I can just use SA and the
autoload feature to do it in python! =)

iain






[sqlalchemy] Re: RLIKE, REGEXP

2007-10-21 Thread iain duncan

On Wed, 2007-10-10 at 16:41 -0700, jason kirtland wrote:
 Jim Musil wrote:
  I don't see support for RLIKE, NOT RLIKE, or REGEXP anywhere.
  
  Is there support for this?
 
 Yes, you can use these operators and any others via the .op() method:
 
table.c.col1.op('rlike')('re')
not_(table.c.col1.op('rlike')('re'))
table.c.col1.op('regexp')('re')

Is it possible to use them with the ORM too? Or am I misunderstanding
the above? If someone has the time to post an example of how one would
do an rlike version of 

Resource.select_by( foo=bar )

I'd love to see it.
Thanks
Iain






[sqlalchemy] Re: RLIKE, REGEXP

2007-10-21 Thread iain duncan

On Sun, 2007-21-10 at 17:12 -0400, Michael Bayer wrote:
 
 On Oct 21, 2007, at 4:36 PM, Paul Johnston wrote:
 
 
  Hi,
 
  Resource.select_by( foo=bar )
 
 
  Untested, but I reckon this will work:
 
  Resource.query.filter(Resource.foo.op('rlike')('bar'))
 
 
 might need to call op() off the Table instance for now, might not  
 have added op() to the class-based properties

Excuse what may be a stupid question, but is this for both 0.3 and 0.4
or just the new stuff? ( I know, I gotta upgrade soon now! )

Thanks
Iain
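
Both spellings from the exchange above written out, assuming resource_table
is the Table that Resource is mapped to and a column called foo (0.4-era
query API):

# via the class-level attribute, if op() is available there:
Resource.query.filter(Resource.foo.op('rlike')('bar')).all()

# via the underlying Table column, per the caveat above:
Resource.query.filter(resource_table.c.foo.op('rlike')('bar')).all()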






[sqlalchemy] deletion behavior question

2007-02-12 Thread iain duncan

I would like some objects that are related through many-to-many tables
to delete the many-to-many entry on deletion, but NOT the endpoint. It
seems that cascade="all" deletes both, and no arg to cascade leaves
leftover invalid entries in the many-to-many table. Is there a suggested way
to deal with this?

Should I add a destructor to the classes that wipes out the many to many
entry with a sql delete clause?

Thanks
Iain
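
One way this is commonly handled (a hedged sketch with hypothetical
Parent/Assoc/Child names, not necessarily the only answer): map the join
table as an association object, put delete-orphan cascade only on the
parent-to-association relation, and leave the association-to-endpoint
relation uncascaded, so deleting a parent removes the join rows but not the
endpoints. The import line is the 0.4+ spelling:

from sqlalchemy.orm import mapper, relation

mapper(Parent, parent_table, properties={
    # join rows live and die with the parent
    'child_associations': relation(Assoc, cascade='all, delete-orphan'),
})
mapper(Assoc, assoc_table, properties={
    # no cascade here, so the endpoint rows are left alone
    'child': relation(Child),
})
mapper(Child, child_table)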





[sqlalchemy] offlist thanks Re: [sqlalchemy] Re: making crud with assign_mapper

2007-02-07 Thread iain duncan

On Wed, 2007-07-02 at 11:53 +0200, svilen wrote:
  If you have the time and know how, do you know what the efficient
  way to iterate through all the class definitions in a module would
  be to grab all the classes that have a certain member?
 
 import module
 for k, v in module.__dict__.iteritems():
     if isinstance(v, type):
         if issubclass(v, somebase):   # i would advise this, it's up to u
             if hasattr(v, 'yourmember'):
                 # ... do something with v
 

Thanks! I didn't know how to use iteritems that way.
 btw. u should read the sources _more_, reading the docs aside.
 What's the point of using interpreted language _and_ open-source  
 otherwise?

yes I'm sure you're right. But the SA sources right now look a ways over
my head!

Anyway, thanks so much for the tips.

Iain






[sqlalchemy] Re: offlist thanks Re: [sqlalchemy] Re: making crud with assign_mapper

2007-02-07 Thread iain duncan

Darn, apologies to all. Clearly I am too sleepy to be doing this as I
can't even work my mail client. :/

Thanks

On Wed, 2007-07-02 at 01:58 -0800, iain duncan wrote:
 On Wed, 2007-07-02 at 11:53 +0200, svilen wrote:
   If you have the time and know how, do you know what the efficient
   way to iterate through all the class definitions in a module would
   be to grab all the classes that have a certain member?
  
  import module
  for k, v in module.__dict__.iteritems():
      if isinstance(v, type):
          if issubclass(v, somebase): # i would advise this, it's up to u
              if hasattr(v, 'yourmember'):
                  # ... do something with v
  
 
 Thanks! I didn't know how to use iteritems that way
  btw. u should read the sources _more_, reading the docs aside.
  What's the point of using interpreted language _and_ open-source  
  otherwise?
 
 yes I'm sure you're right. But the SA sources right now look a ways over
 my head!
 
 Anyway, thanks so much for the tips.
 
 Iain
 
 
 
  





[sqlalchemy] Re: Using assoc proxy with a regular field, help?

2007-02-06 Thread iain duncan

On Tue, 2007-06-02 at 06:25 -0200, Roger Demetrescu wrote:
 Hi iain
 
 On 2/5/07, iain duncan [EMAIL PROTECTED] wrote:
  Below is my code in case someone has time to look at it. At the moment it 
  works to get
  article.ordering as a list instead of one field. Is there a way to tell it 
  that this is only supposed to be one item?
 
 uselist is your friend..  :)

Great, that did it. ( It's actually on the ordering field that I needed
it, but it worked ). Now after selecting an article I can use ordering
directly as a field: 

[(a.article_name, a.ordering ) for a in Article.select_by(page_id=2)]

One thing I'm still not clear on is whether I can do an order_by on an
object field, or whether that always needs to be with the select on
table notation like so:

arts = Article.select(page_table.c.page_id==2,
order_by=[page_article_table.c.ordering])

Is there some way to do the above like one of these but with ordering
added now that ordering is accessible as either
page_article_table.c.ordering or Article.ordering:

arts = Article.select_by(page_id=2) 

arts = Page.get(2).articles   


A big thanks to the makers of association proxy and session context and
assign mapper! 

Thanks
Iain
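
For completeness, the shape of the uselist hint (a hedged sketch only,
reusing the names from the code in the original post below): declaring the
backing relation with uselist=False makes it return a single
PageArticleAssoc instead of a list, so the
AssociationProxy('ordering_association', 'ordering') on Article yields one
value rather than a list:

mapper(Article, article_table, properties={
    'ordering_association': relation(PageArticleAssoc, uselist=False, lazy=False),
})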





[sqlalchemy] Using assoc proxy with a regular field, help?

2007-02-05 Thread iain duncan

Hi folks. I have worked my way through the association and association
proxy examples and almost got what I want going, but I'm stuck on a
couple of points.

I have an assoc table with two foreign keys and one extra numerical
field. In the example in the docs, the third extra field is a third
foreign key to a table instead of an integer, ( in that case it is for
Users )

I see how the example grabs the user data by using:

mapper(KeywordAssociation, itemkeywords_table,
primary_key=[itemkeywords_table.c.article_id, 
itemkeywords_table.c.keyword_id],
properties={
'keyword' : relation(Keyword, lazy=False), 
'user' : relation(User, lazy=False) 
}
)

which can then be accessed directly using the association proxy off of

class Article(object):
keywords = AssociationProxy('keyword_associations', 'keyword')

thus allowing us to use:  article.keyword.keyword_name  
instead of: article.keyword_association.keyword.keyword_name


I'm trying to figure out how to do the same thing with an extra property that
is not a foreign key, so that I can do article.ordering, where ordering is the
integer column in the assoc table.

Below is my code in case someone has time to look at it. At the moment it works 
to get
article.ordering as a list instead of one field. Is there a way to tell it that 
this is only supposed to be one item?

Thanks
Iain


page_table = Table('pages', metadata,
Column('page_id', Integer, primary_key = True),
Column('page_name', String(50))
)
article_table = Table('articles', metadata,
Column('article_id', Integer, primary_key = True),
Column('article_name', String(150) ),
)
page_article_table = Table('page_article', metadata,
Column('page_id', Integer, ForeignKey('pages.page_id')),
Column('article_id', Integer, ForeignKey('articles.article_id')),
Column('ordering', Integer ),
)

# class definitions
class Page(object):
def __init__(self, name):
self.page_name = name
# create articles proxied association
articles = AssociationProxy('article_associations', 'article')

class Article(object):
def __init__(self, name):
self.article_name = name
# this works to get the ordering field, but gets it as a list instead of one int
ordering = AssociationProxy('ordering_association', 'ordering')

class PageArticleAssoc(object):
pass

# Page mapper, relates to articles via Association object
mapper(Page, page_table, properties={
'article_associations': relation(PageArticleAssoc, lazy=False, cascade='all, delete-orphan')
}
)

# article mapper
mapper(Article, article_table, properties={
'pages': relation(PageArticleAssoc, lazy=False, cascade='all, delete-orphan'),
}
)

# mapper for PageArticleAssoc
mapper( PageArticleAssoc, page_article_table,
primary_key= [ page_article_table.c.page_id, 
page_article_table.c.article_id ],
properties={
'article' : relation(Article, lazy=False), 
'page' : relation(Page, lazy=False), 
}
)







[sqlalchemy] Re: sqlalchemy lost connection?

2007-02-04 Thread iain duncan

On Sun, 2007-04-02 at 10:33 -0200, Jorge Godoy wrote:
 iain duncan [EMAIL PROTECTED] writes:
 
  ridiculous but at least it works. I suppose in the case of a web app the
  cron job could even be on any old server using wget. 
 
 I'd put the cron job on the same server as the app... :-)  The load is
 ridiculous and there's no need to have an extra machine (be it old or not). 

I meant in the case that you are forced to deploy in a serving
arrangement that won't allow cron jobs, as is sometimes the case on
budget hosts.

Iain






[sqlalchemy] Re: sqlalchemy lost connection?

2007-02-04 Thread iain duncan

On Sun, 2007-04-02 at 22:00 +, Michael Bayer wrote:
 three people have replied to this user and nobody has referenced the
 documented (as well as FAQ'ed) feature specifically built to deal with
 this ?!  sorry im so cranky...but man you guys have to help me a
 little more :)
 
 use pool_recycle=some number of seconds less than 8 hours on your
 create_engine().

Sorry Mike, I didn't know that solution. Mostly because when I hit the
problem with TG nobody mentioned it to me either! 

While I did read the mention of that in your ( excellent ) docs, it was
frankly over my head and I didn't realize it would cut out the problem.

Iain
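
The setting in question, for reference (the URL is a placeholder; 3600 is
just a value comfortably under MySQL's default 8-hour idle timeout):

from sqlalchemy import create_engine

engine = create_engine(
    'mysql://user:passwd@localhost/db_name',
    pool_recycle=3600,   # recycle pooled connections after an hour
)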






[sqlalchemy] doc bug for Association Object example code

2007-02-04 Thread iain duncan

Not sure if this is the right place, hope so. 

In the example for Association Object on
http://www.sqlalchemy.org/docs/datamapping.myt#datamapping

The bottom part reads:

for a in alist:
    for k in a.keywords:
        if k.keyword.name == 'jacks_stories':
            print k.user.user_name


Line 3 should be:

        if k.keyword.keyword_name == 'jacks_stories':


( Tested it out ).

Thanks
iain





[sqlalchemy] Re: Assign mapper many-to-many with extra columns

2007-02-03 Thread iain duncan

On Fri, 2007-02-02 at 21:41 -0500, Karl Guertin wrote:
 Did my suggestion of dropping 'ordering' off of the article and adding
 an order_by clause to the relation not work?
 
 # article mapper
 assign_mapper( session.context, Article, article_table)
 
 # page mapper with m-to-m relation on articles
 assign_mapper(session.context, Page, page_table, properties = {
 'articles': relation(Article, secondary=page_article_table,
              order_by=page_article_table.c.ordering,
              backref='pages', lazy=False)})
 
 assign_mapper( session.context, Page_Article, page_article_table )
 

No, I got this error message:

InvalidRequestError: Given column 'page_article.ordering', attached to
table 'page_article', failed to locate a corresponding column from table
'article_36c9'


Thanks,
Iain







[sqlalchemy] Re: sqlalchemy lost connection?

2007-02-03 Thread iain duncan

On Sat, 2007-03-02 at 20:53 -0500, Mike Bernson wrote:
 Mysql has a timeout idle connection.  You might be running into this. I
 think it defaults to 8 hours It sounds like you are running into the
 idle timeout.
 
 dischdennis wrote:
  Hi I am using sqlalchemy with zope and when I have the server running
  a while ( 6 hours) without interaction, I cannot access any site,
  (OperationalError) (2013, 'Lost connection to MySQL server during
  query'),
  and have to restart the server.

yeah, this is a total freaking pain with MySQL. One of the recommended
deployment hacks is to put a cron job on your server to make a pointless
hit on the db server in question once an hour or whatever. Seems
ridiculous but at least it works. I suppose in the case of a web app the
cron job could even be on any old server using wget. 

Iain






[sqlalchemy] Re: Assign mapper many-to-many with extra columns

2007-02-03 Thread iain duncan

On Sat, 2007-03-02 at 18:44 -0500, Karl Guertin wrote:
 On 2/3/07, iain duncan [EMAIL PROTECTED] wrote:
  InvalidRequestError: Given column 'page_article.ordering', attached to
  table 'page_article', failed to locate a corresponding column from table
  'article_36c9'
 
 I'd think that you're missing the .c. in the middle:
 
 page_article.c.ordering

Nope, just tried again. The mappers are this:

assign_mapper( session.context, Page_Article, page_article_table )
assign_mapper( session.context, Article, article_table ) 

assign_mapper(session.context, Page, page_table, properties = {
   'articles': relation(Article, secondary=page_article_table,
   order_by=page_article_table.c.ordering,
   backref='pages', lazy=False)})

And the error message is still:
InvalidRequestError: Given column 'page_article.ordering', attached to
table 'page_article', failed to locate a corresponding column from table
'article_61f0'

I am sure that I have dropped all tables, re-run the tg-admin sql create
command, and restarted the shell with fresh imports. 

It looks like association proxy is what I need, so I've been spending
the day reading the SA docs thoroughly. Brain is full!

Thanks for the help
Iain

