Re: [sqlalchemy] Re: can I use SA to dump a table's DDL to a file?

2011-06-02 Thread Randy Syring
That was it, thanks.  I was trying to go through the columns and look 
at their foreign_keys collections, but when I set those values it didn't 
affect the output.  Reflects my ignorance of SA, obviously.


Thanks again.

--
Randy Syring
Intelicom
Direct: 502-276-0459
Office: 502-212-9913

For the wages of sin is death, but the
free gift of God is eternal life in
Christ Jesus our Lord (Rom 6:23)


On 06/02/2011 11:03 AM, Michael Bayer wrote:

I think if you went through the Table object's .constraints collection and looked for 
ForeignKeyConstraint objects you could set up the "onupdate" and "ondelete" 
attributes on them, sure.





[sqlalchemy] Re: Sqlalchemy exception for mysql in window: TypeError: immutabledict object is immutable

2011-06-02 Thread zhwolf
Thanks, the patch works now.

Jun Zhou




Re: [sqlalchemy] Dynamic relationship

2011-06-02 Thread Ben Chess
Sounds far more complicated than something I'd want to take on.
Thanks for your thoughts and the detailed response.

Ben

On Thu, Jun 2, 2011 at 6:28 PM, Michael Bayer  wrote:
>
> On Jun 2, 2011, at 8:25 PM, Ben Chess wrote:
>
>> On Thu, Jun 2, 2011 at 5:18 PM, Michael Bayer  
>> wrote:
>>>
>>> On Jun 2, 2011, at 8:11 PM, Ben Chess wrote:
>>>
 I'm not intending for the contents of the BLOB to be readable to
 MySQL.  It would only be cracked open and read from within Python.
 Meaning Python only knows what the key actually is.  So yeah, I
 understand the caveats of this approach.  I merely want to provide a
 mechanism to, as a second & separate SELECT query, load the relationed
 instance and be able to access it from the child python instance.

 Hopefully that clears some things up.
>>>
>>> How do you want to identify what row contains the target key ?   Is the 
>>> comparison on the SQL side or the Python side ?
>>
>> In Python.  Below is an example of what I have.
>>
>> class Account:
>>  data = Column(JSONType)
>>
>>  @property
>>  def salesperson(self):
>>    session = orm.session.object_session(self)
>>    return session.query(Salesperson, id=self.data['salesperson_id']).one()
>
> OK that comparison, assuming you mean to have a "filter()" in there, is on 
> the SQL side.  The extraction of the comparison value from the JSON is python 
> side.
>
> relationship() really has no way to deal with load or persist operations that 
> aren't about columns in the two tables being compared.   Also, an eager load, 
> at least as far as being able to load Parent and related Child objects (I'm 
> using Parent/Child instead of Account/Salesperson here) with less than N+1 
> queries is not possible here, since that requires a set of rows representing 
> parents be compared to a set of rows representing children on the SQL side 
> and that's not an option.   The relationship also can't really do too much 
> with SQL criteria, at least not query.join(), so it would be limited at best 
> with class level behavior.
>
> So what you would get from a hypothetical relationship are:
>
> 1. the attribute is managed, such that it loads when non-populated, remains 
> in __dict__ for subsequent returns, gets expired when the object is expired, 
> after a commit(), etc.
> 2. if you assign to the attribute, "cascade" can establish that the child 
> object or objects are brought into the session
> 3. merge() will merge it in
> 4. potentially the unit of work could fire off rules to establish necessary 
> state on the parent or child based on the object association as well as its 
> history
>
> There is a rough outline of a new extension here, where the end user could 
> plug in functions to establish "how am I loading the object", "I'm flushing, 
> what needs to be populated", "here's my SQL comparison logic", but this would 
> be a big job.  It would be daunting for new users, it would have a complex 
> usage contract.  A rudimentary version could probably happen quickly, but 
> when people start using it for real, the number of edge cases is huge.   
> Could be a nice feature, but would be an enormous maintenance job for many 
> months, or alternatively a little used and weakly supported feature.   
> Attached is a sketch of part of it, does a little bit of the above.    You're 
> free to use it but it uses more of the internals than I think you'd want to 
> (these are internals that aren't guaranteed to stay the same).
>
> But beyond all the work it would require for testing and maintenance, how 
> useful would it actually be, and how confusing would it be that you can kind 
> of build an attribute like this on your own outside of SQLAlchemy, as you've 
> done ?
>
> If all you need is so that hitting salesperson doesn't do the load every 
> time, @memoized_property is easy for that.   Another example for that is 
> attached, it also expires itself when the object is expired.   Doesn't do as 
> much.  It's not clear how much of relationship() you'd really need.
>
> Of course if you could find a way to copy this one aspect of your JSON 
> structure to a plain foreign key attribute (and there's many ways to fire off 
> an event like that), all of this goes away and you get SQL-level eager 
> loading and joins and all that.
>
>
>
>
>
>
>
>
>>
>>

 On Thu, Jun 2, 2011 at 4:10 PM, Michael Bayer  
 wrote:
> Using a BLOB as a key is a really bad idea and wont work on all backends, 
> but other than the database-level limitations inherent, SQLAlchemy will 
> let you set up whatever column you'd like to use as the key just fine 
> within a relationship().     Guessing what the problem might be.  Foreign 
> key ?  If you rely upon your table metadata to emit CREATE TABLE 
> statements, you can forego using ForeignKey with your table metadata and 
> configure the foreign key data on the relationship itself using 
> "foreign_keys=[table.c.my_referencing_blob_column], 

Re: [sqlalchemy] Dynamic relationship

2011-06-02 Thread Michael Bayer

On Jun 2, 2011, at 8:25 PM, Ben Chess wrote:

> On Thu, Jun 2, 2011 at 5:18 PM, Michael Bayer  
> wrote:
>> 
>> On Jun 2, 2011, at 8:11 PM, Ben Chess wrote:
>> 
>>> I'm not intending for the contents of the BLOB to be readable to
>>> MySQL.  It would only be cracked open and read from within Python.
>>> Meaning Python only knows what the key actually is.  So yeah, I
>>> understand the caveats of this approach.  I merely want to provide a
>>> mechanism to, as a second & separate SELECT query, load the relationed
>>> instance and be able to access it from the child python instance.
>>> 
>>> Hopefully that clears some things up.
>> 
>> How do you want to identify what row contains the target key ?   Is the 
>> comparison on the SQL side or the Python side ?
> 
> In Python.  Below is an example of what I have.
> 
> class Account:
>  data = Column(JSONType)
> 
>  @property
>  def salesperson(self):
>session = orm.session.object_session(self)
>return session.query(Salesperson, id=self.data['salesperson_id']).one()

OK that comparison, assuming you mean to have a "filter()" in there, is on the 
SQL side.  The extraction of the comparison value from the JSON is python side.

relationship() really has no way to deal with load or persist operations that 
aren't about columns in the two tables being compared.   Also, an eager load, 
at least as far as being able to load Parent and related Child objects (I'm 
using Parent/Child instead of Account/Salesperson here) with less than N+1 
queries is not possible here, since that requires a set of rows representing 
parents be compared to a set of rows representing children on the SQL side and 
that's not an option.   The relationship also can't really do too much with SQL 
criteria, at least not query.join(), so it would be limited at best with class 
level behavior.

So what you would get from a hypothetical relationship are:

1. the attribute is managed, such that it loads when non-populated, remains in 
__dict__ for subsequent returns, gets expired when the object is expired, after 
a commit(), etc.
2. if you assign to the attribute, "cascade" can establish that the child 
object or objects are brought into the session
3. merge() will merge it in
4. potentially the unit of work could fire off rules to establish necessary 
state on the parent or child based on the object association as well as its 
history

There is a rough outline of a new extension here, where the end user could plug 
in functions to establish "how am I loading the object", "I'm flushing, what 
needs to be populated", "here's my SQL comparison logic", but this would be a 
big job.  It would be daunting for new users, it would have a complex usage 
contract.  A rudimentary version could probably happen quickly, but when people 
start using it for real, the number of edge cases is huge.   Could be a nice 
feature, but would be an enormous maintenance job for many months, or 
alternatively a little used and weakly supported feature.   Attached is a 
sketch of part of it, does a little bit of the above.  You're free to use it 
but it uses more of the internals than I think you'd want to (these are 
internals that aren't guaranteed to stay the same).

But beyond all the work it would require for testing and maintenance, how 
useful would it actually be, and how confusing would it be that you can kind of 
build an attribute like this on your own outside of SQLAlchemy, as you've done?

If all you need is so that hitting salesperson doesn't do the load every time, 
@memoized_property is easy for that.   Another example for that is attached, it 
also expires itself when the object is expired.   Doesn't do as much.  It's not 
clear how much of relationship() you'd really need.
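
As a rough illustration of the memoized_property idea (not the attached example itself; Base, JSONType and Salesperson are assumed to be defined as in the thread, and the filter() is a guess at the intended query):

from sqlalchemy import Column, Integer, orm
from sqlalchemy.util import memoized_property

class Account(Base):
    __tablename__ = 'account'
    id = Column(Integer, primary_key=True)
    data = Column(JSONType)

    @memoized_property
    def salesperson(self):
        # computed once, then cached in self.__dict__; a fuller version
        # would also clear the cached value when the instance is expired
        session = orm.object_session(self)
        return session.query(Salesperson).filter(
            Salesperson.id == self.data['salesperson_id']).one()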

Of course if you could find a way to copy this one aspect of your JSON 
structure to a plain foreign key attribute (and there's many ways to fire off 
an event like that), all of this goes away and you get SQL-level eager loading 
and joins and all that.
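
And a sketch of that last suggestion, using the 0.7 attribute event API to mirror the JSON value into a plain foreign key column; the column and class names follow the example above, and a Salesperson mapped to a "salesperson" table with an integer primary key is assumed:

from sqlalchemy import Column, ForeignKey, Integer, event
from sqlalchemy.orm import relationship

class Account(Base):
    __tablename__ = 'account'
    id = Column(Integer, primary_key=True)
    data = Column(JSONType)
    salesperson_id = Column(Integer, ForeignKey('salesperson.id'))
    salesperson = relationship("Salesperson")   # ordinary relationship, eager-loadable

def _sync_salesperson_id(target, value, oldvalue, initiator):
    # fires when .data is assigned as a whole (not on in-place mutation)
    if value is not None:
        target.salesperson_id = value.get('salesperson_id')

event.listen(Account.data, 'set', _sync_salesperson_id)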







> 
> 
>>> 
>>> On Thu, Jun 2, 2011 at 4:10 PM, Michael Bayer  
>>> wrote:
 Using a BLOB as a key is a really bad idea and wont work on all backends, 
 but other than the database-level limitations inherent, SQLAlchemy will 
 let you set up whatever column you'd like to use as the key just fine 
 within a relationship(). Guessing what the problem might be.  Foreign 
 key ?  If you rely upon your table metadata to emit CREATE TABLE 
 statements, you can forego using ForeignKey with your table metadata and 
 configure the foreign key data on the relationship itself using 
 "foreign_keys=[table.c.my_referencing_blob_column], 
 primary_join=table.c.my_key_blob_column==table.c.my_referencing_blob_column".
 
 As far as your eager load, both joined eager loading and subquery eager 
 loading rely on being able to JOIN to the target so if your backend is not 
 letting the JOIN part happen, I'm not sure exactly what SQL you'd like to emit.

Re: [sqlalchemy] Dynamic relationship

2011-06-02 Thread Ben Chess
On Thu, Jun 2, 2011 at 5:18 PM, Michael Bayer  wrote:
>
> On Jun 2, 2011, at 8:11 PM, Ben Chess wrote:
>
>> I'm not intending for the contents of the BLOB to be readable to
>> MySQL.  It would only be cracked open and read from within Python.
>> Meaning Python only knows what the key actually is.  So yeah, I
>> understand the caveats of this approach.  I merely want to provide a
>> mechanism to, as a second & separate SELECT query, load the relationed
>> instance and be able to access it from the child python instance.
>>
>> Hopefully that clears some things up.
>
> How do you want to identify what row contains the target key ?   Is the 
> comparison on the SQL side or the Python side ?

In Python.  Below is an example of what I have.

class Account:
  data = Column(JSONType)

  @property
  def salesperson(self):
    session = orm.session.object_session(self)
    return session.query(Salesperson, id=self.data['salesperson_id']).one()


>>
>> On Thu, Jun 2, 2011 at 4:10 PM, Michael Bayer  
>> wrote:
>>> Using a BLOB as a key is a really bad idea and wont work on all backends, 
>>> but other than the database-level limitations inherent, SQLAlchemy will let 
>>> you set up whatever column you'd like to use as the key just fine within a 
>>> relationship().     Guessing what the problem might be.  Foreign key ?  If 
>>> you rely upon your table metadata to emit CREATE TABLE statements, you can 
>>> forego using ForeignKey with your table metadata and configure the foreign 
>>> key data on the relationship itself using 
>>> "foreign_keys=[table.c.my_referencing_blob_column], 
>>> primary_join=table.c.my_key_blob_column==table.c.my_referencing_blob_column".
>>>
>>> As far as your eager load, both joined eager loading and subquery eager 
>>> loading rely on being able to JOIN to the target so if your backend is not 
>>> letting the JOIN part happen, I'm not sure exactly what SQL you'd like to 
>>> emit.    If you'd like the "lazy" loader to just fire off immediately, you 
>>> can use the "immediate" style of loader - lazy="immediate" or 
>>> immediateload() as an option - but that won't save you on SELECT statements.
>>>
>>>
>>>
>>>
>>> On Jun 2, 2011, at 6:15 PM, Ben Chess wrote:
>>>
 Hi,

 I want to establish a relationship with an object whose key is defined
 inside a JSON BLOB column in the child.  Naively, I know I can do this
 via a regular python @property that uses object_session() to then do a
 query() using the id from inside the blob.  Is there a better way that
 lets sqlalchemy manage the relationship and caching?

 Also, I'd like to be able to sometimes eagerly load this
 relationship.  Obviously I can't do in with a JOIN of the original
 query, but is there some way to define the relationship, perhaps using
 a column_property, to be able to undefer()?

 Thanks,
 Ben Chess


>>>
>>>
>>>
>>
>>
>
>
>




Re: [sqlalchemy] Dynamic relationship

2011-06-02 Thread Michael Bayer

On Jun 2, 2011, at 8:11 PM, Ben Chess wrote:

> I'm not intending for the contents of the BLOB to be readable to
> MySQL.  It would only be cracked open and read from within Python.
> Meaning Python only knows what the key actually is.  So yeah, I
> understand the caveats of this approach.  I merely want to provide a
> mechanism to, as a second & separate SELECT query, load the relationed
> instance and be able to access it from the child python instance.
> 
> Hopefully that clears some things up.

How do you want to identify what row contains the target key ?   Is the 
comparison on the SQL side or the Python side ?




> 
> On Thu, Jun 2, 2011 at 4:10 PM, Michael Bayer  
> wrote:
>> Using a BLOB as a key is a really bad idea and wont work on all backends, 
>> but other than the database-level limitations inherent, SQLAlchemy will let 
>> you set up whatever column you'd like to use as the key just fine within a 
>> relationship(). Guessing what the problem might be.  Foreign key ?  If 
>> you rely upon your table metadata to emit CREATE TABLE statements, you can 
>> forego using ForeignKey with your table metadata and configure the foreign 
>> key data on the relationship itself using 
>> "foreign_keys=[table.c.my_referencing_blob_column], 
>> primary_join=table.c.my_key_blob_column==table.c.my_referencing_blob_column".
>> 
>> As far as your eager load, both joined eager loading and subquery eager 
>> loading rely on being able to JOIN to the target so if your backend is not 
>> letting the JOIN part happen, I'm not sure exactly what SQL you'd like to 
>> emit.If you'd like the "lazy" loader to just fire off immediately, you 
>> can use the "immediate" style of loader - lazy="immediate" or 
>> immediateload() as an option - but that won't save you on SELECT statements.
>> 
>> 
>> 
>> 
>> On Jun 2, 2011, at 6:15 PM, Ben Chess wrote:
>> 
>>> Hi,
>>> 
>>> I want to establish a relationship with an object whose key is defined
>>> inside a JSON BLOB column in the child.  Naively, I know I can do this
>>> via a regular python @property that uses object_session() to then do a
>>> query() using the id from inside the blob.  Is there a better way that
>>> lets sqlalchemy manage the relationship and caching?
>>> 
>>> Also, I'd like to be able to sometimes eagerly load this
>>> relationship.  Obviously I can't do in with a JOIN of the original
>>> query, but is there some way to define the relationship, perhaps using
>>> a column_property, to be able to undefer()?
>>> 
>>> Thanks,
>>> Ben Chess
>>> 
>>> 
>> 
>> 
>> 
> 
> 




Re: [sqlalchemy] Dynamic relationship

2011-06-02 Thread Ben Chess
I'm not intending for the contents of the BLOB to be readable to
MySQL.  It would only be cracked open and read from within Python.
Meaning Python only knows what the key actually is.  So yeah, I
understand the caveats of this approach.  I merely want to provide a
mechanism to, as a second & separate SELECT query, load the related
instance and be able to access it from the child python instance.

Hopefully that clears some things up.

On Thu, Jun 2, 2011 at 4:10 PM, Michael Bayer  wrote:
> Using a BLOB as a key is a really bad idea and wont work on all backends, but 
> other than the database-level limitations inherent, SQLAlchemy will let you 
> set up whatever column you'd like to use as the key just fine within a 
> relationship().     Guessing what the problem might be.  Foreign key ?  If 
> you rely upon your table metadata to emit CREATE TABLE statements, you can 
> forego using ForeignKey with your table metadata and configure the foreign 
> key data on the relationship itself using 
> "foreign_keys=[table.c.my_referencing_blob_column], 
> primary_join=table.c.my_key_blob_column==table.c.my_referencing_blob_column".
>
> As far as your eager load, both joined eager loading and subquery eager 
> loading rely on being able to JOIN to the target so if your backend is not 
> letting the JOIN part happen, I'm not sure exactly what SQL you'd like to 
> emit.    If you'd like the "lazy" loader to just fire off immediately, you 
> can use the "immediate" style of loader - lazy="immediate" or immediateload() 
> as an option - but that won't save you on SELECT statements.
>
>
>
>
> On Jun 2, 2011, at 6:15 PM, Ben Chess wrote:
>
>> Hi,
>>
>> I want to establish a relationship with an object whose key is defined
>> inside a JSON BLOB column in the child.  Naively, I know I can do this
>> via a regular python @property that uses object_session() to then do a
>> query() using the id from inside the blob.  Is there a better way that
>> lets sqlalchemy manage the relationship and caching?
>>
>> Also, I'd like to be able to sometimes eagerly load this
>> relationship.  Obviously I can't do in with a JOIN of the original
>> query, but is there some way to define the relationship, perhaps using
>> a column_property, to be able to undefer()?
>>
>> Thanks,
>> Ben Chess
>>
>>
>
>
>




Re: [sqlalchemy] Dynamic relationship

2011-06-02 Thread Michael Bayer
Using a BLOB as a key is a really bad idea and won't work on all backends, but 
other than the database-level limitations inherent, SQLAlchemy will let you set 
up whatever column you'd like to use as the key just fine within a 
relationship(). Guessing what the problem might be.  Foreign key ?  If you 
rely upon your table metadata to emit CREATE TABLE statements, you can forego 
using ForeignKey with your table metadata and configure the foreign key data on 
the relationship itself using 
"foreign_keys=[table.c.my_referencing_blob_column], 
primaryjoin=table.c.my_key_blob_column==table.c.my_referencing_blob_column".

As far as your eager load, both joined eager loading and subquery eager loading 
rely on being able to JOIN to the target so if your backend is not letting the 
JOIN part happen, I'm not sure exactly what SQL you'd like to emit.  If you'd 
like the "lazy" loader to just fire off immediately, you can use the 
"immediate" style of loader - lazy="immediate" or immediateload() as an option 
- but that won't save you on SELECT statements.
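
Put together, the configuration being described might look roughly like this; the table and column names are placeholders following the ones above, and classical mappings are used for brevity:

from sqlalchemy import Column, Integer, LargeBinary, MetaData, Table
from sqlalchemy.orm import mapper, relationship

metadata = MetaData()

parent = Table('parent', metadata,
    Column('id', Integer, primary_key=True),
    Column('my_key_blob_column', LargeBinary))

child = Table('child', metadata,
    Column('id', Integer, primary_key=True),
    Column('my_referencing_blob_column', LargeBinary))

class Parent(object):
    pass

class Child(object):
    pass

mapper(Parent, parent)
mapper(Child, child, properties={
    'parent': relationship(Parent,
        # no ForeignKey in the table metadata; describe it on the relationship
        foreign_keys=[child.c.my_referencing_blob_column],
        primaryjoin=parent.c.my_key_blob_column ==
                    child.c.my_referencing_blob_column,
        # fire off the load right after each Child row loads
        lazy='immediate')
})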




On Jun 2, 2011, at 6:15 PM, Ben Chess wrote:

> Hi,
> 
> I want to establish a relationship with an object whose key is defined
> inside a JSON BLOB column in the child.  Naively, I know I can do this
> via a regular python @property that uses object_session() to then do a
> query() using the id from inside the blob.  Is there a better way that
> lets sqlalchemy manage the relationship and caching?
> 
> Also, I'd like to be able to sometimes eagerly load this
> relationship.  Obviously I can't do in with a JOIN of the original
> query, but is there some way to define the relationship, perhaps using
> a column_property, to be able to undefer()?
> 
> Thanks,
> Ben Chess
> 
> 




[sqlalchemy] Dynamic relationship

2011-06-02 Thread Ben Chess
Hi,

I want to establish a relationship with an object whose key is defined
inside a JSON BLOB column in the child.  Naively, I know I can do this
via a regular python @property that uses object_session() to then do a
query() using the id from inside the blob.  Is there a better way that
lets sqlalchemy manage the relationship and caching?

Also, I'd like to be able to sometimes eagerly load this
relationship.  Obviously I can't do it with a JOIN of the original
query, but is there some way to define the relationship, perhaps using
a column_property, to be able to undefer()?

Thanks,
Ben Chess




Re: [sqlalchemy] Re: TRUNCATE sql statement

2011-06-02 Thread Michael Bayer
not to answer every question today, but we also added an "autocommit" flag for 
this kind of thing:

connection.execution_options(autocommit=True).execute("statement that normally 
isn't autocommit")

some background:

http://www.sqlalchemy.org/docs/core/connections.html#understanding-autocommit
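
Applied to the TRUNCATE from the thread below, that looks roughly like this (config as in RVince's snippet):

from sqlalchemy import engine_from_config

engine = engine_from_config(config, 'sqlalchemy.')
connection = engine.connect()

# ask the connection to autocommit this statement, rather than
# wrapping it in an explicit begin()/commit()
connection.execution_options(autocommit=True).execute(
    'truncate table cms_history')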



On Jun 2, 2011, at 3:16 PM, RVince wrote:

> I found the answer. I needs to be wrapped in a transaction, like this:
> 
>engine = engine_from_config(config, 'sqlalchemy.')
>connection = engine.connect()
>trans = connection.begin()
>try:
>connection.execute('truncate table cms_history;')
>trans.commit()
>except :
>trans.rollback()
> 
> 




[sqlalchemy] Re: TRUNCATE sql statement

2011-06-02 Thread RVince
I found the answer. It needs to be wrapped in a transaction, like this:

engine = engine_from_config(config, 'sqlalchemy.')
connection = engine.connect()
trans = connection.begin()
try:
    connection.execute('truncate table cms_history;')
    trans.commit()
except:
    trans.rollback()




[sqlalchemy] TRUNCATE sql statement

2011-06-02 Thread RVince
I am trying to issue a straight SQL TRUNCATE table command:

engine = engine_from_config(config, 'sqlalchemy.')
connection = engine.connect()
connection.execute('truncate table cms_history;')

Everything runs fine without error -- but the table does not truncate
-- nothing is affected. Am I missing something obvious to others in
trying to do this? Thanks, RVince




[sqlalchemy] Re: Long running transaction issues with commit

2011-06-02 Thread Aalok Sood
Yes its all so very clear now, kind of obvious :)
Thank you for taking out time to help me.

Regards
Aalok Sood.

On Jun 2, 9:20 pm, Michael Bayer  wrote:
> On Jun 2, 2011, at 11:35 AM, Aalok Sood wrote:
>
>
>
>
>
>
>
>
>
> > Hello Michael
>
> > Thanks a lot for your quick response.
> > I changed this loop to:
> > or i in range(1000):
> >   :     sleep(15)
> >   :     print "commiting"
> >   :     mm.name=u'old name'
> >   :     s.commit()
> >   :     sleep(25)
> >   :     mm.name=u'new name'
> >   :     print "commiting2"
> >   :     s.commit()
>
> > That is to say, I always have something in s.dirty when i commit. This
> > makes session interact with DB for sure. This loop works fine w/o any
> > exceptions.
> > So what you say will definitely solve the issue. However my doubts are
> > more theoretical in nature.
>
> > SQLAlchemy documentation says that if autocommit=False, on every
> > commit(), a new transaction is begun. So even if I do not have any
> > dirty changes and session need not send any data to the DB, still it
> > would have to contact the DB to begin a new transaction.
> > The behaviour as we saw, however, does not comply with this logic.
> > Can you please tell me where the loophole in my understanding of the
> > whole thing is.
>
> > Once again, Thanks a lot for your help. I really appreciate it.
>
> The confusion here is that the documentation for "commit", which I'm assuming 
> you're reading ishttp://www.sqlalchemy.org/docs/orm/session.html#committing, 
> refers to the term "transaction", which, while it's not deeply described at 
> that point for the sake of clarity, is not the same thing as an actual 
> transaction on an individual DBAPI connection connected to a database across 
> a network.
>
> An in-depth description of what "transaction" means with regards to the 
> session is later down the page 
> athttp://www.sqlalchemy.org/docs/orm/session.html#managing-transactions, 
> pretty much the first paragraph.
>
> To restate, a "transaction" with regards to the Session is a boundary which 
> will be terminated upon the next call to commit() or rollback().   The 
> transaction itself is an in-memory structure which includes a collection of 
> zero or more DBAPI connections.   When the transaction is committed or rolled 
> back, the commit() or rollback() method is ultimately called upon each DBAPI 
> connection that is present.  A DBAPI connection becomes present in the 
> current "transaction" when it is first asked to participate in a statement 
> execution.   For those engines for which no statement execution has occurred 
> within the scope of the Session's "transaction", no DBAPI connection is 
> procured and no  network communication occurs.
>
> Hope this clears it up !
>
>
>
>
>
>
>
>
>
> > Regards
> > Aalok Sood
>
> > On Jun 2, 8:17 pm, Michael Bayer  wrote:
> >> The Session doesn't interact with the database until statements are first 
> >> emitted, so while its being put into a new transaction each time with your 
> >> block of code, probably nothing is being sent to the DB.  If you stuck a 
> >> line:   s.execute("select 1") in there, that would likely wake it up.
>
> >> On Jun 2, 2011, at 9:32 AM, Aalok Sood wrote:
>
> >>> Hello Everyone
>
> >>> My mysql server wait_timeout is set to 35.
> >>> and if i run this code:
>
> >>> # Session s made with autocommit=False
> >>> mm=s.query(ss.Machine).get(1)
>
> >>> In [9]:
>
> >>> In [10]: for i in range(1000):
> >>>   :     sleep(15)
> >>>   :     print "commiting"
> >>>   :     s.commit()
> >>>   :     sleep(25)
> >>>   :     mm.name=u'new name'
> >>>   :     print "commiting2"
> >>>   :     s.commit()
>
> >>> Even though the second sleep is only for 25 seconds, I see an error
> >>> while commiting which says
> >>> 'Mysql server has gone away'
>
> >>> The SQLAlchemy documentation says that a new transaction is begun on a
> >>> commit(). If that is the case, I should not see the above error.
> >>> Maybe its an issue with commiting w/o any changes to the loaded
> >>> instances.
>
> >>> Can anyone throw some light on this.
> >>> Any help would be much appreciated.
>
>


Re: [sqlalchemy] Re: Long running transaction issues with commit

2011-06-02 Thread Michael Bayer

On Jun 2, 2011, at 11:35 AM, Aalok Sood wrote:

> Hello Michael
> 
> Thanks a lot for your quick response.
> I changed this loop to:
> or i in range(1000):
>   : sleep(15)
>   : print "commiting"
>   : mm.name=u'old name'
>   : s.commit()
>   : sleep(25)
>   : mm.name=u'new name'
>   : print "commiting2"
>   : s.commit()
> 
> That is to say, I always have something in s.dirty when i commit. This
> makes session interact with DB for sure. This loop works fine w/o any
> exceptions.
> So what you say will definitely solve the issue. However my doubts are
> more theoretical in nature.
> 
> SQLAlchemy documentation says that if autocommit=False, on every
> commit(), a new transaction is begun. So even if I do not have any
> dirty changes and session need not send any data to the DB, still it
> would have to contact the DB to begin a new transaction.
> The behaviour as we saw, however, does not comply with this logic.
> Can you please tell me where the loophole in my understanding of the
> whole thing is.
> 
> Once again, Thanks a lot for your help. I really appreciate it.

The confusion here is that the documentation for "commit", which I'm assuming 
you're reading is http://www.sqlalchemy.org/docs/orm/session.html#committing, 
refers to the term "transaction", which, while it's not deeply described at 
that point for the sake of clarity, is not the same thing as an actual 
transaction on an individual DBAPI connection connected to a database across a 
network.

An in-depth description of what "transaction" means with regards to the session 
is later down the page at 
http://www.sqlalchemy.org/docs/orm/session.html#managing-transactions , pretty 
much the first paragraph.

To restate, a "transaction" with regards to the Session is a boundary which 
will be terminated upon the next call to commit() or rollback().   The 
transaction itself is an in-memory structure which includes a collection of 
zero or more DBAPI connections.   When the transaction is committed or rolled 
back, the commit() or rollback() method is ultimately called upon each DBAPI 
connection that is present.  A DBAPI connection becomes present in the current 
"transaction" when it is first asked to participate in a statement execution.   
For those engines for which no statement execution has occurred within the 
scope of the Session's "transaction", no DBAPI connection is procured and no  
network communication occurs.

Hope this clears it up !






> 
> Regards
> Aalok Sood
> 
> 
> 
> On Jun 2, 8:17 pm, Michael Bayer  wrote:
>> The Session doesn't interact with the database until statements are first 
>> emitted, so while its being put into a new transaction each time with your 
>> block of code, probably nothing is being sent to the DB.  If you stuck a 
>> line:   s.execute("select 1") in there, that would likely wake it up.
>> 
>> On Jun 2, 2011, at 9:32 AM, Aalok Sood wrote:
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>> Hello Everyone
>> 
>>> My mysql server wait_timeout is set to 35.
>>> and if i run this code:
>> 
>>> # Session s made with autocommit=False
>>> mm=s.query(ss.Machine).get(1)
>> 
>>> In [9]:
>> 
>>> In [10]: for i in range(1000):
>>>   : sleep(15)
>>>   : print "commiting"
>>>   : s.commit()
>>>   : sleep(25)
>>>   : mm.name=u'new name'
>>>   : print "commiting2"
>>>   : s.commit()
>> 
>>> Even though the second sleep is only for 25 seconds, I see an error
>>> while commiting which says
>>> 'Mysql server has gone away'
>> 
>>> The SQLAlchemy documentation says that a new transaction is begun on a
>>> commit(). If that is the case, I should not see the above error.
>>> Maybe its an issue with commiting w/o any changes to the loaded
>>> instances.
>> 
>>> Can anyone throw some light on this.
>>> Any help would be much appreciated.
>> 
> 
> 




[sqlalchemy] Re: Long running transaction issues with commit

2011-06-02 Thread Aalok Sood
Hello Michael

Thanks a lot for your quick response.
I changed this loop to:

for i in range(1000):
    sleep(15)
    print "committing"
    mm.name = u'old name'
    s.commit()
    sleep(25)
    mm.name = u'new name'
    print "committing2"
    s.commit()

That is to say, I always have something in s.dirty when I commit. This
makes the session interact with the DB for sure. This loop works fine w/o any
exceptions.
So what you say will definitely solve the issue. However my doubts are
more theoretical in nature.

The SQLAlchemy documentation says that if autocommit=False, a new
transaction is begun on every commit(). So even if I do not have any
dirty changes and the session need not send any data to the DB, it
would still have to contact the DB to begin a new transaction.
The behaviour we saw, however, does not seem to comply with this logic.
Can you please tell me where the gap in my understanding of the
whole thing is?

Once again, Thanks a lot for your help. I really appreciate it.

Regards
Aalok Sood



On Jun 2, 8:17 pm, Michael Bayer  wrote:
> The Session doesn't interact with the database until statements are first 
> emitted, so while its being put into a new transaction each time with your 
> block of code, probably nothing is being sent to the DB.  If you stuck a 
> line:   s.execute("select 1") in there, that would likely wake it up.
>
> On Jun 2, 2011, at 9:32 AM, Aalok Sood wrote:
>
>
>
>
>
>
>
> > Hello Everyone
>
> > My mysql server wait_timeout is set to 35.
> > and if i run this code:
>
> > # Session s made with autocommit=False
> > mm=s.query(ss.Machine).get(1)
>
> > In [9]:
>
> > In [10]: for i in range(1000):
> >   :     sleep(15)
> >   :     print "commiting"
> >   :     s.commit()
> >   :     sleep(25)
> >   :     mm.name=u'new name'
> >   :     print "commiting2"
> >   :     s.commit()
>
> > Even though the second sleep is only for 25 seconds, I see an error
> > while commiting which says
> > 'Mysql server has gone away'
>
> > The SQLAlchemy documentation says that a new transaction is begun on a
> > commit(). If that is the case, I should not see the above error.
> > Maybe its an issue with commiting w/o any changes to the loaded
> > instances.
>
> > Can anyone throw some light on this.
> > Any help would be much appreciated.
>




Re: [sqlalchemy] Long running transaction issues with commit

2011-06-02 Thread Michael Bayer
The Session doesn't interact with the database until statements are first 
emitted, so while its being put into a new transaction each time with your 
block of code, probably nothing is being sent to the DB.  If you stuck a line:  
 s.execute("select 1") in there, that would likely wake it up.


On Jun 2, 2011, at 9:32 AM, Aalok Sood wrote:

> Hello Everyone
> 
> My mysql server wait_timeout is set to 35.
> and if i run this code:
> 
> # Session s made with autocommit=False
> mm=s.query(ss.Machine).get(1)
> 
> In [9]:
> 
> In [10]: for i in range(1000):
>   : sleep(15)
>   : print "commiting"
>   : s.commit()
>   : sleep(25)
>   : mm.name=u'new name'
>   : print "commiting2"
>   : s.commit()
> 
> Even though the second sleep is only for 25 seconds, I see an error
> while commiting which says
> 'Mysql server has gone away'
> 
> The SQLAlchemy documentation says that a new transaction is begun on a
> commit(). If that is the case, I should not see the above error.
> Maybe its an issue with commiting w/o any changes to the loaded
> instances.
> 
> Can anyone throw some light on this.
> Any help would be much appreciated.
> 
> 
> 
> 
> 
> 
> 
> 
> 




Re: [sqlalchemy] Sqlalchemy exception for mysql in window: TypeError: immutabledict object is immutable

2011-06-02 Thread Michael Bayer
This is a bug (you've just discovered it, thanks !).   I've added ticket #2181 
for this and I'm hoping to do a pass for 0.7.1 this weekend.   The issue is 
specific to windows and MySQL (not a platform we get as much opportunity to 
test on...).

I would be curious whether you are naming the table as Table("MyTable", ...), 
i.e. with mixed case.   If you put the name in all lower case, i.e. 
Table("mytable", ...), that might resolve this.   Otherwise, the issue 
should not be present in 0.6.7, as that particular dictionary is not immutable 
in 0.6.

Or if you feel like patching, this patch will probably work:

diff -r 8a9b42709726 lib/sqlalchemy/dialects/mysql/base.py
--- a/lib/sqlalchemy/dialects/mysql/base.py Thu Jun 02 03:09:08 2011 -0400
+++ b/lib/sqlalchemy/dialects/mysql/base.py Thu Jun 02 11:13:22 2011 -0400
@@ -1982,8 +1982,7 @@
 # For winxx database hosts.  TODO: is this really needed?
 if casing == 1 and table.name != table.name.lower():
 table.name = table.name.lower()
-lc_alias = sa_schema._get_table_key(table.name, table.schema)
-table.metadata.tables[lc_alias] = table
+table.metadata._add_table(table.name, table.schema, table)
 
 def _detect_charset(self, connection):
 raise NotImplementedError()



   
On Jun 2, 2011, at 3:41 AM, zhwolf wrote:

> Hi,
> I experienced the exception while accessing a mysql database in
> windows via sqlalchemy0.7.0.
> 
> It should be due to source code error in 0.7.0:
> The member tables of class MetaData is created as immutabledict,
>sqlalchemy\schema.py::line 2181
> self.tables = util.immutabledict()
> But the follow code try to change it's content:
> sqlalchemy\dialects\mysql\base.py::line 1986
>   table.metadata.tables[lc_alias] = table
> 
> Could any people assist to resolve this issue?
> 
> Thanks,
> 
> log:
> mysql://root:root@127.0.0.1/linelocationdb?charset=utf8&use_unicode=0
> Traceback (most recent call last):
>  File "d:\python25\lib\site-packages\sqlalchemy-0.7.0-py2.5.egg
> \sqlalchemy\schema.py", line 263, in __new__   table._init(name,
> metadata, *args, **kw)
>  File "d:\python25\lib\site-packages\sqlalchemy-0.7.0-py2.5.egg
> \sqlalchemy\schema.py", line 329, in _init
>reflecttable(self, include_columns=include_columns)
>  File "d:\python25\lib\site-packages\sqlalchemy-0.7.0-py2.5.egg
> \sqlalchemy\engine\base.py", line 2132, in reflecttable
>self.dialect.reflecttable(conn, table, include_columns)
>  File "d:\python25\lib\site-packages\sqlalchemy-0.7.0-py2.5.egg
> \sqlalchemy\engine\default.py", line 254, in reflecttable
>return insp.reflecttable(table, include_columns)
>  File "d:\python25\lib\site-packages\sqlalchemy-0.7.0-py2.5.egg
> \sqlalchemy\engine\reflection.py", line 353, in reflecttable
>dialect._adjust_casing(table)
>  File "d:\python25\lib\site-packages\sqlalchemy-0.7.0-py2.5.egg
> \sqlalchemy\dialects\mysql\base.py", line 1986, in _adjust_casing
>table.metadata.tables[lc_alias] = table
>  File "d:\python25\lib\site-packages\sqlalchemy-0.7.0-py2.5.egg
> \sqlalchemy\util\_collections.py", line 38, in _immutable
>raise TypeError("%s object is immutable" %
> self.__class__.__name__)
> TypeError: immutabledict object is immutable
> 
> 




Re: [sqlalchemy] Re: can I use SA to dump a table's DDL to a file?

2011-06-02 Thread Michael Bayer

On Jun 2, 2011, at 3:14 AM, Randy Syring wrote:

> Looks like I beat you to the answer by about 3 mins, thanks for
> answering though. :)
> 
> I have a follow-up though.  The foreign key constraints are not
> getting created with "ON DELETE CASCADE" as expected.  I tried this in
> both MSSQL and Postgres.  Is this expected behavior?

yeah I don't think this aspect of the FK is reflected right now.  Plenty of 
room for that as a new feature, of course.

> 
> If so, is it possible to modify the FKs after they are reflected to
> set ondelete and onupdate as appropriate?  My test code:
> 
> http://paste.pocoo.org/show/399307/
> 
> And my output:
> 
> CREATE TABLE blog_comments (
>   id INTEGER NOT NULL,
>   blog_id INTEGER,
>   CONSTRAINT blog_comments_pkey PRIMARY KEY (id),
>   CONSTRAINT blog_comments_blog_id_fkey FOREIGN KEY(blog_id) REFERENCES
> blog (id)
> )
> 
> It would seem to be a reflection issue though, b/c if I create a table
> manually, the "ON DELETE CASCADE" is added correctly.

I think if you went through the Table object's .constraints collection and 
looked for ForeignKeyConstraint objects you could set up the "onupdate" and 
"ondelete" attributes on them, sure.




[sqlalchemy] Sqlalchemy exception for mysql in window: TypeError: immutabledict object is immutable

2011-06-02 Thread zhwolf
Hi,
I ran into the exception below while accessing a MySQL database on
Windows via SQLAlchemy 0.7.0.

It appears to be due to a source code error in 0.7.0:
the tables member of class MetaData is created as an immutabledict,
sqlalchemy\schema.py::line 2181
 self.tables = util.immutabledict()
but the following code tries to change its contents:
 sqlalchemy\dialects\mysql\base.py::line 1986
   table.metadata.tables[lc_alias] = table

Could anyone assist in resolving this issue?

Thanks,

log:
mysql://root:root@127.0.0.1/linelocationdb?charset=utf8&use_unicode=0
Traceback (most recent call last):
  File "d:\python25\lib\site-packages\sqlalchemy-0.7.0-py2.5.egg
\sqlalchemy\schema.py", line 263, in __new__   table._init(name,
metadata, *args, **kw)
  File "d:\python25\lib\site-packages\sqlalchemy-0.7.0-py2.5.egg
\sqlalchemy\schema.py", line 329, in _init
reflecttable(self, include_columns=include_columns)
  File "d:\python25\lib\site-packages\sqlalchemy-0.7.0-py2.5.egg
\sqlalchemy\engine\base.py", line 2132, in reflecttable
self.dialect.reflecttable(conn, table, include_columns)
  File "d:\python25\lib\site-packages\sqlalchemy-0.7.0-py2.5.egg
\sqlalchemy\engine\default.py", line 254, in reflecttable
return insp.reflecttable(table, include_columns)
  File "d:\python25\lib\site-packages\sqlalchemy-0.7.0-py2.5.egg
\sqlalchemy\engine\reflection.py", line 353, in reflecttable
dialect._adjust_casing(table)
  File "d:\python25\lib\site-packages\sqlalchemy-0.7.0-py2.5.egg
\sqlalchemy\dialects\mysql\base.py", line 1986, in _adjust_casing
table.metadata.tables[lc_alias] = table
  File "d:\python25\lib\site-packages\sqlalchemy-0.7.0-py2.5.egg
\sqlalchemy\util\_collections.py", line 38, in _immutable
raise TypeError("%s object is immutable" %
self.__class__.__name__)
TypeError: immutabledict object is immutable




[sqlalchemy] Long running transaction issues with commit

2011-06-02 Thread Aalok Sood
Hello Everyone

My MySQL server wait_timeout is set to 35, and if I run this code:

# Session s made with autocommit=False
mm = s.query(ss.Machine).get(1)

for i in range(1000):
    sleep(15)
    print "committing"
    s.commit()
    sleep(25)
    mm.name = u'new name'
    print "committing2"
    s.commit()

Even though the second sleep is only 25 seconds, I see an error
while committing which says
'MySQL server has gone away'

The SQLAlchemy documentation says that a new transaction is begun on a
commit(). If that is the case, I should not see the above error.
Maybe it's an issue with committing w/o any changes to the loaded
instances.

Can anyone throw some light on this.
Any help would be much appreciated.











Re: [sqlalchemy] Re: Speed matters

2011-06-02 Thread fribes
Hi Sergey,

I'll give it a try, thanks !

On 2 June 2011 03:15, Sergey V.  wrote:

> Hi,
>
> One easy/obvious improvement would be to delete all user's
> "identifiers" and "groups" at once without iterating over them for
> every user. Actually, iterating over the list of user_ids is not
> necessary too I think.
>
> 
> session.query(Identifier).filter(Identifier.user_id.in_(user_ids)).delete()
> session.query(User).filter(User.id.in_(user_ids)).delete()
> 
>
> I'm not sure about deleting "groups" in your code - I suppose you
> don't want to delete the actual group but only the association between
> the user and the group, i.e. the record from the intermediate table.
> But the idea is the same - .filter(blah-
> blah.user_id.in_(user_ids)).delete()
>
> An even better solution would be to set up proper cascade rules on
> your relationships so all "dependent" items are deleted automatically
> when a user is deleted. Then the method will be a one-liner:
>
> 
> session.query(User).filter(User.id.in_(user_ids)).delete()
> 
>
>
>




[sqlalchemy] Re: Multi-Processing Environment

2011-06-02 Thread ddarko
I use PostgreSQL, but it does not really matter, because it still has to
work when someone changes the configuration.

"a new Engine needs to be created in each child fork"

If a worker can only handle one synchronous request at a time and I
create a new engine for each worker, then there is effectively no pool.

I don't know if I understand correctly, but doesn't it come down to this:
no matter whether the worker is an entirely new process or one created
using fork() ("memory of the parent process is effectively copied"),
the engine is not shared and I need to create it again?

On May 28, 4:53 pm, Michael Bayer  wrote:
> On May 28, 2011, at 4:25 AM, ddarko wrote:
>
>
>
>
>
>
>
>
>
> >http://lists.unbit.it/pipermail/uwsgi/2011-May/002078.html:
>
> > "...Unfortunately, when running our
> > app in uWSGI with more than one worker, we get sporadic, but frequent,
> > sqlalchemy-related exceptions when testing under load. Following is an
> > example of one of the more common errors we get.
>
> > Error - :
> > (OperationalError) server closed the connection unexpectedly
> >        This probably means the server terminated abnormally
> >        before or while processing the request.
>
> > It would seem that our app, or sqlalchemy, is making an assumption
> > that
> > is no longer true when running as multiple workers in uWSGI"
>
> > uWSGI 0.9.8
> > Python 3.2
> > SQLAlchemy 0.7.0
>
> > Does anyone use this combination?
> > How to share a pool of connections between workers?
>
> The mechanics of fork() are such that the memory of the parent process is 
> effectively "copied" to the child process.  The SQLAlchemy Engine by default 
> maintains a pool of database connections, which like everything else is 
> replicated into the child process.  Each database connection in the pool 
> ultimately references a TCP/IP handle to the remote database, unless you're 
> using SQLite in which case it's an open file handle.   These handles lose 
> their meaning when referenced in a new process, that is the child fork, and 
> attempts to use the connections in the new process will fail.
>
> So the short answer is a new Engine needs to be created in each child fork, 
> or otherwise pooling can be turned off using NullPool.
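
A sketch of the two options described above (the URL is a placeholder):

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

DB_URL = 'postgresql://user:pass@localhost/appdb'   # placeholder

# Option 1: build the Engine inside each worker, after the fork,
# e.g. from the worker's startup / postfork hook:
def init_worker():
    global engine
    engine = create_engine(DB_URL)

# Option 2: a single Engine with pooling disabled, so every execute()
# opens and closes its own DBAPI connection and no pooled connections
# are inherited across fork():
# engine = create_engine(DB_URL, poolclass=NullPool)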




[sqlalchemy] Re: can I use SA to dump a table's DDL to a file?

2011-06-02 Thread Randy Syring
Looks like I beat you to the answer by about 3 mins, thanks for
answering though. :)

I have a follow-up though.  The foreign key constraints are not
getting created with "ON DELETE CASCADE" as expected.  I tried this in
both MSSQL and Postgres.  Is this expected behavior?

If so, is it possible to modify the FKs after they are reflected to
set ondelete and onupdate as appropriate?  My test code:

http://paste.pocoo.org/show/399307/

And my output:

CREATE TABLE blog_comments (
    id INTEGER NOT NULL,
    blog_id INTEGER,
    CONSTRAINT blog_comments_pkey PRIMARY KEY (id),
    CONSTRAINT blog_comments_blog_id_fkey FOREIGN KEY(blog_id)
        REFERENCES blog (id)
)

It would seem to be a reflection issue though, b/c if I create a table
manually, the "ON DELETE CASCADE" is added correctly.

On Jun 1, 8:36 pm, Michael Bayer  wrote:
> sure, you'd use the "mock" executor as in the second example 
> here: http://www.sqlalchemy.org/trac/wiki/FAQ#HowcanIgettheCREATETABLEDROPT...
>
> On Jun 1, 2011, at 8:01 PM, Randy Syring wrote:
>
>
>
>
>
>
>
> > I'd like to be able to dump an MS SQL server's objects to text on the
> > local file system.  I have a working solution for views, stored
> > procedures, and functions, but tables are a different story.  Can i
> > use SA's reflection and table creation abilities to write create table
> > DDL to a text file?
>
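
A sketch of the mock-executor recipe Michael points to above, applied to this question; the connection URL and output file name are placeholders:

from sqlalchemy import MetaData, create_engine

source = create_engine('mssql://user:pass@server/dbname')   # placeholder URL
metadata = MetaData()
metadata.reflect(bind=source)            # pull the table definitions from the server

out = open('tables.sql', 'w')

def dump(sql, *multiparams, **params):
    # the mock engine hands each CREATE TABLE construct here
    # instead of executing it
    out.write(str(sql.compile(dialect=mock.dialect)).strip() + ';\n\n')

mock = create_engine('mssql://', strategy='mock', executor=dump)
metadata.create_all(mock, checkfirst=False)
out.close()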
