[sqlalchemy] '_Label' object has no attribute 'nullable'

2012-05-17 Thread Eric Ongerth
(using sqlalchemy 0.8.0b1)

Using the new 0.8 inspection mechanism, I can perform the following on one 
of my model classes in an ipython or bpython shell and obtain a boolean 
result:

>>> [... bunch of imports to load up my model classes ...]
>>> inspect(SomeClassOfMine).attr['foo'].columns[0].nullable
True

But it seems I can only do this successfully in a shell.  When the same 
line of code is encountered at runtime, I get the following error:
AttributeError: '_Label' object has no attribute 'nullable'

Further info: this is in a shell initiated with respect to my Pyramid 
project by invoking pshell -p bpython development.ini.

What I'm actually trying to do: I've got some Mako templates generating HTML 
forms from my model classes, and I just want non-nullable fields to 
generate the "required" keyword in my input elements.  Perhaps there is 
another, better way to reach the 'nullable' property of a column?

Fishing for an easy answer here, but if none is forthcoming I'll knuckle 
down and build a testcase.

-- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/sqlalchemy/-/y90dnrsvKZkJ.
To post to this group, send email to sqlalchemy@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Re: '_Label' object has no attribute 'nullable'

2012-05-17 Thread Eric Ongerth
Answered my own question here.

My code was actually working fine at runtime; it's just that one of the 
objects it encountered, which I had thought was a simple column, was actually 
a ColumnProperty, which of course has no 'nullable' attribute.
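A defensive variant for templates that may hit either plain columns or column_property expressions can use getattr with a default. A sketch against the current inspection API (where the collection is spelled `attrs`; the model and names are illustrative):

```python
from sqlalchemy import Column, Integer, String, func, inspect
from sqlalchemy.orm import column_property, declarative_base

Base = declarative_base()

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    name = Column(String(50), nullable=False)
    # a column_property wraps a labeled SQL expression, not a Column,
    # so its first mapped element carries no `nullable` attribute
    name_upper = column_property(func.upper(name))

def nullable_of(attr):
    """Nullable flag of a mapped attribute, or None for expressions."""
    return getattr(attr.columns[0], "nullable", None)

insp = inspect(Item)
print(nullable_of(insp.attrs["name"]))        # False
print(nullable_of(insp.attrs["name_upper"]))  # None
```

Treating a `None` result as "not required" keeps the template from blowing up on computed attributes.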







Re: [sqlalchemy] Syntax Error on *legitimate* statement using Informix dialect [SOLVED]

2012-05-17 Thread Adam Tauno Williams
On Tue, 2012-05-15 at 20:32 -0400, Adam Tauno Williams wrote: 
 On Tue, 2012-05-15 at 19:51 -0400, Adam Tauno Williams wrote: 
  On Tue, 2012-05-15 at 16:27 -0400, Michael Bayer wrote: 
   Could be tough, you'd need to test it against the informix dbapi directly, 
   using bound parameters, to see what it needs.   Could be a typing issue.
  Ok, I got a response from an Informix guru and maintainer of the
  informix dbapi.
  <quote>
  I see two different approaches around this issue:
  * Use literal values instead of bound parameters in the projection
  clause.
  * If you must use bound parameters, use type casts for the parameters
  that are used in the projection clause. For example: CASE WHEN
  (xrefr.xr_supersede = :1) THEN :2::int ELSE :3::int
  </quote>
  I assume this same kind of issue must be addressed in other dialects???
 I've constructed a sequence that seems to work perfectly.

For anyone in the future [list archives & search engines] trying to use
the Informix dialect, I've posted a blog post about this issue:
http://www.whitemiceconsulting.com/2012/05/informix-dialect-with-case-derived.html

 class XrefrRecord(Base):
     __tablename__ = 'xrefr'
     record_id    = Column("xr_serial_no", Integer, primary_key=True)
     sku          = Column("xr_stock_no", Integer, nullable=False)
     list_price   = Column("xr_list_price", Float(precision=3))
     _supersede   = Column("xr_supersede", String(1))
     is_supersede = column_property(
         case([(_supersede == 'S',
                literal_column('1', Integer))],
              else_=literal_column('0', Integer))
     )

     __mapper_args__ = {'polymorphic_on': is_supersede}


 class Cross(XrefrRecord):
     __mapper_args__ = {'polymorphic_identity': 0}


 class Supersede(XrefrRecord):
     __mapper_args__ = {'polymorphic_identity': 1}
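The effect of literal_column() is easy to check by compiling the expression: the THEN/ELSE arms come out as inline literals while the comparison keeps its bound parameter, which is exactly what the Informix guru asked for. A sketch against the generic dialect using the modern positional case() signature (the 0.7-era list style above has since been removed):

```python
from sqlalchemy import Integer, String, case, column, literal_column, table

# a lightweight stand-in for the mapped table
xrefr = table("xrefr", column("xr_supersede", String(1)))

expr = case(
    (xrefr.c.xr_supersede == "S", literal_column("1", Integer)),
    else_=literal_column("0", Integer),
)
# renders roughly:
#   CASE WHEN (xrefr.xr_supersede = :xr_supersede_1) THEN 1 ELSE 0 END
print(expr)
```

Only the WHEN comparison carries a bound parameter; the projection values are literal, so no cast is needed.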





[sqlalchemy] Best and easy web framework for sqlalchemy

2012-05-17 Thread Matteo Boscolo

Hi All,
I'm looking for a good web framework to show my sqlalchemy information 
on the web.


I googled and found several solutions like Pyramid, Django, 
Glashammer, and Flask, but I do not know exactly which is the 
simplest to use with SQLAlchemy.


Could you give me some advice based on your experience?

regards,
Matteo




[sqlalchemy] Understanding memory usage under SQLA

2012-05-17 Thread Vlad K.


Hello.

I have a problem when processing a relatively large number of rows.
For example, when selecting 5000 main rows, each having a number of
many-to-one relationships, memory usage shown by top skyrockets into
the 200+ MB range (RES), while heapy shows circa 20 MB of Python heap.
The backend is PostgreSQL via psycopg2.

I've made a minimum example case based on the problem I'm noticing
in my Pyramid app, so the session.commit() at line 130 is there to
simulate commit done by Transaction used in Pyramid at the end of
each request. If I'm understanding things correctly, committing
would expire all objects involved in the session, and I even tried
manual session.expunge(row), but there is no difference in memory
usage.

The following is source of an example case. Requires SQLAlchemy
(tested with 0.7.5 and 0.7.7), guppy, psycopg2 (tested with 2.4.2
and 2.4.4). Happens both on Fedora 15 64-bit and CentOS 6.2 32-bit,
though of course the 32-bit shows some 30% lower RES in top.

http://pastebin.com/UFgduWVw


Usage: set up a test database and update the config at line 25. Prepopulate
the database with the -p flag, then run again without any flags.

I don't see where and how any objects would remain in memory, and
heapy showing much lower memory use suggests something is retained
in the involved C extensions?  I also tried with pympler, diffing
before and after selecting rows; it shows nothing near what top
reports. I guess there is no "leak" in the traditional sense of the
word because repeating the task does not yield growing memory
consumption. It stabilizes at a certain value and stays there.

Heapy before selecting rows:

Partition of a set of 102014 objects. Total size = 13160672 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  45901  45  4395296  33   4395296  33 str
     1  26041  26  2186184  17   6581480  50 tuple
     2   7039   7   900992   7   7482472  57 types.CodeType
     3   6836   7   820320   6   8302792  63 function
     4    235   0   761608   6   9064400  69 dict of module
     5    608   1   689792   5   9754192  74 dict (no owner)
     6    676   1   648544   5  10402736  79 dict of type
     7    676   1   608344   5  11011080  84 type
     8    199   0   206248   2  11217328  85 dict of class
     9    185   0   167320   1  11384648  87 sqlalchemy.sql.visitors.VisitableType
334 more rows. Type e.g. '_.more' to view.

Heapy after 5000 rows have been selected:

Partition of a set of 102587 objects. Total size = 16455168 bytes.
 Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
     0  45923  45  4397632  27   4397632  27 str
     1      1   0  3146024  19   7543656  46 sqlalchemy.orm.identity.WeakInstanceDict
     2  26090  25  2189480  13   9733136  59 tuple
     3   7039   7   900992   5  10634128  65 types.CodeType
     4   6859   7   823080   5  11457208  70 function
     5    235   0   761608   5  12218816  74 dict of module
     6    657   1   705048   4  12923864  79 dict (no owner)
     7    676   1   650464   4  13574328  82 dict of type
     8    676   1   608344   4  14182672  86 type
     9    199   0   206248   1  14388920  87 dict of class
372 more rows. Type e.g. '_.more' to view.




What am I doing wrong? I'm hoping it's something trivial and blatantly
obvious that I'm oblivious to. :)


Thanks.

-- 

.oO V Oo.
  








Re: [sqlalchemy] Best and easy web framework for sqlalchemy

2012-05-17 Thread Adam Tauno Williams
On Thu, 2012-05-17 at 13:23 +0200, Matteo Boscolo wrote:

(1) - Do not hijack threads!

 I'm looking for a good web framework to show my sqlalchemy information 
 on the web.
 I googled and found several solutions like Pyramid, Django, 
 Glashammer, and Flask, but I do not know exactly which is the 
 simplest to use with SQLAlchemy.
 Could you give me some advice based on your experience?

Depends on what you want to do.  If you want to use SQLAlchemy I'd avoid
Django; something like Flask is probably a better choice.  There is
also Tornado if you want to do a lot of async work.
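Whichever framework gets picked, the SQLAlchemy integration point is the same: one session per request, removed when the request ends. A framework-agnostic sketch of that pattern (the SQLite URL and the model are stand-ins; a real app would hook `handle_request` into the framework's request lifecycle):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

engine = create_engine("sqlite:///:memory:")
Session = scoped_session(sessionmaker(bind=engine))
Base = declarative_base()

class Page(Base):
    __tablename__ = "page"
    id = Column(Integer, primary_key=True)
    title = Column(String(100))

Base.metadata.create_all(engine)

def handle_request():
    # what a web framework's request hook boils down to:
    # open a session, do the work, then hand the connection back
    session = Session()
    try:
        session.add(Page(title="hello"))
        session.commit()
        return [p.title for p in session.query(Page).all()]
    finally:
        Session.remove()
```

Flask, Pyramid, and TurboGears all ship (or document) a variant of exactly this scoped-session setup.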





Re: [sqlalchemy] Best and easy web framework for sqlalchemy

2012-05-17 Thread Timuçin Kızılay



I'm using TurboGears, and SA is the main reason for me to use TG.




Re: [sqlalchemy] Weird connection problem

2012-05-17 Thread Michael Bayer

On May 16, 2012, at 8:33 PM, limodou wrote:

 On Thu, May 17, 2012 at 8:03 AM, limodou limo...@gmail.com wrote:
 On Thu, May 17, 2012 at 7:46 AM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 
 On May 16, 2012, at 7:32 PM, limodou wrote:
 
 
 OK you need to use two different connections here, for the read from one 
 connection, persist on another in a short transaction pattern.  Again 
 the with engine.begin() thing, which is new as of 0.7...6?  makes this 
 pretty succinct:
 
 
  while True:
      with engine.begin() as conn:
          for row in conn.execute(select...):
              with engine.begin() as c2:
                  c2.execute(update...)
      sleep(...)
 
 
 
 Ok, I see. And the last word, is there a way to disable the
 accumulation of transactional state?
 
 depending on backend and DBAPI there are ways to decrease the transaction 
 isolation level.   SQLA has a create_engine() parameter isolation_level 
 understood by Postgresql, MySQL, and SQLite so far, for example.  The docs 
 seem to be partially missing at the moment, so accepted values are 
 'SERIALIZABLE',  'READ UNCOMMITTED', 'READ COMMITTED', 'REPEATABLE READ'.  
 Different backends may interpret these differently - PG for example has 
 ultimately only two effective isolation levels.   These levels can also be 
 affected per-connection using 
 connection.execution_options(isolation_level='...').
 
 So there's that, then there's just returning the connection to the pool/ 
 rolling back as needed.
 
 
 Ok, I'll try it. Thanks a lot.
 
 
 I tried like this:
 
 engine = create_engine()
 conn = engine.connect()
 
  while True:
      conn.execution_options(isolation_level='REPEATABLE READ')
      for row in conn.execute(select(...)):
          conn.execute(update(...))
 
  So I reset the conn isolation level each time. And it does work if
  there are newly changed records later, after fetching an empty result. But I
  can't find a way to display the isolation level of a connection or engine;
  I just know how to set it. So how do I display the exact isolation
  level of each SQL statement execution? Is there a function to do that?
  
  So I think I can reuse the same connection if I can reset the isolation level.

Uh yeah, I don't know that there's a consistent way to do that; you'd need to 
execute() a statement against the database which shows the current isolation 
level.  This varies depending on which DB you're using, so I'd google their docs 
for that.  Might be worthwhile just to keep track of it.
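To make the set-then-verify loop concrete: SQLAlchemy can report the level it applied, while the statement that shows it server-side differs per backend. A sketch against SQLite (the PostgreSQL/MySQL statements in the comments are the usual backend-specific equivalents):

```python
from sqlalchemy import create_engine, text

# engine-wide level; also settable per connection via
# connection.execution_options(isolation_level='...')
engine = create_engine("sqlite:///:memory:",
                       isolation_level="SERIALIZABLE")

with engine.connect() as conn:
    # SQLAlchemy reports the level it set on this connection
    print(conn.get_isolation_level())  # SERIALIZABLE
    # the backend-specific query varies, e.g.:
    #   PostgreSQL: SHOW TRANSACTION ISOLATION LEVEL
    #   MySQL:      SELECT @@transaction_isolation
    # SQLite exposes it as a pragma instead:
    print(conn.execute(text("PRAGMA read_uncommitted")).scalar())  # 0
```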


 
 -- 
 I like python!
 UliPad The Python Editor: http://code.google.com/p/ulipad/
 UliWeb simple web framework: http://code.google.com/p/uliweb/
 My Blog: http://hi.baidu.com/limodou
 
 




Re: [sqlalchemy] Syntax Error on *legitimate* statement using Informix dialect [SOLVED]

2012-05-17 Thread Michael Bayer



Thanks for this.  I'm amazed the Informix dialect even works, actually; I've 
never tested it personally.



 





Re: [sqlalchemy] Best and easy web framework for sqlalchemy

2012-05-17 Thread Michael Bayer

I've used Pylons mostly, which is obsolete, and have used Pyramid a bit, which 
will probably be what I use in the future.  However, Flask is great too.

If I really could do whatever on the job I'd just roll my own, actually :), it's 
quite easy to do.   But yeah, use Flask or Pyramid.









Re: [sqlalchemy] Weird connection problem

2012-05-17 Thread limodou


Thank you very much.

-- 
I like python!
UliPad The Python Editor: http://code.google.com/p/ulipad/
UliWeb simple web framework: http://code.google.com/p/uliweb/
My Blog: http://hi.baidu.com/limodou




Re: [sqlalchemy] Understanding memory usage under SQLA

2012-05-17 Thread Michael Bayer
There are a few different parts to what you're asking.

The first is that you're comparing Python's use of OS memory (I'm assuming this 
is the 200+ MB) to Python's actual amount of objects present.   This is a 
common mistake.  Python up through version 2.6 does not release memory back to 
the OS once taken - this was improved in 2.7.   There's an old article about 
this here:  
http://effbot.org/pyfaq/why-doesnt-python-release-the-memory-when-i-delete-a-large-object.htm
 as well as Alex Martelli's answer: http://stackoverflow.com/a/1316799/34549 .

Second is, what exactly is the large object you're creating here ?   Answer - 
first, psycopg2 by default buffers the result set fully before returning it to 
SQLAlchemy - so it is first a list of 5000 tuples.  Second, the ORM itself also 
by default buffers the full set of rows from the result set in the form of 
mapped objects, so 5000 objects plus their related objects.A way to modify 
this behavior is to use the yield_per() option of Query, which will also in the 
case of psycopg2 tell psycopg2 to use its server side cursors feature which 
does not buffer.

However, yield_per() is not compatible with eager loading as eager loading 
involves being able to load collections across the full set of original 
objects.   Typically the better way to deal with large numbers of rows is to 
paginate, using either LIMIT/OFFSET or using window functions (see 
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery ).
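A runnable sketch of the yield_per() path described above (SQLite as a stand-in for the PostgreSQL setup; with psycopg2 the same call also switches to server-side cursors, per the paragraph above):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class MainRow(Base):
    __tablename__ = "main_rows"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all(MainRow() for _ in range(5000))
    session.commit()

    # stream mapped objects in chunks of 1000 instead of buffering
    # all 5000 at once; incompatible with eager collection loading
    total = sum(1 for _ in session.query(MainRow).yield_per(1000))
    print(total)  # 5000
```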

Thirdly, there is a modest growth in memory when a series of mappings are used 
for the first time, including the configuration of mappers, initialization of 
TypeEngine value processors, and such.   But the initial large resultset is the 
main thing causing the higher initial memory footprint.You'll notice this 
isn't a leak at all, as it doesn't grow.



Re: [sqlalchemy] Understanding memory usage under SQLA

2012-05-17 Thread Claudio Freire
On Thu, May 17, 2012 at 10:21 AM, Michael Bayer
mike...@zzzcomputing.com wrote:
 There's a few different parts to what you're asking.

 The first is that you're comparing Python's use of OS memory (I'm assuming
 this is the 200+ MB) to Python's actual amount of objects present.   This is
 a common mistake.  Python up through version 2.6 does not release memory
 back to the OS once taken - this was improved in 2.7.   There's an old
 article about this here:
  http://effbot.org/pyfaq/why-doesnt-python-release-the-memory-when-i-delete-a-large-object.htm
 as well as Alex Martelli's answer: http://stackoverflow.com/a/1316799/34549

I think you're mistaken about the version number in which that was changed.
The article you pointed to (and my memory) seems to say it was 2.5, not 2.7.

 Second is, what exactly is the large object you're creating here ?   Answer
 - first, psycopg2 by default buffers the result set fully before returning
 it to SQLAlchemy - so it is first a list of 5000 tuples.  Second, the ORM
 itself also by default buffers the full set of rows from the result set in
 the form of mapped objects, so 5000 objects plus their related objects.    A
 way to modify this behavior is to use the yield_per() option of Query, which
 will also in the case of psycopg2 tell psycopg2 to use its server side
 cursors feature which does not buffer.

It's important to note that everything you mention on the Python side
would show up in heapy.
But psycopg2's buffers would not. Nor would libpq's.

 On May 17, 2012, at 7:33 AM, Vlad K. wrote:
 The following is source of an example case. Requires SQLAlchemy (tested with
 0.7.5 and 0.7.7), guppy, psycopg2 (tested with 2.4.2 and 2.4.4). Happens
 both on Fedora 15 64-bit and CentOS 6.2 32-bit, though of course the 32-bit
 shows some 30% lower RES in top.

Update to psycopg2 2.4.5; it fixes some memory leaks (they call them
"reference counting problems"):
http://initd.org/psycopg/articles/2012/03/29/psycopg-245-released/

Also... which version of libpq did you build psycopg2 with?

 I don't see where and how would any objects remain in memory, and heapy
 showing much lower memory use suggests something is retained in the involved
 C extensions?

C extensions, or memory fragmentation.

If you read spanish, check this out:
http://python.org.ar/pyar/Charlas#Depuraci.2BAPM-n_y_defragmentaci.2BAPM-n_de_memoria_en_Python

If not... ehm... try google translate ;-)

 Heapy before selecting rows:
 Heapy after 5000 rows have been selected:

Try the difference for more insight:

from guppy import hpy
h = hpy()
h1 = h.heap()
# ... do some work ...
h2 = h.heap()
print repr(h2 - h1)




Re: [sqlalchemy] Understanding memory usage under SQLA

2012-05-17 Thread Vlad K.


Hi, thanks for your reply.

Yes, I know the Python memory management model, and that while it may 
free memory internally, it does not release it to the OS. I've read somewhere 
that it has been fixed in 2.7 as well, but my testing on Fedora's 2.7.1 still 
shows the issue. That's why I thought perhaps there are dangling references 
in the C extension parts that are not visible to heapy (is that even possible?).


I tried with yield_per(), and you once told me how that can't work if I 
use subqueryload, so I tried without subqueries. The problem is that the 
script then shoots from 5 seconds to over a minute to process the same 
dataset, which is unacceptable to me for other reasons (and which is 
expected, as there are two additional queries per each of the 5000 rows, 
making the app do 10001 queries + ORM overhead on each run). However, with 
yield_per() the memory consumption stays as low as before the querying 
begins.


I've got three possible solutions here. One is repeated querying with a 
limited result set AND subquery loading, which works like yield_per() 
except that it requires additional sorting and an offset. I just tried 
that and it indeed consumes much less memory. With sets of 500 rows at a 
time (and with full subqueryloads), the memory consumption is 1/10 of 
loading all rows at once, which figures: 500 is 1/10 of 5000. This is 
acceptable.
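The chunked approach described above can be sketched as follows. This is a minimal runnable illustration, not the original code: the `Parent`/`Child` models, the SQLite backend, and the chunk size of 10 are stand-ins for the real PostgreSQL mappings:

```python
from sqlalchemy import create_engine, Column, Integer, ForeignKey
from sqlalchemy.orm import (declarative_base, relationship, sessionmaker,
                            subqueryload)

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    children = relationship("Child")

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    parent_id = Column(Integer, ForeignKey("parent.id"))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Parent(children=[Child(), Child()]) for _ in range(25)])
session.commit()

def chunked(session, chunk_size=10):
    """Yield Parent rows in fixed-size windows.  subqueryload still works
    because each chunk is a complete, independent query."""
    offset = 0
    while True:
        chunk = (session.query(Parent)
                 .options(subqueryload(Parent.children))
                 .order_by(Parent.id)        # deterministic paging order
                 .limit(chunk_size)
                 .offset(offset)
                 .all())
        if not chunk:
            break
        for row in chunk:
            yield row
        session.expunge_all()                # drop the chunk from the identity map
        offset += chunk_size

total = sum(len(p.children) for p in chunked(session))
```

Only one chunk of mapped objects is referenced by the session at a time, which is what keeps the peak memory proportional to the chunk size rather than the full result set.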


Another is (materialized) views on the DB end, with triggers and an 
entirely new model to select data from.


And yet another solution is to drop the ORM and construct queries manually, 
returning relational data as subselects in arrays, and add a thin 
ORM-like layer that just converts row columns to named tuples so that 
the consumers of this data can use the same model interface. But I'm 
guessing this is no different from the (materialized) views approach, 
except that the combining is done in the DB rather than in the Python app. I 
still need a separate model class or named tuples.





.oO V Oo.


On 05/17/2012 03:21 PM, Michael Bayer wrote:

There's a few different parts to what you're asking.

The first is that you're comparing Python's use of OS memory (I'm 
assuming this is the 200+ MB) to Python's actual amount of objects 
present.   This is a common mistake.  Python up through version 2.6 
does not release memory back to the OS once taken - this was improved 
in 2.7.   There's an old article about this here: 
http://effbot.org/pyfaq/why-doesnt-python-release-the-memory-when-i-delete-a-large-object.htm 
as well as Alex Martelli's answer: 
http://stackoverflow.com/a/1316799/34549 .


Second is, what exactly is the large object you're creating here ?   
Answer - first, psycopg2 by default buffers the result set fully 
before returning it to SQLAlchemy - so it is first a list of 5000 
tuples.  Second, the ORM itself also by default buffers the full set 
of rows from the result set in the form of mapped objects, so 5000 
objects plus their related objects.A way to modify this behavior 
is to use the yield_per() option of Query, which will also in the case 
of psycopg2 tell psycopg2 to use its server side cursors feature 
which does not buffer.
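A minimal runnable sketch of the `yield_per()` option described above; the model and the SQLite backend here are illustrative (the server-side-cursor behavior is psycopg2-specific and does not apply to SQLite):

```python
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Row(Base):
    __tablename__ = "rows"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Row() for _ in range(5000)])
session.commit()
session.expire_all()

# Stream results in batches of 100 instead of buffering all 5000 mapped
# objects at once.  (With psycopg2, yield_per also enables server-side
# cursors so the DBAPI does not buffer the raw rows either.)
count = 0
for row in session.query(Row).yield_per(100):
    count += 1
```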


However, yield_per() is not compatible with eager loading as eager 
loading involves being able to load collections across the full set of 
original objects.   Typically the better way to deal with large 
numbers of rows is to paginate, using either LIMIT/OFFSET or using 
window functions (see 
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/WindowedRangeQuery ).


Thirdly, there is a modest growth in memory when a series of mappings 
are used for the first time, including the configuration of mappers, 
initialization of TypeEngine value processors, and such.   But the 
initial large resultset is the main thing causing the higher initial 
memory footprint.You'll notice this isn't a leak at all, as it 
doesn't grow.



On May 17, 2012, at 7:33 AM, Vlad K. wrote:



Hello.

I have a problem when processing a relatively large number of rows. For 
example, when selecting 5000 main rows, each having a number of 
many-to-one relationships, memory usage shown by top skyrockets into the 
200+ MB range (RES), while heapy shows circa 20 MB of Python heap. 
The backend is PostgreSQL via psycopg2.


I've made a minimum example case based on the problem I'm noticing in 
my Pyramid app, so the session.commit() at line 130 is there to 
simulate commit done by Transaction used in Pyramid at the end of 
each request. If I'm understanding things correctly, committing would 
expire all objects involved in the session, and I even tried manual 
session.expunge(row), but there is no difference in memory usage.


The following is source of an example case. Requires SQLAlchemy 
(tested with 0.7.5 and 0.7.7), guppy, psycopg2 (tested with 2.4.2 and 
2.4.4). Happens both on Fedora 15 64-bit and CentOS 6.2 32-bit, 
though of course the 32-bit shows some 30% lower RES in top.


http://pastebin.com/UFgduWVw


Usage: setup a test database, update line 25 config. Prepopulate 
database with -p flag, then run again 

Re: [sqlalchemy] Understanding memory usage under SQLA

2012-05-17 Thread Claudio Freire
On Thu, May 17, 2012 at 11:57 AM, Vlad K. v...@haronmedia.com wrote:
 PostgreSQL 9.0.7, libpq is part of the same version -devel package
 PostgreSQL 9.1.3 (via pgsql yum repo), libpq is part of the same version
 -devel package

9.1 (the one you have in production) also has leak-related fixes. They
don't necessarily apply to your problem, as they're somewhat esoteric,
but it's worth noting.

Even the tiniest memory leak will fragment your process's heap, and
cause it not to release those 200 MB.

In those cases, especially when the leaks are one-time (like in the
libpq case), it's convenient to force the leak to happen at load time.

Precompiling queries in SQLA to populate SQLA's various compiler
caches, and doing some queries that cause libpq and psycopg2 to exercise
(and thus allocate whatever permanent data structures they need),
all at load time, will help keep fragmentation to a minimum.
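That load-time warm-up idea can be sketched as below. The `Invoice` model and the `warm_up` helper are hypothetical, and the fragmentation benefit only matters on a long-running process; the point is simply to touch each mapping once at startup:

```python
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Invoice(Base):              # hypothetical mapped class
    __tablename__ = "invoice"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

def warm_up(session_factory, mapped_classes):
    """Issue one throwaway query per mapped class at application start,
    so one-time allocations (mapper configuration, compiled-statement
    caches, DBAPI structures) happen before the heap fills up with
    long-lived data that would otherwise fragment around them."""
    session = session_factory()
    try:
        for cls in mapped_classes:
            session.query(cls).limit(1).all()
    finally:
        session.close()

warm_up(Session, [Invoice])
```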

Fragmentation is a complex issue, and both python and SQLA are quite
prone to it. But it can be worked around.




Re: [sqlalchemy] How should I do inheritance using DeclarativeReflectedBase?

2012-05-17 Thread Michael Bayer

On May 14, 2012, at 6:32 AM, Ignas Mikalajunas wrote:

 On Sun, May 13, 2012 at 1:05 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 Michael Bayer wrote:
 
 On May 11, 2012, at 1:16 PM, Ignas Mikalajunas wrote:
 
 Even though the latest version of the DeclarativeReflective example
 includes some handling for inheritance, I still can not get it to
 work. I try doing (mostly modified example from
 https://bitbucket.org/sqlalchemy/sqlalchemy/src/408388e5faf4/examples/declarative_reflection/declarative_reflection.py):
 
 
 and sqlalchemy tries to find the type column in the table 'bar'. Am I
 doing the inheritance set up wrong or is it some bug in
 DeclarativeReflectiveBase?
 
 would need to spend some time with it, the declarativereflective example
 hasn't been worked out for setting up an inheritance relationship as of
 yet.   So there could be any number of issues with it (part of why it's
 only an example and not a real feature)..
 
 
 this feature will require changes to the declarative extension itself,
 which I will attempt to prepare as part of 0.8.  when the feature is done,
 the 0.8 version of declarative.py can be used with 0.7 as well in the
 interim before 0.8 is released.
 
 
 I see, thank you very much. I have a very good test case waiting for
 this feature :)

Okey doke if you want to test against 0.8 tip, here are the docs for the new 
feature:

http://docs.sqlalchemy.org/en/latest/orm/extensions/declarative.html#sqlalchemy.ext.declarative.DeferredReflection

any issues you find please report and I'll try to get them fixed.
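For reference, the documented usage pattern looks roughly like this (a SQLite stand-in table so the sketch is runnable; the `Reflected` mixin name follows the linked docs):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import declarative_base
from sqlalchemy.ext.declarative import DeferredReflection

# A pre-existing table the classes will reflect against.
engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE foo (id INTEGER PRIMARY KEY, data VARCHAR)"))

Base = declarative_base()

class Reflected(DeferredReflection):
    """Mixin marking classes whose Table columns are reflected later."""
    __abstract__ = True

class Foo(Reflected, Base):
    __tablename__ = "foo"      # no columns declared here

# Columns are filled in from the database only at this point:
Reflected.prepare(engine)
```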



 
 Ignas
 




Re: [sqlalchemy] Foreign key reflection error?

2012-05-17 Thread Demitri Muna
Hi,

I'd like to revive this thread from a little while back as I'm still having 
problems. Thanks again to Michael for the help.

In short, I'm having problems with SQLAlchemy determining the foreign key 
relationship between two tables in two different schemas. For example, this

VisitSpectrum.catalogObject = relation(CatalogObject, backref="visitSpectra")

gives me the error:

ArgumentError: Could not determine join condition between parent/child tables 
on relationship VisitSpectrum.fitsHeaderValues.  Specify a 'primaryjoin' 
expression.  If 'secondary' is present, 'secondaryjoin' is needed as well.

while this

CatalogObject.visitSpectra = relation(VisitSpectrum, backref="catalogObject")

works. I am using SQLAlchemy 0.7.7 and Python 2.7.

I would have thought that these two lines should be functionally identical. SA 
is able to determine the relationship one direction, so the information is 
certainly there. The problem has to do, as Michael noted, with the schema 
search path, but even so I don't see where the ambiguity lies.

I've removed the public schema from my database, and in my Python model 
classes I always explicitly set the schema value in __table_args__ to 
avoid any ambiguity. The tables are each in a different schema:

apogeedb.VisitSpectrum
catalogdb.CatalogObject

When the search path is this:

catalogdb, apogeedb

this works:

VisitSpectrum.catalogObject = relation(CatalogObject, backref="visitSpectra")

When the search path is

apogeedb, catalogdb

this works:

CatalogObject.visitSpectra = relation(VisitSpectrum, backref="catalogObject")

Flipping any of these gives the error above. No other schema (there are two 
more) contains tables with these names. This still strikes me as a bug 
somewhere. Any thoughts?
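One workaround, per the error message's suggestion, is to spell out the join explicitly so the reflected FK no longer matters. A runnable sketch with SQLite stand-ins for the two tables; the FK column name `catalog_object_id` is hypothetical, and the missing `ForeignKey` simulates the FK that reflection failed to pick up:

```python
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class CatalogObject(Base):
    __tablename__ = "catalog_object"
    id = Column(Integer, primary_key=True)

class VisitSpectrum(Base):
    __tablename__ = "visit_spectrum"
    id = Column(Integer, primary_key=True)
    catalog_object_id = Column(Integer)   # no ForeignKey: simulates the lost FK

# Explicit primaryjoin + foreign_keys, so no FK reflection is needed:
VisitSpectrum.catalogObject = relationship(
    CatalogObject,
    primaryjoin=VisitSpectrum.catalog_object_id == CatalogObject.id,
    foreign_keys=[VisitSpectrum.catalog_object_id],
    backref="visitSpectra",
)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
s = sessionmaker(bind=engine)()
s.add_all([CatalogObject(id=1), VisitSpectrum(id=1, catalog_object_id=1)])
s.commit()
vs = s.query(VisitSpectrum).one()
```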

Michael's second suggestion (below) of not explicitly specifying the schema 
won't work in my case as I have a few schemas that do have overlapping names 
(hence, the schema separation). 

Cheers,
Demitri

---

On Nov 4, 2011, at 5:25 PM, Michael Bayer wrote:

 I think the important thing here is that the table definition on the Python 
 side needs to represent the table in the same way that the foreign key def 
 will represent it from PG.
 
 It's based on this fact:
 
 I was reading the descriptions from PGAdmin3 - apparently they drop the 
 schema in the display when the table is on the search path. There's nothing 
 that I can do to further put the table in the schema, as it were.
 
 
 So SQLA does this:
 
 1. reflect boss.spectrum_header
 2. see that boss.spectrum_header has a foreign key - to a table called 
 survey.  No schema is given for this FK def.  So SQLAlchemy creates a new 
 table called survey in the metadata collection.  The schema is None.
 3. SQLAlchemy then reflects survey, all its columns. PG's liberal search 
 path allows this to work without issue.
 4. The application then goes to reflect what is, from SQLAlchemy's 
 perspective, an entirely different table called platedb.survey.   Populates 
 that table with things too.
 5. The MetaData now has three tables:  boss.spectrum_header, platedb.survey, 
 survey.   Errors ensue since boss.spectrum_header points to survey and not 
 platedb.survey.
 
 Solutions:
 
 1. set the search path to be only public for the application's connection.  
 Use explicit schema names for all constructs outside of public.  A 
 connection event that emits SET search_path TO public on each new 
 connection will achieve this without affecting the database outside of the 
 app.
 
 2. Leave the liberal search path in place.  Remove the usage of schema 
 within the SQLAlchemy application and let PG's liberal search path find 
 things.
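Solution 1 from the quoted reply can be implemented with SQLAlchemy's connection events. This demo runs against SQLite so it is self-contained; on PostgreSQL only the DSN and the statement inside the listener change:

```python
from sqlalchemy import create_engine, event, text

engine = create_engine("sqlite://")
hook_ran = []

@event.listens_for(engine, "connect")
def set_search_path(dbapi_connection, connection_record):
    # Runs once per new DBAPI connection, affecting only this
    # application's connections, not the database defaults.
    cursor = dbapi_connection.cursor()
    # On PostgreSQL this would be:
    #   cursor.execute("SET search_path TO public")
    cursor.execute("SELECT 1")     # stand-in statement for the demo
    cursor.close()
    hook_ran.append(True)

with engine.connect() as conn:
    conn.execute(text("SELECT 1"))
```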





Re: [sqlalchemy] Foreign key reflection error?

2012-05-17 Thread Demitri Muna
Hi,

As a quick follow-up, the inability to cross schemas in one direction means that 
join tables won't work regardless of path order. For example, given

apogeedb.Calibration
platedb.Exposure

neither of these will work since, I'm guessing, the join must be made in both 
directions:

Calibration.exposures = relation(Exposure, backref="calibrations")
Exposure.calibrations = relation(Calibration, backref="exposures")

Again, the schema arg is explicitly set. This does work:

Calibration.exposures = relation(Exposure, 
secondary=ExposureToCalibration.__table__, backref="calibrations")


Cheers,
Demitri




Re: [sqlalchemy] Foreign key reflection error?

2012-05-17 Thread Michael Bayer

secondary is never implied in relationship(), you must always specify this 
table explicitly.

Also I'd advise caution when mapping to a secondary table - SQLAlchemy may 
perform persistence operations twice against it, unless the relationship is set 
as viewonly=True.
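The viewonly arrangement can be sketched as follows; the models are illustrative SQLite stand-ins named after the original post's tables:

```python
from sqlalchemy import create_engine, Column, Integer, ForeignKey, Table
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

# Association table; in the original post this is also mapped directly
# as its own class, hence the risk of double writes.
exposure_to_calibration = Table(
    "exposure_to_calibration", Base.metadata,
    Column("calibration_id", ForeignKey("calibration.id"), primary_key=True),
    Column("exposure_id", ForeignKey("exposure.id"), primary_key=True),
)

class Calibration(Base):
    __tablename__ = "calibration"
    id = Column(Integer, primary_key=True)
    # viewonly=True: reads traverse the secondary table, but the ORM
    # never INSERTs/DELETEs rows in it through this relationship.
    exposures = relationship("Exposure",
                             secondary=exposure_to_calibration,
                             viewonly=True)

class Exposure(Base):
    __tablename__ = "exposure"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
s = sessionmaker(bind=engine)()
s.add_all([Calibration(id=1), Exposure(id=7)])
# Association rows are written directly, not via the relationship:
s.execute(exposure_to_calibration.insert().values(calibration_id=1,
                                                  exposure_id=7))
s.commit()
cal = s.query(Calibration).one()
```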


On May 17, 2012, at 12:14 PM, Demitri Muna wrote:

 Hi,
 
 As a quick follow up, the inability to cross schema in one direction means 
 that join tables won't work regardless of path order. For example, given
 
 apogeedb.Calibration
 platedb.Exposure
 
 neither of these will work since, I'm guessing, the join must be made in both 
 directions:
 
 Calibration.exposures = relation(Exposure, backref=calibrations)
 Exposure.calibrations = relation(Calibration, backref=exposures)
 
 Again, the schema arg is explicitly set. This does work:
 
 Calibration.exposures = relation(Exposure, 
 secondary=ExposureToCalibration.__table__, backref=calibrations)
 
 
 Cheers,
 Demitri
 




Re: [sqlalchemy] Foreign key reflection error?

2012-05-17 Thread Michael Bayer

On May 17, 2012, at 12:02 PM, Demitri Muna wrote:

 Hi,
 
 I'd like to revive this thread from a little while back as I'm still having 
 problems. Thanks again to Michael for the help.
 
 In short, I'm having problems with SQLAlchemy determining the foreign key 
 relationship between two tables in two different schemas. For example, this
 
 VisitSpectrum.catalogObject = relation(CatalogObject, backref="visitSpectra")
 
 gives me the error:
 
 ArgumentError: Could not determine join condition between parent/child tables 
 on relationship VisitSpectrum.fitsHeaderValues.  Specify a 'primaryjoin' 
 expression.  If 'secondary' is present, 'secondaryjoin' is needed as well.
 
 while this
 
 CatalogObject.visitSpectra = relation(VisitSpectrum, backref="catalogObject")
 
 works. I am using SQLAlchemy 0.7.7 and Python 2.7.

they are equivalent and I'd consider here that perhaps in the second case you 
aren't actually testing the mapping ?   The error is only raised once mappers 
go through the deferred configure step, which is the first time you use the 
mappings.

A bug report for SQLA would look like this:

t1 = Table("table1", metadata, schema="schema1", autoload=True, 
autoload_with=some_engine)
t2 = Table("table2", metadata, schema="schema2", autoload=True, 
autoload_with=some_engine)

assert t1.c.some_column.references(t2.c.some_other_column)

that is, some_column is expected to have a ForeignKey to some_other_column 
based on the reflection.

Please provide that including the raw CREATE TABLE statements, removing any 
extraneous columns and data not needed to reproduce the issue.






 
 I would have thought that these two lines should be functionally identical. 
 SA is able to determine the relationship one direction, so the information is 
 certainly there. The problem has to do, as Michael noted, with the schema 
 search path, but even so I don't see where the ambiguity lies.
 
 I've removed the public schema from my database, and in my Python model 
 classes where I always explicitly set the schema value in __table_args__ to 
 avoid any ambiguity. The tables are each in a different schema:
 
 apogeedb.VisitSpectrum
 catalogdb.CatalogObject
 
 When the search path is this:
 
 catalogdb, apogeedb
 
 this works:
 
 VisitSpectrum.catalogObject = relation(CatalogObject, backref="visitSpectra")
 
 When the search path is
 
 apogeedb, catalogdb
 
 this works:
 
 CatalogObject.visitSpectra = relation(VisitSpectrum, backref="catalogObject")
 
 Flipping any of these gives the error above. No other schema (there are two 
 more) contain tables with these names. This still strikes me as a bug 
 somewhere. Any thoughts?
 
 Michael's second suggestion (below) of not explicitly specifying the schema 
 won't work in my case as I have a few schemas that do have overlapping names 
 (hence, the schema separation). 
 
 Cheers,
 Demitri
 
 ---
 
 On Nov 4, 2011, at 5:25 PM, Michael Bayer wrote:
 
 I think the important thing here is that the table definition on the Python 
 side needs to represent the table in the same way that the foreign key def 
 will represent it from PG.
 
 It's based on this fact:
 
 I was reading the descriptions from PGAdmin3 - apparently they drop the 
 schema in the display when the table is on the search path. There's nothing 
 that I can do to further put the table in the schema, as it were.
 
 
 So SQLA does this:
 
 1. reflect boss.spectrum_header
 2. see that boss.spectrum_header has a foreign key - to a table called 
 survey.  No schema is given for this FK def.  So SQLAlchemy creates a new 
 table called survey in the metadata collection.  The schema is None.
 3. SQLAlchemy then reflects survey, all its columns. PG's liberal search 
 path allows this to work without issue.
 4. The application then goes to reflect what is, from SQLAlchemy's 
 perspective, an entirely different table called platedb.survey.   Populates 
 that table with things too.
 5. The MetaData now has three tables:  boss.spectrum_header, platedb.survey, 
 survey.   Errors ensue since boss.spectrum_header points to survey and not 
 platedb.survey.
 
 Solutions:
 
 1. set the search path to be only public for the application's connection. 
  Use explicit schema names for all constructs outside of public.  A 
 connection event that emits SET search_path TO public on each new 
 connection will achieve this without affecting the database outside of the 
 app.
 
 2. Leave the liberal search path in place.  Remove the usage of schema 
 within the SQLAlchemy application and let PG's liberal search path find 
 things.
 
 


Re: [sqlalchemy] Choices django like with sqlalchemy

2012-05-17 Thread Michael Bayer
that's the basic idea and for a more complete example see the blog post at 
http://techspot.zzzeek.org/2011/01/14/the-enum-recipe/ .
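For completeness, here is a runnable version of the recipe quoted below, round-tripped through SQLite. The `Person` model is an addition for the sketch, `iteritems` is replaced with `items`, and `cache_ok` is added for newer SQLAlchemy versions:

```python
from sqlalchemy import create_engine, Column, Integer, types
from sqlalchemy.orm import declarative_base, sessionmaker

GENDER_CHOICES = (("M", "Male"), ("F", "Female"))

class ChoiceType(types.TypeDecorator):
    """Stores the short code in the database, presents the long label."""
    impl = types.String
    cache_ok = True

    def __init__(self, choices, **kw):
        self.choices = dict(choices)
        super(ChoiceType, self).__init__(**kw)

    def process_bind_param(self, value, dialect):
        # label -> code ("Male" -> "M") on the way in
        return [k for k, v in self.choices.items() if v == value][0]

    def process_result_value(self, value, dialect):
        # code -> label ("M" -> "Male") on the way out
        return self.choices[value]

Base = declarative_base()

class Person(Base):
    __tablename__ = "person"
    id = Column(Integer, primary_key=True)
    gender = Column(ChoiceType(GENDER_CHOICES))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
s = sessionmaker(bind=engine)()
s.add(Person(id=1, gender="Male"))
s.commit()
s.expire_all()
loaded = s.query(Person).one()
```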


On May 17, 2012, at 12:07 PM, Artur Felipe Sousa wrote:

 Hey guys!
 
 Which way you take to map some custom data (not in database) with sqlalchemy?
 
 In django it will be:
 GENDER_CHOICES = (
     ('M', 'Male'),
     ('F', 'Female'),
 )
 class Foo(models.Model):
     gender = models.CharField(max_length=1, choices=GENDER_CHOICES)
 
 I saw an implamantion of it using types in 
 http://stackoverflow.com/questions/6262943/sqlalchemy-how-to-make-django-choices-using-sqlalchemy:
 
 import sqlalchemy.types as types

 class ChoiceType(types.TypeDecorator):
     impl = types.String
 
     def __init__(self, choices, **kw):
         self.choices = dict(choices)
         super(ChoiceType, self).__init__(**kw)
 
     def process_bind_param(self, value, dialect):
         return [k for k, v in self.choices.iteritems() if v == value][0]
 
     def process_result_value(self, value, dialect):
         return self.choices[value]
 
 Is it alright? Or you guys do it another way?
 
 Thanks a lot!
 
 




[sqlalchemy] overriding inherited mapper properties supported?

2012-05-17 Thread Kent
Suppose I am using single table polymorphic inheritance like the docs 
Employee/Manager/Engineer example.  I have a relationship that I want to 
have a different cascade or loading strategy for, depending on the mapper.  
Can I inherit from the base mapper and override the property, like this:

employee_mapper = mapper(Employee, employees_table, \
polymorphic_on=employees_table.c.type, polymorphic_identity='employee',
properties={'customer':relationship(Customer, lazy=False, 
cascade='save-update,refresh-expire,merge')})
manager_mapper = mapper(Manager, inherits=employee_mapper,
polymorphic_identity='manager',
properties={'customer':relationship(Customer, lazy=True, 
cascade='save-update,refresh-expire')})
engineer_mapper = mapper(Engineer, inherits=employee_mapper,
polymorphic_identity='engineer',
properties={'customer':relationship(Customer, lazy=True, 
cascade='save-update,refresh-expire')})


Here, the 'customer' relationship only joined loads and merges for 
Employee, not Engineer nor Manager.  Is this supported?  





Re: [sqlalchemy] overriding inherited mapper properties supported?

2012-05-17 Thread Michael Bayer

On May 17, 2012, at 5:00 PM, Kent wrote:

 Suppose I am using single table polymorphic inheritance like the docs 
 Employee/Manager/Engineer example.  I have a relationship that I want to have 
 a different cascade or loading strategy for, depending on the mapper.  Can I 
 inherit from the base mapper and override the property, like this:
 
 employee_mapper = mapper(Employee, employees_table, \
 polymorphic_on=employees_table.c.type, polymorphic_identity='employee',
 properties={'customer':relationship(Customer, lazy=False, 
 cascade='save-update,refresh-expire,merge')})
 manager_mapper = mapper(Manager, inherits=employee_mapper,
 polymorphic_identity='manager',
 properties={'customer':relationship(Customer, lazy=True, 
 cascade='save-update,refresh-expire')})
 engineer_mapper = mapper(Engineer, inherits=employee_mapper,
 polymorphic_identity='engineer',
 properties={'customer':relationship(Customer, lazy=True, 
 cascade='save-update,refresh-expire')})
 
 
 Here, the 'customer' relationship only joined loads and merges for Employee, 
 not Engineer nor Manager.  Is this supported?  

it's not.
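Though per-mapper overrides aren't supported, a common alternative is to keep one relationship definition on the base mapper and vary the strategy per query with loader options. A sketch with an illustrative single-table setup (the names follow the question, but this is not the asker's actual code):

```python
from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import (declarative_base, relationship, sessionmaker,
                            lazyload)

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)

class Employee(Base):
    __tablename__ = "employee"
    id = Column(Integer, primary_key=True)
    type = Column(String)
    customer_id = Column(Integer, ForeignKey("customer.id"))
    # Single relationship definition, eager by default:
    customer = relationship(Customer, lazy=False)
    __mapper_args__ = {"polymorphic_on": type,
                       "polymorphic_identity": "employee"}

class Manager(Employee):
    __mapper_args__ = {"polymorphic_identity": "manager"}

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
s = sessionmaker(bind=engine)()
s.add(Manager(id=1, customer=Customer(id=5)))
s.commit()
s.expire_all()

# Override the loading strategy per query instead of per mapper:
manager = s.query(Manager).options(lazyload(Manager.customer)).one()
customer_id = manager.customer.id
```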





Re: [sqlalchemy] Foreign key reflection error?

2012-05-17 Thread Michael Bayer

On May 17, 2012, at 6:18 PM, Demitri Muna wrote:

 
 A bug report for SQLA would look like this:
 
 t1 = Table("table1", metadata, schema="schema1", autoload=True, 
 autoload_with=some_engine)
 t2 = Table("table2", metadata, schema="schema2", autoload=True, 
 autoload_with=some_engine)
 
 assert t1.c.some_column.references(t2.c.some_other_column)
 
 that is, some_column is expected to have a ForeignKey to 
 some_other_column based on the reflection.
 
 Please provide that including the raw CREATE TABLE statements, removing 
 any extraneous columns and data not needed to reproduce the issue.
 
 See attached. I've created a very simple toy model to demonstrate this. The 
 full database I created is provided as an SQL dump. I'm using PostgreSQL 
 9.0.5. No data is required to make the assertions I made fail. The code 
 provided is in three Python files - one that defines the database connection, 
 another that defines the model classes (both designed for use by many 
 scripts), and a third script that is the one that should be executed. The 
 database user is schema_test. The code of interest appears in lines 37-57 
 of ModelClasses.py. Finally, you'll have to update the database connection 
 information for your setup at the top of multiple_schema_test.py.

OK this is way too much verbiage.  If the error is as you describe, a bug 
involving reflection of two tables across two schemas, I'd need a test which 
illustrates this alone, no ORM or full database dumps, with a simple failure to 
identify a foreign key constraint between them.   I've attached a sample of 
exactly what I'd like to see.

If the issue is not as simple as this, then we will have to dig into multiple 
tables, mappings, etc. but you should be able to reproduce the specific case 
here with just two tables.




from sqlalchemy import *

# Schema setup assumed on the target database:
#
# CREATE TABLE test_schema.table_a (id INTEGER PRIMARY KEY);
# CREATE TABLE test_schema_2.table_b (id INTEGER PRIMARY KEY,
#     ta_id INTEGER REFERENCES test_schema.table_a(id));

e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)

m = MetaData()
t1 = Table("table_a", m, schema="test_schema", autoload=True, autoload_with=e)
t2 = Table("table_b", m, schema="test_schema_2", autoload=True, autoload_with=e)

assert t2.c.ta_id.references(t1.c.id)



Re: [sqlalchemy] How should I do inheritance using DeclarativeReflectedBase?

2012-05-17 Thread Ignas Mikalajunas
On Thu, May 17, 2012 at 6:54 PM, Michael Bayer mike...@zzzcomputing.com wrote:

 On May 14, 2012, at 6:32 AM, Ignas Mikalajunas wrote:

 On Sun, May 13, 2012 at 1:05 PM, Michael Bayer mike...@zzzcomputing.com 
 wrote:
 Michael Bayer wrote:

 On May 11, 2012, at 1:16 PM, Ignas Mikalajunas wrote:

 Even though the latest version of the DeclarativeReflective example
 includes some handling for inheritance, I still can not get it to
 work. I try doing (mostly modified example from
 https://bitbucket.org/sqlalchemy/sqlalchemy/src/408388e5faf4/examples/declarative_reflection/declarative_reflection.py):


 and sqlalchemy tries to find the type column in the table 'bar'. Am I
 doing the inheritance set up wrong or is it some bug in
 DeclarativeReflectiveBase?

 would need to spend some time with it, the declarativereflective example
 hasn't been worked out for setting up an inheritance relationship as of
 yet.   So there could be any number of issues with it (part of why it's
 only an example and not a real feature)..


 this feature will require changes to the declarative extension itself,
 which I will attempt to prepare as part of 0.8.  when the feature is done,
 the 0.8 version of declarative.py can be used with 0.7 as well in the
 interim before 0.8 is released.


 I see, thank you very much. I have a very good test case waiting for
 this feature :)

 Okey doke if you want to test against 0.8 tip, here are the docs for the new 
 feature:

 http://docs.sqlalchemy.org/en/latest/orm/extensions/declarative.html#sqlalchemy.ext.declarative.DeferredReflection

 any issues you find please report and I'll try to get them fixed.

Thanks, everything seems to be working very well.

Small issues so far:

I can set neither polymorphic_on nor sort_on columns in mapper
properties without defining the columns explicitly, and that overrides
the reflected columns.

There is no way to ignore SAWarnings on just some tables now; I can
only turn them off for all tables (around the prepare call) or
reflect the tables explicitly before calling prepare. (Otherwise I
keep getting warnings for indexes/columns SQLAlchemy does not
understand.)

the third one required a patch to the prepare method:

   @classmethod
   def prepare(cls, engine):
       """Reflect all :class:`.Table` objects for all current
       :class:`.DeferredReflection` subclasses"""
       to_map = [m for m in _MapperConfig.configs.values()
                 if issubclass(m.cls, cls)]
       for thingy in to_map:
           cls.__prepare__(thingy.args, engine)
+
+      for thingy in to_map:
           thingy.map()

so that all the tables get defined before the models are mapped. I can't
come up with an SQLite-based test case at the moment.
Without this I was getting "some_table has no column id" errors.


Ignas




Re: [sqlalchemy] Foreign key reflection error?

2012-05-17 Thread Demitri Muna

On May 17, 2012, at 6:33 PM, Michael Bayer wrote:

 OK this is way too much verbiage.  If the error is as you describe, a bug 
 involving reflection of two tables across two schemas, I'd need a test which 
 illustrates this alone, no ORM or full database dumps, with a simple failure 
 to identify a foreign key constraint between them.   I've attached a sample 
 of exactly what I'd like to see.
 
 If the issue is not as simple as this, then we will have to dig into multiple 
 tables, mappings, etc. but you should be able to reproduce the specific case 
 here with just two tables.

I've refactored the example to the simplest case. We can see if the fix 
addresses many-to-many relationships as well.

Demitri


from sqlalchemy import *
from sqlalchemy.ext.declarative import declarative_base


# Schema setup assumed on the target database:
#
# CREATE TABLE people.band ( id integer primary key, name text, venue_id integer );
# CREATE TABLE things.venue ( id integer primary key, name text );
# ALTER TABLE ONLY people.band
#     ADD CONSTRAINT venue_fk FOREIGN KEY (venue_id) REFERENCES things.venue(id)
#     ON UPDATE CASCADE ON DELETE SET NULL;

e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
#e = create_engine("postgresql://schema_test:schema_test@localhost:9000/schema_test_db", echo=True)

Base = declarative_base(bind=e)

class Band(Base):
	__tablename__ = 'band'
	__table_args__ = {'autoload' : True, 'schema' : 'people'}

class Venue(Base):
	__tablename__ = 'venue'
	__table_args__ = {'autoload' : True, 'schema' : 'things'}

assert Band.__table__.c.venue_id.references(Venue.__table__.c.id)